I have a pandas DataFrame and I want to split it into 3 separate sets. I know that using train_test_split from sklearn.cross_validation, one can split the data into two sets (train and test). However, I couldn't find any solution for splitting the data into three sets. Preferably, I'd like to keep the indices of the original data.

I know that a workaround would be to use train_test_split twice and somehow adjust the indices. But is there a more standard / built-in way to split the data into 3 sets instead of 2?
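For context, one widely used idiom (not a sklearn built-in) is to shuffle the frame once with df.sample and cut it with np.split. A minimal sketch, assuming a DataFrame df and an illustrative 60/20/20 split:

import numpy as np

# Shuffle a copy of df (the original indices are kept), then cut at the
# 60% and 80% marks to obtain a 60/20/20 split.
train, validate, test = np.split(
    df.sample(frac=1, random_state=42),
    [int(0.6 * len(df)), int(0.8 * len(df))]
)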


Current answer

train_test_split is convenient because it requires no reindexing after splitting into several sets and no extra code. However, the top answer above does not mention that calling train_test_split twice without adjusting the partition sizes will not give the initially intended partitions:

x_train, x_remain = train_test_split(x, test_size=(val_size + test_size))

The fractions of the validation and test sets within x_remain then change, and can be computed as:

new_test_size = np.around(test_size / (val_size + test_size), 2)
# To preserve (new_test_size + new_val_size) = 1.0 
new_val_size = 1.0 - new_test_size

x_val, x_test = train_test_split(x_remain, test_size=new_test_size)

With this approach, all the initially intended partition sizes are preserved.
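A quick numeric check of this recipe, as a sketch with made-up sizes (1000 rows, a 60/20/20 target):

import numpy as np
from sklearn.model_selection import train_test_split

x = np.arange(1000).reshape(-1, 1)   # dummy data
val_size, test_size = 0.2, 0.2

# First split peels off the training set.
x_train, x_remain = train_test_split(x, test_size=(val_size + test_size))

# Second split divides the remainder using the corrected fraction.
new_test_size = np.around(test_size / (val_size + test_size), 2)
new_val_size = 1.0 - new_test_size
x_val, x_test = train_test_split(x_remain, test_size=new_test_size)

print(len(x_train), len(x_val), len(x_test))  # 600 200 200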

Other answers

To split the dataset into a training set and a test set, as in the other answers, use:

from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

Then, when you fit your model, you can add validation_split as an argument, so there is no need to create a validation set beforehand. For example:

from tensorflow.keras import Model

model = Model(input_layer, out)

[...]

history = model.fit(x=X_train, y=y_train, [...], validation_split = 0.3)

A validation set is intended to act as a stand-in for the test set during training, and it is drawn entirely from the training set, either through k-fold cross-validation (recommended) or through validation_split. You therefore don't need to create a validation set separately, and you can still split the dataset into the three sets you asked for (a minimal k-fold sketch follows below).
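A minimal sketch of the k-fold option mentioned above, assuming X_train and y_train are the NumPy arrays produced by the split:

from sklearn.model_selection import KFold

kf = KFold(n_splits=5, shuffle=True, random_state=42)
for train_idx, val_idx in kf.split(X_train):
    # Each fold serves once as the validation set.
    X_tr, X_val = X_train[train_idx], X_train[val_idx]
    y_tr, y_val = y_train[train_idx], y_train[val_idx]
    # fit and evaluate the model on this fold here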

That said, one way to split the dataset into train, test, and cv sets with 0.6, 0.2, 0.2 proportions is to use the train_test_split method twice.

from sklearn.model_selection import train_test_split

x, x_test, y, y_test = train_test_split(xtrain, labels, test_size=0.2, train_size=0.8)
x_train, x_cv, y_train, y_cv = train_test_split(x, y, test_size=0.25, train_size=0.75)
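The 0.25 in the second call is deliberate: it takes 25% of the remaining 80%, i.e. 0.25 × 0.8 = 0.2 of the original data, which yields the 0.6/0.2/0.2 proportions. A quick sanity check, assuming xtrain and labels as above:

n = len(xtrain)
print(len(x_train) / n, len(x_cv) / n, len(x_test) / n)  # ~0.6 ~0.2 ~0.2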

Note:

The function below was written to handle seeding of the randomized set creation. You should not rely on a set split that doesn't randomize the sets.

import numpy as np
import pandas as pd

def train_validate_test_split(df, train_percent=.6, validate_percent=.2, seed=None):
    np.random.seed(seed)
    perm = np.random.permutation(df.index)  # shuffled copy of the index labels
    m = len(df.index)
    train_end = int(train_percent * m)
    validate_end = int(validate_percent * m) + train_end
    # Use .loc, not .iloc: perm holds index labels, not positions,
    # so .iloc would break on any non-default index.
    train = df.loc[perm[:train_end]]
    validate = df.loc[perm[train_end:validate_end]]
    test = df.loc[perm[validate_end:]]
    return train, validate, test

Demonstration

np.random.seed([3,1415])
df = pd.DataFrame(np.random.rand(10, 5), columns=list('ABCDE'))
df

train, validate, test = train_validate_test_split(df)

train

validate

test

An answer for an arbitrary number of subsets:

from sklearn.model_selection import train_test_split

def _separate_dataset(patches, label_patches, percentage, shuffle: bool = True):
    """
    :param patches: data patches
    :param label_patches: label patches
    :param percentage: list of percentages for each value, example [0.9, 0.02, 0.08] to get 90% train, 2% val and 8% test.
    :param shuffle: Shuffle dataset before split.
    :return: tuple of two lists of size = len(percentage), one with data x and other with labels y.
    """
    x_test = patches
    y_test = label_patches
    percentage = list(percentage)       # need it to be mutable
    assert abs(sum(percentage) - 1.0) < 1e-9, \
        f"percentage must add up to 1, but sum({percentage}) = {sum(percentage)}"
    x = []
    y = []
    # Read percentage[i] live on each pass: the remaining entries are
    # rescaled in place after every split, so iterating over a copy
    # (e.g. enumerate(percentage[:-1])) would use stale, unscaled values.
    for i in range(len(percentage) - 1):
        per = percentage[i]
        x_train, x_test, y_train, y_test = train_test_split(x_test, y_test, test_size=1 - per, shuffle=shuffle)
        percentage[i + 1:] = [value / (1 - per) for value in percentage[i + 1:]]
        x.append(x_train)
        y.append(y_train)
    x.append(x_test)
    y.append(y_test)
    return x, y

This works for any number of proportions. In this case, you would call it with percentage = [train_percentage, val_percentage, test_percentage]. A hypothetical usage sketch follows.
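A usage sketch with dummy data (the names and sizes below are illustrative, not from the original answer):

import numpy as np

patches = np.random.rand(1000, 8, 8)                 # 1000 dummy data patches
label_patches = np.random.randint(0, 2, size=1000)   # matching dummy labels

x, y = _separate_dataset(patches, label_patches, [0.9, 0.02, 0.08])
print([len(part) for part in x])  # [900, 20, 80]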
