I have a fairly large dataset in the form of a DataFrame, and I would like to know how I can split the DataFrame into two random samples (80% and 20%) for training and testing.

Thanks!


Current answer

You can also consider a stratified split into training and testing sets. A stratified split still generates the training and testing sets randomly, but preserves the original class proportions. This makes the training and testing sets better reflect the properties of the original dataset.

import numpy as np

def get_train_test_inds(y, train_proportion=0.7):
    '''Generates indices, making a random stratified split into training and testing sets
    with proportions train_proportion and (1 - train_proportion) of the initial sample.
    y is any iterable indicating the class of each observation in the sample.
    The initial class proportions inside the training and
    testing sets are preserved (stratified sampling).
    '''
    y = np.array(y)
    train_inds = np.zeros(len(y), dtype=bool)
    test_inds = np.zeros(len(y), dtype=bool)
    values = np.unique(y)
    for value in values:
        # Shuffle the positions of this class and send the first
        # train_proportion of them to the training set.
        value_inds = np.nonzero(y == value)[0]
        np.random.shuffle(value_inds)
        n = int(train_proportion * len(value_inds))

        train_inds[value_inds[:n]] = True
        test_inds[value_inds[n:]] = True

    return train_inds, test_inds

df[train_inds] and df[test_inds] give you the training and testing subsets of your original DataFrame df.
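For example, with a DataFrame whose class column is named label (a hypothetical name, substitute your own target column), usage might look like this sketch:

train_inds, test_inds = get_train_test_inds(df['label'], train_proportion=0.8)
train_df = df[train_inds]
test_df = df[test_inds]

# The class proportions should be roughly the same in both subsets:
print(df['label'].value_counts(normalize=True))
print(train_df['label'].value_counts(normalize=True))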

Other answers

Scikit Learn's train_test_split is a good one. It will split both numpy arrays and DataFrames.

from sklearn.model_selection import train_test_split

train, test = train_test_split(df, test_size=0.2)
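If you need the split to be reproducible, or stratified by a label column, train_test_split also accepts random_state and stratify arguments. A short sketch continuing the snippet above (the label column name is an assumption):

train, test = train_test_split(df, test_size=0.2, random_state=42,
                               stratify=df['label'])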

To split into more than two sets, such as train, test, and validation, you can do the following:

import numpy as np

probs = np.random.rand(len(df))
training_mask = probs < 0.7
test_mask = (probs >= 0.7) & (probs < 0.85)
validation_mask = probs >= 0.85

df_training = df[training_mask]
df_test = df[test_mask]
df_validation = df[validation_mask]

This will put approximately 70% of the data into training, 15% into testing, and 15% into validation.
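If you need the sizes to be exact rather than approximate, one possible sketch is to chain two calls to sklearn's train_test_split (the random_state values here are arbitrary and only make the example reproducible):

from sklearn.model_selection import train_test_split

# Hold out 30% first, then split that half-and-half:
# 70% train, 15% test, 15% validation, with exact sizes.
df_training, df_holdout = train_test_split(df, test_size=0.30, random_state=0)
df_test, df_validation = train_test_split(df_holdout, test_size=0.50, random_state=0)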

In my case, I wanted to split a DataFrame into train, test, and dev sets with specific row counts. I am sharing my solution here.

First, assign a unique id to the DataFrame (if one does not already exist):

import uuid
df['id'] = [uuid.uuid4() for i in range(len(df))]

Here are my split sizes:

train = 120765
test  = 4134
dev   = 2816

The split function:

def df_split(df, n):
    # Randomly sample n rows into `first`; the remaining rows go to `second`.
    first = df.sample(n)
    second = df[~df.id.isin(list(first['id']))]
    first.reset_index(drop=True, inplace=True)
    second.reset_index(drop=True, inplace=True)
    return first, second

Now split into train, test, and dev:

train, test = df_split(df, 120765)
test, dev   = df_split(test, 4134)
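A quick sanity check afterwards, just to confirm the row counts add up (a sketch, assuming df still holds the full dataset):

print(len(train), len(test), len(dev))   # expected: 120765, 4134, 2816
assert len(train) + len(test) + len(dev) == len(df)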

There is no need to convert to numpy. Just pass a pandas df to do the split and it will return pandas dfs.

from sklearn.model_selection import train_test_split

train, test = train_test_split(df, test_size=0.2)

If you want to split it into x and y:

X_train, X_test, y_train, y_test = train_test_split(df[list_of_x_cols], df[y_col], test_size=0.2)

And if you want to split the whole df:

X, y = df[list_of_x_cols], df[y_col]

I would use a random boolean mask built with numpy's rand:

In [11]: df = pd.DataFrame(np.random.randn(100, 2))

In [12]: msk = np.random.rand(len(df)) < 0.8

In [13]: train = df[msk]

In [14]: test = df[~msk]

And just to show that this has worked:

In [15]: len(test)
Out[15]: 21

In [16]: len(train)
Out[16]: 79
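If the mask-based split needs to be reproducible, one option (assuming NumPy 1.17 or newer, for default_rng) is to draw the mask from a seeded generator:

rng = np.random.default_rng(42)   # seed value is arbitrary
msk = rng.random(len(df)) < 0.8
train, test = df[msk], df[~msk]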