I have a pandas DataFrame, df_test. It contains a column 'size' which represents size in bytes. I've calculated KB, MB, and GB using the following code:

import locale
import pandas as pd

locale.setlocale(locale.LC_ALL, '')  # enable locale-aware thousands grouping

df_test = pd.DataFrame([
    {'dir': '/Users/uname1', 'size': 994933},
    {'dir': '/Users/uname2', 'size': 109338711},
])

# locale.format() was removed in Python 3.12; locale.format_string() is the replacement
df_test['size_kb'] = df_test['size'].astype(int).apply(lambda x: locale.format_string("%.1f", x / 1024.0, grouping=True) + ' KB')
df_test['size_mb'] = df_test['size'].astype(int).apply(lambda x: locale.format_string("%.1f", x / 1024.0 ** 2, grouping=True) + ' MB')
df_test['size_gb'] = df_test['size'].astype(int).apply(lambda x: locale.format_string("%.1f", x / 1024.0 ** 3, grouping=True) + ' GB')

df_test


             dir       size       size_kb   size_mb size_gb
0  /Users/uname1     994933      971.6 KB    0.9 MB  0.0 GB
1  /Users/uname2  109338711  106,776.1 KB  104.3 MB  0.1 GB

[2 rows x 5 columns]

I've run this over 120,000 rows, and according to %timeit it takes about 2.97 seconds per column * 3 = ~9 seconds.

Is there any way to make this faster? For example, instead of returning one column at a time from apply and running it three times, can I return all three columns in one pass to insert back into the original DataFrame?

The other questions I've found all want to take multiple values and return a single value. I want to take a single value and return multiple columns.


Current answer

Using apply and zip will be about 3 times faster than the Series approach.

def sizes(s):
    return (locale.format_string("%.1f", s / 1024.0, grouping=True) + ' KB',
            locale.format_string("%.1f", s / 1024.0 ** 2, grouping=True) + ' MB',
            locale.format_string("%.1f", s / 1024.0 ** 3, grouping=True) + ' GB')

df_test['size_kb'], df_test['size_mb'], df_test['size_gb'] = zip(*df_test['size'].apply(sizes))
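
If you would rather assign all three columns in one statement, as the question asks, a small variant (my own sketch, not part of the original answer; `expanded` is just a throwaway name) is to expand the Series of tuples into a DataFrame first:

# Expand the Series of tuples returned by apply(sizes) into a 3-column frame,
# keeping the original index, then assign all three columns at once.
expanded = pd.DataFrame(df_test['size'].apply(sizes).tolist(),
                        index=df_test.index,
                        columns=['size_kb', 'size_mb', 'size_gb'])
df_test[['size_kb', 'size_mb', 'size_gb']] = expanded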

Test results:

Separate df.apply(): 

    100 loops, best of 3: 1.43 ms per loop

Return Series: 

    100 loops, best of 3: 2.61 ms per loop

Return tuple:

    1000 loops, best of 3: 819 µs per loop

Other answers

I wanted to use apply on a groupby. I tried the approach you suggested, and it helped, but not all the way.

Adding result_type='expand' didn't work (because I was using apply on a Series rather than a DataFrame?), and with zip(*___) I lost the index.

In case anyone else runs into the same problem, here is how I (eventually) solved it:

# myfunc returns a 3-tuple per group, so dfg is a Series of tuples indexed by the group keys
dfg = df.groupby(by=['Column1', 'Column2']).Column3.apply(myfunc)
dfres = pd.DataFrame()
dfres['a'], dfres['b'], dfres['c'] = (dfg.apply(lambda x: x[0]),
                                      dfg.apply(lambda x: x[1]),
                                      dfg.apply(lambda x: x[2]))

Or, if you know of a better way, let me know.

If this is beyond the scope of the discussion here, let me know too.
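
For what it's worth, a sketch of an alternative that keeps the group index (my own suggestion, assuming myfunc returns a 3-tuple per group) is to build the result frame directly from the grouped Series instead of zipping it apart:

# dfg's index already holds the group keys ('Column1', 'Column2'); expanding the
# tuples row-wise keeps that index, unlike zip(*...).
dfres = pd.DataFrame(dfg.tolist(), index=dfg.index, columns=['a', 'b', 'c'])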

In general, this is how I go about returning multiple values:

import numpy as np

def gimmeMultiple(group):
    x1 = 1
    x2 = 2
    return np.array([[x1, x2]])

def gimmeMultipleDf(group):
    x1 = 1
    x2 = 2
    return pd.DataFrame(np.array([[x1, x2]]), columns=['x1', 'x2'])

df['size'].astype(int).apply(gimmeMultiple)
df['size'].astype(int).apply(gimmeMultipleDf)

Returning a DataFrame definitely has its benefits, but sometimes it isn't required. You can look at what apply() returns and play around with the functions a bit ;)
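
For example, to actually turn those per-row results into columns of `df` (a sketch using the toy functions above; the column names come from gimmeMultipleDf), you can concatenate the one-row DataFrames and re-align them with the original index:

per_row = df['size'].astype(int).apply(gimmeMultipleDf)    # Series of one-row DataFrames
expanded = pd.concat(per_row.tolist(), ignore_index=True)  # stack into one n-row frame
expanded.index = df.index                                  # re-align with the original rows
df[['x1', 'x2']] = expanded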

You can go 40+ times faster than the top answers here if you do the math in NumPy instead. This adapts the top two approaches from @Rocky K; the main difference is that it runs on an actual df of 120k rows. NumPy is far faster at math when you apply your functions array-wise rather than value-wise. The fastest approach below is by far the third one because it uses NumPy for the math. Notice also that it computes 1024**2 and 1024**3 once each instead of once per row, saving 240k calculations. Here are the timings on my machine:

Tuples (pass value, return tuple then zip, new columns dont exist):
Runtime: 10.935037851333618 

Tuples (pass value, return tuple then zip, new columns exist):
Runtime: 11.120025157928467 

Use numpy for math portions:
Runtime: 0.24799370765686035

Here is the script I used to calculate these timings (adapted from Rocky K):

import numpy as np
import pandas as pd
import locale
import time

size = np.random.random(120000) * 1000000000
data = pd.DataFrame({'Size': size})

def sizes_pass_value_return_tuple(value):
    a = locale.format_string("%.1f", value / 1024.0, grouping=True) + ' KB'
    b = locale.format_string("%.1f", value / 1024.0 ** 2, grouping=True) + ' MB'
    c = locale.format_string("%.1f", value / 1024.0 ** 3, grouping=True) + ' GB'
    return a, b, c

print('\nTuples (pass value, return tuple then zip, new columns dont exist):')
df1 = data.copy()
start = time.time()
df1['size_kb'],  df1['size_mb'], df1['size_gb'] = zip(*df1['Size'].apply(sizes_pass_value_return_tuple))
end = time.time()
print('Runtime:', end - start, '\n')

print('Tuples (pass value, return tuple then zip, new columns exist):')
df2 = data.copy()
start = time.time()
df2 = pd.concat([df2, pd.DataFrame(columns=['size_kb', 'size_mb', 'size_gb'])])
df2['size_kb'],  df2['size_mb'], df2['size_gb'] = zip(*df2['Size'].apply(sizes_pass_value_return_tuple))
end = time.time()
print('Runtime:', end - start, '\n')

print('Use numpy for math portions:')
df3 = data.copy()
start = time.time()
df3['size_kb'] = (df3.Size.values / 1024).round(1)
df3['size_kb'] = df3.size_kb.astype(str) + ' KB'
df3['size_mb'] = (df3.Size.values / 1024 ** 2).round(1)
df3['size_mb'] = df3.size_mb.astype(str) + ' MB'
df3['size_gb'] = (df3.Size.values / 1024 ** 3).round(1)
df3['size_gb'] = df3.size_gb.astype(str) + ' GB'
end = time.time()
print('Runtime:', end - start, '\n')
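
One caveat worth noting: the NumPy version formats with plain astype(str), so it drops the thousands separators that locale.format_string added (e.g. '106,776.1 KB' becomes '106776.1 KB'). If you need the separators, one option (a sketch of my own, not benchmarked above) is to keep the division vectorized and add the grouping with Python's format spec:

# Hypothetical variant: vectorized division, then ','-grouped formatting per value.
df3['size_kb'] = (df3.Size / 1024).map('{:,.1f} KB'.format)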

You can return a Series from the applied function that contains the new data, avoiding the need to iterate three times. Passing axis=1 to apply runs the sizes function on each row of the DataFrame and returns a Series to add to a new DataFrame. This Series, s, contains the new values as well as the original data.

def sizes(s):
    s['size_kb'] = locale.format_string("%.1f", s['size'] / 1024.0, grouping=True) + ' KB'
    s['size_mb'] = locale.format_string("%.1f", s['size'] / 1024.0 ** 2, grouping=True) + ' MB'
    s['size_gb'] = locale.format_string("%.1f", s['size'] / 1024.0 ** 3, grouping=True) + ' GB'
    return s

# df_test is the DataFrame from the question, with the original 'dir' and 'size' columns
df_test = df_test.apply(sizes, axis=1)
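
A closely related variant (my own sketch, not from the original answer): apply over just the 'size' column and return a pd.Series, so all three new columns can be assigned in one statement. This is essentially the "Return Series" case in the timings above, so it is tidier but slower than the tuple-and-zip approach.

# Hypothetical variant: the returned Series' index labels become the new column names.
def sizes_series(size):
    return pd.Series({
        'size_kb': locale.format_string("%.1f", size / 1024.0, grouping=True) + ' KB',
        'size_mb': locale.format_string("%.1f", size / 1024.0 ** 2, grouping=True) + ' MB',
        'size_gb': locale.format_string("%.1f", size / 1024.0 ** 3, grouping=True) + ' GB',
    })

df_test[['size_kb', 'size_mb', 'size_gb']] = df_test['size'].apply(sizes_series)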
