I have a Python script which takes a list of integers as input, and I need to work with four integers at a time. Unfortunately, I don't have control over the input, or I'd have it passed in as a list of four-element tuples. Currently, I'm iterating over it this way:

for i in range(0, len(ints), 4):
    # dummy op for example code
    foo += ints[i] * ints[i + 1] + ints[i + 2] * ints[i + 3]

It looks a lot like "C-think", though, which makes me suspect there's a more pythonic way of dealing with this situation. The list is discarded after iterating, so it needn't be preserved. Perhaps something like this would be better?

while ints:
    foo += ints[0] * ints[1] + ints[2] * ints[3]
    ints[0:4] = []

It still doesn't quite feel right, though. :-/

Related question: How do you split a list into evenly sized chunks in Python?


Current answer

def chunker(iterable, n):
    """Yield the iterable in chunks of size n.

    >>> chunks = chunker('ABCDEF', n=4)
    >>> next(chunks)
    ['A', 'B', 'C', 'D']
    >>> next(chunks)
    ['E', 'F']
    """
    it = iter(iterable)
    while True:
        chunk = []
        for _ in range(n):
            try:
                chunk.append(next(it))
            except StopIteration:
                # Yield the final partial chunk (if any) and finish; a plain
                # return avoids the RuntimeError that raising StopIteration
                # inside a generator causes on Python 3.7+ (PEP 479).
                if chunk:
                    yield chunk
                return
        yield chunk

if __name__ == '__main__':
    import doctest

    doctest.testmod()
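
Applied to the original four-at-a-time problem, a minimal usage sketch (the `ints` value here is made up, and the unpacking assumes every chunk holds exactly four items):

ints = [1, 2, 3, 4, 5, 6, 7, 8]
foo = 0
for a, b, c, d in chunker(ints, 4):  # unpacking assumes full chunks
    foo += a * b + c * d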

Other answers

Yet another answer, the advantages of which are:

1) easy to understand
2) works on any iterable, not just sequences (some of the answers above will choke on file handles)
3) does not load the chunks into memory all at once
4) does not make a chunk-long list of references to the same iterator in memory
5) no padding of fill values at the end of the list

That being said, I haven't timed it, so it might be slower than some of the cleverer methods, and some of the advantages may be irrelevant given the use case.

def chunkiter(iterable, size):
    def inneriter(first, iterator, size):
        yield first
        for _ in range(size - 1):
            try:
                yield next(iterator)
            except StopIteration:
                # the final chunk may be shorter than size
                return
    it = iter(iterable)
    while True:
        try:
            yield inneriter(next(it), it, size)
        except StopIteration:
            return

In [2]: i = chunkiter('abcdefgh', 3)

In [3]: for ii in i:
   ...:     for c in ii:
   ...:         print(c, end=' ')
   ...:     print()
   ...:
a b c 
d e f 
g h 

Update: A couple of drawbacks due to the fact that the inner and outer loops are pulling values from the same iterator:

1) continue doesn't work as expected in the outer loop - it just continues on to the next item rather than skipping a chunk. However, this doesn't seem like a problem as there's nothing to test in the outer loop.
2) break doesn't work as expected in the inner loop - control will wind up in the inner loop again with the next item in the iterator. To skip whole chunks, either wrap the inner iterator (ii above) in a tuple, e.g. for c in tuple(ii), or set a flag and exhaust the iterator.
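
A quick sketch of the tuple-wrapping workaround mentioned above (the skip condition is made up purely for illustration):

for ii in chunkiter('abcdefgh', 3):
    chunk = tuple(ii)      # exhaust the inner iterator up front
    if chunk[0] == 'd':    # hypothetical condition for skipping a chunk
        continue           # now skips the whole chunk, not a single item
    print(chunk)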

Similar to other proposals, but not exactly identical, I like doing it this way because it's simple and readable:

it = iter([1, 2, 3, 4, 5, 6, 7, 8, 9])
for chunk in zip(it, it, it, it):
    print(chunk)

(1, 2, 3, 4)
(5, 6, 7, 8)

This way you lose the last incomplete chunk. If you want to get (9, None, None, None) as the last chunk, just use izip_longest from itertools (zip_longest on Python 3).
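
A minimal sketch of that padded variant under its Python 3 name:

from itertools import zip_longest  # izip_longest on Python 2

it = iter([1, 2, 3, 4, 5, 6, 7, 8, 9])
for chunk in zip_longest(it, it, it, it):
    print(chunk)

(1, 2, 3, 4)
(5, 6, 7, 8)
(9, None, None, None)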

Here's my go at it; it works on lists, iterators and ranges... lazily:

def chunker(it, size):
    rv = []
    for i, el in enumerate(it, 1):
        rv.append(el)
        if i % size == 0:
            yield rv
            rv = []
    if rv:
        yield rv

Almost turned it into a one-liner ;(

In [95]: list(chunker(range(9), 2))
Out[95]: [[0, 1], [2, 3], [4, 5], [6, 7], [8]]

In [96]: list(chunker([1, 2, 3, 4, 5], 2))
Out[96]: [[1, 2], [3, 4], [5]]

In [97]: list(chunker(iter(range(9)), 2))
Out[97]: [[0, 1], [2, 3], [4, 5], [6, 7], [8]]

In [98]: list(chunker(range(9), 25))
Out[98]: [[0, 1, 2, 3, 4, 5, 6, 7, 8]]

In [99]: list(chunker(range(9), 1))
Out[99]: [[0], [1], [2], [3], [4], [5], [6], [7], [8]]

In [101]: %timeit list(chunker(range(101), 2))
11.3 µs ± 68.2 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)

Unless I'm missing something, the following simple solution based on a generator expression has not been mentioned. It assumes that both the size and the number of chunks are known (which is often the case) and that no padding is required:

def chunks(it, n, m):
    """Make an iterator over m first chunks of size n.
    """
    it = iter(it)
    # Chunks are presented as tuples.
    return (tuple(next(it) for _ in range(n)) for _ in range(m))
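
For instance, a quick sanity check (assuming the source provides at least n * m items, since the generator expression does not handle running out early):

>>> list(chunks(range(12), 3, 4))
[(0, 1, 2), (3, 4, 5), (6, 7, 8), (9, 10, 11)]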

The ideal solution for this problem works with iterators (not just sequences). It should also be fast.

This is the solution provided by the itertools documentation:

import itertools

def grouper(n, iterable, fillvalue=None):
    "grouper(3, 'ABCDEFG', 'x') --> ABC DEF Gxx"
    args = [iter(iterable)] * n
    # itertools.izip_longest on Python 2
    return itertools.zip_longest(*args, fillvalue=fillvalue)
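
On the question's four-at-a-time case this pads the last group (output written out by hand, as an illustration):

>>> list(grouper(4, [1, 2, 3, 4, 5, 6, 7, 8, 9]))
[(1, 2, 3, 4), (5, 6, 7, 8), (9, None, None, None)]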

Using IPython's %timeit on my MacBook Air, I get 47.5 µs per loop.

However, this really doesn't work for me, since the result is padded out to groups of equal size. A solution without padding is slightly more complicated. The most naive solution might be:

def grouper(size, iterable):
    i = iter(iterable)
    while True:
        out = []
        try:
            for _ in range(size):
                out.append(next(i))
        except StopIteration:
            # yield the final partial group, if any, then stop
            if out:
                yield out
            break

        yield out

Simple, but pretty slow: 693 µs per loop.

The best solution I could come up with uses islice for the inner loop:

def grouper(size, iterable):
    it = iter(iterable)
    while True:
        group = tuple(itertools.islice(it, None, size))
        if not group:
            break
        yield group
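
A quick check that this variant simply drops the padding (output written out by hand for illustration):

>>> list(grouper(4, [1, 2, 3, 4, 5, 6, 7, 8, 9]))
[(1, 2, 3, 4), (5, 6, 7, 8), (9,)]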

With the same dataset, I get 305 µs per loop.

Unable to get a pure solution any faster than that, I provide the following solution with an important caveat: if your input data contains instances of fillvalue, you could get a wrong answer.

def grouper(n, iterable, fillvalue=None):
    "grouper(3, 'ABCDEFG', 'x') --> ABC DEF Gxx"
    args = [iter(iterable)] * n
    # itertools.izip_longest on Python 2
    for x in itertools.zip_longest(*args, fillvalue=fillvalue):
        if x[-1] is fillvalue:
            # strip trailing fill values from the final group
            yield tuple(v for v in x if v is not fillvalue)
        else:
            yield x

I really don't like this answer, but it is significantly faster: 124 µs per loop.
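
The caveat in action, as a hand-run sketch: a genuine None sitting at the end of a group is indistinguishable from padding, so it gets stripped and that group comes out short:

>>> list(grouper(3, [1, 2, None, 4, 5]))
[(1, 2), (4, 5)]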