I have a Python script that takes a list of integers as input, and I need to work with four integers at a time. Unfortunately, I don't have control over the input, or I'd have it passed in as a list of four-element tuples. Currently, I'm iterating over it this way:
for i in range(0, len(ints), 4):
    # dummy op for example code
    foo += ints[i] * ints[i + 1] + ints[i + 2] * ints[i + 3]
It looks a lot like "C-think", though, which makes me suspect there's a more pythonic way of dealing with this situation. The list is discarded after iterating, so it needn't be preserved. Perhaps something like this would be better?
while ints:
    foo += ints[0] * ints[1] + ints[2] * ints[3]
    ints[0:4] = []
Still doesn't quite "feel" right, though. :-/
Related question: How do you split a list into evenly sized chunks in Python?
Yet another answer, the advantages of which are:
1) Easily understandable
2) Works on any iterable, not just sequences (some of the answers above will choke on a filehandle; see the file-like sketch after the example session below)
3) Does not load the whole chunk into memory at once
4) Does not make a chunk-long list of references to the same iterator in memory
5) No padding of fill values at the end of the list
That being said, I haven't timed it, so it might be slower than some of the more clever methods, and some of the advantages may be irrelevant given the use case.
def chunkiter(iterable, size):
    def inneriter(first, iterator, size):
        # yield the item that was already pulled off, then up to size - 1 more
        yield first
        for _ in range(size - 1):
            try:
                yield next(iterator)
            except StopIteration:
                return  # short final chunk: stop the inner iterator here
    it = iter(iterable)
    while True:
        try:
            yield inneriter(next(it), it, size)
        except StopIteration:
            return  # source exhausted (PEP 479: don't let StopIteration leak)
In [2]: i = chunkiter('abcdefgh', 3)

In [3]: for ii in i:
   ...:     for c in ii:
   ...:         print(c, end=' ')
   ...:     print()
   ...:
a b c
d e f
g h
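As a quick illustration of point 2 above, here is a sketch of my own (not from the original answer) that feeds chunkiter a file-like object; io.StringIO stands in for a real filehandle purely to keep the example self-contained:
import io

# A file object is just an iterator of lines, so chunkiter can group the lines
# lazily without materialising the whole "file" in memory.
fake_file = io.StringIO("line1\nline2\nline3\nline4\nline5\n")
for chunk in chunkiter(fake_file, 3):
    # each chunk is itself a lazy iterator; consuming it fully here avoids the
    # caveats described in the update below
    print([line.rstrip("\n") for line in chunk])
# ['line1', 'line2', 'line3']
# ['line4', 'line5']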
Update:
A couple of drawbacks, due to the fact that the inner and outer loops are pulling values from the same iterator:
1) continue doesn't work as expected in the outer loop - it just continues on to the next item rather than skipping a chunk. However, this doesn't seem like a problem as there's nothing to test in the outer loop.
2) break doesn't work as expected in the inner loop - control will wind up in the inner loop again with the next item in the iterator. To skip whole chunks, either wrap the inner iterator (ii above) in a tuple, e.g. for c in tuple(ii), or set a flag and exhaust the iterator.
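To make the tuple-wrapping workaround from point 2 concrete, here is a minimal sketch of my own (not from the original answer) that skips every other chunk:
# Wrapping the inner iterator in tuple() exhausts it up front, so flow control
# in the outer loop behaves as expected.
for n, ii in enumerate(chunkiter('abcdefgh', 3)):
    chunk = tuple(ii)  # materialise the chunk before any continue/break
    if n % 2 == 1:
        continue       # safely skips the whole chunk ('d', 'e', 'f')
    print(chunk)
# ('a', 'b', 'c')
# ('g', 'h')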
Here is an import-free chunker that supports generators:
def chunks(seq, size):
    it = iter(seq)
    while True:
        ret = []
        for _ in range(size):
            try:
                ret.append(next(it))
            except StopIteration:
                # input exhausted mid-chunk: drop the partial chunk and stop
                # (raising StopIteration inside a generator is an error since
                # Python 3.7, so return instead)
                return
        yield tuple(ret)
Usage example:
>>> def foo():
...     i = 0
...     while True:
...         i += 1
...         yield i
...
>>> c = chunks(foo(), 3)
>>> next(c)
(1, 2, 3)
>>> next(c)
(4, 5, 6)
>>> list(chunks('abcdefg', 2))
[('a', 'b'), ('c', 'd'), ('e', 'f')]
Regarding the solution given by J.F. Sebastian:
def chunker(iterable, chunksize):
    return zip(*[iter(iterable)] * chunksize)
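For reference, a quick demonstration of what this returns (my own example under Python 3, not part of the original answer); since zip stops at the shortest input, any trailing partial chunk is silently dropped:
>>> list(chunker('abcdefg', 2))
[('a', 'b'), ('c', 'd'), ('e', 'f')]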
It's clever, but it has one disadvantage: it always returns tuples. How do you get strings instead?
Of course you could write ''.join(chunker(…)), but the temporary tuples are constructed anyway.
You can get rid of the temporary tuples by writing your own zip, like this:
class IteratorExhausted(Exception):
    pass

def translate_StopIteration(iterable, to=IteratorExhausted):
    for i in iterable:
        yield i
    raise to  # StopIteration would get ignored because this is a generator,
              # but a custom exception can leave the generator.

def custom_zip(*iterables, reductor=tuple):
    iterators = tuple(map(translate_StopIteration, iterables))
    while True:
        try:
            yield reductor(next(i) for i in iterators)
        except IteratorExhausted:  # when any of the iterators gets exhausted.
            break
Then
def chunker(data, size, reductor=tuple):
    return custom_zip(*[iter(data)] * size, reductor=reductor)
Usage example:
>>> for i in chunker('12345', 2):
...     print(repr(i))
...
('1', '2')
('3', '4')
>>> for i in chunker('12345', 2, ''.join):
...     print(repr(i))
...
'12'
'34'
If the list is large, the highest-performing way to do this is with a generator:
def get_chunk(iterable, chunk_size):
    result = []
    for item in iterable:
        result.append(item)
        if len(result) == chunk_size:
            yield tuple(result)
            result = []
    if len(result) > 0:
        yield tuple(result)  # yield the final, possibly shorter chunk
for x in get_chunk([1,2,3,4,5,6,7,8,9,10], 3):
    print(x)

(1, 2, 3)
(4, 5, 6)
(7, 8, 9)
(10,)
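Because get_chunk never holds more than one chunk at a time, it can also consume another (even infinite) generator lazily; a small sketch of my own, not from the original answer, using itertools.count:
from itertools import count, islice

# count(1) is infinite; this only terminates because get_chunk is lazy and we
# take just the first three chunks.
lazy_chunks = get_chunk(count(1), 4)
print(list(islice(lazy_chunks, 3)))
# [(1, 2, 3, 4), (5, 6, 7, 8), (9, 10, 11, 12)]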