I have used hashlib (which replaces md5 in Python 2.6/3.0), and it works fine if I open a file and feed its contents to the hashlib.md5() function.
The problem is with very large files, whose size can exceed the available RAM.
How can I get the MD5 hash of a file without loading the whole file into memory?
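For reference, a minimal sketch of the whole-file approach described above (the path 'example.bin' is only a placeholder); this is exactly what breaks once the file no longer fits in RAM:

import hashlib

# Naive approach: the entire file is read into memory before hashing.
with open('example.bin', 'rb') as f:
    data = f.read()  # the whole file ends up in RAM
print(hashlib.md5(data).hexdigest())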
Current answer
Here is my take on Piotr Czapla's method:
import hashlib

def md5sum(filename):
    md5 = hashlib.md5()
    with open(filename, 'rb') as f:
        # Read in chunks of 128 * block_size bytes (8192 bytes for MD5) until EOF.
        for chunk in iter(lambda: f.read(128 * md5.block_size), b''):
            md5.update(chunk)
    return md5.hexdigest()
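A quick way to try it (the path below is just a placeholder):

print(md5sum('/path/to/large_file.iso'))  # prints the 32-character hex digest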
Other answers
Python 2/3 portable solution
To calculate a checksum (md5, sha1, etc.), you must open the file in binary mode, because you will be summing byte values.
To be portable across Python 2.7 and Python 3, you ought to use the io package, like this:
import hashlib
import io

def md5sum(src):
    md5 = hashlib.md5()
    with io.open(src, mode="rb") as fd:
        content = fd.read()
        md5.update(content)
    return md5
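Note that this function returns the md5 object itself rather than the digest, so a caller has to ask for the form it wants (the path is a placeholder):

checksum = md5sum('/path/to/file.bin')
print(checksum.hexdigest())  # hex string for display
print(checksum.digest())     # raw 16-byte digest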
If your file is big, you may prefer to read it by chunks to avoid storing the whole file content in memory:
def md5sum(src, length=io.DEFAULT_BUFFER_SIZE):
    md5 = hashlib.md5()
    with io.open(src, mode="rb") as fd:
        for chunk in iter(lambda: fd.read(length), b''):
            md5.update(chunk)
    return md5
The trick here is to use the iter() function with a sentinel (the empty bytes string).
The iterator created in this case will call the lambda function with no arguments for each call to its next() method; if the value returned equals the sentinel, StopIteration is raised, otherwise the value is returned.
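As a side note, the same chunked loop can be written with functools.partial instead of a lambda; a small sketch with a placeholder path:

from functools import partial

# iter(callable, sentinel) keeps calling `callable` until it returns the sentinel (b'' at EOF).
with open('/path/to/file.bin', 'rb') as fd:
    for chunk in iter(partial(fd.read, 8192), b''):
        print(len(chunk))  # each chunk is at most 8192 bytes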
If your file is really big, you may also need to display progress information. You can do that by calling a callback function that prints or logs the number of bytes calculated:
def md5sum(src, callback, length=io.DEFAULT_BUFFER_SIZE):
    calculated = 0
    md5 = hashlib.md5()
    with io.open(src, mode="rb") as fd:
        for chunk in iter(lambda: fd.read(length), b''):
            md5.update(chunk)
            calculated += len(chunk)
            callback(calculated)
    return md5
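For example, a simple callback that prints progress could be wired in like this (the path is a placeholder):

import os

path = '/path/to/large_file.bin'  # placeholder path
total = os.path.getsize(path)

def progress(done):
    # Report how many bytes have been hashed so far.
    print('%d / %d bytes' % (done, total))

print(md5sum(path, progress).hexdigest())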
A remix of Bastien Semene's code that takes Hawkwing's comment about a generic hashing function into account…
def hash_for_file(path, algorithm=hashlib.algorithms[0], block_size=256*128, human_readable=True):
    """
    Block size directly depends on the block size of your filesystem
    to avoid performance issues.
    Here I have blocks of 4096 octets (default NTFS).

    Linux Ext4 block size:
        sudo tune2fs -l /dev/sda5 | grep -i 'block size'
        > Block size: 4096

    Input:
        path: a path
        algorithm: an algorithm in hashlib.algorithms
                   ATM: ('md5', 'sha1', 'sha224', 'sha256', 'sha384', 'sha512')
        block_size: a multiple of 128 corresponding to the block size of your filesystem
        human_readable: switch between digest() or hexdigest() output, default hexdigest()
    Output:
        hash
    """
    if algorithm not in hashlib.algorithms:
        raise NameError('The algorithm "{algorithm}" you specified is '
                        'not a member of "hashlib.algorithms"'.format(algorithm=algorithm))

    hash_algo = hashlib.new(algorithm)  # According to the hashlib documentation, using new()
                                        # will be slower than calling the named
                                        # constructors, e.g. hashlib.md5()
    with open(path, 'rb') as f:
        for chunk in iter(lambda: f.read(block_size), b''):
            hash_algo.update(chunk)
    if human_readable:
        file_hash = hash_algo.hexdigest()
    else:
        file_hash = hash_algo.digest()
    return file_hash
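For example (placeholder path; the algorithm name must be listed in hashlib.algorithms, which is a Python 2.7 attribute):

print(hash_for_file('/path/to/file.bin', algorithm='sha256'))                      # hex string
print(hash_for_file('/path/to/file.bin', algorithm='md5', human_readable=False))  # raw bytes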
Implementing Yuval Adam's answer for Django:
import hashlib
from django.db import models

class MyModel(models.Model):
    file = models.FileField()  # Any field based on django.core.files.File

    def get_hash(self):
        hash = hashlib.md5()
        for chunk in self.file.chunks(chunk_size=8192):
            hash.update(chunk)
        return hash.hexdigest()
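Assuming such a model exists, the hash of a stored file could then be computed like this (the lookup is illustrative only):

instance = MyModel.objects.get(pk=1)  # pk=1 is just an example lookup
print(instance.get_hash())            # MD5 hex digest of the uploaded file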
import hashlib

# Note: this hashes each line of the file individually, not the file as a whole.
with open('/home/parrot/pass.txt', 'r') as opened:
    for line in opened.readlines():
        strip1 = line.strip('\n')
        hash_object = hashlib.md5(strip1.encode())
        hash2 = hash_object.hexdigest()
        print(hash2)