I want to merge two dictionaries into a new dictionary.
x = {'a': 1, 'b': 2}
y = {'b': 3, 'c': 4}
z = merge(x, y)
>>> z
{'a': 1, 'b': 3, 'c': 4}
Whenever a key k is present in both dictionaries, only the value y[k] should be kept.
Current answer
It is so silly that .update returns nothing, so I just use a simple helper function to solve the problem:
def merge(dict1, *dicts):
    for dict2 in dicts:
        dict1.update(dict2)
    return dict1
Examples:
merge(dict1,dict2)
merge(dict1,dict2,dict3)
merge(dict1,dict2,dict3,dict4)
merge({},dict1,dict2) # this one returns a new copy
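Note that this helper mutates its first argument in place (which is why the last call, starting from {}, is the one that returns a fresh copy). A minimal sketch of that behaviour, using hypothetical dicts d1 and d2:
d1 = {'a': 1}
d2 = {'b': 2}
merged = merge(d1, d2)
print(merged)                 # {'a': 1, 'b': 2}
print(d1)                     # {'a': 1, 'b': 2} -- d1 itself was updated
merged_copy = merge({}, d1, d2)   # leaves d1 and d2 untouched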
Other answers
z1 = dict(x.items() + y.items())
z2 = dict(x, **y)
On my machine, at least (a fairly ordinary x86_64 running Python 2.5.2), the z2 alternative is not only shorter and simpler but also considerably faster.
% python -m timeit -s 'x=y=dict((i,i) for i in range(20))' 'z1=dict(x.items() + y.items())'
100000 loops, best of 3: 5.67 usec per loop
% python -m timeit -s 'x=y=dict((i,i) for i in range(20))' 'z2=dict(x, **y)'
100000 loops, best of 3: 1.53 usec per loop
Example 2: non-overlapping dictionaries mapping 252 short strings to integers and vice versa:
% python -m timeit -s 'from htmlentitydefs import codepoint2name as x, name2codepoint as y' 'z1=dict(x.items() + y.items())'
1000 loops, best of 3: 260 usec per loop
% python -m timeit -s 'from htmlentitydefs import codepoint2name as x, name2codepoint as y' 'z2=dict(x, **y)'
10000 loops, best of 3: 26.9 usec per loop
z2 wins by a factor of about 10, which counts as a pretty big win in my book!
After comparing these two, I wondered whether z1's poor performance could be blamed on the overhead of building the two item lists, which in turn made me wonder whether this variation might work better:
from itertools import chain
z3 = dict(chain(x.iteritems(), y.iteritems()))
% python -m timeit -s 'from itertools import chain; from htmlentitydefs import codepoint2name as x, name2codepoint as y' 'z3=dict(chain(x.iteritems(), y.iteritems()))'
10000 loops, best of 3: 66 usec per loop
z3 is clearly faster than z1, but still not as fast as z2, so I also timed the straightforward copy-then-update approach:
z0 = dict(x)
z0.update(y)
% python -m timeit -s 'from htmlentitydefs import codepoint2name as x, name2codepoint as y' 'z0=dict(x); z0.update(y)'
10000 loops, best of 3: 26.9 usec per loop
You could also write this as
z0 = x.copy()
z0.update(y)
as Tony does, but (not surprisingly) the difference in notation turns out to have no measurable effect on performance. Use whichever looks right to you.
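(Side note, not part of the original timings: on Python 3 the z1 spelling needs list() around the items() views, and the copy-plus-update idiom has a one-expression equivalent using dict unpacking, which later answers cover in more detail; z4 is just an illustrative name.)
z1 = dict(list(x.items()) + list(y.items()))   # Python 3 spelling of z1
z4 = {**x, **y}                                # Python 3.5+ one-expression merge, y wins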
from collections import Counter
dict1 = {'a':1, 'b': 2}
dict2 = {'b':10, 'c': 11}
result = dict(Counter(dict1) + Counter(dict2))
This should solve your problem.
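For reference (and not claimed by the original answer), Counter addition sums the values of any key found in both inputs rather than letting the second dict win; with dict1 and dict2 above:
>>> dict(Counter(dict1) + Counter(dict2))
{'a': 1, 'b': 12, 'c': 11}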
Deep merge:
from typing import List, Dict
from copy import deepcopy
def merge_dicts(*from_dicts: List[Dict], no_copy: bool = False) -> Dict:
    """Non-recursive deep merge of several dicts.

    By default creates a fresh dict and merges everything into it.
    no_copy=True merges all dicts into the first one in the list without copying.
    Why? Sometimes I need to combine one dictionary from "layers".
    The "layers" are not in use and are dropped immediately after merging.
    """
    if no_copy:
        xerox = lambda x: x
    else:
        xerox = deepcopy
    result = xerox(from_dicts[0])
    for _from in from_dicts[1:]:
        merge_queue = [(result, _from)]
        for _to, _from in merge_queue:
            for k, v in _from.items():
                if k in _to and isinstance(_to[k], dict) and isinstance(v, dict):
                    # key collision and both values are dicts:
                    # add the pair to the merging queue
                    merge_queue.append((_to[k], v))
                    continue
                _to[k] = xerox(v)
    return result
Usage:
print("=============================")
print("merge all dicts to first one without copy.")
a0 = {"a":{"b":1}}
a1 = {"a":{"c":{"d":4}}}
a2 = {"a":{"c":{"f":5}, "d": 6}}
print(f"a0 id[{id(a0)}] value:{a0}")
print(f"a1 id[{id(a1)}] value:{a1}")
print(f"a2 id[{id(a2)}] value:{a2}")
r = merge_dicts(a0, a1, a2, no_copy=True)
print(f"r id[{id(r)}] value:{r}")
print("=============================")
print("create fresh copy of all")
a0 = {"a":{"b":1}}
a1 = {"a":{"c":{"d":4}}}
a2 = {"a":{"c":{"f":5}, "d": 6}}
print(f"a0 id[{id(a0)}] value:{a0}")
print(f"a1 id[{id(a1)}] value:{a1}")
print(f"a2 id[{id(a2)}] value:{a2}")
r = merge_dicts(a0, a1, a2)
print(f"r id[{id(r)}] value:{r}")
In Python 3.9
Based on PEP 584, this new version of Python introduces two new operators for dictionaries: union (|) and in-place union (|=). You can use | to merge two dictionaries, while |= updates a dictionary in place:
>>> pycon = {2016: "Portland", 2018: "Cleveland"}
>>> europython = {2017: "Rimini", 2018: "Edinburgh", 2019: "Basel"}
>>> pycon | europython
{2016: 'Portland', 2018: 'Edinburgh', 2017: 'Rimini', 2019: 'Basel'}
>>> pycon |= europython
>>> pycon
{2016: 'Portland', 2018: 'Edinburgh', 2017: 'Rimini', 2019: 'Basel'}
One of the advantages of using | is that it works on different dictionary-like types and keeps that type through the merge:
>>> from collections import defaultdict
>>> europe = defaultdict(lambda: "", {"Norway": "Oslo", "Spain": "Madrid"})
>>> africa = defaultdict(lambda: "", {"Egypt": "Cairo", "Zimbabwe": "Harare"})
>>> europe | africa
defaultdict(<function <lambda> at 0x7f0cb42a6700>,
{'Norway': 'Oslo', 'Spain': 'Madrid', 'Egypt': 'Cairo', 'Zimbabwe': 'Harare'})
>>> {**europe, **africa}
{'Norway': 'Oslo', 'Spain': 'Madrid', 'Egypt': 'Cairo', 'Zimbabwe': 'Harare'}
You may want to use a defaultdict when you need to handle missing keys gracefully; note that | preserves the defaultdict, while {**europe, **africa} does not.
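A quick sketch of what that difference means in practice, reusing europe and africa from above ("Japan" is just an illustrative missing key):
>>> (europe | africa)["Japan"]        # defaultdict: the factory supplies ''
''
>>> {**europe, **africa}["Japan"]     # plain dict: missing key raises
Traceback (most recent call last):
  ...
KeyError: 'Japan'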
The basic use of |= is to update an existing dictionary in place, similar to .update():
>>> libraries = {
... "collections": "Container datatypes",
... "math": "Mathematical functions",
... }
>>> libraries |= {"zoneinfo": "IANA time zone support"}
>>> libraries
{'collections': 'Container datatypes', 'math': 'Mathematical functions',
'zoneinfo': 'IANA time zone support'}
When you merge dictionaries with |, both operands must be of a proper dictionary type. The in-place operator (|=), on the other hand, is happy to work with any dict-like data structure:
>>> libraries |= [("graphlib", "Functionality for graph-like structures")]
>>> libraries
{'collections': 'Container datatypes', 'math': 'Mathematical functions',
'zoneinfo': 'IANA time zone support',
'graphlib': 'Functionality for graph-like structures'}
My problem with the solutions listed so far is that, in the merged dictionary, the value of key 'b' is 10, whereas to my way of thinking it should be 12.
import timeit
n=100000
su = """
x = {'a':1, 'b': 2}
y = {'b':10, 'c': 11}
"""
def timeMerge(f,su,niter):
    print "{:4f} sec for: {:30s}".format(timeit.Timer(f,setup=su).timeit(n),f)
timeMerge("dict(x, **y)",su,n)
timeMerge("x.update(y)",su,n)
timeMerge("dict(x.items() + y.items())",su,n)
timeMerge("for k in y.keys(): x[k] = k in x and x[k]+y[k] or y[k] ",su,n)
#confirm for loop adds b entries together
x = {'a':1, 'b': 2}
y = {'b':10, 'c': 11}
for k in y.keys(): x[k] = k in x and x[k]+y[k] or y[k]
print "confirm b elements are added:",x
结果:
0.049465 sec for: dict(x, **y)
0.033729 sec for: x.update(y)
0.150380 sec for: dict(x.items() + y.items())
0.083120 sec for: for k in y.keys(): x[k] = k in x and x[k]+y[k] or y[k]
confirm b elements are added: {'a': 1, 'c': 11, 'b': 12}
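For anyone on Python 3, here is a rough equivalent of that value-adding merge (a sketch, not part of the original answer; merge_adding is a hypothetical helper name) that also avoids the and/or idiom's pitfall when a value is falsy:
def merge_adding(x, y):
    # copy x, then add y's value when the key already exists, otherwise insert it
    merged = dict(x)
    for k, v in y.items():
        merged[k] = merged[k] + v if k in merged else v
    return merged

x = {'a': 1, 'b': 2}
y = {'b': 10, 'c': 11}
print(merge_adding(x, y))   # {'a': 1, 'b': 12, 'c': 11}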