I have two dictionaries with same keys. Each item is an ndarray.
from numpy import zeros, random
from collections import namedtuple
PhaseAmplitude = namedtuple('PhaseAmplitude','phase amplitude')
dict_keys = {'K1','K2', 'K3'}
J1 = dict.fromkeys(dict_keys, zeros((2,2,2,2)))
U1 = dict.fromkeys(dict_keys, PhaseAmplitude(phase=zeros((2,2)),
                                             amplitude=zeros((2,2))))
for iFld in dict_keys:
    U1[iFld] = U1[iFld]._replace(phase=random.random_sample((2,2)),
                                 amplitude=random.random_sample((2,2)))
I want to modify each item in the first dictionary using the corresponding item in the second one:
for iFld in dict_keys:
    J1[iFld][0,0,:,:] += U1[iFld].phase
    J1[iFld][0,1,:,:] += U1[iFld].amplitude
I expect to get J1[iFld][0,0,:,:] == U1[iFld].phase and J1[iFld][0,1,:,:] == U1[iFld].amplitude, but instead J1[iFld] is the same for all iFld, and equal to the sum over all keys of U1 (keeping track of the phase and amplitude fields of U1, of course).
To me this looks like a bug, but I've been using Python for only a month or so (switching from MATLAB), so I am not sure.
Question: Is this expected behavior or a bug? What should I change in my code in order to get the behavior I want?
Note: I chose the number of dimensions of dict_keys, J1, and U1 to reflect my particular situation.
This isn't a bug, though it is a pretty common gotcha that shows up in a few different situations. dict.fromkeys creates a new dictionary where all of the values are the same object. This works great for immutable types (e.g. int, str), but for mutable types you can run into problems.
e.g.:
>>> import numpy as np
>>> d = dict.fromkeys('ab', np.zeros(2))
>>> d
{'a': array([ 0., 0.]), 'b': array([ 0., 0.])}
>>> d['a'][1] = 1
>>> d
{'a': array([ 0., 1.]), 'b': array([ 0., 1.])}
and this is because:
>>> d['a'] is d['b']
True
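To illustrate "a few different situations": the same reference-sharing happens with list multiplication, while immutable values stay safe because assigning a new value rebinds the key rather than mutating a shared object. A quick sketch (variable names are just for illustration):

```python
# List multiplication copies references, not objects -- same gotcha.
rows = [[0, 0]] * 3
rows[0][0] = 1
print(rows)  # [[1, 0], [1, 0], [1, 0]] -- all three "rows" are one list

# With immutable values, assignment rebinds the key, so there is no surprise.
vals = dict.fromkeys('ab', 0)
vals['a'] = 1
print(vals)  # {'a': 1, 'b': 0}
```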
Use a dict comprehension to build the dictionary in this case:
J1 = {k: zeros((2,2,2,2)) for k in dict_keys}
(or, pre-Python 2.7):
J1 = dict((k, zeros((2,2,2,2))) for k in dict_keys)
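A minimal sketch contrasting the two constructions (using small 2×2 arrays rather than the question's full shapes): the comprehension evaluates zeros(...) once per key, so in-place updates to one key no longer leak into the others.

```python
import numpy as np

keys = ['K1', 'K2', 'K3']

# Buggy: every key maps to the SAME array object.
shared = dict.fromkeys(keys, np.zeros((2, 2)))
assert shared['K1'] is shared['K2']  # one object, three names

# Fixed: the comprehension calls np.zeros(...) once per key,
# so each key gets its own independent array.
independent = {k: np.zeros((2, 2)) for k in keys}
assert independent['K1'] is not independent['K2']

# An in-place update now affects only the intended key.
independent['K1'] += 1.0
print(independent['K1'].sum())  # 4.0 (four elements, each 1.0)
print(independent['K2'].sum())  # 0.0 (untouched)
```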