I am trying to store arrays under multiple key groups in order to generate group-wise summaries, and I thought a dict of dicts might be the solution. Based on this answer, I tried to build a dict of dicts. Here is the code:
import numpy
from collections import defaultdict

s1 = numpy.array([[1L, 'B', 4],
                  [1L, 'A', 3],
                  [1L, 'B', 10],
                  [1L, 'A', 0.0],
                  [2L, 'A', 11],
                  [2L, 'B', 13],
                  [2L, 'B', 1],
                  [2L, 'A', 6]], dtype=object)

def make_dict():
    return defaultdict(make_dict)

d = defaultdict(make_dict)
for x in s1:
    d[x[0]][x[1]] = x[2]
So d[1]['B'] gives 10, while I was expecting [4, 10]. It looks like d is keeping only the last value for each key combination. Is there a way to append all the values that share a particular key combination? I thought defaultdict would take care of this. Where am I going wrong? Is there another solution? I can easily do this in pandas, and I love the library, but I need a non-pandas solution.
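To illustrate what the loop above is actually doing, here is a minimal example: plain assignment replaces the previous value for a repeated key, it never accumulates:

d = {}
d['B'] = 4
d['B'] = 10   # replaces the 4, so d['B'] is now 10 and the 4 is lost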
Update: The question was answered (@juanpa.arrivillaga), but it looks like my example data was inadequate. What if we had the following data instead?
s1 = numpy.array([[1L, 'B', 4, 3],
                  [1L, 'A', 3, 5],
                  [1L, 'B', 10, 23],
                  [2L, 'A', 11, 1],
                  [2L, 'B', 1, 8],
                  [2L, 'A', 6, 23]], dtype=object)
We may no longer be able to use defaultdict(lambda: defaultdict(list)) as the dictionary container. How can the solution be extended so that the remaining columns are appended as rows of a 2D array instead of a flat list? I expect d[1]['B'] to give me [[4, 3], [10, 23]].
If you are already using numpy - which you really shouldn't be for heterogeneous data types - you should just use pandas:
In [8]: data = [[1, 'B', 4],
   ...:         [1, 'A', 3],
   ...:         [1, 'B', 10],
   ...:         [1, 'A', 0.0],
   ...:         [2, 'A', 11],
   ...:         [2, 'B', 13],
   ...:         [2, 'B', 1],
   ...:         [2, 'A', 6]]
In [9]: import pandas as pd
In [10]: df = pd.DataFrame(data, columns=['c1','c2','c3'])
In [11]: df
Out[11]:
   c1 c2    c3
0   1  B   4.0
1   1  A   3.0
2   1  B  10.0
3   1  A   0.0
4   2  A  11.0
5   2  B  13.0
6   2  B   1.0
7   2  A   6.0
In [12]: df.groupby(['c1','c2']).describe()
Out[12]:
         c3
      count mean       std  min   25%  50%    75%   max
c1 c2
1  A    2.0  1.5  2.121320  0.0  0.75  1.5   2.25   3.0
   B    2.0  7.0  4.242641  4.0  5.50  7.0   8.50  10.0
2  A    2.0  8.5  3.535534  6.0  7.25  8.5   9.75  11.0
   B    2.0  7.0  8.485281  1.0  4.00  7.0  10.00  13.0
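As an aside, if you do want the exact dict-of-lists shape the question asks for while staying in pandas, grouping and collecting with apply(list) gets you there. A small sketch, using the df from above:

lists = df.groupby(['c1', 'c2'])['c3'].apply(list)
lists.loc[(1, 'B')]   # [4.0, 10.0] - floats, since c3 was upcast by the 0.0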
If you must do this without pandas:
In [13]: from collections import defaultdict
In [14]: grouper = defaultdict(lambda: defaultdict(list))
In [15]: for c1, c2, c3 in data:
    ...:     grouper[c1][c2].append(c3)
    ...:
In [16]: grouper
Out[16]:
defaultdict(<function __main__.<lambda>>,
            {1: defaultdict(list, {'A': [3, 0.0], 'B': [4, 10]}),
             2: defaultdict(list, {'A': [11, 6], 'B': [13, 1]})})
In [17]: grouper[1]['B']
Out[17]: [4, 10]
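One caveat worth noting (an addition of mine, not part of the original answer): a defaultdict inserts a key on any lookup, so merely evaluating grouper[1]['C'] would create an empty list rather than raising KeyError. Once grouping is done, you can freeze the result into plain dicts:

plain = {k: dict(v) for k, v in grouper.items()}
# {1: {'A': [3, 0.0], 'B': [4, 10]}, 2: {'A': [11, 6], 'B': [13, 1]}}
# plain[1]['C'] now raises KeyError instead of silently inserting a key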
If you are always going to be grouping on the first two columns, just do something like the following with the updated s1 data:
In [6]: grouper = defaultdict(lambda: defaultdict(list))
In [7]: for c1, c2, *rest in s1:
   ...:     grouper[c1][c2].append(rest)
   ...:
In [8]: grouper
Out[8]:
defaultdict(<function __main__.<lambda>>,
            {1: defaultdict(list, {'A': [[3, 5]], 'B': [[4, 3], [10, 23]]}),
             2: defaultdict(list, {'A': [[11, 1], [6, 23]], 'B': [[1, 8]]})})
In [9]: grouper[1]['A']
Out[9]: [[3, 5]]
In [10]: grouper[1]['B']
Out[10]: [[4, 3], [10, 23]]
In [11]: grouper[2]['B']
Out[11]: [[1, 8]]
In [12]: grouper[2]['A']
Out[12]: [[11, 1], [6, 23]]
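Since the stated end goal was group-wise summaries, here is a sketch of how you might reduce each group once it is collected; numpy.mean is just a stand-in for whatever summary you actually need:

import numpy

for c1, inner in grouper.items():
    for c2, rows in inner.items():
        # convert the group's rows to a float array and average column-wise
        print(c1, c2, numpy.mean(numpy.array(rows, dtype=float), axis=0))
# e.g. 1 B [  7.  13.]  - the column-wise means of [4, 3] and [10, 23]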
For Python 2, you will have to modify this a little, since it lacks support for extended iterable unpacking:
In [8]: for arr in s1:
   ...:     c1, c2 = arr[:2]
   ...:     rest = list(arr[2:])
   ...:     grouper[c1][c2].append(rest)
   ...:
In [9]: grouper
Out[9]:
defaultdict(<function __main__.<lambda>>,
            {1L: defaultdict(list, {'A': [[3, 5]], 'B': [[4, 3], [10, 23]]}),
             2L: defaultdict(list, {'A': [[11, 1], [6, 23]], 'B': [[1, 8]]})})
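As a final aside (my own variation, not something the question requires): if the nesting itself is not important, a single flat defaultdict keyed on (c1, c2) tuples sidesteps both the nested lambda and the unpacking issue, and the same loop runs unchanged on Python 2 and 3:

from collections import defaultdict

grouper = defaultdict(list)
for arr in s1:
    # key on the first two columns, append the rest as a row
    grouper[tuple(arr[:2])].append(list(arr[2:]))

grouper[(1, 'B')]   # [[4, 3], [10, 23]]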