I have the following function, which parses three columns of tabular data (an openpyxl worksheet) into a defaultdict:
from collections import defaultdict

def campaigns_and_adsets_and_pageviews_from_ga(ourTab):
    d = defaultdict(lambda: defaultdict(int))
    # Skip the first (header) and last rows
    for row in ourTab.rows[1:-1]:
        if ('Facebook' in row[0].value) and ('(not set)' not in row[2].value):
            d[row[1].value][row[2].value] += row[4].value
    return d
Its output looks like the following:
In [790]: campaigns_and_adsets_and_pageviews_from_ga(ourTab)
Out[790]:
defaultdict(<function __main__.<lambda>>,
            {u'XXX 20160314': defaultdict(int,
                 {u'Carnival desktopfeed': 2.0,
                  u'Carnival mobilefeed': 588.0,
                  u'PYS Broad desktopfeed': 371.0,
                  u'PYS Broad mobilefeed': 1192.0}),
             u'YYY Intl 20150903': defaultdict(int,
                 {u'CA desktopfeed': 2.0}),
What I want to do is multiply the final value in each element (i.e. 2.0, 588.0, etc.) by a constant, resulting in another defaultdict (or even a regular nested dict would be fine).
Can the defaultdict be deconstructed somehow back into a nested dict in order to allow the transformation to be possible? Or what other approaches are possible?
You could use a simple recursive function that multiplies numeric values by a given multiplier and constructs a new dictionary for every dict instance:
from numbers import Number

def multiply(o, mul):
    if isinstance(o, dict):
        # Build a plain dict, recursing into each value
        return {k: multiply(v, mul) for k, v in o.items()}
    elif isinstance(o, Number):
        # Leaf value: scale it by the multiplier
        return o * mul
    else:
        # Leave non-numeric leaves (e.g. strings) untouched
        return o
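Since defaultdict is a subclass of dict, the function can be called on your nested structure directly. A minimal, self-contained sketch (values copied from the Out[790] output in the question):

from collections import defaultdict

# Rebuild a small piece of the example structure
d = defaultdict(lambda: defaultdict(int))
d[u'XXX 20160314'][u'Carnival desktopfeed'] = 2.0
d[u'YYY Intl 20150903'][u'CA desktopfeed'] = 2.0

result = multiply(d, 2)
# result == {u'XXX 20160314': {u'Carnival desktopfeed': 4.0},
#            u'YYY Intl 20150903': {u'CA desktopfeed': 4.0}}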
Given your example defaultdict and multiplier 2 as input, the output looks like the following:
{
    u'YYY Intl 20150903': {u'CA desktopfeed': 4.0},
    u'XXX 20160314': {
        u'Carnival desktopfeed': 4.0,
        u'PYS Broad desktopfeed': 742.0,
        u'PYS Broad mobilefeed': 2384.0,
        u'Carnival mobilefeed': 1176.0
    }
}
Note that because the dict comprehension builds regular dicts, the result is a plain nested dict rather than a defaultdict, which also answers the first part of your question. The example doesn't handle lists, though; for those you'd need to add some more code.
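A minimal sketch of that extra branch, assuming list and tuple values should be scaled element-wise (your data doesn't contain any, so this is purely illustrative):

from numbers import Number

def multiply(o, mul):
    if isinstance(o, dict):
        return {k: multiply(v, mul) for k, v in o.items()}
    elif isinstance(o, (list, tuple)):
        # Recurse into sequences element by element, keeping the original type
        return type(o)(multiply(v, mul) for v in o)
    elif isinstance(o, Number):
        return o * mul
    else:
        return o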