I am trying to import data into namedtuples. The data is very large, so I need to import it efficiently. So far I have:
import csv
from collections import namedtuple

myData = namedtuple('myData', 'div, name, val')
csv.register_dialect('mycsv', delimiter='\t', quoting=csv.QUOTE_NONE)
with open('demand.txt', newline='') as f:
    reader = csv.reader(f, 'mycsv')
After this point, what should I do to import the whole table in demand.txt into namedtuples in bulk? I have seen solutions using a for loop, but I suspect that is inefficient.
I also want to be able to obtain all the values under a given field, e.g. by typing data.div. Would the correct format for that be a tuple of namedtuples?
To get a list of myData tuples, do the following while the file is still open (csv.reader reads lazily, and in Python 3 map is lazy too):
data = list(map(myData._make, reader))  # or [myData._make(r) for r in reader]
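Putting the pieces together, here is a minimal runnable sketch. The sample rows written to demand.txt are made up for illustration; the parsing itself is exactly the register_dialect/_make approach above:

```python
import csv
from collections import namedtuple

# Hypothetical sample data standing in for the real demand.txt
with open('demand.txt', 'w', newline='') as f:
    f.write('d1\talpha\t10\nd2\tbeta\t20\n')

myData = namedtuple('myData', 'div, name, val')
csv.register_dialect('mycsv', delimiter='\t', quoting=csv.QUOTE_NONE)

with open('demand.txt', newline='') as f:
    reader = csv.reader(f, 'mycsv')
    # Materialize the rows before the file closes
    data = list(map(myData._make, reader))

# data[0] == myData(div='d1', name='alpha', val='10')
```

Note that csv gives you strings; if val should be numeric, convert it (e.g. in the list comprehension) as you build each row.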
To get all the values of a particular field:
from operator import attrgetter
data_divs = list(map(attrgetter('div'), data))  # or [r.div for r in data]
However, if you're concerned about efficiency, be aware that attribute access on namedtuples can be noticeably slower than indexed access, since each attribute lookup goes through a descriptor. Indexed access will be faster:
from operator import itemgetter
div_idx = myData._fields.index('div')
data_divs = list(map(itemgetter(div_idx), data))  # or [r[div_idx] for r in data]
Both produce the same list of values.
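A small sketch confirming the two routes agree, using a couple of made-up rows:

```python
from collections import namedtuple
from operator import attrgetter, itemgetter

myData = namedtuple('myData', 'div, name, val')
# Hypothetical rows for illustration
data = [myData('d1', 'alpha', '10'), myData('d2', 'beta', '20')]

# Attribute-based extraction
by_attr = list(map(attrgetter('div'), data))

# Index-based extraction via _fields
div_idx = myData._fields.index('div')
by_index = list(map(itemgetter(div_idx), data))

# Both are ['d1', 'd2']
```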