I'm trying to read a CSV file and extract the required data from it. My code looks like this:
import csv

file = "sample.csv"

def get_values_flexibly(file, keyword):
    def process(func):
        return set([func(cell)] + [func(row[index]) for row in reader])
    with open(file, 'r') as f:
        reader = csv.reader(f)
        first_row = reader.next()
        if keyword in first_row:
            return str(list(set([row[first_row.index(keyword)] for row in reader])))
        for index, cell in enumerate(reader.next()):
            if cell.endswith(' ' + keyword):
                return str(list(set(process(lambda cell: cell[:-len(keyword) - 1]))))
            elif cell.split(':')[0].strip() == keyword:
                return str(list(set(process(lambda cell: cell.split(':')[1].strip()))))

print get_values_flexibly(file, 'data')
where sample.csv looks something like this:
sample.csv
h1,h2,h3
a data,data: abc,tr
b data,vf data, gh
k data,grt data, ph
I'd like to exclude the first column from the output. My current output is ['a', 'k', 'b'], but I'd like it to be ['abc', 'vf', 'grt'] instead. How can I achieve this using csv.reader?
EDIT: I have multiple files. Each file could have different headers, and the number of columns varies too. I'd like a script that works for all the files. Also, the header of the first column is always the same, "sample_column" for instance. I'd like to skip the data from the column with header "sample_column".
Ok, so removing the data (or whatever the keyword is) can be done with a regular expression (which is not really the scope of the question, but meh...).

About the regular expression: let's imagine your keyword is data, right? You can use this: (?:data)*\W*(?P<juicy_data>\w+)\W*(?:data)*. If your keyword were something else, you can just change the two data strings in that regular expression to whatever other value the keyword contains.

You can test regular expressions online at www.pythonregex.com or www.debuggex.com.

The regular expression is basically saying: look for zero or more data strings, but (if you find any) don't do anything with them. Don't add them to the list of matched groups, don't show them... nothing, just match them and discard them. After that, look for zero or more non-word characters (anything that is not a letter or a number, just in case there's a data: or a space or a data--> ... that \W soaks up all the non-alphanumeric characters that come after data). Then you get to your juicy_data: one or more characters that can be found in "regular" words (any alphanumeric character). Then, just in case there's a data behind it, do the same as with the first data group: just match it and discard it.
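A quick sketch of what the first match of that pattern gives back for the sample cells (with the keyword data filled in):

```python
import re

# The pattern described above, with the keyword "data" filled in;
# findall() returns the juicy_data group of each match.
kwd_remover = re.compile(r'(?:data)*\W*(?P<juicy_data>\w+)\W*(?:data)*')

print(kwd_remover.findall('data: abc'))  # -> ['abc']
print(kwd_remover.findall('a data'))     # -> ['a']
print(kwd_remover.findall('vf data'))    # -> ['vf']
```

Note that for 'a data' the match is 'a': the keyword is stripped, and whatever is left of the cell is what you get back.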
Now, to remove the first column: you can use the fact that a csv.reader is itself an iterator. When you iterate over it (as the code below does), it gives you a list containing all the columns found in one row. The fact that each row comes back as a list is very useful for your case: you just have to skip its first item (the "sample_column" one, row[0]) and pick the column you care about, which here is row[1].
So here it goes:
import csv
import re

def get_values_flexibly(csv_path, keyword):
    def process(func):
        return set([func(cell)] + [func(row[index]) for row in reader])
    # Start for real!
    kwd_remover = re.compile(
        r'(?:{kw})*\W*(?P<juicy_data>\w+)\W*(?:{kw})*'.format(kw=keyword)
    )
    result = []
    with open(csv_path, 'r') as f:
        reader = csv.reader(f)
        first_row = [kwd_remover.findall(cell)[0] for cell in reader.next()]
        print "Cleaned first_row: %s" % first_row
        for index, row in enumerate(reader):
            print "Before cleaning: %s" % row
            cleaned_row = [kwd_remover.findall(cell)[0] for cell in row]
            result.append(cleaned_row[1])
            print "After cleaning: %s" % cleaned_row
    return result

print "Result: %s" % get_values_flexibly("sample.csv", 'data')
Outputs:
Cleaned first_row: ['h1', 'h2', 'h3']
Before cleaning: ['a data', 'data: abc', 'tr']
After cleaning: ['a', 'abc', 'tr']
Before cleaning: ['b data', 'vf data', ' gh']
After cleaning: ['b', 'vf', 'gh']
Before cleaning: ['k data', 'grt data', ' ph']
After cleaning: ['k', 'grt', 'ph']
Result: ['abc', 'vf', 'grt']
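Since the edit says the files can have different headers and column counts, but the column to drop is always named "sample_column", here is a hedged Python 3 sketch that skips that column by header name instead of by position. The helper name get_values_skipping and the in-memory sample file are illustrative assumptions, not part of the original code:

```python
import csv
import io
import re

def get_values_skipping(csv_file, keyword, skip_header='sample_column'):
    """Collect cleaned values from every column except `skip_header`,
    keeping only cells that actually contain the keyword."""
    kwd_remover = re.compile(
        r'(?:{kw})*\W*(?P<juicy_data>\w+)\W*(?:{kw})*'.format(kw=keyword)
    )
    result = []
    reader = csv.reader(csv_file)
    headers = next(reader)  # header order can differ per file
    # Find where "sample_column" sits in *this* file (None if absent)
    skip = headers.index(skip_header) if skip_header in headers else None
    for row in reader:
        for index, cell in enumerate(row):
            if index != skip and keyword in cell:
                result.append(kwd_remover.findall(cell)[0])
    return result

# Hypothetical in-memory file standing in for one of the CSVs; here the
# first header plays the role of "sample_column".
sample = io.StringIO(
    "sample_column,h2,h3\n"
    "a data,data: abc,tr\n"
    "b data,vf data, gh\n"
    "k data,grt data, ph\n"
)
print(get_values_skipping(sample, 'data'))  # -> ['abc', 'vf', 'grt']
```

Because the column is looked up by header name, it doesn't matter where "sample_column" ends up in a given file, or how many other columns that file has.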