I would like to convert a CSV to DBF in Python in the following way:
import dbf
table = dbf.from_csv('/home/beata/Documents/NC/CNRM_control/CNRM_pr_power1961','CNRM_pr_power1961.dbf')
but I got the following error:
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/lib/pymodules/python2.7/dbf/__init__.py", line 172, in from_csv
    mtable.append(tuple(row))
  File "/usr/lib/pymodules/python2.7/dbf/tables.py", line 1154, in append
    newrecord[index] = item
  File "/usr/lib/pymodules/python2.7/dbf/tables.py", line 278, in __setitem__
    yo.__setattr__(yo._layout.fields[name], value)
  File "/usr/lib/pymodules/python2.7/dbf/tables.py", line 269, in __setattr__
    yo._updateFieldValue(fielddef, value)
  File "/usr/lib/pymodules/python2.7/dbf/tables.py", line 168, in _updateFieldValue
    bytes = array('c', update(value, fielddef, yo._layout.memo))
  File "/usr/lib/pymodules/python2.7/dbf/_io.py", line 132, in updateMemo
    block = memo.put_memo(string)
  File "/usr/lib/pymodules/python2.7/dbf/tables.py", line 424, in put_memo
    yo.memory[thismemo] = data
MemoryError
>>>
The size of the CSV is 2.4 GiB. My Ubuntu 14.04 LTS OS is 64-bit with 31.3 GiB of memory and an Intel Xeon(R) E5-1660 v2 CPU @ 3.70GHz x 12.
Could someone tell me what I should do to fix this error?
Thank you for your help in advance!
The problem you have there is that dbf.from_csv attempts to create an in-memory table, and your OS isn't letting you have enough RAM to do so.
To get around that problem I rewrote from_csv to write directly to disk if you pass on_disk=True. Check out PyPI for the latest code.
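For example, once the updated package from PyPI is installed, the call would look roughly like this (only a sketch reusing the paths from your question; check help(dbf.from_csv) in the version you install, since the argument names have shifted between releases):

import dbf

# Build the table directly on disk instead of in memory.
# on_disk=True is the new flag; the older packaged version that ships
# with Ubuntu 14.04 does not have it.
table = dbf.from_csv(
    '/home/beata/Documents/NC/CNRM_control/CNRM_pr_power1961',   # source csv
    'CNRM_pr_power1961.dbf',                                      # destination dbf
    on_disk=True,
)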
The remaining problem is in the dbf format itself -- you may run into problems with files that large, as the internal structure wasn't designed for such large capacities. If the update doesn't completely solve your problem, you'll need to split your csv file and create multiple dbfs out of it.
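If splitting turns out to be necessary, here is a rough sketch using only the standard library csv module (the chunk size and the _partNNN naming are arbitrary choices, and header-row handling is left out):

import csv

def split_csv(path, rows_per_chunk=1000000):
    # Split a large csv into smaller csv files of rows_per_chunk rows each.
    # Note: a header row, if present, ends up only in the first chunk.
    with open(path, 'rb') as source:   # Python 2: open csv files in binary mode
        reader = csv.reader(source)
        out = None
        writer = None
        chunk = 0
        count = 0
        for row in reader:
            if count % rows_per_chunk == 0:
                if out is not None:
                    out.close()
                chunk += 1
                out = open('%s_part%03d.csv' % (path, chunk), 'wb')
                writer = csv.writer(out)
            writer.writerow(row)
            count += 1
        if out is not None:
            out.close()

# Afterwards, run dbf.from_csv(...) on each of the smaller files to get
# several dbfs instead of one oversized table.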
Feel free to email me directly if you have more questions.