I've got a ton of CSV files that are all formatted like this:
2015/06/29,13:00,1,194,36,390
As you can probably see, I need to ignore the comma that separates the "2015/06/29" and the "13:00" in the example above.
Currently, this is what my SQL command looks like:
LOAD DATA local INFILE '"!new!"' IGNORE into table db.table COLUMNS TERMINATED BY ',' IGNORE 3 LINES (@when,col2,col3,col4,col5) SET when=STR_TO_DATE(@when,'%Y/%m/%d %H:%i')" -u user -ppass
Everything imports fine, except that pesky comma throws everything off by one column.
Obviously I could import the two fields into separate columns, but they both need to go into one DATETIME-formatted column. Anybody have any idea how I would jump through that hoop?
You don't need to "ignore" the comma. Just import the two fields into two separate user-defined variables, and then concatenate those into a single value for the column.
Put the date portion into a user-defined variable, just like you are already doing: @whendt
Put the second field, the time portion, into a second user-defined variable: @whentm
In the SET clause, concatenate those two variables together with the CONCAT function:
CONCAT(@whendt, ' ', @whentm)
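With your sample row, that expression yields the string '2015/06/29 13:00', which matches the format '%Y/%m/%d %H:%i'. You can sanity-check the whole conversion with literal values before running the import (no table required):

SELECT STR_TO_DATE( CONCAT('2015/06/29', ' ', '13:00'), '%Y/%m/%d %H:%i');
-- returns: 2015-06-29 13:00:00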
Putting it all together:
LOAD DATA LOCAL INFILE '"!new!"'
IGNORE
INTO TABLE db.table
COLUMNS TERMINATED BY ','
IGNORE 3 LINES
( @whendt   -- first column is the date portion
, @whentm   -- second column is the time portion
, col2
, col3
, col4
, col5
)
SET `when` = STR_TO_DATE( CONCAT(@whendt, ' ', @whentm), '%Y/%m/%d %H:%i')
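One other thing: if your column really is named when, it has to be wrapped in backticks, because WHEN is a reserved word in MySQL (it introduces the branches of a CASE expression). The bare when in your original statement would trip the parser even once the comma problem is solved.

If you want to dry-run the statement without touching your real table, you can point it at a throwaway table with the same shape. This is just a sketch: test_import is a placeholder name, and the INT types for col2 through col5 are an assumption based on your sample row:

CREATE TABLE test_import (
  `when` DATETIME,   -- backticks needed: WHEN is a reserved word
  col2 INT,
  col3 INT,
  col4 INT,
  col5 INT
);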