I have a table:

| foo | bar |
+-----+-----+
| abc | 4   |
| def | 1   |
| ghi | 2   |

The foo column has a unique index.
I want to load a file into this table, summing the bar values of rows whose foo column duplicates an existing row:

abc \t 7
ghi \t 3
jkl \t 4

Is there any way to sum the values of duplicate rows during LOAD DATA?
This command replaces old values in a row:
LOAD DATA LOCAL INFILE 'file.txt' REPLACE
INTO TABLE `table` FIELDS TERMINATED BY '\t' (`foo`,`bar`);
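To see why REPLACE cannot do the summing, here is a minimal sketch using Python's sqlite3 module (sqlite's INSERT OR REPLACE mirrors MySQL's REPLACE here): the old row is deleted and a new one inserted, so the previous bar value is lost rather than added to.

```python
import sqlite3

# In-memory stand-in for the MySQL table; names mirror the question.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (foo TEXT PRIMARY KEY, bar INT)")
con.execute("INSERT INTO t VALUES ('abc', 4)")

# REPLACE semantics: delete the conflicting row, insert the new one.
con.execute("INSERT OR REPLACE INTO t VALUES ('abc', 7)")

value = con.execute("SELECT bar FROM t WHERE foo = 'abc'").fetchone()[0]
print(value)  # 7, not 11 -- the old value of 4 was overwritten, not summed
```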
I want to have a table like this:

| foo | bar |
+-----+-----+
| abc | 11  |
| def | 1   |
| ghi | 5   |
| jkl | 4   |
There is no direct way to achieve what you want, since the source of the records is a file rather than a table. Follow these steps to insert new rows or update existing ones.

First, create a temporary table, TempTable, with two columns, foo and bar. Table creation DDL:
CREATE TABLE TempTable
(
foo VARCHAR(30),
bar INT
);
Load the file into TempTable:

LOAD DATA LOCAL INFILE 'file.txt'
REPLACE INTO TABLE TempTable
FIELDS TERMINATED BY '\t' (`foo`,`bar`);
Next, use INSERT ... ON DUPLICATE KEY UPDATE, which inserts a new record, but if the record conflicts with an existing key, updates that record instead. A key must be defined first, but in your case you can execute the statement directly, since you mentioned that foo is already unique. Using INSERT INTO ... SELECT, you can combine it with INSERT ... ON DUPLICATE KEY UPDATE:
INSERT INTO YourExistingTable (Foo, Bar)
SELECT Foo, Bar
FROM TempTable
ON DUPLICATE KEY UPDATE Bar = VALUES(Bar) + Bar;
Finally, drop the temporary table:

DROP TABLE TempTable;