I have a script that loads CSV files into a table with a LOAD DATA INFILE ... REPLACE INTO TABLE command.
The files come from different "stores", and all of the stores go into this same table.
My question is how I can combine two columns so that together they become the unique key, or maybe you have some other idea.
storeid = will NOT be unique for each row
code = will be unique within each storeid
The goal is to load the CSV files into the table and update a row if something changed, based on storeid and code.
This all worked when I used a different database for each store and had code
as unique, but now I want to have all stores in the same db.
Is combining the columns, e.g. 1 + 123 = 1123 = new unique value, the best approach?
Thanks in advance
I ended up using a composite primary key over the two columns:
CREATE TABLE stock(
storeid int,
code int,
PRIMARY KEY (storeid , code)
);
which combines the two fields into one unique key; it seems to work in my tests.
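With that key in place, the import can stay a single statement. A sketch, assuming a file named stock.csv and an extra qty column (both are placeholders, not from the original post):

```sql
-- REPLACE makes LOAD DATA delete and re-insert any incoming row whose
-- (storeid, code) pair already exists in the table, so changed rows
-- from the CSV overwrite the stored ones.
-- 'stock.csv' and the qty column are illustrative assumptions.
LOAD DATA INFILE 'stock.csv'
REPLACE INTO TABLE stock
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\n'
(storeid, code, qty);
```

Note that REPLACE literally deletes the old row and inserts the new one; if you need a true in-place update (e.g. to preserve other columns), INSERT ... ON DUPLICATE KEY UPDATE keyed on the same composite primary key is an alternative.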