Problem: I am inserting a .csv
file with thousands/millions of rows into a SQL Server table with a bulk insert
query. Now I want to do a bulk update so that only rows where a specific column has changed are updated, e.g. where the price has changed. How can I do this? Also: I want to ignore constraints when inserting into the table.
BULK INSERT Table
FROM 'D:\test.csv'
WITH (
    FIELDTERMINATOR = ',',
    ROWTERMINATOR = '\n'
);
GO
For example, the table contains a price column; when I load the file a second time, only the rows whose price differs from before should be updated.
Do it in two (or more) steps.
Insert the raw data (flaws and all), and then run a separate update statement to make it look how you want.
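As a sketch of that cleanup step (table and column names here are made up for illustration), a post-load UPDATE might trim stray whitespace and normalize blanks to NULL in the rows you just inserted:

-- Hypothetical cleanup pass after the raw load:
-- trim whitespace and turn empty strings into NULLs.
UPDATE dbo.Products
SET    ProductName = NULLIF(LTRIM(RTRIM(ProductName)), '')
WHERE  ProductName <> LTRIM(RTRIM(ProductName))
   OR  ProductName = '';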
For a large or busy table, or in order to keep the new data separate until it's ready, you may also want to first bulk insert into a separate holding table, massage and clean the data there, and then migrate from the holding table to the final table.
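The holding-table approach might look something like the following. All table and column names (ProductsStaging, Products, ProductId, Price) are placeholders you would replace with your own schema; the staging table is assumed to have no constraints, which also sidesteps the constraint problem during the raw load:

-- 1. Load the file into an empty, constraint-free staging table.
BULK INSERT dbo.ProductsStaging
FROM 'D:\test.csv'
WITH (
    FIELDTERMINATOR = ',',
    ROWTERMINATOR = '\n'
);

-- 2. Update only the rows whose price actually changed.
UPDATE p
SET    p.Price = s.Price
FROM   dbo.Products AS p
JOIN   dbo.ProductsStaging AS s
       ON s.ProductId = p.ProductId
WHERE  p.Price <> s.Price;

-- 3. Insert rows that are new to the target table.
INSERT INTO dbo.Products (ProductId, ProductName, Price)
SELECT s.ProductId, s.ProductName, s.Price
FROM   dbo.ProductsStaging AS s
WHERE  NOT EXISTS (
    SELECT 1 FROM dbo.Products AS p
    WHERE p.ProductId = s.ProductId
);

-- 4. Empty the staging table for the next load.
TRUNCATE TABLE dbo.ProductsStaging;

Steps 2 and 3 could also be combined into a single MERGE statement if you prefer, but separate UPDATE/INSERT statements are easier to reason about and debug.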
Alternatively, you can write a client program to pre-clean the data before the bulk insert, or you can use a tool like SQL Server Integration Services (SSIS) to handle the import. SSIS has a lot of nice features for handling this kind of thing.
What you won't be able to do is make a simple or quick adjustment to the bulk insert code. It does what it does, and nothing more.