Some things to consider when moving large amounts of data to or from SQL Server (or any database server, really):
- If you can, split the file into pieces and load the pieces. Odds are a single BCP stream is nowhere near stressing the SQL Server's resources overall, and the long duration of one huge copy leaves you exposed to any interruption in communication. So split the file up and load the pieces one at a time (safest), or load 2, 4 or 10 pieces at once and see if you can get the DBA out of their chair :) (see the first sketch after this list).
- If you can, place the file on a drive local to the SQL Server. That avoids potential network/storage interruptions; BCP is not forgiving of a break in communication.
- Eliminate any indexing on the destination table. BCP into a dumb, empty, boring heap. Avoid any extras (constraints, triggers, computed columns) and use char/varchar columns if you can, which also avoids the CPU cost of data-type conversions (the second sketch after this list shows this kind of heap).
- You can tweak the batch size that BCP queues up before committing to SQL Server (the -b switch; without it, bcp sends the whole file as one batch). Try 100,000 or more and do some testing to see what works best for you. With a larger batch you'll save some time and touch the physical disk less often (though this depends on a lot of other things too).
- If the file must be pulled across a network, you can also tweak the network packet size (the -a switch). Search for help on calculating your ideal packet size.
- There are other options available, but as others have stated, without more detail you can't get a targeted answer.
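
To make the split-and-parallelize idea concrete, here is a minimal sketch that uses bcp's -F/-L (first row / last row) switches to carve one big character-format file into ranges and load them from two separate sessions. Every name here (server, database, table, file path, delimiter, row ranges) is made up; swap in your own, and use -U/-P instead of -T if you are not on Windows authentication.

```
REM Session 1: rows 1 through 250,000,000
bcp MyDb.dbo.BigStaging in D:\loads\bigfile.dat -S MYSERVER -T -c -t"|" -F 1 -L 250000000 -b 100000 -h "TABLOCK" -e D:\loads\part1.err

REM Session 2, run in another window at the same time: rows 250,000,001 through 500,000,000
bcp MyDb.dbo.BigStaging in D:\loads\bigfile.dat -S MYSERVER -T -c -t"|" -F 250000001 -L 500000000 -b 100000 -h "TABLOCK" -e D:\loads\part2.err
```

Note that -F/-L still makes bcp read through the file to reach its starting row, so physically splitting the file and pointing each bcp at its own piece is often faster for truly huge files; with TABLOCK and no indexes on the heap, the parallel streams take bulk-update locks and don't block each other.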
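
And a sketch of the dumb, empty heap target plus the -b (batch size) and -a (network packet size) knobs mentioned above; again, the database, table, columns, paths and numbers are illustrative, not a prescription.

```
-- Plain heap: no primary key, no indexes, no constraints, no triggers.
-- char/varchar columns so bcp spends no CPU on data-type conversion.
CREATE TABLE dbo.BigStaging
(
    Col1 varchar(50)  NULL,
    Col2 varchar(50)  NULL,
    Col3 varchar(200) NULL
);
```

```
REM -b = rows committed per batch, -a = network packet size in bytes
bcp MyDb.dbo.BigStaging in D:\loads\bigfile.dat -S MYSERVER -T -c -t"|" -b 100000 -a 16384 -h "TABLOCK" -e D:\loads\bigfile.err
```

Load into the staging heap first, then convert types and build indexes on the real table afterwards.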
Share your bcp command. Share the structure of the file. Share details on the table you are bcp'ing into. Is this a distributed environment (SQL Server on one box, the bcp file on another), with any network involved? How many records per second are you getting? Is the file wide or narrow? How wide is it? 1 billion records of 5 integer columns isn't that much data at all, but 1 billion records that are 2,000 bytes wide... that's a monster!
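
For a rough sense of scale: 5 int columns is about 20 bytes of data per row, so 1 billion rows is on the order of 20 GB; at 2,000 bytes per row the same billion rows is roughly 2 TB before any overhead.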