When running a large bulk insert into a durable and a non-durable memory-optimized table, I'm getting the same speed for both. Shouldn't a bulk insert into a non-durable (SCHEMA_ONLY) memory-optimized table be faster than one into a durable (SCHEMA_AND_DATA) table, since the durable table also has to write to the transaction log? If so, what am I doing wrong here?
My test is below; each load consistently takes ~30 seconds. This is on SQL Server 2016 SP1. The bulk insert loads 10 million rows from a CSV file that I generated.
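In case you want to reproduce this: the database already has the MEMORY_OPTIMIZED_DATA filegroup that memory-optimized tables require. For completeness, a minimal setup sketch (the database name TestDb and the container path are placeholders):

-- One-time setup: memory-optimized tables need a MEMORY_OPTIMIZED_DATA filegroup.
ALTER DATABASE TestDb ADD FILEGROUP TestDb_mod CONTAINS MEMORY_OPTIMIZED_DATA;

-- The FILENAME here is a directory (container) for checkpoint files, not a single file.
ALTER DATABASE TestDb
    ADD FILE (NAME = 'TestDb_mod', FILENAME = 'C:\Data\TestDb_mod')
    TO FILEGROUP TestDb_mod;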
SQL
CREATE TABLE Users_ND (
    Id INT NOT NULL IDENTITY(1,1) PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 10000000),
    Username VARCHAR(200) NOT NULL
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY);

CREATE TABLE Users_D (
    Id INT NOT NULL IDENTITY(1,1) PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 10000000),
    Username VARCHAR(200) NOT NULL
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);

SET STATISTICS TIME ON;
SET NOCOUNT ON;

BULK INSERT Users_ND
FROM 'users-huge.csv'
WITH (FIRSTROW = 2, FIELDTERMINATOR = ',', ROWTERMINATOR = '\n', BATCHSIZE = 1000000);

BULK INSERT Users_D
FROM 'users-huge.csv'
WITH (FIRSTROW = 2, FIELDTERMINATOR = ',', ROWTERMINATOR = '\n', BATCHSIZE = 1000000);
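To rule out the durability setting silently not applying, one sanity check is to snapshot transaction log writes around the durable load with sys.dm_io_virtual_file_stats. A sketch (run in the test database; file_id 2 is the transaction log):

-- Log bytes written before the durable load.
SELECT num_of_bytes_written AS log_bytes_before
FROM sys.dm_io_virtual_file_stats(DB_ID(), 2);

BULK INSERT Users_D
FROM 'users-huge.csv'
WITH (FIRSTROW = 2, FIELDTERMINATOR = ',', ROWTERMINATOR = '\n', BATCHSIZE = 1000000);

-- The delta should be large for SCHEMA_AND_DATA and near zero for SCHEMA_ONLY.
SELECT num_of_bytes_written AS log_bytes_after
FROM sys.dm_io_virtual_file_stats(DB_ID(), 2);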
users-huge.csv
Id, Username
,user1
,user2
...
,user10000000
Turns out I had this issue because the source file for the bulk insert was saved on a slow HDD, so reading the file was the bottleneck: both loads were limited by the same source-read throughput, which masked any difference between the durable and non-durable tables.
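For anyone chasing a similar symptom: per-session wait stats can show whether a load is spending its time waiting on I/O rather than doing work. A diagnostic sketch (this DMV requires SQL Server 2016 or later; run it in the same session, right after the BULK INSERT):

-- Waits accumulated by the current session (SQL Server 2016+).
-- If I/O-related wait types dominate relative to elapsed time, the load is
-- bottlenecked on reading/writing files, not on the target table type.
SELECT wait_type, waiting_tasks_count, wait_time_ms
FROM sys.dm_exec_session_wait_stats
WHERE session_id = @@SPID
ORDER BY wait_time_ms DESC;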