Maybe this is a dumb question, but how do I split/page an insertion so that other operations can update the same table?
I have two stored procedures. One inserts bulk data:
Stored procedure InsertIntoMyTable:

INSERT INTO MyTable (Column1, Column2, Column3)
SELECT Column1, @Column2, 0
FROM MyOtherTable

The primary key of MyTable is (Column1, Column2).
The other stored procedure runs a MERGE against the same table MyTable, but from another source; it mostly updates Column3, but it can also insert rows into MyTable.
The problem is that when the insertion into MyTable takes a long time (10 million records), the stored procedure that executes the MERGE must wait until InsertIntoMyTable finishes.
To try to solve this, I added a paginated version:
DECLARE @Start INT = 1
DECLARE @End INT = 1000
DECLARE @Amount INT = 1000
DECLARE @Total INT

SELECT @Total = COUNT(Column1) FROM MyOtherTable WHERE Column2 = @Column2

WHILE (@Start <= @Total)
BEGIN
    INSERT INTO MyTable (Column1, Column2, Column3)
    SELECT Column1, @Column2, 0
    FROM (SELECT Column1,
                 ROW_NUMBER() OVER (ORDER BY Column1) AS rownumber
          FROM MyOtherTable
          WHERE Column2 = @Column2) x
    WHERE x.rownumber BETWEEN @Start AND @End

    SET @Start = @End + 1
    SET @End = @End + @Amount
END
but it still locks the table until the whole operation ends.
Note: The execution is not in a transaction.
The execution IS in a transaction: if you don't open an explicit transaction yourself, SQL Server wraps each individual statement in its own implicit (autocommit) transaction.
And if you acquire more than roughly 5000 locks (from INSERT, DELETE, UPDATE operations) in a single transaction, SQL Server performs lock escalation: it drops the individual row locks and instead takes an exclusive lock on the entire table, so no other operations are possible until that (possibly implicit) transaction has committed or been rolled back.
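If you really need concurrent access during a single large transaction, one option is to turn escalation off for that table with the LOCK_ESCALATION table option. This is only a sketch of the trade-off, not a recommendation: SQL Server will then hold the individual row/page locks, which consumes considerably more lock memory.

```sql
-- Valid settings are AUTO | TABLE | DISABLE; DISABLE prevents escalation
-- to a table lock in most cases, at the cost of holding many fine-grained locks.
ALTER TABLE MyTable SET (LOCK_ESCALATION = DISABLE);
```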
The part that inserts in blocks of 1000 rows shouldn't cause a lock escalation, but of course any rows being inserted in the context of that transaction cannot be read or manipulated by another transaction at the same time.
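To make the per-batch commit explicit (rather than relying on autocommit), you can wrap each iteration in its own short transaction, so the locks for a batch are released as soon as it commits. This is a sketch using the question's own table and variable names:

```sql
WHILE (@Start <= @Total)
BEGIN
    BEGIN TRANSACTION;

    INSERT INTO MyTable (Column1, Column2, Column3)
    SELECT Column1, @Column2, 0
    FROM (SELECT Column1,
                 ROW_NUMBER() OVER (ORDER BY Column1) AS rownumber
          FROM MyOtherTable
          WHERE Column2 = @Column2) x
    WHERE x.rownumber BETWEEN @Start AND @End;

    COMMIT TRANSACTION;  -- row locks held by this batch are released here

    SET @Start = @End + 1;
    SET @End = @End + @Amount;
END
```

Each batch stays well under the ~5000-lock escalation threshold, so the MERGE from the other procedure can interleave between batches.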