I am troubleshooting a stored procedure that is supposed to send data from a SQL Server instance to another SQL Server instance hosted on Azure.
After investigating it, the way the stored procedure works is this: it first figures out, based on a parameter, which tables it is going to transfer. So I end up with a table variable @transferTable
that looks like:
id   tableName
=================
23   tableName1
55   tableName23
72   tableName111
Then it processes each table in a loop, which is set up like:
SELECT @recordID = MIN(id) FROM @transferTable;

WHILE @recordID IS NOT NULL
BEGIN
    -- Record the attempt
    UPDATE transferlog
    SET LastAttempt = GETDATE()
    WHERE id = @recordID;

    -- Attempt the transfer
    SET @SQLCommand = 'Insert into Azure.dbo.table blah blah'; -- this part is broken right now

    PRINT 'about to run sql command'; -- this prints

    EXECUTE (@SQLCommand);

    PRINT 'ran sql'; -- this does not print

    -- Record the success
    UPDATE transferlog
    SET LastUpdate = GETDATE()
    WHERE id = @recordID;

    SELECT @recordID = MIN(id) FROM @transferTable WHERE id > @recordID;
END;
This is a sketch of how the stored procedure works. Right now it is failing on the first table and I know why - there is a NOT NULL constraint on the Azure version of the table, and one of the records that it is trying to insert has a NULL value in that column. That makes sense enough to me.
What does not make sense is that the transferlog table is updating both the LastAttempt and LastUpdate columns not only for the first table, the one with the problematic NULL value, but also for all the tables after it.
Why is this occurring? The author of the stored procedure expected that LastUpdate would be set if and only if no error occurred, but that does not seem to be the case. How can we ensure that the LastUpdate column really is only updated when no error occurred while INSERTing into the Azure SQL Server database?
Not all errors in SQL Server are "equal". Some will not cause a batch to abort, and an error in a deferred batch, run via EXEC (@SQL) or (better) sys.sp_executesql, won't cause the outer batch to end (that is actually often desired behaviour as well). Not to mention there are some errors that are "pre-batch" aborting, as they occur at the time the batch is parsed.
Take the following batch:
SELECT 1;
EXEC sys.sp_executesql N'SELECT 1 / 0;';
SELECT 2;
EXEC sys.sp_executesql N'SELECT * FROM dbo.DoesNotExist;';
SELECT 3;
This will still result in the result sets 2 and 3 being returned, despite the fact that both of the statements executed in deferred batches raise errors.
In fact, if we move those statements out of their deferred batches, you still get 1 and 2:
SELECT 1;
SELECT 1 / 0;
SELECT 2;
SELECT * FROM dbo.DoesNotExist;
SELECT 3;
Note, however, that 3 isn't returned, as referencing an object that doesn't exist does result in a batch-aborting error in the same scope.
If you want to ensure that the batch is always aborted, then one method is to enable XACT_ABORT:
SET XACT_ABORT ON;
SELECT 1;
EXEC sys.sp_executesql N'SELECT 1 / 0;';
SELECT 2;
EXEC sys.sp_executesql N'SELECT * FROM dbo.DoesNotExist;';
SELECT 3;
This just results in 1 being returned. You may also want to ensure that you BEGIN and COMMIT a TRANSACTION, so that on a failure anything is "automagically" rolled back.
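As a minimal sketch of that pattern, reusing the transferlog table and the @recordID and @SQLCommand variables from your procedure (the dynamic INSERT itself is a placeholder):

SET XACT_ABORT ON;

BEGIN TRANSACTION;

-- Placeholder for the dynamic INSERT into the Azure table. If it
-- errors, XACT_ABORT aborts the batch and the open transaction is
-- rolled back, taking the LastUpdate change below with it.
EXECUTE (@SQLCommand);

UPDATE transferlog
SET LastUpdate = GETDATE()
WHERE id = @recordID;

COMMIT TRANSACTION;

Note that the LastAttempt update would need to happen before BEGIN TRANSACTION (where it auto-commits), otherwise the rollback would undo it as well.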
Alternatively, you could use a TRY ... CATCH; however, I don't really see the benefit here:
BEGIN TRY
SELECT 1;
EXEC sys.sp_executesql N'SELECT 1 / 0;';
SELECT 2;
EXEC sys.sp_executesql N'SELECT * FROM dbo.DoesNotExist;';
SELECT 3;
END TRY
BEGIN CATCH
THROW;
END CATCH;
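Applied to your loop, though, TRY ... CATCH is what lets you both skip LastUpdate for a failed table and still carry on with the remaining tables. Here is a sketch of the loop body, assuming the variable and table names from your question (and assuming @SQLCommand is declared as nvarchar so it can be passed to sys.sp_executesql):

BEGIN TRY
    UPDATE transferlog
    SET LastAttempt = GETDATE()
    WHERE id = @recordID;

    -- Any error raised here (including the NOT NULL violation)
    -- transfers control to the CATCH block.
    EXEC sys.sp_executesql @SQLCommand;

    -- Only reached when the INSERT succeeded.
    UPDATE transferlog
    SET LastUpdate = GETDATE()
    WHERE id = @recordID;
END TRY
BEGIN CATCH
    -- LastUpdate is left untouched for this table; report the
    -- failure and let the loop move on to the next id.
    PRINT ERROR_MESSAGE();
END CATCH;

Because the CATCH block doesn't re-THROW, the WHILE loop simply moves on to the next table; re-throwing instead would stop the procedure at the first failure.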