If I'm executing a set of SQL commands within a single SqlCommand.ExecuteReader() call,
can I consider them to run under a transaction by default, or do I have to start a transaction explicitly?
Would the .NET pattern
using (var scope = new TransactionScope()) { ... }
be enough to make the batch transactional if it isn't by default?
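For reference, a minimal sketch of the TransactionScope pattern I mean (the connection string and batch SQL are placeholders):

```csharp
using System.Data.SqlClient;
using System.Transactions;

// A connection opened inside an ambient TransactionScope enlists in it
// automatically; work commits only if scope.Complete() is called.
using (var scope = new TransactionScope())
using (var connection = new SqlConnection(connectionString)) // placeholder
{
    connection.Open(); // enlists in the ambient transaction

    using (var command = new SqlCommand(batchSql, connection)) // batchSql = concatenated batch
    using (var reader = command.ExecuteReader())
    {
        while (reader.Read()) { /* consume results */ }
    }

    scope.Complete(); // without this, Dispose() rolls everything back
}
```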
The first part of the command batch declares a table variable populated from the table that is updated in the second part:
DECLARE @tempTable TABLE (ID int,...)
INSERT INTO @tempTable (ID,...)
SELECT [ID],.. FROM [Table] WHERE ...
The second part is a set of fragments, one per item, each of which updates or inserts a row:
IF EXISTS(SELECT 1 FROM @tempTable WHERE ID=@id{0})
BEGIN
UPDATE [Table]
SET ... WHERE ...
END ELSE BEGIN
INSERT INTO [Table] ([ID], ...) VALUES (@id{0}, ...)
END
After the fragments are concatenated, I execute them in a single SqlCommand.ExecuteReader() call.
The reason for batching is that I'd like to minimize the recurring transfer of parameters between SQL Server and my app, since they are constant within the scope of the batch.
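To illustrate the batching, this is roughly how I build and execute it (items, upsertFragmentTemplate, and the shared parameter are hypothetical names):

```csharp
using System.Data.SqlClient;
using System.Text;

var batch = new StringBuilder();
batch.AppendLine("DECLARE @tempTable TABLE (ID int);");
batch.AppendLine("INSERT INTO @tempTable (ID) SELECT [ID] FROM [Table] WHERE ...;");

// One IF EXISTS ... UPDATE/INSERT fragment per item, numbered @id0, @id1, ...
for (int i = 0; i < items.Count; i++)
    batch.AppendFormat(upsertFragmentTemplate, i); // hypothetical template

using (var command = new SqlCommand(batch.ToString(), connection))
{
    for (int i = 0; i < items.Count; i++)
        command.Parameters.AddWithValue("@id" + i, items[i].Id);

    // constant values are sent once for the whole batch instead of per statement
    command.Parameters.AddWithValue("@sharedValue", sharedValue); // hypothetical
    command.ExecuteReader();
}
```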
The reason I ask is to avoid redundant code, and to understand whether the original data in [Table] (the source of @tempTable) can change before the last update/insert fragment of the batch has finished.
Executing a SqlCommand won't begin any transaction on its own.
You have two choices: wrap the call in a TransactionScope (or an explicit SqlTransaction) on the client side, or make the batch itself transactional with BEGIN TRANSACTION ... COMMIT in the T-SQL.
I would use the second.
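A sketch of the second option, using the fragments from the question (the SET XACT_ABORT ON line is my addition, so that any runtime error rolls back the whole batch instead of leaving it half-applied):

```sql
SET XACT_ABORT ON;  -- roll back the whole transaction on any runtime error
BEGIN TRANSACTION;

DECLARE @tempTable TABLE (ID int /*, ... */);
INSERT INTO @tempTable (ID /*, ... */)
SELECT [ID] /*, ... */ FROM [Table] WHERE ...;

-- one fragment like this per item, as in the question
IF EXISTS (SELECT 1 FROM @tempTable WHERE ID = @id0)
    UPDATE [Table] SET /* ... */ WHERE /* ... */;
ELSE
    INSERT INTO [Table] ([ID] /*, ... */) VALUES (@id0 /*, ... */);

COMMIT TRANSACTION;
```

Two caveats: table variables are not themselves rolled back by ROLLBACK (only the statements against [Table] are), and a transaction alone does not freeze [Table] between the snapshot into @tempTable and the later fragments; under the default READ COMMITTED isolation level other sessions can still modify those rows unless you raise the isolation level or use locking hints.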