I have a staging table that is populated with data from a manufacturing system, and the data needs to be normalized. The staging table has a key field, LOTCODE, plus 6 PARTSTATUS fields: a status code (INT) for PART1, PART2, etc.
Every hour I'd like to query the staging table and, for each row found, insert 6 rows into an archive table, which would have key fields LOTCODE and PARTLOCATION plus a third field, PARTSTATUS. Once the data is inserted into the archive table, the row in the staging table would be deleted.
Is there a way to do this with just queries, or do I need to loop through the staging table row by row, insert 6 rows into the archive table, and then delete the original row?
The database is SQL Server 2016. Approximately 1440 rows will be added to the staging table each hour by the manufacturing system.
Staging table sample row:
Lot, Part1Status, Part2Status, Part3Status, Part4Status, Part5Status, Part6Status
ABCDE, 1010, 1010, -50, 1010, 990, 1010
Archive table sample data:
Lot, PartLocation, Stats
ABCDE, 1, 1010
ABCDE, 2, 1010
ABCDE, 3, -50
ABCDE, 4, 1010
ABCDE, 5, 990
ABCDE, 6, 1010
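For the script below I'm assuming table definitions along these lines (names and types are guesses from the sample data; LOTCODE/Lot looks alphanumeric and the statuses INT, so adjust to the real schema):

-- assumed DDL, inferred from the sample data; adjust names/types to the real schema
create table StagingTable (
    LOTCODE     varchar(20) not null primary key,
    Part1Status int not null,
    Part2Status int not null,
    Part3Status int not null,
    Part4Status int not null,
    Part5Status int not null,
    Part6Status int not null
);

create table ArchiveTable (
    Lot          varchar(20) not null,
    PartLocation tinyint     not null,
    [Stats]      int         not null,
    constraint PK_ArchiveTable primary key (Lot, PartLocation)
);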
Here's a TRY/CATCH script that executes the INSERT and DELETE statements inside an explicit transaction where XACT_ABORT has been set to ON, which ensures a complete rollback of both DML statements in the event an exception is thrown. Something like this:
set nocount on;
set xact_abort on; -- any run-time error dooms the transaction so both statements roll back together

begin transaction;
begin try
    -- capture the lots actually archived so only those staging rows get deleted,
    -- even if the manufacturing system inserts new rows mid-run;
    -- type should match the staging table's LOTCODE column
    declare @lc table (LOTCODE varchar(20) not null);

    -- unpivot the six PartNStatus columns into one (PARTLOCATION, STATS) row each
    insert ArchiveTable (Lot, PartLocation, [Stats])
    output inserted.Lot into @lc (LOTCODE) -- fires once per archive row, so 6 rows per lot
    select st.LOTCODE, v.PARTLOCATION, v.[STATS]
    from StagingTable st
    cross apply (values (1, st.Part1Status),
                        (2, st.Part2Status),
                        (3, st.Part3Status),
                        (4, st.Part4Status),
                        (5, st.Part5Status),
                        (6, st.Part6Status)) v (PARTLOCATION, [STATS]);

    -- delete only the staging rows that were just archived;
    -- the duplicate LOTCODEs in @lc are harmless to the join
    delete st
    from StagingTable st
    join @lc lc on st.LOTCODE = lc.LOTCODE;

    commit transaction;
end try
begin catch
    if @@trancount > 0
        rollback transaction;
    throw; -- or raiserror / logging
end catch
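If the CROSS APPLY (VALUES ...) part is unfamiliar, it's what turns each wide staging row into six narrow rows, one per part location. Here's a standalone demo using the sample row from the question (the s and P1..P6 aliases are made up for the demo):

-- unpivot one hard-coded row into six (PartLocation, Stats) rows
select s.Lot, v.PartLocation, v.[Stats]
from (values ('ABCDE', 1010, 1010, -50, 1010, 990, 1010)) s(Lot, P1, P2, P3, P4, P5, P6)
cross apply (values (1, s.P1), (2, s.P2), (3, s.P3),
                    (4, s.P4), (5, s.P5), (6, s.P6)) v(PartLocation, [Stats]);

This returns exactly the six archive rows shown in the question. For the hourly cadence, wrap the script in a stored procedure and run it from a SQL Server Agent job scheduled every hour.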