Tags: sql-server, indexing, sql-server-2017, temporal, temporal-tables

Dropping Indices Before Bulk Load in SQL Server with Temporal Tables?


I am working in SQL Server 2017 and am trying to make use of temporal tables so I can track historical changes to my data. I have never used temporal tables before, but I have built many ETL processes with traditional tables and am trying to understand how to optimize loading data into the final table. Traditionally (that is, when I'm not using temporal tables), to improve performance I drop all indexes on my tables, bulk load the 10 million or so records into the final table, and reapply the indexes once the load is complete.

What I'm wondering is whether I can do the same thing with temporal tables. It seems I need a primary key in place when loading or updating data in a temporal table in order to preserve its history. But if I can't drop that index, won't I take a performance hit when bulk loading? How can I reduce the load time for my tables, if I can't drop indexes, while keeping temporal table history?


Solution

  • The history table is not required to match the indexes on the current table, and you can safely drop or disable the non-clustered indexes on the table you are inserting records into; only the primary key, which system-versioning requires, has to stay in place. See Microsoft's documentation on temporal table considerations and limitations for the full list of requirements.
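
    A minimal sketch of the disable/load/rebuild pattern, assuming a hypothetical system-versioned table dbo.FactSales with a non-clustered index IX_FactSales_CustomerId and a staging table dbo.StageSales:

        -- Disable the non-clustered index; the clustered primary key must
        -- remain in place while SYSTEM_VERSIONING = ON.
        ALTER INDEX IX_FactSales_CustomerId ON dbo.FactSales DISABLE;

        -- Bulk load the staged rows. The period columns (ValidFrom/ValidTo)
        -- are GENERATED ALWAYS, so they are not listed in the insert.
        INSERT INTO dbo.FactSales (SaleId, CustomerId, Amount)
        SELECT SaleId, CustomerId, Amount
        FROM dbo.StageSales;

        -- Rebuild the disabled index once the load completes.
        ALTER INDEX IX_FactSales_CustomerId ON dbo.FactSales REBUILD;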

    Temporal tables are useful for certain scenarios, but I don't use them in ETL processes. It might be better to look at columnstore indexes: if you create the history table with a clustered columnstore index, you can choose archival compression, and each run then only inserts the new 10 million rows tagged with a batch ID (see the sketch below). The data footprint will be pretty small.

    Of course, if you need to query the history table often using the system-versioning syntax (FOR SYSTEM_TIME), a columnstore index may hurt query performance.
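
    A sketch of that setup, with hypothetical table and column names. The history table is created first so it can carry a clustered columnstore index with archival compression before system-versioning is switched on; its columns must match the current table, and it must have no primary key of its own:

        -- History table: same columns as the current table, no constraints.
        CREATE TABLE dbo.FactSalesHistory
        (
            SaleId     INT            NOT NULL,
            CustomerId INT            NOT NULL,
            Amount     DECIMAL(18, 2) NOT NULL,
            BatchId    INT            NOT NULL,
            ValidFrom  DATETIME2      NOT NULL,
            ValidTo    DATETIME2      NOT NULL
        );

        -- Clustered columnstore index with archival compression keeps the
        -- history footprint small.
        CREATE CLUSTERED COLUMNSTORE INDEX CCI_FactSalesHistory
            ON dbo.FactSalesHistory
            WITH (DATA_COMPRESSION = COLUMNSTORE_ARCHIVE);

        -- Current table, system-versioned against the pre-built history table.
        CREATE TABLE dbo.FactSales
        (
            SaleId     INT            NOT NULL
                CONSTRAINT PK_FactSales PRIMARY KEY CLUSTERED,
            CustomerId INT            NOT NULL,
            Amount     DECIMAL(18, 2) NOT NULL,
            BatchId    INT            NOT NULL,  -- tags each bulk load
            ValidFrom  DATETIME2 GENERATED ALWAYS AS ROW START NOT NULL,
            ValidTo    DATETIME2 GENERATED ALWAYS AS ROW END   NOT NULL,
            PERIOD FOR SYSTEM_TIME (ValidFrom, ValidTo)
        )
        WITH (SYSTEM_VERSIONING = ON
            (HISTORY_TABLE = dbo.FactSalesHistory, DATA_CONSISTENCY_CHECK = ON));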