
Speed in deleting records in SQL table


I have a temp table used for intermediate calculation in a stored procedure.

Here is the code segment:

CREATE TABLE #Updates 
(
    ID int not null,
    ID2 int not null,
    ID3 int not null
);

-- Do some operations and updates
IF OBJECT_ID('tempdb..#Updates','U') IS NOT NULL
    DROP TABLE #Updates;

Since we do this many times a day, it causes SQL Server performance issues.

I want to change the above code to:

 IF OBJECT_ID('tempdb..#Updates','U') IS NULL
 BEGIN
     CREATE TABLE #Updates 
     (
         ID int not null,
         ID2 int not null,
         ID3 int not null
     );
END

-- Do some operations and updates

 IF OBJECT_ID('tempdb..#Updates','U') IS NOT NULL
     DELETE FROM #Updates;

I wonder whether this change will improve SQL Server performance. If there is a better way to do this, please let me know, too.


Solution

  • Short version of answer

• Remove the checks for whether the temp table exists - such checks are only useful during development and manual execution of parts of the code
    • Unless you are deleting from the temp table then adding new data in, just let the temp table be removed naturally when the stored procedure finishes
    • Also consider whether a primary key and/or indexes will help

    e.g., I've commented out stuff you don't need below and added a primary key on ID

    -- IF OBJECT_ID('tempdb..#Updates','U') IS NULL
    -- BEGIN
    
    CREATE TABLE #Updates (
        ID int not null PRIMARY KEY,
        ID2 int not null,
        ID3 int not null
    );
    
    -- END
    
    <Do some operations and updates>
    
    -- IF OBJECT_ID('tempdb..#Updates','U') IS NOT NULL
    -- DELETE FROM #Updates
    

    If, within that stored procedure, you

    • Create the temporary table
    • Insert values/data into it
    • Delete everything from it via DELETE FROM #Updates
    • Insert new values into it

    then using TRUNCATE TABLE #Updates will be marginally faster than deleting from it.
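A minimal sketch of that reuse pattern, assuming the same #Updates structure as above (the inserted values are placeholders):

```sql
CREATE TABLE #Updates (
    ID int NOT NULL PRIMARY KEY,
    ID2 int NOT NULL,
    ID3 int NOT NULL
);

INSERT INTO #Updates (ID, ID2, ID3) VALUES (1, 10, 100);
-- ... first round of operations ...

-- TRUNCATE deallocates the table's pages instead of logging
-- row-by-row deletes, so it is slightly cheaper than DELETE
TRUNCATE TABLE #Updates;

INSERT INTO #Updates (ID, ID2, ID3) VALUES (2, 20, 200);
-- ... second round of operations ...
```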

    Explanation/longer answer:

    The temporary table is

    • only available during the current session/scope, and
    • deleted when the stored procedure finishes

    If you run the stored procedure twice simultaneously, each execution will create, use, then delete its own temporary table - and neither can access the other's.

    In terms of improving performance

    • If you are only using that table once (e.g., you create it, populate it, use it in a join, then are finished with it), you can instead inline the SQL into that join, bypassing the temp table entirely. This avoids the cost of creating the temporary table, but the larger combined query may produce worse cardinality estimates, and therefore worse performance
    • If you are using the table multiple times, you may consider putting indexes and/or a primary key on the temp table so it is sorted for those queries. Use columns that will be useful in joins with the other tables.
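For instance, if later queries repeatedly join on ID2 (a hypothetical column choice - use whichever columns your joins actually filter on), a nonclustered index can be declared inline when the temp table is created; this inline syntax is a sketch that assumes SQL Server 2014 or later, otherwise use a separate CREATE INDEX statement:

```sql
CREATE TABLE #Updates (
    ID int NOT NULL PRIMARY KEY,      -- clustered index by default
    ID2 int NOT NULL,
    ID3 int NOT NULL,
    INDEX IX_Updates_ID2 NONCLUSTERED (ID2)  -- speeds up joins/filters on ID2
);
```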

    Temporary tables (i.e., tables in tempdb) are typically very fast. They also have an advantage over normal tables in that they require much less transaction logging. I'd be surprised if creating a temporary table really affects run time that much.

    Pinal Dave has a nice short video on whether dropping temporary tables in a stored procedure, versus letting them be removed automatically, makes any difference - in short, the answer is 'no'.

    UPDATE: I just did a test on an OK-ish computer that is about 10 years old now.

    CREATE PROCEDURE _TestA AS BEGIN CREATE TABLE #a (a int); END;
    GO
    CREATE PROCEDURE _TestB AS BEGIN CREATE TABLE #a (a int); CREATE TABLE #b (a int); END;
    GO
    
    EXEC _TestA;
    GO 1000
    EXEC _TestB;
    GO 1000
    

    Over 1,000 executions, the first took 4 seconds and the second took 6 seconds. This suggests that creating a temp table should only take a few milliseconds at most.