Tags: c#, sql-server, sql-server-2012, sql-insert, bulkupdate

The transaction log for database is full due to 'LOG_BACKUP'


We are deleting rows from a few tables in one database (say FirstDB), and will insert about 93,357 rows (one row at a time) into one Log table in another database (say SecondDB).

The database's recovery model is Full.

Each row contains 6 columns of data:

[DeleteTime] [datetime] NULL,
[FilePath] [varchar](255) NOT NULL,
[DocumentID] [int] NULL,
[AnotherCol] [varchar](50) NULL,
[AnotherCol2] [varchar](50) NULL,
[AnotherCol3] [varchar](50) NULL,

On a previous run, when we inserted 153,000 rows, we got the error "The transaction log for database 'SecondDB' is full due to 'LOG_BACKUP'".

What is the best way to avoid consuming so much space in the transaction log?

Should I commit the transaction after, say, every 1,000 insertions into SecondDB? (A batched variant is sketched after the code below.)

This is the code that deletes from FirstDB and inserts into SecondDB:

using (SqlConnection con = new SqlConnection(connectionString))
{
    try
    {
        con.Open();
        SqlDataAdapter da = new SqlDataAdapter("spGetDocuments", con);
        DataSet ds = new DataSet();
        da.Fill(ds);
        foreach (DataRow aRow in ds.Tables[0].Rows)
        {
            try
            {
                //Code to insert into FirstDB (elided; it also assigns sDocumentPath)

                //Code to insert into SecondDB
                SqlCommand cmdSecondDB = new SqlCommand("spUpdateDeleteDocsLog", con);
                cmdSecondDB.CommandType = CommandType.StoredProcedure;
                cmdSecondDB.Parameters.Add(new SqlParameter("FilePath", sDocumentPath));
                cmdSecondDB.Parameters.Add(new SqlParameter("DocumentID", aRow["DocumentID"]));
                // ... remaining parameters elided ...
                int iRow = cmdSecondDB.ExecuteNonQuery();
                cmdSecondDB.Dispose();
            }
            catch (SqlException)
            {
                // Log the failure for this row and continue with the next one.
            }
        }
        ds.Dispose();
        da.Dispose();
    }
    finally
    {
        con.Close();
    }
}
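
For comparison, here is a minimal sketch of the batching idea from the question: the same per-row insert wrapped in a SqlTransaction that is committed every 1,000 rows. The batch size is an arbitrary assumption, and (as the answer below explains) batching alone does not let log space be reused under the full recovery model. This fragment would replace the foreach loop above; con and ds come from the surrounding code.

    const int batchSize = 1000;           // assumed batch size
    int pending = 0;
    SqlTransaction tx = con.BeginTransaction();
    try
    {
        foreach (DataRow aRow in ds.Tables[0].Rows)
        {
            using (SqlCommand cmd = new SqlCommand("spUpdateDeleteDocsLog", con, tx))
            {
                cmd.CommandType = CommandType.StoredProcedure;
                cmd.Parameters.Add(new SqlParameter("DocumentID", aRow["DocumentID"]));
                // ... remaining parameters as in the original code ...
                cmd.ExecuteNonQuery();
            }
            if (++pending == batchSize)
            {
                tx.Commit();                     // end the current batch
                tx = con.BeginTransaction();     // start the next one
                pending = 0;
            }
        }
        tx.Commit();                             // commit the final partial batch
    }
    catch
    {
        tx.Rollback();                           // only the current batch is lost
        throw;
    }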

Also, is there a way to estimate how much transaction log space will be used by 93,357 rows of the above 6 columns of data?
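
As a rough upper bound from the column definitions, each row holds at most about 420 bytes of data (8 for the datetime, plus up to 255 + 4 + 3 × 50 for the rest), and each single-row INSERT also logs some per-record overhead, so 93,357 rows should come to a few tens of MB of log, before any index and page-allocation logging. To measure it empirically, one option (SQL Server 2012 and later) is to query sys.dm_db_log_space_usage in SecondDB before and after the run; the connection string name below is a placeholder:

    using (SqlConnection con = new SqlConnection(secondDbConnectionString)) // placeholder
    {
        con.Open();
        // used_log_space_in_bytes reports the log space currently in use
        // for the database the connection is scoped to.
        using (SqlCommand cmd = new SqlCommand(
            "SELECT used_log_space_in_bytes FROM sys.dm_db_log_space_usage;", con))
        {
            long usedBytes = Convert.ToInt64(cmd.ExecuteScalar());
            Console.WriteLine("Log space used: {0:N0} bytes", usedBytes);
        }
    }

Run this before and after the insert loop and compare the two numbers.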

Thank you


Solution

  • The database's recovery model is Full.

    Then the only things that matter are the size of your log file and the frequency of your log backups. Under the full recovery model, batching neither reduces the amount of log that is written nor allows the log space to be reused before the next transaction log backup, so committing every 1,000 rows will not by itself prevent this error.
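
    In practice that means taking log backups often enough (typically via a SQL Server Agent job or a maintenance plan) that the log is truncated and reusable before it fills. A log backup can also be issued from C# if needed; this is only a sketch, and the database name and destination path are placeholders:

        using (SqlConnection con = new SqlConnection(connectionString))
        using (SqlCommand cmd = new SqlCommand(
            "BACKUP LOG [SecondDB] TO DISK = N'D:\\Backups\\SecondDB.trn';", con))
        {
            con.Open();
            cmd.CommandTimeout = 0;  // backups can exceed the default 30-second timeout
            cmd.ExecuteNonQuery();
        }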