I have a long-running Redgate-generated script that applies a bunch of schema changes to a database, and it is going to take 3 hours to run. The script will be run against a production database that has mirroring and transaction log shipping in place.
My specific question is: how is transaction log shipping going to be affected by a huge Redgate-generated script? It's configured as follows:

- backed up every 15 minutes
- backed up to a local drive
- shipped to a drive on the DR server
- applied every 30 mins
- kept for 60 mins
Will it still ship the changes incrementally, or, if Redgate wraps everything in one transaction, will nothing be shipped until it completes?
My concerns are:

1. that the long-running script won't be affected by the transaction log shipping (given it's going to span several backups), and
2. whether the changes will be shipped incrementally or as one big dump, as I thought Redgate typically used a single transaction so that if the script fails it rolls everything back.

I know the log file grows by a total of about 80 gig, so I'm trying to ensure there is enough room for the transaction log shipping to store whatever it needs to store.
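For what it's worth, this is the sort of check I'm planning to run while the script executes to keep an eye on log growth. It's only a rough sketch, and the database name is just a placeholder:

    -- Rough check of transaction log usage while the Redgate script runs.
    -- Run against the database being upgraded (name below is a placeholder).
    USE MyProductionDb;
    GO

    -- Current log size and how much of it is in use.
    SELECT
        total_log_size_in_bytes / 1024 / 1024 AS log_size_mb,
        used_log_space_in_bytes / 1024 / 1024 AS used_log_mb,
        used_log_space_in_percent
    FROM sys.dm_db_log_space_usage;

    -- What is preventing log truncation right now
    -- (e.g. LOG_BACKUP, ACTIVE_TRANSACTION).
    SELECT name, log_reuse_wait_desc
    FROM sys.databases
    WHERE name = DB_NAME();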
Thanks!
OK, so I made it through my upgrade (yay!) and discovered that it didn't ship the entire thing as one big chunk. From their DBA I got this information:
"It doesn't do it as one big chunk... you'll just have bigger TRN files as you go along. The more often you take TRN backups, ship them, and apply them, the smaller you can keep it. However, taking backups obviously requires CPU + I/O, so you don't want to run it continuously."
So whilst I thought the log file would grow to 90g and it would then try to ship some kind of 90g file across, it didn't. It just incrementally filled up the transaction log shipping folder, and the 60g of space it had was sufficient for the upgrade :)
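If anyone wants to verify this on their own system, a query against msdb along these lines shows the individual log backups getting larger as the script runs, rather than one huge backup at the end. Again just a sketch, with a placeholder database name:

    -- Size of each transaction log backup taken during the run,
    -- to confirm the changes ship as a series of larger TRN files.
    SELECT
        bs.database_name,
        bs.backup_start_date,
        bs.backup_finish_date,
        bs.backup_size / 1024 / 1024 AS backup_size_mb
    FROM msdb.dbo.backupset AS bs
    WHERE bs.database_name = 'MyProductionDb'               -- placeholder name
      AND bs.type = 'L'                                     -- log backups only
      AND bs.backup_start_date >= DATEADD(HOUR, -4, GETDATE())
    ORDER BY bs.backup_start_date;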