Tags: sql-server, database, sql-server-2008, visual-studio-2010, sql-server-data-tools

Can I ALTER an existing table schema using a post-deployment script?


Using

  • SQL Server 2008 (not R2)
  • Visual Studio 2012 Premium
  • SQL Server Database Project/SQL Server Data Tools (SSDT)

Are there any problems/potential issues with altering a table schema using a post-deployment script?

My company has a web application (App A) whose backend database contains tables that are replicated to the database of another company application (App B) using CDC replication. When the schema of these tables changes, SSDT uses DROP/CREATE when generating the deployment script. This is a problem for App B's database, which uses CDC replication on these tables: when a table is dropped and recreated, App B's CT_[table_name] change tables are dropped as well, bringing App B down. My proposed solution is to use a post-deployment script to ALTER these tables instead of letting SSDT generate DROP/CREATE statements. Are there any potential problems or issues with this approach?

I could really use some help.


Solution

  • You could conceivably handle such table changes using a Post-Deployment script if you were to exclude those tables from the SSDT project model. This can be achieved in either of the following ways:

    • For each of the table files involved in CDC replication, set the Build Action property to None
    • Or simply remove the affected table files from the project altogether

    This would prevent SSDT from attempting to perform any actions on that table at all, so you wouldn't have to worry about the comparison engine producing scripts that break your CDC instances.

    Naturally, this would mean that any objects that depend on the excluded table objects (such as procs or views) would also need to be moved to Post-Deployment scripts. This would reduce the traceability of the database, as the excluded tables would no longer have per-file history stored in source control.

    Even if a solution can be found that doesn't result in these drawbacks, for example using a pre-compare script per Ed's excellent blog post, there is the issue of deployment atomicity to consider. If part of the deployment occurs in the script that SSDT generates, and another part occurs in a Post-Deployment script, then it's possible for an error to occur that leaves the database in a half-deployed state. This is because SSDT only uses a transaction for the parts of the deployment that it is responsible for; anything included in a Post-Deployment script will be executed after the initial transaction is committed.
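    Since anything in a Post-Deployment script runs after SSDT's own transaction has committed, one partial mitigation is to give the script's changes a transaction of their own, so that an error rolls back the whole batch rather than leaving it half-applied. A minimal sketch (the table and column names here are hypothetical, not from the question):

    ```sql
    -- Run the post-deployment table changes atomically.
    -- XACT_ABORT ensures any runtime error rolls back the open transaction.
    SET XACT_ABORT ON;

    BEGIN TRANSACTION;

        -- Hypothetical schema changes to a CDC-tracked table
        ALTER TABLE dbo.Orders ADD TrackingCode NVARCHAR(32) NULL;
        ALTER TABLE dbo.Orders ADD ShippedDate DATETIME NULL;

    COMMIT TRANSACTION;
    ```

    This only protects the statements inside the script itself; it cannot re-join them to the transaction SSDT already committed.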

    This means your Post-Deployment script needs to be written in an idempotent way, so that it can be re-executed if something goes wrong (sorry if that is a bit of an obvious statement... it just always seems like a good point to make whenever post-deploy scripts are mentioned!).
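    An idempotent change can be sketched by guarding each ALTER with an existence check, so re-running the script after a partial failure is harmless (again, dbo.Orders and TrackingCode are hypothetical names for illustration):

    ```sql
    -- Only add the column if it doesn't already exist,
    -- so the script can safely be run more than once.
    IF COL_LENGTH(N'dbo.Orders', N'TrackingCode') IS NULL
    BEGIN
        ALTER TABLE dbo.Orders ADD TrackingCode NVARCHAR(32) NULL;
    END
    ```

    The same pattern (an `IF EXISTS`/`IF NOT EXISTS` check against the system catalog before each DDL statement) applies to indexes, constraints, and dependent objects moved into the script.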

    If a higher degree of control over the way your table changes are deployed is desired, without the potential loss of traceability or deployment atomicity, then may I suggest considering a migrations-driven deployment tool like ReadyRoll (disclaimer: I work for Redgate Software). ReadyRoll is a project sub-type of SSDT but uses quite a different deployment style from SSDT: instead of waiting until deployment to find out that a table will be dropped and recreated, the migration script is produced at development time, allowing changes to the sync operations to be made before committing it to source control.

    For more information about how SSDT and ReadyRoll compare, have a look at the ReadyRoll FAQ: http://www.red-gate.com/library/readyroll-frequently-asked-questions-faq