DB Deployments Part 1: Configure TeamCity
The first piece of the database DevOps puzzle is automating the build. TeamCity is a build server that developers use to compile and test their code. It consists of a central server and a number of agents. Agents are secondary machines that pull code from source control, run tests, compile binaries, and assemble deployment packages. Depending on your environment you may only need one machine. Our deployments are currently infrequent enough (mostly because they are not needed often, but also because they are manual, so we avoid them unless absolutely necessary) that we only need one agent, and it runs on the same machine as the central server. If we get busier and notice that builds are stacking up in a queue, we can add more machines to install agents on that will take on that work as needed.
You will need a service account for the build server and agents. You can create specific users or possibly even a single gMSA to apply to the appropriate services1. Installation is relatively simple: download the installation files from JetBrains. Do that and then come back.
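If you do go the gMSA route, reassigning a service after installation is a one-line configuration change per service. A sketch (the service and account names below are hypothetical; note the required space after `obj=` and the empty password, which tells Windows to retrieve the gMSA's managed password):

```
sc.exe config "TeamCity" obj= "CONTOSO\tcbuild$" password= ""
sc.exe config "TCBuildAgent" obj= "CONTOSO\tcbuild$" password= ""
```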
Ready? The first thing that needs setting up is authentication: we enabled Microsoft Windows domain authentication as well as the token-based authentication module (we like PowerShell on our team). We also kept the built-in authentication (we have admin user credentials stored in a secret server in case we need them). Once that’s sorted you can create some groups to manage access and permissions, then start adding the users that will access the build server. In our case that was the DBA team, the BI team, and our manager: all folks who will want or need to either create new projects or just follow up on the results of builds. You’ll also want to configure the mail notifier so that users can be notified when their builds fail (eventually watching builds progress will become less exciting, or at least that’s what they tell me). Lastly, you’ll want to visit the plugins page and click the “Browse plugins repository” button. This lets you browse for and install both the Octopus Deploy Integration and the Redgate SQL Change Automation plugins. We’ll be making use of both of these in our builds.
You’ll want to give some thought to how you are going to organize the space. I chose to create team folders in the root directory for the DBA and BI teams to hold our respective projects, since there was a non-zero chance we’d end up with configurations different enough to be worth separating. Thankfully, when 90% of your configurations are the same, it is very easy to set up a template that can be rolled out and configured for each project. I only needed a single template in the parent directory, DBA Projects, to create the three distinct builds we wanted for our process:
- builds based on changes to develop,
- builds based on changes to master,
- and builds based on pull requests (from develop into master).
The template consists of five steps and a trigger for running them:
- Generate Build Number: Our build processes all take place in SSMS, and build numbers and other artifacts that might be generated by Visual Studio or other IDEs are a mystery to me. We decided that our build numbers would be meaningful in as much as they would tell you when the code was built (for example, 2020.01.21.1404 was a package built at 2:04 PM on January 21, 2020). The first step runs a simple PowerShell script that sets the build number that will be used by the rest of the TeamCity process.
- Build Database:2 This step uses Redgate’s SQL Change Automation to ensure the database is deployable as written. A temporary database is created from scratch by the TeamCity agent. The agent’s service account must have the dbcreator role (at a minimum), and if your database contains encrypted objects the service account will need sysadmin so that SQL Compare can compare the encrypted objects to your source code. If you are using DLM Dashboard you can include that information here and TeamCity will ensure it is updated. We’ve configured this step to point to our development instance and ensured that our unit testing objects are not considered part of the deployment by adding the SQL Compare option “IgnoretSQLt.” If any of our code doesn’t compile this step will fail and the person committing the code will be notified (based on our configured notification rules).
- Run tSQLt: Honestly, the best part of automating all of this was that we could roll out tSQLt unit tests (written via the SQL Test plugin for SSMS) to provide extra validation of our code. This step builds the database again (this time from the NuGet package produced by the previous step) and then runs all the unit tests we’ve included in our source code. Again, we’ve configured it to use a real SQL Server rather than LocalDB.
- Push Package to Octopus Deploy: If the previous steps were successful, TeamCity will toss the package over to Octopus Deploy. Octopus Deploy needs to be installed by this point, as you will need to generate an API key for TeamCity to communicate with it. We’ve configured this step to pass the package over and overwrite any existing packages on the Octopus side.
- Deploy Release: This automates the release creation on the Octopus Deploy server. For this to work, the project name in Octopus must be the same as the one in TeamCity, as the step passes its own project name over (along with our custom build number) to identify the project receiving the release. By default the template specifies “DBA Dev” as the Octopus environment for the release. This safely enables some automation on the Octopus side (our template won’t ever accidentally deploy code to production).
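To give a feel for that first step: TeamCity reads the build number from a service message printed to stdout, so the whole script is only a couple of lines. Ours is PowerShell; a shell sketch of the same idea:

```shell
# Generate a timestamp-based build number (e.g. 2020.01.21.1404)
# and hand it to TeamCity via a service message on stdout.
build_number=$(date +"%Y.%m.%d.%H%M")
echo "##teamcity[buildNumber '${build_number}']"
```

TeamCity scans each step's output for `##teamcity[...]` service messages, so no API call is needed; the rest of the build chain simply picks up the new number.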
Lastly, the template’s trigger is configured to monitor the project’s VCS (version control system) root for changes that would indicate we should test a build. Ours is set to trigger on each check-in, and to include several check-ins in one build if they are from the same person. Additionally, the default for the trigger in our template only looks at the develop branch and specifically ignores master.
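In TeamCity’s branch filter syntax, that trigger configuration amounts to something like the following (a sketch matching our develop/master setup):

```
+:develop
-:master
```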
Whew! That feels like a lot for just one article. The next post will cover the Octopus Deploy install and environment setup so the two can work in tandem.
The other two posts in this series are here:
I believe a gMSA would be best here, but your mileage may vary. Likely you’ll have to deploy with a traditional service account and then come back and update the service to use the MSA/gMSA. For our use case permissions are pretty binary (either the service(s) will have permissions or they won’t)3. ↩︎
Best practice is to deploy to a big boy SQL Server that matches the edition of SQL Server that will be deployed to in production. While you can use LocalDB, your builds could fail if your database code includes objects not supported by LocalDB, or worse, if the version of LocalDB doesn’t match your actual environment. ↩︎
Also please note that much of what follows is the path we took after some trial and error and may not be the single most efficient implementation. We’re all human and we’re doing our best: your mileage may vary! ↩︎