Nasstar streamlines the pipeline (in AWS)
The concept of a continuous integration and delivery (CI/CD) pipeline is not a new one; for many, it is the cornerstone of their DevOps transformation, the easiest thing to point at and say “We do DevOps”. It's also one of the most noticeable lifestyle changes for the whole team once it has been implemented.
Certainly, for the Nasstar AWS DevOps team, it's one of our proudest accomplishments, enabling us to:
- orchestrate our AWS CloudFormation deployments, automatically comparing the templates in our Git repositories with what is deployed in the AWS accounts (sketched after this list)
- deploy our dev environments differently from our production environments, thanks to environment-aware configuration
- run automated tests to confirm that deployments happened when we needed them to
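To give a flavour of the first point, here is a minimal sketch (Python with boto3, not our actual orchestration tool) of comparing the CloudFormation held in Git with what is live in an account: create a change set from the template in the repo and inspect the proposed changes before deciding whether to execute them. The stack names, file paths and parameters are illustrative.

```python
import boto3

cfn = boto3.client("cloudformation")

def preview_changes(stack_name: str, template_path: str, parameters: list) -> list:
    """Create a change set for an existing stack and return the proposed changes."""
    with open(template_path) as f:
        template_body = f.read()

    change_set_name = f"{stack_name}-git-diff"
    cfn.create_change_set(
        StackName=stack_name,
        TemplateBody=template_body,
        Parameters=parameters,
        ChangeSetName=change_set_name,
        Capabilities=["CAPABILITY_NAMED_IAM"],
    )

    # Wait for AWS to finish calculating the change set. Note: if the template in
    # Git matches what is deployed, the change set fails with "no changes", which
    # a real tool would catch and report as "in sync".
    waiter = cfn.get_waiter("change_set_create_complete")
    waiter.wait(ChangeSetName=change_set_name, StackName=stack_name)

    # Each entry describes a resource that would be added, modified or removed.
    result = cfn.describe_change_set(ChangeSetName=change_set_name, StackName=stack_name)
    return result["Changes"]
```

A full orchestration run would do something like this for every stack in the repository, then execute or discard the change sets depending on the result.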
The benefits of the pipeline were felt almost immediately once the first project was implemented: no more fiddling around with individual AWS CloudFormation templates, no more questioning which code version is deployed in each environment, and no more worrying about whether running the latest code will overwrite something your colleague did without telling you. Given that this particular project had over a hundred stacks for each of its six environments, I can’t imagine what it would have been like without the pipeline being set up up front. But being the good little DevOps team that we are, we knew we could always do better!
Initial improvements
Over the course of our next few major projects, the pipeline was developed and improved in many ways, and it felt like we were making real progress:
- taking the AWS CloudFormation orchestration tool in-house, forking it from the original public repo to allow us to add the features we cared about most
- migrating from Jenkins to a combination of AWS CodePipeline and CodeBuild to reduce costs and minimise server management overhead
- adding customisable modules, support for the AWS Serverless Application Model, GitHub and AWS CodeCommit merge triggers (illustrated below), and all kinds of other features
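As a rough illustration of how a merge trigger can work (not our exact implementation), the sketch below shows a small Lambda function that starts a CodePipeline execution when a CodeCommit pull request is merged. It assumes an EventBridge rule forwards pull request state change events to the function; the event detail fields and pipeline names used here are assumptions.

```python
import boto3

codepipeline = boto3.client("codepipeline")

# Hypothetical mapping of target branches to pipelines; adjust to your own naming.
PIPELINES = {
    "refs/heads/develop": "my-project-dev-pipeline",
    "refs/heads/main": "my-project-prod-pipeline",
}

def handler(event, context):
    detail = event.get("detail", {})

    # Only act when the pull request has actually been merged (field names assumed
    # from the CodeCommit pull request state change event shape).
    if detail.get("isMerged") != "True":
        return

    pipeline = PIPELINES.get(detail.get("destinationReference"))
    if pipeline:
        codepipeline.start_pipeline_execution(name=pipeline)
```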
The challenges
Eventually, we realised that because every project had iterated on and customised the pipeline, each had a different version of it. The core mechanics were the same, but the practicalities of triggering a deployment or making changes were different each time. When these projects went into support, we had to explain to everyone who hadn’t been working on them what the differences were this time and write all-new support documentation.
This meant every project involved a new learning curve and days of handover effort before it became yet another snowflake, permanently increasing the number of different ways of working to be supported. On top of that, each project had to commit effort to harvesting the latest code, stripping out project-specific elements and adding in the new project’s requirements, meaning that only the very largest projects could afford to implement our CI/CD. We were still sending projects into support that required manual CloudFormation updates, weren’t guaranteed to be in sync with Git and had all the other old problems from the pre-DevOps world.
Our solution: the standard pipeline
What we needed was a standardised pipeline: a series of scripts, Confluence pages and examples that would allow anyone in the practice to create and deploy a brand-new CI/CD shell into an empty account in a couple of hours rather than a couple of weeks. This was primarily a documentation process. It involved harvesting the best parts of our latest projects’ CloudFormation and scripts, placing them together in a central, accessible Git repository, then documenting how to create each part of the pipeline.
The idea is that when the Git repo for a new project is created, it is initially populated with a copy of the code from the central “standard pipeline repository”, which includes a framework and a basic series of scripts. These pre-created scripts create the initial CodePipeline jobs in the accounts, and the pipelines then build everything else. For the simplest projects, it's then just a case of adding CloudFormation templates and their associated parameter files, merging them into Git and letting the Git hooks trigger the pipeline and build the resources. Additional documentation is available to guide the more complex projects through creating complex modules, multi-environment token replacement and any other features that have been used in previous projects.
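As a hedged sketch of what that bootstrap step might look like (the template path, parameter file layout and stack naming here are illustrative, not the actual standard pipeline repository layout), the script below deploys a seed CloudFormation stack that creates the initial CodePipeline resources for a given environment:

```python
import json
import sys

import boto3

cfn = boto3.client("cloudformation")

def bootstrap(environment: str) -> None:
    """Deploy the seed stack that creates the initial CodePipeline for one environment."""
    # Assumed layout: pipeline/pipeline.yaml plus a per-environment parameter file
    # containing the usual list of ParameterKey/ParameterValue pairs.
    with open("pipeline/pipeline.yaml") as f:
        template_body = f.read()
    with open(f"pipeline/params/{environment}.json") as f:
        parameters = json.load(f)

    stack_name = f"cicd-pipeline-{environment}"
    cfn.create_stack(
        StackName=stack_name,
        TemplateBody=template_body,
        Parameters=parameters,
        Capabilities=["CAPABILITY_NAMED_IAM"],
    )
    cfn.get_waiter("stack_create_complete").wait(StackName=stack_name)
    # From here on, everything else is built by the pipeline itself once the
    # project's own templates and parameter files are merged into Git.

if __name__ == "__main__":
    bootstrap(sys.argv[1])  # e.g. python bootstrap.py dev
```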
In this way, a single source of truth can be maintained for all pipelines, and the simplest projects can have a pipeline with minimal effort, whilst the larger projects can continue to customise locally as necessary from a common base, minimising pipeline feature creep.
The standard pipeline will allow even the smallest projects to access CI/CD (including retrofitting legacy projects that originally missed out). It will also reduce the complexity of new projects coming into support by maintaining a consistent deployment mechanism going forward. Finally, even our largest AWS projects will be more efficient, as the time taken to deploy the CI/CD will be dramatically reduced, letting developers focus on the important aspects of AWS infrastructure development, increasing quality and reducing the cost to the customer without sacrificing long-term supportability.
The standard pipeline will continue to be an area of investment and innovation to ensure we continue to reap the benefits of CI/CD as we move forward.