Updated: Jan 28
Continuous Integration and its younger cousin, Continuous Delivery, are the bread and butter of any modern software development process. Here are the basics of what you should know about them.
"None of us, including me, ever do great things. But we can all do small things, with great love, and together we can do something wonderful." - Mother Teresa
The source code that makes up any software typically resides in a source control system such as Subversion or Git. A source control system enables developers to check out the code, make changes to it, and submit their changes when they are ready.
Continuous Integration is a process by which changes to code are committed as often as possible (hence the term Continuous). Following this practice has the side effect of encouraging work items to be broken down into small, independently testable and verifiable changes, an approach that is Agile in nature.
Continuous Integration also encourages frequent testing of work. The smaller the change is, the easier it is to review and to test. It is also easier to write complete tests for code that is focused on a small change. All of these factors reduce the likelihood of bugs making their way into production.
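To illustrate why a small change is easy to test completely, consider a hypothetical one-function change and its focused unit test (the function and test names are illustrative, not from any particular codebase):

```python
import unittest

def apply_discount(price: float, percent: float) -> float:
    """The small, focused change under test (a hypothetical example)."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTest(unittest.TestCase):
    """Complete tests are easy to write because the change is small."""

    def test_typical_discount(self):
        self.assertEqual(apply_discount(100.0, 25), 75.0)

    def test_invalid_percent_is_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)
```

Running `python -m unittest` locally, and again on the CI server, executes the same checks on every commit, which is what keeps bugs from reaching production.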
The use of a distributed source control system such as Git is more prevalent than a centralised one such as Subversion, due to the numerous advantages it offers. In a distributed source control system, developers can commit their changes into their own local branch before pushing their branch onto the server for build and code review, after which their branch will be merged into the main branch. This encourages more frequent commits locally before the work is cleaned up (or 're-based') and pushed.
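The local-branch workflow described above can be sketched with a few Git commands (branch and file names are illustrative; `git init -b` requires Git 2.28 or later):

```shell
# Hypothetical feature-branch workflow in a throwaway local repository
set -e
REPO=$(mktemp -d)
git init -q -b main "$REPO"
cd "$REPO"
git config user.email "dev@example.com"
git config user.name "Dev"

echo "v1" > app.txt
git add app.txt
git commit -qm "Initial commit on main"

# Work happens on a short-lived local branch with small, frequent commits
git checkout -qb feature/small-change
echo "v2" >> app.txt
git commit -qam "Small, independently testable change"

# Replay the branch on the latest main before publishing it for build and review
git rebase -q main
# git push origin feature/small-change   # requires a remote; shown for illustration
```

After the push, the server-side build and code review run against the branch before it is merged into main.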
Regardless of which source control system you use, a general workflow is required where committed code is:

1. Built
2. Tested
3. Reviewed
4. Approved

These four stages must be complete before the committed code becomes part of the main codebase (the Integration aspect of CI) and then released into production, unless of course the change was rejected for some reason.
Stages 1 and 2 are always automated. Stage 2 here refers to the execution of unit tests, not QA testing. Sometimes this is part of the build process itself, i.e. the build fails if one or more tests fail.
A CI server such as Jenkins or CircleCI (among others) is used to build the committed code, execute the test harness, and optionally generate one or more artifacts. For example, for a Java application, the artifact is typically a JAR or WAR file.
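As an illustration, a pipeline definition for such a CI server might look like the following. This is a generic sketch, not the exact Jenkins or CircleCI syntax, and the Maven commands assume a Java project:

```yaml
# Hypothetical pipeline definition: build, unit-test, and archive an artifact
pipeline:
  build:
    run: mvn compile          # stage 1: fail fast if the code does not compile
  test:
    run: mvn test             # stage 2: unit tests, not QA testing
  package:
    run: mvn package          # produce the deployable artifact
    artifacts:
      - target/*.war          # e.g. a WAR file for a Java web application
```

Each stage runs only if the previous one succeeded, so a compilation error or failing test stops the pipeline before any artifact is produced.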
If the code fails to compile or if there are failing tests, the CI server indicates this, and it is the responsibility of the developer who committed the code to fix it. The developer, of course, should not rely on the CI server to identify code that does not compile or fails its tests; these issues should be caught locally via a regular sync / build / test cycle. The less friction a developer experiences in refreshing their local branch and running builds and tests, the more often they are likely to do so, identifying issues early in the development cycle.
If the build is successful on the CI server, the code is then reviewed by one or more developers. If the review results in an approval, the branch containing the new code is then merged into the main branch. A build is then generated from the main branch and the artifact generated by this build is ready to be deployed.
When a reliable CI workflow is in place, this can be extended by automating the deployment of the changes that have been integrated. This is where Continuous Delivery comes into the picture.
A robust Continuous Integration process would mean that code that has gone through all the stages described earlier is reliable enough to be automatically deployed. The Continuous Delivery process can introduce further robustness by having different stages of deployment.
Therefore, code that has just been through the CI process would typically be deployed to a test / QA environment where further testing can be carried out by testers or QA personnel. Deployment to an environment can be as simple as clicking a button, and can be triggered once the code review is complete and the commit is approved (stage 4 mentioned earlier).
Once testing is complete, the deployment can be promoted to a production environment, again by clicking a button. With cloud infrastructures becoming the norm, there are now a number of sophisticated ways in which production deployments can be configured. For example, where you have a number of servers hosting an application, you can use a rolling deployment where the change is deployed to one or two servers at a time. Eventually, all production servers will have the new version of the code. This is favoured in situations where minimal downtime is required. A robust deployment strategy also includes a reliable rollback mechanism in case there are any issues with the deployment.
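The rolling strategy with rollback described above can be sketched as a simple loop. This is a simplified model: `deploy_to` and `health_check` are hypothetical stand-ins for real infrastructure calls such as a cloud provider's API.

```python
def rolling_deploy(servers, old_version, new_version,
                   deploy_to, health_check, batch_size=2):
    """Deploy new_version to servers one batch at a time; roll back on failure."""
    deployed = []
    for i in range(0, len(servers), batch_size):
        batch = servers[i:i + batch_size]
        for server in batch:
            deploy_to(server, new_version)
            deployed.append(server)
        if not all(health_check(s) for s in batch):
            # Reliable rollback: restore the old version on every server touched
            for server in deployed:
                deploy_to(server, old_version)
            return False
    return True
```

Because only one or two servers are out of rotation at a time, the application keeps serving traffic throughout the deployment, which is why this strategy suits situations where minimal downtime is required.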
In a serverless deployment, traffic can be split so that some requests are directed to the new version and some to the old. The split can then be gradually shifted until 100% of the traffic goes to the new version.
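Traffic splitting of this kind can be modelled as a weighted choice between the two versions. The sketch below is an assumption-laden simplification; serverless platforms implement weighted routing for you, and the `healthy` check stands in for real monitoring:

```python
import random

def route_request(new_weight: float) -> str:
    """Send a fraction new_weight (0.0 to 1.0) of traffic to the new version."""
    return "new" if random.random() < new_weight else "old"

def gradual_shift(steps=(0.1, 0.25, 0.5, 1.0), healthy=lambda: True) -> float:
    """Step up the new version's share of traffic, halting if health degrades."""
    weight = 0.0
    for step in steps:
        if not healthy():
            return weight  # stop shifting; remaining traffic stays on the old version
        weight = step
        # In practice each step is held for a soak period while metrics are watched
    return weight
```

If a health check fails partway through, the shift stops at the current weight and can be rolled back, so only a fraction of users ever saw the faulty version.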
The main motivation for Continuous Delivery is the reduction of time and effort in getting code to production. The more mature the CI / CD pipeline is, the quicker it is to get code into production. This enables high frequency releases.
Netflix is one of many organisations that champion the idea of releasing changes multiple times a day. Spinnaker is a Cloud-Native Continuous Delivery platform originally developed by Netflix and is now used by a number of major organisations.
Today, the CD process is becoming so integrated into the development cycle that developers are now expected to be able to deploy their own changes. When every stage of the CI / CD flow is automated, i.e. manual code reviews or manual testing gates are not required, this is referred to by another term: Continuous Deployment. Full automation is only recommended for teams or organisations that have a well-planned and highly mature CI / CD pipeline.
A mature CI / CD pipeline is the backbone of a highly efficient software delivery process and will continue to help many organisations keep up with the demands of quickly evolving software requirements.