Continuous Delivery with Jenkins

At Made we’re keen for our deployments to run as safely, quickly and easily as possible. When you’ve worked on a new feature for a project, you want to get it into the hands of the client and end user at the earliest opportunity.

To do this we use a couple of practices: Continuous Integration and Continuous Delivery.

Continuous Integration

‘Commit early and often’ is something you hear preached a lot at Made. The smaller the chunks of work you develop during the day, the more often you can stay in sync with your git repository, without the increased risk of painful merge headaches.

Each time code is pushed into the repository, automated tests are run so that if they fail, you’re aware of any errors at the earliest opportunity. This lets you act on them fast. If errors happen (and they do), working through smaller changesets means you’ll be able to resolve an issue more quickly.

Continuous Delivery

The general idea is that all code pushed into the repository should be in a state that can be deployed at any time. Knowing a feature is ready normally means having a test to ensure it works as expected.

If a feature isn’t quite ready then it should be hidden from the end user and/or placed behind a feature switch. This is known as dark launching.
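
As a minimal sketch of the idea (the flag name and the environment-variable mechanism are assumptions, not something our projects prescribe), a feature switch can be as simple as a per-environment setting that the code checks before exposing the feature:

    # Hypothetical feature switch driven by an environment variable.
    # FEATURE_NEW_SIGNUP is an assumed flag name; each environment sets its own value.
    if [ "${FEATURE_NEW_SIGNUP:-off}" = "on" ]; then
      echo "New signup flow is visible in this environment"
    else
      echo "New signup flow stays dark: deployed, but hidden from end users"
    fi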

Once all of the tests are passing for the codebase, it should be deployed to an environment automatically, in a single easy step. Making deployments repeatable keeps them quick and easy, and it lets you iron out any issues with your deployment strategy nearer the start of a project.

Continuous Jenkins

Jenkins is the glue that allows us to bring both of these practices together.

The workflow we use at Made resembles a pipeline. The general flow is as follows:

  • Developers work on a feature in their local development environments. Once it works with a passing test, the changes are pushed to the repository. We use git, so this is all done on the master branch. Be sure to treat Jenkins as the gatekeeper for this process: only ever push into the master branch, and let Jenkins do the rest (see the sketch after this list).
  • Jenkins continuously polls our repository for changes on the master branch. When it detects new changes, it pulls down the code and runs a number of tools to ensure the codebase is stable and in a condition fit for deployment. Those tools run tests, check code standards, perform linting and so on.
  • Upon a successful build, we automatically deploy to an environment where all of the dev team can view the build. We call this the continuous environment.
  • Once we’re happy with the continuous environment we manually hit a button that deploys the code to our staging environment. This is where we let the client take a look at our work, and they can then show it to any of their internal stakeholders.
  • When the client is happy, one more manual button click sends it into the production environment for the end user.
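
As a rough sketch of the developer side of that first step (the test command is an assumption; substitute your project’s own test runner), the day-to-day loop looks like this:

    # Stay in sync with master and only push once the tests pass locally.
    # Jenkins polls master and takes it from there.
    git pull --rebase origin master
    ./script/test                  # assumed test command for the project
    git push origin master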

Setup

To build this flow in Jenkins we use the Build Pipeline Plugin.

We create a number of Jenkins jobs that represent each stage. For an example project named falcon, we create the following jobs:

Job name: falcon-build
Git branch to poll: master
Build steps: Uses a shell command to run our tests and code standards tools
Post build actions: Automatically build falcon-continuous-deploy job on success
Git push into branch: stable
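
As a sketch, the shell build step for falcon-build might look something like the following (the script names are assumptions; substitute your own test and code standards commands):

    # Hypothetical shell build step for the falcon-build job.
    set -e           # any failing command fails the Jenkins build
    ./script/test    # assumed: runs the automated test suite
    ./script/lint    # assumed: runs code standards and linting checks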

Job name: falcon-continuous-deploy
Git branch to poll: stable
Build steps: Runs a deploy script to our continuous server
Post build actions: Manual build step for falcon-staging-deploy job
Git push into branch: continuous
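
The deploy script itself can stay the same for every environment. As a sketch (the host naming, paths and restart hook are all assumptions), a single script parameterised by environment keeps deploys repeatable:

    #!/bin/sh
    # Hypothetical deploy script shared by the continuous, staging and
    # production jobs. Usage: ./script/deploy continuous
    set -e
    ENVIRONMENT="$1"
    HOST="falcon-${ENVIRONMENT}.example.com"   # assumed host naming convention
    # Ship the code Jenkins has just tested to the target server and restart the app.
    rsync -az --delete --exclude='.git' ./ "deploy@${HOST}:/var/www/falcon/"
    ssh "deploy@${HOST}" "cd /var/www/falcon && ./script/restart"   # assumed restart hook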

Job name: falcon-staging-deploy
Git branch to poll: continuous
Build steps: Runs a deploy script to our staging server
Post build actions: Manual build step for falcon-production-deploy job
Git push into branch: staging

Job name: falcon-production-deploy
Git branch to poll: staging
Build steps: Runs a deploy script to our production server
Git push into branch: production
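
The ‘Git push into branch’ steps are what move a tested commit along the pipeline. Jenkins handles this for us as a post-build action, but in plain git terms each one is roughly equivalent to:

    # After falcon-build succeeds, the commit it has just tested is pushed to stable:
    git push origin HEAD:stable
    # Each later job promotes the commit again as its stage succeeds:
    # stable -> continuous -> staging -> production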

Once all of these Jenkins jobs are created for the steps in the flow, we need to create the pipeline view. From the Jenkins dashboard (where you’ll see all of the jobs you’ve just created), click on the + icon, select ‘Build Pipeline View’, and give it the name falcon.

For the initial job, select falcon-build. This is the first step in the pipeline. The plugin will then automatically work out the rest of the pipeline based on the post-build steps, both automatic ones (as used for continuous) and manual ones (for staging and production). It’s a nice idea to set the number of displayed builds to 10 or more so that you get a short historical view of the flow.

Once that’s all done, you’ll have a nice pipeline setup in Jenkins! When you next push some code up, you should see the falcon-build job run automatically. Upon passing, falcon-continuous-deploy will then run. At this point, it requires manual intervention to deploy to the next step using the build button in the falcon-staging-deploy box. Same deal with production.

We really love using this workflow at Made, and have noticed big improvements in the speed of delivering our projects. If you need to convince someone higher up the chain, share the big benefits with them:

  • Reduce merge headaches
  • Find errors and resolve them quicker
  • Centralised location for deploys
  • Faster, repeatable deploys
  • Visibility and traceability of all deploys – you know who, when and what was deployed
  • Faster feedback from clients and end-users

About the Author

David Winter

Senior Technology Adviser at Made Tech

Code, coffee, cake and Eurovision. In no particular order.