In this guide, I will give you a quick but complete example of a development-to-production Continuous Deployment or Continuous Delivery (CD) pipeline. I want you to have a functioning pipeline that you can iteratively change to fit your project. The only reason I would call this “complex” is that it includes a fair number of simple functional pieces. Each piece is simple, yet together they cover most – if not all – of the pieces you might want to pursue, making the project complex by being comprehensive.

Many people get stuck on the need to automate every piece of deploying their automation system. When automating the creation and use of a CD pipeline, depending on your architecture, you will sooner or later run into a “chicken or egg” situation. Technology problems like that can generally be solved, but I’m not trying to solve all possible issues with a universal, completely automated CD pipeline. I am instead giving you a pipeline that:

  • builds your code
  • tests that build
  • creates the Development Infrastructure
  • deploys the application to the Development Environment
  • tests that application in that Development Environment
  • builds the Production Infrastructure, and
  • deploys the application to that Production Environment with tests.

There are manual “gates” in the process; those are included, but they can be removed easily to let the application traverse the pipeline unattended for a full Continuous Delivery pipeline. Along the way, Slack notifications are sent when things happen. I am presenting this as a quick way to get a working pipeline with most of the major integrations to CodePipeline, so that you can alter it incrementally into what works for you. It is in no way an ideal, perfect pipeline, but all the pieces work, and I’ve given you the pieces to tweak so you don’t have to go through all the headaches of making them work together before you can be productive. This will let you get over the hump of creating all the underlying pieces of an automated CD pipeline, so you can alter the pieces as you like while following a Continuous Delivery workflow and getting all the benefits Continuous Delivery provides.

There are a few things that are expected ahead of time.

First, clone the codepipeline-demo GitHub repo with “git clone”. Two files in the repo control the number of instances and the KeyPairs used for your instances: Templates/prod-stack-configuration.json and Templates/test-stack-configuration.json. You will want to update these files to reference your key names; the current values are prod_key and test_key, respectively. They don’t have to be different, but they must reference keys that exist in your account. Once you update these values, cd into the Templates directory of the codepipeline-demo project and run “zip -r ../”. Those two JSON files are what allow you to use the same CloudFormation template for Staging/Test and Production: they pass in the parameters specific to the environment the template is being run for. So different parameters, with the same template, build different environments. This way you can be very confident Staging is built the same as Production.
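For reference, a stack-configuration file for a CloudFormation deploy action generally has this shape. The parameter names below (KeyName, InstanceCount) and their values are illustrative guesses, not necessarily what the repo’s template declares; match them to the Parameters in the repo’s actual template:

```json
{
  "Parameters": {
    "KeyName": "prod_key",
    "InstanceCount": "1"
  }
}
```

Each environment gets its own copy of this file with environment-specific values, which is exactly how one template can build both Staging and Production.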

Second, create an S3 Bucket with versioning enabled and upload all the contents of the codepipeline-demo repo to that Bucket. All of this is intended to be created in the us-east-1 AWS region. The other part is the location where your application will exist. My sample is the basic example used by AWS in their own examples. You will want to go into GitHub and fork this repo for the walkthrough. You will need four pieces of information from your GitHub project/account.

GitHub User Name (GitHubUser):

GitHub Repo Name (GitHubRepo):

Branch name for that repo (GitHubBranch):

  • This is master by default.

GitHub Token (GitHubToken):

You will also need to create a Slack webhook.

With the above, you should have a GitHub repo plus the four pieces of information about it, a Slack webhook, and an S3 Bucket containing the files from the codepipeline-demo repo.

The reason for splitting where your files are stored is to have an example of both an S3 “Source” and a GitHub “Source” that your pipeline watches for changes; the pipeline will run if changes happen in either source. The S3 files are for building your infrastructure, and the GitHub repo is for changing your application. If you have separation of duties that aligns this way, this shows that the sources can be separate, independent repositories, so groups or people can work on and have access to only the one they need for their job.

Now we want to create your pipeline. The deployment-pipeline.yml file is the CloudFormation template that creates your pipeline, your Artifact Store, SNS topic, CodeBuild project, CodeDeploy application and group, and lambda Slack notifier.

Next, go to your AWS console, open CloudFormation, and click Create Stack. Choose “Upload a template to Amazon S3” and select the file deployment-pipeline.yml. Click Next, then fill out the empty boxes. The boxes with defaults are necessary for the template as it currently stands, but they can be changed later as you alter this to fit your needs. The empty boxes are either arbitrary names you would like to give those pieces, or the Slack webhook, S3 details, and GitHub details referenced above. Click Next, then Next again, check the box that says “I acknowledge that AWS CloudFormation might create IAM resources,” then click Create.
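As a rough sketch of what those boxes map to, the template’s Parameters section would declare entries along these lines. The four GitHub parameter names come from this guide; the default value and any Slack/S3 parameters are assumptions, and the exact declarations in deployment-pipeline.yml may differ:

```yaml
Parameters:
  GitHubUser:
    Type: String
  GitHubRepo:
    Type: String
  GitHubBranch:
    Type: String
    Default: master
  GitHubToken:
    Type: String
    NoEcho: true   # keeps the token out of console output
```

NoEcho is the usual way to keep a secret like GitHubToken from being displayed in the CloudFormation console.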

That will build your pipeline and kick off the first run: the initial build of your app, the build of your staging environment, testing of that environment, and a notification via Slack. Then it will hit a “Gate” and stop, waiting on your approval to move forward. You will also receive an email with a link for approving or denying the pipeline to move forward.

The gate can be removed from your pipeline to allow for a truly automated Continuous Delivery environment, but it is left in for the initial run because there are not enough tests to validate the environment to feel comfortable letting it go straight to production.

After that runs, go back to your AWS Console, go to CodePipeline, and choose the pipeline you created. It should look like the pictures below.

As you will see, once you approve the TestStage, it will move forward with deleting your TestStage infrastructure, and deploying your Production Infrastructure. This is included for a couple of reasons: one, to show you how to delete the infrastructure as part of a pipeline, and two, to save you cost by removing infrastructure that is no longer necessary.

If you don’t want the pipeline to delete the TestStage infrastructure, you can remove this portion below from the deployment-pipeline.yml.
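The original snippet is not reproduced here, but a stage of that shape would look roughly like the following hedged sketch. The stage and action names, the stack name, and the role reference are placeholders, not the repo’s actual values; what matters is the Manual-Approval action followed by a CloudFormation action with ActionMode DELETE_ONLY:

```yaml
# Hypothetical sketch of the delete stage; names in your
# deployment-pipeline.yml will differ.
- Name: DeleteTestStage
  Actions:
    - Name: ApproveDelete
      ActionTypeId:
        Category: Approval
        Owner: AWS
        Provider: Manual
        Version: "1"
      RunOrder: 1
    - Name: DeleteTestStack
      ActionTypeId:
        Category: Deploy
        Owner: AWS
        Provider: CloudFormation
        Version: "1"
      Configuration:
        ActionMode: DELETE_ONLY     # tears down the stack instead of updating it
        StackName: test-stack       # placeholder name
        RoleArn: !GetAtt CFNRole.Arn  # placeholder role reference
      RunOrder: 2
```

Removing the whole stage keeps the TestStage infrastructure running after promotion; removing only the Approval action makes the deletion automatic.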

To review, the pieces included are: a single CloudFormation template that builds both your Staging infrastructure and your Production infrastructure; CodeBuild to build your application; CodeDeploy to deploy that application to your environments; Slack notifications; a Lambda function for external validation of your application (albeit a basic test); Approval Gates; and deletion of infrastructure when it is no longer necessary.

Using only these same pieces, the Lambda test function can be repeated and expanded to run a massive number of tests and validations. CodeBuild can be expanded to run more unit tests and validation tests as part of the build, and you can add validation tests for the infrastructure itself into the CloudFormation template. Then you can remove the manual gates and have a completely automated pipeline that Continuously Delivers from build to Production.
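As a hedged sketch of what an expanded Lambda validator could look like: the event parsing follows CodePipeline’s standard job-worker contract, but the idea of passing the endpoint URL through UserParameters, and all names here, are assumptions for illustration, not the repo’s actual code:

```python
import urllib.request


def validate_status(status_code):
    """Pure validation rule, kept separate so it is easy to expand and unit test."""
    return status_code == 200


def check_endpoint(url, timeout=10):
    """Return True if the deployed application answers with HTTP 200."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return validate_status(resp.status)
    except Exception:
        # DNS failures, timeouts, and non-2xx responses all count as a failed check.
        return False


def handler(event, context):
    # boto3 ships with the Lambda runtime; imported here so the pure
    # helpers above stay importable and testable without it.
    import boto3

    codepipeline = boto3.client("codepipeline")
    job = event["CodePipeline.job"]
    # Assumption: the pipeline passes the URL to probe via UserParameters.
    url = job["data"]["actionConfiguration"]["configuration"]["UserParameters"]

    if check_endpoint(url):
        codepipeline.put_job_success_result(jobId=job["id"])
    else:
        codepipeline.put_job_failure_result(
            jobId=job["id"],
            failureDetails={"type": "JobFailed", "message": "Endpoint check failed"},
        )
```

More checks (response body content, latency thresholds, multiple endpoints) slot into check_endpoint without touching the CodePipeline plumbing.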

I know everyone has always wanted to do Java web development for a website about dogs dressed up in people’s clothing, and has wished for a Continuous Delivery pipeline for that website. I do hope this helps you overcome some of the pain of getting started, and that you can be productive sooner using Continuous Integration, Continuous Deployment, or Continuous Delivery practices with CodePipeline and the pieces described.

Gregory Cox is a CTO Architect at Sungard Availability Services who specializes in network architecture with a concentration in data centers, service providers, enterprise architecture, Cloud and infrastructure automation. In his 18 years in the industry, he has also held Network Architect and Technical Fellow roles at Interactive Data (ICE, NYSE), Acxiom, Comcast (AT&T Broadband).

