For developers who do not use Node.js, one of the pain points when creating a condensation project for the first time is installing Node.js, npm, Yeoman, and the condensation generator just to get started. Once this is done, little if any Node.js knowledge is needed to build and share particles, but the initial setup is off-putting to some.

We want to make condensation and the use of particles easy for everyone, no matter what language you use. Docker to the rescue. You may have noticed in recent posts that Sungard Availability Services (Sungard AS) is actively using Docker, and we believe containers are a great way to speed up development and deployment cycles. With that in mind we created condensation-docker, a tool to create, build and deploy condensation particles. The pattern is simple, can be reapplied to many projects, and eliminates the need for developers to touch Node.js at all. The only requirement is Docker, which now runs natively on Mac, Windows and Linux.
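
If you want to confirm that Docker is installed and the engine is reachable before you begin, a quick check with the standard Docker CLI is enough:

$ docker version    # prints client and server versions if the engine is running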

Let’s Begin

Starting a new condensation project is as simple as:

$ alias condensation="docker run -e AWS_ACCESS_KEY_ID -e AWS_SECRET_ACCESS_KEY -e AWS_SESSION_TOKEN -v \"$HOME\"/.aws/credentials:/home/condensation/.aws/credentials -v \`pwd\`:/particles --rm -it sungardas/condensation"

$ condensation create project particles-MYPROJECT
$ cd particles-MYPROJECT
$ condensation run-task build

Let’s break down what is happening.

Create an alias

First, the docker run command can get lengthy with all the options needed for environment variables and bind mounts. Let’s create an alias so we don’t have to type it out every time:

$ alias condensation="docker run -e AWS_ACCESS_KEY_ID -e AWS_SECRET_ACCESS_KEY -e AWS_SESSION_TOKEN -v \"$HOME\"/.aws/credentials:/home/condensation/.aws/credentials -v \`pwd\`:/particles --rm -it sungardas/condensation"

You can customize this command to fit your needs, but the breakdown is as follows:

  • condensation can now be used in the terminal as a shortcut for executing commands in a container
  • If any AWS authentication environment variables are set on the host, the -e options pass them through to the container (see the example after this list)
  • The first -v creates a bind mount for an AWS credentials file; if you don’t use one, you can remove this
  • The second -v bind mounts the current directory on the host to /particles inside the container, which allows the container to create and read files on the host
  • --rm removes the container when it has finished executing, keeping our host clean
  • -it runs the container interactively (so you can do things like Ctrl+C)
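
For example, if you authenticate with environment variables rather than a credentials file, export them on the host before invoking the alias and the -e flags will forward them into the container. The values below are placeholders, not real credentials:

$ export AWS_ACCESS_KEY_ID=AKIAEXAMPLE          # placeholder value
$ export AWS_SECRET_ACCESS_KEY=EXAMPLESECRET    # placeholder value
$ export AWS_SESSION_TOKEN=EXAMPLETOKEN         # placeholder; only needed for temporary credentials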

Start A New Particles Project

Now that our alias is set, we can start a new project by running the condensation container with the create command:

$ condensation create project particles-MYPROJECT

This is actually a wrapper around a Yeoman condensation generator, which creates a new particles project in the current directory of the developer’s host machine. Inside the container, the command being executed is: yo condensation:project particles-MYPROJECT
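
With the alias expanded, the same step looks like this. It is just the alias definition from above substituted in, shown only to make the wrapping explicit:

$ docker run -e AWS_ACCESS_KEY_ID -e AWS_SECRET_ACCESS_KEY -e AWS_SESSION_TOKEN -v "$HOME"/.aws/credentials:/home/condensation/.aws/credentials -v `pwd`:/particles --rm -it sungardas/condensation create project particles-MYPROJECT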

Now that the project has been created we can step into the particles project on the host:

$ cd particles-MYPROJECT

Build And Deploy

Condensation tasks are run with gulp. This can be confusing to developers who are not familiar with Node.js, but with Docker we simplify the commands. Any gulp task from a traditional condensation project can be executed with the run-task helper:

$ condensation run-task build
$ condensation run-task deploy

Inside the container, the commands being executed are /particles/node_modules/.bin/gulp condensation:build and /particles/node_modules/.bin/gulp condensation:deploy, respectively.
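
If you ever need to poke around inside the container itself, for example to inspect the generated templates or invoke gulp by hand, you can override the image’s entrypoint. This is a sketch, assuming the image ships a POSIX shell at /bin/sh:

$ docker run -v `pwd`:/particles --rm -it --entrypoint /bin/sh sungardas/condensation    # assumes /bin/sh exists in the image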

Why Docker?

Docker allows us to wrap all of the bootstrapping for condensation into a single image and keeps those dependencies isolated from the developer’s host machine. This means the only dependency for using condensation is the Docker engine. The condensation-docker image contains all of the Node.js dependencies necessary to build, ship and deploy condensation particles. Thanks to the bind mount, all particle templates are saved on the developer’s host machine while all of the execution happens within the container. The host machine stores the project files but never needs the dependencies required to compile and deploy them.
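
A nice side effect is that upgrading the whole toolchain is a single command; pulling a newer image replaces every bundled dependency at once:

$ docker pull sungardas/condensation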

Kevin is a member of the CTO Architecture Team, which is responsible for researching and evaluating innovative technologies and processes. He has a strong, extensive background in DevOps, specifically in developing tools for Operations Support Systems (OSS) within the communications and IT infrastructure industry.

Before arriving at Sungard Availability Services, Kevin spent 13 years at AT&T (by way of AT&T’s acquisition of USi) rising through the engineering ranks to become a Senior Technical Architect for the Hosting Division’s Tools Team.

Kevin holds a B.A. in Economics from the University of Maryland and a Master’s in Technology Management from the University of Maryland University College.

