In this post, I am going to show you how to quickly implement an open source version control system, capable of Continuous Integration, on AWS for your organization. I will also show you how to create it on your own system instead of AWS; this is trivial with Docker (docker-compose), and I will do it in a way that can easily be shared beyond a single person's machine. The version control system will have:
- Integrated CI/CD system
- Integrated change management
- Integrated change review
- Integrated notifications
- Integrated task management system for your work tracking/organization
- Integrated project planning
- Integrated collaboration system
- Single Sign On (SSO using Google OAuth2)
All of the bulleted items above are integrated together, but people generally interface with Git through a CLI, a GUI, or an IDE. The items above are organizational and automation tools that add value, insight, and control to what is in your version control system. It's the integration, and using them in concert, that has the (I hate to use the buzzword, but in this case it really fits) synergistic benefit that gives massive gains compared to not using these tools. You don't have to use all of these integrated pieces; I'm going to show you a few of them.

I'm not mentioning these things to convince you that it does everything and, being open source, will save you money. I mention them because they all help you create a good work product that is fast, minimizes waste, and reduces risk, with all of these components integrated. This gives you an experience with significantly less friction than working without such tools, or cobbling together many third-party tools tweaked to work with each other. The ultimate goal is to minimize friction in the process of writing good code, whether that is your software development code or the Infrastructure as Code that describes how your infrastructure and applications are deployed.

With this integrated, seamless environment, I feel you will have a much easier transition into an enterprise that can move fast with less risk. If you try it, like it, and need support or additional features, you can license support and features that are not available in the open source version, or use Gitlab's hosted cloud environment.
I’m going to show a few ways to get your own Gitlab site up and working on AWS. I will provide you with AWS CloudFormation templates to build it a couple of ways, and link to others. I will show an example project for how you use the CI/CD part of Gitlab. However, I won’t show the setup of the Gitlab Runner; the Runner is the component that runs the builds and CI/CD tasks on behalf of the main Gitlab server. The Runner requires that the Gitlab server use SSL certificates, and there are various ways to set that up, either with a Certificate Authority or with self-signed certs; that could be a post in and of itself. I highly recommend that you don’t attempt self-signed certs with CI/CD if it can be helped. The concern is that the Runner needs to talk to the server, and so do the containers spawned as part of the CI/CD process. They also need the certs, and keeping track of the keys and how they are validated across whatever containers (possibly various OS’s) you are using is significantly more trouble than the alternative. I like Let’s Encrypt to get keys that will easily work for a lab or testing. I personally like Ansible and, for those interested, have had success with the ansible-role-letsencrypt role.
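If you do go the Let’s Encrypt route later, the certificate request itself is short. A minimal sketch, assuming certbot is installed on the instance, port 80 is reachable from the internet, and gitlab.example.com is a placeholder for your actual Gitlab hostname:

```shell
# Placeholder; replace with the DNS name that points at your Gitlab instance.
DOMAIN=gitlab.example.com

# Request a certificate in standalone mode. certbot briefly binds port 80
# itself, so stop anything already listening there first.
sudo certbot certonly --standalone -d "$DOMAIN"

# On success, the certificate and key land under /etc/letsencrypt/live/<domain>/
ls /etc/letsencrypt/live/"$DOMAIN"/
```

You would then point the Gitlab configuration at the resulting fullchain and private key files.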
Four Ways to Install Gitlab on AWS:
- Use Gitlab’s AMI, which they maintain (link here for instructions). This is the newest version, and Gitlab is constantly adding new functionality, so something there may contradict what I’m showing here.
- A CloudFormation template that generates an A record in AWS Route53. The template assumes you already have a registered domain with AWS Route53. This template goes by cfn-gitlab-w-dns.json.
- A minimal CloudFormation template that does not generate an A record. This is the more stripped-down setup (no email, no OAuth, the AWS default DNS name, and no backup to AWS S3). It goes by cfn-gitlab.json and is the installation discussed throughout this article.
- Docker installation link
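For the Docker route, a minimal docker-compose sketch looks something like the following. The hostname, host ports, and volume directories are example values; see the linked Docker installation documentation for the full set of options:

```yaml
# Minimal example; gitlab.example.com and the host-side ports/paths are placeholders.
version: '2'
services:
  gitlab:
    image: 'gitlab/gitlab-ce:latest'
    hostname: 'gitlab.example.com'
    ports:
      - '80:80'     # HTTP
      - '443:443'   # HTTPS
      - '2222:22'   # SSH for git, remapped so it doesn't clash with the host's sshd
    volumes:
      - './config:/etc/gitlab'     # Gitlab configuration
      - './logs:/var/log/gitlab'   # logs
      - './data:/var/opt/gitlab'   # repositories and application data
```

With that file saved as docker-compose.yml, `docker-compose up -d` brings the server up, and the bind-mounted directories let the configuration and data survive container rebuilds.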
Note: All of these use port 80 and are not encrypted communication (this is for testing and learning to get your feet wet).
Commentary for templates.
The cfn-gitlab-w-dns template has many options as parameters, such as outgoing email setup (Gmail), OAuth2 (Google), and automated backup to AWS S3, and the template assumes AWS region us-east-1. If another region is preferred, change the Amazon Linux AMI to one that will work in that region. By default, the template will not enable outgoing email, OAuth2, or AWS S3 backups, to minimize the chance of typos causing the setup to fail.
Instructions for using the CloudFormation template.
- Download the cfn-gitlab.json template to your machine.
- Go to your AWS Console (us-east-1)
- Click on CloudFormation icon
- Click Create Stack
- Choose the “Upload a template to Amazon S3” radio button.
- Click “Choose File”, that is directly below “Upload a template to Amazon S3”.
- Choose the template you downloaded from step 1
- Click Next
- Type a stack name, for example Gitlab, in the “Stack Name” field.
- Choose your “Key Name” from the Key Name drop down
- Click Next
- Click Next
- Click Create
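If you prefer the command line to the console, the steps above can be scripted with the AWS CLI. A sketch, assuming the CLI is configured for us-east-1; the stack name Gitlab, the key pair name my-keypair, and the parameter key KeyName are assumptions, so check your template’s Parameters section for the actual parameter name:

```shell
STACK_NAME=Gitlab      # example stack name, as in the console walkthrough
KEY_NAME=my-keypair    # replace with one of your EC2 key pair names

# Equivalent of uploading the template and clicking Create.
aws cloudformation create-stack \
  --stack-name "$STACK_NAME" \
  --template-body file://cfn-gitlab.json \
  --parameters ParameterKey=KeyName,ParameterValue="$KEY_NAME"

# Block until the stack reports CREATE_COMPLETE (or fails).
aws cloudformation wait stack-create-complete --stack-name "$STACK_NAME"

# Print the contents of the Outputs tab, including the external URL.
aws cloudformation describe-stacks --stack-name "$STACK_NAME" \
  --query 'Stacks[0].Outputs' --output table
```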
Next, make sure the stack you created is selected in the far left column, and click the “Events” tab. You should see “CREATE_COMPLETE AWS::CloudFormation::Stack” after about 2 minutes, give or take. At this point, you can click on the “Output” tab and see the external URL for your new machine. Go to that URL after waiting another 5 minutes from when the “CREATE_COMPLETE AWS::CloudFormation::Stack” entry is displayed as an event. You should then be at a “Change your password” screen in your browser (you may need to refresh), where you will have to change the password for the root account. Once you enter a new password, confirm it, and click “Change Password,” you should be taken to the login page. Log in with the username root and the password you just set. At this point you have a working Gitlab server with a default configuration and only the root user set up.
Troubleshooting hints for the installation if necessary:
SSH to the instance you created; the hostname can be found on the Outputs tab in your CloudFormation console. Reference the key pair you chose when creating the stack, for example “ssh -i ~/.ssh/id_rsa ec2-user@<instance-hostname>”, swapping in the path to your private key and the instance hostname (ec2-user is the default user on Amazon Linux; this form is common for Linux or OSX). The files you’ll want to check are /var/log/gitlab/reconfigure/<only file in this directory should be the log from the installation>, /var/log/cloud-init.log, and /var/log/cloud-init-output.log
Log file descriptions:
- /var/log/gitlab/reconfigure/<only file in this directory should be the log from the installation>
- This file’s name will have a timestamp based on the time of when it is created.
- This is the log file generated when the Gitlab reconfigure command is run, which is necessary after changing configuration. It is run automatically during the CloudFormation/cloud-init build of the instance.
- /var/log/cloud-init.log, and /var/log/cloud-init-output.log
- These are the cloud-init log files; they show the output from the cloud-init process, which is how CloudFormation bootstraps the new instance. If there are issues with CloudFormation or with the build of the system itself, they should show up here.
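Once you are on the instance, the checks themselves are one-liners. A sketch using the paths from the list above (sudo is needed because the logs are root-owned):

```shell
# The reconfigure log's name carries a timestamp, so list the directory first.
RECONF_DIR=/var/log/gitlab/reconfigure
sudo ls -l "$RECONF_DIR"

# Scan the cloud-init logs for failures during the CloudFormation build.
sudo grep -i -E 'error|fail' /var/log/cloud-init.log /var/log/cloud-init-output.log

# Watch the tail end of the build output if it's still in progress.
sudo tail -n 50 /var/log/cloud-init-output.log
```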
Next, we will create a new user in Gitlab (you can set up OAuth (Google/GitHub) or SSO, but we are keeping this simple):
- Click the wrench in the top right corner, after logging in.
- Click “New User”
- Enter a name for the new user
- Enter a username for the new user
- Enter an email address for the new user
- Click the “Admin” checkbox if you would like this new user to be an admin
- Click “Create User” at the bottom of the screen
- Click “Edit” to edit the newly created user
- Enter and confirm a password for the new user on this edit screen
- Click “Save Changes”
- Click Impersonate
- Click Profile Settings
- Click SSH Keys
- Paste in the default public SSH key you’ll use from your machine (for example, the contents of ~/.ssh/id_rsa.pub)
- Enter a Title for that key
- Click “Add key”
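If you don’t already have a key pair for that last step, generating one takes a single command. A sketch: the file name gitlab_key and the email comment are example values; on your own machine you would more likely accept ssh-keygen’s default path of ~/.ssh/id_rsa.

```shell
# Generate a 4096-bit RSA key pair with no passphrase (example file name).
ssh-keygen -t rsa -b 4096 -N "" -f ./gitlab_key -C "you@example.com"

# The public half is what you paste into Gitlab's "Key" field.
cat ./gitlab_key.pub
```

The private half (./gitlab_key here) stays on your machine and is what git uses to authenticate when you clone, pull, and push over SSH.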
At this point, you should have a functioning Gitlab server and your own user set up on it. Here is a link for the basics of using Gitlab and setting up your local machine. Minimally, after going through these three parts, Start using Git on the command line, Create and add your SSH Keys, and How to create a project in GitLab, you will have Git set up on your local client and a project (repo) created in Gitlab. If you have done just the default install described above, you won’t have email notifications, CI/CD, or SSO set up. You will have task management (issues/milestones), change management (merge requests, similar to pull requests in other systems), and all the activity tracking/reporting for your projects and users, along with all the normal Git functionality and additional visualization related to branches and merging.
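The basic round trip you will do after that setup looks like the following. This is a self-contained sketch: a local bare repository stands in for your Gitlab project’s URL, which would really look something like git@<your-gitlab-host>:youruser/myproject.git.

```shell
# Stand-in remote: a local bare repo instead of the real Gitlab URL.
REMOTE="$(mktemp -d)/myproject.git"
git init --bare "$REMOTE"

# Clone it, just as you would clone your Gitlab project.
WORK="$(mktemp -d)/myproject"
git clone "$REMOTE" "$WORK"
cd "$WORK"
git config user.name  "Your Name"        # placeholder identity
git config user.email "you@example.com"

# Make a change, commit it, and push it back to the remote.
echo "# My Project" > README.md
git add README.md
git commit -m "Initial commit"
git push origin HEAD
```

With a real Gitlab remote, that push is the point where the commit shows up in the web UI, and (once a Runner is configured) where CI/CD pipelines would trigger.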
Here is a link to the gitlab-ce documentation for your reference if you want to investigate additional configurations and operations. Note: the CloudFormation templates perform Omnibus installations, not installations from source. The documentation often gives one way to do something for an Omnibus installation and another for an installation from source; follow the Omnibus directions.
I hope that seeing how to build this type of environment will help get you over the hump of getting started, given the number of integrated features you get with it and the relative ease of building it.
In the next part of this series, I will be showing a basic workflow using Git and the CI parts of Gitlab for how you can work with it on a regular basis.
Gregory Cox is a CTO Architect at Sungard Availability Services who specializes in network architecture with a concentration in data centers, service providers, enterprise architecture, Cloud and infrastructure automation. In his 18 years in the industry, he has also held Network Architect and Technical Fellow roles at Interactive Data (ICE, NYSE), Acxiom, Comcast (AT&T Broadband).