I should preface this by saying that I love weekend projects and building new things. Over the last year, I have worked on fewer projects than I normally would because of how manual my deployment pipeline was. These were the pain points in my process of deploying a new project:
- Setting up my CI/CD to build the deployable artifact (not a container).
- Provisioning a Droplet for the new project.
- Setting up my CI/CD to SSH into the Droplet and deploy/start the project.
- Setting up Nginx to reverse proxy to the project.
- Setting up Let's Encrypt to get an SSL certificate for the project.
- No support for a staging environment.
- No support for secrets management.
The above did not always go smoothly and often took a day or more of work to get a new project deployed. Now that we have covered the old process, here are the goals of my new deployment setup:
- Ease of use: dropping some files into a repo should be enough to trigger a deployment.
- No need to provision a new server for each service.
- No need to manually change Nginx.
- No need to set up Let's Encrypt manually for each project.
- Support for a staging environment.
- Support for encrypted secrets in the repo to enable GitOps.
- Generic and reusable for future projects.
My Modern and Reliable Deployment Pipeline
After some research, my new deployment pipeline ended up using the following:
- GitHub Actions
- Ansible Vault
- Docker and Docker Hub
- Kubernetes (on DigitalOcean, in my case)
This pipeline allows a new project to be deployed by adding repo secrets, a generic Kubernetes file, encrypted secrets files, and a GitHub Actions workflow file to the repo.
Once all of the above is in place, pushing to the staging branch deploys a staging service that is routable at the URL specified, and pushing to master deploys a production service that is routable the same way.
I will go through each of the requirements below, along with the files needed, so you can recreate this deployment pipeline for your own projects if you want!
The GitHub Actions main.yaml file should be stored at .github/workflows/main.yaml.
Here is, step by step, what this GitHub Action does:
1. Checkout - checks out the repo to allow access to the code.
2. Install Ansible - installs Ansible for later use.
3. Write vault key to disk - writes the vault key to disk for use in decrypting the secrets in the repo.
4. Decrypt dockerpullsecret - decrypts and base64-encodes the docker pull secret for Kubernetes using Ansible Vault plus the vault key.
5. Decrypt production env file - does the same as step 4 for the env variables. This only runs on the master branch.
6. Decrypt staging env file - does the same as step 5, but for staging.
7. Login, build, tag and publish to Docker - takes the Dockerfile in the repo, builds an image, and publishes it to Docker Hub.
8. Run template on main.yaml (for production) - swaps all template fields in the main.yaml Kubernetes file (you will see this file below) for production. Only runs on master.
9. Run template on main.yaml (for staging) - does the same, but for staging. Only runs on the staging branch.
10. Deploy to cluster - applies the main.yaml Kubernetes file to the cluster with the provided kube config data, essentially deploying the application. If you are not using a DigitalOcean cluster, you can remove the doctl auth from this step.
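As a reference, here is a minimal sketch of what .github/workflows/main.yaml can look like, following the steps above. The action versions, the sed-based templating, and the {{...}} field names are illustrative assumptions, so adapt them to your own setup:

```yaml
# Sketch of .github/workflows/main.yaml following the steps above.
name: build-and-deploy
on:
  push:
    branches: [master, staging]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v4
      - name: Install ansible
        run: pip install ansible
      - name: Write vaultkey to disk
        run: echo "${{ secrets.VAULTKEY }}" > vaultkey
      - name: Decrypt dockerpullsecret
        run: ansible-vault decrypt --vault-password-file vaultkey .kube/dockerpullsecret.txt
      - name: Decrypt production env file
        if: github.ref == 'refs/heads/master'
        run: ansible-vault decrypt --vault-password-file vaultkey .kube/env-production.txt
      - name: Decrypt staging env file
        if: github.ref == 'refs/heads/staging'
        run: ansible-vault decrypt --vault-password-file vaultkey .kube/env-staging.txt
      - name: Login, build, tag and publish to docker
        run: |
          echo "${{ secrets.DOCKER_PASSWORD }}" | docker login -u "${{ secrets.DOCKER_USERNAME }}" --password-stdin
          docker build -t "${{ secrets.DOCKER_USERNAME }}/${{ secrets.APP_NAME }}:${{ github.sha }}" .
          docker push "${{ secrets.DOCKER_USERNAME }}/${{ secrets.APP_NAME }}:${{ github.sha }}"
      - name: Run template on main.yaml (production)
        if: github.ref == 'refs/heads/master'
        run: |
          sed -i "s/{{ENVIRONMENT}}/production/g" .kube/main.yaml
          sed -i "s/{{URL}}/${{ secrets.URL_NO_WWW }}/g" .kube/main.yaml
          sed -i "s|{{IMAGE}}|${{ secrets.DOCKER_USERNAME }}/${{ secrets.APP_NAME }}:${{ github.sha }}|g" .kube/main.yaml
          sed -i "s|{{DOCKER_PULL_SECRET}}|$(base64 -w 0 .kube/dockerpullsecret.txt)|g" .kube/main.yaml
          sed -i "s|{{ENV_FILE}}|$(base64 -w 0 .kube/env-production.txt)|g" .kube/main.yaml
      - name: Run template on main.yaml (staging)
        if: github.ref == 'refs/heads/staging'
        run: |
          sed -i "s/{{ENVIRONMENT}}/staging/g" .kube/main.yaml
          sed -i "s/{{URL}}/${{ secrets.URL_NO_WWW_STAGING }}/g" .kube/main.yaml
          sed -i "s|{{IMAGE}}|${{ secrets.DOCKER_USERNAME }}/${{ secrets.APP_NAME }}:${{ github.sha }}|g" .kube/main.yaml
          sed -i "s|{{DOCKER_PULL_SECRET}}|$(base64 -w 0 .kube/dockerpullsecret.txt)|g" .kube/main.yaml
          sed -i "s|{{ENV_FILE}}|$(base64 -w 0 .kube/env-staging.txt)|g" .kube/main.yaml
      - name: Deploy to cluster
        run: |
          doctl auth init -t "${{ secrets.DIGITALOCEAN_ACCESS_TOKEN }}"
          echo "${{ secrets.KUBE_CONFIG_DATA }}" | base64 -d > kubeconfig
          kubectl --kubeconfig kubeconfig apply -f .kube/main.yaml
```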
The following secret keys are required in your repo for the GitHub Action above to work:
- APP_NAME - used as the name for the image on Docker Hub.
- DIGITALOCEAN_ACCESS_TOKEN - only needed if your Kubernetes cluster is on DigitalOcean, like mine.
- DOCKER_PASSWORD - your Docker Hub password.
- DOCKER_USERNAME - your Docker Hub username.
- KUBE_CONFIG_DATA - your kube config for the given cluster, base64-encoded.
- URL_NO_WWW - the URL you want to access the project on.
- URL_NO_WWW_STAGING - same as above, but for when it's deployed to staging.
- VAULTKEY - the vault key used to encrypt your secrets files.
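To produce the KUBE_CONFIG_DATA value, you base64-encode your kubeconfig file. A sample file is used in this sketch; in practice, point it at the kubeconfig for your cluster (e.g. ~/.kube/config):

```shell
# Sketch: base64-encode a kubeconfig so it can be pasted into the
# KUBE_CONFIG_DATA repo secret. A sample file stands in for the real one.
printf 'apiVersion: v1\nkind: Config\n' > /tmp/sample-kubeconfig
# -w 0 (GNU coreutils) disables line wrapping so the secret is one line.
base64 -w 0 < /tmp/sample-kubeconfig
```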
A couple of quick things to note about my Kubernetes cluster that allow the following YAML to work:
- I have the Nginx Ingress Controller installed and working.
- I also have cert-manager installed and working; in my case, the certificate secret is named bevii-cert.
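If you need to set up cert-manager from scratch, a minimal ClusterIssuer looks roughly like this (the issuer name and email are placeholders, not my exact config):

```yaml
# Minimal cert-manager ClusterIssuer sketch for Let's Encrypt.
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: you@example.com
    privateKeySecretRef:
      name: letsencrypt-account-key
    solvers:
      - http01:
          ingress:
            class: nginx
```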
The Kubernetes main.yaml file should be stored at .kube/main.yaml.
This Kubernetes YAML does the following:
1. Creates a namespace for the project, scoped to the environment (staging or production). Everything after this is scoped to that namespace.
2. Creates a docker pull secret to enable the deployment of private images.
3. Creates a secret for the env variables that will be mounted into the deployment.
4. Creates a service to expose the deployment to traffic routed by the ingress.
5. Creates an ingress with some annotations. This specifies the URL the service will be accessible from.
6. Creates a deployment with the env variables from step 3 mounted into the container as a volume. It also sets the image pull secret from step 2 to pull private images.
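Here is a minimal sketch of what .kube/main.yaml can look like. The myapp name, port numbers, {{...}} template fields, and the letsencrypt issuer annotation are illustrative assumptions; the bevii-cert secret name comes from my cluster setup:

```yaml
# Sketch of .kube/main.yaml. Template fields ({{...}}) are swapped in by CI.
apiVersion: v1
kind: Namespace
metadata:
  name: myapp-{{ENVIRONMENT}}
---
apiVersion: v1
kind: Secret
metadata:
  name: dockerpullsecret
  namespace: myapp-{{ENVIRONMENT}}
type: kubernetes.io/dockerconfigjson
data:
  .dockerconfigjson: {{DOCKER_PULL_SECRET}}
---
apiVersion: v1
kind: Secret
metadata:
  name: env-secret
  namespace: myapp-{{ENVIRONMENT}}
data:
  .env: {{ENV_FILE}}
---
apiVersion: v1
kind: Service
metadata:
  name: myapp
  namespace: myapp-{{ENVIRONMENT}}
spec:
  selector:
    app: myapp
  ports:
    - port: 80
      targetPort: 8080
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp
  namespace: myapp-{{ENVIRONMENT}}
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - {{URL}}
      secretName: bevii-cert
  rules:
    - host: {{URL}}
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: myapp
                port:
                  number: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
  namespace: myapp-{{ENVIRONMENT}}
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      imagePullSecrets:
        - name: dockerpullsecret
      containers:
        - name: myapp
          image: {{IMAGE}}
          ports:
            - containerPort: 8080
          volumeMounts:
            - name: env
              mountPath: /etc/env
      volumes:
        - name: env
          secret:
            secretName: env-secret
```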
Additional setup and repo structure
You also want the following files inside your repo:
- dockerpullsecret.txt - inside the .kube folder; this is an Ansible-encrypted docker pull secret. See the Kubernetes documentation for the structure of this secret.
- env-production.txt - inside the .kube folder; your production environment variables.
- env-staging.txt - inside the .kube folder; your staging environment variables.
- Dockerfile - at the base of your repo; specific to containerizing this particular project.
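The secrets files get encrypted once, locally, with Ansible Vault before you commit them. The vaultkey file name here is an example; the password in it must match the VAULTKEY repo secret, and the key file itself should never be committed:

```shell
# Encrypt the secrets files in place before committing them.
# 'vaultkey' holds the vault password; keep it out of git.
echo 'my-vault-password' > vaultkey
ansible-vault encrypt --vault-password-file vaultkey .kube/dockerpullsecret.txt
ansible-vault encrypt --vault-password-file vaultkey .kube/env-production.txt
ansible-vault encrypt --vault-password-file vaultkey .kube/env-staging.txt
```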
The setup described in this post lets me drop in a GitHub Actions file, a Kubernetes YAML file, and some encrypted secrets files, then set up a few secrets on the repo. After that, all I need to do is push to master or staging to get a service up and running, routable, and secured with SSL.
In fact, setting up this blog using my new deployment pipeline took around 5 minutes, compared to the 1 or 2 days of provisioning servers, Nginx configuration, CI/CD changes, and Let's Encrypt SSL work it would have taken previously.
I hope you enjoyed hearing about the work that went into my modern deployment pipeline. Look out for future posts!
Till next time. ✌️