In this article I'm describing the CircleCI k3d/k3s orb that I've created. It allows creating a new k8s cluster for each of your builds.
I was trying to set up CI (CircleCI) for one of my repositories (KubeLibrary) and I needed a k8s cluster to run test examples. I immediately wanted to set up a k3d/k3s cluster, but it turned out to be not as easy as I thought. I ended up creating the k3d orb, which allows creating a temporary k8s cluster just for tests. In this article I'm describing some of the internals of this orb, implemented to fit into CircleCI. If you just want to use it, follow the orb documentation.
I already wrote about K3d in this article and I'm using it on a daily basis for quick tests or development. In short, K3d is dockerized K3s, which in turn is a small (but certified) k8s distribution made by Rancher Labs.
Starting a cluster just means creating containers with k8s nodes, so it shouldn't be that difficult to make it work in the docker environment supported by CircleCI. I quickly learned that fitting k3d into CircleCI requires some gymnastics, so I decided to create a CircleCI orb to hopefully make other people's lives easier.
A CircleCI orb is just a way of packaging custom jobs, commands and executors. It doesn't differ much from a regular CI definition in .circleci/config.yml, but it allows hiding some of the complexity.
CircleCI uses a remote docker environment, initialized by executing the setup_remote_docker command. Whatever docker command is used, it will be executed on a separate VM where the only interface is the docker command itself. This has a couple of implications regarding networking and file transfer - simply speaking, if you want to run anything against docker containers, it should also be running in a container.
On top of that, separate step executions do not share environment variables, so there is no direct way of passing values outside a CircleCI command. One of the options is to simply write and read values between the commands.
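As a minimal sketch, a job using the remote docker environment could look like this (the orb reference and version are placeholders; check the orb registry for the real ones):

```yaml
version: 2.1
orbs:
  # placeholder reference; substitute the published namespace and version
  k3d: example/k3d@x.y.z
jobs:
  test:
    docker:
      - image: cimg/base:stable
    steps:
      - checkout
      # all subsequent docker commands run on a separate remote VM
      - setup_remote_docker
```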
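CircleCI's $BASH_ENV file, which is sourced at the start of every step of a job, is one common way of doing this; a plain file on the primary container works just as well. A minimal sketch:

```yaml
steps:
  - run:
      name: Write a value for later steps
      command: echo 'export K3D_CLUSTER=ci-cluster' >> "$BASH_ENV"
  - run:
      name: Read it in a later step
      command: echo "Using cluster ${K3D_CLUSTER}"
```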
It is also worth mentioning that the remote docker VM has its own resource limits. According to the documentation it is 2x Intel(R) Xeon(R) @ 2.3GHz CPU, 8GB RAM and 100GB of disk space. That is not much, but it should be enough to cover some scenarios and it fits the small K3s deployment size (see docs).
To address some of the mentioned limitations and keep the k8s experience similar to what you have on your local system, I had to create helpers for kubectl and helm. They are all executed as docker containers running in the k3d load balancer network, which allows reaching the kubernetes API on 0.0.0.0 for simplicity.
Additionally, to achieve persistence between steps I used a couple of volumes for holding the helm cache, config and data. Each kubectl and helm execution also has the repository content mounted at /repo/ to allow easy application of YAML files or helm charts.
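Conceptually, each helper invocation is a docker run along these lines (the volume and image names here are illustrative, not the orb's actual ones):

```yaml
- run:
    command: |
      # join the load balancer's network namespace so the kubernetes API
      # is reachable on 0.0.0.0:6443; "kubeconfigs" and "repo" stand for
      # volumes prepared beforehand by the helpers
      docker run --rm \
        --network "container:k3d-${K3D_CLUSTER}-serverlb" \
        -v kubeconfigs:/.kube \
        -v repo:/repo \
        -e KUBECONFIG="/.kube/${K3D_CLUSTER}" \
        bitnami/kubectl:latest get nodes
```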
All the setup is done in k3d/k3d-helpers, which should be executed before the first k3d cluster is created. This is the place where the volumes are initialized.
To hide the kubectl and helm internals, they both have wrappers implemented as bash functions kept in helpers.sh, which is then sourced in every k3d/k3d-run command.
Kubeconfigs are treated very similarly to other data: each new cluster writes a file to /.kube/<name of cluster> on a separate volume, which is also mounted into the kubectl and helm containers. The path to the kubeconfig is passed as the KUBECONFIG environment variable. Switching the target cluster is done by the k3d/k3d-use command, which changes the value of K3D_KUBECONFIG globally. Creating a new cluster always points KUBECONFIG at its own kubeconfig.
Apart from the kubernetes API version of the cluster itself, both kubectl and helm can be used in specific versions. This is configured by setting the KUBECTL_VERSION and HELM_VERSION environment variables. They can be set for a particular step or globally for the job:
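For example (the version values here are just placeholders):

```yaml
jobs:
  test:
    environment:
      # placeholders; pin whatever versions your tests need
      KUBECTL_VERSION: v1.20.2
      HELM_VERSION: v3.5.2
```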
To reach the kubernetes API directly, it is best to place the container in the k3d-${K3D_CLUSTER}-serverlb network; this way the API is exposed on 0.0.0.0:6443. This is exactly how the kubectl and helm containers are used.
When k3d creates a cluster, it assigns a random port that forwards traffic to 6443, and that is what eventually lands in the kubeconfig. Consequently, a small change to the cluster.server URL needs to be applied to each kubeconfig file to point it back to the original 0.0.0.0:6443.
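A fix-up along these lines does the trick (a sketch, not the orb's exact code; it has to run wherever the kubeconfig volume is mounted):

```yaml
- run:
    command: |
      # rewrite the randomly mapped host port back to the address
      # that is valid inside the cluster network
      sed -i 's|server: https://.*|server: https://0.0.0.0:6443|' \
        "/.kube/${K3D_CLUSTER}"
```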
As I already said, anything that is supposed to target the k3d cluster should run in a container, but what if the container is supposed to run in the cluster itself? Your own container registry would be perfect, but it is not yet part of the orb. Fortunately, there is a simpler approach introduced by k3d, called image import. You can simply copy an image that you have just built into the cluster's container cache, as in the example below.
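A sketch, assuming a hypothetical image tag my-app:ci (depending on your k3d version the subcommand is k3d image import or k3d import-images):

```yaml
- run:
    command: |
      docker build -t my-app:ci .
      # copy the freshly built image into the cluster's container cache
      k3d image import my-app:ci --cluster "${K3D_CLUSTER}"
```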
It is important to remember that, to use the container cache, imagePullPolicy in your kubernetes resources cannot be set to Always, which enforces checking for a newer image in the registry. An example deployment would look similar to the following code.
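A minimal deployment using the imported image (names are again just placeholders):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: my-app:ci
          # IfNotPresent (or Never) lets kubelet use the imported image
          imagePullPolicy: IfNotPresent
```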
After deploying your own workloads on the cluster, it is also good to know how to expose them to the outside world. With the current k3d orb setup you can use at least three ways to achieve this. In all the examples I'm using the Grafana chart as the target service and reaching it with wget from busybox.
Expose via kubectl proxy
As a first option I will describe kubectl proxy, which should handle all the networking complexity and expose the pod port on 0.0.0.0. In the k3d orb this is achieved by first getting the pod name with kubectl and then using the proxy command, which is just kubectl in a detached container. To reach the target port you just need to deploy your container in the proxy container's network, as shown below (--network container:proxy).
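A sketch of the final step (the Grafana port 3000 and the container name proxy are assumptions based on the description above):

```yaml
- run:
    command: |
      # join the detached proxy container's network namespace and hit
      # the forwarded Grafana port on localhost
      docker run --rm --network container:proxy busybox \
        wget -qO- http://localhost:3000
```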
Another option is to use a kubernetes service of type LoadBalancer, which will expose a port on the k3d-${K3D_CLUSTER}-serverlb container and forward it to each agent/node and eventually to the target pod. SERVICE_IP is set to the load balancer container, and to be able to reach it we need to put our container (the one running wget) in the k3d cluster network using --network k3d-${K3D_CLUSTER}.
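A sketch, assuming the Grafana service listens on port 80:

```yaml
- run:
    command: |
      # the wget container joins the cluster network, so the load
      # balancer address stored in SERVICE_IP is directly reachable
      docker run --rm --network "k3d-${K3D_CLUSTER}" busybox \
        wget -qO- "http://${SERVICE_IP}:80"
```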
The last option is to expose containers using an ingress. K3s/k3d comes equipped with the Traefik ingress controller, so the only missing part is to create an ingress resource. In my case this is already handled by the Grafana chart, but it should be fairly easy to reproduce. The way it works is not that different from the load balancer approach; there is just an extra piece (Traefik) routing by URLs instead of ports. Notice the hostname being added to /etc/hosts of the container in the example below.
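A sketch with a hypothetical ingress host grafana.local (--add-host writes the entry into the container's /etc/hosts):

```yaml
- run:
    command: |
      # resolve the ingress hostname to the load balancer container and
      # let Traefik route the request by URL
      LB_IP=$(docker inspect -f \
        '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' \
        "k3d-${K3D_CLUSTER}-serverlb")
      docker run --rm --network "k3d-${K3D_CLUSTER}" \
        --add-host "grafana.local:${LB_IP}" busybox \
        wget -qO- http://grafana.local/
```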
As a sort of conclusion, I'd like to put together the steps to publish your own orb. The docs are there, but I felt a bit lost at the point where I knew what the content should be but not how to make an orb out of it.
Write the steps without using an orb, as for any CI definition in .circleci/config.yml.
Move the generic orb commands to an inline orb definition (still in your .circleci/config.yml); see the sketch after this list. Notice that you can use the orb as if it was already published, as k3d/<command>. Test and fix.
Install the CircleCI CLI as described here; you will need a CircleCI API token.
Use the orb-starter-kit for templating a new repository with your orb. The repository name needs to match the orb name, but you can change the repository name afterwards without problems; just update origin with something like: git remote set-url origin https://hostname/USERNAME/REPOSITORY.git.
Migrate what you've done so far to the new repository. You are no longer keeping things as yaml branches but rather as paths in the src folder. The CI definition is in place and you are working on the Alpha branch, which works as a staging/develop branch - pull requests into master trigger publishing of the orb. So you are safe to just play with this branch until you are happy with the code. In my case I turned all the examples into integration tests, so that each commit triggers verification of the examples.
Remember to update the Readme.md, badge links and other parts that are still not fitted to your orb.
You can use existing orb repos to see how it all works, e.g. the python orb.
Create the first PR into master, which will also publish the orb automatically. Just use [semver:patch|minor|major|skip] in the name of the PR merge commit. So [semver:patch] would publish version 0.0.1, another commit with [semver:patch] will publish 0.0.2, [semver:minor] would publish 0.1.0, which you could treat as a starting point, and so on.
Your Alpha branch may look messy at this point; what you can do is simply remove Alpha (after it was merged to master) and create a new one out of master. Subsequent commits will probably contain small changes or be PRs to Alpha.
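As mentioned above, an inline orb definition keeps the orb's commands directly under the orbs key of your config; a rough sketch with illustrative names (the real orb's commands and parameters differ):

```yaml
version: 2.1
orbs:
  k3d:
    commands:
      # illustrative command, not the published orb's actual interface
      k3d-run:
        parameters:
          command:
            type: string
        steps:
          - run: << parameters.command >>
```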