K3d/K3s in CircleCI

In this article I describe the CircleCI k3d/k3s orb that I've created. It allows creating a new k8s cluster for each of your builds.

I was trying to set up CI (CircleCI) for one of my repositories (KubeLibrary) and I needed a k8s cluster to run test examples. I immediately wanted to set up a k3d/k3s cluster, but it turned out not to be as easy as I thought. I ended up creating the k3d orb, which allows creating a temporary k8s cluster just for tests. In this article I describe some of the internals of this orb and how they were implemented to fit into CircleCI. If you just want to use it, follow the orb documentation.

Background

I have already written about K3d in this article and I use it on a daily basis for quick tests or development. In short, K3d is dockerized K3s, which in turn is a small (but certified) k8s distribution made by Rancher Labs.

Starting a cluster just means creating containers with k8s nodes, so it shouldn't be that difficult to make it work in the docker environment supported by CircleCI. I quickly learned that fitting k3d into CircleCI needs some gymnastics, so I decided to create a CircleCI orb to hopefully make other people's lives easier.

A CircleCI orb is just a way of packaging custom jobs, commands and executors. It doesn't differ much from a regular CI definition in .circleci/config.yml, but it allows hiding some of the complexity.


CircleCI docker setup

CircleCI uses a remote docker environment, initialized by executing the setup_remote_docker command. Whatever docker command you use, it will be executed on a separate VM, where the only interface is the docker command itself. This has a couple of implications regarding networking and file transfer; simply speaking, if you want to run anything against docker containers, it should also be running in a container.
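A minimal illustration of the pattern (nginx and curlimages/curl are just example images, not part of the orb):

# the job's shell cannot reach this server; it lives on the remote docker VM
docker run -d --name web nginx:alpine
# so run the client itself as a container sharing the server's network namespace
docker run --rm --network container:web curlimages/curl -s http://0.0.0.0:80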

On top of that, separate step executions do not share environment variables, so there is no direct way of passing values outside a CircleCI command. One of the options is to simply write and read values to a file between the commands.
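This is the pattern the orb relies on; a minimal sketch of the idea (the variable value is illustrative, helpers.sh matches the file the orb uses later):

# one command persists the value on the shared filesystem...
echo "export K3D_CLUSTER=myk3d" >> helpers.sh
# ...and a later command reads it back
source helpers.sh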

It is also worth mentioning that the remote docker VM has its own resource limits. According to the documentation it is 2x Intel(R) Xeon(R) @ 2.3GHz CPU, 8GB RAM and 100GB of disk space. That is not much, but it should be enough to cover some of the scenarios and fits the K3s small deployment size (see docs).


Orb design

Helpers

To address some of the mentioned limitations and keep the k8s experience similar to what you have on your local system, I had to create helpers for kubectl and helm. They are all executed as docker containers running in the k3d load balancer network, which allows reaching the kubernetes API on 0.0.0.0 for simplicity.

Additionally, to achieve persistence between steps, I used a couple of volumes for holding the helm cache, config and data. Each kubectl and helm execution also has the repository content mounted at /repo/ to allow easy application of YAML files or helm charts.

All the setup is done in k3d/k3d-helpers, which should be executed before the first k3d cluster is created. This is the place where the volumes are initialized.

docker create -v /.kube --name kubeconfig alpine:3.4 /bin/true
docker create -v /repo --name repo alpine:3.4 /bin/true
docker cp . repo:/repo/
docker volume create helm_cache
docker volume create helm_config
docker volume create helm_data
(...)

To hide the kubectl and helm internals, both have wrappers implemented as bash functions kept in helpers.sh, which is then sourced in every k3d/k3d-run command.

(...)
cat <<EOF > helpers.sh
        export K3D_USING_HELPERS=true
        export K3D_CLUSTER=NotSet
        export K3D_KUBECONFIG=NotSet
        kubectl () {
          docker run --rm --name kubectl -e KUBECONFIG=\$K3D_KUBECONFIG \
          --network container:k3d-\${K3D_CLUSTER}-serverlb \
          --volumes-from kubeconfig \
          --volumes-from repo \
          bitnami/kubectl:\${KUBECTL_VERSION:-latest} "\$@"
        }
(...)


Kubeconfigs

Kubeconfigs are treated very similarly to other data: each new cluster writes a file into /.kube/<name of cluster> on a separate volume, which is also mounted into the kubectl and helm containers. The path to the kubeconfig is passed in the KUBECONFIG environment variable. Switching the target cluster is done with the k3d/k3d-use command, which changes the value of K3D_KUBECONFIG globally. Creating a new cluster always sets KUBECONFIG to its own kubeconfig.
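In a job this could look roughly like the following sketch; the cluster-name parameter of k3d/k3d-use is an assumption based on the k3d/k3d-up examples later in this article:

- k3d/k3d-up:
    cluster-name: cluster-a
- k3d/k3d-up:
    cluster-name: cluster-b   # KUBECONFIG now points at cluster-b
- k3d/k3d-use:
    cluster-name: cluster-a   # kubectl and helm target cluster-a again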


K3d cluster

A new k8s cluster is created using the k3d/k3d-up command, which accepts the following parameters (see the example after the list):

  • cluster-name - name of the cluster, used for determining the kubeconfig, but also to point at the correct cluster load balancer.
  • k3s-version - tag of the k3s container, used to select the kubernetes version, e.g. v1.16.14-k3s1.
  • agents - number of kubernetes agents, default: 1.
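Putting it together, a step creating a two-agent cluster on a specific kubernetes version could look like this (a sketch based on the parameters above):

- k3d/k3d-up:
    cluster-name: circleci-k8s-1
    k3s-version: v1.16.14-k3s1
    agents: 2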


Versions

Apart from the kubernetes cluster API version, both kubectl and helm can be used in specific versions. This is configured by setting the KUBECTL_VERSION and HELM_VERSION environment variables. They can be set for a particular step or globally for the job:

(...)
  jobs:
    test-k3d-versions:
      executor: k3d/default
      environment:
        - KUBECTL_VERSION: 1.16.15
        - HELM_VERSION: 3.3.1
      steps:
(...)


Reaching kubernetes API

To reach the kubernetes API directly, it is best to place the container in the network of the k3d-${K3D_CLUSTER}-serverlb container; this way the API is exposed on 0.0.0.0:6443. This is exactly how the kubectl and helm containers are run.

(...)
        helm () {
          docker run --rm --name helm -e KUBECONFIG=\$K3D_KUBECONFIG \
          --network container:k3d-\${K3D_CLUSTER}-serverlb \
(...)

When k3d creates a cluster, it assigns a random port that forwards traffic to 6443, and that is what eventually lands in the kubeconfig. Consequently, a small change to the cluster.server URL needs to be applied to each of the kubeconfig files to point back to the original 0.0.0.0:6443.
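As a sketch, such a rewrite could be done with sed; the exact expression and the kubeconfig path below are assumptions, not necessarily what the orb uses:

# replace the random host port in cluster.server with the in-network API port
sed -i -E 's|server: https://0\.0\.0\.0:[0-9]+|server: https://0.0.0.0:6443|' /.kube/mycluster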


Running your own containers in the cluster

As I already said, anything that is supposed to target the k3d cluster should run in a container, but what if the container is supposed to run in the cluster itself? A dedicated container registry would be perfect, but it is not yet part of the orb. Fortunately, there is a simpler approach introduced by k3d, called image import. You can simply copy an image that you have just built into the cluster's container cache, as in the example below.

docker build -t pyserver -f sample/own_image/Dockerfile .
k3d image import -c ${K3D_CLUSTER} pyserver:latest

It is important to remember that, to use the container cache, imagePullPolicy in kubernetes resources cannot be set to Always, which enforces checking the registry for a newer image. An example deployment would look similar to the following code.

  template:
    metadata:
      labels:
        app: pyserver
    spec:
      containers:
      - name: pyserver
        image: pyserver:latest
        imagePullPolicy: Never


Exposing services

After deploying your own workloads on the cluster, it would be good to also know how to expose them to the outside world. In the current k3d orb setup you can use at least three ways to achieve this. In all the examples I'm using the Grafana chart as a target service and reaching it with wget from a busybox container.

Expose via kubectl proxy

As a first option I will describe kubectl proxy, which should handle all the networking complexity and expose the pod port on 0.0.0.0. In the k3d orb it is achieved by first getting the pod name with kubectl and then using the proxy command, which is just kubectl running in a detached container. To reach the target port you just need to run your container in the proxy container's network, as shown below (--network container:proxy).

- k3d/k3d-run:
    step-name: Expose service via kubectl proxy
    command: |
      helm repo add stable https://kubernetes-charts.storage.googleapis.com
      helm install grafana stable/grafana
      sleep 30
      export POD_NAME=$(kubectl get pods --namespace default -l "app.kubernetes.io/name=grafana,app.kubernetes.io/instance=grafana" -o jsonpath="{.items[0].metadata.name}")
      proxy --namespace default port-forward $POD_NAME 3000
      docker run --rm -it --network container:proxy busybox /bin/sh -c "wget 0.0.0.0:3000; cat index.html" | grep Grafana


Expose via load balancer

Another option is to use a kubernetes service of type LoadBalancer, which will expose a port on the k3d-${K3D_CLUSTER}-serverlb container and forward it to each agent/node and eventually to the target pod. SERVICE_IP is set to the load balancer container's address, and to be able to reach it we need to put our container (the one running wget) in the k3d cluster network using --network k3d-${K3D_CLUSTER}.

- k3d/k3d-run:
    step-name: Expose service via loadbalancer
    command: |
      helm repo add stable https://kubernetes-charts.storage.googleapis.com
      # Notice repo content available at /repo/ automatically
      helm install grafana stable/grafana -f /repo/sample/grafana/with_lb.yaml
      sleep 30
      export SERVICE_IP=$(kubectl get svc --namespace default grafana -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
      # Test if Grafana login page is reachable, exit code 1 means grep didn't find a match
      docker run --rm -it --network k3d-${K3D_CLUSTER} busybox /bin/sh -c "wget ${SERVICE_IP}:3000; cat index.html" | grep Grafana


Expose via ingress

The last option is to expose containers using ingress. K3s/K3d comes equipped with the Traefik ingress controller, so the only missing part to start using it is an ingress resource. In my case it is already handled by the Grafana chart, but it should be fairly easy to reproduce (see the sketch after the example). The way it works is not that different from the load balancer approach; there is just an extra piece (Traefik) routing by URL instead of by port. Notice adding the hostname to /etc/hosts of the container in the example below.

- k3d/k3d-run:
    step-name: Expose service via ingress
    command: |
      helm repo add stable https://kubernetes-charts.storage.googleapis.com
      # Notice repo content available at /repo/ automatically
      helm install grafana stable/grafana -f /repo/sample/grafana/with_ingress.yaml
      sleep 45
      export INGRESS_IP=$(kubectl get ing --namespace default grafana -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
      # Test if Grafana login page is reachable, exit code 1 means grep didn't find a match
      docker run --rm -it --network k3d-${K3D_CLUSTER} busybox \
      /bin/sh -c "echo '${INGRESS_IP}  chart-example.local' >> /etc/hosts; \
      wget http://chart-example.local/; cat index.html" | grep Grafana
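Since the chart creates the ingress resource in this example, for completeness a minimal hand-written equivalent could look roughly like this; it is a sketch for the kubernetes versions of that time, and the service name and port are assumptions based on the Grafana chart defaults:

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: grafana
spec:
  rules:
  - host: chart-example.local
    http:
      paths:
      - backend:
          serviceName: grafana
          servicePort: 80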


Publishing your own orb

As a sort of conclusion, I'd like to put together the steps to publish your own orb. The docs are there, but I felt a bit lost at the point where I knew what the content should be but not how to make an orb out of it.

  1. Write the steps without using an orb, as for any CI definition in .circleci/config.yml.
  2. Move the generic orb commands to an inline orb definition (still in your .circleci/config.yml). Notice that you can already use the orb as if it were published, as k3d/<command>. Test and fix.
version: 2.1

orbs:
  python: circleci/python@0.3.2
  k3d:
    commands:
      k3d-up:
        description: Starts k8s (k3d/k3s) cluster
        parameters:
          cluster-name:
            description: Name for the k3d cluster
            type: string
            default: myk3d
(...)
jobs:
  test-on-k8s:
    executor: python/default

    steps:
      - setup_remote_docker
      - checkout
      - k3d/k3d-helpers
      - k3d/k3d-up:
          cluster-name: circleci-k8s-1
  3. Install the CircleCI CLI as described here; you will need a CircleCI API token.
  4. Use orb-starter-kit to template a new repository with your orb. The repository name needs to match the orb name, but you can change the repository name afterwards without problems, just update origin with something like: git remote set-url origin https://hostname/USERNAME/REPOSITORY.git.
  5. Migrate what you've done till now to the new repository. You are no longer keeping things as inline YAML but rather as paths in the src folder. The CI definition is in place and you are working on the Alpha branch, which works as a staging/develop branch; pull requests into master trigger publishing the orb. So you are safe to just play with this branch until you are happy with the code. In my case I turned all the examples into integration tests, so that each commit triggers verification of the examples.
  6. Verify your orb against the orb best practices.
  7. Remember to update Readme.md, badge links and other parts that are still not fitted to your orb.
  8. You can use existing orb repos to see how it works, e.g. the python orb.
  9. Create the first PR into master, which will also publish the orb automatically. Just use [semver:patch|minor|major|skip] in the name of the PR merge commit (see the example after this list). So [semver:patch] would publish version 0.0.1, another commit with [semver:patch] will publish 0.0.2, [semver:minor] should publish 0.1.0, which you could treat as a starting point, and so on.
  10. Your Alpha branch may look messy at this point; what you can do is simply remove Alpha (after it was merged to master) and create a new one from master. Subsequent commits would then be small changes or PRs into Alpha.
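For example, a merge commit message bumping the minor version could look like this (the message text itself is illustrative):

git commit -m "Add k3d-up and helpers commands [semver:minor]"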


