Testing with octopus
While looking for tooling for testing Kubernetes deployments I bumped into octopus, created as part of the Kyma project. It allows running tests in a similar manner to helm test, but treats test definitions and executions as Kubernetes resources. I thought it was the perfect tool for running tests that are encapsulated as containers, i.e. the rf-service that I'm working on. In this article I describe how I made KubeLibrary, rf-service and octopus work together.
Context
This article is part of a series on testing on Kubernetes. You can find more info in the following articles:
Robot Framework library for testing Kubernetes - in this part I describe a Robot Framework library (Python) that uses the Kubernetes client to get info about your cluster and turn it into actual test suites.
Testing on kubernetes - rf-service - this article describes a Python service, executed in the form of a CronJob, that actually runs the tests from KubeLibrary on a Kubernetes cluster.
Why is octopus such a good fit?
Testing Kubernetes deployments is certainly not as evolved as other parts of the Kubernetes world. We assume that things defined in YAML files will be delivered as we expect, but does it really differ from regular programming? It becomes even more crucial when the final YAML is templated, as in the helm case. This was noticed by the helm team and implemented as the helm test functionality. The concept of encapsulating all the needed testing tools in a container and running them against a deployment is neat; the problem is that those tests are not treated as first-class Kubernetes citizens.
Kyma-project octopus took steps towards making this happen; you can find the whole motivation in their blog post. Basically, they turned the 0/1 helm test approach into handling tests as actual resources that can be retried, filtered and executed in parallel. It was used in an integration testing context, but it could easily be used for recurring health checks as well.
Now you only need a container with your test suites to execute on demand, which is exactly what rf-service is supposed to provide. Until now it was triggered as a CronJob, so there was a schedule and possible repeatability; the implemented changes allow taking advantage of running it within octopus. You can find more info about what was actually added in the section below.
Essential changes to rf-service
In this article I'm focusing only on running rf-service as a standalone container, but it can also be started as a REST API based service for running tests on demand.
CLI support
Until now, rf-service configuration was done by passing a .json file defining a fetcher - the functionality for collecting test suites - and a publisher - the functionality for publishing Robot Framework results. This meant that the container had to read some file or have it mounted, for example by attaching a ConfigMap. Running it using octopus demanded adding the possibility to define all configuration using CLI parameters.
To avoid maintaining a growing list of hardcoded CLI parameters, they are generated dynamically from the metadata of the fetcher and publisher classes. This means that a CLI execution like the one below (the flag names here are illustrative - the exact set is derived from the fetcher and publisher metadata):
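```bash
# Illustrative invocation only - the entrypoint and flag names are assumptions,
# since the real flags are generated from the configured fetcher/publisher classes.
rf-service \
  --fetcher GitFetcher \
  --GitFetcher-url https://github.com/devopsspiral/KubeLibrary.git \
  --GitFetcher-branch master \
  --GitFetcher-path testcases/ \
  --publisher LocalPublisher \
  --LocalPublisher-path somecontext \
  -i octopus
```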
is equivalent to a JSON configuration file along these lines:
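```json
{
  "fetcher": {
    "name": "GitFetcher",
    "url": "https://github.com/devopsspiral/KubeLibrary.git",
    "branch": "master",
    "path": "testcases/"
  },
  "publisher": {
    "name": "LocalPublisher",
    "path": "somecontext"
  },
  "include_tags": ["octopus"]
}
```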
which allows for pretty flexible configuration of rf-service.
Execution by tag
Tagging tests in Robot Framework is a powerful way of handling test execution and, in general, categorizing test cases. Since rf-service fetches complete test suites, allowing execution of only a part of them is a must, and this has been mirrored from the Robot Framework CLI. To include tags pass -i <tag>, to exclude tags use -e <tag>. Those values are then passed on to Robot Framework, so you can expect all the behaviors to be exactly the same.
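For reference, a test case gets its tag directly in the suite file, e.g.:

```robotframework
*** Test Cases ***
Pods Are Running
    [Tags]    octopus
    Log    Selected when rf-service is started with -i octopus
```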
Dependency resolution
On the path towards making rf-service generic enough to be executed as a base for different kinds of test cases, support for pip requirements was added. This way, if the fetcher collects a directory containing a requirements.txt file, it will install the packages as with pip install -r requirements.txt. Just remember that the first spotted requirements.txt file will be used, so it is best to keep one in the top-level directory.
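For example, a fetched directory could look like this (file names are illustrative), with a single requirements.txt at the top level:

```
testcases/
├── requirements.txt    # installed with pip install -r before the tests run
├── pods.robot
└── ingress.robot
```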
Running tests
Alright, let's run some tests then. I will be running test cases from KubeLibrary/testcases and using KubeLibrary itself as the testing library.
As a first step you need to install octopus from the chart provided in its repository.
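A minimal installation sketch - the repository location and chart path are assumptions to verify against the octopus README:

```bash
# clone the octopus repository and install it from the bundled chart
git clone https://github.com/kyma-incubator/octopus.git
helm install octopus ./octopus/chart/octopus
```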
Then, we need to define what and how the tests will be executed. All of this is configured using Kubernetes resources (CRDs) brought by octopus. In a TestDefinition you define how the test container is executed, in a similar way to how a Kubernetes Deployment is defined. In my case, the test definition looks roughly as follows (the image name, arguments and environment values are illustrative):
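```yaml
apiVersion: testing.kyma-project.io/v1alpha1
kind: TestDefinition
metadata:
  name: test-example
  labels:
    component: rf-service        # matched by the ClusterTestSuite selector below
spec:
  template:
    spec:
      serviceAccountName: octopus-sa
      restartPolicy: Never
      containers:
        - name: rf-service
          image: devopsspiral/rf-service:latest   # assumed image name
          args:
            - "--fetcher"
            - "GitFetcher"
            - "--GitFetcher-url"
            - "https://github.com/devopsspiral/KubeLibrary.git"
            - "--GitFetcher-branch"
            - "master"
            - "--GitFetcher-path"
            - "testcases/"
            - "--publisher"
            - "LocalPublisher"
            - "--LocalPublisher-path"
            - "somecontext"
            - "-i"
            - "octopus"
          env:
            # parameterized test variables consumed by the test suites (names illustrative)
            - name: KLIB_POD_NAMESPACE
              value: default
```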
Briefly, I'm telling rf-service to take tests from the KubeLibrary repository (branch master) at the path testcases/ and publish the results locally in the directory somecontext - the actual path doesn't really matter in this case. The execution should only include tests with the tag octopus. All the environment variables are just parameterized test variables; this is an attempt to make the tests more generic and possibly reuse the same tests with different test targets, i.e. different services.
You might have noticed the serviceAccountName: octopus-sa part; this is needed so that the test container has enough privileges to talk to different parts of the Kubernetes API, and what it needs depends on what you really want to test. Below I'm showing the resources that need to be applied to give it cluster-admin privileges.
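A minimal sketch of serviceaccount.yaml - the namespace and binding name are assumptions, and in a real setup you would want to scope the permissions down:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: octopus-sa
  namespace: default
---
# bind the service account to the built-in cluster-admin role
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: octopus-sa-cluster-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: octopus-sa
    namespace: default
```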
The second part that needs to be applied is the ClusterTestSuite, which represents how the tests should actually be executed. Tests are matched by label, which again provides a convenient way of grouping tests. Other parameters are count - how many times a test should be repeated, maxRetries - how many times it can be retried if it fails, and concurrency - how many tests can run in parallel. In our case we only have one test, so any changes here take no effect, but when you are running a lot of tests (TestDefinitions) it can save a lot of time.
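A sketch of such a suite - the selector field names follow my reading of the octopus CRD, so treat them as assumptions to verify against its documentation:

```yaml
apiVersion: testing.kyma-project.io/v1alpha1
kind: ClusterTestSuite
metadata:
  name: testsuite-selected-by-labels
spec:
  count: 1
  maxRetries: 1
  concurrency: 1
  selectors:
    matchLabelExpressions:
      - component=rf-service    # selects the TestDefinition labeled above
```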
To run the tests, just apply all the files using kubectl, starting with serviceaccount.yaml. After a couple of seconds you should see a new pod being created, and inside of it there will be rf-service running the KubeLibrary tests from within the cluster. You should see all the results in the container logs; to view them just use a regular logs lookup: kubectl logs oct-tp-testsuite-selected-by-labels-test-example-0.
The test execution status is kept in the ClusterTestSuite resource; you can use it to view the resulting status, start time, completion time and more. This allows getting an overview of multiple suite executions and keeping records of them.
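For example:

```bash
kubectl get clustertestsuite testsuite-selected-by-labels -o yaml
```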
Conclusion
In my opinion octopus is a very interesting project and might play an important role wherever Kubernetes is used, by filling the testing gap. As someone said, YAML became the k8s programming language, and like every programming language it needs a holistic approach to testing. Looking at what octopus can already do and at the proposals seen in the project issues, it seems it can be a good step in that direction.