Let’s deploy Cilium for testing with our Golang web server in the example below. We will need a Kubernetes cluster to deploy Cilium on. One of the easiest ways we have found to deploy clusters for local testing is KIND, which stands for Kubernetes in Docker. It allows us to create a cluster with a YAML configuration file and then, using Helm, deploy Cilium to that cluster.
The KIND configuration for a local Cilium deployment needs to specify the following (a sketch of the full file follows this list):

- kind: Cluster specifies that we are configuring a KIND cluster.
- apiVersion specifies the version of KIND’s config format.
- nodes is the list of nodes in the cluster:
  - One control plane node
  - Worker node 1
  - Worker node 2
  - Worker node 3
- networking holds KIND’s configuration options for networking.
- disableDefaultCNI: true disables the default networking option so that we can deploy Cilium in its place.
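A minimal sketch of that file, assuming it is saved as kind-config.yaml (the file name is our choice; the apiVersion shown is the one current KIND releases expect):

```yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
- role: worker
- role: worker
networking:
  # Disable KIND's default CNI so Cilium can be deployed in its place
  disableDefaultCNI: true
```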
With the KIND cluster configuration YAML, we can use KIND to create the cluster with the following command. If this is the first time you’re running it, it will take some time to download the Docker images for the worker and control plane nodes.
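Assuming the configuration above was saved as kind-config.yaml, the command looks like this:

```bash
kind create cluster --config=kind-config.yaml
```

A cluster name can be added with the --name flag if you do not want the default.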
The cluster nodes will remain in the NotReady state until Cilium deploys the network. This is normal behavior for the cluster.
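You can confirm this with kubectl; until a CNI is running, each node reports NotReady:

```bash
kubectl get nodes
```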
Now that our cluster is running locally, we can begin installing Cilium using Helm, a Kubernetes deployment tool.
According to its documentation, Helm is the preferred way to install Cilium. First, we need to add the Helm repo for Cilium. Optionally, you can pre-download the Cilium Docker images and instruct KIND to load them into the cluster nodes.
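A sketch of those steps follows; the version tag is an example and should match the Cilium release you plan to install:

```bash
# Add the Cilium Helm repository
helm repo add cilium https://helm.cilium.io/

# Optional: pre-pull the Cilium agent image and load it into the KIND nodes
# (other Cilium images, such as the operator, can be preloaded the same way)
docker pull quay.io/cilium/cilium:v1.9.1
kind load docker-image quay.io/cilium/cilium:v1.9.1
```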
Now that the prerequisites for Cilium are complete, we can install it in our cluster with Helm. There are many configuration options for Cilium, and Helm sets them with --set NAME=VALUE.
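A minimal install might look like the following sketch; the two --set values come from Cilium’s documented guidance for KIND clusters, and the version pin is an example:

```bash
helm install cilium cilium/cilium --version 1.9.1 \
  --namespace kube-system \
  --set image.pullPolicy=IfNotPresent \
  --set ipam.mode=kubernetes
```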
Cilium installs several pieces in the cluster: the agent, the client, the operator, and the cilium-cni plugin:
Agent
The Cilium agent runs on each node in the cluster. The agent accepts configuration through the Kubernetes APIs that describes networking, service load balancing, network policies, and visibility and monitoring requirements.
Client (CLI)
The Cilium CLI client (cilium) is a command-line tool installed along with the Cilium agent. It interacts with the REST API of the agent on the same node. The CLI allows developers to inspect the state and status of the local agent. It also provides tooling to access the eBPF maps and validate their state directly.
Operator
The operator is responsible for duties that should be handled once per cluster rather than once per node.
CNI Plugin
The CNI plugin (cilium-cni) interacts with the node’s Cilium API to trigger the configuration that provides networking, load balancing, and network policies for pods.
We can observe all these components being deployed in the cluster with the kubectl -n kube-system get pods --watch command:
Now that we have deployed Cilium, we can run the Cilium connectivity check to ensure it is running correctly.
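One way to run the check is to apply the connectivity-check manifest that ships in the Cilium repository; pin the branch in the URL to the Cilium version you installed:

```bash
kubectl apply -f https://raw.githubusercontent.com/cilium/cilium/v1.9/examples/kubernetes/connectivity-check/connectivity-check.yaml
```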
The connectivity test will deploy a series of Kubernetes deployments that exercise various connectivity paths. Connectivity paths come with and without service load balancing and in various network policy combinations. The pod name indicates the connectivity variant, and the readiness and liveness gates indicate the success or failure of the test.
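Watching the pods shows the checks’ progress; every connectivity-check pod should eventually report Ready:

```bash
kubectl get pods --watch
```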