Using the same KIND cluster from the Cilium install, let's deploy the Postgres database (database.yaml) with the following YAML and kubectl:
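The database.yaml manifest itself is not reproduced here; the following is a hypothetical sketch, inferred from the objects the apply command reports (service/postgres, configmap/postgres-config, statefulset.apps/postgres), the app=postgres label used later, and the default Postgres port 5432. The image tag and environment values are assumptions.

```yaml
# Sketch of database.yaml (reconstructed; image and env values are assumed)
apiVersion: v1
kind: Service
metadata:
  name: postgres
spec:
  selector:
    app: postgres
  ports:
  - port: 5432
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: postgres-config
data:
  POSTGRES_DB: database            # assumed value
  POSTGRES_USER: postgres          # assumed value
  POSTGRES_PASSWORD: mysecretpassword  # assumed value
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgres
spec:
  serviceName: postgres
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
      - name: postgres
        image: postgres:13         # image version assumed
        ports:
        - containerPort: 5432
        envFrom:
        - configMapRef:
            name: postgres-config
```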
$ kubectl apply -f database.yaml
service/postgres created
configmap/postgres-config created
statefulset.apps/postgres created

Here we deploy our web server as a Kubernetes deployment (web.yaml) to our KIND cluster:
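The web.yaml manifest is likewise not shown; a minimal sketch consistent with the deployment name (app), the app=app label, and the server listening on port 8080 might look like this. The container image name is a placeholder.

```yaml
# Sketch of web.yaml (reconstructed; image name is a placeholder)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app
  template:
    metadata:
      labels:
        app: app
    spec:
      containers:
      - name: app
        image: web-server:latest   # placeholder image
        ports:
        - containerPort: 8080
```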
$ kubectl apply -f web.yaml
deployment.apps/app created

To run connectivity tests inside the cluster network, we will deploy and use a dnsutils pod (dnsutils.yaml) that has basic networking tools like ping and curl:
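A sketch of dnsutils.yaml follows; the image shown is the one commonly used for this purpose in the Kubernetes DNS-debugging documentation, but the text does not confirm which image the original manifest uses.

```yaml
# Sketch of dnsutils.yaml (image choice is an assumption)
apiVersion: v1
kind: Pod
metadata:
  name: dnsutils
spec:
  containers:
  - name: dnsutils
    image: registry.k8s.io/e2e-test-images/jessie-dnsutils:1.3
    command: ["sleep", "infinity"]
```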
$ kubectl apply -f dnsutils.yaml
pod/dnsutils created

Since we are not deploying a service with an ingress, we can use kubectl port-forward to test connectivity to our web server:
$ kubectl port-forward app-5878d69796-j889q 8080:8080

Now from our local terminal, we can reach our API:
$ curl localhost:8080/
Hello
$ curl localhost:8080/healthz
Healthy
$ curl localhost:8080/data
Database Connected

Let's test connectivity to our web server inside the cluster from other pods. To do that, we need to get the IP address of our web server pod:
$ kubectl get pods -l app=app -o wide
NAME                   READY   STATUS    RESTARTS   AGE   IP            NODE
app-5878d69796-j889q   1/1     Running   0          87m   10.244.2.21   kind-worker3

Now we can test layer 4 and layer 7 connectivity to the web server from the dnsutils pod:
$ kubectl exec dnsutils -- nc -z -vv 10.244.2.21 8080
10.244.2.21 (10.244.2.21:8080) open
sent 0, rcvd 0

From the dnsutils pod, we can test layer 7 HTTP API access:
$ kubectl exec dnsutils -- wget -qO- 10.244.2.21:8080/
Hello
$ kubectl exec dnsutils -- wget -qO- 10.244.2.21:8080/data
Database Connected
$ kubectl exec dnsutils -- wget -qO- 10.244.2.21:8080/healthz
Healthy

We can also test this on the database pod. First, we have to retrieve the IP address of the database pod, 10.244.2.25. We can use kubectl with a combination of labels and options to get this information:
$ kubectl get pods -l app=postgres -o wide
NAME         READY   STATUS    RESTARTS   AGE   IP            NODE
postgres-0   1/1     Running   0          98m   10.244.2.25   kind-worker

Again, let's use the dnsutils pod to test connectivity to the Postgres database over its default port, 5432:
$ kubectl exec dnsutils -- nc -z -vv 10.244.2.25 5432
10.244.2.25 (10.244.2.25:5432) open
sent 0, rcvd 0

The port is open for all to use since no network policies are in place. Now let's restrict this with a Cilium network policy. The following commands deploy network policies so that we can test secure network connectivity. Let's first restrict access to the database pod to only the web server. Apply the network policy (layer_3_net_pol.yaml) that allows traffic only from the web server pod to the database:
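The layer_3_net_pol.yaml file is not reproduced in the text; a sketch reconstructed from the kubectl describe output shown below, using the CiliumNetworkPolicy v2 field names, would look roughly like this:

```yaml
# Sketch of layer_3_net_pol.yaml, reconstructed from the describe output
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: l3-rule-app-to-db
spec:
  endpointSelector:
    matchLabels:
      app: postgres        # the policy applies to the database pod
  ingress:
  - fromEndpoints:
    - matchLabels:
        app: app           # only the web server may connect
```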
$ kubectl apply -f layer_3_net_pol.yaml
ciliumnetworkpolicy.cilium.io/l3-rule-app-to-db created

Deploying Cilium objects creates resources that can be retrieved just like pods with kubectl. With kubectl describe ciliumnetworkpolicies.cilium.io l3-rule-app-to-db, we can see all the information about the rule deployed via the YAML:
$ kubectl describe ciliumnetworkpolicies.cilium.io l3-rule-app-to-db
Name:         l3-rule-app-to-db
Namespace:    default
Labels:       <none>
Annotations:  <none>
API Version:  cilium.io/v2
Kind:         CiliumNetworkPolicy
Metadata:
  Creation Timestamp:  2023-07-25T04:48:10Z
  Generation:          1
  Resource Version:    597892
  UID:                 61b41c9d-eba0-4aa1-96da-cf534637cbcd
Spec:
  Endpoint Selector:
    Match Labels:
      App:  postgres
  Ingress:
    From Endpoints:
      Match Labels:
        App:  app
Events:  <none>

With the network policy applied, the dnsutils pod can no longer reach the database pod; we can see this in the timeout trying to reach the DB port from the dnsutils pod:
$ kubectl exec dnsutils -- nc -z -vv -w 5 10.244.2.25 5432
nc: 10.244.2.25 (10.244.2.25:5432): Operation timed out
sent 0, rcvd 0
command terminated with exit code 1

The web server pod, however, can still reach the database pod: the /data route connects the web server to the database, and the network policy allows it:
$ kubectl exec dnsutils -- wget -qO- 10.244.2.21:8080/data
Database Connected
$ curl localhost:8080/data
Database Connected

Now let's apply the layer 7 policy. Cilium is layer 7 aware, so we can block or allow specific requests on HTTP URI paths. In our example policy, we allow HTTP GETs on / and /data but not on /healthz; let's test that:
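As with the layer 3 policy, the layer_7_netpol.yml file can be sketched from the kubectl describe output that follows; note that it is an egress rule applied to the app-labeled pods:

```yaml
# Sketch of layer_7_netpol.yml, reconstructed from the describe output
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: l7-rule
spec:
  endpointSelector:
    matchLabels:
      app: app
  egress:
  - toPorts:
    - ports:
      - port: "8080"
        protocol: TCP
      rules:
        http:
        - method: GET
          path: "/"
        - method: GET
          path: "/data"
```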
$ kubectl apply -f layer_7_netpol.yml
ciliumnetworkpolicy.cilium.io/l7-rule created

We can see the policy applied just like any other Kubernetes object in the API:
$ kubectl get ciliumnetworkpolicies.cilium.io
NAME      AGE
l7-rule   6m54s
$ kubectl describe ciliumnetworkpolicies.cilium.io l7-rule
Name:         l7-rule
Namespace:    default
Labels:       <none>
Annotations:
API Version:  cilium.io/v2
Kind:         CiliumNetworkPolicy
Metadata:
  Creation Timestamp:  2021-01-10T00:49:34Z
  Generation:          1
  Managed Fields:
    API Version:  cilium.io/v2
    Fields Type:  FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .:
          f:kubectl.kubernetes.io/last-applied-configuration:
      f:spec:
        .:
        f:egress:
        f:endpointSelector:
          .:
          f:matchLabels:
            .:
            f:app:
    Manager:         kubectl
    Operation:       Update
    Time:            2021-01-10T00:49:34Z
  Resource Version:  43869
  Self Link:         /apis/cilium.io/v2/namespaces/default/ciliumnetworkpolicies/l7-rule
  UID:               0162c16e-dd55-4020-83b9-464bb625b164
Spec:
  Egress:
    To Ports:
      Ports:
        Port:      8080
        Protocol:  TCP
      Rules:
        Http:
          Method:  GET
          Path:    /
          Method:  GET
          Path:    /data
  Endpoint Selector:
    Match Labels:
      App:  app
Events:  <none>

As we can see, / and /data are available but /healthz is not, precisely what we expect from the network policy:
$ kubectl exec dnsutils -- wget -qO- 10.244.2.21:8080/data
Database Connected
$ kubectl exec dnsutils -- wget -qO- 10.244.2.21:8080/
Hello
$ kubectl exec dnsutils -- wget -qO- -T 5 10.244.2.21:8080/healthz
wget: error getting response
command terminated with exit code 1

These small examples show how powerfully Cilium network policies can enforce network security inside the cluster. We highly recommend that administrators select a CNI that supports network policies and require developers to use them. Network policies are namespaced, and if teams have similar setups, cluster administrators can and should enforce that developers define network policies for added security.
We used two aspects of the Kubernetes API, labels and selectors; in our next section, we will provide more examples of how they are used inside a cluster.