Using declarative configuration in Kubernetes
In my previous post I deployed a sample app to my local Kubernetes cluster (run using Rancher Desktop) using imperative commands. That is the easiest way to get started, using the kubectl CLI to make changes.
Note: If you have come here from that post, you may want to delete those resources in your local test Kubernetes cluster (Deployments, Services, and so on) since this post assumes your local cluster is empty. You can use kubectl get all --namespace=dev to see what is already running. If - careful! - you want to quickly delete all resources, you can delete the entire dev namespace using kubectl delete namespace dev.
The problem with imperative commands is that changes are hard to track: you can only see the current, live state, and there’s no audit trail. It’s recommended to use a declarative configuration instead. That lets you keep the manifest(s) in source control and so easily keep track of changes.
A declarative configuration uses a manifest file. Those definitions are then applied to the cluster. The cluster computes the difference between the desired state and the current state, and adjusts itself accordingly. I’ll write my manifests in YAML (which stands for “YAML Ain’t Markup Language”).
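To make the contrast concrete, here is a rough sketch (the imperative command is illustrative, not necessarily the exact one from the previous post; the kubernetes.yaml file name is the one used later in this post):

# Imperative: issue commands that change the cluster step by step
kubectl create deployment testing-nginx --image=testing-nginx:latest --namespace=dev

# Declarative: describe the desired state in a file, let Kubernetes reconcile
kubectl apply -f kubernetes.yaml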
But just before that, a quick recap of the sample app from the prior post. It’s the public nginx image but slightly modified to use a custom index.html file. I made a new directory and created these two files in it:
index.html
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <title>Hello world</title>
</head>
<body>
    <h1>Hello world</h1>
</body>
</html>
Dockerfile
FROM nginx:alpine
COPY index.html /usr/share/nginx/html/index.html
I’ll now need to make a new image of my sample application. For Rancher Desktop, that image needs to be added to the Kubernetes (k8s.io) containerd namespace so the cluster can use it. Rancher Desktop comes with the open source nerdctl instead of docker. So I’ll run this command from the directory with my index.html and Dockerfile in:
$ nerdctl --namespace=k8s.io build -t testing-nginx .
Note: If you get the message “Rancher Desktop is not running. Please start Rancher Desktop to use nerdctl”, you will of course need to start it first and give it a minute to run.
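To confirm the image now exists in that namespace, you can list the images nerdctl knows about there (the grep is just to filter the output):

$ nerdctl --namespace=k8s.io images | grep testing-nginx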
Local hostname?
I could serve it from localhost and then use a custom port like localhost:8080. However it can get complicated trying to remember which port belongs to which app. I prefer to use a hostname per app. Unfortunately you can’t have wildcards in the local /etc/hosts file. And you can run into problems getting a browser to trust a self-signed *.localhost certificate. Chrome (for example) won’t trust a wildcard certificate for a TLD (in this case .localhost). So that means I can’t have my-app.localhost and create a self-signed certificate for that. Hmm. I need an additional sub-domain. I’ll use testing-nginx.k8s.localhost.
You can pick any domain to run your sample app on, however don’t use a real TLD (.com, .net, .dev, and so on). If you do, your browser and/or Kubernetes may try to resolve that hostname out on the real internet instead of locally.
Self-signed SSL certificate
I’ll serve my sample app locally at testing-nginx.k8s.localhost, so I’ll need to create an SSL certificate for it. On a Mac I can use openssl. I’ll set this one to expire in one year, to avoid Chrome complaining about it expiring too far in the future. It will be a wildcard certificate, to save having to create a new certificate for every local Kubernetes app. Adjust your command accordingly:
openssl req \
    -newkey rsa:2048 \
    -x509 \
    -nodes \
    -keyout /usr/local/etc/ssl/private/wildcard-k8s-localhost-self-signed.key \
    -new \
    -out /usr/local/etc/ssl/certs/wildcard-k8s-localhost-self-signed.crt \
    -subj /CN=*.k8s.localhost \
    -reqexts SAN \
    -extensions SAN \
    -config <(cat /System/Library/OpenSSL/openssl.cnf \
        <(printf '[SAN]\nsubjectAltName=DNS:*.k8s.localhost')) \
    -sha256 \
    -days 365
Browsers like Chrome won’t trust that though, as it’s self-signed. So since I trust it, I’ll add it to my trusted ones:
sudo security add-trusted-cert -d -r trustRoot -k /Library/Keychains/System.keychain /usr/local/etc/ssl/certs/wildcard-k8s-localhost-self-signed.crt
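If you want to sanity-check the certificate you just created (its wildcard subject and its expiry dates, say), openssl can print those details:

$ openssl x509 -noout -subject -dates \
    -in /usr/local/etc/ssl/certs/wildcard-k8s-localhost-self-signed.crt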
I’ll also need to add that certificate to Kubernetes so it can serve it. I could create it as a secret using kubectl, like this:
kubectl create secret tls wildcard-k8s-localhost-self-signed-crt \
    --cert=/usr/local/etc/ssl/certs/wildcard-k8s-localhost-self-signed.crt \
    --key=/usr/local/etc/ssl/private/wildcard-k8s-localhost-self-signed.key \
    --namespace=dev
… however in this post I’m using a declarative configuration, putting my resources in a manifest file (see managing secrets using a configuration file). So I won’t do it like that.
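That said, the imperative command is still handy for generating the YAML: adding --dry-run=client -o yaml prints a Secret manifest (with the base64-encoded values filled in) without creating anything in the cluster, and you can paste the output into your manifest file:

kubectl create secret tls wildcard-k8s-localhost-self-signed-crt \
    --cert=/usr/local/etc/ssl/certs/wildcard-k8s-localhost-self-signed.crt \
    --key=/usr/local/etc/ssl/private/wildcard-k8s-localhost-self-signed.key \
    --namespace=dev \
    --dry-run=client -o yaml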
Manifest
This is an example Kubernetes manifest that creates a Namespace, Deployment, Service, Ingress and Secret.
I didn’t use an Ingress in the prior post (using imperative commands) as that post was already getting quite long. An Ingress lets you expose HTTP and HTTPS routes from outside the cluster to services within the cluster. It allows you to direct requests to the appropriate backend service based on the path and/or hostname, and to terminate SSL (I’ll do that later on). I’m using Rancher Desktop for my local Kubernetes cluster, and that comes with Traefik installed and enabled by default. You just need to provide the Ingress rules.
This manifest is shown all in one file. You may prefer to use a separate manifest file for each resource (one for the Deployment, one for the Service, and so on).
I’ve created my resources in a dev namespace. You don’t have to use a namespace; resources will then go in the default namespace. However namespaces can help to subdivide a cluster. If you do use a namespace, make sure to use the --namespace=its-name-here flag in subsequent kubectl commands, else the resource won’t be found (kubectl will look for it in the default namespace).
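As an aside, if you get tired of typing that flag, you can make dev the default namespace for your current kubectl context (optional; this post keeps the explicit flag for clarity):

kubectl config set-context --current --namespace=dev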
# Namespace
apiVersion: v1
kind: Namespace
metadata:
  name: dev
  labels:
    name: dev
---
# Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: testing-nginx
  labels:
    app: testing-nginx
  namespace: dev
spec:
  replicas: 1 # the number of pods
  selector:
    matchLabels:
      app: testing-nginx
  template:
    metadata:
      labels:
        app: testing-nginx
    spec:
      containers:
      - name: testing-nginx
        image: testing-nginx:latest
        # note: when running locally with a locally built image, we *don't* want to
        # pull from a registry, as that would result in "ErrImagePull"
        imagePullPolicy: Never
        resources:
          limits:
            memory: "128Mi"
            # 200m = 200 millicores (a fifth of one CPU)
            cpu: "200m"
        ports:
        # note: containerPort is informational, but it's handy to document the port here
        - containerPort: 80
---
# Service
apiVersion: v1
kind: Service
metadata:
  name: testing-nginx-service
  namespace: dev
spec:
  selector:
    app: testing-nginx
  ports:
  - port: 8080
    # targetPort is the port a pod is listening on (normally specified by the image being run)
    targetPort: 80
    protocol: TCP
---
# Ingress (using the default Traefik that Rancher Desktop comes with)
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: testing-nginx
  namespace: dev
  labels:
    name: testing-nginx
  annotations:
    traefik.ingress.kubernetes.io/router.entrypoints: websecure
spec:
  rules:
  - host: testing-nginx.k8s.localhost
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: testing-nginx-service
            port:
              # the port exposed by the service
              number: 8080
  tls:
  - hosts:
    - testing-nginx.k8s.localhost
    secretName: wildcard-k8s-localhost-self-signed-crt
---
# Secret
apiVersion: v1
kind: Secret
type: kubernetes.io/tls
metadata:
  name: wildcard-k8s-localhost-self-signed-crt
  namespace: dev
data:
  tls.crt: # base64 -i /usr/local/etc/ssl/certs/wildcard-k8s-localhost-self-signed.crt
  tls.key: # base64 -i /usr/local/etc/ssl/private/wildcard-k8s-localhost-self-signed.key
As mentioned above, in place of the comments for the values of tls.crt and tls.key, you would put the output of those commands. They are the self-signed certificate and key created earlier, base64-encoded. They will both be very long strings, like LS0tLS1CRUdJTiBQUklW....
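On a Mac, for example, you could generate each value and copy it straight to the clipboard ready for pasting:

base64 -i /usr/local/etc/ssl/certs/wildcard-k8s-localhost-self-signed.crt | pbcopy
base64 -i /usr/local/etc/ssl/private/wildcard-k8s-localhost-self-signed.key | pbcopy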
Armed with that YAML, I’ll apply it to the cluster. This is what makes the approach declarative: I declare what I want the cluster to be like (the desired state), and Kubernetes works to match it:
$ kubectl apply -f kubernetes.yaml
If you had each of those definitions in its own separate file, you could instead apply all those manifests in the directory like this:
$ kubectl apply -f kubernetes/
If you wanted to see the changes, apply the changes, and then confirm what has changed, you could instead run these commands:
$ kubectl diff -f kubernetes/
$ kubectl apply -f kubernetes/
$ kubectl get -f kubernetes/ -o yaml
However you applied the YAML file(s) to your cluster, that should have created the resources defined within. You should see:
namespace/dev created
deployment.apps/testing-nginx created
service/testing-nginx-service created
ingress.networking.k8s.io/testing-nginx created
secret/wildcard-k8s-localhost-self-signed-crt created
Let’s check that the pods we requested (only one) are now running:
$ kubectl get pods --namespace=dev
NAME READY STATUS RESTARTS AGE
testing-nginx-6bb495ffcc-46hf6 1/1 Running 0 19s
Great! If yours are not, the STATUS column should indicate why not. You can also look at kubectl get events --namespace=dev.
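If a pod is stuck (in ErrImagePull or CrashLoopBackOff, say), describing it usually reveals the cause in the Events section at the bottom; substitute your own pod’s generated name:

$ kubectl describe pod testing-nginx-6bb495ffcc-46hf6 --namespace=dev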
Next, let’s check the ingress rules:
$ kubectl get ing --namespace=dev
NAME CLASS HOSTS ADDRESS PORTS AGE
testing-nginx traefik testing-nginx.k8s.localhost 192.168.1.104 80, 443 94s
That looks good. That’s the hostname I specified in the YAML file. Traefik exposes ports 80 and 443 to us. I’ll be using HTTPS, so 443.
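The ADDRESS column should be the IP of your local Kubernetes node (Rancher Desktop runs a single node). You can cross-check it against the node’s INTERNAL-IP:

$ kubectl get nodes -o wide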
Next I’ll add a line to my /etc/hosts file so that I can type in the hostname and have it resolve to that IP:
sudo vi /etc/hosts
Your file may already contain lots of lines. Be careful not to overwrite them. Add a new line with the local Kubernetes node IP (from above), followed by a space, followed by your chosen local hostname:
192.168.1.104 testing-nginx.k8s.localhost
Now if I save that, open a new browser tab, and type it in: https://testing-nginx.k8s.localhost/
…
It works! 🚀
Notice how there is no SSL error shown in Chrome. That’s because I had previously created a self-signed certificate covering *.k8s.localhost, told my browser to trust it, and told the Traefik Ingress to use that certificate when serving the request. If there had been no certificate returned, or a self-signed one it did not trust, or the Traefik default certificate, Chrome would have (correctly) complained and shown an SSL warning. If you get such a warning, take a look at what might be wrong. Is the certificate valid? Is the default Traefik one being served instead?
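A quick way to check exactly which certificate is being served (for example, to spot the default Traefik certificate being returned) is to inspect it with openssl:

$ openssl s_client -connect testing-nginx.k8s.localhost:443 \
    -servername testing-nginx.k8s.localhost </dev/null 2>/dev/null | \
    openssl x509 -noout -subject -issuer

For the self-signed certificate created earlier, the subject should mention CN=*.k8s.localhost.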
Another nice thing about this is there is no need to append an awkward additional port, like 8080 or 8443, or to add port forwarding rules, whether using kubectl or in the Rancher Desktop UI.
Alternative approach
I put my self-signed certificate in the dev namespace. However, in theory I could have instead replaced the default Traefik certificate (the one that would be used if TLS was needed but no certificate was provided). In that case I would still reference the secret as above, however it would live in Traefik’s namespace, which is kube-system. That way I’d avoid specifying it per hostname. However, the certificate would need to be a wildcard covering all the hostnames the cluster could serve.
The YAML to do that would probably look like this (I haven’t tried it):
apiVersion: traefik.containo.us/v1alpha1
kind: TLSStore
metadata:
  name: default
  namespace: kube-system
spec:
  defaultCertificate:
    secretName: wildcard-k8s-localhost-self-signed-crt