K8s primer - Part I

Tuesday, 08 November 2022

This primer is aimed at developers who mainly work on product features but also need to understand the basics of Kubernetes at a practical level, so that they know how their applications are deployed and managed.

Requirements: Docker and minikube.

If you are already familiar with building containerized applications, you can skip directly to the section “We need some kind of machine/server/node where K8s can run”.

Second part: K8s Primer - Part II

Let's start

We want to deploy a Go application using Kubernetes (K8s). The application is a simple HTTP server that listens on port 7070 and responds to any request with a “Hello world” followed by a random integer.

File ./main.go

package main

import (
	"fmt"
	"log"
	"math/rand"
	"net/http"
	"time"
)

func main() {
	// Seed the RNG and pick one random ID at startup, so every response
	// from this process carries the same number.
	rand.Seed(time.Now().UnixNano())
	id := rand.Int()

	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintf(w, "Hello world (ID: %d)\n", id)
	})

	fmt.Printf("Serving on http://localhost:7070 (ID: %d)\n", id)
	log.Fatal(http.ListenAndServe(":7070", nil))
}

We can run this HTTP server locally on our personal machine:

$ go build main.go && ./main # Or just: go run main.go
Serving on http://localhost:7070 (ID: 3628456987918824506)

And we can query the server using curl:

$ curl localhost:7070
Hello world (ID: 3628456987918824506)

So far so good.

K8s runs containerized applications

We cannot just deploy our Go app by “handing over” the compiled binary main to K8s. K8s runs applications within containers. We need to containerize our Go app; I’ll use Docker for that:

File ./Dockerfile

# Build stage: compile the Go binary.
FROM golang:1.19-alpine3.15 AS build
WORKDIR /app
COPY main.go ./
RUN go build -o /app/HelloWorld main.go

# Final stage: a minimal image that only contains the binary.
FROM alpine:3.15
USER nobody:nobody
COPY --from=build /app/HelloWorld /HelloWorld
EXPOSE 7070

CMD ["/HelloWorld"]

The ./Dockerfile contains all the commands needed to build an image that will contain our application binary. Let’s build the image:

$ docker build -t YOUR-HANDLE/hello-world-go:1.0.0 .

The image is now stored on our personal machine:

$ docker image ls                                                
REPOSITORY                     TAG       IMAGE ID       CREATED       SIZE
YOUR-HANDLE/hello-world-go   1.0.0     a2fe58fdd36f   2 seconds ago    11.6MB

We could run our containerized Go app locally by doing:

$ docker run --rm -p 7070:7070 YOUR-HANDLE/hello-world-go:1.0.0

Serving on http://localhost:7070 (ID: 3888774625775762015)

And we can query our containerized Go app just like before: curl localhost:7070.
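
Since this particular container run printed ID 3888774625775762015 on startup, the response (while that container is still running) echoes the same number:

$ curl localhost:7070
Hello world (ID: 3888774625775762015)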

Let's publish our Go application image

The image we just built is stored on our personal machine, but K8s doesn’t know how to retrieve it from our file system. The standard approach is to make our container image available via a container registry. I’ll use Docker Hub to publish (for free) my Go app image:

$ docker login -u YOUR-HANDLE
$ docker push YOUR-HANDLE/hello-world-go:1.0.0

(docker login will ask for your personal access token to log in to Docker Hub.)

Now our Go app image is publicly available via Docker Hub.
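
To double-check that the image really is public, we can pull it back from any machine that has Docker installed (same handle and tag as above):

$ docker pull YOUR-HANDLE/hello-world-go:1.0.0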

We need some kind of machine/server/node where K8s can run

If we want to run applications on K8s, we first need to figure out where K8s itself can run. We need hardware. We can use a fully managed K8s service like GKE or EKS, or we can use minikube, which is free and open source, to run K8s locally on our machine. I’ll use minikube.

K8s has the notion of clusters. A cluster is a set of nodes. A node can be a physical or a virtual machine. We can run K8s itself and our application on nodes.

Let’s create a local K8s cluster of three (virtual) nodes:

$ minikube start --nodes 3 -p minikube-lab

Nice. We got three nodes. Let’s use kubectl, a CLI that lets us interact with K8s, to ask K8s about the status of our cluster:

$ kubectl get nodes

NAME               STATUS   ROLES           AGE    VERSION
minikube-lab       Ready    control-plane   1h    v1.25.2
minikube-lab-m02   Ready    <none>          58s   v1.25.2
minikube-lab-m03   Ready    <none>          58s   v1.25.2

K8s is telling us that we have three nodes, each of them with a unique name and ready for whatever we may need them for. One of the nodes acts as the control plane of our cluster (i.e., this is where most of the K8s components that manage the cluster run), while the other two are what K8s calls worker nodes (i.e., nodes on which our application can run).
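
If you want to see which node is which for yourself, the role is exposed as a node label; on recent K8s versions the control-plane node carries the node-role.kubernetes.io/control-plane label (label names have changed across K8s versions, so take this as a sketch):

$ kubectl get nodes --show-labels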

Let's deploy our Go app in our local K8s cluster

We’ve got a cluster and we’ve got a containerized application, but we can’t yet deploy the application on the cluster. We first need a Pod. A Pod can be thought of as a wrapper that allows a container to run on K8s. It’s true that we can run containerized apps on K8s, but K8s requires that any container runs inside a Pod.

App within container within Pod
Fig. 1 Go app within container within Pod.

Let’s create our Pod in a declarative way:

File ./k8s/pod.yaml

apiVersion: v1
kind: Pod
metadata:
  name: helloworld-app
  labels:
    app: helloworld-backend
spec:
  containers:
    - image: YOUR-HANDLE/hello-world-go:1.0.0
      name: helloworld-container
      ports:
        - containerPort: 7070
          name: http
          protocol: TCP

This YAML file, also known as a Pod manifest, declares the desired state of our cluster. We are telling K8s:

Hey K8s, create one Pod named helloworld-app that contains one container named helloworld-container, whose image can be pulled from YOUR-HANDLE/hello-world-go:1.0.0 (on Docker Hub), and which listens on port 7070.

Now we are finally ready to deploy our app.

Let's deploy a Pod

We can use kubectl (aliased to k from now on) to submit our ./k8s/pod.yaml manifest to K8s. Behind the curtains a lot of things will happen, but what we care about here is that K8s will find a healthy node in our cluster on which our Pod can run:

$ k apply -f k8s/pod.yaml
pod/helloworld-app created

We can check whether the Pod is in our cluster, but you may want to wait a little bit before running the following command (it may take some time to pull the container image):

$ k get pods -o wide                                                                        
NAME             READY   STATUS    RESTARTS   AGE     IP           NODE               NOMINATED NODE   READINESS GATES
helloworld-app   1/1     Running   0          2m11s   10.244.2.2   minikube-lab-m03   <none>           <none>

Our Pod is there. It’s running on the node minikube-lab-m03 (a worker node), and it even has its own IP address (10.244.2.2). Let’s get more details about our Pod:

$ k describe pod helloworld-app
Name:         helloworld-app
Namespace:    default
Priority:     0
Node:         minikube-lab-m03/192.168.49.4
Start Time:   Fri, 11 Nov 2022 20:41:23 +0100
Labels:       app=helloworld-backend
Annotations:  <none>
Status:       Running
IP:           10.244.2.2
IPs:
  IP:  10.244.2.2
Containers:
  helloworld-container:
    Container ID:   docker://284cf78c1abd23ba1229b493db35c605004f122e0f7e32222fc8340ea6855efe
    Image:          YOUR-HANDLE/hello-world-go:1.0.0
    Image ID:       docker-pullable://YOUR-HANDLE/hello-world-go@sha256:56345565aae8bd519084145e2990ebdeabba12
    Port:           7070/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Fri, 11 Nov 2022 20:41:23 +0100
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vh2vb (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  kube-api-access-vh2vb:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age    From               Message
  ----    ------     ----   ----               -------
  Normal  Scheduled  4m13s  default-scheduler  Successfully assigned default/helloworld-app to minikube-lab-m03
  Normal  Pulled     4m13s  kubelet            Container image "YOUR-HANDLE/hello-world-go:1.0.0" already present on machine
  Normal  Created    4m13s  kubelet            Created container helloworld-container
  Normal  Started    4m13s  kubelet            Started container helloworld-container

That’s a lot of information. Some remarkable stuff: the Pod was scheduled to the worker node minikube-lab-m03, it got the cluster-internal IP 10.244.2.2, it carries the label app=helloworld-backend we declared in the manifest, and the Events section at the bottom tells the story of how the Pod was scheduled and its container created and started.
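
One thing describe doesn’t show is our application’s own output. For that we can read the container’s logs; the ID in the log line is whatever random value this particular Pod picked at startup:

$ k logs helloworld-app
Serving on http://localhost:7070 (ID: 8356456912919624511)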

What can (and cannot) we do with our Pod?

From our local machine we cannot query the Go app running in the Pod. While the Pod has an IP address, the IP is only reachable from within the K8s cluster. Check this out:

$ curl 10.244.2.2:7070 
curl: (28) Failed to connect to 10.244.2.2 port 7070: Operation timed out

Can’t reach the Pod from my local machine. Let’s SSH into one of the nodes of the K8s cluster and try to query our Go app from there:

$ minikube ssh -p minikube-lab -n minikube-lab-m02
Last login: Fri Nov 11 20:00:05 2022 from 192.168.49.1
docker@minikube-lab-m02:~$ curl 10.244.2.2:7070
Hello world (ID: 8356456912919624511)

Our Go app is reachable from within the cluster! But to reach our Pod we need to know its IP address, and as we’ll see later, that’s not ideal. Pods come and go; they may die unexpectedly. And if Pods are ephemeral, so are their IP addresses. If we don’t want to keep track of which IP address corresponds to which Pod, we’ll need to reach Pods in a different way (e.g., by name). But first let’s deploy more stuff.
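
A quick aside before we do: for local debugging, kubectl can temporarily forward a local port to a Pod, so we can curl the app from our own machine without knowing (or caring about) the Pod’s cluster IP. This is a debugging convenience, not how you’d expose an app for real; a minimal sketch:

$ k port-forward pod/helloworld-app 7070:7070
Forwarding from 127.0.0.1:7070 -> 7070

With that running, curl localhost:7070 works again from the local machine (Ctrl+C stops the forwarding).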

Let's deploy two identical Pods

A simple way to (horizontally) scale our Go app would be to deploy many instances of it. What if we try to deploy the same Pod again?

$ k apply -f k8s/pod.yaml      
pod/helloworld-app unchanged

We cannot deploy two identical Pods this way; K8s just reports the existing Pod as unchanged. We could write another manifest (e.g., ./k8s/pod2.yaml) where we give the Pod a different name (e.g., helloworld-app2) and deploy that:

$ git diff --no-index k8s/pod.yaml k8s/pod2.yaml 

diff --git a/pod.yaml b/pod2.yaml
index d04ad55..9e15e0a 100644
--- a/pod.yaml
+++ b/pod2.yaml
@@ -1,7 +1,7 @@
 apiVersion: v1
 kind: Pod
 metadata:
-  name: helloworld-app
+  name: helloworld-app2
   labels:
     app: helloworld-backend
 spec:
$ k apply -f k8s/pod2.yaml
pod/helloworld-app2 created
$ k get pods -o wide
NAME              READY   STATUS    RESTARTS   AGE     IP           NODE               NOMINATED NODE   READINESS GATES
helloworld-app    1/1     Running   0          75m     10.244.2.2   minikube-lab-m03   <none>           <none>
helloworld-app2   1/1     Running   0          1m25s   10.244.1.2   minikube-lab-m02   <none>           <none>

There it is. Now we have two instances of our Go app running in two different Pods. The second Pod has been scheduled to a different node (minikube-lab-m02) than the first one, but that won’t always be the case. Notice as well that our second Pod got a different IP address (10.244.1.2).
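
A quick way to convince ourselves that these really are two independent instances is to curl both Pod IPs from inside the cluster (e.g., from the node we SSHed into earlier); each Pod answers with its own random ID (the second ID below is illustrative):

docker@minikube-lab-m02:~$ curl 10.244.2.2:7070
Hello world (ID: 8356456912919624511)
docker@minikube-lab-m02:~$ curl 10.244.1.2:7070
Hello world (ID: 1226945639842103775)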

Let's deploy two identical Pods (better way)

You almost never deploy Pods using only a Pod manifest. K8s offers a better way to deploy one or more instances of your application (without having to manually create N different Pod manifest files): the K8s Deployment object. A Deployment is a higher-level abstraction than a Pod, and it lets us describe our application’s lifecycle with features such as scaling, zero-downtime deployments and rollbacks, self-healing, etc.

An application could be composed of N identical Pods running on multiple nodes, each of them with its own IP address and name. We don’t want to deal with each of them individually; we should care about the bigger picture (we have one application running), not about internal details (our running application happens to be composed of N Pods).

Let’s create a Deployment manifest to deploy our Go application:

File ./k8s/deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: helloworld-deployment
  labels:
    app: helloworld-backend
spec:
  replicas: 2
  selector:
    matchLabels:
      app: helloworld-backend
  template:
    metadata:
      labels:
        app: helloworld-backend
    spec:
      containers:
      - image: YOUR-HANDLE/hello-world-go:1.0.0
        name: helloworld-container
        ports:
          - containerPort: 7070
            name: http
            protocol: TCP

With this manifest file we are telling K8s:

Hey K8s, create a Deployment named helloworld-deployment that creates two replicated Pods. The Pods should be identical (i.e., same container image and port).

Notice how the part under the spec.template.spec key of the manifest is exactly the same as what we had under the spec key in our Pod manifest (./k8s/pod.yaml). We are, in effect, embedding the Pod spec in the Deployment spec. Let’s submit the Deployment manifest to K8s and see if our Pods are created:

$ k apply -f k8s/deployment.yaml
deployment.apps/helloworld-deployment created

$ k get pods -o wide
NAME                                     READY   STATUS    RESTARTS   AGE     IP           NODE               NOMINATED NODE   READINESS GATES
helloworld-deployment-79fcbf57bc-4zdvg   1/1     Running   0          13s     10.244.2.3   minikube-lab-m03   <none>           <none>
helloworld-deployment-79fcbf57bc-g8ksc   1/1     Running   0          13s     10.244.1.3   minikube-lab-m02   <none>           <none>

We have two Pods. Each of them was scheduled to a different node and got a different cluster-wide IP address. Nice. At this point we no longer need our ./k8s/pod.yaml and ./k8s/pod2.yaml manifest files, so we might as well delete them; ./k8s/deployment.yaml is enough. Let’s see what K8s has to say about our Deployment:

$ k get deployments -o wide
NAME                    READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS             IMAGES                               SELECTOR
helloworld-deployment   2/2     2            2           1m    helloworld-container   YOUR-HANDLE/hello-world-go:1.0.0     app=helloworld-backend

So our Deployment is managing two Pods, as expected. How does our Deployment know which Pods “belong” to it? Via the selector labels. In our Deployment manifest, spec.selector defines the labels a Pod must have to be managed by the Deployment. The label spec.selector.matchLabels.app has to match the label defined in the Pod template (spec.template.metadata.labels.app).
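
We can use the same label selector ourselves: kubectl’s -l flag filters objects by label, so the following lists exactly the Pods our Deployment manages (Pod names and ages will vary):

$ k get pods -l app=helloworld-backend
NAME                                     READY   STATUS    RESTARTS   AGE
helloworld-deployment-79fcbf57bc-4zdvg   1/1     Running   0          5m2s
helloworld-deployment-79fcbf57bc-g8ksc   1/1     Running   0          5m2s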

App within container within Pods within Deployment
Fig. 2 Go app within container within Pods within Deployment.

Deployment's niceties

Pods deployed directly from a Pod manifest, with no controller managing them, are sometimes called bare (or naked) Pods. If you delete a bare Pod (e.g., k delete pod helloworld-app), the Pod is gone forever. The only way to scale bare Pods is manually (as we saw, by creating a duplicated Pod manifest file). That’s why in real-world applications we don’t usually deploy Pods via Pod manifests, but via Deployments. Pods managed by a Deployment get all the benefits of being monitored and managed by K8s. Let’s see this in action.

Run the following command on one terminal:

$ k get pods --watch                  
NAME                                     READY   STATUS    RESTARTS   AGE
helloworld-deployment-79fcbf57bc-8nmnh   1/1     Running   0          11s
helloworld-deployment-79fcbf57bc-ppcb8   1/1     Running   0          19m

The --watch flag allows us to watch for updates in the Pods managed by our Deployment object. On a second terminal, let’s kill one of such Pods:

$ k delete pod helloworld-deployment-79fcbf57bc-ppcb8  
pod "helloworld-deployment-79fcbf57bc-ppcb8" deleted

On the first terminal we see:

$ k get pods --watch                  
NAME                                     READY   STATUS              RESTARTS   AGE
helloworld-deployment-79fcbf57bc-8nmnh   1/1     Running             0          11s
helloworld-deployment-79fcbf57bc-ppcb8   1/1     Running             0          19m
helloworld-deployment-79fcbf57bc-ppcb8   1/1     Terminating         0          19m
helloworld-deployment-79fcbf57bc-mw8gg   0/1     Pending             0          0s
helloworld-deployment-79fcbf57bc-mw8gg   0/1     Pending             0          0s
helloworld-deployment-79fcbf57bc-mw8gg   0/1     ContainerCreating   0          0s
helloworld-deployment-79fcbf57bc-mw8gg   1/1     Running             0          1s
helloworld-deployment-79fcbf57bc-ppcb8   0/1     Terminating         0          19m
helloworld-deployment-79fcbf57bc-ppcb8   0/1     Terminating         0          19m
helloworld-deployment-79fcbf57bc-ppcb8   0/1     Terminating         0          19m

The Pod whose name ends in -ppcb8 is terminated as requested, but not before a new Pod (the one whose name ends in -mw8gg) is created to take its place. Why does this happen? Because our Deployment manifest specifies spec.replicas: 2, so K8s constantly monitors the state of our cluster to make sure it matches the desired state our manifest file declares. In the end, we still have two Pods running (the old -8nmnh and the new -mw8gg):

$ k get pods
NAME                                     READY   STATUS    RESTARTS   AGE
helloworld-deployment-79fcbf57bc-8nmnh   1/1     Running   0          25s
helloworld-deployment-79fcbf57bc-mw8gg   1/1     Running   0          5s
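
Scaling is just as hands-off: we can either change spec.replicas in the manifest and re-apply it, or use the imperative shortcut below (a sketch; the output line may differ slightly by version):

$ k scale deployment helloworld-deployment --replicas=3
deployment.apps/helloworld-deployment scaled

Running k scale deployment helloworld-deployment --replicas=2 brings us back to two Pods.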

Accessing our Go application

K8s Deployments allow us to manage multiple replicas of the same Pod. They also bring other niceties like self-healing (e.g., if a Pod managed by a Deployment dies, K8s takes care of creating a new Pod to substitute the dead one) and auto-scaling (e.g., the number of Pods managed by a Deployment can increase or decrease automatically based on load). But we still can’t do the following:

1. Reach our Go app through a single, stable name instead of tracking each Pod’s ephemeral IP address.
2. Reach our Go app from outside the cluster.

For the first point we need another K8s object, the K8s Service. But that’s enough for today; let’s continue tomorrow in Part II. Let’s shut down our local K8s cluster:

$ minikube stop -p minikube-lab
✋  Stopping node "minikube-lab"  ...
🛑  Powering off "minikube-lab" via SSH ...
✋  Stopping node "minikube-lab-m02"  ...
🛑  Powering off "minikube-lab-m02" via SSH ...
✋  Stopping node "minikube-lab-m03"  ...
🛑  Powering off "minikube-lab-m03" via SSH ...
🛑  3 nodes stopped.

Continues in K8s Primer - Part II
