Hello Kubernauts! Welcome to the “Kubernetes in a nutshell” blog series 🙂
This is the first part, which covers native Kubernetes primitives for managing stateless applications. One of the most common use cases for Kubernetes is to orchestrate and operate stateless services. In Kubernetes, you need a Pod (or a group of Pods, in most cases) to represent a service or application – but there is more to it! We will go beyond a basic Pod and explore other high-level components, namely ReplicaSets and Deployments.
As always, the code is available on GitHub
You will need a Kubernetes cluster to begin with. This could be a simple, single-node local cluster using
Docker for Mac etc. or a managed Kubernetes service from Azure (AKS), Google, AWS etc. To access your Kubernetes cluster, you will need
kubectl, which is pretty easy to install.
e.g. to install
kubectl for Mac, all you need is
curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/darwin/amd64/kubectl && \
  chmod +x ./kubectl && \
  sudo mv ./kubectl /usr/local/bin/kubectl
If you are interested in learning Kubernetes and Containers using Azure, simply create a free account and get going! A good starting point is to use the quickstarts, tutorials and code samples in the documentation to familiarize yourself with the service. I also highly recommend checking out the 50 days Kubernetes Learning Path. Advanced users might want to refer to Kubernetes best practices or watch some of the videos for demos, top features and technical sessions.
Let’s start off by understanding the concept of a Pod. A Pod is the smallest possible abstraction in Kubernetes and it can have one or more containers running within it. These containers share resources (storage, volumes) and can communicate with each other over localhost, since they share the same network namespace.
Create a simple Pod using a YAML file. A Pod is just a Kubernetes resource or object. The YAML file describes its desired state along with some basic information – it is also referred to as a spec (shorthand for specification) or a manifest.
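As a sketch, a minimal Pod manifest of this shape could look like the following (the Pod name kin-stateless-1 and the nginx image match the kubectl output in this post; the exact file lives in the GitHub repo):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: kin-stateless-1
spec:
  containers:
    # a single nginx container; the image tag defaults to "latest"
    - name: nginx
      image: nginx
```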
Use the kubectl apply command to submit the Pod information to Kubernetes.
To keep things simple, the YAML file is being referenced directly from the GitHub repo, but you can also download the file to your local machine and use it in the same way.
$ kubectl apply -f https://raw.githubusercontent.com/abhirockzz/kubernetes-in-a-nutshell/master/stateless-apps/kin-stateless-pod.yaml
pod/kin-stateless-1 created

$ kubectl get pods
NAME              READY   STATUS    RESTARTS   AGE
kin-stateless-1   1/1     Running   0          10s
This should work as expected. Now, let’s delete the Pod and see what happens. For this, we will need to use kubectl delete pod:
$ kubectl delete pod kin-stateless-1
pod "kin-stateless-1" deleted

$ kubectl get pods
No resources found.
… and just like that, the Pod is gone!
For serious applications, you have to take care of the following aspects:
- High availability and resiliency — Ideally, your application should be robust enough to self-heal and remain available in the face of failure, e.g. Pod deletion due to node failure, etc.
- Scalability — What if a single instance of your app (Pod) does not suffice? Wouldn’t you want to run replicas/multiple instances?
Once you have multiple application instances running across the cluster, you will need to think about:
- Scale — Can you count on the underlying platform to handle horizontal scaling automatically?
- Accessing your application — How do clients (internal or external) reach your application, and how is the traffic regulated across multiple instances (Pods)?
- Upgrades — How can you handle application updates in a non-disruptive manner i.e. without downtime?
Enough about problems. Let’s look into some possible solutions!
Although it is possible to create Pods directly, it makes sense to use the higher-level components that Kubernetes provides on top of Pods in order to solve the above-mentioned problems. In simple words, these components (also called Controllers) can create and manage a group of Pods. The following controllers work in the context of Pods and stateless apps: ReplicaSet and Deployment.
There are other controllers such as DaemonSet etc., but they are not relevant to stateless apps, hence they are not discussed here.
A ReplicaSet can be used to ensure that a fixed number of replicas/instances of your application (Pods) are always available. It identifies the group of Pods that it needs to manage with the help of a (user-defined) selector, and orchestrates them (creating or deleting Pods) to maintain the desired instance count.
Here is what a common ReplicaSet spec looks like:
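As a sketch, such a spec could look like this (the replica count and labels are the ones discussed in the rest of this section; the exact file lives in the GitHub repo):

```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: kin-stateless-rs
spec:
  replicas: 2
  selector:
    # the ReplicaSet manages Pods whose labels match this selector
    matchLabels:
      app: kin-stateless-rs
  template:
    metadata:
      # must match selector.matchLabels above
      labels:
        app: kin-stateless-rs
    spec:
      containers:
        - name: nginx
          image: nginx
```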
Let’s create the ReplicaSet:
$ kubectl apply -f https://raw.githubusercontent.com/abhirockzz/kubernetes-in-a-nutshell/master/stateless-apps/kin-stateless-replicaset.yaml
replicaset.apps/kin-stateless-rs created

$ kubectl get replicasets
NAME               DESIRED   CURRENT   READY   AGE
kin-stateless-rs   2         2         2       1m11s

$ kubectl get pods --selector=app=kin-stateless-rs
NAME                     READY   STATUS    RESTARTS   AGE
kin-stateless-rs-zn4p2   1/1     Running   0          13s
kin-stateless-rs-zxp5d   1/1     Running   0          13s
A ReplicaSet object (named kin-stateless-rs) was created along with two Pods (notice that the names of the Pods contain a random alphanumeric string, e.g. kin-stateless-rs-zn4p2).
This was as per what we had supplied in the YAML (spec):
- spec.replicas was set to 2
- selector.matchLabels was set to app: kin-stateless-rs and matched the .spec.template.metadata.labels field in the Pod template
Labels are simple key-value pairs which can be added to objects (such as a Pod, in this case).
We used the --selector flag in the kubectl get command to filter the Pods based on their labels, which in this case was app=kin-stateless-rs.
Try deleting one of the Pods (just like you did in the previous case). Please note that the Pod name will be different in your case, so make sure you use the right one.
$ kubectl delete pod kin-stateless-rs-zxp5d
pod "kin-stateless-rs-zxp5d" deleted

$ kubectl get pods -l=app=kin-stateless-rs
NAME                     READY   STATUS    RESTARTS   AGE
kin-stateless-rs-nghgk   1/1     Running   0          9s
kin-stateless-rs-zn4p2   1/1     Running   0          5m
We still have two Pods! This is because a new Pod (kin-stateless-rs-nghgk, in the output above) was created to satisfy the replica count (two) of the ReplicaSet.
To scale your application horizontally, all you need to do is update the
spec.replicas field in the manifest file and submit it again.
As an exercise, try scaling it up to five replicas and then going back to three.
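If you’d rather not edit the file for this quick experiment, the same exercise can be sketched imperatively with kubectl scale (using the kin-stateless-rs ReplicaSet from this section):

```shell
# scale the ReplicaSet up to five replicas...
kubectl scale replicaset kin-stateless-rs --replicas=5
kubectl get pods -l=app=kin-stateless-rs

# ...and back down to three
kubectl scale replicaset kin-stateless-rs --replicas=3
```

Note that this imperative change will be overwritten the next time you apply the manifest, so editing spec.replicas remains the declarative way to do it.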
So far so good! But this does not solve all the problems. One of them is handling application updates — specifically, in a way that does not require downtime. Kubernetes provides another component which works on top of
ReplicaSets to handle this and more.
A Deployment is an abstraction which manages a
ReplicaSet — recall from the previous section, that a
ReplicaSet manages a group of Pods. In addition to elastic scalability,
Deployments provide other useful features that allow you to manage updates, rollback to a previous state, pause and resume the deployment process, etc. Let’s explore these.
A Deployment borrows the following features from its underlying ReplicaSet:
- Resiliency — If a Pod crashes, it is automatically restarted, thanks to the ReplicaSet. The only exception is when you set the restartPolicy in the Pod spec to Never.
- Scaling — This is also taken care of by the underlying ReplicaSet.
This is what a typical Deployment spec looks like:
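As a sketch, it could look like this (the name, replica count and labels match the kubectl output that follows; the exact file lives in the GitHub repo):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kin-stateless-dp
spec:
  replicas: 2
  selector:
    matchLabels:
      app: kin-stateless-dp
  template:
    metadata:
      labels:
        app: kin-stateless-dp
    spec:
      containers:
        # changing this image (e.g. to nginx:1.16.0) triggers a rollout
        - name: nginx
          image: nginx
```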
Let’s create the Deployment and see which Kubernetes objects get created:
$ kubectl apply -f https://raw.githubusercontent.com/abhirockzz/kubernetes-in-a-nutshell/master/stateless-apps/kin-stateless-deployment.yaml
deployment.apps/kin-stateless-dp created

$ kubectl get deployment kin-stateless-dp
NAME               READY   UP-TO-DATE   AVAILABLE   AGE
kin-stateless-dp   2/2     2            2           10s

$ kubectl get replicasets
NAME                         DESIRED   CURRENT   READY   AGE
kin-stateless-dp-8f9b4d456   2         2         2       12s

$ kubectl get pods -l=app=kin-stateless-dp
NAME                               READY   STATUS    RESTARTS   AGE
kin-stateless-dp-8f9b4d456-csskb   1/1     Running   0          14s
kin-stateless-dp-8f9b4d456-hhrj7   1/1     Running   0          14s
A Deployment (kin-stateless-dp) got created along with the ReplicaSet and (two) Pods as specified in the spec.replicas field. Great! Now, let’s peek into a Pod to see which nginx version we’re using — please note that the Pod name will be different in your case, so make sure you use the right one.
$ kubectl exec kin-stateless-dp-8f9b4d456-csskb -- nginx -v
nginx version: nginx/1.17.3
This is because the latest tag of the nginx image was picked up from DockerHub, which happens to be v1.17.3 at the time of writing.
What is kubectl exec? In simple words, it allows you to execute a command in a specific container within a Pod. In this case, our Pod has a single container, so we don’t need to specify one.
You can trigger an update to an existing Deployment by modifying the template section of the Pod spec — a common example being updating to a newer version (label) of a container image. You can specify the update strategy using spec.strategy.type in the Deployment manifest; the valid options are RollingUpdate and Recreate.

Rolling update
Rolling updates ensure that you don’t incur application downtime during the update process — this is because the update happens one Pod at a time. There is a point in time where both the previous and current versions of the application co-exist. The old Pods are deleted once the update is complete, but there will be a phase where the total number of Pods in your Deployment is more than the specified replica count.
It is possible to further tune this behavior using the following fields:
- spec.strategy.rollingUpdate.maxSurge — the maximum number of Pods which can be created in addition to the specified replica count
- spec.strategy.rollingUpdate.maxUnavailable — the maximum number of Pods which can be unavailable during the update process
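For instance, a Deployment fragment tuning both fields could look like this (the values shown here are illustrative assumptions, not taken from the manifest used in this post):

```yaml
spec:
  replicas: 2
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1        # at most one extra Pod above the replica count during the update
      maxUnavailable: 0  # never drop below the desired replica count
```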
Recreate
This is quite straightforward — the old set of Pods is deleted before the new versions are rolled out. You could have achieved the same result using ReplicaSets, by first deleting the old one and then creating a new one with the updated spec (e.g. a new Docker image etc.).
Let’s try and update the application by specifying an explicit Docker image tag — in this case, we’ll use 1.16.0. This means that once we update our app, this version should reflect when we introspect the Deployment. In the Deployment manifest, update the image to nginx:1.16.0 and submit it to the cluster – this will trigger an update:
$ kubectl apply -f deployment.yaml
deployment.apps/kin-stateless-dp configured

$ kubectl get pods -l=app=kin-stateless-dp
NAME                                READY   STATUS    RESTARTS   AGE
kin-stateless-dp-5b66475bd4-gvt4z   1/1     Running   0          49s
kin-stateless-dp-5b66475bd4-tvfgl   1/1     Running   0          61s
You should now see a new set of Pods (notice the names). To confirm the update:
$ kubectl exec kin-stateless-dp-5b66475bd4-gvt4z -- nginx -v
nginx version: nginx/1.16.0
Please note that the Pod name will be different in your case, so make sure you use the right one.
If things don’t go as expected with the current Deployment, you can revert to the previous version. This is possible since Kubernetes stores the rollout history of a Deployment in the form of revisions.
To check the rollout history for the Deployment:
$ kubectl rollout history deployment/kin-stateless-dp
deployment.extensions/kin-stateless-dp
REVISION   CHANGE-CAUSE
1          <none>
2          <none>
Notice that there are two revisions, with 2 being the latest one. We can roll back to the previous one using kubectl rollout undo:
$ kubectl rollout undo deployment kin-stateless-dp
deployment.extensions/kin-stateless-dp rolled back

$ kubectl get pods -l=app=kin-stateless-dp
NAME                                READY   STATUS        RESTARTS   AGE
kin-stateless-dp-5b66475bd4-gvt4z   0/1     Terminating   0          10m
kin-stateless-dp-5b66475bd4-tvfgl   1/1     Terminating   0          10m
kin-stateless-dp-8f9b4d456-d4v97    1/1     Running       0          14s
kin-stateless-dp-8f9b4d456-mq7sb    1/1     Running       0          7s
Notice the intermediate state where Kubernetes was busy terminating the
Pods of the old
Deployment while making sure that new
Pods are created in response to the rollback request.
If you check the nginx version again, you will see that the app has indeed been rolled back to 1.17.3:
$ kubectl exec kin-stateless-dp-8f9b4d456-d4v97 -- nginx -v
nginx version: nginx/1.17.3
Pause and Resume
It is also possible to pause a Deployment rollout and resume it after applying changes (during the paused state).
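For example, assuming the kin-stateless-dp Deployment from this post, a paused update could be sketched as:

```shell
# pause the rollout — changes to the Deployment do not trigger new rollouts while paused
kubectl rollout pause deployment/kin-stateless-dp

# make one or more changes, e.g. update the container image
kubectl set image deployment/kin-stateless-dp nginx=nginx:1.17.3

# resume — all the accumulated changes roll out together
kubectl rollout resume deployment/kin-stateless-dp
```

Pausing is handy when you want to batch several spec changes into a single rollout.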
A ReplicationController is similar to a
ReplicaSet. However, it is not a recommended approach for stateless app orchestration since a
Deployment offers a richer set of capabilities (as described in the previous section). You can read more about them in the Kubernetes documentation.
Check out the Kubernetes documentation for the API details of the resources we discussed in this post, i.e. Pod, ReplicaSet and Deployment.
Stay tuned for more in the next part of the series!
I really hope you enjoyed and learned something from this article! Please like and follow if you did. Happy to get feedback via @abhi_tweeter or just drop a comment.