Kubernetes III. Next Step: Parenting
Deploying Sample App to Kubernetes Cluster
This is part three, the final part of my series of blog posts about Kubernetes. In the previous parts I have...
- in part 1, Love Story, introduced you to some basic concepts of Kubernetes, and
- in part 2, Wedding Day, guided you through the installation and configuration of a Kubernetes cluster.
Now it’s time to get serious and test the strength of this love affair a little bit. Let's put some workload on the Kubernetes cluster! In other words, we are going to deploy and manage a sample application.
Kubernetes has convenient ways to keep your apps running in the cloud. New versions can be deployed without downtime, apps can be scaled up and down with simple commands, and, what’s great, they heal themselves when errors occur.
We are going to use GKE, Google Kubernetes Engine, as our k8s cluster. Note that you can start using Google Cloud Platform for free: you will receive $300 worth of credits when you register for GCP for the first time.
I will skip showing you how to create a Kubernetes cluster in GKE, as you can read about it in the GCP documentation: https://cloud.google.com/kubernetes-engine/docs/how-to/creating-a-cluster.
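For reference, if you prefer the command line over the Cloud Console, creating a small test cluster with the gcloud CLI looks roughly like this (the cluster name, zone and node count below are just example values, not from this post):
gcloud container clusters create test --zone europe-north1-a --num-nodes 3
gcloud container clusters get-credentials test --zone europe-north1-a
The second command configures kubectl on your machine to talk to the new cluster.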
Deploying the application
We are going to deploy a very simple web app to demonstrate how easy it is to get applications running in the Kubernetes cluster.
First, check that you have access to your cluster with kubectl get nodes or kubectl cluster-info. The output should look something like this:
NAME STATUS ROLES AGE VERSION
gke-test-default-pool-da0a85ac-004t Ready <none> 6m v1.11.8-gke.6
gke-test-default-pool-da0a85ac-5rt4 Ready <none> 6m v1.11.8-gke.6
gke-test-default-pool-da0a85ac-5xx0 Ready <none> 6m v1.11.8-gke.6
Now we simply deploy our application with the kubectl run command. We are going to use https://hub.docker.com/r/tutum/hello-world, a simple Docker image that shows your web app is working.
kubectl run hello-world --image=tutum/hello-world --port=80
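A side note, depending on your kubectl version: in newer kubectl releases, kubectl run creates a single pod instead of a Deployment. If the command above gives you a bare pod, you can create the Deployment explicitly instead:
kubectl create deployment hello-world --image=tutum/hello-world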
We can check our deployment with this command:
kubectl describe deployment hello-world
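If you just want to wait until the deployment has finished rolling out, you can also use:
kubectl rollout status deployment hello-world
It blocks until all the pods are ready (or the rollout fails).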
Now we have our first workload running in Kubernetes. Nice!
Exposing the application
Before we can access our app from the internet, we need to expose it.
There are different ways of exposing your app to the internet. This time we are going to use a Service of type LoadBalancer, which works nicely with GKE. It will automatically attach a public IP address to your deployment, which is very handy.
kubectl expose deployment hello-world --name hello-world-service-lb --type LoadBalancer --port 80 --target-port 80
Now check the services and wait for GCP to provision the external IP address.
kubectl get services
At first it will be in a pending state, but soon you will get the IP.
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
hello-world-service-lb LoadBalancer 10.11.248.49 <pending> 80:31853/TCP 7s
kubernetes ClusterIP 10.11.240.1 <none> 443/TCP 21m
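Instead of re-running the command, you can also watch the service until the external IP appears; the -w flag keeps the command running and prints changes as they happen:
kubectl get service hello-world-service-lb -w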
After you get the external IP, copy and paste it into your browser. Now you should see the Hello World page with your container's hostname!
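As a side note, with real applications you would usually define these resources declaratively in a YAML manifest and create them with kubectl apply -f instead of the run and expose commands. A minimal sketch of a roughly equivalent Deployment and Service could look like this (the labels here are just illustrative choices, not exactly what the commands above generated):
# Deployment: keeps the desired number of hello-world pods running
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-world
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
      - name: hello-world
        image: tutum/hello-world
        ports:
        - containerPort: 80
---
# Service: exposes the pods to the internet via a cloud load balancer
apiVersion: v1
kind: Service
metadata:
  name: hello-world-service-lb
spec:
  type: LoadBalancer
  selector:
    app: hello-world
  ports:
  - port: 80
    targetPort: 80
Keeping manifests like this in version control makes it much easier to recreate or update the whole setup later.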
Scaling up & down
If you execute kubectl get pods, you will see that there is only one pod running. What will happen when that pod crashes? Yes, good guess: your application will go down. So we need to scale it up a little bit so that other pods can receive the traffic even when one is down.
kubectl scale deployment hello-world --replicas=5
Now check with kubectl get pods that there are 5 pods running your application. You can also run kubectl get pods -o wide and see that the pods are scheduled to different worker nodes. Now if one of your worker nodes goes down, the application still remains responsive!
You can also refresh your browser a couple of times (or open the page again in an Incognito window) and watch the container hostname change. This shows that the load balancer is also working.
You can also scale the app down if you want. The command for that is:
kubectl scale deployment hello-world --replicas=3
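As a side note, instead of scaling by hand you can also let Kubernetes scale the deployment automatically based on CPU usage with a Horizontal Pod Autoscaler. The limits below are just example values, and note that CPU-based autoscaling only works if your containers have CPU resource requests set:
kubectl autoscale deployment hello-world --min=3 --max=10 --cpu-percent=80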
Self-healing
So what happens if a pod crashes? Let’s find out!
First, check the app status with kubectl get pods. You will see your pods running there. Let’s delete one of them and see what happens (the pod name below is from my cluster; use one of your own pod names).
kubectl delete pod hello-world-6f6c8bbf76-52c6w
Right after that, run kubectl get pods again a few times. You will see that Kubernetes immediately creates a new pod for you.
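Tip: if you want to see the replacement happen live, keep a watch on the pods while you delete one; the -w flag streams changes as they happen:
kubectl get pods -w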
Logging & monitoring
There are many ways to see how your Kubernetes cluster and the apps inside it are performing. One powerful tool is the describe command. For example, check your deployment with:
kubectl describe deployment hello-world
You should see recent events at the bottom, including your earlier scale up and scale down operations.
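You can also look at recent cluster events directly; sorting by creation timestamp keeps the newest events at the bottom:
kubectl get events --sort-by=.metadata.creationTimestamp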
If you want to check logs from your pod, the kubectl logs command is the right way to do it. Check one of your pods' logs with the command:
kubectl logs <pod-name>
...and you will see that there is nothing, because tutum/hello-world doesn’t log anything to stdout/stderr. But in the real world, you would see whatever your containers are logging.
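Two handy variations for later: the -f flag streams the logs live, and --previous shows the logs of the previous container instance if a pod has crashed and been restarted:
kubectl logs -f <pod-name>
kubectl logs --previous <pod-name>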
By the way: you should use a centralized logging and monitoring tool to keep your logs. Pods are ephemeral, so if your pod is deleted, all of its logs are gone too. There are many good tools to choose from, but I would recommend Stackdriver, as it is very well integrated with GKE.
Recap
So now you have read all three of my blog posts about Kubernetes. Thanks for reading! I hope you have enjoyed them, maybe learned something new, and are ready to rock your own k8s cluster and put some workload on it!
If you have any questions or comments, you can contact me at ville@montel.fi or via LinkedIn.
If you think Kubernetes might be the answer to your headaches but you don’t have time to dig more deeply into the amazing world of Kubernetes, don't hesitate to contact us. Tuba (tuba@montel.fi / +358 400636636) will be happy to tell you more about our intergalactic services.