Friday, April 28, 2017

Kubernetes basics with demo

Now we have come to the point where we can deploy and manage our services through Kubernetes. If you haven't read the previous two posts, please read them first: Docker + Microservices all in one and Create/Manage docker swarm cluster.

I will start by covering some of the basic concepts and keywords in the Kubernetes world. For more detailed articles, please refer to the Kubernetes interactive tutorial here.

Kubernetes

  • Kubernetes automates the distribution and scheduling of application containers across a cluster in a more efficient way
  • A Kubernetes cluster consists of two types of resources:
    • The Master coordinates the cluster
    • Nodes are the workers that run applications


Kubernetes Node

  • A node is a VM or a physical computer that serves as a worker machine in a Kubernetes cluster
  • Each node has a Kubelet, which is an agent for managing the node and communicating with the Kubernetes master
  • The node should also have tools for handling container operations, such as Docker or rkt
  • A Kubernetes cluster that handles production traffic should have a minimum of three nodes (the commands after this list show how to inspect your nodes).
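A quick way to poke at the nodes in your cluster is sketched below. These are standard kubectl commands; replace the node name placeholder with whatever kubectl get nodes reports for your cluster.

# list the worker machines in the cluster
kubectl get nodes

# show a node's capacity, conditions and the Pods scheduled on it (name is a placeholder)
kubectl describe node <node-name>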

 

Kubernetes Master

  • The Master is responsible for managing the cluster.
  • The master coordinates all activities in your cluster, such as:
    • Scheduling applications
    • Maintaining applications' desired state
    • Scaling applications
    • Rolling out new updates

 

Kubernetes Deployment

  • The deployment is responsible for creating and updating instances of your application
  • Once you have created a deployment, the master schedules the application instances that the deployment creates onto individual nodes in the cluster
  • Once the application instances are created, a Kubernetes Deployment controller continuously monitors those instances
  • If the node hosting an instance goes down or is deleted, the Deployment controller replaces it with an instance on another node in the cluster. This provides a self-healing mechanism to address machine failure and maintenance (see the sketch below)
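A minimal way to see this self-healing behaviour for yourself, assuming a deployment is already running; the Pod name below is only a placeholder, take a real one from kubectl get pods.

# delete one of the deployment's Pods (placeholder name)
kubectl delete pod rest-service-xxxxxxxxxx-xxxxx

# the Deployment controller notices the missing instance and creates a replacement
kubectl get pods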

 

Pods

  • A Pod is a Kubernetes abstraction that represents a group of one or more application containers (such as Docker or rkt containers), and some shared resources for those containers (a bare Pod spec sketch follows this list), such as:

    • Shared storage, as volumes
    • Networking as a unique cluster IP address
    • Information about how to run each container, such as the container image version or specific ports to use
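For reference, a bare Pod spec looks roughly like the sketch below. It is only for illustration, and the Pod name is hypothetical; in this post we never create Pods directly, the deployments in the demo do that for us.

apiVersion: v1
kind: Pod
metadata:
  name: rest-service-pod   # hypothetical name, for illustration only
spec:
  containers:
    - name: rest-service
      image: docker.io/husseincoder/rest-service
      ports:
        - containerPort: 8080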

 

Summary

  • A Pod always runs on a Node
  • A Node is a worker machine in Kubernetes and may be either a virtual or a physical machine, depending on the cluster
  • Each Node is managed by the Master
  • A Node can have multiple pods, and the Kubernetes master automatically handles scheduling the pods across the Nodes in the cluster
  • The Master's automatic scheduling takes into account the available resources on each Node
  • Every Kubernetes Node runs at least:
    • Kubelet, a process responsible for communication between the Kubernetes Master and the Nodes; it manages the Pods and the containers running on a machine
    • A container runtime (like Docker, rkt) responsible for pulling the container image from a registry, unpacking the container, and running the application
  • Kubernetes Pods are mortal; they have a lifecycle
  • When a worker node dies, the Pods running on the Node are also lost. 
  • A ReplicationController might then dynamically drive the cluster back to desired state via creation of new Pods to keep your application running
  • Once you have multiple instances of an application running, you can do rolling updates without downtime

Note: Most of the theoretical content above is covered in the Kubernetes interactive tutorial. I just reformatted it and removed most of the details that are unnecessary for this topic.

 

Demo


To make things easier for me :P I will use my Google Cloud dev machine, which already has Kubernetes installed and managing the cluster (it has only one node).

You can easily install Minikube to play with the examples covered here on your local machine; please refer to this article.
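If you go the Minikube route, the flow is roughly the following, assuming Minikube and kubectl are installed as described in that article:

# start a local single-node cluster
minikube start

# verify kubectl is talking to it
kubectl get nodes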
   
Step 1 - Configure Kube deployment/service:


We define a Service.yaml and a Deployment.yaml for each service. You can also include all of them in one template; soon I will write about a very nice tool for templating your configuration called Helm.


Deployment.yaml -- rest-service

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: rest-service
spec:
  replicas: 1
  selector:
    matchLabels:
      app: rest-service
  template:
    metadata:
      name: rest-service
      labels:
        app: rest-service
    spec:
      containers:
        - name: rest-service
          image: docker.io/husseincoder/rest-service
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 8080

Service.yaml -- rest-service

apiVersion: v1
kind: Service
metadata:
  name: rest-service
spec:
  ports:
  - port: 8080
  selector:
    app: rest-service
  type: NodePort

Deployment.yaml -- grpc-service

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: grpc-service
spec:
  replicas: 1
  selector:
    matchLabels:
      app: grpc-service
  template:
    metadata:
      name: grpc-service
      labels:
        app: grpc-service
    spec:
      containers:
        - name: grpc-service
          image: docker.io/husseincoder/grpc-service
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 5000
            - containerPort: 5001

Service.yaml -- grpc-service

apiVersion: v1
kind: Service
metadata:
  name: grpc-service
spec:
  ports:
  - port: 5000
    name: passwords-endpoint
  - port: 5001
    name: health-endpoint
  selector:
    app: grpc-service
  type: NodePort

Step 2 - Create deployment and service:

se7so@se7so:~/rest-service/kube$ kubectl create -f .
deployment "rest-service" created
service "rest-service" created

se7so@se7so:~/grpc-service/kube$ kubectl create -f .
deployment "grpc-service" created
service "grpc-service" created

Let's take a look at the deployments/Pods/services that we have just created.

Step 3 - Show deployments/Pods/services:
 
se7so@se7so:~/$ kubectl get deployments
NAME           DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
grpc-service   1         1         1            1           2m
heapster       1         1         1            1           10d
nginx          1         1         1            1           10d
rest-service   1         1         1            1           2m
se7so@se7so:~$ kubectl get pods
NAME                            READY     STATUS    RESTARTS   AGE
docker-gc-0hj9f                 1/1       Running   6          10d
grpc-service-1580463372-5hhwg   1/1       Running   0          2m
heapster-3746328914-mgwhr       1/1       Running   0          11h
ingress-lb-7s074                1/1       Running   6          10d
nginx-3110227365-44znv          1/1       Running   0          11h
rest-service-3837907542-kh3xt   1/1       Running   0          2m
se7so@se7so:~/$ kubectl get services
NAME           CLUSTER-IP   EXTERNAL-IP   PORT(S)                         AGE
graphite       None         <none>        2003/TCP                        10d
grpc-service   10.0.0.49    <nodes>       5000:32380/TCP,5001:32534/TCP   2m
kubernetes     10.0.0.1     <none>        443/TCP                         10d
logger         None         <none>        514/TCP                         10d
nginx          10.0.0.96    <nodes>       80:30001/TCP                    10d
proxy          None         <none>        3128/TCP                        10d
rest-service   10.0.0.225   <nodes>       8080:31326/TCP                  2m
se7so@se7so:~/IdeaProjects/dockerized-microservices/grpc-service/kube$ 
 
You can see I have other stuff running there, but for now we are only interested in the grpc-service and rest-service lines.

Notice in the services output that for both grpc-service and rest-service the internal ports are mapped to external NodePorts: grpc-service ports 5000 and 5001 are mapped to 32380 and 32534, and rest-service port 8080 is mapped to 31326, which we will use now to access our REST service.
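If you do not want to read the port out of the table, a jsonpath query should print the allocated NodePort directly; a small sketch for the rest-service created above:

# print only the NodePort allocated to the first service port
kubectl get service rest-service -o jsonpath='{.spec.ports[0].nodePort}'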

Step 4 - Try it out:

se7so@se7so:~/$ curl http://10.132.0.117:31326/health && echo
{"status":"Running","dictSize":4758252}
se7so@se7so:~/$ curl http://10.132.0.117:31326/passwords?q=abc && echo
{"totalMatches":1268,"matches":["abcc","abccz","abcczyx","abccz911","abcczxy","abcczxy1","abccz247","abccymas","abccyes","abccyuki"]}

Step 5 - Scaling a deployment:


Let's scale our rest-service to 2 replicas:

se7so@se7so:~/$ kubectl scale deployment rest-service --replicas=2
deployment "rest-service" scaled
se7so@se7so:~/IdeaProjects/dockerized-microservices/grpc-service/kube$ kubectl get deployments
NAME           DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
grpc-service   1         1         1            1           10m
heapster       1         1         1            1           10d
nginx          1         1         1            1           10d
rest-service   2         2         2            2           10m
se7so@se7so:~/$ kubectl get pods
NAME                            READY     STATUS    RESTARTS   AGE
docker-gc-0hj9f                 1/1       Running   6          10d
grpc-service-1580463372-5hhwg   1/1       Running   0          10m
heapster-3746328914-mgwhr       1/1       Running   0          11h
ingress-lb-7s074                1/1       Running   6          10d
nginx-3110227365-44znv          1/1       Running   0          11h
rest-service-3837907542-kh3xt   1/1       Running   0          10m
rest-service-3837907542-rchms   1/1       Running   0          10s
 
As you can see in the get deployments output, the rest-service deployment now has 2 replicas, and each one is running in a separate Pod.

The rest-service Service will take care of load balancing traffic across the replicas, and the deployment will make sure the desired number of replicas keeps running; if something goes wrong with one of them, it will be recreated.
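You can scale back down the same way, or make the replica count declarative by bumping replicas in the Deployment.yaml from step 1 and re-applying it; a quick sketch:

# scale back down imperatively
kubectl scale deployment rest-service --replicas=1

# or declaratively: set "replicas: 2" in Deployment.yaml, then
kubectl apply -f deployment.yaml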

Step 6 - Check logs of a Pod:

Since the Pod has only one container, we can easily check its logs by issuing the following command.

se7so@se7so:~$ kubectl logs -f rest-service-3837907542-kh3xt
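A couple of useful variations, using standard kubectl logs flags; the Pod and container names in the second command are placeholders for a multi-container Pod:

# show only the last 100 lines instead of following the full stream
kubectl logs --tail=100 rest-service-3837907542-kh3xt

# for a Pod with more than one container, pick the container explicitly (placeholders)
kubectl logs -f <pod-name> -c <container-name>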

Step 7 - Describe deployment/pod/service:

In case you want to see the details of one of your deployments/Pods/services, you can issue the following commands:

se7so@se7so:~$ kubectl describe service rest-service
Name:   rest-service
Namespace:  default
Labels:   <none>
Selector:  app=rest-service
Type:   NodePort
IP:   10.0.0.225
Port:   <unset> 8080/TCP
NodePort:  <unset> 31326/TCP
Endpoints:  172.17.0.2:8080,172.17.0.8:8080
Session Affinity: None
No events.


se7so@se7so:~$ kubectl describe deployment rest-service
Name:   rest-service
Namespace:  default
CreationTimestamp: Fri, 28 Apr 2017 21:33:18 +0200
Labels:   app=rest-service
Selector:  app=rest-service
Replicas:  2 updated | 2 total | 2 available | 0 unavailable
StrategyType:  RollingUpdate
MinReadySeconds: 0
RollingUpdateStrategy: 1 max unavailable, 1 max surge
Conditions:
  Type  Status Reason
  ----  ------ ------
  Available  True MinimumReplicasAvailable
OldReplicaSets: 
NewReplicaSet: rest-service-3837907542 (2/2 replicas created)
Events:
  FirstSeen LastSeen Count From    SubObjectPath Type  Reason   Message
  --------- -------- ----- ----    ------------- -------- ------   -------
  16m  16m  1 {deployment-controller }   Normal  ScalingReplicaSet Scaled up replica set rest-service-3837907542 to 1
  6m  6m  1 {deployment-controller }   Normal  ScalingReplicaSet Scaled up replica set rest-service-3837907542 to 2


Step 8 - Rolling out an update:



Once you change your configuration to point to a new image or a new version of it, you just issue the following command, which overrides what you did in step 2 - without downtime ;).


se7so@se7so:~$ kubectl apply -f .
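You can also change just the image without touching the YAML, and watch or undo the rollout, using the standard kubectl rollout commands; the :v2 tag below is a hypothetical example.

# point the deployment at a new image tag (":v2" is hypothetical)
kubectl set image deployment/rest-service rest-service=docker.io/husseincoder/rest-service:v2

# watch the rollout progress, and roll back if something goes wrong
kubectl rollout status deployment/rest-service
kubectl rollout undo deployment/rest-service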


Step 9 - Open a bash session on the gRPC service container:



YES, you can open a shell inside one of your containers and have full access to what's in there.


se7so@se7so:~$ kubectl exec grpc-service-1580463372-5hhwg ls /home
app.jar
rockyou.txt
se7so@se7so:~$ kubectl exec -it grpc-service-1580463372-5hhwg bash
root@grpc-service-1580463372-5hhwg:/# ls /home
app.jar  rockyou.txt
root@grpc-service-1580463372-5hhwg:/# ps -a
  PID TTY          TIME CMD
  107 ?        00:00:00 ps
root@grpc-service-1580463372-5hhwg:/# 

Super nice, huh?! ;)

I think I have covered most of the important stuff, but every day I use Kubernetes I discover more cool things. Please follow this series for more posts like this.

Please contribute to this effort here.