In this post, I would like to introduce you to Kubernetes, the most popular container scheduler these days. I will show you how to install a few important tools that you should know and use in your day-to-day operations while managing your Kubernetes cluster(s). I will not go into great detail, but just give you an idea of what each tool does. More details and advanced usage will come in my next posts.
I will cover the following topics:
- Creating your own local Kubernetes installation with Minikube
- Administering your Kubernetes cluster(s) with kubectl
- Doing Continuous Delivery with Helm
I’m running all the commands below on macOS (minor differences on Linux, good luck on Windows).
First came Docker
To understand why Kubernetes exists, let’s first talk about Docker, which is the most popular container solution these days.
When Docker first came around, it tackled a few issues such as:
- Portability
- Once your code is packaged in a container, it can be installed and run the same way anywhere, from your laptop to your company’s infrastructure. No more will you hear the infamous “It works on my machine”™ (OK, well… at least we are getting closer to that goal with Docker).
- Packaging
- Your application comes bundled with its dependencies and the specific userspace it runs under. You can store your containers in a registry and retrieve them from there. Technically, installing a container is as easy as installing an .rpm/.deb or any other packaging format.
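As a quick illustration, a minimal (hypothetical) Dockerfile that packages a static site behind Nginx could look like this — the `./site` directory is a made-up example:

```dockerfile
# The base image brings in Nginx together with its userspace and dependencies
FROM nginx:1.11

# Bake our (hypothetical) static content into the image
COPY ./site /usr/share/nginx/html

# Document the port the container listens on
EXPOSE 80
```

Once built and pushed to a registry, this image runs identically on your laptop and on your company’s infrastructure.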
Then came Kubernetes
Kubernetes was developed by Google and open-sourced in 2014 as a solution to manage and distribute your container workloads:
- Kubernetes was inspired by Borg, Google’s internal cluster manager. Google runs one of the biggest infrastructures in the world, if not the biggest, and has a reputation for advanced engineering.
- Kubernetes supports multiple providers such as AWS, GCP, OpenStack, Azure and more. This is one of the reasons why Kubernetes is beautiful to me: it is definitely easier to move between cloud providers once all your stuff is already running inside containers in your Kubernetes cluster(s) — though to be honest, that is mostly because you started using containers for your applications in the first place.
- It has a huge community behind it and amazing, extensive documentation. It is also very popular on the job market, and that is always a good sign for me.
Install kubectl
kubectl is the tool we use to administer our Kubernetes cluster(s) from the CLI:
$ curl -Lo kubectl https://storage.googleapis.com/kubernetes-release/release/v1.5.1/bin/darwin/amd64/kubectl && chmod +x kubectl && sudo mv kubectl /usr/local/bin/
Here comes the Minikube
We will first need to install docker-machine-driver-xhyve to be able to run Minikube natively on macOS without using VirtualBox: https://github.com/zchee/docker-machine-driver-xhyve
brew install docker-machine-driver-xhyve
sudo chown root:wheel /usr/local/opt/docker-machine-driver-xhyve/bin/docker-machine-driver-xhyve
sudo chmod u+s /usr/local/opt/docker-machine-driver-xhyve/bin/docker-machine-driver-xhyve
Once we are done installing xhyve, we can go ahead and start Minikube locally:
minikube start --vm-driver xhyve
minikube status
You should now have Minikube set up as the default context/cluster:
kubectl config get-contexts
Your current cluster should now be set to your local Minikube:
CURRENT NAME CLUSTER AUTHINFO NAMESPACE
* minikube minikube minikube
Now let’s verify the client/server versions:
kubectl version
Client Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.1", GitCommit:"82450d03cb057bab0950214ef122b67c83fb11df", GitTreeState:"clean", BuildDate:"2016-12-14T00:57:05Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.1", GitCommit:"82450d03cb057bab0950214ef122b67c83fb11df", GitTreeState:"clean", BuildDate:"1970-01-01T00:00:00Z", GoVersion:"go1.7.1", Compiler:"gc", Platform:"linux/amd64"}
We are ready to deploy our first container (Nginx):
kubectl run nginx --image=nginx --port=80 --replicas=3
kubectl get all
NAME READY STATUS RESTARTS AGE
po/nginx-3449338310-43sk1 1/1 Running 0 1m
po/nginx-3449338310-hbp5s 1/1 Running 0 1m
po/nginx-3449338310-lz06n 1/1 Running 0 1m
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
svc/kubernetes 10.0.0.1 <none> 443/TCP 22m
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
deploy/nginx 3 3 3 3 1m
NAME DESIRED CURRENT READY AGE
rs/nginx-3449338310 3 3 3 1m
You have now created a deployment containing a pod named nginx, which itself uses the nginx Docker image. It listens on port 80, and Kubernetes makes sure that 3 of these pods are running at any time.
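For reference, the `kubectl run` command above is roughly equivalent to applying a Deployment manifest like the following — a sketch against the extensions/v1beta1 API used by Kubernetes 1.5, with the default `run` label that `kubectl run` sets:

```yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 3              # keep 3 pods running at any time
  template:
    metadata:
      labels:
        run: nginx         # default label set by `kubectl run`
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
```

Saving this to a file and running `kubectl apply -f` on it would get you the same result, with the advantage that the manifest can live in version control.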
Now let’s delete a container and see how Kubernetes handles this:
kubectl delete po/nginx-3449338310-lz06n
OK, that works as expected: we are now left with only 2/3 nginx containers. I’m expecting Kubernetes to do something about it, because we defined our deployment to have 3 replicas:
kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-3449338310-43sk1 1/1 Running 0 11m
nginx-3449338310-hbp5s 1/1 Running 0 11m
nginx-3449338310-q7rrp 1/1 Running 0 5s
Nice! As soon as we killed the container, Kubernetes started another one to replace it, as expected.
Now, for the record, if you want to scale down you can run the following command (it obviously also works for scaling up):
kubectl scale --current-replicas=3 --replicas=2 deployment/nginx
kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-3449338310-43sk1 1/1 Running 0 15m
nginx-3449338310-hbp5s 1/1 Running 0 15m
You can also connect to the Kubernetes dashboard as follows:
minikube dashboard
A nightmare on Helm street
Helm is an awesome tool used to install/upgrade software defined inside its own chart. This is the tool I used to do continuous delivery on Kubernetes in my last project. It basically has 2 parts:
- Helm, the client-side tool, which parses a chart directory, reads your defined global and environment values, generates deployment/service/secret/etc. resource files, and passes them to Tiller, which runs inside your Kubernetes cluster.
- Tiller, the server-side component, which runs internal Kubernetes commands based on your defined resources. It also keeps track of the various Helm deployments, which allows you to do operations such as upgrades/rollbacks/etc.
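To give you an idea of what Helm actually parses, a minimal chart directory looks roughly like this (the names below are illustrative, not taken from a real chart):

```
mychart/
  Chart.yaml          # chart name, version and description
  values.yaml         # default values, overridable with --set or -f
  templates/          # Go-templated Kubernetes resource files
    deployment.yaml
    service.yaml
```

The files under templates/ reference the values through Go template expressions such as `{{ .Values.replicas }}`; Helm renders them and hands the resulting resources to Tiller.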
I will now demonstrate a quick Jenkins master deployment via Helm.
First we need to install the Helm client on our laptop:
curl https://storage.googleapis.com/kubernetes-helm/helm-v2.1.3-darwin-amd64.tar.gz | tar -xzf - -C /usr/local/bin/ --strip-components=1 darwin-amd64/helm
Now we run Helm for the first time to create our initial local config and deploy Tiller inside Kubernetes:
helm init
Give it a good 30 seconds, depending on your internet connection, and then run the following command:
helm version
You should get the following output:
Client: &version.Version{SemVer:"v2.1.3", GitCommit:"5cbc48fb305ca4bf68c26eb8d2a7eb363227e973", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.1.3", GitCommit:"5cbc48fb305ca4bf68c26eb8d2a7eb363227e973", GitTreeState:"clean"}
Let’s search for the Jenkins package to install:
helm search jenkins
NAME VERSION DESCRIPTION
stable/jenkins 0.1.8 A Jenkins Helm chart for Kubernetes.
Let’s install the chart inside Minikube:
helm install --set Persistence.StorageClass=default stable/jenkins
Notice the extra `--set Persistence.StorageClass=default` added, which temporarily works around the following bug:
https://github.com/kubernetes/charts/pull/530
This is the output of my installation:
NAME: ironic-donkey
LAST DEPLOYED: Tue Jan 31 06:40:50 2017
NAMESPACE: default
STATUS: DEPLOYED
RESOURCES:
==> v1/Secret
NAME TYPE DATA AGE
ironic-donkey-jenkins Opaque 2 1s
==> v1/ConfigMap
NAME DATA AGE
ironic-donkey-jenkins 1 1s
==> v1/Service
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ironic-donkey-jenkins 10.0.0.35 <pending> 8080:30822/TCP,50000:31757/TCP 1s
==> extensions/Deployment
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
ironic-donkey-jenkins 1 1 1 0 1s
==> v1/PersistentVolumeClaim
NAME STATUS VOLUME CAPACITY ACCESSMODES AGE
ironic-donkey-jenkins Bound pvc-34d6771b-e780-11e6-a309-da2274021699 8Gi RWO 1s
NOTES:
1. Get your 'admin' user password by running:
printf $(kubectl get secret --namespace default ironic-donkey-jenkins -o jsonpath="{.data.jenkins-admin-password}" | base64 --decode);echo
2. Get the Jenkins URL to visit by running these commands in the same shell:
NOTE: It may take a few minutes for the LoadBalancer IP to be available.
You can watch the status of by running 'kubectl get svc --namespace default -w ironic-donkey-jenkins'
export SERVICE_IP=$(kubectl get svc ironic-donkey-jenkins --namespace default --template "{{ range (index .status.loadBalancer.ingress 0) }}{{.}}{{ end }}")
echo http://$SERVICE_IP:8080/login
3. Login with the password from step 1 and the username: admin
For more information on running Jenkins on Kubernetes, visit:
https://cloud.google.com/solutions/jenkins-on-container-engine
We can see that the Jenkins chart defined the Kubernetes resources needed for the Jenkins master to run inside Kubernetes. Another cool feature of chart packaging is the NOTES output, which gives you useful information on how to start using the deployed service.
Don’t forget to run step 1 from the NOTES, as we will need the password to log in as admin.
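The command in step 1 simply reads the base64-encoded `jenkins-admin-password` field out of the secret and decodes it. The decoding step on its own looks like this — the value below is a made-up sample, not a real password:

```shell
# Kubernetes secrets store their data fields base64-encoded.
# Decode a sample value the same way the NOTES command does:
encoded="c2VjcmV0MTIz"   # base64 for the made-up string "secret123"
printf '%s' "$encoded" | base64 --decode; echo
# → secret123
```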
Let’s now grab the Jenkins service URL:
minikube service ironic-donkey-jenkins --url
http://192.168.64.9:30822
http://192.168.64.9:31757
You will get 2 URLs; use the first one, which is your master URL (the second one is for your agents). Open your browser at the first URL and you should be able to log in as admin with the password that you grabbed above.
Have a look at the Jenkins chart’s GitHub repository to get an idea of what a chart is made of.
Summary
You now have a local Minikube installation which you can use to run your Kubernetes tests before deploying to your real cluster. While this post was simple and straightforward, deploying and maintaining Kubernetes clusters in different environments/clouds is not. Many companies already offer “Managed Kubernetes” solutions to offload that burden, but we are engineers and we also love to design/implement/troubleshoot/maintain our own stuff. Expect more posts covering some Kubernetes components in detail, as my contribution to this amazing community which has helped me a lot in the past months.
Resources
Slack: http://slack.k8s.io/ Github: https://github.com/kubernetes