Using kops to build your Kubernetes cluster

How to build your first Kubernetes cluster on AWS

kops is a tool for deploying Kubernetes clusters on AWS and GCE that has recently gained popularity. It is under active development and keeps up with the latest Kubernetes updates and features. AWS and GCE are currently the only supported platforms, but support for more is planned.

I will cover the following topics:

  • Creating your own AWS Kubernetes installation with kops
  • A look at exactly what kops creates and stores inside its S3 bucket
  • Adding a new worker node inside your cluster

You will need an existing AWS account with access keys set up. You will also need to install kubectl: https://kubernetes.io/docs/user-guide/prereqs/
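
kops reads credentials from the standard AWS SDK chain, so either running aws configure or exporting the usual environment variables will work. A minimal sketch with placeholder values:

aws configure
# or, equivalently:
export AWS_ACCESS_KEY_ID=AKIAXXXXXXXXXXXXXXXX
export AWS_SECRET_ACCESS_KEY=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx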

Install kops

curl -Lo kops-darwin-amd64 https://github.com/kubernetes/kops/releases/download/1.5.1/kops-darwin-amd64 && chmod +x kops-darwin-amd64 && sudo mv kops-darwin-amd64 /usr/local/bin/kops
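
The binary above is the macOS build; on Linux, the same release page ships a kops-linux-amd64 binary that installs the same way (a sketch, assuming the 1.5.1 release layout):

curl -Lo kops-linux-amd64 https://github.com/kubernetes/kops/releases/download/1.5.1/kops-linux-amd64 && chmod +x kops-linux-amd64 && sudo mv kops-linux-amd64 /usr/local/bin/kops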

Create your Cluster Spec

Two environment variables need to be exported before we are ready to create our first Kubernetes cluster:

export KOPS_STATE_STORE=s3://crisci-kops-state
export NAME=k8s.infradev.io
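
The state store is just an S3 bucket. If it doesn't exist yet, you can create it with the AWS CLI and enable versioning so that older cluster state can be recovered; a sketch, assuming the eu-west-1 region used throughout this post:

aws s3api create-bucket --bucket crisci-kops-state --region eu-west-1 --create-bucket-configuration LocationConstraint=eu-west-1
aws s3api put-bucket-versioning --bucket crisci-kops-state --versioning-configuration Status=Enabled

You will also need a Route 53 hosted zone covering the cluster name (k8s.infradev.io here); that is where the dnsZone value you will see later in cluster.spec comes from.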

The following command will create a cluster with default settings (image, networking, instance sizes, etc.). Notice the --admin-access "$(curl -s icanhazip.com)/32" flag, which limits SSH/API-server connections to your source IP rather than the default 0.0.0.0/0, so you don't end up listed on shodan.io.

kops create cluster --admin-access "$(curl -s icanhazip.com)/32" --cloud aws --zones eu-west-1a --node-count 1 --ssh-public-key ~/.ssh/kops_rsa.pub ${NAME}
I0227 11:02:38.945869   41048 create_cluster.go:610] Using SSH public key: /Users/lcrisci/.ssh/kops_rsa.pub
I0227 11:02:39.658966   41048 subnets.go:183] Assigned CIDR 172.20.32.0/19 to subnet eu-west-1a
Previewing changes that will be made:

I0227 11:02:45.089285   41048 executor.go:91] Tasks: 0 done / 55 total; 27 can run
I0227 11:02:46.462817   41048 executor.go:91] Tasks: 27 done / 55 total; 12 can run
I0227 11:02:46.782909   41048 executor.go:91] Tasks: 39 done / 55 total; 14 can run
I0227 11:02:47.226994   41048 executor.go:91] Tasks: 53 done / 55 total; 2 can run
I0227 11:02:47.331107   41048 executor.go:91] Tasks: 55 done / 55 total; 0 can run
Will create resources:
  AutoscalingGroup/master-eu-west-1a.masters.k8s.infradev.io
  	MinSize             	1
  	MaxSize             	1
  	Subnets             	[name:eu-west-1a.k8s.infradev.io]
  	Tags                	{k8s.io/role/master: 1, Name: master-eu-west-1a.masters.k8s.infradev.io, KubernetesCluster: k8s.infradev.io}
  	LaunchConfiguration 	name:master-eu-west-1a.masters.k8s.infradev.io

  AutoscalingGroup/nodes.k8s.infradev.io
  	MinSize             	1
  	MaxSize             	1
  	Subnets             	[name:eu-west-1a.k8s.infradev.io]
  	Tags                	{k8s.io/role/node: 1, Name: nodes.k8s.infradev.io, KubernetesCluster: k8s.infradev.io}
  	LaunchConfiguration 	name:nodes.k8s.infradev.io

  DHCPOptions/k8s.infradev.io
  	DomainName          	eu-west-1.compute.internal
  	DomainNameServers   	AmazonProvidedDNS

  EBSVolume/a.etcd-events.k8s.infradev.io
  	AvailabilityZone    	eu-west-1a
  	VolumeType          	gp2
  	SizeGB              	20
  	Encrypted           	false
  	Tags                	{Name: a.etcd-events.k8s.infradev.io, KubernetesCluster: k8s.infradev.io, k8s.io/etcd/events: a/a, k8s.io/role/master: 1}

  EBSVolume/a.etcd-main.k8s.infradev.io
  	AvailabilityZone    	eu-west-1a
  	VolumeType          	gp2
  	SizeGB              	20
  	Encrypted           	false
  	Tags                	{k8s.io/etcd/main: a/a, k8s.io/role/master: 1, Name: a.etcd-main.k8s.infradev.io, KubernetesCluster: k8s.infradev.io}

  IAMInstanceProfile/masters.k8s.infradev.io

  IAMInstanceProfile/nodes.k8s.infradev.io

  IAMInstanceProfileRole/masters.k8s.infradev.io
  	InstanceProfile     	name:masters.k8s.infradev.io id:masters.k8s.infradev.io
  	Role                	name:masters.k8s.infradev.io

  IAMInstanceProfileRole/nodes.k8s.infradev.io
  	InstanceProfile     	name:nodes.k8s.infradev.io id:nodes.k8s.infradev.io
  	Role                	name:nodes.k8s.infradev.io

  IAMRole/masters.k8s.infradev.io

  IAMRole/nodes.k8s.infradev.io

  IAMRolePolicy/additional.masters.k8s.infradev.io
  	Role                	name:masters.k8s.infradev.io

  IAMRolePolicy/additional.nodes.k8s.infradev.io
  	Role                	name:nodes.k8s.infradev.io

  IAMRolePolicy/masters.k8s.infradev.io
  	Role                	name:masters.k8s.infradev.io

  IAMRolePolicy/nodes.k8s.infradev.io
  	Role                	name:nodes.k8s.infradev.io

  InternetGateway/k8s.infradev.io
  	VPC                 	name:k8s.infradev.io
  	Shared              	false

  Keypair/kubecfg
  	Subject             	cn=kubecfg
  	Type                	client

  Keypair/kubelet
  	Subject             	cn=kubelet
  	Type                	client

  Keypair/master
  	Subject             	cn=kubernetes-master
  	Type                	server
  	AlternateNames      	[100.64.0.1, api.internal.k8s.infradev.io, api.k8s.infradev.io, kubernetes, kubernetes.default, kubernetes.default.svc, kubernetes.default.svc.cluster.local]

  LaunchConfiguration/master-eu-west-1a.masters.k8s.infradev.io
  	ImageID             	kope.io/k8s-1.5-debian-jessie-amd64-hvm-ebs-2017-01-09
  	InstanceType        	m3.medium
  	SSHKey              	name:kubernetes.k8s.infradev.io-e3:6b:c5:df:a1:e3:ad:fe:16:6e:4d:10:35:dd:67:11 id:kubernetes.k8s.infradev.io-e3:6b:c5:df:a1:e3:ad:fe:16:6e:4d:10:35:dd:67:11
  	SecurityGroups      	[name:masters.k8s.infradev.io]
  	AssociatePublicIP   	true
  	IAMInstanceProfile  	name:masters.k8s.infradev.io id:masters.k8s.infradev.io
  	RootVolumeSize      	20
  	RootVolumeType      	gp2
  	SpotPrice

  LaunchConfiguration/nodes.k8s.infradev.io
  	ImageID             	kope.io/k8s-1.5-debian-jessie-amd64-hvm-ebs-2017-01-09
  	InstanceType        	t2.medium
  	SSHKey              	name:kubernetes.k8s.infradev.io-e3:6b:c5:df:a1:e3:ad:fe:16:6e:4d:10:35:dd:67:11 id:kubernetes.k8s.infradev.io-e3:6b:c5:df:a1:e3:ad:fe:16:6e:4d:10:35:dd:67:11
  	SecurityGroups      	[name:nodes.k8s.infradev.io]
  	AssociatePublicIP   	true
  	IAMInstanceProfile  	name:nodes.k8s.infradev.io id:nodes.k8s.infradev.io
  	RootVolumeSize      	20
  	RootVolumeType      	gp2
  	SpotPrice

  ManagedFile/k8s.infradev.io-addons-bootstrap
  	Location            	addons/bootstrap-channel.yaml

  ManagedFile/k8s.infradev.io-addons-core.addons.k8s.io
  	Location            	addons/core.addons.k8s.io/v1.4.0.yaml

  ManagedFile/k8s.infradev.io-addons-dns-controller.addons.k8s.io
  	Location            	addons/dns-controller.addons.k8s.io/v1.5.1.yaml

  ManagedFile/k8s.infradev.io-addons-kube-dns.addons.k8s.io
  	Location            	addons/kube-dns.addons.k8s.io/v1.5.1.yaml

  ManagedFile/k8s.infradev.io-addons-limit-range.addons.k8s.io
  	Location            	addons/limit-range.addons.k8s.io/v1.5.0.yaml

  ManagedFile/k8s.infradev.io-addons-storage-aws.addons.k8s.io
  	Location            	addons/storage-aws.addons.k8s.io/v1.5.0.yaml

  Route/0.0.0.0/0
  	RouteTable          	name:k8s.infradev.io
  	CIDR                	0.0.0.0/0
  	InternetGateway     	name:k8s.infradev.io

  RouteTable/k8s.infradev.io
  	VPC                 	name:k8s.infradev.io

  RouteTableAssociation/eu-west-1a.k8s.infradev.io
  	RouteTable          	name:k8s.infradev.io
  	Subnet              	name:eu-west-1a.k8s.infradev.io

  SSHKey/kubernetes.k8s.infradev.io-e3:6b:c5:df:a1:e3:ad:fe:16:6e:4d:10:35:dd:67:11
  	KeyFingerprint      	b9:09:b6:4c:49:ac:ce:cb:5a:f2:a5:79:b6:2e:e7:65

  Secret/admin

  Secret/kube

  Secret/kube-proxy

  Secret/kubelet

  Secret/system-controller_manager

  Secret/system-dns

  Secret/system-logging

  Secret/system-monitoring

  Secret/system-scheduler

  SecurityGroup/masters.k8s.infradev.io
  	Description         	Security group for masters
  	VPC                 	name:k8s.infradev.io
  	RemoveExtraRules    	[port=22, port=443, port=4001, port=4789, port=179]

  SecurityGroup/nodes.k8s.infradev.io
  	Description         	Security group for nodes
  	VPC                 	name:k8s.infradev.io
  	RemoveExtraRules    	[port=22]

  SecurityGroupRule/all-master-to-master
  	SecurityGroup       	name:masters.k8s.infradev.io
  	SourceGroup         	name:masters.k8s.infradev.io

  SecurityGroupRule/all-master-to-node
  	SecurityGroup       	name:nodes.k8s.infradev.io
  	SourceGroup         	name:masters.k8s.infradev.io

  SecurityGroupRule/all-node-to-node
  	SecurityGroup       	name:nodes.k8s.infradev.io
  	SourceGroup         	name:nodes.k8s.infradev.io

  SecurityGroupRule/https-external-to-master-x.x.x.x/32
  	SecurityGroup       	name:masters.k8s.infradev.io
  	CIDR                	x.x.x.x/32
  	Protocol            	tcp
  	FromPort            	443
  	ToPort              	443

  SecurityGroupRule/master-egress
  	SecurityGroup       	name:masters.k8s.infradev.io
  	CIDR                	0.0.0.0/0
  	Egress              	true

  SecurityGroupRule/node-egress
  	SecurityGroup       	name:nodes.k8s.infradev.io
  	CIDR                	0.0.0.0/0
  	Egress              	true

  SecurityGroupRule/node-to-master-tcp-4194
  	SecurityGroup       	name:masters.k8s.infradev.io
  	Protocol            	tcp
  	FromPort            	4194
  	ToPort              	4194
  	SourceGroup         	name:nodes.k8s.infradev.io

  SecurityGroupRule/node-to-master-tcp-443
  	SecurityGroup       	name:masters.k8s.infradev.io
  	Protocol            	tcp
  	FromPort            	443
  	ToPort              	443
  	SourceGroup         	name:nodes.k8s.infradev.io

  SecurityGroupRule/ssh-external-to-master-x.x.x.x/32
  	SecurityGroup       	name:masters.k8s.infradev.io
  	CIDR                	x.x.x.x/32
  	Protocol            	tcp
  	FromPort            	22
  	ToPort              	22

  SecurityGroupRule/ssh-external-to-node-x.x.x.x/32
  	SecurityGroup       	name:nodes.k8s.infradev.io
  	CIDR                	x.x.x.x/32
  	Protocol            	tcp
  	FromPort            	22
  	ToPort              	22

  Subnet/eu-west-1a.k8s.infradev.io
  	VPC                 	name:k8s.infradev.io
  	AvailabilityZone    	eu-west-1a
  	CIDR                	172.20.32.0/19
  	Shared              	false

  VPC/k8s.infradev.io
  	CIDR                	172.20.0.0/16
  	EnableDNSHostnames  	true
  	EnableDNSSupport    	true
  	Shared              	false

  VPCDHCPOptionsAssociation/k8s.infradev.io
  	VPC                 	name:k8s.infradev.io
  	DHCPOptions         	name:k8s.infradev.io

Must specify --yes to apply changes

Cluster configuration has been created.

Suggestions:
 * list clusters with: kops get cluster
 * edit this cluster with: kops edit cluster k8s.infradev.io
 * edit your node instance group: kops edit ig --name=k8s.infradev.io nodes
 * edit your master instance group: kops edit ig --name=k8s.infradev.io master-eu-west-1a

Finally configure your cluster with: kops update cluster k8s.infradev.io --yes

As you can see from the output above, kops will create and manage the following resources for your new Kubernetes cluster:

  1. AWS
  • AutoscalingGroups
  • DHCPOptions
  • EBSVolumes
  • IAMInstanceProfiles
  • IAMInstanceProfileRoles
  • IAMRoles
  • IAMRolePolicies
  • InternetGateway
  • LaunchConfigurations
  • Route
  • RouteTable
  • RouteTableAssociation
  • SecurityGroups
  • Subnet
  • VPC
  • VPCDHCPOptionsAssociation
  2. Kubernetes
  • Keypairs
  • ManagedFiles
  • SSHKey
  • Secrets

Let’s have a look at the files generated by our previous command:

aws s3 ls crisci-kops-state/k8s.infradev.io/ --recursive

2017-02-27 11:02:43       4659 k8s.infradev.io/cluster.spec
2017-02-27 11:02:42        837 k8s.infradev.io/config
2017-02-27 11:02:43        288 k8s.infradev.io/instancegroup/master-eu-west-1a
2017-02-27 11:02:43        274 k8s.infradev.io/instancegroup/nodes
2017-02-27 11:02:43        416 k8s.infradev.io/pki/ssh/public/admin/e36bc5dfa1e3adfe166e4d1035dd6711

One of them is cluster.spec, which contains the full details of the cluster that will be created:

aws s3 cp s3://crisci-kops-state/k8s.infradev.io/cluster.spec -
metadata:
  Annotations: null
  ClusterName: ""
  CreationTimestamp: null
  DeletionGracePeriodSeconds: null
  DeletionTimestamp: null
  Finalizers: null
  GenerateName: ""
  Generation: 0
  Labels: null
  Name: k8s.infradev.io
  Namespace: ""
  OwnerReferences: null
  ResourceVersion: ""
  SelfLink: ""
  UID: ""
spec:
  api:
    dns: {}
  channel: stable
  cloudProvider: aws
  clusterDNSDomain: cluster.local
  configBase: s3://crisci-kops-state/k8s.infradev.io
  configStore: s3://crisci-kops-state/k8s.infradev.io
  dnsZone: Z33Q1172C7EFWF
  docker:
    bridge: ""
    ipMasq: false
    ipTables: false
    logLevel: warn
    storage: overlay,aufs
    version: 1.12.3
  etcdClusters:
  - etcdMembers:
    - instanceGroup: master-eu-west-1a
      name: a
    name: main
  - etcdMembers:
    - instanceGroup: master-eu-west-1a
      name: a
    name: events
  keyStore: s3://crisci-kops-state/k8s.infradev.io/pki
  kubeAPIServer:
    address: 127.0.0.1
    admissionControl:
    - NamespaceLifecycle
    - LimitRanger
    - ServiceAccount
    - PersistentVolumeLabel
    - DefaultStorageClass
    - ResourceQuota
    allowPrivileged: true
    anonymousAuth: false
    apiServerCount: 1
    basicAuthFile: /srv/kubernetes/basic_auth.csv
    clientCAFile: /srv/kubernetes/ca.crt
    cloudProvider: aws
    etcdServers:
    - http://127.0.0.1:4001
    etcdServersOverrides:
    - /events#http://127.0.0.1:4002
    image: gcr.io/google_containers/kube-apiserver:v1.5.2
    kubeletPreferredAddressTypes:
    - InternalIP
    - Hostname
    - ExternalIP
    - LegacyHostIP
    logLevel: 2
    pathSrvKubernetes: /srv/kubernetes
    pathSrvSshproxy: /srv/sshproxy
    securePort: 443
    serviceClusterIPRange: 100.64.0.0/13
    storageBackend: etcd2
    tlsCertFile: /srv/kubernetes/server.cert
    tlsPrivateKeyFile: /srv/kubernetes/server.key
    tokenAuthFile: /srv/kubernetes/known_tokens.csv
  kubeControllerManager:
    allocateNodeCIDRs: true
    attachDetachReconcileSyncPeriod: 1m0s
    cloudProvider: aws
    clusterCIDR: 100.96.0.0/11
    clusterName: k8s.infradev.io
    configureCloudRoutes: true
    image: gcr.io/google_containers/kube-controller-manager:v1.5.2
    leaderElection:
      leaderElect: true
    logLevel: 2
    master: 127.0.0.1:8080
    pathSrvKubernetes: /srv/kubernetes
    rootCAFile: /srv/kubernetes/ca.crt
    serviceAccountPrivateKeyFile: /srv/kubernetes/server.key
  kubeDNS:
    domain: cluster.local
    image: gcr.io/google_containers/kubedns-amd64:1.3
    replicas: 2
    serverIP: 100.64.0.10
  kubeProxy:
    cpuRequest: 100m
    image: gcr.io/google_containers/kube-proxy:v1.5.2
    logLevel: 2
    master: https://api.internal.k8s.infradev.io
  kubeScheduler:
    image: gcr.io/google_containers/kube-scheduler:v1.5.2
    leaderElection:
      leaderElect: true
    logLevel: 2
    master: 127.0.0.1:8080
  kubelet:
    allowPrivileged: true
    apiServers: https://api.internal.k8s.infradev.io
    babysitDaemons: true
    cgroupRoot: docker
    cloudProvider: aws
    clusterDNS: 100.64.0.10
    clusterDomain: cluster.local
    enableDebuggingHandlers: true
    evictionHard: memory.available<100Mi,nodefs.available<10%,nodefs.inodesFree<5%,imagefs.available<10%,imagefs.inodesFree<5%
    evictionPressureTransitionPeriod: 0s
    hostnameOverride: '@aws'
    logLevel: 2
    networkPluginMTU: 9001
    networkPluginName: kubenet
    nonMasqueradeCIDR: 100.64.0.0/10
    podManifestPath: /etc/kubernetes/manifests
  kubernetesApiAccess:
  - x.x.x.x/32
  kubernetesVersion: 1.5.2
  masterInternalName: api.internal.k8s.infradev.io
  masterKubelet:
    allowPrivileged: true
    apiServers: http://127.0.0.1:8080
    babysitDaemons: true
    cgroupRoot: docker
    cloudProvider: aws
    clusterDNS: 100.64.0.10
    clusterDomain: cluster.local
    enableDebuggingHandlers: true
    evictionHard: memory.available<100Mi,nodefs.available<10%,nodefs.inodesFree<5%,imagefs.available<10%,imagefs.inodesFree<5%
    evictionPressureTransitionPeriod: 0s
    hostnameOverride: '@aws'
    logLevel: 2
    networkPluginMTU: 9001
    networkPluginName: kubenet
    nonMasqueradeCIDR: 100.64.0.0/10
    podManifestPath: /etc/kubernetes/manifests
    registerSchedulable: false
  masterPublicName: api.k8s.infradev.io
  networkCIDR: 172.20.0.0/16
  networking:
    kubenet: {}
  nonMasqueradeCIDR: 100.64.0.0/10
  secretStore: s3://crisci-kops-state/k8s.infradev.io/secrets
  serviceClusterIPRange: 100.64.0.0/13
  sshAccess:
  - x.x.x.x/32
  subnets:
  - cidr: 172.20.32.0/19
    name: eu-west-1a
    type: Public
    zone: eu-west-1a
  topology:
    dns:
      type: Public
    masters: public
    nodes: public
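
You normally don't edit these files in S3 by hand; any of these settings can be changed through kops itself, which rewrites the state store for you (as hinted in the suggestions above):

kops edit cluster ${NAME}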

Update the cluster

We now need to apply the cluster settings created earlier and stored inside our S3 bucket:

kops update cluster ${NAME} --yes
I0227 12:52:22.377172   41877 executor.go:91] Tasks: 0 done / 55 total; 27 can run
I0227 12:52:23.892835   41877 vfs_castore.go:422] Issuing new certificate: "kubelet"
I0227 12:52:23.979435   41877 vfs_castore.go:422] Issuing new certificate: "kubecfg"
I0227 12:52:24.453922   41877 vfs_castore.go:422] Issuing new certificate: "master"
I0227 12:52:26.781553   41877 executor.go:91] Tasks: 27 done / 55 total; 12 can run
I0227 12:52:28.123472   41877 executor.go:91] Tasks: 39 done / 55 total; 14 can run
I0227 12:52:30.072841   41877 launchconfiguration.go:302] waiting for IAM instance profile "nodes.k8s.infradev.io" to be ready
I0227 12:52:30.112320   41877 launchconfiguration.go:302] waiting for IAM instance profile "masters.k8s.infradev.io" to be ready
I0227 12:52:40.911275   41877 executor.go:91] Tasks: 53 done / 55 total; 2 can run
I0227 12:52:41.401692   41877 executor.go:91] Tasks: 55 done / 55 total; 0 can run
I0227 12:52:41.401765   41877 dns.go:140] Pre-creating DNS records
I0227 12:52:46.657056   41877 update_cluster.go:204] Exporting kubecfg for cluster
Wrote config for k8s.infradev.io to "/Users/lcrisci/.kube/config"
Kops has set your kubectl context to k8s.infradev.io

Cluster is starting.  It should be ready in a few minutes.

Suggestions:
 * list nodes: kubectl get nodes --show-labels
 * ssh to the master: ssh -i ~/.ssh/id_rsa admin@api.k8s.infradev.io
 * read about installing addons: https://github.com/kubernetes/kops/blob/master/docs/addons.md

Validate the cluster

kops validate cluster
Using cluster from kubectl context: k8s.infradev.io

Validating cluster k8s.infradev.io


cannot get nodes for "k8s.infradev.io": Get https://api.k8s.infradev.io/api/v1/nodes: dial tcp 203.0.113.123:443: i/o timeout

Our cluster is not yet ready and is still being created (the API server is not responding yet). 203.0.113.123 is a placeholder IP from the TEST-NET range, which kops uses as a dummy record until the master(s) and node(s) bootstrap properly and update their own IPs.
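
A quick way to watch for the real record replacing the placeholder is to query the API hostname directly; a small sketch, assuming dig is available locally:

dig +short api.k8s.infradev.io
# keeps returning 203.0.113.123 until dns-controller publishes the master's real public IP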

This is the output that we get after rerunning our validate command 5 minutes later:

Using cluster from kubectl context: k8s.infradev.io

Validating cluster k8s.infradev.io

INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-1a	Master	m3.medium	1	1	eu-west-1a
nodes			Node	t2.medium	1	1	eu-west-1a

NODE STATUS
NAME						ROLE	READY
ip-172-20-37-30.eu-west-1.compute.internal	node	True
ip-172-20-63-122.eu-west-1.compute.internal	master	True

Your cluster k8s.infradev.io is ready

Let's have a look at the state of our S3 bucket:

aws s3 ls crisci-kops-state/k8s.infradev.io/ --recursive
2017-02-27 12:52:24        882 k8s.infradev.io/addons/bootstrap-channel.yaml
2017-02-27 12:52:24         61 k8s.infradev.io/addons/core.addons.k8s.io/v1.4.0.yaml
2017-02-27 12:52:24       1152 k8s.infradev.io/addons/dns-controller.addons.k8s.io/v1.5.1.yaml
2017-02-27 12:52:25       6977 k8s.infradev.io/addons/kube-dns.addons.k8s.io/v1.5.1.yaml
2017-02-27 12:52:24        166 k8s.infradev.io/addons/limit-range.addons.k8s.io/v1.5.0.yaml
2017-02-27 12:52:24        309 k8s.infradev.io/addons/storage-aws.addons.k8s.io/v1.5.0.yaml
2017-02-27 12:52:23       4677 k8s.infradev.io/cluster.spec
2017-02-27 11:02:42        837 k8s.infradev.io/config
2017-02-27 12:52:23        339 k8s.infradev.io/instancegroup/master-eu-west-1a
2017-02-27 12:52:23        325 k8s.infradev.io/instancegroup/nodes
2017-02-27 12:52:26       1046 k8s.infradev.io/pki/issued/ca/6391754630877149692432351200.crt
2017-02-27 12:52:27       1062 k8s.infradev.io/pki/issued/kubecfg/6391754626389335158749102511.crt
2017-02-27 12:52:27       1062 k8s.infradev.io/pki/issued/kubelet/6391754626058396578042878673.crt
2017-02-27 12:52:27       1302 k8s.infradev.io/pki/issued/master/6391754626239010186701887277.crt
2017-02-27 12:52:26       1675 k8s.infradev.io/pki/private/ca/6391754630877149692432351200.key
2017-02-27 12:52:26       1675 k8s.infradev.io/pki/private/kubecfg/6391754626389335158749102511.key
2017-02-27 12:52:26       1679 k8s.infradev.io/pki/private/kubelet/6391754626058396578042878673.key
2017-02-27 12:52:26       1675 k8s.infradev.io/pki/private/master/6391754626239010186701887277.key
2017-02-27 11:02:43        416 k8s.infradev.io/pki/ssh/public/admin/e36bc5dfa1e3adfe166e4d1035dd6711
2017-02-27 12:52:27         55 k8s.infradev.io/secrets/admin
2017-02-27 12:52:26         55 k8s.infradev.io/secrets/kube
2017-02-27 12:52:25         55 k8s.infradev.io/secrets/kube-proxy
2017-02-27 12:52:27         55 k8s.infradev.io/secrets/kubelet
2017-02-27 12:52:27         55 k8s.infradev.io/secrets/system:controller_manager
2017-02-27 12:52:26         55 k8s.infradev.io/secrets/system:dns
2017-02-27 12:52:26         55 k8s.infradev.io/secrets/system:logging
2017-02-27 12:52:27         55 k8s.infradev.io/secrets/system:monitoring
2017-02-27 12:52:26         55 k8s.infradev.io/secrets/system:scheduler

You will notice that new directories and their files were created by the update step:

  • k8s.infradev.io/secrets/ contains a list of tokens generated per user/service

  • k8s.infradev.io/pki/ contains issued client certificates and private keys

  • k8s.infradev.io/addons/ contains addons installed by kops
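
Any of these objects can be dumped straight from S3, the same way we dumped cluster.spec earlier. For example, the addons bootstrap channel:

aws s3 cp s3://crisci-kops-state/k8s.infradev.io/addons/bootstrap-channel.yaml -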

kubectl on the mic

Our kubectl context should now be pointing to our new cluster:

kubectl config get-contexts
CURRENT   NAME              CLUSTER           AUTHINFO          NAMESPACE
*         k8s.infradev.io   k8s.infradev.io   k8s.infradev.io
          minikube          minikube          minikube
kubectl cluster-info
Kubernetes master is running at https://api.k8s.infradev.io
KubeDNS is running at https://api.k8s.infradev.io/api/v1/proxy/namespaces/kube-system/services/kube-dns

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
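
Another quick sanity check is the health of the control-plane components (a short sketch; scheduler, controller-manager and etcd should all report Healthy):

kubectl get componentstatuses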

A quick deployment to test our setup:

kubectl run nginx --image=nginx --port=80
kubectl get all
NAME                        READY     STATUS    RESTARTS   AGE
po/nginx-3449338310-bxbjb   1/1       Running   0          50s

NAME             CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
svc/kubernetes   100.64.0.1   <none>        443/TCP   3m

NAME           DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deploy/nginx   1         1         1            1           50s

NAME                  DESIRED   CURRENT   READY     AGE
rs/nginx-3449338310   1         1         1         50s

Let's forward the pod's port locally and test our container:

kubectl port-forward nginx-3449338310-bxbjb 8081:80

In another terminal, type the following:

curl -I localhost:8081

You should get the following output (a 200 OK):

HTTP/1.1 200 OK
Server: nginx/1.11.10
Date: Tue, 28 Feb 2017 09:35:01 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 14 Feb 2017 15:36:04 GMT
Connection: keep-alive
ETag: "58a323e4-264"
Accept-Ranges: bytes
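
Once you're happy that the cluster schedules pods and serves traffic, the test deployment can be cleaned up:

kubectl delete deployment nginx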

It's your turn, kops

Let’s have a look at our current instancegroups:

kops get ig
Using cluster from kubectl context: k8s.infradev.io

NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-1a	Master	m3.medium	1	1	eu-west-1a
nodes			Node	t2.medium	1	1	eu-west-1a

Adding a new node inside our cluster

kops edit ig nodes

You should see a similar view:

apiVersion: kops/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: "2017-02-28T08:08:10Z"
  labels:
    kops.k8s.io/cluster: k8s.infradev.io
  name: nodes
spec:
  image: kope.io/k8s-1.5-debian-jessie-amd64-hvm-ebs-2017-01-09
  machineType: t2.medium
  maxSize: 1
  minSize: 1
  role: Node
  subnets:
  - eu-west-1a

We simply need to bump the minSize/maxSize entries by one before we can update our cluster with an extra node:

apiVersion: kops/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: "2017-02-28T08:08:10Z"
  labels:
    kops.k8s.io/cluster: k8s.infradev.io
  name: nodes
spec:
  image: kope.io/k8s-1.5-debian-jessie-amd64-hvm-ebs-2017-01-09
  machineType: t2.medium
  maxSize: 2
  minSize: 2
  role: Node
  subnets:
  - eu-west-1a

Let’s do a quick dry-run to verify that our last edit will be correctly applied:

kops update cluster
Using cluster from kubectl context: k8s.infradev.io

I0228 09:26:01.246915   56008 executor.go:91] Tasks: 0 done / 55 total; 27 can run
I0228 09:26:02.859778   56008 executor.go:91] Tasks: 27 done / 55 total; 12 can run
I0228 09:26:03.455066   56008 executor.go:91] Tasks: 39 done / 55 total; 14 can run
I0228 09:26:05.044053   56008 executor.go:91] Tasks: 53 done / 55 total; 2 can run
I0228 09:26:05.160390   56008 executor.go:91] Tasks: 55 done / 55 total; 0 can run
Will create resources:
  IAMRolePolicy/additional.masters.k8s.infradev.io
  	Role                	name:masters.k8s.infradev.io id:AROAJRW7FGIS3QZOECW26

  IAMRolePolicy/additional.nodes.k8s.infradev.io
  	Role                	name:nodes.k8s.infradev.io id:AROAJ73WUQMRUWZQ4EYU4

Will modify resources:
  AutoscalingGroup/nodes.k8s.infradev.io
  	MinSize             	 1 -> 2
  	MaxSize             	 1 -> 2

Must specify --yes to apply changes

That looks good to me, let’s update our cluster:

kops update cluster --yes
Using cluster from kubectl context: k8s.infradev.io

I0228 09:26:37.155218   56196 executor.go:91] Tasks: 0 done / 55 total; 27 can run
I0228 09:26:38.504246   56196 executor.go:91] Tasks: 27 done / 55 total; 12 can run
I0228 09:26:39.285901   56196 executor.go:91] Tasks: 39 done / 55 total; 14 can run
I0228 09:26:40.634271   56196 executor.go:91] Tasks: 53 done / 55 total; 2 can run
I0228 09:26:40.869591   56196 executor.go:91] Tasks: 55 done / 55 total; 0 can run
I0228 09:26:40.869646   56196 dns.go:140] Pre-creating DNS records
I0228 09:26:41.571104   56196 update_cluster.go:204] Exporting kubecfg for cluster
Wrote config for k8s.infradev.io to "/Users/lcrisci/.kube/config"
Kops has set your kubectl context to k8s.infradev.io

Cluster changes have been applied to the cloud.


Changes may require instances to restart: kops rolling-update cluster

We don't need a rolling update in this case, so all we have to do is be patient until the new node gets provisioned:

kubectl get nodes -o wide
NAME                                          STATUS         AGE       EXTERNAL-IP
ip-172-20-33-218.eu-west-1.compute.internal   Ready          1m        34.251.a.b
ip-172-20-40-211.eu-west-1.compute.internal   Ready          15m       34.251.c.d
ip-172-20-53-218.eu-west-1.compute.internal   Ready,master   17m       34.251.e.f

So we've got a new node as part of our default nodes instance group, which we can connect to via SSH:

ssh -i ~/.ssh/kops_rsa admin@34.251.a.b
The authenticity of host '34.251.a.b (34.251.a.b)' can't be established.
ECDSA key fingerprint is SHA256:6q+Fidr35O11vCU3AUgNrvtNTMWLvFrlWXxM01YKw3s.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '34.251.a.b' (ECDSA) to the list of known hosts.

The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
_____________________________________________________________________
WARNING! Your environment specifies an invalid locale.
 This can affect your user experience significantly, including the
 ability to manage packages. You may install the locales by running:

   sudo apt-get install language-pack-en
     or
   sudo locale-gen en_GB.UTF-8

To see all available language packs, run:
   apt-cache search "^language-pack-[a-z][a-z]$"
To disable this message for all users, run:
   sudo touch /var/lib/cloud/instance/locale-check.skip
_____________________________________________________________________

A quick check of the uptime just because I love to run commands:

admin@ip-172-20-33-218:~$ uptime
 08:31:15 up 3 min,  1 user,  load average: 0.19, 0.28, 0.13

This node is definitely new and ready to run our containers.
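
If you want one more confirmation from the node itself, you can check that the kubelet and its containers are already running; a sketch, assuming the default systemd unit and the Docker runtime shipped with the kope.io image:

admin@ip-172-20-33-218:~$ systemctl status kubelet --no-pager
admin@ip-172-20-33-218:~$ sudo docker ps --format '{{.Names}}' | head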

Delete the cluster

kops delete cluster k8s.infradev.io --yes
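
Note that, as with update, running the same command without --yes first gives you a preview of every AWS resource that will be removed before you commit to the deletion:

kops delete cluster k8s.infradev.io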

Summary

I showed you what a simple default install of kops on AWS looks like. It's quite straightforward, and kops is already very advanced on the AWS integration side.

kops supports many different flags when creating a cluster. One of the most important for an advanced cluster is --networking, which installs an overlay network and makes it possible to define complex network policies and rules for interactions between different containers.
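
For example, the create command from earlier could be pointed at one of the supported CNI providers; a sketch using Weave (Calico and others are also available):

kops create cluster --networking weave --admin-access "$(curl -s icanhazip.com)/32" --cloud aws --zones eu-west-1a --node-count 1 --ssh-public-key ~/.ssh/kops_rsa.pub ${NAME}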

I'm planning to write more advanced posts around kops and automation in general. They will also cover advanced networking setups and different HA/autoscaling and backup/restore scenarios.

Resources

  • Slack: http://slack.k8s.io/ (#kops)
  • GitHub: https://github.com/kubernetes/kops
  • Docs: https://github.com/kubernetes/kops/tree/master/docs

 