Kubernetes with KubeAdm, Ansible and Vagrant

Kubernetes is a complex system, and it is notoriously hard to install, not least because there are many ways to install it.

As a result, there are tons of “out-of-the-box” solutions. Many cloud providers offer it preinstalled. There are also distributions like CoreOS offering ready-to-go solutions at the operating-system level, and all-in-one installers like kops for AWS.

However, I believe it is important to learn how to install it from scratch, step by step. Of course, you do not have to install it when it is provided pre-installed (typically in clouds). But if you have to manage it, not understanding its installation is a serious limitation. How are you supposed to troubleshoot it if you cannot get it off the ground without an “automated pilot”?

So here is my tutorial describing how to install Kubernetes from scratch in a development environment, using CentOS 7 and Vagrant. Deployment is further automated with Ansible. The procedure has been tested on OS X and on Windows 10 with Windows Bash.

Prepare the servers

Let’s start by creating the base environment described in the picture.

At a minimum you need a master node and one worker node, but a single worker is of little use, so I recommend at least three workers. Master and workers must be able to see each other, using stable IPs.

Here is a Vagrantfile that deploys this setup.

domain   = 'kube'

# use two digits id below, please
nodes = [
  { :hostname => 'master', :ip => '10.0.0.10', :id => '10' },
  { :hostname => 'node1',  :ip => '10.0.0.11', :id => '11' },
  { :hostname => 'node2',  :ip => '10.0.0.12', :id => '12' },
  { :hostname => 'node3',  :ip => '10.0.0.13', :id => '13' },
]

memory = 2000

$script = <<SCRIPT
sudo mv hosts /etc/hosts
chmod 0600 /home/vagrant/.ssh/id_rsa
usermod -a -G vagrant ubuntu
cp -Rvf /home/vagrant/.ssh /home/ubuntu
chown -Rvf ubuntu /home/ubuntu
apt-get -y update
apt-get -y install python-minimal python-apt
SCRIPT

Vagrant.configure("2") do |config|
  config.ssh.insert_key = false
  nodes.each do |node|
    config.vm.define node[:hostname] do |nodeconfig|
      nodeconfig.vm.box = "ubuntu/xenial64"
      nodeconfig.vm.hostname = node[:hostname]
      nodeconfig.vm.network :private_network, ip: node[:ip], virtualbox__intnet: domain
      nodeconfig.vm.provider :virtualbox do |vb|
        vb.name = node[:hostname]+"."+domain
        vb.memory = memory
        vb.cpus = 1
        vb.customize ['modifyvm', :id, '--natdnshostresolver1', 'on']
        vb.customize ['modifyvm', :id, '--natdnsproxy1', 'on']
        vb.customize ['modifyvm', :id, '--macaddress1', "5CA1AB1E00"+node[:id]]
        vb.customize ['modifyvm', :id, '--natnet1', "192.168/16"]
      end
      nodeconfig.vm.provision "file", source: "hosts", destination: "hosts"
      nodeconfig.vm.provision "file", source: "~/.vagrant.d/insecure_private_key", destination: "/home/vagrant/.ssh/id_rsa"
      nodeconfig.vm.provision "shell", inline: $script
    end
  end
end

Note that the Vagrantfile also installs a hosts file (located in the same directory) with the following contents:

127.0.0.1 localhost localhost.localdomain
10.0.0.10 master.kube master
10.0.0.11 node1.kube node1
10.0.0.12 node2.kube node2
10.0.0.13 node3.kube node3

Place those files in a folder on a machine with Vagrant installed and execute vagrant up. You will end up with four nodes in your VirtualBox. Note that each node takes 2 GB, so your machine needs at least 8 GB of memory available for the virtual machines.

Installing the required software

Now that we have the (empty) nodes, we need to install the software required to run Kubernetes.

The Vagrantfile builds the nodes and installs an SSH key that allows access to them without a password; this is the setup required for using Ansible. The second step is the installation of the packages required by Kubernetes on the nodes. This task is performed by Ansible.

Note that before executing it, you need to set up SSH to be able to access the nodes without a password. You can do this easily with Vagrant: run vagrant ssh-config >~/.ssh/config and then add to your hosts file the line:

127.0.0.1 master node1 node2 node3

You should then be able to access the servers with ssh vagrant@master, ssh vagrant@node1 and so on. If this works, you are ready to execute Ansible. For clarity, I split the Ansible scripts into a series of snippets that you can put together with a main script and include_tasks. The whole script is available on GitHub, however.
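As a sketch of how the snippets could be glued together, here is a hypothetical main playbook; the file names, the tasks/ layout and the when conditions are assumptions for illustration, not the actual repository layout:

```yaml
# kube.yml - hypothetical main playbook tying the snippets together
- hosts: all
  become: yes
  vars:
    master_hostname: master
  tasks:
    - include_tasks: tasks/packages.yml   # install docker, kubelet, kubeadm, kubectl, ntp
    - include_tasks: tasks/config.yml     # kernel modules, sysctl, services
    - include_tasks: tasks/master.yml     # kubeadm init
      when: inventory_hostname == master_hostname
    - include_tasks: tasks/network.yml    # weave plugin
      when: inventory_hostname == master_hostname
    - include_tasks: tasks/join.yml       # kubeadm join on the nodes
```

You would run it with something like ansible-playbook -i hosts kube.yml, where the inventory defines the master and a nodes group: the last snippet below relies on a groups['nodes'] inventory group and on the master_hostname variable.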

The first snippet installs the required software:

- shell: "apt-get -y update && apt-get install -y apt-transport-https"
- apt_repository:
    repo: 'deb http://apt.kubernetes.io/ kubernetes-xenial main'
    state: present
- name: install docker and kubernetes
  apt: name={{ item }} state=present allow_unauthenticated=yes
  with_items:
    - docker.io
    - kubelet
    - kubeadm
    - kubectl
    - ntp

The key services are Docker and the kubelet. Docker is the essential service, because Kubernetes is an orchestrator of Docker containers: everything is built on top of Docker. The kubelet is the main Kubernetes service, since it manages Docker according to Kubernetes rules.

We also need two command-line clients: kubectl, the Kubernetes command-line client, and kubeadm, the command-line installer. Finally, we install the ntp time service, because Kubernetes requires the clocks of all the nodes to be synchronized.

Once you have installed the packages, a few configurations are mandatory. This is the Ansible script that performs them:

- command: modprobe {{item}}
  with_items:
    - ip_vs
    - ip_vs_rr
    - ip_vs_wrr
    - ip_vs_sh
    - nf_conntrack_ipv4
- lineinfile: path=/etc/modules line='{{item}}' create=yes state=present
  with_items:
    - ip_vs
    - ip_vs_rr
    - ip_vs_wrr
    - ip_vs_sh
    - nf_conntrack_ipv4
- sysctl: name=net.ipv4.ip_forward value=1 state=present reload=yes sysctl_set=yes
- service: name=docker state=started enabled=yes
- service: name=ntp state=started enabled=yes
- service: name=kubelet state=started enabled=yes

Basically, here we make sure that a few kernel modules used by Kubernetes are loaded (now and at every boot) and that the core services are started.
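After the playbook has run, you can spot-check any node by hand. This is a quick sanity check, not part of the automated setup, and it only makes sense on a provisioned node:

```shell
# Confirm the modules are loaded now...
lsmod | grep ip_vs
# ...and registered for loading at boot
grep ip_vs /etc/modules
# IP forwarding must be on (expect a value of 1 on a configured node)
sysctl net.ipv4.ip_forward
# The core services must be active
systemctl is-active docker ntp kubelet
```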

Configuring master

Once the nodes are ready and configured, with the core services installed and running, we can install the Kubernetes master.

Kubernetes includes a number of services running inside Docker, so once Docker is started, you can complete the installation of the master by deploying the appropriate Docker images.

This operation is performed by the kubeadm tool. All you need to do is run kubeadm init. Note that it exposes a server (the apiserver) to the other services at a given IP. If you have more than one interface, the advertised address may not be what you expect (as happens in our VirtualBox virtual machines), so we are going to specify the address explicitly. Since we know the IPs of the virtual machines in our cluster, we can provide the IP statically.

kubeadm then installs all the required services in Docker, and it runs for a while. It installs:

  • etcd, storing the cluster configuration
  • the apiserver, answering kubectl requests
  • the controller manager, managing the cluster
  • the scheduler, deciding where containers should be placed
  • dns, implementing service discovery
  • the proxy, locating services in the cluster

Once kubeadm completes its work, its output includes a command that must be executed on each of the other nodes to connect them to the master.
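As a sketch, this is the idea behind capturing the last line of the kubeadm init output, which is what the Ansible task does with stdout_lines[-1]. The output below is a fabricated placeholder (the real output is much longer, and &lt;token&gt; stands in for the generated secret):

```shell
# Simulated tail of the "kubeadm init" output (placeholder, not real output)
out='Your Kubernetes master has initialized successfully!
You can now join any number of machines by running the following on each node:
  kubeadm join --token <token> 10.0.0.10:6443'

# Keep only the last line containing the join command
join_cmd=$(printf '%s\n' "$out" | grep 'kubeadm join' | tail -n 1)
echo "$join_cmd"
```

Saving that line to a script file is what makes it easy to replay later on each node.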

The Ansible file to deploy the master is:

- lineinfile: dest=/etc/sysctl.conf line='net.bridge.bridge-nf-call-ip6tables = 1' state=present
- lineinfile: dest=/etc/sysctl.conf line='net.bridge.bridge-nf-call-iptables = 1' state=present
- name: initialize kube
  shell: >
    kubeadm reset &&
    sysctl -p &&
    kubeadm init --apiserver-advertise-address=10.0.0.10 --pod-network-cidr=10.244.0.0/16
  args:
    creates: /etc/kubeadm-join.sh
  register: kubeadm_out
- lineinfile:
    path: /etc/kubeadm-join.sh
    line: "{{kubeadm_out.stdout_lines[-1]}}"
    create: yes
  when: kubeadm_out.stdout.find("kubeadm join") != -1
- service: name=kubelet state=started enabled=yes
- file: name=/etc/kubectl state=directory
- name: fix configmap for proxy
  shell: >
    export KUBECONFIG=/etc/kubernetes/admin.conf ;
    kubectl -n kube-system get cm/kube-proxy -o yaml
    | sed -e 's!clusterCIDR: ""!clusterCIDR: "10.0.0.0/24"!' >/etc/kubectl/kube-proxy.map ;
    kubectl -n kube-system replace cm/kube-proxy -f /etc/kubectl/kube-proxy.map ;
    kubectl -n kube-system delete pods -l k8s-app=kube-proxy
  args:
    creates: /etc/kubectl/kube-proxy.map

Note also a required “fix” here: since in Vagrant we have two network interfaces on different networks, we have to change the configuration of the proxy to tell it on which network the other cluster members are.

Installing a Network plugin

We cannot proceed to deploy the nodes until we have installed another component: the network plugin.

Kubernetes uses a special networking model that extends Docker to connect the containers with a virtual network. It is implemented by many different plugins, each with its own advantages and disadvantages. We picked Weave (probably the most widely used), so before moving to the nodes we complete the installation of the master by installing this networking plugin.

- sysctl: name=net.bridge.bridge-nf-call-ip6tables value=1 state=present reload=yes sysctl_set=yes
- sysctl: name=net.bridge.bridge-nf-call-iptables value=1 state=present reload=yes sysctl_set=yes
- name: install weave net
  shell: >
    export KUBECONFIG=/etc/kubernetes/admin.conf ;
    export kubever=$(sudo kubectl version | base64 | tr -d '\n') ;
    curl --location "https://cloud.weave.works/k8s/net?k8s-version=$kubever" >/etc/kubectl/weave.yml ;
    kubectl apply -f /etc/kubectl/weave.yml
- shell: >
    export KUBECONFIG=/etc/kubernetes/admin.conf ;
    kubectl get pods -n kube-system -l name=weave-net
  register: result
  until: result.stdout.find("Running") != -1
  retries: 100
  delay: 10

Configuring nodes

Once the master is ready and the nodes have the required software installed, we can complete the setup by simply executing the kubeadm join command on each node.

However, nodes communicate with the master over a protected channel: not everyone can connect to the master. So, to join the cluster, you need to provide some secret information. This secret is a token that is generated when you perform kubeadm init on the master.

So, to automate node creation with Ansible, we collect the output of the master after the init and distribute the join command to the nodes, where we execute it. This task is performed by the following Ansible script.

- sysctl: name=net.bridge.bridge-nf-call-ip6tables value=1 state=present reload=yes sysctl_set=yes
- sysctl: name=net.bridge.bridge-nf-call-iptables value=1 state=present reload=yes sysctl_set=yes
- shell: "cat /etc/kubeadm-join.sh"
  register: cat_kubeadm_join
  when: inventory_hostname == master_hostname
- set_fact:
    kubeadm_join: "{{cat_kubeadm_join.stdout}}"
  when: inventory_hostname == master_hostname
- name: join nodes
  shell: >
    systemctl stop kubelet ; kubeadm reset ;
    echo "{{hostvars[master_hostname].kubeadm_join}}" >/etc/kubeadm-join.sh ;
    bash /etc/kubeadm-join.sh
  args:
    creates: /etc/kubeadm-join.sh
  when: inventory_hostname != master_hostname
- name: checking all nodes up
  shell: >
    export KUBECONFIG=/etc/kubernetes/admin.conf ;
    kubectl get nodes {{item}}
  register: result
  until: result.stdout.find("Ready") != -1
  retries: 100
  delay: 10
  with_items: "{{ groups['nodes'] }}"
  when: inventory_hostname == master_hostname

Node initialization also distributes to the nodes two containers that are critical for Kubernetes to work properly: the first is the (already mentioned) networking plugin, and the second is the proxy. The image shows the complete Kubernetes cluster, ready to work.

Now we can control the cluster using the kubectl command on the master, so we will use it to ensure the cluster is up and running and all the nodes are ready. The important point to know is that, to access the cluster, we need to provide kubectl with a configuration file containing all the keys. This file is located at /etc/kubernetes/admin.conf.

We can either copy this file and distribute it to the users, or use it directly, pointing to it with the KUBECONFIG environment variable (but you need to be root in this case). To test whether the cluster is completely up and running we use the second way, but in general it is better to distribute the configuration to non-root users.
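For example, on the master (a sketch following the approach kubeadm itself suggests; it assumes the file is at the path installed above and only works against a live cluster):

```shell
# Quick check as root, pointing kubectl directly at the admin credentials
export KUBECONFIG=/etc/kubernetes/admin.conf
kubectl get nodes

# Or, preferably, copy the configuration once to a regular user:
mkdir -p $HOME/.kube
sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubectl get nodes   # now works without root and without KUBECONFIG
```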

Conclusions

If you followed the tutorial, you now have a Kubernetes cluster up and running in your VirtualBox, and you can play with it by installing components.

All the code to do the installation is available on GitHub.

The repository also contains instructions on how to run the deployment on Windows. Enjoy.