Kubernetes on LXD with Rancher 2.0: Part Two

Andrew Ernst
7 min read · Feb 18, 2021

Author’s note: This article was written at least two years ago, and much of the information may be out of date. However, some of the fundamentals still seemed worth sharing with a wider audience. I make no guarantee that this all works for you, but I did have a functional Kubernetes cluster running in LXD when I was done. I’ve since torn it all down, but lately I’ve heard from people who found my previous articles helpful, even in early 2021. Enjoy. Your mileage may vary. Happy computing!

In my last article, we created an LXD profile called docker that contained the necessary control group configuration and the security.nesting parameter. We can now start the process of creating and running a system container for the Rancher master.

We’re going to level up before creating the Rancher LXD container by augmenting the profile to automatically install docker-ce when it spins up. We’ll take advantage of the built-in cloud-init functionality, which can be embedded in the profile’s user.user-data key under config. To do this, the lxc profile edit docker command will be used.

➜  ~ export EDITOR=emacs # yes, feel free to use vi(m)
➜ ~ lxc profile edit docker

At this point, you’ll have an editor open with the contents of the docker profile that was created in Part One of this series. We’re going to add this block of code one level down, under config:

config:
  user.user-data: |
    #cloud-config
    repo_update: true
    repo_upgrade: all
    packages:
      - docker-ce=17.12*
    apt:
      sources:
        docker.list:
          source: "deb https://download.docker.com/linux/ubuntu $RELEASE stable"
          keyid: 0EBFCD88
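If you’d rather not open an editor at all, a non-interactive alternative is to keep the cloud-init payload in its own file and push it into the profile. This is just a sketch; the file name docker-init.yml is my own choice, and the lxc line (commented out here) assumes you’re on the LXD host:

```shell
# Keep the cloud-init payload in a file and set it as the profile's
# user.user-data key without opening an editor. The heredoc delimiter is
# quoted so $RELEASE is left for cloud-init to expand, not the shell.
cat > docker-init.yml <<'EOF'
#cloud-config
repo_update: true
repo_upgrade: all
packages:
  - docker-ce=17.12*
apt:
  sources:
    docker.list:
      source: "deb https://download.docker.com/linux/ubuntu $RELEASE stable"
      keyid: 0EBFCD88
EOF
# lxc profile set docker user.user-data "$(cat docker-init.yml)"
```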

We’re now ready to launch a container for the Rancher master, and it should have the pinned version of Docker Community Edition installed within about 30 seconds or so. To create the container, we’ll be issuing the launch action to the lxc client as follows:

➜  ~ lxc launch -p docker ubuntu:xenial rancher
Creating rancher
Starting rancher

Translated into English, this command instructs LXD to launch a container named rancher using the ubuntu:xenial image and apply the LXD profile called docker. The cloud-init configuration will fire when the container boots for the first time.

You can tail the logs inside the container to confirm that the packages are being installed by cloud-init, with the command lxc exec rancher -- tail -f /var/log/dpkg.log:

➜  ~ lxc exec rancher -- tail -f /var/log/dpkg.log
2018-01-26 05:56:19 status config-files grub-common:amd64 2.02~beta2-36ubuntu3.15
2018-01-26 05:56:19 status config-files grub-common:amd64 2.02~beta2-36ubuntu3.15
2018-01-26 05:56:19 status not-installed grub-common:amd64 <none>
2018-01-26 05:56:19 trigproc man-db:amd64 2.7.5-1 <none>
...
2018-02-08 23:59:13 status installed docker-ce:amd64 17.06.2~ce-0~ubuntu
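Rather than watching the tail scroll by, you can grep the same log for docker-ce reaching the installed state. A minimal sketch of the pattern, run here against a sample log line (on the container you’d pipe lxc exec rancher -- cat /var/log/dpkg.log into the same grep):

```shell
# Grep the dpkg log for docker-ce hitting the "installed" state. The sample
# line below mirrors the real log entry shown above.
line='2018-02-08 23:59:13 status installed docker-ce:amd64 17.06.2~ce-0~ubuntu'
echo "$line" | grep -q 'status installed docker-ce' && echo 'docker-ce is installed'
```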

At this point, we can grab the command to install Rancher 2.0 from https://rancher.com/rancher2-0/

When installing, I learned that some extra parameters need to be passed to docker run so that the Docker engine doesn’t try to apply its default AppArmor profile, called docker-default.

You can short-cut this by launching the Rancher container as a privileged Docker container, but then you would also have to launch the LXD container as privileged, and I prefer to keep everything running unprivileged.

The --security-opt=apparmor:unconfined flag is passed to the Docker engine to match the unconfined AppArmor state of the LXD container in which it’s running. Thus the command for creating the rancher Docker container is: docker run -d --security-opt=apparmor:unconfined --restart=unless-stopped --name rancher_server -p 80:80 -p 443:443 rancher/rancher

➜  ~ lxc exec rancher -- bash
root@rancher:~# docker run -d --security-opt=apparmor:unconfined --restart=unless-stopped --name rancher_server -p 80:80 -p 443:443 rancher/rancher
Unable to find image 'rancher/rancher:latest' locally
latest: Pulling from rancher/rancher
124c757242f8: Pull complete
2ebc019eb4e2: Pull complete
dac0825f7ffb: Pull complete
82b0bb65d1bf: Pull complete
ef3b655c7f88: Pull complete
437f23e29d12: Pull complete
52931d58c1ce: Pull complete
b930be4ed025: Pull complete
4a2d2c2e821e: Pull complete
9137650edb29: Pull complete
f1660f8f83bf: Pull complete
a645405725ff: Pull complete
Digest: sha256:6d53d3414abfbae44fe43bad37e9da738f3a02e6c00a0cd0c17f7d9f2aee373a
Status: Downloaded newer image for rancher/rancher:latest
e1024269f064ac5927e5ce64a35acad50d59960b1d45e79aec4f976a7332f188

If you want more background on why the --security-opt value is required in this particular situation, take a quick read of this pull request: https://github.com/moby/moby/pull/14857

Now that the container is up, you should be able to access the Rancher WebUI in your browser by pointing it at the IP address of the LXD container. You can grab the IP address with the lxc list rancher command:

➜  ~ lxc list rancher
+---------+---------+----------------------+------+------------+-----------+
|  NAME   |  STATE  |         IPV4         | IPV6 |    TYPE    | SNAPSHOTS |
+---------+---------+----------------------+------+------------+-----------+
| rancher | RUNNING | 192.168.51.69 (eth0) |      | PERSISTENT | 0         |
|         |         | 172.17.0.1 (docker0) |      |            |           |
+---------+---------+----------------------+------+------------+-----------+

You can also grab the IP address using a slightly more convoluted method: execute lxc info rancher and grab the relevant line containing the IPv4 address on interface eth0:

➜  ~ lxc info rancher | grep -E "eth0.*inet[^6]" | awk '{print $NF}'
192.168.51.69
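The pipeline works because the eth0 "inet" line’s last field is the address, while "inet6" lines are excluded by the [^6] in the pattern. A self-contained sketch of the same filter, run against sample lxc info output (the inet6 address here is hypothetical):

```shell
# Demonstrate the filter on sample `lxc info` output: keep the eth0 IPv4
# ("inet") line, drop the "inet6" line, and print the last field.
sample='  eth0: inet    192.168.51.69
  eth0: inet6   fe80::216:3eff:fe00:1'
ip=$(printf '%s\n' "$sample" | grep -E "eth0.*inet[^6]" | awk '{print $NF}')
echo "$ip"   # 192.168.51.69
```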

When I point my browser to https://192.168.51.69 I’m presented with the Rancher WebUI login page:

Set a good password for your admin account as the first step in bootstrapping your Rancher and Kubernetes environment!

You’ll immediately be prompted to set a password for your admin account. Once you complete that process, you’ll be asked for the Rancher Server URL, which for the purposes of this demo will be the IP address of my LXD container.

You’ll be prompted for the Rancher Server URL. If you have DNS configured, it would be appropriate to add a hostname instead of an IP address at this point. (I’m being lazy for demonstration purposes)

Level-Up with LetsEncrypt Certificates: I was really impressed during the April Rancher Meetup, where Rancher Labs demonstrated the integration of LetsEncrypt certificates for the Rancher master. You should definitely consider trying this out and using signed certificates for your Rancher environment. To watch that demo, jump to the moment in the YouTube video where they deploy the Rancher 2.0 master with signed certificates.

Creating your Kubernetes Cluster

Ok, are you still with me? I hope so, because this is where the real excitement begins. We’re going to fire up some more LXD containers using the docker profile, and then join them to the Rancher environment as Kubernetes cluster nodes.

I’m going to cheat a bit and fire off a for loop to create five LXD containers named k8s010 through k8s014. This loop will work in bash or zsh:

➜  ~ for server in k8s{010..014}; do lxc launch -c security.privileged=1 -p docker ubuntu:x ${server} ; done;
Creating k8s010
Starting k8s010
Creating k8s011
Starting k8s011
Creating k8s012
Starting k8s012
Creating k8s013
Starting k8s013
Creating k8s014
Starting k8s014

We can confirm that the containers are running and have an IP address with the command lxc list | grep k8s. In the following code snippet, I took it one step further and printed just the container name and the eth0 IPv4 address.

➜  ~ lxc list | grep k8s | awk -F'|' '{print $2 ": " $4}'
k8s010 : 192.168.51.206 (eth0)
k8s011 : 192.168.51.207 (eth0)
k8s012 : 192.168.51.208 (eth0)
k8s013 : 192.168.51.209 (eth0)
k8s014 : 192.168.51.210 (eth0)

At this point, we’re going to focus our efforts within the Rancher GUI to define a new Kubernetes cluster, initially with no nodes. This will allow us to execute a docker command on each of the containers to bring up the rancher-agent container (which will fire up the necessary k8s service containers). Choose “Add Cluster” and you will be presented with three choices: “Launch a Cloud Cluster”, “Create a Cluster (RKE)”, or “Import an Existing Cluster”. For the purpose of this demo, we’re going to use the Rancher Kubernetes Engine (RKE) to bootstrap our k8s cluster.

You’ll land on the global clusters page, where we’ll begin defining a new Kubernetes cluster.
I’m naming my cluster “homelab”, and choosing “CUSTOM” under “From my own existing nodes”

The next part of the process is to define your Kubernetes version, CNI provider, and a few other configuration options for bootstrapping your cluster nodes. For the sake of this demo, I’m going to keep my CNI set to Canal, and will deploy the v1.11.2-rancher1-1 version of Kubernetes. This version has all the cAdvisor fixes for the issues that caused my previous attempts with Rancher 2.0 to stumble (I had originally written this document using an alpha version of Rancher 2.0).

Choose your settings for bootstrapping your k8s cluster on this page. When you continue, you’ll be presented with a docker run command to execute on your lxd hosts.
I’m choosing to run each of the available node roles on all of my hosts, just for simplicity’s sake.

At this point, you should ensure that each container has docker-ce installed on it (the cloud-init payload in our docker profile should have taken care of that).

Each of the LXD containers needs to have the Rancher agent installed, which we can do by writing a quick and dirty loop around lxc exec.

Copy the command from the Rancher UI and head back to your terminal. We’re going to modify the command to include the --security-opt flag so that Docker doesn’t look for the docker-default AppArmor profile.

As you can see in the screenshot, the command will look like this:

sudo docker run -d --privileged --security-opt "apparmor:unconfined" --restart=unless-stopped --net=host -v /etc/kubernetes:/etc/kubernetes -v /var/run:/var/run rancher/rancher-agent:v2.0.8 --server https://192.168.51.69 --token x6kmmw58z9kt9nsdjbqh94x8x5zbcmwvnln9w8mgp87zq9tw6t4whv --ca-checksum 0df2c081131becd4bc80be49a019e414e5a468321cca66be1f8069d06fe2a8ec --etcd --controlplane --worker
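Before firing this at real nodes, it can be worth a hypothetical dry run: echo each per-container invocation instead of executing it, with the long arguments pulled into variables, and eyeball the output. The variable names are my own; the values mirror the command above:

```shell
# Dry run of the join loop: print each invocation instead of executing it.
# SERVER, TOKEN, and CHECKSUM mirror the values from the Rancher UI command.
SERVER=https://192.168.51.69
TOKEN=x6kmmw58z9kt9nsdjbqh94x8x5zbcmwvnln9w8mgp87zq9tw6t4whv
CHECKSUM=0df2c081131becd4bc80be49a019e414e5a468321cca66be1f8069d06fe2a8ec
for container in k8s0{10..14}; do
  echo "lxc exec ${container} -- docker run -d --privileged" \
       "--security-opt apparmor:unconfined --restart=unless-stopped --net=host" \
       "-v /etc/kubernetes:/etc/kubernetes -v /var/run:/var/run" \
       "rancher/rancher-agent:v2.0.8 --server ${SERVER} --token ${TOKEN}" \
       "--ca-checksum ${CHECKSUM} --etcd --controlplane --worker"
done
```

Once the five printed commands look right, drop the echo and run it for real.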

This is where the magic happens, because we’re only moments away from a fully operational Kubernetes cluster. Let’s write that quick for loop to execute this command across the five LXD containers:

➜  ~ for container in k8s0{10..14}; do lxc exec ${container} -- docker run -d --privileged --security-opt apparmor:unconfined --restart=unless-stopped --net=host -v /etc/kubernetes:/etc/kubernetes -v /var/run:/var/run rancher/rancher-agent:v2.0.8 --server https://192.168.51.69 --token x6kmmw58z9kt9nsdjbqh94x8x5zbcmwvnln9w8mgp87zq9tw6t4whv --ca-checksum 0df2c081131becd4bc80be49a019e414e5a468321cca66be1f8069d06fe2a8ec --etcd --controlplane --worker ; done;
Unable to find image 'rancher/rancher-agent:v2.0.8' locally
v2.0.8: Pulling from rancher/rancher-agent
124c757242f8: Pull complete
2ebc019eb4e2: Pull complete
dac0825f7ffb: Pull complete
82b0bb65d1bf: Pull complete
ef3b655c7f88: Pull complete
9750e7f516aa: Pull complete
bbcb46cc1cac: Pull complete
f3d67e2639ea: Pull complete
4c9aa41b309a: Pull complete
64cb19178381: Pull complete
Digest: sha256:aa2a164c18ea8b2f6b235186216448a9401ff3e02af064cadea569edc07b45e3
Status: Downloaded newer image for rancher/rancher-agent:v2.0.8
a13193e6c7b78e0dc5d27836cbd5def02bd408244b2ef8b40a7db8dbee613b02

Now, in the Rancher WebUI, you should be able to watch the nodes being added to your cluster!
