Krustlet on DO Managed Kubernetes
I’ve spent time this week returning to the interesting Deislabs project Krustlet. Since I last looked, the bootstrapping process has been simplified using Kubernetes Bootstrap Tokens. I know this updated process works with MicroK8s. Unfortunately, I’m struggling with it on GKE, so I thought I’d try DigitalOcean Managed Kubernetes.
It worked first time!
In the following, we run both the Kubernetes cluster and the Krustlet Droplet on DigitalOcean but, as long as the cluster and the VM are able to communicate with one another, you should be able to run these anywhere.
Installation
I recommend:
- doctl though you should be able to do everything via the DigitalOcean console.
- If, like me, you use the Snap, then consider connecting kubectl and compute-ssh
- I also recommend adding SSH key fingerprints to DigitalOcean for additional security (and ease)
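If you haven’t yet added a key to your account, doctl can upload an existing public key for you. A sketch, assuming a key pair at ${HOME}/.ssh/id_rsa and the hypothetical name do-key:

```shell
# Upload an existing public key to your DigitalOcean account
# ("do-key" is an example name; choose your own)
doctl compute ssh-key import do-key \
--public-key-file ${HOME}/.ssh/id_rsa.pub

# Confirm that the key (and its fingerprint) is now registered
doctl compute ssh-key list
```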
Managed Kubernetes
In my experience, DigitalOcean Managed Kubernetes is a reliable distribution, and doctl is great too. You’ll need to determine various configuration values. You can use the doctl kubernetes options commands to determine these:
doctl kubernetes options versions
doctl kubernetes options regions
doctl kubernetes options sizes
Or, just use the following example values:
CLUSTER=[[YOUR-CLUSTER-NAME]]
COUNT="1" # Number of worker nodes
VERSION="1.19.3-do.3"
SIZE="s-1vcpu-2gb"
REGION="sfo3"
And then to create the cluster:
doctl kubernetes cluster create ${CLUSTER} \
--auto-upgrade \
--count ${COUNT} \
--version ${VERSION} \
--size ${SIZE} \
--region ${REGION}
The cluster provisioning should (!) also revise your ${KUBECONFIG} (probably ${HOME}/.kube/config) with a context for the cluster and will update the current-context. So, once the cluster is provisioned, from your host, you should be able to:
kubectl get nodes
Krustlet Droplet
You’ll need to determine various configuration values for the Droplet too. You can either use doctl compute
commands:
doctl compute size list
doctl compute region list
doctl compute image list --public
To determine the ID of your SSH key, run the following command:
doctl compute ssh-key list
And set SSH_KEY to the ID of the key that you wish to use with the Droplet.
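If you have jq, you can capture the ID without copying it by hand. A sketch, assuming you simply want the first key registered with your account:

```shell
# Take the ID of the first SSH key on the account
SSH_KEY=$(doctl compute ssh-key list --output=json | jq -r '.[0].id')
echo ${SSH_KEY}
```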
Or, just use the following example values:
INSTANCE=[[YOUR-INSTANCE-NAME]]
SSH_KEY=[[YOUR-SSH-KEY-ID]] # See step above
SIZE="s-1vcpu-2gb"
REGION="sfo3"
IMAGE="debian-10-x64"
doctl compute droplet create ${INSTANCE} \
--region ${REGION} \
--size ${SIZE} \
--ssh-keys ${SSH_KEY} \
--tag-names krustlet,wasm \
--image ${IMAGE}
NOTE While doctl permits referring to Droplets by name or ID, the platform allows you to have multiple Droplets with the same name.
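Because names can collide, it can be safer to resolve a Droplet to its unique ID explicitly. A sketch, again using jq (the name krustlet-droplet is an example):

```shell
# Resolve a Droplet name to its unique ID
DROPLET_ID=$(doctl compute droplet list --output=json \
| jq -r '.[] | select(.name=="krustlet-droplet") | .id')
echo ${DROPLET_ID}
```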
To get the Droplet’s IP address, you can either:
doctl compute droplet get ${INSTANCE}
Or, if you have jq and assuming you have one instance named ${INSTANCE}, you can:
IP=$(\
doctl compute droplet get ${INSTANCE} \
--output=json \
| jq -r '.[0].networks.v4[] | select (.type|contains("public")) | .ip_address') && echo ${IP}
NOTE You’ll need this IP address after ssh’ing in to the Droplet too, so note the value.
Bootstrap Token
Krustlet includes a useful bootstrap.sh script. From your host workstation, run this command to create a Bootstrap Token for the Krustlet to use to join your Kubernetes cluster:
bash <(curl --silent https://raw.githubusercontent.com/deislabs/krustlet/master/docs/howto/assets/bootstrap.sh)
For more details on the script, see this link
The script does several things; the result is that the cluster will now contain a Secret representing a Bootstrap Token, and you will have a local Kubernetes configuration file that the Krustlet can use to present its credentials (the token) to the cluster.
The name of the Secret is output by the script:
secret/bootstrap-token-[[random]] created
Setting TOKEN to the Secret’s name, you can confirm it exists:
kubectl get secret/${TOKEN} --namespace=kube-system
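If you’re curious, you can reassemble the token itself from the Secret’s data. A sketch, assuming TOKEN is set as above; Bootstrap Tokens take the form [token-id].[token-secret], with both halves stored base64-encoded in the Secret:

```shell
# Decode the two halves of the Bootstrap Token from the Secret
TOKEN_ID=$(kubectl get secret/${TOKEN} \
--namespace=kube-system \
--output=jsonpath='{.data.token-id}' | base64 --decode)
TOKEN_SECRET=$(kubectl get secret/${TOKEN} \
--namespace=kube-system \
--output=jsonpath='{.data.token-secret}' | base64 --decode)
echo ${TOKEN_ID}.${TOKEN_SECRET}
```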
Additionally, the Krustlet’s configuration file should be in ${HOME}/.krustlet/config
ls -la ${HOME}/.krustlet/config
bootstrap.conf
Bootstrapping Krustlet
In the following commands, you’ll need to be able to prove your identity (with your SSH private key). SSH keys are generally found under ${HOME}/.ssh. Set the value of ID to the path of your DigitalOcean private key:
ID=/path/to/your/privatekey
We need to copy the bootstrap.conf
file to the Droplet where we’ll run Krustlet:
scp -i ${ID} \
${HOME}/.krustlet/config/bootstrap.conf \
root@${IP}:.
Let’s now ssh onto the Droplet. There are two ways to do this. Either:
doctl compute ssh ${INSTANCE} \
--ssh-key-path ${ID}
Or, if you’d prefer to use ssh directly:
ssh -i ${ID} root@${IP}
Download the latest release of Krustlet and untar the result:
wget https://krustlet.blob.core.windows.net/releases/krustlet-v0.5.0-linux-amd64.tar.gz
tar xvf ./krustlet-v0.5.0-linux-amd64.tar.gz
And you should now have krustlet-wascc and krustlet-wasi binaries.
Now we can bootstrap the Krustlet:
IP=[[IP-ADDRESS]] # The Droplet's public IP address you noted earlier
KRUSTLET="krustlet-wascc" # Or krustlet-wasi
NODENAME="krustlet"
KUBECONFIG=${PWD}/krustlet.${HOSTNAME}.config \
./${KRUSTLET} \
--node-ip=${IP} \
--node-name=${NODENAME} \
--bootstrap-file=${PWD}/bootstrap.conf \
--cert-file=${PWD}/krustlet.${HOSTNAME}.crt \
--private-key-file=${PWD}/krustlet.${HOSTNAME}.key
There’s a lot going on here:
- KUBECONFIG: where the bootstrapping process will create the Krustlet’s Kubernetes config
- HOSTNAME: this value should be set for you and should match ${INSTANCE}
- IP: the Droplet’s IPv4 address that you determined earlier
- bootstrap.conf: copied to the Droplet and used by Krustlet to create a Certificate Signing Request
- cert-file and private-key-file: if omitted, default locations are used; specifying them is clearer
NOTE The Krustlet output will include various error messages. These result from the cluster attempting to schedule infrastructure Pods (e.g. kube-proxy) onto the Krustlet, thinking that it’s a regular Kubernetes node. The Krustlet behaves very much like a regular kubelet but it is different and is unable to run these Pods. You may ignore these errors.
Among the output from this command, you should see:
kubectl certificate approve ${HOSTNAME}-tls
You’ll need to return to your host machine and invoke this command there to confirm that the Krustlet is a legitimate user of the cluster (note that ${HOSTNAME} here is the Droplet’s hostname, which should match ${INSTANCE}, not your workstation’s):
kubectl certificate approve ${HOSTNAME}-tls
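If the approval appears to do nothing, it can help to inspect the Certificate Signing Request first; kubectl certificate approve only acts on a request that is still Pending:

```shell
# List CSRs; the Krustlet's request should appear with condition "Pending"
kubectl get certificatesigningrequests

# Then approve it by name
kubectl certificate approve ${HOSTNAME}-tls
```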
Once you’ve invoked that command, Krustlet should continue and you may proceed to testing it!
Running Krustlet
Once the Krustlet has booted successfully, it will create the following files:
File | Description |
---|---|
${PWD}/krustlet.${HOSTNAME}.config | Krustlet’s Kubernetes config |
${PWD}/krustlet.${HOSTNAME}.crt | Krustlet’s certificate |
${PWD}/krustlet.${HOSTNAME}.key | Krustlet’s private key |
When you rerun the Krustlet, you may drop the --bootstrap-file=... flag:
KUBECONFIG=${PWD}/krustlet.${HOSTNAME}.config \
./${KRUSTLET} \
--node-ip=${IP} \
--node-name=${NODENAME} \
--cert-file=${PWD}/krustlet.${HOSTNAME}.crt \
--private-key-file=${PWD}/krustlet.${HOSTNAME}.key
Testing
Once the Krustlet is running, you may confirm this:
kubectl get nodes
NAME STATUS ROLES AGE VERSION
${CLUSTER}-default-pool-3z4ka Ready <none> 102m v1.19.3
krustlet Ready <none> 8s 0.5.0
NOTE If you stop the Krustlet, it’s a good idea to delete the node before restarting it, particularly if you’re swapping between WASI and WASCC:
kubectl delete node/${NODENAME}
DEMOS="https://raw.githubusercontent.com/deislabs/krustlet/master/demos"
If you’re using krustlet-wasi, you can test:
kubectl apply \
--filename=${DEMOS}/wasi/hello-world-rust/k8s.yaml
All being well:
kubectl get pods
NAME READY STATUS RESTARTS AGE
hello-world-wasi-rust 0/1 ExitCode:0 0 3m10s
NOTE The ExitCode:0 is OK because the Pod runs and completes (exits).
And:
kubectl logs pod/hello-world-wasi-rust
hello from stdout!
hello from stderr!
POD_NAME=hello-world-wasi-rust
FOO=bar
CONFIG_MAP_VAL=cool stuff
Args are: []
Bacon ipsum dolor amet chuck turducken porchetta, tri-tip spare ribs t-bone ham hock. Meatloaf
pork belly leberkas, ham beef pig corned beef boudin ground round meatball alcatra jerky.
Pancetta brisket pastrami, flank pork chop ball tip short loin burgdoggen. Tri-tip kevin
shoulder cow andouille. Prosciutto chislic cupim, short ribs venison jerky beef ribs ham hock
short loin fatback. Bresaola meatloaf capicola pancetta, prosciutto chicken landjaeger andouille
swine kielbasa drumstick cupim tenderloin chuck shank. Flank jowl leberkas turducken ham tongue
beef ribs shankle meatloaf drumstick pork t-bone frankfurter tri-tip.
If you’re using krustlet-wascc, you can test:
kubectl apply \
--filename=${DEMOS}/wascc/hello-world-assemblyscript/k8s.yaml
Unfortunately, the AssemblyScript solution is not working.
The Pod appears to correctly register the port:
ss --tcp --listening --process | grep http-alt
LISTEN 0 128 0.0.0.0:http-alt 0.0.0.0:* users:(("krustlet-wascc",pid=958,fd=19))
But, it does not accept HTTP (GET) requests:
telnet localhost 8080
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
GET / HTTP/1.1
HTTP/1.1 408 Request Timeout
content-length: 0
connection: close
Investigating :-(
Tidy
When you’re done, you can delete the Droplet:
doctl compute droplet delete ${INSTANCE}
And the cluster:
doctl kubernetes cluster delete ${CLUSTER}
NOTE The cluster delete command should (!) also remove the cluster, context and user from your ${KUBECONFIG} file (usually ${HOME}/.kube/config). It will leave the current-context value unset, so you may wish to revise this to point to the cluster context that you were using previously.
Please, always double-check that the resources have indeed been deleted so that you are no longer paying for them!
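One way to double-check is simply to list what remains; a sketch (the --format flag trims the Droplet output to the columns of interest):

```shell
# Neither list should still include the resources you deleted
doctl compute droplet list --format ID,Name,Status
doctl kubernetes cluster list
```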
Anticipated Errors
Because the Kubernetes cluster thinks that the Krustlet node is a regular kubelet (worker node), it attempts to schedule Pods onto it as it would with any other worker node. These will fail because, although the Krustlet acts like a kubelet, it is not one.
These errors may be ignored.
For this reason, DigitalOcean’s do-node-agent, kube-proxy and Cilium’s Pods will fail to deploy to the Krustlet:
Cannot run do-node-agent: spec specifies container args which are not supported on wasCC
Cannot run kube-proxy-k2nn2: spec specifies init containers which are not supported on wasCC
Cannot run cilium-g8cxf: spec specifies init containers which are not supported on wasCC
Cannot run csi-node-driver-registrar: spec specifies container args which are not supported on wasCC