Migrating to K3s
I’m migrating from MicroK8s to K3s. This post explains the installation and (re)configuration steps that I took, including verification steps:
- Install K3s
- Use standalone kubectl and update ${KUBECONFIG} (~/.kube/config)
- Install System Upgrade Controller
- Install kube-prometheus stack (including the Prometheus Operator)
- Delete Grafana and Node Exporter
- Tweak default scrapeInterval
- Install Tailscale Operator
- Create Prometheus and Alertmanager Ingresses
- Patch Prometheus and Alertmanager externalUrls
- Tweak Prometheus resource to allow any serviceMonitors
- Tweak Prometheus resource to allow any prometheusRules
MicroK8s
Why abandon MicroK8s? I’ve been using MicroK8s for several years without issue but, after upgrading to Ubuntu 25.10, which includes a Rust replacement for sudo (one that doesn’t support sudo -E), I created a problem for myself with MicroK8s and was unable to restore a working Kubernetes cluster. I took the opportunity to reassess my distribution; I have long thought about switching to K3s.
See this issue on the MicroK8s repo: https://github.com/canonical/microk8s/issues/5266
The suggested solution of reverting to the previous sudo is valid but, after doing so, although I was able to install MicroK8s, I was unable to get the cluster to start. I tried this three times before giving up.
There is an easy way to swap sudos; sudo.ws is the non-Rust version (selection 1 below):
sudo update-alternatives --config sudo
There are 2 choices for the alternative sudo (providing /usr/bin/sudo).
  Selection    Path                       Priority   Status
------------------------------------------------------------
* 0            /usr/lib/cargo/bin/sudo    50         auto mode
  1            /usr/bin/sudo.ws           40         manual mode
  2            /usr/lib/cargo/bin/sudo    50         manual mode
K3s
Installation is straightforward; see the K3s Quick Start guide.
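Per the Quick Start, installation is a single command that sets K3s up as a systemd service:

curl -sfL https://get.k3s.io | sh -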
And you can verify using:
sudo k3s kubectl get node
Use standalone kubectl and update ${KUBECONFIG} (~/.kube/config)
I prefer to use a standalone kubectl and, on Linux, its default config file is ~/.kube/config
Since I have multiple contexts (servers, users) configured, I manually incorporate the output of:
sudo more /etc/rancher/k3s/k3s.yaml
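If you prefer to script the merge, here is a minimal sketch (the intermediate file names are arbitrary and you should back up your existing config first; the K3s cluster, user and context entries are all named default, so you may want to rename them afterwards):

# Export the K3s kubeconfig to a user-readable file
sudo cat /etc/rancher/k3s/k3s.yaml > ~/.kube/k3s.yaml

# Merge with the existing config and review before replacing it
KUBECONFIG=~/.kube/config:~/.kube/k3s.yaml kubectl config view --flatten > ~/.kube/config.merged
mv ~/.kube/config.merged ~/.kube/config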
System Upgrade Controller
This is useful but I’m unfamiliar with it. I took the shortest path to getting something working and will need to revisit this. I’m only running k3s-server (not k3s-agent) and so could delete the Plan named k3s-agent but, since it’s predicated on label expressions that won’t be matched, it’s harmless to leave in place.
kubectl apply \
--kustomize=github.com/rancher/system-upgrade-controller
namespace/system-upgrade created
serviceaccount/system-upgrade created
role.rbac.authorization.k8s.io/system-upgrade-controller created
clusterrole.rbac.authorization.k8s.io/system-upgrade-controller created
clusterrole.rbac.authorization.k8s.io/system-upgrade-controller-drainer created
rolebinding.rbac.authorization.k8s.io/system-upgrade created
clusterrolebinding.rbac.authorization.k8s.io/system-upgrade created
clusterrolebinding.rbac.authorization.k8s.io/system-upgrade-drainer created
configmap/default-controller-env created
deployment.apps/system-upgrade-controller created
kubectl apply \
--filename=https://raw.githubusercontent.com/rancher/system-upgrade-controller/refs/heads/master/examples/k3s-upgrade.yaml
plan.upgrade.cattle.io/k3s-server created
plan.upgrade.cattle.io/k3s-agent created
kubectl get plans \
--all-namespaces
NAMESPACE        NAME         IMAGE                 CHANNEL   VERSION
system-upgrade   k3s-agent    rancher/k3s-upgrade             v1.20.11+k3s1
system-upgrade   k3s-server   rancher/k3s-upgrade             v1.20.11+k3s1
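A quick check that the controller itself is running:

kubectl get deployment/system-upgrade-controller \
--namespace=system-upgrade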
kube-prometheus stack
You don’t just want the Prometheus Operator but also Prometheus itself, Alertmanager, possibly Grafana etc.
The best way to install this is using the Helm chart kube-prometheus-stack.
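If the prometheus-community chart repository isn’t already configured, add it first:

helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update

Then install the chart: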
helm install kube-prometheus-stack prometheus-community/kube-prometheus-stack \
--create-namespace \
--namespace=observability
level=WARN msg="unable to find exact version; falling back to closest available version" chart=kube-prometheus-stack requested="" selected=79.5.0
NAME: kube-prometheus-stack
LAST DEPLOYED: Wed Nov 19 08:57:47 2025
NAMESPACE: default
STATUS: deployed
REVISION: 1
DESCRIPTION: Install complete
NOTES:
kube-prometheus-stack has been installed. Check its status by running:
kubectl --namespace observability get pods -l "release=kube-prometheus-stack"
Get Grafana 'admin' user password by running:
kubectl --namespace observability get secrets kube-prometheus-stack-grafana -o jsonpath="{.data.admin-password}" | base64 -d ; echo
Access Grafana local instance:
export POD_NAME=$(kubectl --namespace default get pod -l "app.kubernetes.io/name=grafana,app.kubernetes.io/instance=kube-prometheus-stack" -oname)
kubectl --namespace observability port-forward $POD_NAME 3000
Get your grafana admin user password by running:
kubectl get secret --namespace observability -l app.kubernetes.io/component=admin-secret -o jsonpath="{.items[0].data.admin-password}" | base64 --decode ; echo
Visit https://github.com/prometheus-operator/kube-prometheus for instructions on how to create & configure Alertmanager and Prometheus instances using the Operator.
[Optional] Delete Grafana and Node Exporter
This is primarily a resource-saving measure for me, but I delete Grafana and Node Exporter:
kubectl delete deployment/kube-prometheus-stack-grafana \
--namespace=observability
kubectl delete daemonset/kube-prometheus-stack-prometheus-node-exporter \
--namespace=observability
kubectl delete service/kube-prometheus-stack-grafana \
--namespace=observability
kubectl delete service/kube-prometheus-stack-prometheus-node-exporter \
--namespace=observability
Tweak default scrapeInterval
kubectl get prometheus/kube-prometheus-stack-prometheus \
--namespace=observability \
--output=jsonpath={.spec.scrapeInterval}
30s
kubectl patch prometheus/kube-prometheus-stack-prometheus \
--namespace=observability \
--type=merge \
--patch='{"spec":{"scrapeInterval":"120s"}}'
prometheus.monitoring.coreos.com/kube-prometheus-stack-prometheus patched
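Re-running the earlier query confirms the new interval:

kubectl get prometheus/kube-prometheus-stack-prometheus \
--namespace=observability \
--output=jsonpath={.spec.scrapeInterval}

120s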
Install Tailscale Operator
If you’re using Tailscale (and you should be), then the Tailscale Operator is very useful.
See the instructions, as you’ll need to revise your tailnet policy file before installation.
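The chart is published in Tailscale’s Helm repository; if it isn’t already added:

helm repo add tailscale https://pkgs.tailscale.com/helmcharts
helm repo update

Then, using the OAuth client credentials created per those instructions: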
TAILSCALE_OAUTH_CLIENT_ID="..."
TAILSCALE_OAUTH_CLIENT_SECRET="tskey-client-..."
helm upgrade \
--install \
tailscale-operator \
tailscale/tailscale-operator \
--namespace=tailscale \
--create-namespace \
--set-string oauth.clientId="${TAILSCALE_OAUTH_CLIENT_ID}" \
--set-string oauth.clientSecret="${TAILSCALE_OAUTH_CLIENT_SECRET}" \
--wait
Per the documentation, the easiest way to confirm that the Operator is installed correctly is to look for tailscale-operator as one of the machines in the Tailscale admin console (https://login.tailscale.com/admin/machines).
Create Prometheus and Alertmanager Ingresses
You can:
kubectl port-forward service/kube-prometheus-stack-prometheus \
--namespace=observability \
9090:9090
And then browse the Prometheus web UI at http://localhost:9090
But I prefer to create private (internal) Ingresses for both:
HOST="prometheus"
echo "
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: prometheus
spec:
  defaultBackend:
    service:
      name: kube-prometheus-stack-prometheus
      port:
        number: 9090
  ingressClassName: tailscale
  tls:
  - hosts:
    - ${HOST}
" | kubectl create \
--filename=- \
--namespace=observability
HOST="alertmanager"
echo "
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: alertmanager
spec:
  defaultBackend:
    service:
      name: kube-prometheus-stack-alertmanager
      port:
        number: 9093
  ingressClassName: tailscale
  tls:
  - hosts:
    - ${HOST}
" | kubectl create \
--filename=- \
--namespace=observability
NOTE In practice it’s much better to create these as files.
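Once the Tailscale operator has provisioned proxies for these Ingresses, they should report Tailnet addresses; a quick check:

kubectl get ingress \
--namespace=observability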
Patch Prometheus and Alertmanager externalUrls
To ensure that the web UIs correctly reflect the Tailnet names in alerts etc., it’s important to patch the Prometheus and Alertmanager resources’ externalUrl:
DOMAIN="...ts.net" # Your Tailnet
HOST="prometheus"
kubectl patch prometheus/kube-prometheus-stack-prometheus \
--namespace=observability \
--type=merge \
--patch "{\"spec\":{\"externalUrl\":\"https://${HOST}.${DOMAIN}\"}}"
HOST="alertmanager"
kubectl patch alertmanager/kube-prometheus-stack-alertmanager \
--namespace=observability \
--type=merge \
--patch "{\"spec\":{\"externalUrl\":\"https://${HOST}.${DOMAIN}\"}}"
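To confirm a patch took effect (Prometheus shown; Alertmanager is analogous):

kubectl get prometheus/kube-prometheus-stack-prometheus \
--namespace=observability \
--output=jsonpath={.spec.externalUrl}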
Tweak Prometheus resource to allow any serviceMonitors
By default the Prometheus resource is restricted to serviceMonitors labeled with release: kube-prometheus-stack.
To allow serviceMonitors anywhere in the cluster:
kubectl get prometheus/kube-prometheus-stack-prometheus \
--namespace=observability \
--output=jsonpath="{.spec}"
{
  ...
  "ruleNamespaceSelector": {},
  "ruleSelector": {
    "matchLabels": {
      "release": "kube-prometheus-stack"
    }
  },
  ...
  "serviceMonitorNamespaceSelector": {},
  "serviceMonitorSelector": {
    "matchLabels": {
      "release": "kube-prometheus-stack"
    }
  },
  ...
}
kubectl patch prometheus/kube-prometheus-stack-prometheus \
--namespace=observability \
--type=json \
--patch='[{"op":"replace","path":"/spec/ruleSelector","value":{}}]'
kubectl patch prometheus/kube-prometheus-stack-prometheus \
--namespace=observability \
--type=json \
--patch='[{"op":"replace","path":"/spec/serviceMonitorSelector","value":{}}]'
NOTE In this case, we must use a JSON Patch in order to replace the object’s value with {}.
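With those selectors relaxed, a ServiceMonitor anywhere in the cluster should now be picked up. A minimal sketch (the my-app service, its app label and its metrics port are hypothetical):

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: my-app
  namespace: default
spec:
  selector:
    matchLabels:
      app: my-app
  endpoints:
  - port: metrics
    interval: 120s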
That’s all for now. This should give you a cluster onto which you can deploy solutions that create ServiceMonitor and PrometheusRule resources to be monitored by the kube-prometheus stack.