Below you will find pages that utilize the taxonomy term “Kubernetes”
Using Rust to generate Kubernetes CRD
For the first time, I chose Rust to solve a problem. Until now, I'd been using Rust only to learn the language and to rewrite existing code. But this problem led me to Rust because my other tools wouldn't cut it.
The question was how to represent oneof fields in Kubernetes Custom Resource Definitions (CRDs).
CRDs use OpenAPI v3 schemas, and the resulting YAML can be challenging to grok.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: deploymentconfigs.example.com
spec:
  group: example.com
  names:
    categories: []
    kind: DeploymentConfig
    plural: deploymentconfigs
    shortNames: []
    singular: deploymentconfig
  scope: Namespaced
  versions:
  - additionalPrinterColumns: []
    name: v1alpha1
    schema:
      openAPIV3Schema:
        description: An example schema
        properties:
          spec:
            properties:
              deployment_strategy:
                oneOf:
                - required:
                  - rolling_update
                - required:
                  - recreate
                properties:
                  recreate:
                    properties:
                      something:
                        format: uint16
                        minimum: 0.0
                        type: integer
                    required:
                    - something
                    type: object
                  rolling_update:
                    properties:
                      max_surge:
                        format: uint16
                        minimum: 0.0
                        type: integer
                      max_unavailable:
                        format: uint16
                        minimum: 0.0
                        type: integer
                    required:
                    - max_surge
                    - max_unavailable
                    type: object
                type: object
            required:
            - deployment_strategy
            type: object
        required:
        - spec
        title: DeploymentConfig
        type: object
    served: true
    storage: true
    subresources: {}
I’ve developed several Kubernetes Operators using the Operator SDK in Go (which builds upon Kubebuilder).
Using Delve to debug Go containers on Kubernetes
An interesting question on Stack Overflow prompted me to understand how to use Visual Studio Code and Delve to remotely debug a Golang app running on Kubernetes (MicroK8s).
The OP is using Gin, which was also new to me, so the question gave me an opportunity to try out several things.
Sources
A simple healthz handler:
package main

import (
    "flag"
    "log/slog"
    "net/http"

    "github.com/gin-gonic/gin"
)

var (
    addr = flag.String("addr", "0.0.0.0:8080", "HTTP server endpoint")
)

// healthz responds to health checks with 200 OK
func healthz(c *gin.Context) {
    c.String(http.StatusOK, "ok")
}

func main() {
    flag.Parse()

    router := gin.Default()
    router.GET("/fib", handler())
    router.GET("/healthz", healthz)

    slog.Info("Server starting")
    slog.Info("Server error",
        "err", router.Run(*addr),
    )
}
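The remote-debugging piece boils down to running the binary under Delve's headless server inside the container and port-forwarding so that VS Code can attach; a minimal sketch in which the binary path (/app/server), Deployment name (gin-debug) and port (2345) are assumptions:
# In the container: the binary should be built with -gcflags="all=-N -l",
# then served through Delve's headless API
dlv exec /app/server \
--headless \
--listen=:2345 \
--api-version=2 \
--accept-multiclient

# Locally: forward the Delve port and attach VS Code to localhost:2345
kubectl port-forward deployment/gin-debug 2345:2345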
Containerfile:
Fly Kubernetes
I was interested to explore Fly Kubernetes (FKS) after being accepted into the closed beta.
The folks at Fly are innovative in their use of technology and, having been a long-time Kubernetes user, I was intrigued to learn that Fly.io has implemented Kubernetes atop Fly.
My first Deployment failed:
Authentication required to access image "ghcr.io/{image}"
It was confirmed to me that FKS does not support pulling from private registries. The solution is to pull-tag-push images to registry.fly.io. But Fly's registry is app-specific and so you need to do some querying to grab the Fly app created by FKS (for your namespace):
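A sketch of that flow; the image, tag and FKS app names are placeholders:
# Authenticate the local Docker daemon against registry.fly.io
fly auth docker

# Find the Fly app that FKS created for the namespace
fly apps list

# Pull from the private registry, re-tag for Fly's registry, push
docker pull ghcr.io/${OWNER}/${IMAGE}:${TAG}
docker tag ghcr.io/${OWNER}/${IMAGE}:${TAG} registry.fly.io/${FLY_APP}:${TAG}
docker push registry.fly.io/${FLY_APP}:${TAG}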
MicroK8s operability add-on
Spent time today yak-shaving which resulted in an unplanned migration from MicroK8s ‘prometheus’ add-on to the new and not fully-documented ‘observability’ add-on:
sudo microk8s.enable prometheus
Infer repository core for addon prometheus
DEPRECATION WARNING: 'prometheus' is deprecated and will soon be removed. Please use 'observability' instead.
...
The reason for the name change is unclear.
It's also unclear whether there's a difference in the primary components that are installed (I'd thought Grafana wasn't included in 'prometheus'); (Grafana) Loki and (Grafana) Tempo definitely weren't included before, and I don't want them either.
Kubernetes Python SDK w/ CRDs
Responded to Get Custom K8s Resource using Python and found the CustomObjectsApi documentation unclear.
If you have a cluster and a kubeconfig file with a correctly configured current-context, so that you can successfully run:
PLURAL="checks"
kubectl get ${PLURAL} \
--all-namespaces
NOTE I’m using Ackal’s CRDs in these examples.
Then you can use the following code to access the cluster’s REST API server to enumerate its CRDs:
main.py:
from __future__ import print_function

from kubernetes import client, config
from kubernetes.client.rest import ApiException

config.load_kube_config()

api = client.CustomObjectsApi()

# Ackal's Group|Version and some Kinds
group = 'ack.al'
version = 'v1'
plurals = ['checks', 'customers']

for plural in plurals:
    try:
        resp = api.list_cluster_custom_object(
            group,
            version,
            plural,
        )
        for item in resp["items"]:
            spec = item["spec"]
            print(spec)
    except ApiException as e:
        print(e)
python3 -m venv venv
source venv/bin/activate
python3 -m pip install kubernetes==26.1.0
python3 main.py
That’s all!
Routing Firestore events to GKE with Eventarc
Google announced Firestore … integration with Eventarc. Ackal uses Firestore to persist Customer and Check information and it uses Google Cloud Firestore Triggers to handle events on these document types.
Eventarc feels like the strategic future of eventing in Google Cloud and I’ve been concerned since adopting the technology that Google would abandon Google Cloud Firestore Triggers.
For this reason, when I saw last week’s announcement, I thought I should evaluate the mechanism and this blog post is a summary of that work.
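For reference, the wiring is an Eventarc trigger with a GKE destination; a hedged sketch in which the trigger, cluster, namespace, service, service account and locations are placeholders (the trigger's location must match the Firestore database's location):
# One-time: allow Eventarc to deliver to GKE destinations
gcloud eventarc gke-destinations init

gcloud eventarc triggers create firestore-to-gke \
--location=nam5 \
--event-filters="type=google.cloud.firestore.document.v1.written" \
--event-filters="database=(default)" \
--destination-gke-cluster=my-cluster \
--destination-gke-location=us-central1 \
--destination-gke-namespace=default \
--destination-gke-service=my-service \
--destination-gke-path=/ \
--service-account=eventarc-sa@${PROJECT}.iam.gserviceaccount.com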
Robusta KRR w/ GMP
I’ve been spending time recently optimizing Ackal’s use of Google Cloud Logging and Cloud Monitoring in posts:
- Filtering metrics w/ Google Managed Prometheus
- Kubernetes metrics, metrics everywhere
- Google Metric Diagnostics and Metric Data Ingested
Yesterday, I read that Robusta has a new open-source project, Kubernetes Resource Recommendations (KRR), so I took some time to evaluate it.
This post describes the changes I had to make to get KRR working with Google Managed Prometheus (GMP):
Google Metric Diagnostics and Metric Data Ingested
I’ve been on an efficiency drive with Cloud Logging and Cloud Monitoring.
With regard to Cloud Logging, I'm contemplating (!) eliminating almost all log storage. As it is, I've buzz-cut log storage with a _Default sink that has comprehensive sets of NOT LOG_ID(X) inclusion and exclusion filters. As I was doing so, I began to wonder why I need to pay to store so much logging. There's the comfort of knowing that everything you may ever need is being logged (at least for 30 days), but there are also the costs that that entails. I use logs exclusively for debugging, which got me thinking: couldn't I just capture logs when I'm debugging (rather than all the time)? I've not taken that leap yet but I'm noodling on it.
Kubernetes metrics, metrics everywhere
I’ve been tinkering with ways to “unit-test” my assumptions when using cloud platforms. I recently wrote about good posts by Google describing achieving cost savings with Cloud Monitoring and Cloud Logging:
- How to identify and reduce costs of your Google Cloud observability in Cloud Monitoring
- Cloud Logging pricing for Cloud Admins: How to approach it & save cost
With Cloud Monitoring, I've restricted the prometheus.googleapis.com metrics that are being ingested but realized I wanted to track the number of Pods (and Containers) deployed to a GKE cluster.
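As a quick cross-check of whatever the metrics report, the counts can also be taken directly from the API server; a small sketch using kubectl and jq:
# Pods across all namespaces
kubectl get pods --all-namespaces --no-headers | wc -l

# Containers (sum of spec.containers across all Pods)
kubectl get pods --all-namespaces --output=json \
| jq '[.items[].spec.containers[]] | length'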
Kubernetes Operators
Ackal uses a Kubernetes Operator to orchestrate the lifecycle of its health checks. Ackal's Operator is written in Go using kubebuilder.
Yesterday, my interest was piqued by a MetalBear blog post Writing a Kubernetes Operator [in Rust]. I spent some time reimplementing one of Ackal's CRDs (Check) using kube-rs and not only refreshed my Rust knowledge but learned a bunch more about Kubernetes and Operators.
While rummaging around the Kubernetes documentation, I discovered flant's Shell-operator and spent some time today exploring its potential.
Secure (TLS) gRPC services with LKE
NOTE cert-manager is a better solution to what follows.
I wrote about deploying Secure (TLS) gRPC services with Vultr Kubernetes Engine (VKE). This week, I’ve reproduced this deployment using Linode Kubernetes Engine (LKE).
Thanks to the consistency provided by Kubernetes, the Kubernetes programming is almost identical. The main differences are between the CLIs provided by these platforms. Both are good; they're just different.
I'm going to include the linode-cli commands I'm using in this post as I found it slightly quirkier.
Secure (TLS) gRPC services with VKE
NOTE cert-manager is a better solution to what follows.
I’ve a need to deploy a Vultr Kubernetes Engine (VKE) cluster on a daily basis (create and delete within a few hours) and expose (securely|TLS) a gRPC service.
I have an existing solution, Automatic Certs w/ Golang gRPC service on Compute Engine, that combines gRPC Healthchecking and an ACME service, and decided to reuse it.
In order for it to work, we need:
Vultr CLI and JSON output
I’ve begun exploring Vultr after the company announced a managed Kubernetes offering Vultr Kubernetes Engine (VKE).
In my brief experience, it's a decent platform and its CLI vultr-cli is mostly (!) good. The CLI has a limitation in that command output is text-formatted, which makes it challenging to parse the output when scripting.
NOTE The Vultr developers have a branch (rewrite) that includes a solution to this problem.
Example
ID 12345678-90ab-cdef-1234-567890abcdef
LABEL test
DATE CREATED 2022-01-01T00:00:00+00:00
CLUSTER SUBNET 255.255.255.255/16
SERVICE SUBNET 255.255.255.255/12
IP 255.255.255.255
ENDPOINT 12345678-90ab-cdef-1234-567890abcdef.vultr-k8s.com
VERSION v1.23.5+3
REGION mars
STATUS pending
NODE POOLS
ID 12345678-90ab-cdef-1234-567890abcdef
DATE CREATED 2022-01-01T00:00:00+00:00
DATE UPDATED 2022-01-01T00:00:00+00:00
LABEL nodepool
TAG foo
PLAN vc2-1c-2gb
STATUS pending
NODE QUANTITY 1
AUTO SCALER false
MIN NODES 1
MAX NODES 1
NODES
ID DATE CREATED LABEL STATUS
12345678-
Until that’s available, I’m lazy writing very simple bash scripts to parse vultr-cli
command output as JSON. The repo is vultr-cli-format
.
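The idea is simply to turn the aligned KEY VALUE lines into a JSON object. A minimal sketch (not the repo's actual scripts) that assumes keys and values are separated by two or more spaces and ignores nested sections such as NODE POOLS:
vultr-cli kubernetes list \
| awk -F'  +' 'NF==2 {printf "{\"key\":\"%s\",\"value\":\"%s\"}\n", $1, $2}' \
| jq --slurp 'from_entries'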
Prometheus VPA Recommendations
Phew!
I was interested in learning how to Manage Resources for Containers. On the way, I learned and discovered:
- kubectl top
- Vertical Pod Autoscaler
- A (valuable) digression through PodMonitor
- kube-state-metrics
- kubectl-patch
- Created a Graph
- References
Kubernetes Resources
Visual Studio Code has begun to bug me (reasonably) to add resources to Kubernetes manifests.
E.g.:
resources:
  limits:
    cpu: "1"
    memory: "512Mi"
I’ve been spending time with Deislab’s Akri and decided to determine whether Akri’s primary resources (Agent, Controller) and some of my creations HTTP Device and Discovery, were being suitably constrained.
Krustlet on DO Managed Kubernetes
I’ve spent time this week returning to the interesting Deislabs project Krustlet. Since the last time, the bootstrapping process has been simplified using Kubernetes Bootstrap Tokens. I know this updated process works with MicroK8s. Unfortunately, I’m struggling with it on GKE and thought I’d try DigitalOcean Managed Kubernetes.
It worked first time!
In the following, we run both the Kubernetes cluster and the Krustlet Droplet on DigitalOcean but, as long as the cluster and the VM are able to communicate with one another, you should be able to run these anywhere.
Kubernetes cert-manager
I developed an admission webhook for Akri, twice (Golang, Rust). I naively followed other examples for the generation of the certificates, created a 1.20 cluster and broke that process.
I’d briefly considered using cert-manager
recently but quickly abandoned the idea thinking it would be onerous and unnecessary complexity for little-old-me. I was wrong. It’s excellent and I recommend it highly.
I won’t reproduce the v1beta1
and v1
examples from the Stackoverflow question as they should be self-explanatory. I suspect (!?) that I should not have used Kubernete’s (API Server’s) CA for the Webhook but it could well be that I just don’t understand the correct approach.
Kubernetes Webhooks
I spent some time last week writing my first admission webhook for Kubernetes. I wrote the handler in Golang because I’m most familiar with Golang and because, as Kubernetes’ native language, I was more confident that the necessary SDKs would exist and that the documentation would likely use Golang by default. I struggled to find useful documentation and so this post is to help you (and me!) remember how to do this next time!
Kubernetes Device Plugins
I’m debugging an issue with Akri Zeroconf
protocol in which Instance environment variables are no longer (!) being surfaced within the Broker pods. In my adventures, it seemed useful to better understand how Akri works and specifically, how Akri uses Kubernetes Device Plugins.
IIUC plugins register with the Kubelet (!) via a gRPC service (Registration
) that the Kubelet exposes on a UNIX socket at /var/lib/kubelet/device-plugins/kubelet.sock
Then (!) if successful, devices should be reported by the Node’s metadata (spec) and available to be bound to Pods.
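Two quick ways to observe this from the outside; the node name is a placeholder:
# On the node: the Kubelet's registration socket plus any plugin sockets
ls /var/lib/kubelet/device-plugins/

# After registration, the plugin's resource appears in the Node's allocatable
kubectl get node ${NODE} \
--output=jsonpath="{.status.allocatable}"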
Akri
For the past couple of weeks, I’ve been playing around with Akri, a Microsoft (DeisLabs) project for building a connected edge with Kubernetes. Kubernetes, IoT, Rust (and Golang) make this all compelling to me.
Initially, I deployed an Akri End-to-End to MicroK8s on Google Compute Engine (link) and Digital Ocean (link). But I was interested to create my own example and so have proposed a very (!) simple HTTP-based protocol.
This post summarizes my thoughts about Akri and explains the HTTP protocol implementation, in the hope that this helps others.
akri
I was very interested to read about Microsoft DeisLabs' latest (Rust-based) Kubernetes project: akri. If I understand it correctly, it provides a mechanism to make any (IoT) device accessible to containers running within a cluster. I need to spend more time playing around with it so that I can fully understand it. I had some problems getting the End-to-End demo running on Google Compute Engine (and then DigitalOcean droplet) instances. So, here's a two-way solution to get you going.
Accessing GCR repos from Kubernetes
Until today, I’d not accessed a Google Container Registry repo from a non-GKE Kubernetes deployment.
It turns out that it’s pretty well-documented (link) but, here’s an end-end example.
Assuming:
BILLING=[[YOUR-BILLING]]
PROJECT=[[YOUR-PROJECT]]
SERVER="us.gcr.io"
If not already:
gcloud projects create ${PROJECT}
gcloud beta billing projects link ${PROJECT} \
--billing-account=${BILLING}
gcloud services enable containerregistry.googleapis.com \
--project=${PROJECT}
Container Registry
IMAGE="busybox" # Or ...
docker pull ${IMAGE}
docker tag \
${IMAGE} \
${SERVER}/${PROJECT}/${IMAGE}
docker push ${SERVER}/${PROJECT}/${IMAGE}
gcloud container images list-tags ${SERVER}/${PROJECT}/${IMAGE}
Service Account
Create a service account that's permitted to download (read-only) images from this project's registry:
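A sketch of those steps; the account name (gcr-reader) and Secret name (gcr-pull) are arbitrary choices for this example:
ACCOUNT="gcr-reader"
EMAIL="${ACCOUNT}@${PROJECT}.iam.gserviceaccount.com"

gcloud iam service-accounts create ${ACCOUNT} \
--project=${PROJECT}

# GCR stores images in Cloud Storage; objectViewer is sufficient for pulls
gcloud projects add-iam-policy-binding ${PROJECT} \
--member="serviceAccount:${EMAIL}" \
--role="roles/storage.objectViewer"

gcloud iam service-accounts keys create ${ACCOUNT}.json \
--iam-account=${EMAIL}

# Create an image-pull Secret; reference it from Pod specs via imagePullSecrets
kubectl create secret docker-registry gcr-pull \
--docker-server=https://${SERVER} \
--docker-username=_json_key \
--docker-password="$(cat ${ACCOUNT}.json)" \
--docker-email=${EMAIL}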
NGINX Ingress
I’ve written a couple of deployment options (Google Compute Engine; Kubernetes) for an open-source project. The Kubernetes deployment provides NodePort
and (TCP) LoadBalancer
options and I’ve been trying (unsuccessfully) to add HTTPS Load-balancing.
I should (!) try to deploy to Google Kubernetes Engine (GKE) but I’ve been using microk8s, Digital Ocean Managed Kubernetes and the Linode LKE Beta. Each of these requires an implementation of Ingress controller. For GKE, GCP’s HTTP/S Load-balancer (GCLB) is used. But, for the other services, NGINX Ingress is a good option and so I’ve been exploring NGINX Ingress (Controller) link. This Ingress appears to work except that I’ve been unable to get it to work with mutual TLS between the (NGINX) proxy and my backend services.
Kubernetes Engine and Free Tier
Google Cloud Platform's Free Tier appears (please verify this for yourself) to provide the ability to run a(n admittedly minuscule) Kubernetes cluster for free. So, why do this? It provides a definitive Kubernetes (Engine) experience on Google Cloud Platform that you may use for learning and testing.
With Kubernetes Engine, the master node(s) and the control plane are free.
Kubernetes (i.e. Compute Engine) nodes potentially incur charges, including for the VM runtime and any attached storage, snapshots etc. However, charges for these resources can be partially covered by the Free Tier.