Below you will find pages that utilize the taxonomy term “Kubernetes”
Posts
Kubernetes Python SDK w/ CRDs
I responded to “Get Custom K8s Resource using Python” and found the CustomObjectsApi documentation unclear.
If you have a cluster and a kubeconfig file with a correctly configured current-context, so that you can successfully:
```bash
PLURAL="checks"

kubectl get ${PLURAL} \
--all-namespaces
```

NOTE I’m using Ackal’s CRDs in these examples.
Then you can use the following code to access the cluster’s REST API server to enumerate its CRDs:
main.py:
```python
from __future__ import print_function

from kubernetes import client, config

# Load the kubeconfig's current-context
config.load_kube_config()

api = client.CustomObjectsApi()

# group|version|plural are placeholders; substitute your CRD's values
print(api.list_cluster_custom_object("example.com", "v1alpha1", "checks"))
```
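To sanity-check the SDK’s result, you can hit the same endpoint with kubectl; GROUP and VERSION are placeholders for the CRD’s actual values:

```bash
GROUP="example.com"  # placeholder
VERSION="v1alpha1"   # placeholder

# The same REST endpoint that CustomObjectsApi calls
kubectl get --raw "/apis/${GROUP}/${VERSION}/${PLURAL}"
```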
Routing Firestore events to GKE with Eventarc
Google announced Firestore … integration with Eventarc. Ackal uses Firestore to persist Customer and Check information and it uses Google Cloud Firestore Triggers to handle events on these document types.
Eventarc feels like the strategic future of eventing in Google Cloud and I’ve been concerned since adopting the technology that Google would abandon Google Cloud Firestore Triggers.
For this reason, when I saw last week’s announcement, I thought I should evaluate the mechanism and this blog post is a summary of that work.
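As a sketch of the mechanism (every name below is a placeholder, and the flags reflect my reading of the gcloud documentation rather than the post’s final configuration):

```bash
# Route Firestore document writes to a Service running on GKE
gcloud eventarc triggers create firestore-to-gke \
--location=nam5 \
--event-filters="type=google.cloud.firestore.document.v1.written" \
--event-filters="database=(default)" \
--destination-gke-cluster=my-cluster \
--destination-gke-location=us-central1 \
--destination-gke-namespace=default \
--destination-gke-service=my-service \
--destination-gke-path=/ \
--service-account=eventarc@${PROJECT}.iam.gserviceaccount.com
```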
Robusta KRR w/ GMP
I’ve been spending time recently optimizing Ackal’s use of Google Cloud Logging and Cloud Monitoring, described in these posts:

- Filtering metrics w/ Google Managed Prometheus
- Kubernetes metrics, metrics everywhere
- Google Metric Diagnostics and Metric Data Ingested

Yesterday, I read that Robusta has a new open-source project, Kubernetes Resource Recommendations (KRR), so I took some time to evaluate it.
This post describes the changes I had to make to get KRR working with Google Managed Prometheus (GMP).
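The general shape of the fix is to point KRR at GMP’s PromQL frontend proxy; this is a minimal sketch, and the frontend Service name and namespace are assumptions:

```bash
# Port-forward GMP's PromQL frontend (Service name|namespace are assumptions)
kubectl port-forward svc/frontend 9090 \
--namespace=monitoring &

# Point KRR at the forwarded endpoint
krr simple --prometheus-url http://localhost:9090
```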
Google Metric Diagnostics and Metric Data Ingested
I’ve been on an efficiency drive with Cloud Logging and Cloud Monitoring.
With regard to Cloud Logging, I’m contemplating (!) eliminating almost all log storage. As it is, I’ve buzz-cut log storage with a _Default sink that has comprehensive sets of NOT LOG_ID(X) inclusion and exclusion filters. As I was doing so, I began to wonder why I need to pay to store so much logging. There’s the comfort of knowing that everything you may ever need is being logged (at least for 30 days), but there are also the costs that that entails.
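As an illustration of the shape of those filters (the log IDs here are examples, not my actual configuration):

```bash
# Narrow the _Default sink's filter with NOT LOG_ID(X) terms (illustrative IDs)
gcloud logging sinks update _Default \
--log-filter='NOT LOG_ID("cloudaudit.googleapis.com/activity") AND NOT LOG_ID("kubelet")'
```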
Kubernetes metrics, metrics everywhere
I’ve been tinkering with ways to “unit-test” my assumptions when using cloud platforms. I recently wrote about good posts by Google describing how to achieve cost savings with Cloud Monitoring and Cloud Logging:

- How to identify and reduce costs of your Google Cloud observability in Cloud Monitoring
- Cloud Logging pricing for Cloud Admins: How to approach it & save cost

With Cloud Monitoring, I’ve restricted the prometheus.googleapis.com metrics that are being ingested but realized I wanted to track the number of Pods (and Containers) deployed to a GKE cluster.
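Before turning to metrics, the raw counts are easy to obtain with kubectl, which makes a useful baseline for “unit-testing” whatever the monitoring reports:

```bash
# Number of Pods across all namespaces
kubectl get pods \
--all-namespaces \
--no-headers \
| wc --lines

# Number of (non-init) containers across all Pods
kubectl get pods \
--all-namespaces \
--output=jsonpath='{.items[*].spec.containers[*].name}' \
| wc --words
```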
Kubernetes Operators
Ackal uses a Kubernetes Operator to orchestrate the lifecycle of its health checks. Ackal’s Operator is written in Go using kubebuilder.
Yesterday, my interest was piqued by a MetalBear blog post, Writing a Kubernetes Operator [in Rust]. I spent some time reimplementing one of Ackal’s CRDs (Check) using kube-rs and not only refreshed my Rust knowledge but also learned a bunch more about Kubernetes and Operators.
While rummaging around the Kubernetes documentation, I discovered flant’s Shell-operator and spent some time today exploring its potential.
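To give a flavor of Shell-operator’s model, here’s a minimal (and hypothetical) hook: invoked with --config it declares what it binds to; otherwise it handles the event described by the binding context file that Shell-operator provides:

```bash
#!/usr/bin/env bash
# /hooks/pod-added.sh (hypothetical): logs the name of each added Pod

if [[ "$1" == "--config" ]]; then
  cat <<EOF
configVersion: v1
kubernetes:
- apiVersion: v1
  kind: Pod
  executeHookOnEvent: ["Added"]
EOF
else
  # BINDING_CONTEXT_PATH points to a JSON array describing the event(s)
  jq --raw-output '.[0].object.metadata.name' "${BINDING_CONTEXT_PATH}"
fi
```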
Secure (TLS) gRPC services with LKE
NOTE cert-manager is a better solution than what follows.
I wrote about deploying Secure (TLS) gRPC services with Vultr Kubernetes Engine (VKE). This week, I’ve reproduced this deployment using Linode Kubernetes Engine (LKE).
Thanks to the consistency provided by Kubernetes, the Kubernetes programming is almost identical. The main differences are between the CLIs provided by these platforms. Both are good; they’re just different.
I’m going to include the linode-cli commands I’m using in this post, as I found it slightly quirkier.
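As a taster, cluster creation and kubeconfig retrieval look approximately like this; the label, region, node type and version are placeholders:

```bash
linode-cli lke cluster-create \
--label my-cluster \
--region us-east \
--k8s_version 1.26 \
--node_pools.type g6-standard-2 \
--node_pools.count 3

# kubeconfig-view returns the config base64-encoded
linode-cli lke kubeconfig-view ${CLUSTER_ID} \
--text \
--no-headers \
| base64 --decode > kubeconfig.yaml
```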
Secure (TLS) gRPC services with VKE
NOTE cert-manager is a better solution than what follows.
I need to deploy a Vultr Kubernetes Engine (VKE) cluster on a daily basis (create and delete within a few hours) and expose a gRPC service securely (TLS).
I have an existing solution, Automatic Certs w/ Golang gRPC service on Compute Engine, that combines a gRPC health-checking service with an ACME service, and I decided to reuse it.
In order for it to work, we need:
Vultr CLI and JSON output
I’ve begun exploring Vultr after the company announced a managed Kubernetes offering Vultr Kubernetes Engine (VKE).
In my brief experience, it’s a decent platform and its CLI, vultr-cli, is mostly (!) good. The CLI has a limitation in that command output is text-formatted, which makes it challenging to parse when scripting.
NOTE The Vultr developers have a branch rewrite that includes a solution to this problem.
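Until that lands, scripting means scraping columns from the aligned text; something like the following (the subcommand and column position are assumptions):

```bash
# Grab the first column (IDs), skipping the header row; fragile
vultr-cli kubernetes list \
| awk 'NR>1 {print $1}'
```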
Prometheus VPA Recommendations
Phew!
For Want of a Nail
I was interested in learning how to Manage Resources for Containers. On the way, I learned and discovered:
- kubectl top
- Vertical Pod Autoscaler
- A (valuable) digression through PodMonitor
- kube-state-metrics
- kubectl patch
- Created a Graph
- References

Kubernetes Resources

Visual Studio Code has begun to bug me (reasonably) to add resources to Kubernetes manifests.
E.g.:
```yaml
resources:
  limits:
    cpu: "1"
    memory: "512Mi"
```

I’ve been spending time with Deislabs’ Akri and decided to determine whether Akri’s primary resources (Agent, Controller) and some of my creations (HTTP Device and Discovery) were being suitably constrained.
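A quick way to compare actual usage against the VPA’s recommendations (the namespace and VPA object names here are hypothetical):

```bash
# Observed usage (requires metrics-server)
kubectl top pods \
--namespace=akri

# Recommendations (requires the Vertical Pod Autoscaler CRDs)
kubectl describe vpa akri-agent \
--namespace=akri
```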
Krustlet on DO Managed Kubernetes
I’ve spent time this week returning to the interesting Deislabs project Krustlet. Since the last time, the bootstrapping process has been simplified using Kubernetes Bootstrap Tokens. I know this updated process works with MicroK8s. Unfortunately, I’m struggling with it on GKE and thought I’d try DigitalOcean Managed Kubernetes.
It worked first time!
In the following, we run both the Kubernetes cluster and the Krustlet Droplet on DigitalOcean but, as long as the cluster and the VM are able to communicate with one another, you should be able to run these anywhere.
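The cluster half of that pairing is a one-liner with doctl; the name, region and node size below are placeholders:

```bash
doctl kubernetes cluster create krustlet-demo \
--region nyc1 \
--count 1 \
--size s-2vcpu-4gb

# Merge the cluster's credentials into ~/.kube/config
doctl kubernetes kubeconfig save krustlet-demo
```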
Kubernetes cert-manager
I developed an admission webhook for Akri, twice (Golang, Rust). I naively followed other examples for the generation of the certificates, created a 1.20 cluster and broke that process.
I’d briefly considered using cert-manager recently but quickly abandoned the idea thinking it would be onerous and unnecessary complexity for little-old-me. I was wrong. It’s excellent and I recommend it highly.
I won’t reproduce the v1beta1 and v1 examples from the Stack Overflow question as they should be self-explanatory.
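For flavor, the heart of the cert-manager approach is just two resources; this is a minimal self-signed sketch (names and DNS names are placeholders), not the Stack Overflow examples:

```bash
kubectl apply --filename=- <<EOF
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: selfsigned
spec:
  selfSigned: {}
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: webhook-cert
spec:
  secretName: webhook-tls
  dnsNames:
  - webhook.default.svc
  - webhook.default.svc.cluster.local
  issuerRef:
    name: selfsigned
EOF
```

cert-manager then creates and renews the webhook-tls Secret, which the webhook’s Deployment can mount.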
Kubernetes Webhooks
I spent some time last week writing my first admission webhook for Kubernetes. I wrote the handler in Golang because I’m most familiar with it and because, Golang being Kubernetes’ native language, I was more confident that the necessary SDKs would exist and that the documentation would likely use Golang by default. I struggled to find useful documentation and so this post is to help you (and me!) remember how to do this next time!
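The registration side, at least, is compact; here’s a minimal ValidatingWebhookConfiguration (the service name, namespace and path are hypothetical, and caBundle is elided):

```bash
kubectl apply --filename=- <<EOF
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: example
webhooks:
- name: example.default.svc
  admissionReviewVersions: ["v1"]
  sideEffects: None
  clientConfig:
    # caBundle: base64-encoded PEM for the webhook's serving cert
    service:
      name: example
      namespace: default
      path: /validate
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["pods"]
EOF
```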
Kubernetes Device Plugins
I’m debugging an issue with Akri Zeroconf protocol in which Instance environment variables are no longer (!) being surfaced within the Broker pods. In my adventures, it seemed useful to better understand how Akri works and specifically, how Akri uses Kubernetes Device Plugins.
IIUC, plugins register with the Kubelet (!) via a gRPC service (Registration) that the Kubelet exposes on a UNIX socket at /var/lib/kubelet/device-plugins/kubelet.sock.

Then (!) if successful, devices should be reported in the Node’s status (capacity|allocatable) and be available to be bound to Pods.
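One way to verify that last step is to inspect the Node directly; registered devices surface there as extended resources:

```bash
# Devices appear in the Node's status as capacity|allocatable
kubectl get node "${NODE}" \
--output=jsonpath='{.status.allocatable}'
```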
Akri
For the past couple of weeks, I’ve been playing around with Akri, a Microsoft (DeisLabs) project for building a connected edge with Kubernetes. Kubernetes, IoT, Rust (and Golang) make this all compelling to me.
Initially, I deployed an Akri End-to-End to MicroK8s on Google Compute Engine (link) and Digital Ocean (link). But I was interested to create my own example and so have proposed a very (!) simple HTTP-based protocol.
akri
I was very interested to read about Microsoft DeisLabs’ latest (Rust-based) Kubernetes project: akri. If I understand it correctly, it provides a mechanism to make any (IoT) device accessible to containers running within a cluster. I need to spend more time playing around with it so that I can fully understand it. I had some problems getting the End-to-End demo running on Google Compute Engine instances (and then I tried a DigitalOcean droplet).
Accessing GCR repos from Kubernetes
Until today, I’d not accessed a Google Container Registry repo from a non-GKE Kubernetes deployment.
It turns out that it’s pretty well-documented (link), but here’s an end-to-end example.
Assuming:
```bash
BILLING=[[YOUR-BILLING]]
PROJECT=[[YOUR-PROJECT]]
SERVER="us.gcr.io"
```

If not already:
```bash
gcloud projects create ${PROJECT}

gcloud beta billing projects link ${PROJECT} \
--billing-account=${BILLING}

gcloud services enable containerregistry.googleapis.com \
--project=${PROJECT}
```

Container Registry

```bash
IMAGE="busybox" # Or ...

docker pull ${IMAGE}

docker tag \
${IMAGE} \
${SERVER}/${PROJECT}/${IMAGE}

docker push ${SERVER}/${PROJECT}/${IMAGE}

gcloud container images list-tags ${SERVER}/${PROJECT}/${IMAGE}
```

Service Account

Create a service account that’s permitted to download (read-only) images from this project’s registry, as sketched below.
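A sketch of those remaining steps (the account name gcr-reader and the key-file name are hypothetical; Container Registry is backed by Cloud Storage, hence the storage role):

```bash
ACCOUNT="gcr-reader" # hypothetical name
EMAIL="${ACCOUNT}@${PROJECT}.iam.gserviceaccount.com"

gcloud iam service-accounts create ${ACCOUNT} \
--project=${PROJECT}

# Read-only access to the registry's backing storage
gcloud projects add-iam-policy-binding ${PROJECT} \
--member="serviceAccount:${EMAIL}" \
--role="roles/storage.objectViewer"

gcloud iam service-accounts keys create key.json \
--iam-account=${EMAIL}

# imagePullSecret using the _json_key convention
kubectl create secret docker-registry gcr \
--docker-server=${SERVER} \
--docker-username=_json_key \
--docker-password="$(cat key.json)"
```

Pods can then reference the secret via imagePullSecrets.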
NGINX Ingress
I’ve written a couple of deployment options (Google Compute Engine; Kubernetes) for an open-source project. The Kubernetes deployment provides NodePort and (TCP) LoadBalancer options and I’ve been trying (unsuccessfully) to add HTTPS load-balancing.

I should (!) try deploying to Google Kubernetes Engine (GKE) but I’ve been using microk8s, Digital Ocean Managed Kubernetes and the Linode LKE Beta. Each of these requires an Ingress controller implementation; for GKE, GCP’s HTTP/S Load Balancer (GCLB) is used.
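For the non-GKE clusters, installing the NGINX Ingress controller with Helm is a reasonable common denominator (the release name is arbitrary):

```bash
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update

helm install ingress-nginx ingress-nginx/ingress-nginx
```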
Kubernetes Engine and Free Tier
Google Cloud Platform Free Tier appears (please verify this for yourself) to provide the ability to run a(n admittedly minuscule) Kubernetes cluster for free. So, why do this? It provides a definitive Kubernetes (Engine) experience on Google Cloud Platform that you may use for learning and testing.

With Kubernetes Engine, the master node(s) and the control plane are free.

The Kubernetes (i.e. Compute Engine) nodes potentially incur charges, including for the VM runtime and any attached storage, snapshots, etc.
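A sketch of a minimal-cost cluster follows; the machine type, node count and disk size are assumptions, so check the current Free Tier terms before relying on it:

```bash
gcloud container clusters create free-tier \
--project=${PROJECT} \
--zone=us-central1-a \
--machine-type=f1-micro \
--num-nodes=3 \
--disk-size=10
```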