Deploying to K3s
A simple deployment of cAdvisor to K3s, confirming that Ingresses can be exposed using the Tailscale Kubernetes Operator (TLS) and, since it ships with K3s, Traefik (non-TLS).
local image = "gcr.io/cadvisor/cadvisor:latest";

local labels = {
  app: "cadvisor",
};

local name = std.extVar("NAME");
local node_ip = std.extVar("NODE_IP");

local port = 8080;

local deployment = {
  apiVersion: "apps/v1",
  kind: "Deployment",
  metadata: {
    name: name,
    labels: labels,
  },
  spec: {
    replicas: 1,
    selector: {
      matchLabels: labels,
    },
    template: {
      metadata: {
        labels: labels,
      },
      spec: {
        containers: [
          {
            name: name,
            image: image,
            ports: [
              {
                name: "http",
                containerPort: 8080,
                protocol: "TCP",
              },
            ],
            resources: {
              limits: {
                memory: "500Mi",
              },
              requests: {
                cpu: "250m",
                memory: "250Mi",
              },
            },
            securityContext: {
              allowPrivilegeEscalation: false,
              privileged: false,
              readOnlyRootFilesystem: true,
              runAsGroup: 1000,
              runAsNonRoot: true,
              runAsUser: 1000,
            },
          },
        ],
      },
    },
  },
};

local ingresses = [
  {
    // Tailscale Ingress TLS (non-public)
    apiVersion: "networking.k8s.io/v1",
    kind: "Ingress",
    metadata: {
      name: "tailscale",
      labels: labels,
    },
    spec: {
      ingressClassName: "tailscale",
      defaultBackend: {
        service: {
          name: name,
          port: {
            number: port,
          },
        },
      },
      tls: [
        {
          hosts: [
            name,
          ],
        },
      ],
    },
  },
  {
    // Traefik Ingress non-TLS (non-public)
    apiVersion: "networking.k8s.io/v1",
    kind: "Ingress",
    metadata: {
      name: "traefik",
      labels: labels,
    },
    spec: {
      ingressClassName: "traefik",
      rules: [
        {
          host: std.format(
            "%(name)s.%(node_ip)s.nip.io", {
              name: name,
              node_ip: node_ip,
            },
          ),
          http: {
            paths: [
              {
                path: "/",
                pathType: "Prefix",
                backend: {
                  service: {
                    name: name,
                    port: {
                      number: port,
                    },
                  },
                },
              },
            ],
          },
        },
      ],
    },
  },
];

local prometheusrule = {
  // Alternatively parameterize the duration
  local duration = 10,

  apiVersion: "monitoring.coreos.com/v1",
  kind: "PrometheusRule",
  metadata: {
    name: name,
    labels: labels,
  },
  spec: {
    groups: [
      {
        name: name,
        rules: [
          {
            alert: "TestcAdvisorDown",
            annotations: {
              summary: "Test cAdvisor instance is down or not scraping metrics.",
              description: std.format(
                "The cAdvisor instance {{ $labels.instance }} has not been scraping metrics for more than %(duration)s minutes.", {
                  duration: duration,
                },
              ),
            },
            expr: "absent(cadvisor_version_info{namespace=\"k3s-test\"}) or (cadvisor_version_info{namespace=\"k3s-test\"}!=1)",
            "for": std.format(
              "%(duration)sm", {
                duration: duration,
              },
            ),
            labels: labels {
              severity: "warning",
            },
          },
        ],
      },
    ],
  },
};

local service = {
  apiVersion: "v1",
  kind: "Service",
  metadata: {
    name: name,
    labels: labels,
  },
  spec: {
    selector: labels,
    ports: [
      {
        name: "http",
        port: port,
        targetPort: port,
        protocol: "TCP",
      },
    ],
  },
};

local serviceaccount = {
  apiVersion: "v1",
  kind: "ServiceAccount",
  metadata: {
    name: name,
    labels: labels,
  },
};

local servicemonitor = {
  apiVersion: "monitoring.coreos.com/v1",
  kind: "ServiceMonitor",
  metadata: {
    name: name,
    labels: labels,
  },
  spec: {
    selector: {
      matchLabels: labels,
    },
    endpoints: [
      {
        interval: "120s",
        path: "/metrics",
        port: "http",
        scrapeTimeout: "30s",
      },
    ],
  },
};

// Output
{
  apiVersion: "v1",
  kind: "List",
  items: [
    deployment,
    prometheusrule,
    service,
    serviceaccount,
    servicemonitor,
  ] + ingresses,
}
I like to have a script that applies Jsonnet to the file:
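Something like this, as a sketch: it assumes the manifest above is saved as cadvisor.jsonnet, and the node IP and namespace values are placeholders to adjust.

#!/usr/bin/env bash
set -euo pipefail

# Placeholder values; NAME and NODE_IP become Jsonnet external variables.
NAME="cadvisor"
NODE_IP="10.0.0.10"   # a K3s node IP; used for the nip.io Traefik host
NAMESPACE="k3s-test"

# Render the Jsonnet and pipe the resulting List straight into kubectl.
jsonnet \
  --ext-str NAME="${NAME}" \
  --ext-str NODE_IP="${NODE_IP}" \
  cadvisor.jsonnet \
| kubectl apply --namespace="${NAMESPACE}" --filename=-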
Migrating to K3s
I’m migrating from MicroK8s to K3s. This post explains the installation and (re)configuration steps that I took, including verification steps:
- Install K3s
- Use standalone `kubectl` and update `${KUBECONFIG}` (`~/.kube/config`)
- Install System Upgrade Controller
- Install `kube-prometheus` stack (including the Prometheus Operator)
- Disable Grafana and Node Exporter
- Tweak default `scrapeInterval`
- Install Tailscale Operator
- Create Prometheus and Alertmanager Ingresses
- Patch Prometheus and Alertmanager `externalUrl`s
- Tweak `Prometheus` resource to allow any `serviceMonitors`
- Tweak `Prometheus` resource to allow any `prometheusRules` (a sketch of the last two tweaks follows this list)
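The last two items map to selectors on the `Prometheus` custom resource; an empty selector matches everything. A sketch of one way to do both, assuming the kube-prometheus defaults (a `Prometheus` resource named k8s in the monitoring namespace):

# Sketch: let the Prometheus Operator pick up ServiceMonitors and
# PrometheusRules regardless of their labels. Resource name and namespace
# are the kube-prometheus defaults; adjust for your install.
kubectl patch prometheus k8s \
  --namespace=monitoring \
  --type=merge \
  --patch='{"spec":{"serviceMonitorSelector":{},"ruleSelector":{}}}'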
MicroK8s
Why abandon MicroK8s? I’ve been using MicroK8s for several years without issue but, after upgrading to Ubuntu 25.10, which includes a Rust replacement for sudo (one that doesn’t support `sudo -E`), I created a problem for myself with MicroK8s and have been unable to restore a working Kubernetes cluster. I took the opportunity to reassess my distribution; I have long thought about switching to K3s.
Using FauxRPC to debug transcoding HTTP/JSON to gRPC
A Stack Overflow question involving transcoding HTTP/JSON to gRPC piqued my interest. I had a hunch about the solution but was initially dissuaded from attempting a repro because of the complexity:
- recreate proto
- implement stubs
- deploy gRPC-Gateway
I then realized that FauxRPC would probably address much of that complexity, and it did.
I created foo.proto:
syntax = "proto3";

import "google/api/annotations.proto";
import "google/api/field_behavior.proto";
import "protoc-gen-openapiv2/options/annotations.proto";

// Was not defined in the question.
service Foo {
  rpc FetchResource(GetResourceRequest) returns (ResourceResponse) {
    option (google.api.http) = {get: "/v1/resource/{resource_id}/group"};
    option (grpc.gateway.protoc_gen_openapiv2.options.openapiv2_operation) = {
      summary: "Get Resource from group"
      description: "Retrieve resource info"
    };
  }
};

// Should be named FetchResourceRequest to match the RPC method name.
message GetResourceRequest {
  string resource_id = 1 [
    (google.api.field_behavior) = REQUIRED,
    (grpc.gateway.protoc_gen_openapiv2.options.openapiv2_field) = {
      description: "Resource UUID v4."
      example: "\"81042622-4f02-4e85-a896-172edd5381b6\""
    }
  ];

  ResourceFilter resource_filter = 2 [
    (google.api.field_behavior) = OPTIONAL,
    (grpc.gateway.protoc_gen_openapiv2.options.openapiv2_field) = {
      description: "\"RESOURCE_FILTER_FIRST\""
    }
  ];
}

// Empty response for demonstration purposes.
// Was not defined in the question.
// Should be named FetchResourceResponse to match the RPC method name.
message ResourceResponse {}

enum ResourceFilter {
  option (grpc.gateway.protoc_gen_openapiv2.options.openapiv2_enum) = {
    title: "Resource Filter"
    example: "\"RESOURCE_FILTER_FIRST\""
  };

  RESOURCE_FILTER_UNSPECIFIED = 0;
  RESOURCE_FILTER_FIRST = 1;
}
The proto depends on googleapis (Google’s repo containing definitions for Google’s services) and grpc-gateway (containing protoc plugins and protobuf sources).
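One way to satisfy those imports is with buf. The following is a sketch: the BSR module names are the public googleapis and grpc-gateway modules, and the descriptor file name is arbitrary.

# Declare the dependencies (buf v2 config), resolve them, and build a
# descriptor set that downstream tooling can consume.
cat > buf.yaml <<'EOF'
version: v2
deps:
  - buf.build/googleapis/googleapis
  - buf.build/grpc-ecosystem/grpc-gateway
EOF

buf dep update                 # pins the deps in buf.lock
buf build --output foo.binpb   # descriptor set including foo.proto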
Bare Metal: Pico and CYW43
The “W” signifies that the board includes wireless. Interestingly, the wireless chip is an Infineon CYW43439, which is itself a microcontroller running its own ARM Cortex core (an M3). The Pico’s USB device includes another ARM microcontroller. So, with the dual user-programmable Cortex (or Hazard3) cores and the 8 PIOs, these devices really pack a punch.
As a result of adding the wireless (microcontroller) chip to the Pico, the Pico W’s on-board LED is accessible only through the CYW43439. Yeah, it’s weird, but it makes for an interesting solution.
Bare Metal: WS2812
This one works!
Virtual WS2812s
I’d gone *cough* many years without ever hearing of 1-Wire and, suddenly, it’s everywhere.
Addressable LEDs are hugely popular in tinkerer circles. They come in myriad forms (wheels, matrices) but are most commonly sold as long strips. The part number is WS2812 and they use 1-Wire too. Each LED, usually multi-color (RGB) and often known as a pixel, is combined with an IC that enables the “addressable” behavior.
Bare Metal: DS18B20
I’ve been working through Google’s Comprehensive Rust and, for the past couple of weeks, the Bare Metal Rust standalone course, which uses the (excellent) micro:bit v2 and its Nordic Semiconductor nRF52833 (an ARM Cortex-M4; interestingly, its USB interface is also implemented using an ARM Cortex-M0).
There’s a wealth of Rust tutorials for microcontrollers: I bought an ESP32-C3-DevKit-RUST-1 for another tutorial and have spent some time with my favorite Pi Pico and a newly acquired Debug Probe.
Gemini CLI (3/3)
Update 2025-07-08
Gemini CLI supports HTTP-based MCP server integration
So, it’s possible to replace the .gemini/settings.json included in the original post with:
{
  "theme": "Default",
  "mcpServers": {
    "ackal-mcp-server": {
      "httpUrl": "http://localhost:7777/mcp",
      "timeout": 5000
    },
    "prometheus-mcp-server": {
      "httpUrl": "https://prometheus.{tailnet}/mcp",
      "timeout": 5000
    }
  },
  "selectedAuthType": "gemini-api-key"
}
This solution also permits the addition of headers, e.g. an `Authorization` header.
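A sketch of what that might look like for one of the servers above (the token is a placeholder, and I’m assuming a headers map is accepted alongside httpUrl):

"prometheus-mcp-server": {
  "httpUrl": "https://prometheus.{tailnet}/mcp",
  "headers": {
    "Authorization": "Bearer <token>"
  },
  "timeout": 5000
}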
Original
Okay, so not “Gemini Code Assist” but sufficiently similar that I think it warrants the “3/3” appellation.
Gemini Code Assist 'agent' mode without `npx mcp-remote` (2/3)
Solved!
Ugh.
Before I continue, one important detail from yesterday’s experience that I don’t think I clarified: unlike the Copilot agent, it appears (!?) that the Gemini agent only supports integration with MCP servers via stdio. As a result, the only way to integrate with HTTP-based MCP servers (local or remote) is to proxy traffic through stdio, as mcp-remote and the Rust example herein do.
The most helpful change was to take a hint from the NPM mcp-remote and create a log file. This helps because the mcp-remote process, being launched by Visual Studio Code (well, by the Gemini Code Assist agent), isn’t otherwise trivial to debug.
Gemini Code Assist 'agent' mode without `npx mcp-remote` (1/3)
As a former Microsoftie and Googler:
Good documentation: Extend your agent with Model Context Protocol
Not-so-good documentation: Using agentic chat as a pair programmer
My definition of “good” being: I was able to follow the clear instructions and it worked the first time. Well done, Microsoft!
This space is moving so quickly and I’m happy to alpha test these companies’ solutions but (a) Google’s portfolio is a mess. This week I’ve tried (and failed) to use Gemini CLI (because I don’t want to run Node.js on my host machine and it doesn’t work in a container: issue #1437) and now this.
Tailscale client metrics service discovery to Prometheus
I couldn’t summarize this in a title (even with an LLM’s help):
I wanted to:
- Run a Tailscale service discovery agent
- On a Tailscale node outside of the Kubernetes cluster
- Using Podman Quadlet
- Accessing it from the Kubernetes cluster using Tailscale’s egress proxy
- Accessing the proxy with a `kube-prometheus` `ScrapeConfig`
- In order that Prometheus would scrape the container for Tailscale client metrics
Long-winded? Yes, but I had an underlying need to run the Tailscale service discovery agent remotely, and this configuration helped me achieve that.
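To give a flavor of the `ScrapeConfig` piece, here is a sketch that points Prometheus at the service discovery agent through the egress-proxy Service; the Service name, namespace, port, and discovery path are all hypothetical:

# Sketch only: a Prometheus Operator ScrapeConfig using HTTP service
# discovery served by the remote agent, reached via the Tailscale
# egress-proxy Service. Names, port, and path are hypothetical.
cat <<'EOF' | kubectl apply --namespace=monitoring --filename=-
apiVersion: monitoring.coreos.com/v1alpha1
kind: ScrapeConfig
metadata:
  name: tailscale
spec:
  httpSDConfigs:
    - url: http://tailscale-sd.monitoring.svc.cluster.local:8080/sd
      refreshInterval: 60s
EOF
# The Prometheus resource must also select this ScrapeConfig
# (e.g. via spec.scrapeConfigSelector).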