Bare Metal: Pico and CYW43
The “W” signifies that the board includes wireless. Interestingly, the wireless chip is an Infineon CYW43439, which is itself a microcontroller running its own ARM Cortex-M3 core. The Pico’s USB device includes another ARM microcontroller. So, with the dual user-programmable Cortex (or Hazard3) cores and the 8 PIOs, these devices really pack a punch.
As a result of adding the wireless (microcontroller) chip to the Pico, the Pico W’s on-board LED is accessible only through the CYW43439. Yeah, weird but it makes for an interesting solution.
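As a hedged sketch using the `embassy-rp` and `cyw43` crates (the firmware, PIO-SPI, and task setup is elided here and exact signatures vary by crate version), the LED ends up being driven as the CYW43439’s GPIO 0 rather than an RP2040 pin:

```rust
// Sketch only: assumes `control` is the cyw43::Control handle returned by
// cyw43::new(...) after the usual firmware + PIO-SPI setup (see the embassy
// cyw43 examples), and that embassy_time::Timer is in scope.
loop {
    // The Pico W's on-board LED hangs off the wireless chip, so it's toggled
    // via the CYW43439's GPIO 0, not an RP2040 GPIO.
    control.gpio_set(0, true).await;
    Timer::after_millis(250).await;
    control.gpio_set(0, false).await;
    Timer::after_millis(250).await;
}
```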
Bare Metal: WS2812
This one works!
Virtual WS2812s
I’d gone cough many years and never heard of 1-Wire and, suddenly, it’s everywhere.
Addressable LEDs are hugely popular in tinkerer circles. They come in myriad forms (wheels, matrices) but, commonly, they’re sold as long strips. The part number is WS2812 and they, too, are driven over a single data wire. Each LED, often multi-color (RGB) and often known as a pixel, is combined with an IC that enables the “addressable” behavior.
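To make the data format concrete, here’s a small, hedged sketch (host-side Rust, not tied to any particular driver crate): each pixel is sent as 24 bits in green-red-blue order, and pixels are simply concatenated in the order they sit along the strip.

```rust
/// Encode (r, g, b) pixels into WS2812 wire order: one GRB byte triple per pixel,
/// concatenated in strip order.
fn encode_grb(pixels: &[(u8, u8, u8)]) -> Vec<u8> {
    pixels
        .iter()
        .flat_map(|&(r, g, b)| [g, r, b]) // note: green goes first on the wire
        .collect()
}

fn main() {
    // Three pixels: red, green, blue.
    let frame = encode_grb(&[(255, 0, 0), (0, 255, 0), (0, 0, 255)]);
    assert_eq!(frame, vec![0, 255, 0, 255, 0, 0, 0, 0, 255]);
    println!("{frame:?}");
}
```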
Bare Metal: DS18B20
I’ve been working through Google’s Comprehensive Rust and, for the past couple of weeks, the Bare Metal Rust standalone course that uses the (excellent) micro:bit v2, which has a Nordic Semiconductor nRF52833 (an ARM Cortex-M4; interestingly, its USB interface is also implemented using an ARM Cortex-M0).
There’s a wealth of Rust tutorials for microcontrollers; I bought an ESP32-C3-DevKit-RUST-1 for another tutorial and spent some time with my favorite Pi Pico and a newly acquired Debug Probe.
Gemini CLI (3/3)
Update 2025-07-08
Gemini CLI supports HTTP-based MCP server integration
So, it’s possible to replace the `.gemini/settings.json` included in the original post with:
```json
{
  "theme": "Default",
  "mcpServers": {
    "ackal-mcp-server": {
      "httpUrl": "http://localhost:7777/mcp",
      "timeout": 5000
    },
    "prometheus-mcp-server": {
      "httpUrl": "https://prometheus.{tailnet}/mcp",
      "timeout": 5000
    }
  },
  "selectedAuthType": "gemini-api-key"
}
```
This solution also permits the addition of headers, e.g. to include an `Authorization` header.
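As a sketch (the `headers` field name is my assumption; verify against the Gemini CLI documentation), that could look like:

```json
{
  "mcpServers": {
    "prometheus-mcp-server": {
      "httpUrl": "https://prometheus.{tailnet}/mcp",
      "timeout": 5000,
      "headers": {
        "Authorization": "Bearer {token}"
      }
    }
  }
}
```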
Original
Okay, so not “Gemini Code Assist” but sufficiently similar that I think it warrants the “3/3” appellation.
Gemini Code Assist 'agent' mode without `npx mcp-remote` (2/3)
Solved!
Ugh.
Before I continue, one important detail from yesterday’s experience which I think I didn’t clarify: unlike the Copilot agent, it appears (!?) that the Gemini agent only supports integration with MCP servers via stdio. As a result, the only way to integrate with HTTP-based MCP servers (local or remote) is to proxy traffic through stdio, as `mcp-remote` and the Rust example herein do.
The most helpful change was to take a hint from the NPM `mcp-remote` and create a log file. This helps because the `mcp-remote` process, being launched by Visual Studio Code (well, the Gemini Code Assist agent), isn’t otherwise trivial to debug.
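A minimal sketch of that idea (the path and messages are illustrative, not the post’s actual code): because stdin/stdout carry the MCP JSON-RPC traffic, diagnostics have to go somewhere else, e.g. an append-only log file.

```rust
use std::fs::OpenOptions;
use std::io::Write;

fn main() -> std::io::Result<()> {
    // stdin/stdout are reserved for MCP JSON-RPC, so write diagnostics to a file.
    let mut log = OpenOptions::new()
        .create(true)
        .append(true)
        .open("/tmp/mcp-stdio-proxy.log")?; // illustrative path

    writeln!(log, "proxy starting, pid={}", std::process::id())?;

    // ... the stdio<->HTTP proxy loop would run here, logging requests/responses ...

    Ok(())
}
```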
Gemini Code Assist 'agent' mode without `npx mcp-remote` (1/3)
Former Microsoftie and Googler:
Good documentation: Extend your agent with Model Context Protocol
Not such good documentation: Using agentic chat as a pair programmer
My definition of “good” being: I was able to follow the clear instructions and it worked the first time. Well done, Microsoft!
This space is moving so quickly and I’m happy to alpha-test these companies’ solutions, but Google’s portfolio is a mess. This week I’ve tried (and failed) to use Gemini CLI (because I don’t want to run Node.js on my host machine and it doesn’t work in a container: issue #1437) and now this.
Tailscale client metrics service discovery to Prometheus
I couldn’t summarize this in a title (even with an LLM’s help):
I wanted to:
- Run a Tailscale service discovery agent
- On a Tailscale node outside of the Kubernetes cluster
- Using Podman Quadlet
- Accessing it from the Kubernetes cluster using Tailscale’s egress proxy
- Accessing the proxy with a kube-prometheus `ScrapeConfig`
- In order that Prometheus would scrape the container for Tailscale client metrics
Long-winded? Yes, but I had an underlying need to run the Tailscale service discovery remotely, and this configuration helped me achieve that.
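For flavor, a minimal `ScrapeConfig` sketch (the namespace, Service name, and port are hypothetical placeholders for the egress proxy that fronts the remote Tailscale node):

```yaml
apiVersion: monitoring.coreos.com/v1alpha1
kind: ScrapeConfig
metadata:
  name: tailscale-client-metrics
  namespace: monitoring
spec:
  metricsPath: /metrics
  staticConfigs:
    - targets:
        # Hypothetical egress-proxy Service fronting the remote Tailscale node.
        - tailscale-egress.default.svc.cluster.local:9002
```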
Prometheus MCP Server
I was unable to find a Model Context Protocol (MCP) server implementation for Prometheus. I had a quiet weekend and so I’ve been writing one: `prometheus-mcp-server`.
I used the code from the MCP for gRPC Health Checking protocol that I wrote about previously as a guide.
I wrote a series of `stdin` and HTTP tests to have confidence that the service is working correctly, but I had no MCP host.
I discovered that Visual Studio Code, through its GitHub Copilot extension, has a preview feature that lets it function as an MCP host and access MCP servers.
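As a sketch of that wiring (the server name, command, and flag are placeholders, and VS Code’s MCP configuration schema may have changed since this was written), a workspace `.vscode/mcp.json` looks something like:

```json
{
  "servers": {
    "prometheus-mcp-server": {
      "type": "stdio",
      "command": "prometheus-mcp-server",
      "args": ["--prometheus.url=http://localhost:9090"]
    }
  }
}
```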
MCP for gRPC Health Checking protocol
Model Context Protocol (MCP) is “all the rage” these days.
I stumbled upon `protoc-gen-go-mcp` and think it’s an elegant application of two technologies: programmatically generating an MCP server from a gRPC protobuf.
I’m considering building an MCP server for Ackal but thought I’d start with something simple: the gRPC Health Checking protocol.
I was surprised to learn, as I was doing this, that there’s a new `List` method (Add `List` method to gRPC Health service #143) added to `grpc.health.v1.Health`. My (Ackal) healthcheck server does not yet implement it (see later).
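For reference, the `grpc.health.v1.Health` service now looks roughly like this (paraphrased; check the grpc-proto repo for the authoritative definition):

```proto
// grpc.health.v1 (abridged)
service Health {
  // Returns the serving status of the requested service.
  rpc Check(HealthCheckRequest) returns (HealthCheckResponse);

  // The newer addition: returns the statuses of all registered services.
  rpc List(HealthListRequest) returns (HealthListResponse);

  // Streams status changes for the requested service.
  rpc Watch(HealthCheckRequest) returns (stream HealthCheckResponse);
}
```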
Configuring Envoy to proxy Google Cloud Run v2
I’m building an emulator for Cloud Run. As I considered the solution, I assumed (more later) that I could implement Google’s gRPC interface for Cloud Run and use Envoy to proxy HTTP/REST requests to the gRPC service using Envoy’s gRPC-JSON transcoder.
Google calls this process Transcoding HTTP/JSON to gRPC, which I think is a better description.
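As a sketch of the Envoy side (the descriptor path is a placeholder): the transcoder filter is given a compiled proto descriptor set and the fully-qualified gRPC service name, and sits ahead of the router filter.

```yaml
# Hedged sketch: the HTTP filter chain entry that transcodes JSON/REST to gRPC.
http_filters:
  - name: envoy.filters.http.grpc_json_transcoder
    typed_config:
      "@type": type.googleapis.com/envoy.extensions.filters.http.grpc_json_transcoder.v3.GrpcJsonTranscoder
      # Descriptor set built with: protoc --include_imports --descriptor_set_out=run_v2.pb ...
      proto_descriptor: "/etc/envoy/run_v2.pb"
      services: ["google.cloud.run.v2.Services"]
      convert_grpc_status: true
  - name: envoy.filters.http.router
    typed_config:
      "@type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router
```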
Google’s Cloud Run v2 (`v1` is no longer published to the `googleapis` repo) `service.proto` includes the following `Services` definition for `CreateService`:
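The definition is approximately the following (paraphrased from memory of the `googleapis` repo; consult `google/cloud/run/v2/service.proto` for the exact annotations):

```proto
service Services {
  option (google.api.default_host) = "run.googleapis.com";

  // Creates a new Service in a given project and location.
  rpc CreateService(CreateServiceRequest) returns (google.longrunning.Operation) {
    option (google.api.http) = {
      post: "/v2/{parent=projects/*/locations/*}/services"
      body: "service"
    };
    option (google.api.method_signature) = "parent,service,service_id";
    option (google.longrunning.operation_info) = {
      response_type: "Service"
      metadata_type: "Service"
    };
  }
}
```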