Cloud Run with a gRPC probe
Cloud Run supports gRPC startup and liveness probes, which I’d not used before.
I’m using the Cloud Run v2 API, specifically projects.locations.services.create and its Service resource:
PROJECT="..."
REGION="..."
REPO="..."
# Must be in an Artifact Registry repo
IMAGE="${REGION}-docker.pkg.dev/${PROJECT}/${REPO}/..."
# Run v2
ENDPOINT="https://run.googleapis.com/v2"
PARENT="projects/${PROJECT}/locations/${REGION}"
SERVICE="..."
I like to use Jsonnet (specifically go-jsonnet) to template Kubernetes(-like) deployments.
cloudrun.jsonnet:
local project = std.extVar("project");
local region = std.extVar("region");
local service = std.extVar("service");
local image = std.extVar("image");
local port = 8080;
local health_checking_service = "foo";
{
  "labels": {
    "type": "test"
  },
  "annotations": {
    "type": "test"
  },
  "template": {
    "containers": [
      {
        "name": service,
        "image": image,
        "args": [],
        "resources": {
          "limits": {
            "cpu": "1000m",
            "memory": "512Mi"
          }
        },
        "ports": [
          {
            "name": "http1",
            "containerPort": port
          }
        ],
        "startupProbe": {
          "grpc": {
            "port": port,
            "service": health_checking_service
          }
        }
      }
    ],
    "scaling": {
      "maxInstanceCount": 1
    }
  }
}
And deploy it using:
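Something like the following (a sketch: the --ext-str values match cloudrun.jsonnet’s extVars and auth piggybacks on gcloud’s access token):
jsonnet \
  --ext-str project="${PROJECT}" \
  --ext-str region="${REGION}" \
  --ext-str service="${SERVICE}" \
  --ext-str image="${IMAGE}" \
  cloudrun.jsonnet |
curl \
  --request POST \
  --header "Authorization: Bearer $(gcloud auth print-access-token)" \
  --header "Content-Type: application/json" \
  --data @- \
  "${ENDPOINT}/${PARENT}/services?serviceId=${SERVICE}"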
Trivy vulnerability scanning
I build (and therefore manage) many container images. It’s easy (common?) to overlook that these images contain vulnerabilities, hopefully vulnerabilities that have since been fixed, and that the images must be rebuilt to pick up those fixes.
I have used Google’s very expensive container vulnerability scanning tool but wanted something cheaper. I found this list of open source solutions on Reddit and decided to look into Trivy.
It’s possible to install Trivy via a package manager, as a standalone binary, or by building the Go binary locally, but I prefer to use containers whenever possible:
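For example, with Docker (the cache mount is optional and the image reference is a placeholder; Podman works equally well):
IMAGE="..." # Image reference to scan

docker run \
  --rm \
  --volume /var/run/docker.sock:/var/run/docker.sock \
  --volume ${HOME}/.cache/trivy:/root/.cache/ \
  docker.io/aquasec/trivy:latest \
  image ${IMAGE}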
XML-RPC in Rust and Python
A lazy Sunday afternoon and my interest was piqued by XML-RPC.
Client
A very basic XML-RPC client wrapped in a Cloud Functions function:
main.py:
import functions_framework
import os
import xmlrpc.client

endpoint = os.getenv("ENDPOINT")

proxy = xmlrpc.client.ServerProxy(endpoint)

@functions_framework.http
def add(request):
    print(request)
    rqst = request.get_json(silent=True)
    resp = proxy.add({
        "x": {
            "real": rqst["x"]["real"],
            "imag": rqst["x"]["imag"],
        },
        "y": {
            "real": rqst["y"]["real"],
            "imag": rqst["y"]["imag"],
        },
    })
    return resp
requirements.txt:
functions-framework==3.*
Run it:
python3 -m venv venv
source venv/bin/activate
python3 -m pip install --requirement requirements.txt
export ENDPOINT="..."
functions-framework --target=add
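Then, from another shell (the values are arbitrary):
curl \
  --request POST \
  --header "Content-Type: application/json" \
  --data '{"x":{"real":1,"imag":2},"y":{"real":3,"imag":4}}' \
  http://localhost:8080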
Server
Forcing myself to go Rust-first, I found an (old) xml-rpc crate.
Using Rust to generate Kubernetes CRD
For the first time, I chose Rust to solve a problem. Until now, I’d been using Rust to learn the language and to rewrite existing code. But this problem led me to Rust because my other tools wouldn’t cut it.
The question was how to represent oneof fields in Kubernetes Custom Resource Definitions (CRDs).
CRDs use OpenAPI schemas, and the resulting YAML can be challenging to grok:
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: deploymentconfigs.example.com
spec:
  group: example.com
  names:
    categories: []
    kind: DeploymentConfig
    plural: deploymentconfigs
    shortNames: []
    singular: deploymentconfig
  scope: Namespaced
  versions:
    - additionalPrinterColumns: []
      name: v1alpha1
      schema:
        openAPIV3Schema:
          description: An example schema
          properties:
            spec:
              properties:
                deployment_strategy:
                  oneOf:
                    - required:
                        - rolling_update
                    - required:
                        - recreate
                  properties:
                    recreate:
                      properties:
                        something:
                          format: uint16
                          minimum: 0.0
                          type: integer
                      required:
                        - something
                      type: object
                    rolling_update:
                      properties:
                        max_surge:
                          format: uint16
                          minimum: 0.0
                          type: integer
                        max_unavailable:
                          format: uint16
                          minimum: 0.0
                          type: integer
                      required:
                        - max_surge
                        - max_unavailable
                      type: object
                  type: object
              required:
                - deployment_strategy
              type: object
          required:
            - spec
          title: DeploymentConfig
          type: object
      served: true
      storage: true
      subresources: {}
I’ve developed several Kubernetes Operators using the Operator SDK in Go (which builds upon Kubebuilder).
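Here, though, Rust’s kube ecosystem fit the bill. A sketch of the kind of code that yields the CRD above, using kube’s derive macro with schemars and serde (this is my reconstruction; the crate choices and details are assumptions, not the post’s verbatim source):
// Cargo.toml (assumed): kube (features = ["derive"]), k8s-openapi, schemars, serde, serde_yaml
use kube::{CustomResource, CustomResourceExt};
use schemars::JsonSchema;
use serde::{Deserialize, Serialize};

// Each variant of this (externally tagged) enum becomes a `oneOf` branch
// in the generated OpenAPI schema
#[derive(Clone, Debug, Deserialize, JsonSchema, Serialize)]
#[serde(rename_all = "snake_case")]
enum DeploymentStrategy {
    RollingUpdate { max_surge: u16, max_unavailable: u16 },
    Recreate { something: u16 },
}

/// An example schema
#[derive(CustomResource, Clone, Debug, Deserialize, JsonSchema, Serialize)]
#[kube(
    group = "example.com",
    version = "v1alpha1",
    kind = "DeploymentConfig",
    namespaced
)]
struct DeploymentConfigSpec {
    deployment_strategy: DeploymentStrategy,
}

fn main() {
    // Emit the CustomResourceDefinition (the YAML shown above)
    println!("{}", serde_yaml::to_string(&DeploymentConfig::crd()).unwrap());
}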
Using Delve to debug Go containers on Kubernetes
An interesting question on Stack Overflow prompted me to work out how to use Visual Studio Code and Delve to remotely debug a Go app running on Kubernetes (MicroK8s).
The OP is using Gin which was also new to me so the question gave me an opportunity to try out several things.
Sources
A simple healthz handler:
package main

import (
	"flag"
	"log/slog"
	"net/http"

	"github.com/gin-gonic/gin"
)

var (
	addr = flag.String("addr", "0.0.0.0:8080", "HTTP server endpoint")
)

func healthz(c *gin.Context) {
	c.String(http.StatusOK, "ok")
}

func main() {
	flag.Parse()

	router := gin.Default()
	router.GET("/fib", handler())
	router.GET("/healthz", healthz)

	slog.Info("Server starting")
	slog.Info("Server error",
		"err", router.Run(*addr),
	)
}
Containerfile:
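A minimal sketch (the base images, file layout and Delve’s port are my assumptions): build the binary without optimizations, then run it under a headless Delve that VS Code can attach to:
FROM docker.io/golang:1.21 AS build

WORKDIR /app

COPY go.mod go.sum ./
RUN go mod download

COPY main.go .

# -N -l disables optimizations and inlining so Delve maps sources accurately
RUN CGO_ENABLED=0 go build -gcflags="all=-N -l" -o server main.go
RUN CGO_ENABLED=0 go install github.com/go-delve/delve/cmd/dlv@latest

FROM gcr.io/distroless/static-debian11

COPY --from=build /app/server /server
COPY --from=build /go/bin/dlv /dlv

# Delve listens on :2345; --continue starts the app (serving on :8080) immediately
ENTRYPOINT ["/dlv", "exec", "/server", "--headless", "--listen=:2345", "--api-version=2", "--accept-multiclient", "--continue"]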
Securing gRPC services using Tailscale
This is so useful that it’s worth its own post.
I write many gRPC services. As these generally run securely, it’s best to test them that way too but, even with e.g. Let’s Encrypt, it can be challenging to generate appropriate TLS certs.
Tailscale makes this trivial.
Assuming there’s a gRPC service running on localhost:50051, we want to avoid -plaintext:
PORT="50051"
grpcurl \
  -plaintext 0.0.0.0:${PORT} \
  list
NOTE I’m using list and assuming your service has reflection enabled but you can, of course, invoke specific methods instead.
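With Tailscale, the host gets a MagicDNS name and tailscale cert mints a Let’s Encrypt certificate for it. Serve the gRPC service with that cert and key and -plaintext can be dropped (the node name below is a placeholder):
NODE="..." # This node’s MagicDNS name e.g. foo.tailnet-name.ts.net

# Mint a (Let’s Encrypt) cert and key for this node
tailscale cert ${NODE}

# Restart the gRPC service using ${NODE}.crt and ${NODE}.key, then:
grpcurl ${NODE}:${PORT} \
  list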
Google Cloud Translation w/ gRPC 3 ways
General
You’ll need a Google Cloud project with Cloud Translation (translate.googleapis.com) enabled and a Service Account (and key) with suitable permissions in order to run the following.
BILLING="..." # Your Billing ID (gcloud billing accounts list)
PROJECT="..." # Your Project ID
ACCOUNT="tester"
EMAIL="${ACCOUNT}@${PROJECT}.iam.gserviceaccount.com"
ROLES=(
  "roles/cloudtranslate.user"
  "roles/serviceusage.serviceUsageConsumer"
)
# Create Project
gcloud projects create ${PROJECT}
# Associate Project with your Billing Account
gcloud billing accounts link ${PROJECT} \
  --billing-account=${BILLING}
# Enable Cloud Translation
gcloud services enable translate.googleapis.com \
  --project=${PROJECT}
# Create Service Account
gcloud iam service-accounts create ${ACCOUNT} \
  --project=${PROJECT}
# Create Service Account Key
gcloud iam service-accounts keys create ${PWD}/${ACCOUNT}.json \
  --iam-account=${EMAIL} \
  --project=${PROJECT}
# Update Project IAM permissions
for ROLE in "${ROLES[@]}"
do
  gcloud projects add-iam-policy-binding ${PROJECT} \
    --member=serviceAccount:${EMAIL} \
    --role=${ROLE}
done
For the code, you’ll need to install protoc and preferably have it in your path.
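E.g. grabbing a release binary from GitHub (the version and platform are assumptions):
PROTOC_VERSION="25.1"
PLATFORM="linux-x86_64"
ARCHIVE="protoc-${PROTOC_VERSION}-${PLATFORM}.zip"

curl \
  --location \
  --remote-name \
  "https://github.com/protocolbuffers/protobuf/releases/download/v${PROTOC_VERSION}/${ARCHIVE}"

# Extracts bin/protoc and the well-known types under include/
unzip ${ARCHIVE} -d ${HOME}/.local

# Ensure ${HOME}/.local/bin is in your PATH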
Google Cloud Events protobufs and SDKs
I’ve written before about Ackal’s use of Firestore and subscribing to Firestore document CRUD events:
- Routing Firestore events to GKE with Eventarc
- Cloud Firestore Triggers in Golang using Firestore triggers
I find Google’s Eventarc documentation to be confusing and, in typical Google fashion, even though open-sourced, you often need to do some legwork to find relevant sources, viz:
- Google’s Protobufs for Eventarc (using CloudEvents): google-cloudevents¹
- Convenience language-specific types generated from the above (since you can generate these yourself using protoc), e.g. google-cloudevents-go, google-cloudevents-python etc.
¹ IIUC Eventarc is the Google service. It carries Google Events that are CloudEvents. These are defined by protocol buffer schemas.
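For example, generating Go types for the Firestore events from the protos (the repo layout below is as I found it and may drift; protoc-gen-go must be on your PATH):
git clone https://github.com/googleapis/google-cloudevents.git

mkdir -p gen

protoc \
  --proto_path=google-cloudevents/proto \
  --go_out=gen \
  --go_opt=paths=source_relative \
  google-cloudevents/proto/google/events/cloud/firestore/v1/data.proto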
Prost! Tonic w/ a dash of JSON
I naively (!) began exploring JSON marshaling of Protobufs in Rust. Other Protobuf language SDKs include JSON marshaling, making the process straightforward. I was to learn that, in Rust, it’s not so simple. Unfortunately, this continues to discourage my further use of Rust (Rust is just hard).
My goal was to marshal an arbitrary protocol buffer message that included a oneof field. I was unable to JSON marshal the Rust generated by tonic for such a message.
Fly Kubernetes
Interested to explore Fly Kubernetes after being accepted into the closed beta.
The folks at Fly are innovative in their use of technology and, having been a long-time Kubernetes user, I was intrigued to learn that Fly.io has implemented Kubernetes atop Fly.
My first Deployment failed:
Authentication required to access image "ghcr.io/{image}"
It was confirmed to me that FKS does not support pulling from private registries. The solution is to pull-tag-push images to registry.fly.io but, because Fly’s registry is app-specific, you need to do some querying to grab the Fly app created by FKS (for your namespace):
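Something like the following (the app-discovery step is manual because, as far as I know, the FKS app naming isn’t documented; the image reference is a placeholder):
IMAGE="ghcr.io/..." # The private image

# Each FKS namespace is backed by a Fly App; find yours
fly apps list

APP="..." # The FKS-created app for your namespace

# Configure Docker to authenticate to registry.fly.io
fly auth docker

# Pull-tag-push
docker pull ${IMAGE}
docker tag ${IMAGE} registry.fly.io/${APP}:latest
docker push registry.fly.io/${APP}:latest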