gRPC-Web w/ FauxRPC and Rust
After recently discovering FauxRPC, I was sufficiently impressed that I decided to use it to test Ackal’s gRPC services using Rust.
FauxRPC provides multi-protocol support and so, after successfully implementing the faux gRPC client tests, I was compelled to try gRPC-Web too. For no immediate benefit other than that it’s there, it’s free and it’s interesting. As an aside, the faux REST client tests worked without issue using Reqwest.
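For context, those REST tests amounted to little more than the following sketch. The address (localhost:6660) and the POST-JSON-to-/package.Service/Method route are my assumptions about a locally running FauxRPC instance, not documented values:

// Requires reqwest with the "blocking" feature enabled
// The address and route are assumptions about a local FauxRPC instance
fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = reqwest::blocking::Client::new();
    let resp = client
        .post("http://localhost:6660/grpc.health.v1.Health/Check")
        .header("Content-Type", "application/json")
        .body("{}")
        .send()?;
    let status = resp.status();
    println!("{}: {}", status, resp.text()?);
    Ok(())
}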
Unfortunately, my optimism hit a wall with gRPC-Web and what follows is a summary of my unresolved issue.
FauxRPC using gRPCurl, Golang and Rust
I read FauxRPC + Testcontainers on Hacker News and was intrigued. I spent a little more time “evaluating” this than I’d planned because I’m forcing myself to use Rust as much as possible and my ignorance (see below) caused me some challenges.
The technology is interesting and works well. The experience also gave me an excuse to try Testcontainers, which I’d heard about but hadn’t used until this week.
For my future self:
Tool | Description |
---|---|
FauxRPC | A general-purpose tool (built using Buf’s Connect) that provides registry and stub (gRPC) services; it can be programmatically configured (with a Protobuf descriptor) and stubbed (example method responses) to help test gRPC implementations. |
Testcontainers | Write code (e.g. Rust) to create and interact with (test) services running in [Docker] containers. |
Connect | (More than a) gRPC implementation, used by FauxRPC. |
gRPCurl | A command-line gRPC tool. |
I started following along with FauxRPC’s Testcontainers example but, being new to Connect, I wasn’t familiar with its Eliza service. The service is available at demo.connectrpc.com:443 and is described by eliza.proto, part of examples-go. Had I realized this sooner, I would have used this example rather than the gRPC Health Checking protocol.
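For example, Eliza can be exercised with gRPCurl (a sketch; it assumes the demo server has reflection enabled, which appears to be the case):

# gRPCurl defaults to TLS, which demo.connectrpc.com expects
grpcurl \
  -d '{"sentence": "Hello"}' \
  demo.connectrpc.com:443 \
  connectrpc.eliza.v1.ElizaService/Say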
Cloud Run with a gRPC probe
Cloud Run supports gRPC startup and liveness probes, which I’d not used before.
I’m using Cloud Run v2, specifically projects.locations.services.create and its Service resource.
PROJECT="..."
REGION="..."
REPO=".."
# Must be in an Artifact Registry repo
IMAGE="${REGION}-docker.pkg.dev/${PROJECT}/${REPO}/..."
# Run v2
ENDPOINT="https://run.googleapis.com/v2"
PARENT="projects/${PROJECT}/locations/${REGION}"
SERVICE="..."
I like to use Jsonnet (specifically go-jsonnet) to help template Kubernetes(-like) deployments.
cloudrun.jsonnet:
local project = std.extVar("project");
local region = std.extVar("region");
local service = std.extVar("service");
local image = std.extVar("image");
local port = 8080;
local health_checking_service = "foo";
{
  "labels": {
    "type": "test"
  },
  "annotations": {
    "type": "test"
  },
  "template": {
    // Cloud Run v2's `containers` is an array
    "containers": [
      {
        "name": service,
        "image": image,
        "args": [],
        "resources": {
          "limits": {
            "cpu": "1000m",
            "memory": "512Mi"
          }
        },
        "ports": [
          {
            "name": "http1",
            "containerPort": port
          }
        ],
        "startupProbe": {
          "grpc": {
            "port": port,
            "service": health_checking_service
          }
        }
      }
    ],
    "scaling": {
      "maxInstanceCount": 1
    }
  }
}
And deploy it using the Run v2 API. A sketch (render the Jsonnet to JSON, then POST it to services.create; variables as defined above):
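jsonnet \
  --ext-str project="${PROJECT}" \
  --ext-str region="${REGION}" \
  --ext-str service="${SERVICE}" \
  --ext-str image="${IMAGE}" \
  cloudrun.jsonnet > cloudrun.json

curl \
  --request POST \
  --header "Authorization: Bearer $(gcloud auth print-access-token)" \
  --header "Content-Type: application/json" \
  --data @cloudrun.json \
  "${ENDPOINT}/${PARENT}/services?serviceId=${SERVICE}"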
Trivy vulnerability scanning
I build (and therefore manage) many container images. It’s easy (common?) to overlook that these images contain vulnerabilities, hopefully vulnerabilities that have since been fixed, and that the images must be rebuilt to pick up those fixes.
I have used Google’s very expensive container vulnerability scanning tool but wanted something cheaper. I found this list of open source solutions on Reddit and decided to look into Trivy.
It’s possible to install Trivy via a package manager, as a binary or by building the Go binary locally, but I prefer to use containers whenever possible. A sketch using Trivy’s official image (mounting the Docker socket lets it scan images in the local Docker daemon):
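IMAGE="..." # The image to be scanned

docker run \
  --rm \
  --volume /var/run/docker.sock:/var/run/docker.sock \
  aquasec/trivy:latest \
  image ${IMAGE}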
XML-RPC in Rust and Python
A lazy Sunday afternoon and my interest was piqued by XML-RPC.
Client
A very basic XML-RPC client wrapped in a Cloud Functions function:
main.py:
import functions_framework
import os
import xmlrpc.client

# ENDPOINT is the XML-RPC server's URL
endpoint = os.getenv("ENDPOINT")

proxy = xmlrpc.client.ServerProxy(endpoint)

@functions_framework.http
def add(request):
    print(request)
    rqst = request.get_json(silent=True)
    resp = proxy.add({
        "x": {
            "real": rqst["x"]["real"],
            "imag": rqst["x"]["imag"],
        },
        "y": {
            "real": rqst["y"]["real"],
            "imag": rqst["y"]["imag"],
        },
    })
    return resp
requirements.txt:
functions-framework==3.*
Run it:
python3 -m venv venv
source venv/bin/activate
python3 -m pip install --requirement requirements.txt
export ENDPOINT="..."
functions-framework --target=add
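Then, to exercise it (this assumes the Functions Framework default of localhost:8080; the payload matches the handler above):

curl \
  --request POST \
  --header "Content-Type: application/json" \
  --data '{"x":{"real":1,"imag":2},"y":{"real":3,"imag":4}}' \
  http://localhost:8080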
Server
I’m forcing myself to go Rust-first and there’s an (old) xml-rpc crate.
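Until the crate version is working, here’s a dependency-free Rust stub that only demonstrates the shape of an XML-RPC response on the wire; it ignores the request and always returns one hard-coded complex sum (this is not the xml-rpc crate’s API):

use std::io::{Read, Write};
use std::net::TcpListener;

fn main() -> std::io::Result<()> {
    let listener = TcpListener::bind("0.0.0.0:8000")?;
    for stream in listener.incoming() {
        let mut stream = stream?;
        // A single read suffices for a sketch; real code would parse HTTP properly
        let mut buf = [0u8; 4096];
        let _ = stream.read(&mut buf)?;
        // A canned <methodResponse>: the struct {real: 4.0, imag: 6.0}
        let body = concat!(
            "<?xml version=\"1.0\"?>",
            "<methodResponse><params><param><value><struct>",
            "<member><name>real</name><value><double>4.0</double></value></member>",
            "<member><name>imag</name><value><double>6.0</double></value></member>",
            "</struct></value></param></params></methodResponse>"
        );
        let response = format!(
            "HTTP/1.1 200 OK\r\nContent-Type: text/xml\r\nContent-Length: {}\r\n\r\n{}",
            body.len(),
            body
        );
        stream.write_all(response.as_bytes())?;
    }
    Ok(())
}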
Using Rust to generate Kubernetes CRD
For the first time, I chose Rust to solve a problem. Until now, I’d been using Rust to learn the language and to rewrite existing code. But this problem led me to Rust because my other tools wouldn’t cut it.
The question was how to represent oneof fields in Kubernetes Custom Resource Definitions (CRDs).
CRDs use OpenAPI schema and the YAML that results can be challenging to grok.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: deploymentconfigs.example.com
spec:
  group: example.com
  names:
    categories: []
    kind: DeploymentConfig
    plural: deploymentconfigs
    shortNames: []
    singular: deploymentconfig
  scope: Namespaced
  versions:
    - additionalPrinterColumns: []
      name: v1alpha1
      schema:
        openAPIV3Schema:
          description: An example schema
          properties:
            spec:
              properties:
                deployment_strategy:
                  oneOf:
                    - required:
                        - rolling_update
                    - required:
                        - recreate
                  properties:
                    recreate:
                      properties:
                        something:
                          format: uint16
                          minimum: 0.0
                          type: integer
                      required:
                        - something
                      type: object
                    rolling_update:
                      properties:
                        max_surge:
                          format: uint16
                          minimum: 0.0
                          type: integer
                        max_unavailable:
                          format: uint16
                          minimum: 0.0
                          type: integer
                      required:
                        - max_surge
                        - max_unavailable
                      type: object
                  type: object
              required:
                - deployment_strategy
              type: object
          required:
            - spec
          title: DeploymentConfig
          type: object
      served: true
      storage: true
      subresources: {}
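The YAML above isn’t written by hand. A sketch of the kind of Rust that generates it, using kube’s CustomResource derive and schemars (my reconstruction, not necessarily the original code):

// Cargo.toml: kube (feature "derive"), k8s-openapi, schemars, serde, serde_yaml
use kube::{CustomResource, CustomResourceExt};
use schemars::JsonSchema;
use serde::{Deserialize, Serialize};

/// An example schema
#[derive(CustomResource, Serialize, Deserialize, Clone, Debug, JsonSchema)]
#[kube(
    group = "example.com",
    version = "v1alpha1",
    kind = "DeploymentConfig",
    namespaced
)]
pub struct DeploymentConfigSpec {
    deployment_strategy: DeploymentStrategy,
}

// An externally tagged enum (serde's default) yields the `oneOf` above:
// exactly one of `rolling_update` or `recreate` must be present
#[derive(Serialize, Deserialize, Clone, Debug, JsonSchema)]
#[serde(rename_all = "snake_case")]
pub enum DeploymentStrategy {
    RollingUpdate { max_surge: u16, max_unavailable: u16 },
    Recreate { something: u16 },
}

fn main() {
    // Serialize the derived CustomResourceDefinition as YAML
    print!("{}", serde_yaml::to_string(&DeploymentConfig::crd()).unwrap());
}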
I’ve developed several Kubernetes Operators using the Operator SDK in Go (which builds upon Kubebuilder).
Using Delve to debug Go containers on Kubernetes
An interesting question on Stack Overflow prompted me to understand how to use Visual Studio Code and Delve to remotely debug a Golang app running on Kubernetes (MicroK8s).
The OP is using Gin which was also new to me so the question gave me an opportunity to try out several things.
Sources
A simple healthz handler:
package main

import (
	"flag"
	"log/slog"
	"net/http"

	"github.com/gin-gonic/gin"
)

var (
	addr = flag.String("addr", "0.0.0.0:8080", "HTTP server endpoint")
)

func healthz(c *gin.Context) {
	c.String(http.StatusOK, "ok")
}

func main() {
	flag.Parse()

	router := gin.Default()
	// handler() (the OP's /fib implementation) is defined elsewhere
	router.GET("/fib", handler())
	router.GET("/healthz", healthz)

	slog.Info("Server starting")
	slog.Info("Server error",
		"err", router.Run(*addr),
	)
}
Containerfile:
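The original isn’t reproduced here but the pattern is a multi-stage build that compiles the app without optimizations (so Delve can map sources accurately) and runs it under Delve’s headless server. A sketch (image names, versions and ports are mine):

FROM golang:1.22 AS build

WORKDIR /app
COPY . .

RUN go install github.com/go-delve/delve/cmd/dlv@latest
# -N disables optimizations; -l disables inlining
RUN go build -gcflags="all=-N -l" -o /go/bin/app .

FROM gcr.io/distroless/base-debian12

COPY --from=build /go/bin/dlv /dlv
COPY --from=build /go/bin/app /app

# 8080: the app; 2345: Delve's headless API server
EXPOSE 8080 2345

ENTRYPOINT ["/dlv", "--listen=:2345", "--headless=true", "--api-version=2", "--accept-multiclient", "exec", "/app"]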
Securing gRPC services using Tailscale
This is so useful that it’s worth its own post.
I write many gRPC services. As these generally run securely, it’s best to test them that way too but, even with e.g. Let’s Encrypt, it can be challenging to generate appropriate TLS certs.
Tailscale makes this trivial.
Assuming there’s a gRPC service running on localhost:50051, we want to avoid -plaintext:

PORT="50051"

grpcurl \
  -plaintext \
  0.0.0.0:${PORT} \
  list
NOTE I’m using list and assuming your service has reflection enabled but you can, of course, use relevant methods.
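A sketch of the Tailscale alternative (it assumes MagicDNS and HTTPS certificates are enabled for the tailnet; the names are placeholders):

NODE="..."    # This machine's Tailscale hostname
TAILNET="..." # Your tailnet name
NAME="${NODE}.${TAILNET}.ts.net"

# Mints ${NAME}.crt and ${NAME}.key (Let's Encrypt)
tailscale cert ${NAME}

# With the gRPC service bound to ${NAME}:50051 using that cert|key:
grpcurl ${NAME}:50051 list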
Google Cloud Translation w/ gRPC 3 ways
General
You’ll need a Google Cloud project with Cloud Translation (translate.googleapis.com) enabled and a Service Account (and key) with suitable permissions in order to run the following.
BILLING="..." # Your Billing ID (gcloud billing accounts list)
PROJECT="..." # Your Project ID
ACCOUNT="tester"
EMAIL="${ACCOUNT}@${PROJECT}.iam.gserviceaccount.com"
ROLES=(
"roles/cloudtranslate.user"
"roles/serviceusage.serviceUsageConsumer"
)
# Create Project
gcloud projects create ${PROJECT}
# Associate Project with your Billing Account
gcloud billing accounts link ${PROJECT} \
--billing-account=${BILLING}
# Enable Cloud Translation
gcloud services enable translate.googleapis.com \
--project=${PROJECT}
# Create Service Account
gcloud iam service-accounts create ${ACCOUNT} \
--project=${PROJECT}
# Create Service Account Key
gcloud iam service-accounts keys create ${PWD}/${ACCOUNT}.json \
--iam-account=${EMAIL} \
--project=${PROJECT}
# Update Project IAM permissions
for ROLE in "${ROLES[@]}"
do
gcloud projects add-iam-policy-binding ${PROJECT} \
--member=serviceAccount:${EMAIL} \
--role=${ROLE}
done
For the code, you’ll need to install protoc and preferably have it in your path.
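For example, a sketch (it assumes Go with the protoc-gen-go and protoc-gen-go-grpc plugins installed; the googleapis repo provides the import paths):

# Cloud Translation's protos (and their dependencies) live in googleapis
git clone https://github.com/googleapis/googleapis.git

protoc \
  --proto_path=googleapis \
  --go_out=. \
  --go-grpc_out=. \
  googleapis/google/cloud/translate/v3/translation_service.proto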
Google Cloud Events protobufs and SDKs
I’ve written before about Ackal’s use of Firestore and subscribing to Firestore document CRUD events:
- Routing Firestore events to GKE with Eventarc
- Cloud Firestore Triggers in Golang using Firestore triggers
I find Google’s Eventarc documentation to be confusing and, in typical Google fashion, even though open-sourced, you often need to do some legwork to find relevant sources, viz:
- Google’s Protobufs for Eventarc (using CloudEvents): google-cloudevents [1]
- Convenience, language-specific types generated from the above (you can also generate these yourself using protoc), e.g. google-cloudevents-go, google-cloudevents-python etc.

[1] IIUC Eventarc is the Google service. It carries Google Events that are CloudEvents. These are defined by protocol buffer schemas.