Prost! Tonic w/ a dash of JSON
I naively (!) began exploring JSON marshaling of Protobufs in Rust. Other Protobuf language SDKs include JSON marshaling, making the process straightforward. I was to learn that, in Rust, it's not so simple. Unfortunately, this experience continues to discourage my further use of Rust (Rust is just hard).
My goal was to marshal an arbitrary Protocol Buffer message that included a oneof field. I was unable to JSON-marshal the Rust code generated by tonic for such a message.
Fly Kubernetes
I was keen to explore Fly Kubernetes (FKS) after being accepted into the closed beta.
The folks at Fly.io are innovative in their use of technology and, having been a long-time Kubernetes user, I was intrigued to learn that they have implemented Kubernetes atop Fly.
My first Deployment failed:
Authentication required to access image "ghcr.io/{image}"
It was confirmed to me that FKS does not support pulling from private registries. The solution is to pull, tag, and push images to registry.fly.io. However, Fly's repository is app-specific, so you need to do some querying to grab the Fly app created by FKS for your namespace, as sketched below:
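A sketch of the pull-tag-push workaround; fly auth docker and fly apps list are standard flyctl commands, while IMAGE and APP are placeholders for your values:

# Authenticate the local Docker client against registry.fly.io
fly auth docker

# Identify the Fly app that FKS created for your namespace
fly apps list

IMAGE="ghcr.io/{image}" # the private image that FKS cannot pull
APP="..."               # the FKS-created app for your namespace

docker pull ${IMAGE}
docker tag ${IMAGE} registry.fly.io/${APP}:latest
docker push registry.fly.io/${APP}:latest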
Prometheus Protobufs and Native Histograms
I responded to the Stack Overflow question Prometheus metric protocol buffer in gRPC; it piqued my curiosity and got me yak shaving.
Prometheus used to support two exposition formats, including Protocol Buffers, then dropped Protocol Buffers, and has since re-added the format (see Protobuf format). The Protobuf format returned to support the experimental Native Histograms feature.
I'm interested in adding Native Histogram support to Ackal, so I thought I'd learn more about this metric type.
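Native Histograms are gated behind a feature flag. A minimal sketch, assuming Prometheus v2.40+ (where the flag was introduced) and an existing prometheus.yml:

# Experimental: enables scraping and storing native histograms
prometheus \
  --config.file=prometheus.yml \
  --enable-feature=native-histograms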
MicroK8s operability add-on
Spent time today yak shaving, which resulted in an unplanned migration from MicroK8s' 'prometheus' add-on to the new and not fully documented 'observability' add-on:
sudo microk8s.enable prometheus
Infer repository core for addon prometheus
DEPRECATION WARNING: 'prometheus' is deprecated and will soon be removed. Please use 'observability' instead.
...
The reason for the name change is unclear.
It's also unclear whether there's a difference in the primary components that are installed (I'd thought Grafana wasn't included in 'prometheus'); (Grafana) Loki and (Grafana) Tempo definitely weren't included, and I don't want them either.
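Completing the migration is, presumably, a matter of disabling the old add-on and enabling the new one (the standard MicroK8s pattern; I haven't verified that existing state carries over):

sudo microk8s.disable prometheus
sudo microk8s.enable observability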
Navigating Koyeb's API with Rust
I wrote about Navigating Koyeb's Golang SDK. That client is generated by the OpenAPI Generator project from Koyeb's Swagger (now OpenAPI) REST API spec.
This post shows how to generate a Rust SDK using the Generator and provides a very basic example of using the SDK.
The Generator will create a Rust library project:
VERS="v7.2.0"
PACKAGE_NAME="koyeb-api-client-rs"
PACKAGE_VERS="1.0.0"
podman run \
--interactive --tty --rm \
--volume=${PWD}:/local \
docker.io/openapitools/openapi-generator-cli:${VERS} \
generate \
-g=rust \
-i=https://developer.koyeb.com/public.swagger.json \
-o=/local/${PACKAGE_NAME} \
--additional-properties=\
packageName=${PACKAGE_NAME},\
packageVersion=${PACKAGE_VERS}
This will create the project in ${PWD}/${PACKAGE_NAME} including the documentation at:
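Assuming the Generator's standard Rust layout (an assumption; I've not checked this exact version), the per-endpoint and per-model docs land in the project's docs directory:

ls ${PWD}/${PACKAGE_NAME}/docs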
Gnarly Protocol Buffers compilation
This Stack Overflow question piqued my interest:
retry policy configuration for grpc not working
Service Config in gRPC is new to me, but my initial suspicion (albeit incorrect) was that the JSON types were wrong.
I decided to try using the Protocol Buffer source, service_config.proto, to verify the JSON. To do so, I needed to compile the source… it was gnarly.
There are 2 repos used. The service_config.proto includes options for java_package but no go_package; the sketch below shows one way to work around that.
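Because the file lacks a go_package option, protoc-gen-go needs the Go import path supplied on the command line with an M mapping. A sketch, assuming the grpc-proto and googleapis repos are cloned side by side and example.com/serviceconfig is a placeholder module path:

protoc \
  --proto_path=grpc-proto \
  --proto_path=googleapis \
  --go_out=. \
  --go_opt=module=example.com \
  --go_opt=Mgrpc/service_config/service_config.proto=example.com/serviceconfig \
  grpc-proto/grpc/service_config/service_config.proto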
Navigating Koyeb's Golang SDK
Ackal deploys gRPC Health Checking clients in locations around the world in order to health check services in ways representative of customer need.
Koyeb offers multiple locations, and I spent time today writing a client for Ackal to integrate with Koyeb using the Golang client for the Koyeb API.
The SDK is generated from Koyeb's OpenAPI (née Swagger) endpoint using openapi-generator-cli. This is a smart, programmatic solution for ensuring that the SDK always matches the API definition, but I found the result idiosyncratic and therefore a little gnarly.
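For comparison with the Rust generation shown earlier, the Golang SDK can be produced with the same Generator by swapping the generator name; the output path is my own placeholder:

VERS="v7.2.0"

podman run \
  --interactive --tty --rm \
  --volume=${PWD}:/local \
  docker.io/openapitools/openapi-generator-cli:${VERS} \
  generate \
  -g=go \
  -i=https://developer.koyeb.com/public.swagger.json \
  -o=/local/koyeb-api-client-go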
Capturing e.g. CronJob metrics with GMP
The deployment of Kube State Metrics for Google Managed Prometheus creates both a PodMonitoring and a ClusterPodMonitoring.
The PodMonitoring resource exposes metrics published on the metric-self port (8081).
The ClusterPodMonitoring exposes metrics published on the metric port (8080) but this doesn't include cronjob-related metrics:
kubectl get clusterpodmonitoring/kube-state-metrics \
--output=jsonpath="{.spec.endpoints[0].metricRelabeling}" \
| jq -r .
[
{
"action": "keep",
"regex": "kube_(daemonset|deployment|replicaset|pod|namespace|node|statefulset|persistentvolume|horizontalpodautoscaler|job_created)(_.+)?",
"sourceLabels": [
"__name__"
]
}
]
NOTE The regex does not include kube_cronjob and includes only kube_job_created patterns.
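One possible workaround (a sketch, untested; the managing controller may revert manual edits) is to patch the ClusterPodMonitoring so the regex also keeps kube_cronjob and the other kube_job metrics:

kubectl patch clusterpodmonitoring/kube-state-metrics \
  --type=json \
  --patch='[{
    "op": "replace",
    "path": "/spec/endpoints/0/metricRelabeling/0/regex",
    "value": "kube_(cronjob|daemonset|deployment|replicaset|pod|namespace|node|statefulset|persistentvolume|horizontalpodautoscaler|job)(_.+)?"
  }]'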
Listing Cloud Logging log-based metrics using gRPC
Referring to Accessing Google Services using gRPC, I wanted to query a project’s Cloud Logging for log-based metrics using gRPC.
In summary:
ENDPOINT="logging.googleapis.com:443"
ROOT="/path/to/googleapis" # https://github.com/googleapis/googleapis
PACKAGE="google/logging/v2"
# NB Not logging.proto
PROTO="${ROOT}/${PACKAGE}/logging_metrics.proto"
TOKEN=$(gcloud auth print-access-token)
PROJECT="..."
PACKAGE="google.logging.v2"
SERVICE="MetricsServiceV2"
METHOD="${PACKAGE}.${SERVICE}/ListLogMetrics"
# ListLogMetricsRequest fields
PARENT="projects/${PROJECT}"
grpcurl \
--import-path=${ROOT} \
--proto=${PROTO} \
-H "Authorization: Bearer ${TOKEN}" \
-d "{\"parent\": \"${PARENT}\"}" \
${ENDPOINT} ${METHOD}
From APIs Explorer, Cloud Logging API v2, instead of the REST reference, browse the gRPC reference, specifically the package google.logging.v2, which includes MetricsServiceV2. We're interested in the ListLogMetrics method, which (unfortunately) isn't directly hyperlinkable but is defined to be:
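grpcurl can print the method's definition from the same proto sources using its describe verb (no server connection required); the output shown is what I'd expect from the googleapis sources:

grpcurl \
  --import-path=${ROOT} \
  --proto=${PROTO} \
  describe ${PACKAGE}.${SERVICE}.ListLogMetrics

rpc ListLogMetrics ( .google.logging.v2.ListLogMetricsRequest ) returns ( .google.logging.v2.ListLogMetricsResponse );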
Prometheus Operator supports an auth proxy for Service Discovery
CRD linting
Returning to yesterday’s failing tests, it’s unclear how to introspect the E2E tests.
kubectl get namespaces
NAME STATUS AGE
...
allns-s2os2u-0-90f56669 Active 22h
allns-s2qhuw-0-6b33d5eb Active 4m23s
kubectl get all \
--namespace=allns-s2os2u-0-90f56669
No resources found in allns-s2os2u-0-90f56669 namespace.
kubectl get all \
--namespace=allns-s2qhuw-0-6b33d5eb
NAME READY STATUS RESTARTS AGE
pod/prometheus-operator-6c96477b9c-q6qm2 1/1 Running 0 4m12s
pod/prometheus-operator-admission-webhook-68bc9f885-nq6r8 0/1 ImagePullBackOff 0 4m7s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/prometheus-operator ClusterIP 10.152.183.247 <none> 443/TCP 4m9s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/prometheus-operator 1/1 1 1 4m12s
deployment.apps/prometheus-operator-admission-webhook 0/1 1 0 4m7s
NAME DESIRED CURRENT READY AGE
replicaset.apps/prometheus-operator-6c96477b9c 1 1 1 4m13s
replicaset.apps/prometheus-operator-admission-webhook-68bc9f885 1 1 0 4m8s
kubectl logs deployment/prometheus-operator-admission-webhook \
--namespace=allns-s2qhuw-0-6b33d5eb
Error from server (BadRequest): container "prometheus-operator-admission-webhook" in pod "prometheus-operator-admission-webhook-68bc9f885-nq6r8" is waiting to start: trying and failing to pull image
NAME="prometheus-operator-admission-webhook"
FILTER="{.spec.template.spec.containers[?(@.name==\"${NAME}\")].image}"
kubectl get deployment/prometheus-operator-admission-webhook \
--namespace=allns-s2qjz2-0-fad82c03 \
--output=jsonpath="${FILTER}"
quay.io/prometheus-operator/admission-webhook:52d1e55af
Want: