Podman
I’ve read about Podman and been intrigued by it but never taken the time to install it and play around with it. This morning, while walking my dog, I listened to the almost-always-interesting Kubernetes Podcast; two of the principals behind Podman were on the show to discuss it.
I decided to install it and use it in this week’s project.
Here’s a working Podman deployment for `gcp-oidc-token-proxy`:
```bash
ACCOUNT="..."
ENDPOINT="..."

# Pod name can't match a container name, i.e. not "prometheus"
POD="foo"

SECRET="${ACCOUNT}"
podman secret create ${SECRET} ${PWD}/${ACCOUNT}.json

# Pod publishes pod-port:container-port
podman pod create \
--name=${POD} \
--publish=9091:9090 \
--publish=7776:7777

PROMETHEUS=$(mktemp)

# Important: the config must be readable by the container's user
chmod go+r ${PROMETHEUS}

sed \
--expression="s|some-service-xxxxxxxxxx-xx.a.run.app|${ENDPOINT}|g" \
${PWD}/prometheus.yml > ${PROMETHEUS}

# Prometheus
# Requires --tty
# Can't include --publish but exposes 9090
podman run \
--detach --rm --tty \
--pod=${POD} \
--name=prometheus \
--volume=${PROMETHEUS}:/etc/prometheus/prometheus.yml \
docker.io/prom/prometheus:v2.30.2 \
--config.file=/etc/prometheus/prometheus.yml \
--web.enable-lifecycle

# GCP OIDC Token Proxy
# Can't include --publish but exposes 7777
podman run \
--detach --rm \
--pod=${POD} \
--name=gcp-oidc-token-proxy \
--secret=${SECRET} \
--env=GOOGLE_APPLICATION_CREDENTIALS=/run/secrets/${SECRET} \
ghcr.io/dazwilkin/gcp-oidc-token-proxy:ec8fa9d9ab1b7fa47448ff32e34daa0c3d211a8d \
--port=7777
```
The `prometheus` container includes a volume mount.
Scraping metrics exposed by Google Cloud Run services that require authentication
I’ve written a solution (`gcp-oidc-token-proxy`) that can be used in conjunction with Prometheus’ OAuth2 support to authenticate requests, so that Prometheus can scrape metrics exposed by e.g. Cloud Run services that require authentication. The solution resulted from my question on Stack Overflow.
Problem #1: Endpoint requires authentication
Given a Cloud Run service URL for which:
```bash
ENDPOINT="my-server-blahblah-wl.a.run.app"

# Returns 200 when authenticated w/ an ID token
TOKEN="$(gcloud auth print-identity-token)"
curl \
--silent \
--request GET \
--header "Authorization: Bearer ${TOKEN}" \
--write-out "%{response_code}" \
--output /dev/null \
https://${ENDPOINT}/metrics

# Returns 403 otherwise
curl \
--silent \
--request GET \
--write-out "%{response_code}" \
--output /dev/null \
https://${ENDPOINT}/metrics
```
Problem #2: Prometheus OAuth2 configuration is constrained
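For context, a rough sketch of the shape of the scrape configuration involved, assuming the proxy from the section above is reachable at `localhost:7777` (the job name, target, and client values are illustrative; Prometheus’ OAuth2 client-credentials flow expects `client_id`/`client_secret` even when, as I understand it, the token endpoint ignores them):

```yaml
scrape_configs:
  - job_name: cloud-run-service
    scheme: https
    oauth2:
      # Required by Prometheus' OAuth2 config even if unused by the proxy
      client_id: anything
      client_secret: anything
      # The proxy answers token requests with Google-signed ID tokens
      token_url: http://localhost:7777/
    static_configs:
      - targets:
          # The Cloud Run service host (no scheme)
          - some-service-xxxxxxxxxx-xx.a.run.app
```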
Golang Structured Logging w/ Google Cloud Logging
I have multiple components in an app and these are deployed across multiple Google Cloud Platform (GCP) services: Kubernetes Engine, Cloud Functions, Cloud Run, etc. Almost everything is written in Golang and I started the project using `go-logr`.

`logr` is in two parts: a `Logger` that you use to write log entries, and a `LogSink` (adaptor) that consumes log entries and outputs them to a specific log implementation.

Initially, I defaulted to using `stdr`, which is a `LogSink` for Go’s standard logging implementation. Something similar to the module’s example:
GitHub help with dependency management
This is very useful:
I am building an application that comprises multiple repos. I continue to procrastinate on whether using multiple repos vs. a monorepo was a good idea but, an issue that I have (had) is the need to ensure that the repos’ contents are using current (latest) modules. GitHub can help.
Most of the application is written in Golang with a smattering of Rust and some JavaScript.
`gcloud beta run services replace`
TL;DR I’m working on a project that includes multiple Cloud Run services. I’ve been putting my `gcloud` head on to deploy these services, thinking it curious that there’s no way to write the specs as YAML configs. Today, I learned that there is: `gcloud beta run services replace`.
What prompted the discovery was some frustration trying to deploy a JSON-valued environment variable to Cloud Run:
```bash
local FIREBASE_CONFIG="{
apiKey: ${FIREBASE_API_KEY},
authDomain: ${FIREBASE_AUTH_DOMAIN},
projectId: ${FIREBASE_PROJECT},
storageBucket: ${FIREBASE_STORAGE_BUCKET},
messagingSenderId: ${FIREBASE_MESSAGING_SENDER},
appId: ${FIREBASE_APP}}"

gcloud run deploy ${SRV_NAME} \
--image=${IMAGE} \
--command="/server" \
--args="--endpoint=:${PORT}" \
--set-env-vars=FIREBASE_CONFIG="${FIREBASE_CONFIG}" \
--max-instances=1 \
--memory=256Mi \
--ingress=all \
--platform=managed \
--port=${PORT} \
--allow-unauthenticated \
--region=${REGION} \
--project=${PROJECT}
```
`gcloud` balks at this.
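The YAML that `gcloud beta run services replace` consumes is the Knative Serving `Service` spec. A hypothetical minimal spec — the names and values below are placeholders, not my project’s actual config — looks something like:

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: my-service
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/maxScale: "1"
    spec:
      containers:
        - image: ghcr.io/example/server:latest
          command: ["/server"]
          args: ["--endpoint=:7777"]
          env:
            # A quoted YAML string sidesteps gcloud's comma-splitting
            # of --set-env-vars values
            - name: FIREBASE_CONFIG
              value: '{"apiKey":"...","authDomain":"..."}'
          ports:
            - containerPort: 7777
          resources:
            limits:
              memory: 256Mi
```

Deployed with e.g. `gcloud beta run services replace service.yaml --region=${REGION} --project=${PROJECT}`.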
Infrastructure as Code
Problem
I’m building an application that comprises:
- Kubernetes¹
- Kubernetes Operator
- Cloud Firestore
- Cloud Functions
- Cloud Run
- Cloud Endpoints
- Stripe
- Firebase Authentication
¹ - I’m using Google Kubernetes Engine (GKE) but may include other managed Kubernetes offerings (e.g. Digital Ocean, Linode, Oracle). GKE clusters are manageable by `gcloud` but other platforms require other CLI tools. All are accessible from bash but are these supported by e.g. Terraform (see below)?
Many of the components are packaged as container images and, because I’m using GitHub to host the project’s repos (I’ll leave the monorepo discussion for another post), I’ve become inculcated and use GitHub Container Registry (GHCR) as the container repo.
Renewing Firebase Authentication ID tokens with gRPC
I’ve written before about a project in which I’m using Firebase Authentication in combination with Google Cloud Endpoints and a gRPC service running on Cloud Run:
- Firebase Authentication, Cloud Endpoints and gRPC (1of2)
- Firebase Authentication, Cloud Endpoints and gRPC (2of2)
This works well with one caveat: the ID tokens (JWTs) minted by Firebase Authentication have a 3,600-second (one-hour) lifetime.
The user flow in my app is that whenever the user invokes the app’s CLI:
gRPC Interceptors and in-memory gRPC connections
For… reasons, I wanted to pre-filter gRPC requests to check for authorization. Authorization is implemented as a ‘micro-service’ and I wanted the authorization server to run in the same process as the gRPC client.
TL;DR:
- Shiju’s “Writing gRPC Interceptors in Go” is great
- This Stack Overflow answer, ostensibly for writing unit tests for gRPC, got me an in-process server
What follows stands on these folks’ shoulders…
A key motivator for me to write blog posts is that it helps me ensure that I understand things. Writing this post, I realized I’d not researched gRPC interceptors and, as luck would have it, I found some interesting content, not on grpc.io but in the `grpc-ecosystem` repo, specifically Go gRPC middleware. But, I refer again to Shiju’s clear and helpful “Writing gRPC Interceptors in Go”.
Stripe
It’s been almost a month since my last post. I’ve been occupied learning Stripe and integrating it into an application that I’m developing. The app benefits from a billing mechanism for prospective customers and, as far as I can tell, Stripe is the solution. I’d be interested in hearing perspectives on alternatives.
As with any platform, there’s good and bad and I’ll summarize my perspective on Stripe here. It’s been some time since I developed in JavaScript and this lack of familiarity has meant that the solution took longer than I wanted to develop. That said, before this component, I developed integration with Firebase Authentication and that required JavaScript’ing too and that was much easier (and more enjoyable).
Firebase Authentication, Cloud Endpoints and gRPC (2of2)
Earlier this week, I wrote about using Firebase Authentication, Cloud Endpoints and gRPC (1of2). Since then, I learned some more and added a gRPC interceptor to implement basic authorization for the service.
ESPv2 `--allow-unauthenticated`
The Cloud Endpoints (ESPv2) proxy must be run with `--allow-unauthenticated` on Cloud Run, to ensure that requests make it to the proxy, where each request is authenticated and only authenticated requests make it on to the backend service. Thanks to Google’s Teju Nareddy!