Below you will find pages that utilize the taxonomy term “GRPC”
Cloud Run with a gRPC probe
Cloud Run supports gRPC startup|liveness probes, which I’d not used before.
I’m using Cloud Run v2, specifically projects.locations.services.create and the Service resource.
PROJECT="..."
REGION="..."
REPO=".."
# Must be in an Artifact Registry repo
IMAGE="${REGION}-docker.pkg.dev/${PROJECT}/${REPO}/..."
# Run v2
ENDPOINT="https://run.googleapis.com/v2"
PARENT="projects/${PROJECT}/locations/${REGION}"
SERVICE="..."
I like to use Jsonnet (specifically go-jsonnet) to help template Kubernetes(-like) deployments.
cloudrun.jsonnet:
local project = std.extVar("project");
local region = std.extVar("region");
local service = std.extVar("service");
local image = std.extVar("image");
local port = 8080;
local health_checking_service = "foo";
{
  "labels": {
    "type": "test"
  },
  "annotations": {
    "type": "test"
  },
  "template": {
    "containers": [
      {
        "name": service,
        "image": image,
        "args": [],
        "resources": {
          "limits": {
            "cpu": "1000m",
            "memory": "512Mi"
          }
        },
        "ports": [
          {
            "name": "http1",
            "containerPort": port
          }
        ],
        "startupProbe": {
          "grpc": {
            "port": port,
            "service": health_checking_service
          }
        }
      }
    ],
    "scaling": {
      "maxInstanceCount": 1
    }
  }
}
And deploy it using:
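Something like the following works (a sketch, reusing the variables above and assuming gcloud credentials for the Authorization header): evaluate the Jsonnet with its external variables and POST the result to services.create.

jsonnet \
  --ext-str project=${PROJECT} \
  --ext-str region=${REGION} \
  --ext-str service=${SERVICE} \
  --ext-str image=${IMAGE} \
  cloudrun.jsonnet \
| curl \
  --request POST \
  --header "Authorization: Bearer $(gcloud auth print-access-token)" \
  --header "Content-Type: application/json" \
  --data @- \
  "${ENDPOINT}/${PARENT}/services?serviceId=${SERVICE}"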
Securing gRPC services using Tailscale
This is so useful that it’s worth its own post.
I write many gRPC services. As these generally run securely, it’s best to test them that way too, but even with e.g. Let’s Encrypt it can be challenging to generate appropriate TLS certs.
Tailscale makes this trivial.
Assuming there’s a gRPC service running on localhost:50051, we want to avoid -plaintext:
PORT="50051"
grpcurl \
  -plaintext 0.0.0.0:${PORT} \
  list
NOTE I’m using list and assuming your service has reflection enabled but you can, of course, use the relevant methods directly.
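One way to drop -plaintext with Tailscale (a sketch; the tailnet hostname below is hypothetical and HTTPS certificates must be enabled for the tailnet) is to mint a cert for the machine’s tailnet name, configure the gRPC server with it, and connect by that name:

# Hypothetical tailnet name of the machine running the service
HOST="machine.tailnet-xxxx.ts.net"
PORT="50051"

# Mint a TLS cert|key for the machine's tailnet name
tailscale cert ${HOST}

# Configure the gRPC server with ${HOST}.crt and ${HOST}.key, then:
grpcurl ${HOST}:${PORT} list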
Google Cloud Translation w/ gRPC 3 ways
General
You’ll need a Google Cloud project with Cloud Translation (translate.googleapis.com) enabled and a Service Account (and key) with suitable permissions in order to run the following.
BILLING="..." # Your Billing ID (gcloud billing accounts list)
PROJECT="..." # Your Project ID
ACCOUNT="tester"
EMAIL="${ACCOUNT}@${PROJECT}.iam.gserviceaccount.com"
ROLES=(
"roles/cloudtranslate.user"
"roles/serviceusage.serviceUsageConsumer"
)
# Create Project
gcloud projects create ${PROJECT}
# Associate Project with your Billing Account
gcloud billing accounts link ${PROJECT} \
--billing-account=${BILLING}
# Enable Cloud Translation
gcloud services enable translate.googleapis.com \
--project=${PROJECT}
# Create Service Account
gcloud iam service-accounts create ${ACCOUNT} \
--project=${PROJECT}
# Create Service Account Key
gcloud iam service-accounts keys create ${PWD}/${ACCOUNT}.json \
--iam-account=${EMAIL} \
--project=${PROJECT}
# Update Project IAM permissions
for ROLE in "${ROLES[@]}"
do
  gcloud projects add-iam-policy-binding ${PROJECT} \
  --member=serviceAccount:${EMAIL} \
  --role=${ROLE}
done
For the code, you’ll need to install protoc and preferably have it in your path.
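As a taste of one of the “3 ways”, grpcurl can call the v3 TranslateText method directly from the googleapis protos; a sketch (using an access token for brevity rather than the Service Account key, and reusing ${PROJECT} from above):

ENDPOINT="translate.googleapis.com:443"
ROOT="/path/to/googleapis" # https://github.com/googleapis/googleapis
PROTO="${ROOT}/google/cloud/translate/v3/translation_service.proto"
TOKEN=$(gcloud auth print-access-token)

PARENT="projects/${PROJECT}/locations/global"

grpcurl \
  --import-path=${ROOT} \
  --proto=${PROTO} \
  -H "Authorization: Bearer ${TOKEN}" \
  -d "{\"parent\": \"${PARENT}\", \"contents\": [\"Hello, world!\"], \"mime_type\": \"text/plain\", \"target_language_code\": \"fr\"}" \
  ${ENDPOINT} google.cloud.translation.v3.TranslationService/TranslateText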
Prometheus Protobufs and Native Histograms
I responded to the question “Prometheus metric protocol buffer in gRPC” on Stack Overflow; it piqued my curiosity and got me yak shaving.
Prometheus used to support two exposition formats, including Protocol Buffers, then dropped Protocol Buffers and has since re-added them (see Protobuf format). The Protobuf format has returned in support of the experimental Native Histograms feature.
I’m interested in adding Native Histogram support to Ackal so thought I’d learn more about this metric.
Gnarly Protocol Buffers compilation
This Stackoverflow question piqued my interest:
retry policy configuration for grpc not working
Service Config in gRPC is new to me but my initial suspicion (albeit incorrect) was that the JSON types were wrong.
I decided to try using the Protocol Buffer source (service_config.proto) to verify the JSON.
To do so, I needed to compile the source… it was gnarly.
There are two repos used:
The service_config.proto includes an option for java_package but no go_package.
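Roughly what’s involved (a sketch; the go_package mapping below is a placeholder module path): clone both repos, put them on the proto path, and supply the missing go_package on the command line:

git clone https://github.com/grpc/grpc-proto.git
git clone https://github.com/googleapis/googleapis.git

# service_config.proto has no go_package option, so supply a mapping (placeholder path)
protoc \
  --proto_path=grpc-proto \
  --proto_path=googleapis \
  --go_out=. \
  --go_opt=Mgrpc/service_config/service_config.proto=example.com/serviceconfig \
  grpc-proto/grpc/service_config/service_config.proto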
Listing Cloud Logging log-based metrics using gRPC
Referring to Accessing Google Services using gRPC, I wanted to query a project’s Cloud Logging for log-based metrics using gRPC.
In summary:
ENDPOINT="logging.googleapis.com:443"
ROOT="/path/to/googleapis" # https://github.com/googleapis/googleapis
PACKAGE="google/logging/v2"
# NB Not logging.proto
PROTO="${ROOT}/${PACKAGE}/logging_metrics.proto"
TOKEN=$(gcloud auth print-access-token)
PROJECT="..."
PACKAGE="google.logging.v2"
SERVICE="MetricsServiceV2"
METHOD="${PACKAGE}.${SERVICE}/ListLogMetrics"
# ListLogMetricsRequest fields
PARENT="projects/${PROJECT}"
grpcurl \
  --import-path=${ROOT} \
  --proto=${PROTO} \
  -H "Authorization: Bearer ${TOKEN}" \
  -d "{\"parent\": \"${PARENT}\"}" \
  ${ENDPOINT} ${METHOD}
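grpcurl can also describe the method from the same proto source, which is a handy way to check its signature (a sketch; no address is needed when describing local protos):

grpcurl \
  --import-path=${ROOT} \
  --proto=${PROTO} \
  describe google.logging.v2.MetricsServiceV2.ListLogMetrics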
From APIs Explorer, Cloud Logging API v2, instead of the REST reference, browse the gRPC reference, specifically the package google.logging.v2, which includes MetricsServiceV2. We’re interested in the ListLogMetrics method which (unfortunately) isn’t directly hyperlinkable but is defined to be:
Azure Container Apps
The majority of Ackal’s components are deployed to Google Cloud. However, by its nature, Ackal benefits from deployments that span cloud platforms. I’ve deployed Ackal’s gRPC health checks to Fly, and managed Kubernetes services on Linode and Vultr.
Today, I decided to revisit¹ Azure. Ackal uses Azure (Active Directory) for one of its OAuth providers. This time, I wanted to deploy a containerized gRPC service. Azure provides several container-oriented services. I decided to use Azure Container Apps and, in hindsight, find it analogous to Google Cloud Run.
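The deployment was roughly as follows (a sketch; the names, location and --transport flag are my assumptions rather than the exact commands used):

IMAGE="..." # Container image of the gRPC service
GROUP="ackal-rg"
ENVIRONMENT="ackal-env"
APP="grpc-healthcheck"
LOCATION="westus2"

# May require: az extension add --name containerapp
az group create \
  --name ${GROUP} \
  --location ${LOCATION}

az containerapp env create \
  --name ${ENVIRONMENT} \
  --resource-group ${GROUP} \
  --location ${LOCATION}

# gRPC needs end-to-end HTTP/2, hence the transport setting
az containerapp create \
  --name ${APP} \
  --resource-group ${GROUP} \
  --environment ${ENVIRONMENT} \
  --image ${IMAGE} \
  --ingress external \
  --target-port 50051 \
  --transport http2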
Access Google Services using gRPC
Google publishes interface definitions of Google APIs (services) that support REST and gRPC in a repo called Google APIs. Google’s SDKs use gRPC to access these services but how can you do this using e.g. gRPCurl?
I wanted to debug Cloud Profiler and its agent makes UpdateProfile RPCs to cloudprofiler.googleapis.com. Cloud Profiler is a more challenging service to debug because (a) it’s publicly “write-only”; and (b) it has complex messages. UpdateProfile sends UpdateProfileRequest messages that include Profile messages that include profile_bytes, which are gzip-compressed, serialized protos of pprof’s Profile.
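For a flavor of what such a call looks like with gRPCurl, CreateProfile is easier to poke at than UpdateProfile since it needs no hand-crafted profile_bytes; a sketch (the field values are illustrative and the call long-polls, so expect it to block):

ENDPOINT="cloudprofiler.googleapis.com:443"
ROOT="/path/to/googleapis" # https://github.com/googleapis/googleapis
PROTO="${ROOT}/google/devtools/cloudprofiler/v2/profiler.proto"
TOKEN=$(gcloud auth print-access-token)
PROJECT="..."

grpcurl \
  --import-path=${ROOT} \
  --proto=${PROTO} \
  -H "Authorization: Bearer ${TOKEN}" \
  -d "{\"parent\": \"projects/${PROJECT}\", \"deployment\": {\"project_id\": \"${PROJECT}\", \"target\": \"test\"}, \"profile_type\": [\"CPU\"]}" \
  ${ENDPOINT} google.devtools.cloudprofiler.v2.ProfilerService/CreateProfile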
Secure (TLS) gRPC services with LKE
NOTE cert-manager is a better solution to what follows.
I wrote about deploying Secure (TLS) gRPC services with Vultr Kubernetes Engine (VKE). This week, I’ve reproduced this deployment using Linode Kubernetes Engine (LKE).
Thanks to the consistency provided by Kubernetes, the Kubernetes programming is almost identical. The main differences are between the CLIs provided by these platforms. Both are good. They’re just different.
I’m going to include the linode-cli commands I’m using in this post as I found it slightly quirkier.
Secure (TLS) gRPC services with VKE
NOTE cert-manager is a better solution to what follows.
I’ve a need to deploy a Vultr Kubernetes Engine (VKE) cluster on a daily basis (create and delete within a few hours) and expose (securely|TLS) a gRPC service.
I have an existing solution Automatic Certs w/ Golang gRPC service on Compute Engine that combines a gRPC Healthchecking and an ACME service and decided to reuse this.
In order for it to work, we need:
Using Google's Public Certificate Authority with Golang autocert
Last year, I wrote about using Automatic Certs w/ Golang gRPC service on Compute Engine. That solution uses ACME with (the wonderful) Let’s Encrypt. Google is offering a private preview of Automate Public Certificates Lifecycle Management via RFC 8555 (ACME) and, because I’m using Google Cloud Platform extensively to build a “thing” and I think it would be useful to have a backup to Let’s Encrypt, I thought I’d give the solution a try. You’ll need to sign up for the private preview for what follows to work.
Automatic Certs w/ Golang gRPC service on Compute Engine
I needed to deploy a healthcheck-enabled, TLS-enabled gRPC service. Fortunately, most (all?) of the SDKs include an implementation of the gRPC health checking protocol, e.g. Golang has grpc-go/health.
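For example, with the health service registered (and assuming server reflection is enabled so grpcurl can resolve it), a sketch:

HOST="localhost" # Or the service's (TLS) hostname
PORT="50051"

# Check the overall serving status; pass a service name to check a specific one
grpcurl \
  -d '{"service": ""}' \
  ${HOST}:${PORT} \
  grpc.health.v1.Health/Check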
I learned in my travels that:
- DigitalOcean [App] platform does not (link) work with TLS-based gRPC apps.
- Fly has a regression (link) that breaks gRPC
So, I resorted to Google Cloud Platform (GCP). Although Cloud Run would be well-suited to running the gRPC app, it uses a proxy|sidecar to provision a cert for the app; I wanted to be able to (easily) use a custom domain and give myself a somewhat general-purpose solution.
Renewing Firebase Authentication ID tokens with gRPC
I’ve written before about a project in which I’m using Firebase Authentication in combination with Google Cloud Endpoints and a gRPC service running on Cloud Run:
- Firebase Authentication, Cloud Endpoints and gRPC (1of2)
- Firebase Authentication, Cloud Endpoints and gRPC (2of2)
This works well with one caveat: the ID tokens (JWTs) minted by Firebase Authentication have a 3,600-second (one-hour) lifetime.
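One way to renew an ID token (not necessarily the approach taken in this post) is the Firebase Auth REST token endpoint, which exchanges the long-lived refresh token for a fresh ID token; a sketch:

API_KEY="..." # The Firebase project's Web API key
REFRESH_TOKEN="..." # Returned alongside the ID token at sign-in

curl \
  --request POST \
  --header "Content-Type: application/x-www-form-urlencoded" \
  --data "grant_type=refresh_token&refresh_token=${REFRESH_TOKEN}" \
  "https://securetoken.googleapis.com/v1/token?key=${API_KEY}"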
The user flow in my app is that whenever the user invokes the app’s CLI:
gRPC Interceptors and in-memory gRPC connections
For… reasons, I wanted to pre-filter gRPC requests to check for authorization. Authorization is implemented as a ‘micro-service’ and I wanted the authorization server to run in the same process as the gRPC client.
TL;DR:
- Shiju’s “Writing gRPC Interceptors in Go” is great
- This Stack Overflow answer, ostensibly about writing unit tests for gRPC, got me an in-process server
What follows stands on these folks’ shoulders…
A key motivator for me to write blog posts is that it helps me ensure that I understand things. Writing this post, I realized I’d not researched gRPC interceptors and, as luck would have it, I found some interesting content, not on grpc.io but in the grpc-ecosystem repo, specifically Go gRPC middleware. But I refer again to Shiju’s clear and helpful “Writing gRPC Interceptors in Go”.
Firebase Authentication, Cloud Endpoints and gRPC (2of2)
Earlier this week, I wrote about using Firebase Authentication, Cloud Endpoints and gRPC (1of2). Since then, I learned some more and added a gRPC interceptor to implement basic authorization for the service.
ESPv2 --allow-unauthenticated
The Cloud Endpoints (ESPv2) proxy must be run with --allow-unauthenticated on Cloud Run to ensure that requests make it to the proxy, where each request is authenticated, and only authenticated requests make it on to the backend service. Thanks to Google’s Teju Nareddy!
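For example, deploying the proxy itself (a sketch; the service name, region and project are placeholders):

PROXY="espv2-proxy" # Placeholder name for the ESPv2 Cloud Run service
REGION="..."
PROJECT="..."

gcloud run deploy ${PROXY} \
  --image=gcr.io/endpoints-release/endpoints-runtime-serverless:2 \
  --allow-unauthenticated \
  --platform=managed \
  --region=${REGION} \
  --project=${PROJECT}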
Firebase Authentication, Cloud Endpoints and gRPC (1of2)
I’m building a service that requires user authentication. The primary endpoint is a gRPC-based service. I would like to consider using certificate-based auth but this feels… challenging. Instead, I have been aware of, but never used, Firebase Authentication and was interested to see that Cloud Endpoints includes Firebase Authentication as one of its supported auth mechanisms. Curiosity piqued, I confirmed that gRPC supports Google token-based authentication.
The following is a summary of what I did but I’ll leave the extensive documentation to Google, (Google’s) Firebase and gRPC, all of which, in this case, provide really good explanations.
Cloud Endpoints combine OpenAPI and gRPC... or not!
See:
- Multiplexing gRPC and HTTP endpoints with Cloud Run
- gRPC, Cloud Run & Endpoints
- ESPv2: Configure Cloud Endpoints to proxy traffic to a Cloud Run multiplexed (gRPC|HTTP) service
Challenges:
- Cloud Run permits a single port
- Cloud Run services publishing e.g. gRPC and Prometheus must multiplex transports
- Cloud Run services publishing multiplexed transports are challenging to expose using Cloud Endpoints
Hypothesis #1: Multiplexed transports work with Cloud Run
Multiplexing gRPC and HTTP (Prometheus) endpoints with Cloud Run
Google Cloud Run is useful but each service is limited to exposing a single port. This caused me problems with a gRPC service that also serves (non-gRPC) Prometheus metrics because, customarily, you would serve gRPC on one port and the Prometheus metrics on another.
Fortunately, cmux provides a solution by providing a mechanism that multiplexes both services (gRPC and HTTP) on a single port!
TL;DR See the cmux Limitations and use:
grpcl := m.MatchWithWriters(cmux.HTTP2MatchHeaderFieldSendSettings("content-type", "application/grpc"))
Extending the example from the cmux repo:
Fly.io
I spent some time over the weekend understanding Fly.io. It’s always fascinating to me how many smart people are building really neat solutions. Fly.io is subtly different to other platforms that I use (Kubernetes, GCP, DO, Linode) and I’ve found the Fly.io team to be highly responsive and helpful to my noob questions.
One of the team’s posts, Docker without Docker, surfaced in my Feedly feed (via Hacker News) and piqued my interest.
Dapr
It’s a good name; I read it as “dapper” but I frequently type “darp” :-(
I was interested to read that Dapr is now v1.0 and decided to check it out. I was initially confused between Dapr and service mesh functionality. But, having used Dapr, it appears to be more focused on aiding the development of (cloud-native) (distributed) apps by providing developers with abstractions for e.g. service discovery, eventing and observability, whereas service meshes feel (!) more oriented towards simplifying the deployment of existing apps. Both use the concept of proxies, deployed alongside app components (as sidecars on Kubernetes), to provide their functionality to apps.
Remotely invoking WASM functions using gRPC and waPC
Following on from waPC & Protobufs, I can now remotely invoke (arbitrary) WASM functions:
Client:
The logging isn’t perfectly clear but the client gets a (previously added) WASM binary from the server (using the SHA-256 of the WASM binary as a unique identifier). The result includes metadata containing a protobuf descriptor of the WASM binary’s functions. The descriptor defines gRPC services (that represent the WASM functions) with input (parameters) and output (results) messages.
Rust implementation of Crate Transparency using Google Trillian
I’ve been hacking on a Rust-based transparency application for Google Trillian. As appears to be my fixation, this personality is for another package manager: this time, Rust’s crates, found on crates.io, Rust’s package registry. I discussed this project earlier this month in Rust Crate Transparency && Rust SDK for Google Trillian and an earlier approach for Python’s packages with pypi-transparency.
This time, of course, I’m using Rust and, by way of a first for me, using it for the gRPC server implementation (aka “personality”). I’ve been lazy, thanks to the excellent gRPCurl, and have been using it by way of a client. Because I’m more familiar with Golang, and because I’ve written (most) other Trillian personalities in Golang, I resorted to quickly implementing Crate Transparency in Golang too, in order to uncover bugs in the Rust implementation. I’ll write a follow-up post on the complexity I seem to struggle with when using protobufs and gRPC [in Golang].
Rust Crate Transparency && Rust SDK for Google Trillian
I’m noodling the utility of a Transparency solution for Rust crates. When developers push crates to Cargo, a bunch of metadata is associated with the crate, e.g. protobuf. As with Golang Modules, Python packages on PyPi etc., there appears to be utility in making tamperproof recordings of these publications. Then, other developers may confirm that a crate pulled from crates.io is highly unlikely to have been changed.
On Linux, Cargo stores downloaded crates under ${HOME}/.cargo/registry. In the case of the latest version (2.12.0) of protobuf, on my machine, I have:
Google Trillian on Cloud Run
I’ve written previously (Google Trillian for Noobs) about Google’s interesting project Trillian and about some of the “personalities” (e.g. PyPi Transparency) that I’ve built using it.
Having gone slightly cra-cra on Cloud Run and gRPC this week with Golang gRPC Cloud Run and gRPC, Cloud Run & Endpoints, I thought it’d be fun to deploy Trillian and a personality to Cloud Run.
It mostly (!) works :-)
At the end of the post, I’ve summarized creating a Cloud SQL instance to host the Trillian data(base).
gRPC, Cloud Run & Endpoints
<3 Google but there’s quite often an assumption that we’re all sitting around the engineering table and, of course, we’re not.
Cloud Endpoints is a powerful offering but – IMO – it’s super confusing to understand and complex to deploy.
If you’re familiar with the motivations behind service meshes (e.g. Istio), Cloud Endpoints fits in a similar niche (“neesh” or “nitch”?). The underlying ambition is that developers can take existing code and, by adding a proxy (or sidecar), gain general-purpose abstractions: security, logging etc.
Golang gRPC Cloud Run
Update: 2020-03-24: Since writing this post, I’ve contributed Golang and Rust samples to Google’s project. I recommend you start there.
Google explained how to run gRPC servers with Cloud Run. The examples are good but cover only Python and Node.js:
Missing Golang…. until now ;-)
I had problems with 1.14 and so I’m using 1.13.
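For reference, deploying such a server looks something like this (a sketch; the names are placeholders and --use-http2 is only required for streaming RPCs):

SERVICE="..."
IMAGE="..."
REGION="..."
PROJECT="..."

# --use-http2 enables end-to-end HTTP/2 (h2c), needed for streaming RPCs
gcloud run deploy ${SERVICE} \
  --image=${IMAGE} \
  --use-http2 \
  --allow-unauthenticated \
  --platform=managed \
  --region=${REGION} \
  --project=${PROJECT}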
Project structure
I’ll tidy up my repo but the code may be found:
Cloud Functions Simple(st) HTTP Multi-host Proxy
Tweaked yesterday’s solution so that it randomly selects one of the several hosts with which it’s configured.
package proxy

import (
    "log"
    "math/rand"
    "net/http"
    "net/url"
    "os"
    "strings"
    "time"
)

// robin configures Handler to forward each request to one of several hosts,
// chosen at random. Endpoint, Handler and reverseproxy are as defined in
// yesterday's (single-host) proxy.
func robin() {
    hostsList := os.Getenv("PROXY_HOST")
    if hostsList == "" {
        log.Fatal("'PROXY_HOST' environment variable should contain comma-separated list of hosts")
    }
    // Comma-separated list of hosts
    hosts := strings.Split(hostsList, ",")
    urls := make([]*url.URL, len(hosts))
    for i, host := range hosts {
        origin := Endpoint{
            Host: host,
            Port: os.Getenv("PROXY_PORT"),
        }
        url, err := origin.URL()
        if err != nil {
            log.Fatal(err)
        }
        urls[i] = url
    }
    s := rand.NewSource(time.Now().UnixNano())
    q := rand.New(s)
    Handler = func(w http.ResponseWriter, r *http.Request) {
        // Pick one of the URLs at random
        url := urls[q.Int31n(int32(len(urls)))]
        log.Printf("[Handler] Forwarding: %s", url.String())
        // Forward to it
        reverseproxy(url, w, r)
    }
}
This requires a minor tweak to the deployment to escape the commas within the PROXY_HOST string to disambiguate these for gcloud:
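For example (a sketch; the function name, runtime and hosts are placeholders), gcloud’s alternate-delimiter syntax keeps the commas inside PROXY_HOST from being split into separate variables:

# '^:^' switches the list separator to ':' so the commas within PROXY_HOST survive
gcloud functions deploy proxy \
  --trigger-http \
  --runtime=go113 \
  --set-env-vars="^:^PROXY_HOST=host-1.example.com,host-2.example.com:PROXY_PORT=443"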
Cloud Functions Simple(st) HTTP Proxy
I’m investigating the use of LetsEncrypt for gRPC services. I found this straightforward post by Scott Devoid and am going to try this approach.
Before I can do that, I need to be able to publish services (make them Internet-accessible) and would like to try to continue to use GCP for free.
Some time ago, I wrote about using the excellent Microk8s on GCP. Using an f1-micro, I’m hoping (!) to stay within the Compute Engine free tier. I’ll also try to be diligent and delete the instance when it’s not needed. This gives me a runtime platform and I can expose services on the instance’s (Node)Ports but I’d prefer not to be billed for a simple proxy.