Tag: Buf
gRPC-Web w/ FauxRPC and Rust
After recently discovering FauxRPC, I was sufficiently impressed that I decided to use it to test Ackal’s gRPC services using rust.
FauxRPC provides multi-protocol support and so, after successfully implementing the faux gRPC client tests, I was compelled to try gRPC-Web too, for no immediate benefit other than that it's there, it's free and it's interesting. As an aside, the faux REST client tests worked without issue using Reqwest.
Unfortunately, my optimism hit a wall with gRPC-Web and what follows is a summary of my unresolved issue.
Tag: FauxRPC
gRPC-Web w/ FauxRPC and Rust
After recently discovering FauxRPC, I was sufficiently impressed that I decided to use it to test Ackal’s gRPC services using rust.
FauxRPC provides multi-protocol support and so, after successfully implementing the faux gRPC client tests, I was compelled to try gRPC-Web too, for no immediate benefit other than that it's there, it's free and it's interesting. As an aside, the faux REST client tests worked without issue using Reqwest.
Unfortunately, my optimism hit a wall with gRPC-Web and what follows is a summary of my unresolved issue.
FauxRPC using gRPCurl, Golang and rust
I read FauxRPC + Testcontainers on Hacker News and was intrigued. I spent a little more time "evaluating" this than I'd planned because I'm forcing myself to use rust as much as possible and my ignorance (see below) caused me some challenges.
The technology is interesting and works well. The experience helped me explore Testcontainers too which I’d heard about but not explored until this week.
For my future self:
Tool | Description |
---|---|
FauxRPC | A general-purpose tool (built using Buf's Connect) that includes registry and stub (gRPC) services that can be (programmatically) configured (using a Protobuf descriptor) with stubs (example method responses) to help test gRPC implementations. |
Testcontainers | Write code (e.g. rust) to create and interact with (test) services (running in [Docker] containers). |
Connect | (More than a) gRPC implementation, used by FauxRPC. |
gRPCurl | A command-line gRPC tool. |
I started following along with FauxRPC's Testcontainers example but, being unfamiliar with Connect, I didn't know about its Eliza service. The service is available at demo.connectrpc.com:443 and is described by eliza.proto as part of examples-go. Had I realized this sooner, I would have used this example rather than the Health Checking protocol.
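For reference, here's a minimal Go sketch of the kind of check involved: a gRPC Health Checking client pointed at a FauxRPC stub. The address is an assumption (adjust to whatever port your FauxRPC container exposes); an empty service name asks for overall server health.

```go
package main

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	healthpb "google.golang.org/grpc/health/grpc_health_v1"
)

func main() {
	// Assumption: a FauxRPC container is listening locally on this (mapped) port
	conn, err := grpc.Dial("localhost:6660", grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// gRPC Health Checking protocol: an empty Service means "overall server health"
	resp, err := healthpb.NewHealthClient(conn).Check(ctx, &healthpb.HealthCheckRequest{Service: ""})
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("status: %s", resp.GetStatus())
}
```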
Tag: Rust
gRPC-Web w/ FauxRPC and Rust
After recently discovering FauxRPC, I was sufficiently impressed that I decided to use it to test Ackal’s gRPC services using rust.
FauxRPC provides multi-protocol support and so, after successfully implementing the faux gRPC client tests, I was compelled to try gRPC-Web too, for no immediate benefit other than that it's there, it's free and it's interesting. As an aside, the faux REST client tests worked without issue using Reqwest.
Unfortunately, my optimism hit a wall with gRPC-Web and what follows is a summary of my unresolved issue.
FauxRPC using gRPCurl, Golang and rust
I read FauxRPC + Testcontainers on Hacker News and was intrigued. I spent a little more time "evaluating" this than I'd planned because I'm forcing myself to use rust as much as possible and my ignorance (see below) caused me some challenges.
The technology is interesting and works well. The experience helped me explore Testcontainers too which I’d heard about but not explored until this week.
For my future self:
Tool | Description |
---|---|
FauxRPC | A general-purpose tool (built using Buf's Connect) that includes registry and stub (gRPC) services that can be (programmatically) configured (using a Protobuf descriptor) with stubs (example method responses) to help test gRPC implementations. |
Testcontainers | Write code (e.g. rust) to create and interact with (test) services (running in [Docker] containers). |
Connect | (More than a) gRPC implementation, used by FauxRPC. |
gRPCurl | A command-line gRPC tool. |
I started following along with FauxRPC's Testcontainers example but, being unfamiliar with Connect, I didn't know about its Eliza service. The service is available at demo.connectrpc.com:443 and is described by eliza.proto as part of examples-go. Had I realized this sooner, I would have used this example rather than the Health Checking protocol.
XML-RPC in Rust and Python
A lazy Sunday afternoon and my interest was piqued by XML-RPC
Client
A very basic XML-RPC client wrapped in a Cloud Functions function:
main.py:
import functions_framework
import os
import xmlrpc.client

endpoint = os.getenv("ENDPOINT")
proxy = xmlrpc.client.ServerProxy(endpoint)


@functions_framework.http
def add(request):
    print(request)
    rqst = request.get_json(silent=True)
    resp = proxy.add({
        "x": {
            "real": rqst["x"]["real"],
            "imag": rqst["x"]["imag"],
        },
        "y": {
            "real": rqst["y"]["real"],
            "imag": rqst["y"]["imag"],
        },
    })
    return resp
requirements.txt:
functions-framework==3.*
Run it:
python3 -m venv venv
source venv/bin/activate
python3 -m pip install --requirement requirements.txt
export ENDPOINT="..."
functions-framework --target=add
Server
I'm forcing myself to go Rust first and there's an (old) xml-rpc crate.
Using Rust to generate Kubernetes CRD
For the first time, I chose Rust to solve a problem. Until this, I’ve been trying to use Rust to learn the language and to rewrite existing code. But, this problem led me to Rust because my other tools wouldn’t cut it.
The question was how to represent oneof fields in Kubernetes Custom Resource Definitions (CRDs).
CRDs use OpenAPI schema and the YAML that results can be challenging to grok.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: deploymentconfigs.example.com
spec:
  group: example.com
  names:
    categories: []
    kind: DeploymentConfig
    plural: deploymentconfigs
    shortNames: []
    singular: deploymentconfig
  scope: Namespaced
  versions:
    - additionalPrinterColumns: []
      name: v1alpha1
      schema:
        openAPIV3Schema:
          description: An example schema
          properties:
            spec:
              properties:
                deployment_strategy:
                  oneOf:
                    - required:
                        - rolling_update
                    - required:
                        - recreate
                  properties:
                    recreate:
                      properties:
                        something:
                          format: uint16
                          minimum: 0.0
                          type: integer
                      required:
                        - something
                      type: object
                    rolling_update:
                      properties:
                        max_surge:
                          format: uint16
                          minimum: 0.0
                          type: integer
                        max_unavailable:
                          format: uint16
                          minimum: 0.0
                          type: integer
                      required:
                        - max_surge
                        - max_unavailable
                      type: object
                  type: object
              required:
                - deployment_strategy
              type: object
          required:
            - spec
          title: DeploymentConfig
          type: object
      served: true
      storage: true
      subresources: {}
I’ve developed several Kubernetes Operators using the Operator SDK in Go (which builds upon Kubebuilder).
Google Cloud Translation w/ gRPC 3 ways
General
You’ll need a Google Cloud project with Cloud Translation (translate.googleapis.com
) enabled and a Service Account (and key) with suitable permissions in order to run the following.
BILLING="..." # Your Billing ID (gcloud billing accounts list)
PROJECT="..." # Your Project ID
ACCOUNT="tester"
EMAIL="${ACCOUNT}@${PROJECT}.iam.gserviceaccount.com"
ROLES=(
"roles/cloudtranslate.user"
"roles/serviceusage.serviceUsageConsumer"
)
# Create Project
gcloud projects create ${PROJECT}
# Associate Project with your Billing Account
gcloud billing accounts link ${PROJECT} \
--billing-account=${BILLING}
# Enable Cloud Translation
gcloud services enable translate.googleapis.com \
--project=${PROJECT}
# Create Service Account
gcloud iam service-accounts create ${ACCOUNT} \
--project=${PROJECT}
# Create Service Account Key
gcloud iam service-accounts keys create ${PWD}/${ACCOUNT}.json \
--iam-account=${EMAIL} \
--project=${PROJECT}
# Update Project IAM permissions
for ROLE in "${ROLES[@]}"
do
gcloud projects add-iam-policy-binding ${PROJECT} \
--member=serviceAccount:${EMAIL} \
--role=${ROLE}
done
For the code, you’ll need to install protoc
and preferably have it in your path.
Google Cloud Events protobufs and SDKs
I’ve written before about Ackal’s use of Firestore and subscribing to Firestore document CRUD events:
- Routing Firestore events to GKE with Eventarc
- Cloud Firestore Triggers in Golang using Firestore triggers
I find Google’s Eventarc documentation to be confusing and, in typical Google fashion, even though open-sourced, you often need to do some legwork to find relevant sources, viz:
- Google’s Protobufs for Eventarc (using cloudevents)
google-cloudevents
1 - Convenience (since you can generate these using
protoc
) language-specific types generated from the above e.g.google-cloudevents-go
;google-cloudevents-python
etc.
1 – IIUC EventArc is the Google service. It carries Google Events that are CloudEvents. These are defined by protocol buffers schemas.
Prost! Tonic w/ a dash of JSON
I naively (!) began exploring JSON marshaling of Protobufs in rust. Other protobuf language SDKs include JSON marshaling making the process straightforward. I was to learn that, in rust, it’s not so simple. Unfortunately, for me, this continues to discourage my further use of rust (rust is just hard).
My goal was to marshal an arbitrary protocol buffer message that included a oneof. I was unable to JSON marshal the Rust generated by tonic for such a message.
Navigating Koyeb's API with Rust
I wrote about Navigating Koyeb’s Golang SDK. That client is generated using the OpenAPI Generator project using Koyeb’s Swagger (now OpenAPI) REST API spec.
This post shows how to generate a Rust SDK using the Generator and provides a very basic example of using the SDK.
The Generator will create a Rust library project:
VERS="v7.2.0"
PACKAGE_NAME="koyeb-api-client-rs"
PACKAGE_VERS="1.0.0"
podman run \
--interactive --tty --rm \
--volume=${PWD}:/local \
docker.io/openapitools/openapi-generator-cli:${VERS} \
generate \
-g=rust \
-i=https://developer.koyeb.com/public.swagger.json \
-o=/local/${PACKAGE_NAME} \
--additional-properties=\
packageName=${PACKAGE_NAME},\
packageVersion=${PACKAGE_VERS}
This will create the project in ${PWD}/${PACKAGE_NAME} including the documentation at:
Kubernetes Operators
Ackal uses a Kubernetes Operator to orchestrate the lifecycle of its health checks. Ackal's Operator is written in Go using kubebuilder.
Yesterday, my interest was piqued by a MetalBear blog post Writing a Kubernetes Operator [in Rust]. I spent some time reimplementing one of Ackal's CRDs (Check) using kube-rs and not only refreshed my Rust knowledge but learned a bunch more about Kubernetes and Operators.
While rummaging around the Kubernetes documentation, I discovered flant's Shell-operator and spent some time today exploring its potential.
pest: parsing in Rust
A Microsoft engineer introduced me to pest as a way to implement service filtering in a ZeroConf plugin that I'm prototyping for Akri. It's been fun to learn but I worry that, because I won't use it frequently, I'm going to quickly forget what I've done. So, here are my notes.
Here’s the problem, I’d like to be able to provide users of the ZeroConf plugin with a string-based filter that permits them to filter the services discovered when the Akri agent browses a network.
Akri
For the past couple of weeks, I’ve been playing around with Akri, a Microsoft (DeisLabs) project for building a connected edge with Kubernetes. Kubernetes, IoT, Rust (and Golang) make this all compelling to me.
Initially, I deployed an Akri End-to-End to MicroK8s on Google Compute Engine (link) and Digital Ocean (link). But I was interested to create my own example and so have proposed a very (!) simple HTTP-based protocol.
This blog summarizes my thoughts about Akri and an explanation of the HTTP protocol implementation in the hope that this helps others.
Deploying a Rust HTTP server to DigitalOcean App Platform
DigitalOcean launched an App Platform with many Supported Languages and Frameworks. I used Golang first, then wondered how to use non-natively-supported languages, i.e. Rust.
The good news is that Docker is a supported framework and so, you can run pretty much anything.
Repo: https://github.com/DazWilkin/do-apps-rust
Rust
I’m a Rust noob. I’m always receptive to feedback on improvements to the code. I looked to mirror the Golang example. I’m using rocket and rocket-prometheus for the first time:
You will want to install rust nightly (as Rocket has a dependency that requires it) and then you can override the default toolchain for the current project using:
Minimizing WASM binaries
I’ve spent time recently playing around with WebAssembly (WASM) and waPC. Rust and WASM were born at Mozilla and there’s a natural affinity with writing WASM binaries in Rust. In the WASM examples I’ve been using for WASM Transparency, waPC and MsgPack and waPC and Protobufs.
I’ve created 3 WASM binaries: complex.wasm
, simplex.wasm
and fabcar.wasm
and each is about 2.5MB when:
cargo build --target=wasm32-unknown-unknown --release
The Rust and WebAssembly book has an excellent section titled Shrinking .wasm Code Size. So, let's see what help that provides.
WASM Transparency
I’ve been playing around with a proof-of-concept combining WASM and Trillian. The hypothesis was to explore using WASM as a form of chaincode with Trillian. The project works but it’s far from being a chaincode-like solution.
Let’s start with a couple of (trivial) examples and then I’ll explain what’s going on and how it’s implemented.
2020/08/14 18:42:17 [main:loop:dynamic-invoke] Method: mul
2020/08/14 18:42:17 [random:New] Message
2020/08/14 18:42:17 [random:New] Float32
2020/08/14 18:42:17 [random:New] Float32
2020/08/14 18:42:17 [random:New] Message
2020/08/14 18:42:17 [random:New] Float32
2020/08/14 18:42:17 [random:New] Float32
2020/08/14 18:42:17 [Client:Invoke] Metadata: complex.wasm
2020/08/14 18:42:17 [main:loop:dynamic-invoke] Success: result:{real:0.036980484 imag:0.3898267}
After shipping a Rust-sourced WASM solution (complex.wasm) to the WASM transparency server, the client invokes a method mul that's exposed by it, using a dynamically generated request message, and outputs the response. Woo hoo! Yes, an expensive way to multiply complex numbers.
waPC and MsgPack (Rust|Golang)
As my reader will know (Hey Mom!), I’ve been noodling around with WASM and waPC. I’ve been exploring ways to pass structured messages across the host:guest boundary.
Protobufs was my first choice. @KevinHoffman created waPC and waSCC and he explained to me that waSCC uses Message Pack.
It’s slightly surprising to me (still) that technologies like this exist with everyone else seemingly using them and I’ve not heard of them. I don’t expect to know everything but I’m surprised I’ve not stumbled upon msgpack until now.
Envoy WASM filters in Rust
A digression thanks to Sal Rashid who’s exploring WASM filters w/ Envoy.
The documentation is sparse but:
There is a Rust SDK but it’s not documented:
I found two useful posts by Rustaceans who were able to make use of it:
Here’s my simple use of the SDK’s examples.
wasme
curl -sL https://run.solo.io/wasme/install | sh
PATH=${PATH}:${HOME}/.wasme/bin
wasme --version
It may be possible to avoid creating an account on WebAssemblyHub if you’re staying local.
Rust implementation of Crate Transparency using Google Trillian
I’ve been hacking on a Rust-based transparent application for Google Trillian. As appears to be my fixation, this personality is for another package manager. This time, Rust’s Crates often found in crates.io
which is Rust’s Package Registry. I discussed this project earlier this month Rust Crate Transparency && Rust SDK for Google Trillian and and earlier approach for Python’s packages with pypi-transparency.
This time, of course, I’m using Rust. And, by way of a first for me, for the gRPC server implementation (aka “personality”). I’ve been lazy thanks to the excellent gRPCurl and have been using it way of a client. Because I’m more familiar with Golang and because I’ve written (most) other Trillian personalities in Golang, I resorted to quickly implementing Crate Transparency in Golang too in order to uncover bugs with the Rust implementation. I’ll write a follow-up post on the complexity I seem to struggle with when using protobufs and gRPC [in Golang].
Rust Crate Transparency && Rust SDK for Google Trillian
I’m noodling the utility of a Transparency solution for Rust Crates. When developers push crates to Cargo, a bunch of metadata is associated with the crate. E.g. protobuf
. As with Golang Modules, Python packages on PyPi etc., there appears to be utility in making tamperproof recordings of these publications. Then, other developers may confirm that a crate pulled from cates.io is highly unlikely to have been changed.
On Linux, Cargo stores downloaded crates under ${HOME}/.crates/registry
. In the case of the latest version (2.12.0
) of protobuf
, on my machine, I have:
gRPC, Cloud Run & Endpoints
<3 Google but there’s quite often an assumption that we’re all sitting around the engineering table and, of course, we’re not.
Cloud Endpoints is a powerful offering but – IMO – it’s super confusing to understand and complex to deploy.
If you’re familiar with the motivations behind service meshes (e.g. Istio), Cloud Endpoints fits in a similar niche (“neesh” or “nitch”?). The underlying ambition is that, developers can take existing code and by adding a proxy (or sidecar), general-purpose abstractions, security, logging etc. may be added.
PyPi Transparency Client (Rust)
I’ve finally being able to hack my way through to a working Rust gRPC client (for PyPi Transparency).
It’s not very good: poorly structured, hacky etc. but it serves the purpose of giving me a foothold into Rust development so that I can evolve it as I learn the language and its practices.
There are several Rust crates (SDK) for gRPC. There’s no sanctioned SDK for Rust on grpc.io.
I chose stepancheg’s grpc-rust because it’s a pure Rust implementation (not built atop the C implementation).
Tag: Tonic-Web
gRPC-Web w/ FauxRPC and Rust
After recently discovering FauxRPC, I was sufficiently impressed that I decided to use it to test Ackal’s gRPC services using rust.
FauxRPC provides multi-protocol support and so, after successfully implementing the faux gRPC client tests, I was compelled to try gRPC-Web too, for no immediate benefit other than that it's there, it's free and it's interesting. As an aside, the faux REST client tests worked without issue using Reqwest.
Unfortunately, my optimism hit a wall with gRPC-Web and what follows is a summary of my unresolved issue.
Tag: Golang
FauxRPC using gRPCurl, Golang and rust
I read FauxRPC + Testcontainers on Hacker News and was intrigued. I spent a little more time "evaluating" this than I'd planned because I'm forcing myself to use rust as much as possible and my ignorance (see below) caused me some challenges.
The technology is interesting and works well. The experience helped me explore Testcontainers too which I’d heard about but not explored until this week.
For my future self:
Tool | Description |
---|---|
FauxRPC | A general-purpose tool (built using Buf's Connect) that includes registry and stub (gRPC) services that can be (programmatically) configured (using a Protobuf descriptor) with stubs (example method responses) to help test gRPC implementations. |
Testcontainers | Write code (e.g. rust) to create and interact with (test) services (running in [Docker] containers). |
Connect | (More than a) gRPC implementation, used by FauxRPC. |
gRPCurl | A command-line gRPC tool. |
I started following along with FauxRPC's Testcontainers example but, being unfamiliar with Connect, I didn't know about its Eliza service. The service is available at demo.connectrpc.com:443 and is described by eliza.proto as part of examples-go. Had I realized this sooner, I would have used this example rather than the Health Checking protocol.
Using Delve to debug Go containers on Kubernetes
An interesting question on Stack overflow prompted me to understand how to use Visual Studio Code and Delve to remotely debug a Golang app running on Kubernetes (MicroK8s).
The OP is using Gin which was also new to me so the question gave me an opportunity to try out several things.
Sources
A simple healthz
handler:
package main

import (
	"flag"
	"log/slog"
	"net/http"

	"github.com/gin-gonic/gin"
)

var (
	addr = flag.String("addr", "0.0.0.0:8080", "HTTP server endpoint")
)

func healthz(c *gin.Context) {
	c.String(http.StatusOK, "ok")
}

func main() {
	flag.Parse()

	router := gin.Default()
	// handler() (the /fib handler) is defined elsewhere in the post
	router.GET("/fib", handler())
	router.GET("/healthz", healthz)

	slog.Info("Server starting")
	slog.Info("Server error",
		"err", router.Run(*addr),
	)
}
Containerfile:
Google Cloud Translation w/ gRPC 3 ways
General
You’ll need a Google Cloud project with Cloud Translation (translate.googleapis.com
) enabled and a Service Account (and key) with suitable permissions in order to run the following.
BILLING="..." # Your Billing ID (gcloud billing accounts list)
PROJECT="..." # Your Project ID
ACCOUNT="tester"
EMAIL="${ACCOUNT}@${PROJECT}.iam.gserviceaccount.com"
ROLES=(
"roles/cloudtranslate.user"
"roles/serviceusage.serviceUsageConsumer"
)
# Create Project
gcloud projects create ${PROJECT}
# Associate Project with your Billing Account
gcloud billing accounts link ${PROJECT} \
--billing-account=${BILLING}
# Enable Cloud Translation
gcloud services enable translate.googleapis.com \
--project=${PROJECT}
# Create Service Account
gcloud iam service-accounts create ${ACCOUNT} \
--project=${PROJECT}
# Create Service Account Key
gcloud iam service-accounts keys create ${PWD}/${ACCOUNT}.json \
--iam-account=${EMAIL} \
--project=${PROJECT}
# Update Project IAM permissions
for ROLE in "${ROLES[@]}"
do
gcloud projects add-iam-policy-binding ${PROJECT} \
--member=serviceAccount:${EMAIL} \
--role=${ROLE}
done
For the code, you’ll need to install protoc
and preferably have it in your path.
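As a sketch of the "Golang way", here's roughly what a Cloud Translation call looks like using the Cloud Client Library (which speaks gRPC under the hood). It assumes GOOGLE_APPLICATION_CREDENTIALS points at the Service Account key created above, and note that the translatepb import path varies by library version (older releases take the request types from genproto).

```go
package main

import (
	"context"
	"fmt"
	"log"
	"os"

	translate "cloud.google.com/go/translate/apiv3"
	"cloud.google.com/go/translate/apiv3/translatepb"
)

func main() {
	ctx := context.Background()

	// Uses Application Default Credentials (e.g. the Service Account key created above)
	client, err := translate.NewTranslationClient(ctx)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	project := os.Getenv("PROJECT")
	resp, err := client.TranslateText(ctx, &translatepb.TranslateTextRequest{
		Parent:             fmt.Sprintf("projects/%s/locations/global", project),
		SourceLanguageCode: "en",
		TargetLanguageCode: "fr",
		MimeType:           "text/plain",
		Contents:           []string{"Hello, World!"},
	})
	if err != nil {
		log.Fatal(err)
	}
	for _, t := range resp.GetTranslations() {
		fmt.Println(t.GetTranslatedText())
	}
}
```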
Google Cloud Events protobufs and SDKs
I’ve written before about Ackal’s use of Firestore and subscribing to Firestore document CRUD events:
- Routing Firestore events to GKE with Eventarc
- Cloud Firestore Triggers in Golang using Firestore triggers
I find Google’s Eventarc documentation to be confusing and, in typical Google fashion, even though open-sourced, you often need to do some legwork to find relevant sources, viz:
- Google’s Protobufs for Eventarc (using cloudevents)
google-cloudevents
1 - Convenience (since you can generate these using
protoc
) language-specific types generated from the above e.g.google-cloudevents-go
;google-cloudevents-python
etc.
1 – IIUC EventArc is the Google service. It carries Google Events that are CloudEvents. These are defined by protocol buffers schemas.
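To see what Eventarc actually delivers, one option (my choice here, not necessarily what the posts above use) is a plain CloudEvents HTTP receiver built with the CNCF sdk-go; the protobuf payload returned by Data() is what the google-cloudevents-go types are for.

```go
package main

import (
	"context"
	"log"

	cloudevents "github.com/cloudevents/sdk-go/v2"
)

func main() {
	c, err := cloudevents.NewClientHTTP()
	if err != nil {
		log.Fatal(err)
	}
	// Eventarc pushes CloudEvents over HTTP; for Firestore events the Data() bytes
	// are a protobuf that the google-cloudevents-go types can unmarshal
	log.Fatal(c.StartReceiver(context.Background(), func(ctx context.Context, e cloudevents.Event) {
		log.Printf("type: %s subject: %s bytes: %d", e.Type(), e.Subject(), len(e.Data()))
	}))
}
```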
Prost! Tonic w/ a dash of JSON
I naively (!) began exploring JSON marshaling of Protobufs in rust. Other protobuf language SDKs include JSON marshaling making the process straightforward. I was to learn that, in rust, it’s not so simple. Unfortunately, for me, this continues to discourage my further use of rust (rust is just hard).
My goal was to marshal an arbitrary protocol buffer message that included a oneof. I was unable to JSON marshal the Rust generated by tonic for such a message.
Navigating Koyeb's Golang SDK
Ackal deploys gRPC Health Checking clients in locations around the World in order to health check services that are representative of customer need.
Koyeb offers multiple locations and I spent time today writing a client for Ackal to integrate with Koyeb using the Golang client for the Koyeb API.
The SDK is generated from Koyeb’s OpenAPI (nee Swagger) endpoint using openapi-generator-cli
. This is a smart, programmatic solution to ensuring that the SDK always matches the API definition but I found the result is idiosyncratic and therefore a little gnarly.
Access Google Services using gRPC
Google publishes interface definitions of Google APIs (services) that support REST and gRPC in a repo called Google APIs. Google’s SDKs uses gRPC to access these services but, how to do this using e.g. gRPCurl?
I wanted to debug Cloud Profiler and its agent makes UpdateProfile RPCs to cloudprofiler.googleapis.com. Cloud Profiler is a more challenging service to debug because (a) it's publicly "write-only"; and (b) it has complex messages. UpdateProfile sends UpdateProfileRequest messages that include Profile messages that include profile_bytes which are gzip-compressed serialized protos of pprof's Profile.
Maintaining Container Images
As I contemplate moving my “thing” into production, I’m anticipating aspects of the application that need maintenance and how this can be automated.
I’d been negligent in the maintenance of some of my container images.
I’m using mostly Go and some Rust as the basis of static(ally-compiled) binaries that run in these containers but not every container has a base image of scratch
. scratch
is the only base image that doesn’t change and thus the only base image that doesn’t require that container images buit FROM
it, be maintained.
Delegate domain-wide authority using Golang
I’d not used Google’s Domain-wide Delegation from Golang and struggled to find example code.
Google provides Java and Python samples.
Google has a myriad packages implementing its OAuth security and it’s always daunting trying to determine which one to use.
As it happens, I backed into the solution through client.Options
ctx := context.Background()

// Google Workspace APIs don't use IAM; they use OAuth scopes
// Scopes used here must be reflected in the scopes on the
// Google Workspace Domain-wide Delegation client
scopes := []string{ ... }

// Delegates on behalf of this Google Workspace user
subject := "a@google-workspace-email.com"

// NB error handling elided for brevity
creds, _ := google.FindDefaultCredentialsWithParams(
	ctx,
	google.CredentialsParams{
		Scopes:  scopes,
		Subject: subject,
	},
)

opts := option.WithCredentials(creds)

service, _ := admin.NewService(ctx, opts)
In this case, NewService applies to Google's Golang Admin SDK API although the pattern of NewService(ctx) or NewService(ctx, opts), where opts is an option.ClientOption, is consistent across Google's Golang libraries.
Basic programmatic access to GitHub Issues
It’s been a while!
I’ve been spending time writing Bash scripts and a web site but neither has been sufficiently creative that I’ve felt worth a blog post.
As I’ve been finalizing the web site, I needed an Issue Tracker and decided to leverage GitHub(’s Issues).
As a former Googler, I'm familiar with Google's (excellent) internal issue tracking tool (Buganizer) and its public manifestation, Issue Tracker. Google documents Issue Tracker and its Issue type which I've mercilessly plagiarized in my implementation.
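A hedged sketch of what basic programmatic access looks like with the go-github client; the module's major-version suffix and the owner/repo values here are placeholders.

```go
package main

import (
	"context"
	"log"
	"os"

	"github.com/google/go-github/v60/github" // NB the major version in the import path changes frequently
	"golang.org/x/oauth2"
)

func main() {
	ctx := context.Background()

	// A personal access token with repo scope
	ts := oauth2.StaticTokenSource(&oauth2.Token{AccessToken: os.Getenv("GITHUB_TOKEN")})
	client := github.NewClient(oauth2.NewClient(ctx, ts))

	// Placeholders: owner and repo
	issue, _, err := client.Issues.Create(ctx, "owner", "repo", &github.IssueRequest{
		Title:  github.String("Example issue"),
		Body:   github.String("Created programmatically"),
		Labels: &[]string{"bug"},
	})
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("created: %s", issue.GetHTMLURL())
}
```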
Golang Structured Logging w/ Google Cloud Logging (2)
UPDATE There’s an issue with my naive implementation of
RenderValuesHook
as described in this post. I summarized the problem in this issue where I’ve outlined (hopefully) a more robust solution.
Recently, I described how to configure Golang logging so that user-defined key-values applied to the logs are parsed when ingested by Google Cloud Logging.
Here’s an example of what we’re trying to achieve. This is an example Cloud Logging log entry that incorporates user-defined labels (see dog:freddie
and foo:bar
) and a readily-querable jsonPayload
:
Golang Structured Logging w/ Google Cloud Logging
I’ve multiple components in an app and these are deployed across multiple Google Cloud Platform (GCP) services: Kubernetes Engine, Cloud Functions, Cloud Run, etc. Almost everything is written in Golang and I started the project using go-logr
.
logr
is in two parts: a Logger
that you use to write log entries; a LogSink
(adaptor) that consumes log entries and outputs them to a specific log implementation.
Initially, I defaulted to using stdr
which is a LogSink
for Go’s standard logging implementation. Something similar to the module’s example:
GitHub help with dependency management
This is very useful:
I am building an application that comprises multiple repos. I continue to procrastinate on whether using multiple repos vs. a monorepo was a good idea but an issue that I have (had) is the need to ensure that the repos' contents are using current|latest modules. GitHub can help.
Most of the application is written in Golang with a smattering of Rust and some JavaScript.
gRPC Interceptors and in-memory gRPC connections
For… reasons, I wanted to pre-filter gRPC requests to check for authorization. Authorization is implemented as a ‘micro-service’ and I wanted the authorization server to run in the same process as the gRPC client.
TL;DR:
- Shiju’s “Writing gRPC Interceptors in Go” is great
- This Stack overflow answer ostensibly for writing unit tests for gRPC got me an in-process server
What follows stands on these folks’ shoulders…
A key motivator for me to write blog posts is that it helps me ensure that I understand things. Writing this post, I realized I'd not researched gRPC Interceptors and, as luck would have it, I found some interesting content, not on grpc.io but in the grpc-ecosystem repo, specifically Go gRPC middleware. But, I refer again to Shiju's clear and helpful "Writing gRPC Interceptors in Go".
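Pulling those two ideas together, here's a self-contained Go sketch: a unary interceptor that checks for authorization metadata, served over an in-process bufconn listener so the client and "server" share a process. The gRPC Health service stands in for a real service.

```go
package main

import (
	"context"
	"log"
	"net"

	"google.golang.org/grpc"
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/credentials/insecure"
	"google.golang.org/grpc/health"
	healthpb "google.golang.org/grpc/health/grpc_health_v1"
	"google.golang.org/grpc/metadata"
	"google.golang.org/grpc/status"
	"google.golang.org/grpc/test/bufconn"
)

// authorize is a unary server interceptor that requires "authorization" metadata
func authorize(ctx context.Context, req interface{}, info *grpc.UnaryServerInfo, handler grpc.UnaryHandler) (interface{}, error) {
	md, ok := metadata.FromIncomingContext(ctx)
	if !ok || len(md.Get("authorization")) == 0 {
		return nil, status.Error(codes.Unauthenticated, "missing authorization metadata")
	}
	return handler(ctx, req)
}

func main() {
	// In-memory listener: no network, the "server" runs in the same process as the client
	lis := bufconn.Listen(1024 * 1024)
	server := grpc.NewServer(grpc.UnaryInterceptor(authorize))
	healthpb.RegisterHealthServer(server, health.NewServer())
	go func() {
		if err := server.Serve(lis); err != nil {
			log.Fatal(err)
		}
	}()

	conn, err := grpc.Dial("bufnet",
		grpc.WithContextDialer(func(context.Context, string) (net.Conn, error) { return lis.Dial() }),
		grpc.WithTransportCredentials(insecure.NewCredentials()),
	)
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	client := healthpb.NewHealthClient(conn)

	// Without the metadata, the interceptor rejects the call...
	if _, err := client.Check(context.Background(), &healthpb.HealthCheckRequest{}); err != nil {
		log.Printf("expected failure: %v", err)
	}

	// ...with it, the call succeeds
	ctx := metadata.AppendToOutgoingContext(context.Background(), "authorization", "Bearer test")
	resp, err := client.Check(ctx, &healthpb.HealthCheckRequest{})
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("status: %s", resp.GetStatus())
}
```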
Stripe
It’s been almost a month since my last post. I’ve been occupied learning Stripe and integrating it into an application that I’m developing. The app benefits from a billing mechanism for prospective customers and, as far as I can tell, Stripe is the solution. I’d be interested in hearing perspectives on alternatives.
As with any platform, there’s good and bad and I’ll summarize my perspective on Stripe here. It’s been some time since I developed in JavaScript and this lack of familiarity has meant that the solution took longer than I wanted to develop. That said, before this component, I developed integration with Firebase Authentication and that required JavaScript’ing too and that was much easier (and more enjoyable).
Struggling with Golang structs
Julia’s post Blog about what you’ve struggled with resonates because I’ve been struggling with Golang structs in a project. Not the definitions of structs but seemingly needing to reproduce them across the project. I realize that each instance of these resources differs from the others but I’m particularly concerned by having to duplicate method implementations on them.
I’m kinda hoping that I see the solution to my problem by writing it out. If you’re reading this, I didn’t :-(
Firestore Golang Timestamps & Merging
I’m using Google’s Golang SDK for Firestore. The experience is excellent and I’m quickly becoming a fan of Firestore. However, as a Golang Firestore developer, I’m feeling less loved and some of the concepts in the database were causing me a conundrum.
I’m still not entirely certain that I have Timestamps nailed but… I learned an important lesson on the auto-creation of Timestamps in documents and how to retain these values.
Cloud Firestore Triggers in Golang
I was pleased to discover that Google provides a non-Node.JS mechanism to subscribe to and act upon Firestore triggers, Google Cloud Firestore Triggers. I've nothing against Node.JS but, for the project I'm developing, everything else is written in Golang. It's good to keep it all in one language.
I’m perplexed that Cloud Functions still (!) only supports Go 1.13 (03-Sep-2019). Even Go 1.14 (25-Feb-2020) was released pre-pandemic and we’re now running on 1.16. Come on Google!
Using Golang with the Firestore Emulator
This works great but it wasn’t clearly documented for non-Firebase users. I assume it will work, as well, for any of the client libraries (not just Golang).
Assuming you have some (Golang) code (in this case using the Google Cloud Client Library) that interacts with a Firestore database. Something of the form:
package main

import (
	"context"
	"crypto/sha256"
	"fmt"
	"log"
	"os"
	"time"

	"cloud.google.com/go/firestore"
)

func hash(s string) string {
	h := sha256.New()
	h.Write([]byte(s))
	return fmt.Sprintf("%x", h.Sum(nil))
}

type Dog struct {
	Name    string                 `firestore:"name"`
	Age     int                    `firestore:"age"`
	Human   *firestore.DocumentRef `firestore:"human"`
	Created time.Time              `firestore:"created"`
}

func NewDog(name string, age int, human *firestore.DocumentRef) Dog {
	return Dog{
		Name:    name,
		Age:     age,
		Human:   human,
		Created: time.Now(),
	}
}

func (d *Dog) ID() string {
	return hash(d.Name)
}

type Human struct {
	Name string `firestore:"name"`
}

func (h *Human) ID() string {
	return hash(h.Name)
}

func main() {
	ctx := context.Background()
	project := os.Getenv("PROJECT")

	if value := os.Getenv("FIRESTORE_EMULATOR_HOST"); value != "" {
		log.Printf("Using Firestore Emulator: %s", value)
	}

	client, err := firestore.NewClient(ctx, project)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	me := Human{
		Name: "me",
	}
	meDocRef := client.Collection("humans").Doc(me.ID())
	if _, err := meDocRef.Set(ctx, me); err != nil {
		log.Fatal(err)
	}

	freddie := NewDog("Freddie", 2, meDocRef)
	freddieDocRef := client.Collection("dogs").Doc(freddie.ID())
	if _, err := freddieDocRef.Set(ctx, freddie); err != nil {
		log.Fatal(err)
	}
}
Then you can interact instead with the Firestore Emulator.
Programmatically deploying Cloud Run services (Golang|Python)
Phew! Programmatically deploying Cloud Run services should be easy but I didn't find it so.
My issues were that the Cloud Run Admin (!) API is poorly documented and it uses non-standard endpoints (thanks Sal!). Here, for others who may struggle with this, is how I got this to work.
Goal
Programmatically (have Golang, Python, want Rust) deploy services to Cloud Run.
i.e. achieve this:
gcloud run deploy ${NAME} \
--image=${IMAGE} \
--platform=managed \
--no-allow-unauthenticated \
--region=${REGION} \
--project=${PROJECT}
TRICK: --log-http is your friend
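For the Golang flavour, this is roughly the shape of the call using the generated run/v1 client and the regional (non-standard) endpoint. Field and method names here are from memory, so treat this as a sketch rather than gospel; the project, region, name and image values are placeholders.

```go
package main

import (
	"context"
	"fmt"
	"log"

	"google.golang.org/api/option"
	"google.golang.org/api/run/v1"
)

func main() {
	ctx := context.Background()

	project := "my-project" // placeholder: ${PROJECT}
	region := "us-west1"    // placeholder: ${REGION}
	name := "hello"
	image := "gcr.io/cloudrun/hello"

	// NB the non-standard, regional endpoint
	endpoint := fmt.Sprintf("https://%s-run.googleapis.com/", region)
	service, err := run.NewService(ctx, option.WithEndpoint(endpoint))
	if err != nil {
		log.Fatal(err)
	}

	svc := &run.Service{
		ApiVersion: "serving.knative.dev/v1",
		Kind:       "Service",
		Metadata: &run.ObjectMeta{
			Name:      name,
			Namespace: project,
		},
		Spec: &run.ServiceSpec{
			Template: &run.RevisionTemplate{
				Spec: &run.RevisionSpec{
					Containers: []*run.Container{
						{Image: image},
					},
				},
			},
		},
	}

	parent := fmt.Sprintf("namespaces/%s", project)
	if _, err := service.Namespaces.Services.Create(parent, svc).Do(); err != nil {
		log.Fatal(err)
	}
	log.Println("created")
}
```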
Dapr
It’s a good name, I read it as “dapper” but I frequently type “darp” :-(
Was interested to read that Dapr is now v1.0 and decided to check it out. I was initially confused between Dapr and service mesh functionality. But, having used Dapr, it appears to be more focused in aiding the development of (cloud-native) (distributed) apps by providing developers with abstractions for e.g. service discovery, eventing, observability whereas service meshes feel (!) more oriented towards simplifying the deployment of existing apps. Both use the concept of proxies, deployed alongside app components (as sidecars on Kubernetes) to provide their functionality to apps.
Kubernetes Webhooks
I spent some time last week writing my first admission webhook for Kubernetes. I wrote the handler in Golang because I’m most familiar with Golang and because, as Kubernetes’ native language, I was more confident that the necessary SDKs would exist and that the documentation would likely use Golang by default. I struggled to find useful documentation and so this post is to help you (and me!) remember how to do this next time!
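For next time, a minimal sketch of a validating webhook handler in Golang using the admission/v1 types; the policy, paths and cert file names are placeholders.

```go
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"net/http"

	admissionv1 "k8s.io/api/admission/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// validate decodes an AdmissionReview, applies a (trivial) policy and writes the response
func validate(w http.ResponseWriter, r *http.Request) {
	var review admissionv1.AdmissionReview
	if err := json.NewDecoder(r.Body).Decode(&review); err != nil {
		http.Error(w, err.Error(), http.StatusBadRequest)
		return
	}

	review.Response = &admissionv1.AdmissionResponse{
		UID:     review.Request.UID,
		Allowed: true,
	}
	// Example policy: reject anything in the "forbidden" namespace
	if review.Request.Namespace == "forbidden" {
		review.Response.Allowed = false
		review.Response.Result = &metav1.Status{
			Message: fmt.Sprintf("namespace %q is not allowed", review.Request.Namespace),
		}
	}

	w.Header().Set("Content-Type", "application/json")
	if err := json.NewEncoder(w).Encode(review); err != nil {
		log.Printf("encode: %v", err)
	}
}

func main() {
	http.HandleFunc("/validate", validate)
	// Admission webhooks must be served over TLS; cert paths here are placeholders
	log.Fatal(http.ListenAndServeTLS(":8443", "tls.crt", "tls.key", nil))
}
```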
Akri
For the past couple of weeks, I’ve been playing around with Akri, a Microsoft (DeisLabs) project for building a connected edge with Kubernetes. Kubernetes, IoT, Rust (and Golang) make this all compelling to me.
Initially, I deployed an Akri End-to-End to MicroK8s on Google Compute Engine (link) and Digital Ocean (link). But I was interested to create my own example and so have proposed a very (!) simple HTTP-based protocol.
This blog summarizes my thoughts about Akri and an explanation of the HTTP protocol implementation in the hope that this helps others.
GitHub Actions && GitHub Container Registry
You know when you start something and then regret it!? I think I’ll be sticking with Google Cloud Build; GitHub Actions appears functional and useful but I found the documentation to be confusing and limited and struggled to get a simple container image build|push working.
I’ve long used Docker Hub but am planning to use it less as a result of the planned changes. I want to see Docker succeed and to do so it needs to find a way to make money but, there are free alternatives including the new GitHub Container Registry and the very very cheap Google Container Registry.
Trillian Map Mode
Chatting with one of Google’s Trillian team, I was encouraged to explore Trillian’s Map Mode. The following is the result of some spelunking through this unfamiliar cave. I can’t provide any guarantee that this usage is correct or sufficient.
Here’s the repo: https://github.com/DazWilkin/go-trillian-map
I’ve written about Trillian Log Mode elsewhere.
I uncovered use of Trillian Map Mode through Trillian's integration tests. I'm unclear on the distinction between TrillianMapClient and TrillianMapWriteClient but the latter served most of my needs.
WASM Transparency
I’ve been playing around with a proof-of-concept combining WASM and Trillian. The hypothesis was to explore using WASM as a form of chaincode with Trillian. The project works but it’s far from being a chaincode-like solution.
Let’s start with a couple of (trivial) examples and then I’ll explain what’s going on and how it’s implemented.
2020/08/14 18:42:17 [main:loop:dynamic-invoke] Method: mul
2020/08/14 18:42:17 [random:New] Message
2020/08/14 18:42:17 [random:New] Float32
2020/08/14 18:42:17 [random:New] Float32
2020/08/14 18:42:17 [random:New] Message
2020/08/14 18:42:17 [random:New] Float32
2020/08/14 18:42:17 [random:New] Float32
2020/08/14 18:42:17 [Client:Invoke] Metadata: complex.wasm
2020/08/14 18:42:17 [main:loop:dynamic-invoke] Success: result:{real:0.036980484 imag:0.3898267}
After shipping a Rust-sourced WASM solution (complex.wasm) to the WASM transparency server, the client invokes a method mul that's exposed by it, using a dynamically generated request message, and outputs the response. Woo hoo! Yes, an expensive way to multiply complex numbers.
waPC and MsgPack (Rust|Golang)
As my reader will know (Hey Mom!), I’ve been noodling around with WASM and waPC. I’ve been exploring ways to pass structured messages across the host:guest boundary.
Protobufs was my first choice. @KevinHoffman created waPC and waSCC and he explained to me that waSCC uses Message Pack.
It’s slightly surprising to me (still) that technologies like this exist with everyone else seemingly using them and I’ve not heard of them. I don’t expect to know everything but I’m surprised I’ve not stumbled upon msgpack until now.
Golang Protobuf APIv2
Google has a new Golang Protobuf API, APIv2 (google.golang.org/protobuf), superseding APIv1 (github.com/golang/protobuf). If your code is importing github.com/golang/protobuf, you're using APIv1; otherwise, you should consult the docs because Google reimplemented APIv1 atop APIv2. One challenge this caused me, as someone who does not use protobufs and gRPC on a daily basis, is that gRPC code-generation is being removed from the (Golang) protoc-gen-go plugin and moved into a separate protoc-gen-go-grpc plugin.
Golang Xiaomi Bluetooth Temperature|Humidity (LYWSD03MMC) 2nd Gen
Well, this became more of an adventure than I'd originally wanted but, after learning some BLE and with the help of others (thanks Jonathan, JsBergbau), I've sample code that connects to 4 Xiaomi 2nd gen. thermometers, subscribes to readings and publishes the data to MQTT. From there, I'm scraping it using Inuits' MQTTGateway into Prometheus.
Repo: https://github.com/DazWilkin/gomijia2
Thanks|Credit:
- Jonathan McDowell for gomijia and help
- JsBergbau for help
Background
I’ve been playing around with ESPHome and blogged around my very positive experience ESPHome, MQTT, Prometheus and almost Cloud IoT. I’ve ordered a couple of ESP32-DevKitC and hope this will enable me to connect to Google Cloud IoT.
gRPC, Cloud Run & Endpoints
<3 Google but there’s quite often an assumption that we’re all sitting around the engineering table and, of course, we’re not.
Cloud Endpoints is a powerful offering but – IMO – it’s super confusing to understand and complex to deploy.
If you’re familiar with the motivations behind service meshes (e.g. Istio), Cloud Endpoints fits in a similar niche (“neesh” or “nitch”?). The underlying ambition is that, developers can take existing code and by adding a proxy (or sidecar), general-purpose abstractions, security, logging etc. may be added.
Golang gRPC Cloud Run
Update: 2020-03-24: Since writing this post, I’ve contributed Golang and Rust samples to Google’s project. I recommend you start there.
Google explained how to run gRPC servers with Cloud Run. The examples are good but only Python and Node.JS:
Missing Golang…. until now ;-)
I had problems with 1.14 and so I’m using 1.13.
Project structure
I’ll tidy up my repo but the code may be found:
Google's New Golang SDK for Protobufs
Google has released a new Golang SDK for protobuf. In the announcement, a useful way to redact properties is described. If, like me, this is somewhat novel to you, here's a mashup using Google's Protocol Buffer Basics w/ redaction.
To be very clear, as it’s an important distinction:
Version | Repo | Docs |
---|---|---|
v2 | google.golang.org/protobuf | Docs |
v1 | github.com/golang/protobuf | Docs |
Project
Here’s my project structure:
.
├── protoc-3.11.4-linux-x86_64
│ ├── bin
│ │ └── protoc
│ ├── include
│ │ └── google
│ └── readme.txt
└── src
├── go.mod
├── go.sum
├── main.go
├── protos
│ ├── addressbook.pb.go
│ └── addressbook.proto
└── README.md
You may structure as you wish.
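A simplified sketch of the redaction idea using APIv2's reflection package; the announcement's version keys off a custom field option, whereas this one just matches on field names to stay self-contained (it needs a generated message, e.g. the addressbook type above, to be useful).

```go
package redact

import (
	"strings"

	"google.golang.org/protobuf/proto"
	"google.golang.org/protobuf/reflect/protoreflect"
)

// Redact clears any string field whose name contains "secret".
// The announcement's approach uses a custom field option instead of the field name.
func Redact(m proto.Message) {
	r := m.ProtoReflect()
	r.Range(func(fd protoreflect.FieldDescriptor, v protoreflect.Value) bool {
		if fd.Kind() == protoreflect.StringKind && strings.Contains(string(fd.Name()), "secret") {
			r.Clear(fd)
		}
		return true
	})
}
```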
OriginStamp: Verifying Proofs
Recently, I wrote about some initial adventures with OriginStamp
Using OriginStamp’s UI or API, submitting a hash results in transactions being submitted to Bitcoin, Ethereum and a German newspaper.
Using the API, it’s possible to query OriginStamp’s service for a proof. This post explains how to verify such a proof.
The diligent reader among you (Hey Mom!) will recall that I submitted a hash for the message:
Frederik Jack is a bubbly Border Collie
The SHA-256 hash of this message is:
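Computing such a hash in Go is a one-liner (the value itself isn't reproduced in this excerpt):

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

func main() {
	message := "Frederik Jack is a bubbly Border Collie"
	hash := sha256.Sum256([]byte(message))
	fmt.Printf("%x\n", hash)
}
```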
FreeTSA & Digitorus' Timestamp SDK
I wrote recently about some exploration of Timestamping with OriginStamp. Since writing that post, I had some supportive feedback from the helpful folks at OriginStamp and plan to continue exploring that solution.
Meanwhile, OriginStamp exposed me to timestamping and trusted timestamping and I discovered freeTSA.org.
What’s the point? These services provide authoritative proof of the existence of a digital asset before some point in time; OriginStamp provides a richer service and uses multiple timestamp authorities including Bitcoin, Ethereum and rather interestingly a German Newspaper’s Trusted Timestamp.
OriginStamp Python|Golang SDK Examples
A friend mentioned OriginStamp to me.
NB There are 2 sites: originstamp.com and originstamp.org.
It’s an interesting project.
It’s a solution for providing auditable proof that you had a(ccess to) some digital thing before a certain date. OriginStamp provides user-|developer-friendly means to submit files|hashes (of your content) and have these bundled into transactions that are submitted to e.g. bitcoin.
I won’t attempt to duplicate the narrative here, review OriginStamp’s site and some of its content.
Cloud Build wishlist: Mountable Golang Modules Proxy
I think it would be valuable if Google were to provide volumes in Cloud Build of package registries (e.g. Go Modules; PyPi; Maven; NPM etc.).
Google provides a mirror of a subset of Docker Hub. This confers several benefits: Google’s imprimatur; speed (latency); bandwidth; and convenience.
The same benefits would apply to package registries.
In the meantime, there’s a hacky way to gain some of the benefits of these when using Cloud Build.
In the following example, I'll show an approach using Golang Modules and Google's module proxy aka proxy.golang.org.
Setting up a GCE Instance as an Inlets Exit Node
The prolific Alex Ellis has a new project, Inlets.
Here’s a quick tutorial using Google Compute Platform’s (GCP) Compute Engine (GCE).
NB I’m using one of Google’s “Always free” f1-micro instances but you may still pay for network *gress and storage
Assumptions
I’m assuming you’ve a Google account, have used GCP and have a billing account established, i.e. the following returns at least one billing account:
gcloud beta billing accounts list
If you’ve only one billing account and it’s the one you wish to use, then you can:
Google Fit
I’ve spent a few days exploring [Google Fit SDK] as I try to wean myself from my obsession with metrics (of all forms). A quick Googling got me to Robert’s Exporter Google Fit Daily Steps, Weight and Distance to a Google Sheet. This works and is probably where I should have stopped… avoiding the rabbit hole that I’ve been down…
I threw together a simple Golang implementation of the SDK using Google’s Golang API Client Library. Thanks to Robert’s example, I was able to infer some of the complexity this API particularly in its use of data types, data sources and datasets. Having used Stackdriver in my previous life, Google Fit’s structure bears more than a passing resemblance to Stackdriver’s data model and its use of resource types and metric types.
Google Home Exporter
I’m obsessing over Prometheus exporters. First came Linode Exporter, then GCP Exporter and, on Sunday, I stumbled upon a reverse-engineered API for Google Home devices and so wrote a very basic Google Home SDK and a similarly basic Google Home Exporter:
The SDK only implements /setup/eureka_info and then only some of the returned properties. There's not a lot of metric-like data to use besides SignalLevel (signal_level) and NoiseLevel (noise_level). I'm not clear on the meaning of some of the properties.
Google Cloud Platform (GCP) Exporter
Earlier this week I discussed a Linode Prometheus Exporter.
I added metrics for Digital Ocean’s Managed Kubernetes service to @metalmatze’s Digital Ocean Exporter.
This left, metrics for Google Cloud Platform (GCP) which has, for many years, been my primary cloud platform. So, today I wrote Prometheus Exporter for Google Cloud Platform.
All 3 of these exporters follow the template laid down by @metalmatze and, because each of these services has a well-written Golang SDK, it’s straightforward to implement an exporter for each of them.
Linode Prometheus Exporter
I enjoy using Prometheus and have toyed around with it for some time particularly in combination with Kubernetes. I signed up with Linode [referral] compelled by the addition of a managed Kubernetes service called Linode Kubernetes Engine (LKE). I have an anxiety that I’ll inadvertently leave resources running (unused) on a cloud platform. Instead of refreshing the relevant billing page, it struck me that Prometheus may (not yet proven) help.
The hypothesis is that a combination of a cloud-specific Prometheus exporter reporting aggregate uses of e.g. Linodes (instances), NodeBalancers, Kubernetes clusters etc., could form the basis of an alert mechanism using Prometheus’ alerting.
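That template boils down to implementing Prometheus' Collector interface. A sketch (the metric name, port and hard-coded value are placeholders; the real exporter gets its values from the Linode API):

```go
package main

import (
	"log"
	"net/http"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

// LinodeCollector reports aggregate resource counts as Prometheus metrics
type LinodeCollector struct {
	Instances *prometheus.Desc
}

func NewLinodeCollector() *LinodeCollector {
	return &LinodeCollector{
		Instances: prometheus.NewDesc(
			"linode_instances_total", // placeholder metric name
			"Number of Linode instances",
			[]string{"region"},
			nil,
		),
	}
}

func (c *LinodeCollector) Describe(ch chan<- *prometheus.Desc) {
	ch <- c.Instances
}

func (c *LinodeCollector) Collect(ch chan<- prometheus.Metric) {
	// In the real exporter this value comes from the Linode API (e.g. via linodego)
	ch <- prometheus.MustNewConstMetric(c.Instances, prometheus.GaugeValue, 2, "us-west")
}

func main() {
	registry := prometheus.NewRegistry()
	registry.MustRegister(NewLinodeCollector())

	http.Handle("/metrics", promhttp.HandlerFor(registry, promhttp.HandlerOpts{}))
	log.Fatal(http.ListenAndServe(":9388", nil)) // placeholder port
}
```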
PyPi Transparency
I’ve been noodling around with another Trillian personality.
Another in a theme that interests me: providing tamperproof logs for the packages in the popular package management registries.
The Golang team recently announced Go Module Mirror which is built atop Trillian. It seems to me that all the package registries (Go Modules, npm, Maven, NuGet etc.) would benefit from tamperproof logs hosted by a trusted 3rd-party.
As you may have guessed, PyPi Transparency is a log for PyPi packages. PyPi is comprehensive, definitive and trusted but, as with Go Module Mirror, it doesn’t hurt to provide a backup of some of its data. In the case of this solution, Trillian hosts a log of self-calculated SHA-256 hashes for Python packages that are added to it.
Cloud Functions Simple(st) HTTP Multi-host Proxy
Tweaked yesterday’s solution so that it will randomly select one from several hosts with which it’s configured.
package proxy

import (
	"log"
	"math/rand"
	"net/http"
	"net/url"
	"os"
	"strings"
	"time"
)

// NB Endpoint, Handler and reverseproxy are defined in the earlier (single-host) solution
func robin() {
	hostsList := os.Getenv("PROXY_HOST")
	if hostsList == "" {
		log.Fatal("'PROXY_HOST' environment variable should contain comma-separated list of hosts")
	}

	// Comma-separated list of hosts
	hosts := strings.Split(hostsList, ",")
	urls := make([]*url.URL, len(hosts))
	for i, host := range hosts {
		var origin = Endpoint{
			Host: host,
			Port: os.Getenv("PROXY_PORT"),
		}
		url, err := origin.URL()
		if err != nil {
			log.Fatal(err)
		}
		urls[i] = url
	}

	s := rand.NewSource(time.Now().UnixNano())
	q := rand.New(s)

	Handler = func(w http.ResponseWriter, r *http.Request) {
		// Pick one of the URLs at random
		url := urls[q.Int31n(int32(len(urls)))]
		log.Printf("[Handler] Forwarding: %s", url.String())
		// Forward to it
		reverseproxy(url, w, r)
	}
}
This requires a minor tweak to the deployment to escape the commas within the PROXY_HOST string to disambiguate these for gcloud:
Cloud Functions Simple(st) HTTP Proxy
I’m investigating the use of LetsEncrypt for gRPC services. I found this straightforward post by Scott Devoid and am going to try this approach.
Before I can do that, I need to be able to publish services (make them Internet-accessible) and would like to try to continue to use GCP for free.
Some time ago, I wrote about using the excellent Microk8s on GCP. Using an f1-micro, I'm hoping (!) to stay within the Compute Engine free tier. I'll also try to be diligent and delete the instance when it's not needed. This gives me a runtime platform and I can expose services to the Instance's (Node)Ports but, I'd prefer to not be billed for a simple proxy.
pypi-transparency
The goal of pypi-transparency is very similar to the underlying motivation for the Golang team’s Checksum Database (also built with Trillian).
Even though, PyPi provides hashes of the content of packages it hosts, the developer must trust that PyPi’s data is consistent. One ambition with pypi-transparency is to provide a companion, tamperproof log of PyPi package files in order to provide a double-check of these hashes.
It is important to understand what this does (and does not) provide. There's no validation of a package's content. The only calculation is that, on first observation, a SHA-256 hash is computed of the package's content and the hash is recorded. If the package is subsequently altered, it's very probable that the hash will change and this provides a signal to the user that the package's contents have changed. Because pypi-transparency uses a tamperproof log, it's very difficult to update the hash recorded in the tamperproof log to reflect this change. Corollary: pypi-transparency will record the hashes of packages that include malicious code.
Tag: GRPCurl
FauxRPC using gRPCurl, Golang and rust
I read FauxRPC + Testcontainers on Hacker News and was intrigued. I spent a little more time "evaluating" this than I'd planned because I'm forcing myself to use rust as much as possible and my ignorance (see below) caused me some challenges.
The technology is interesting and works well. The experience helped me explore Testcontainers too which I’d heard about but not explored until this week.
For my future self:
Tool | Description |
---|---|
FauxRPC | A general-purpose tool (built using Buf's Connect) that includes registry and stub (gRPC) services that can be (programmatically) configured (using a Protobuf descriptor) with stubs (example method responses) to help test gRPC implementations. |
Testcontainers | Write code (e.g. rust) to create and interact with (test) services (running in [Docker] containers). |
Connect | (More than a) gRPC implementation, used by FauxRPC. |
gRPCurl | A command-line gRPC tool. |
I started following along with FauxRPC's Testcontainers example but, being unfamiliar with Connect, I didn't know about its Eliza service. The service is available at demo.connectrpc.com:443 and is described by eliza.proto as part of examples-go. Had I realized this sooner, I would have used this example rather than the Health Checking protocol.
Google Cloud Translation w/ gRPC 3 ways
General
You’ll need a Google Cloud project with Cloud Translation (translate.googleapis.com
) enabled and a Service Account (and key) with suitable permissions in order to run the following.
BILLING="..." # Your Billing ID (gcloud billing accounts list)
PROJECT="..." # Your Project ID
ACCOUNT="tester"
EMAIL="${ACCOUNT}@${PROJECT}.iam.gserviceaccount.com"
ROLES=(
"roles/cloudtranslate.user"
"roles/serviceusage.serviceUsageConsumer"
)
# Create Project
gcloud projects create ${PROJECT}
# Associate Project with your Billing Account
gcloud billing accounts link ${PROJECT} \
--billing-account=${BILLING}
# Enable Cloud Translation
gcloud services enable translate.googleapis.com \
--project=${PROJECT}
# Create Service Account
gcloud iam service-accounts create ${ACCOUNT} \
--project=${PROJECT}
# Create Service Account Key
gcloud iam service-accounts keys create ${PWD}/${ACCOUNT}.json \
--iam-account=${EMAIL} \
--project=${PROJECT}
# Update Project IAM permissions
for ROLE in "${ROLES[@]}"
do
gcloud projects add-iam-policy-binding ${PROJECT} \
--member=serviceAccount:${EMAIL} \
--role=${ROLE}
done
For the code, you'll need to install protoc and preferably have it in your path.
Listing Cloud Logging log-based metrics using gRPC
Referring to Accessing Google Services using gRPC, I wanted to query a project’s Cloud Logging for log-based metrics using gRPC.
In summary:
ENDPOINT="logging.googleapis.com:443"
ROOT="/path/to/googleapis" # https://github.com/googleapis/googleapis
PACKAGE="google/logging/v2"
# NB Not logging.proto
PROTO="${ROOT}/${PACKAGE}/logging_metrics.proto"
TOKEN=$(gcloud auth print-access-token)
PROJECT="..."
PACKAGE="google.logging.v2"
SERVICE="MetricsServiceV2"
METHOD="${PACKAGE}.${SERVICE}/ListLogMetrics"
# ListLogMetricsRequest fields
PARENT="projects/${PROJECT}"
grpcurl \
--import-path=${ROOT} \
--proto=${PROTO} \
-H "Authorization: Bearer ${TOKEN}" \
-d "{\"parent\": \"${PARENT}\"}" \
${ENDPOINT} ${METHOD}
From APIs Explorer, Cloud Logging API v2, instead of the REST reference, browse the gRPC reference, specifically the package google.logging.v2 which includes MetricsServiceV2. We're interested in the ListLogMetrics method (which unfortunately isn't directly hyperlinkable) but is defined to be:
Access Google Services using gRPC
Google publishes interface definitions of Google APIs (services) that support REST and gRPC in a repo called Google APIs. Google’s SDKs uses gRPC to access these services but, how to do this using e.g. gRPCurl?
I wanted to debug Cloud Profiler and its agent makes UpdateProfile RPCs to cloudprofiler.googleapis.com. Cloud Profiler is a more challenging service to debug because (a) it's publicly "write-only"; and (b) it has complex messages. UpdateProfile sends UpdateProfileRequest messages that include Profile messages that include profile_bytes which are gzip-compressed serialized protos of pprof's Profile.
Google Trillian on Cloud Run
I’ve written previously (Google Trillian for Noobs) about Google’s interesting project Trillian and about some of the “personalities” (e.g. PyPi Transparency) that I’ve build using it.
Having gone slight cra-cra on Cloud Run and gRPC this week with Golang gRPC Cloud Run and gRPC, Cloud Run & Endpoints, I thought it’d be fun to deploy Trillian and a personality to Cloud Run.
It mostly (!) works :-)
At the end of the post, I’ve summarized creating a Cloud SQL instance to host the Trillian data(base).
gRPC, Cloud Run & Endpoints
<3 Google but there’s quite often an assumption that we’re all sitting around the engineering table and, of course, we’re not.
Cloud Endpoints is a powerful offering but – IMO – it’s super confusing to understand and complex to deploy.
If you’re familiar with the motivations behind service meshes (e.g. Istio), Cloud Endpoints fits in a similar niche (“neesh” or “nitch”?). The underlying ambition is that, developers can take existing code and by adding a proxy (or sidecar), general-purpose abstractions, security, logging etc. may be added.
Golang gRPC Cloud Run
Update: 2020-03-24: Since writing this post, I’ve contributed Golang and Rust samples to Google’s project. I recommend you start there.
Google explained how to run gRPC servers with Cloud Run. The examples are good but only Python and Node.JS:
Missing Golang…. until now ;-)
I had problems with 1.14 and so I’m using 1.13.
Project structure
I’ll tidy up my repo but the code may be found:
Tag: Cloud-Run
Cloud Run with a gRPC probe
Cloud Run supports gRPC startup|liveness probes which I’d not used before.
I’m using Cloud Run v2 and specifically projects.locations.services.create
and Service
PROJECT="..."
REGION="..."
REPO=".."
# Must be in an Artifact Registry repo
IMAGE="${REGION}-docker.pkg.dev/${PROJECT}/${REPO}/..."
# Run v2
ENDPOINT="https://run.googleapis.com/v2"
PARENT="projects/${PROJECT}/locations/${REGION}"
SERVICE="..."
I like to use Jsonnet (specifically go-jsonnet) to help with templating Kubernetes(-like) deployments.
cloudrun.jsonnet:
local project = std.extVar("project");
local region = std.extVar("region");
local service = std.extVar("service");
local image = std.extVar("image");
local port = 8080;
local health_checking_service = "foo";
{
  "labels": {
    "type": "test"
  },
  "annotations": {
    "type": "test"
  },
  "template": {
    "containers": [
      {
        "name": service,
        "image": image,
        "args": [],
        "resources": {
          "limits": {
            "cpu": "1000m",
            "memory": "512Mi"
          }
        },
        "ports": [
          {
            "name": "http1",
            "containerPort": port
          }
        ],
        "startupProbe": {
          "grpc": {
            "port": port,
            "service": health_checking_service
          }
        }
      }
    ],
    "scaling": {
      "maxInstanceCount": 1
    }
  }
}
And deploy it using:
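A minimal sketch of that deployment, assuming go-jsonnet’s jsonnet CLI and the Run v2 services.create REST method (the variables are those defined above):
# Render the Jsonnet into the Service JSON
jsonnet \
--ext-str project=${PROJECT} \
--ext-str region=${REGION} \
--ext-str service=${SERVICE} \
--ext-str image=${IMAGE} \
cloudrun.jsonnet > service.json
# POST it to the Cloud Run (v2) Admin API
TOKEN=$(gcloud auth print-access-token)
curl \
--request POST \
--header "Authorization: Bearer ${TOKEN}" \
--header "Content-Type: application/json" \
--data @service.json \
"${ENDPOINT}/${PARENT}/services?serviceId=${SERVICE}"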
Cloud Run custom domain mappings
I have several Cloud Run services that I want to map to a domain.
During development, I create a Google Cloud Platform (GCP) project each day into which everything is deployed. This means that, every day, the Cloud Run services have new URLs that I can’t infer. I thought this would be tedious to manage because:
- My DNS service isn’t programmable (I know!)
- Cloud Run services have non-inferable (by me) URLs
i.e. I thought I’d have to manually update the DNS entries each day.
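One workaround (a sketch; the domain is a placeholder) is to have Cloud Run manage the mapping and then create whatever (stable) DNS records it reports:
# Map a custom domain to the service (managed platform)
gcloud beta run domain-mappings create \
--service=${SERVICE} \
--domain=app.example.com \
--platform=managed \
--region=${REGION} \
--project=${PROJECT}
# Show the DNS records that need creating (once)
gcloud beta run domain-mappings describe \
--domain=app.example.com \
--platform=managed \
--region=${REGION} \
--project=${PROJECT}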
Prometheus HTTP Service Discovery of Cloud Run services
Some time ago, I wrote about using Prometheus Service Discovery w/ Consul for Cloud Run and also Scraping metrics exposed by Google Cloud Run services that require authentication. Both solutions remain viable but they didn’t address another use case for Prometheus and Cloud Run services that I have with a “thing” that I’ve been building.
In this scenario, I want to:
- Configure Prometheus to scrape Cloud Run service metrics
- Discover Cloud Run services dynamically
- Authenticate to Cloud Run using Firebase Auth ID tokens
These requirements and – one other – present several challenges:
Firebase Auth authorized domains
I’m using Firebase Authentication in a project to authenticate users of various OAuth2 identity systems. Firebase Authentication requires a set of Authorized Domains.
The (web) app that interacts with Firebase Authentication is deployed to Cloud Run. The Authorized Domains list must include the app’s Cloud Run service URL.
Cloud Run service URLs vary by Project (ID). They are a combination of the service name, a hash (?) of the Project (ID) and .a.run.app
.
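The URL can be queried rather than inferred though; a sketch, assuming the managed platform:
# Grab the service's generated URL so it can be added to Authorized Domains
URL=$(gcloud run services describe ${SERVICE} \
--platform=managed \
--region=${REGION} \
--project=${PROJECT} \
--format="value(status.url)")
# Authorized Domains wants the host, not the scheme
HOST=${URL#https://}
echo ${HOST}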
Scraping metrics exposed by Google Cloud Run services that require authentication
I’ve written a solution (gcp-oidc-token-proxy
) that can be used in conjunction with Prometheus OAuth2 to authenticate requests so that Prometheus can scrape metrics exposed by e.g. Cloud Run services that require authentication. The solution resulted from my question on Stack overflow.
Problem #1: Endpoint requires authentication
Given a Cloud Run service URL for which:
ENDPOINT="my-server-blahblah-wl.a.run.app"
# Returns 200 when authentication w/ an ID token
TOKEN="$(gcloud auth print-identity-token)"
curl \
--silent \
--request GET \
--header "Authorization: Bearer ${TOKEN}" \
--write-out "%{response_code}" \
--output /dev/null \
https://${ENDPOINT}/metrics
# Returns 403 otherwise
curl \
--silent \
--request GET \
--write-out "%{response_code}" \
--output /dev/null \
https://${ENDPOINT}/metrics
Problem #2: Prometheus OAuth2 configuration is constrained
`gcloud beta run services replace`
TL;DR I’m working on a project that includes multiple Cloud Run services. I’ve been putting my
gcloud
head on to deploy these services thinking that it’s curious there’s no way to write the specs as YAML configs. Today, I learned that there is: gcloud beta run services replace
.
What prompted the discovery was some frustration trying to deploy a JSON-valued environment variable to Cloud Run:
local FIREBASE_CONFIG="{
apiKey: ${FIREBASE_API_KEY},
authDomain: ${FIREBASE_AUTH_DOMAIN},
projectId: ${FIREBASE_PROJECT},
storageBucket: ${FIREBASE_STORAGE_BUCKET},
messagingSenderId: ${FIREBASE_MESSAGING_SENDER},
appId: ${FIREBASE_APP}}"
gcloud run deploy ${SRV_NAME} \
--image=${IMAGE} \
--command="/server" \
--args="--endpoint=:${PORT}" \
--set-env-vars=FIREBASE_CONFIG="${FIREBASE_CONFIG}" \
--max-instances=1 \
--memory=256Mi \
--ingress=all \
--platform=managed \
--port=${PORT} \
--allow-unauthenticated \
--region=${REGION} \
--project=${PROJECT}
gcloud
balks at this.
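With a YAML spec, the JSON-valued variable is just another (block scalar) string. A sketch of the replace workflow (the spec below is illustrative, not the project’s actual config):
# service.yaml: Knative-style spec consumed by `gcloud beta run services replace`
cat > service.yaml <<EOF
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: ${SRV_NAME}
spec:
  template:
    spec:
      containers:
      - image: ${IMAGE}
        env:
        - name: FIREBASE_CONFIG
          value: |
            {"apiKey": "...", "authDomain": "...", "projectId": "..."}
EOF
gcloud beta run services replace service.yaml \
--platform=managed \
--region=${REGION} \
--project=${PROJECT}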
Cloud Endpoints combine OpenAPI and gRPC... or not!
See:
- Multiplexing gRPC and HTTP endpoints with Cloud Run
- gRPC, Cloud Run & Endpoints
- ESPv2: Configure Cloud Endpoints to proxy traffic to a Cloud Run multiplexed (gRPC|HTTP) service
Challenges:
- Cloud Run permits a single port
- Cloud Run services publishing e.g. gRPC and Prometheus must multiplex transports
- Cloud Run services publishing multiplexed transports are challenging to expose using Cloud Endpoints
Hypothesis #1: Multiplexed transports work with Cloud Run
Consul discovers Google Cloud Run
I’ve written a basic discoverer
of Google Cloud Run services. This is for a project and it extends work done in some previous posts to Multiplex gRPC and Prometheus with Cloud Run and to use Consul for Prometheus service discovery.
This solution:
- Accepts a set of Google Cloud Platform (GCP) projects
- Trawls them for Cloud Run services
- Assumes that the services expose Prometheus metrics on
:443/metrics
- Relabels the services
- Surfaces any discovered Cloud Run services’ metrics in Prometheus
You’ll need Docker and Docker Compose.
Multiplexing gRPC and HTTP (Prometheus) endpoints with Cloud Run
Google Cloud Run is useful but, each service is limited to exposing a single port. This caused me problems with a gRPC service that serves (non-gRPC) Prometheus metrics because customarily, you would serve gRPC on one port and the Prometheus metrics on another.
Fortunately, cmux provides a solution by providing a mechanism that multiplexes both services (gRPC and HTTP) on a single port!
TL;DR See the cmux Limitations and use:
grpcl := m.MatchWithWriters( cmux.HTTP2MatchHeaderFieldSendSettings("content-type", "application/grpc"))
Extending the example from the cmux repo:
Prometheus Service Discovery w/ Consul for Cloud Run
I’m working on a project that will programmatically create Google Cloud Run services and I want to be able to dynamically discover these services using Prometheus.
This is one solution.
NOTE Google Cloud Run is the service I’m using, but the principle described herein applies to any runtime service that you’d wish to use.
Why is this challenging? IIUC, it’s primarily because Prometheus has a limited set of plugins for service discovery, see the sections that include _sd_
in Prometheus Configuration documentation. Unfortunately, Cloud Run is not explicitly supported. The alternative appears to be to use file-based discovery but this seems ‘challenging’; it requires, for example, reloading Prometheus on file changes.
Programmatically deploying Cloud Run services (Golang|Python)
Phew! Programmatically deploying Cloud Run services should be easy but I didn’t find it so.
My issues were that the Cloud Run Admin (!) API is poorly documented and it uses non-standard endpoints (thanks Sal!). Here, for others who may struggle with this, is how I got this to work.
Goal
Programmatically (have Golang, Python, want Rust) deploy services to Cloud Run.
i.e. achieve this:
gcloud run deploy ${NAME} \
--image=${IMAGE} \
--platform=managed \
--no-allow-unauthenticated \
--region=${REGION} \
--project=${PROJECT}
TRICK
--log-http
is your friend
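Append it to any gcloud run command to see the Admin API requests and responses that gcloud actually makes, e.g.:
# Log the underlying HTTP calls to the Cloud Run Admin API
gcloud run services list \
--platform=managed \
--region=${REGION} \
--project=${PROJECT} \
--log-http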
Google Trillian on Cloud Run
I’ve written previously (Google Trillian for Noobs) about Google’s interesting project Trillian and about some of the “personalities” (e.g. PyPi Transparency) that I’ve built using it.
Having gone slightly cra-cra on Cloud Run and gRPC this week with Golang gRPC Cloud Run and gRPC, Cloud Run & Endpoints, I thought it’d be fun to deploy Trillian and a personality to Cloud Run.
It mostly (!) works :-)
At the end of the post, I’ve summarized creating a Cloud SQL instance to host the Trillian data(base).
gRPC, Cloud Run & Endpoints
<3 Google but there’s quite often an assumption that we’re all sitting around the engineering table and, of course, we’re not.
Cloud Endpoints is a powerful offering but – IMO – it’s super confusing to understand and complex to deploy.
If you’re familiar with the motivations behind service meshes (e.g. Istio), Cloud Endpoints fits in a similar niche (“neesh” or “nitch”?). The underlying ambition is that developers can take existing code and, by adding a proxy (or sidecar), gain general-purpose abstractions such as security, logging etc.
Golang gRPC Cloud Run
Update: 2020-03-24: Since writing this post, I’ve contributed Golang and Rust samples to Google’s project. I recommend you start there.
Google explained how to run gRPC servers with Cloud Run. The examples are good but cover only Python and Node.js:
Missing Golang… until now ;-)
I had problems with 1.14 and so I’m using 1.13.
Project structure
I’ll tidy up my repo but the code may be found:
Tag: GRPC
Cloud Run with a gRPC probe
Cloud Run supports gRPC startup|liveness probes which I’d not used before.
I’m using Cloud Run v2 and specifically projects.locations.services.create
and Service
PROJECT="..."
REGION="..."
REPO=".."
# Must be in an Artifact Registry repo
IMAGE="${REGION}-docker.pkg.dev/${PROJECT}/${REPO}/..."
# Run v2
ENDPOINT="https://run.googleapis.com/v2"
PARENT="projects/${PROJECT}/locations/${REGION}"
SERVICE="..."
I like to use Jsonnet (specifically go-jsonnet
) to help with templating Kubernetes(-like) deployments.
cloudrun.jsonnet
:
local project = std.extVar("project");
local region = std.extVar("region");
local service = std.extVar("service");
local image = std.extVar("image");
local port = 8080;
local health_checking_service = "foo";
{
  "labels": {
    "type": "test"
  },
  "annotations": {
    "type": "test"
  },
  "template": {
    "containers": [
      {
        "name": service,
        "image": image,
        "args": [],
        "resources": {
          "limits": {
            "cpu": "1000m",
            "memory": "512Mi"
          }
        },
        "ports": [
          {
            "name": "http1",
            "containerPort": port
          }
        ],
        "startupProbe": {
          "grpc": {
            "port": port,
            "service": health_checking_service
          }
        }
      }
    ],
    "scaling": {
      "maxInstanceCount": 1
    }
  }
}
And deploy it using:
Securing gRPC services using Tailscale
This is so useful that it’s worth its own post.
I write many gRPC services. As these generally run securely, it’s best to test them that way too but, even with e.g. Let’s Encrypt, it can be challenging to generate appropriate TLS certs.
Tailscale makes this trivial.
Assuming there’s a gRPC service running on localhost:50051
, we want to avoid -plaintext
:
PORT="50051"
grpcurl \
-plaintext 0.0.0.0:${PORT} \
list
NOTE I’m using
list
and assuming your service has reflection enabled but you can, of course, use relevant methods.
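The gist (a sketch; the tailnet name is a placeholder) is to mint a certificate for the machine’s MagicDNS name with tailscale cert, configure the gRPC server with the resulting pair and then drop -plaintext:
# Mint a TLS cert|key for this machine's tailnet name
HOST="machine.tailnet-name.ts.net"
tailscale cert ${HOST}
# Point the gRPC server at ${HOST}.crt and ${HOST}.key, then:
grpcurl ${HOST}:${PORT} list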
Google Cloud Translation w/ gRPC 3 ways
General
You’ll need a Google Cloud project with Cloud Translation (translate.googleapis.com
) enabled and a Service Account (and key) with suitable permissions in order to run the following.
BILLING="..." # Your Billing ID (gcloud billing accounts list)
PROJECT="..." # Your Project ID
ACCOUNT="tester"
EMAIL="${ACCOUNT}@${PROJECT}.iam.gserviceaccount.com"
ROLES=(
"roles/cloudtranslate.user"
"roles/serviceusage.serviceUsageConsumer"
)
# Create Project
gcloud projects create ${PROJECT}
# Associate Project with your Billing Account
gcloud billing accounts link ${PROJECT} \
--billing-account=${BILLING}
# Enable Cloud Translation
gcloud services enable translate.googleapis.com \
--project=${PROJECT}
# Create Service Account
gcloud iam service-accounts create ${ACCOUNT} \
--project=${PROJECT}
# Create Service Account Key
gcloud iam service-accounts keys create ${PWD}/${ACCOUNT}.json \
--iam-account=${EMAIL} \
--project=${PROJECT}
# Update Project IAM permissions
for ROLE in "${ROLES[@]}"
do
gcloud projects add-iam-policy-binding ${PROJECT} \
--member=serviceAccount:${EMAIL} \
--role=${ROLE}
done
For the code, you’ll need to install protoc
and preferably have it in your path.
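To give a flavor of the gRPC surface before the code, grpcurl can call TranslateText directly (a sketch; the proto path within the googleapis repo and the request fields are from memory, so treat them as assumptions):
# Use the Service Account and grab an access token
gcloud auth activate-service-account --key-file=${PWD}/${ACCOUNT}.json
TOKEN=$(gcloud auth print-access-token)
ROOT="/path/to/googleapis" # https://github.com/googleapis/googleapis
PROTO="${ROOT}/google/cloud/translate/v3/translation_service.proto"
grpcurl \
--import-path=${ROOT} \
--proto=${PROTO} \
-H "Authorization: Bearer ${TOKEN}" \
-d "{\"parent\": \"projects/${PROJECT}/locations/global\", \"contents\": [\"Hello, World!\"], \"target_language_code\": \"de\"}" \
translate.googleapis.com:443 \
google.cloud.translation.v3.TranslationService/TranslateText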
Prometheus Protobufs and Native Histograms
I responded to a question Prometheus metric protocol buffer in gRPC on Stackoverflow and it piqued my curiosity and got me yak shaving.
Prometheus used to support two exposition formats including Protocol Buffers, then dropped Protocol Buffers support and has since re-added it (see Protobuf format). The Protobuf format has returned to support the experimental Native Histograms feature.
I’m interested in adding Native Histogram support to Ackal so thought I’d learn more about this metric.
Gnarly Protocol Buffers compilation
This Stackoverflow question piqued my interest:
retry policy configuration for grpc not working
Service Config in gRPC is new to me but, my initial suspicion (albeit incorrect) was that the JSON types were incorrect.
I decided to try using the Protocol Buffer source service_config.proto
to verify the JSON.
To do so I needed to compile the source…. it was gnarly.
There are 2 repos used:
The service_config.proto
includes options
for java_package
but no go_package
.
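One workaround (a sketch; the Go import path is arbitrary and the file path assumes a grpc-proto checkout) is to supply the missing mapping on the protoc command line with protoc-gen-go’s M option rather than editing the proto:
ROOT="/path/to/grpc-proto" # https://github.com/grpc/grpc-proto
# Map the proto file to a Go import path without a go_package option
protoc \
--proto_path=${ROOT} \
--go_out=${PWD}/gen \
--go_opt=Mgrpc/service_config/service_config.proto=example.com/serviceconfig \
grpc/service_config/service_config.proto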
Listing Cloud Logging log-based metrics using gRPC
Referring to Accessing Google Services using gRPC, I wanted to query a project’s Cloud Logging for log-based metrics using gRPC.
In summary:
ENDPOINT="logging.googleapis.com:443"
ROOT="/path/to/googleapis" # https://github.com/googleapis/googleapis
PACKAGE="google/logging/v2"
# NB Not logging.proto
PROTO="${ROOT}/${PACKAGE}/logging_metrics.proto"
TOKEN=$(gcloud auth print-access-token)
PROJECT="..."
PACKAGE="google.logging.v2"
SERVICE="MetricsServiceV2"
METHOD="${PACKAGE}.${SERVICE}/ListLogMetrics"
# ListLogMetricsRequest fields
PARENT="projects/${PROJECT}"
grpcurl \
--import-path=${ROOT} \
--proto=${PROTO} \
-H "Authorization: Bearer ${TOKEN}" \
-d "{\"parent\": \"${PARENT}\"}" \
${ENDPOINT} ${METHOD}
From APIs Explorer, Cloud Logging API v2, instead of the REST reference, browse the gRPC reference, specifically the package google.logging.v2
which includes MetricsServiceV2
. We’re interested in the ListLogMetrics
method (which unfortunately isn’t directly hyperlinkable) and which is defined to be:
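grpcurl can print that definition (and the request and response message types) from the local protos; IIRC, list and describe work without contacting the server when --proto is supplied:
# Describe the method from the proto source
grpcurl \
--import-path=${ROOT} \
--proto=${PROTO} \
describe ${PACKAGE}.${SERVICE}.ListLogMetrics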
Azure Container Apps
The majority of Ackal’s components are deployed to Google Cloud. However, by its nature, Ackal benefits from deployments that span cloud platforms. I’ve deployed Ackal’s gRPC health checks to Fly, and managed Kubernetes services on Linode and Vultr.
Today, I decided to revisit¹ Azure. Ackal uses Azure (Active Directory) for one of its OAuth providers. This time, I wanted to deploy a containerized gRPC service. Azure provides several container-oriented services. I decided to use Azure Container Apps and, in hindsight, find it analogous to Google Cloud Run.
Access Google Services using gRPC
Google publishes interface definitions of Google APIs (services) that support REST and gRPC in a repo called Google APIs. Google’s SDKs use gRPC to access these services but how can you do this using e.g. gRPCurl?
I wanted to debug Cloud Profiler and its agent makes UpdateProfile
RPCs to cloudprofiler.googleapis.com
. Cloud Profiler is a more challenging service to debug because (a) it’s publicly “write-only”; and (b) it has complex messages. UpdateProfile
sends UpdateProfileRequest
messages that include Profile
messages that include profile_bytes
which are gzip compressed serialized protos of pprof’s Profile
.
Secure (TLS) gRPC services with LKE
NOTE
cert-manager
is a better solution to what follows.
I wrote about deploying Secure (TLS) gRPC services with Vultr Kubernetes Engine (VKE). This week, I’ve reproduced this deployment using Linode Kubernetes Engine (LKE).
Thanks to the consistency provided by Kubernetes, the Kubernetes programming is almost identical. The main differences are between the CLIs provided by these platforms. Both are good. They’re just different.
I’m going to include the linode-cli
commands I’m using in this post as I found it slightly quirkier.
Secure (TLS) gRPC services with VKE
NOTE
cert-manager
is a better solution to what follows.
I’ve a need to deploy a Vultr Kubernetes Engine (VKE) cluster on a daily basis (create and delete within a few hours) and expose (securely|TLS) a gRPC service.
I have an existing solution, Automatic Certs w/ Golang gRPC service on Compute Engine, that combines a gRPC Healthchecking service and an ACME service and I decided to reuse this.
In order for it to work, we need:
Using Google's Public Certificate Authority with Golang autocert
Last year, I wrote about using Automatic Certs w/ Golang gRPC service on Compute Engine. That solution uses ACME with (the wonderful) Let’s Encrypt. Google is offering a private preview of Automate Public Certificates Lifecycle Management via RFC 8555 (ACME) and, because I’m using Google Cloud Platform extensively to build a “thing” and I think it would be useful to have a backup to Let’s Encrypt, I thought I’d give the solution a try. You’ll need to sign up for the private preview for what follows to work.
Automatic Certs w/ Golang gRPC service on Compute Engine
I needed to deploy a healthcheck-enabled gRPC TLS-enabled service. Fortunately, most (all?) of the SDKs include an implementation, e.g. Golang has grpc-go/health
.
I learned in my travels that:
- DigitalOcean [App] platform does not (link) work with TLS-based gRPC apps.
- Fly has a regression (link) that breaks gRPC
So, I resorted to Google Cloud Platform (GCP). Although Cloud Run would be well-suited to running the gRPC app, it uses a proxy|sidecar to provision a cert for the app and I wanted to be able to (easily) use a custom domain and give myself a somewhat general-purpose solution.
Renewing Firebase Authentication ID tokens with gRPC
I’ve written before about a project in which I’m using Firebase Authentication in combination with Google Cloud Endpoints and a gRPC service running on Cloud Run:
- Firebase Authentication, Cloud Endpoints and gRPC (1of2)
- Firebase Authentication, Cloud Endpoints and gRPC (2of2)
This works well with one caveat: the ID tokens (JWTs) minted by Firebase Authentication have a 3600-second (one hour) lifetime.
The user flow in my app is that whenever the user invokes the app’s CLI:
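The refresh itself is a single call to Firebase’s Secure Token endpoint (a sketch; the API key and stored refresh token are placeholders), exchanging the refresh token for a fresh ID token:
# Exchange a Firebase refresh token for a new (1-hour) ID token
API_KEY="..."        # The Firebase project's Web API key
REFRESH_TOKEN="..."  # Persisted when the user signed in
curl \
--silent \
--request POST \
--data "grant_type=refresh_token&refresh_token=${REFRESH_TOKEN}" \
"https://securetoken.googleapis.com/v1/token?key=${API_KEY}"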
gRPC Interceptors and in-memory gRPC connections
For… reasons, I wanted to pre-filter gRPC requests to check for authorization. Authorization is implemented as a ‘micro-service’ and I wanted the authorization server to run in the same process as the gRPC client.
TL;DR:
- Shiju’s “Writing gRPC Interceptors in Go” is great
- This Stack overflow answer ostensibly for writing unit tests for gRPC got me an in-process server
What follows stands on these folks’ shoulders…
A key motivator for me to write blog posts is that it helps me ensure that I understand things. Writing this post, I realized I’d not researched gRPC Interceptors and, as luck would have it, I found some interesting content, not on grpc.io
but on the grpc-ecosystem
repo, specifically Go gRPC middleware. But, I refer again to Shiju’s clear and helpful “Writing gRPC Interceptors in Go”.
Firebase Authentication, Cloud Endpoints and gRPC (2of2)
Earlier this week, I wrote about using Firebase Authentication, Cloud Endpoints and gRPC (1of2). Since then, I learned some more and added a gRPC interceptor to implement basic authorization for the service.
ESPv2 --allow-unauthenticated
The Cloud Endpoints (ESPv2) proxy must be run with --allow-unauthenticated
on Cloud Run to ensure that requests make it to the proxy, where they are authenticated, and only authenticated requests make it on to the backend service. Thanks to Google’s Teju Nareddy!
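Concretely, the deployment looks something like this (a sketch; the names and image tag are illustrative) while the backend service keeps its IAM restriction:
# The ESPv2 proxy itself is publicly invocable
gcloud run deploy ${PROXY_NAME} \
--image=gcr.io/endpoints-release/endpoints-runtime-serverless:2 \
--allow-unauthenticated \
--platform=managed \
--region=${REGION} \
--project=${PROJECT}
# The backend service is not
gcloud run services remove-iam-policy-binding ${BACKEND_NAME} \
--member=allUsers \
--role=roles/run.invoker \
--platform=managed \
--region=${REGION} \
--project=${PROJECT}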
Firebase Authentication, Cloud Endpoints and gRPC (1of2)
I’m building a service that requires user authentication. The primary endpoint is a gRPC-based service. I would like to consider using certificate-based auth but this feels… challenging. Instead, I have been aware of, but never used, Firebase Authentication and was interested to see that Cloud Endpoints includes Firebase Authentication as one of its supported auth mechanisms. Curiosity piqued, I confirmed that gRPC supports Google token-based authentication.
The following is a summary of what I did but I’ll leave the extensive documentation to Google, (Google’s) Firebase and gRPC, all of which, in this case, provide really good explanations.
Cloud Endpoints combine OpenAPI and gRPC... or not!
See:
- Multiplexing gRPC and HTTP endpoints with Cloud Run
- gRPC, Cloud Run & Endpoints
- ESPv2: Configure Cloud Endpoints to proxy traffic to a Cloud Run multiplexed (gRPC|HTTP) service
Challenges:
- Cloud Run permits a single port
- Cloud Run services publishing e.g. gRPC and Prometheus must multiplex transports
- Cloud Run services publishing multiplexed transports are challenging to expose using Cloud Endpoints
Hypothesis #1: Multiplexed transports work with Cloud Run
Multiplexing gRPC and HTTP (Prometheus) endpoints with Cloud Run
Google Cloud Run is useful but, each service is limited to exposing a single port. This caused me problems with a gRPC service that serves (non-gRPC) Prometheus metrics because customarily, you would serve gRPC on one port and the Prometheus metrics on another.
Fortunately, cmux provides a solution by providing a mechanism that multiplexes both services (gRPC and HTTP) on a single port!
TL;DR See the cmux Limitations and use:
grpcl := m.MatchWithWriters( cmux.HTTP2MatchHeaderFieldSendSettings("content-type", "application/grpc"))
Extending the example from the cmux repo:
Fly.io
I spent some time over the weekend understanding Fly.io. It’s always fascinating to me how many smart people are building really neat solutions. Fly.io is subtly different to other platforms that I use (Kubernetes, GCP, DO, Linode) and I’ve found the Fly.io team to be highly responsive and helpful to my noob questions.
One of the team’s posts, Docker without Docker surfaced in my Feedly feed (hackernews) and it piqued my interest.
Dapr
It’s a good name, I read it as “dapper” but I frequently type “darp” :-(
Was interested to read that Dapr is now v1.0 and decided to check it out. I was initially confused between Dapr and service mesh functionality. But, having used Dapr, it appears to be more focused on aiding the development of (cloud-native) (distributed) apps by providing developers with abstractions for e.g. service discovery, eventing and observability, whereas service meshes feel (!) more oriented towards simplifying the deployment of existing apps. Both use the concept of proxies, deployed alongside app components (as sidecars on Kubernetes), to provide their functionality to apps.
Remotely invoking WASM functions using gRPC and waPC
Following on from waPC & Protobufs, I can now remotely invoke (arbitrary) WASM functions:
Client:
The logging isn’t perfectly clear but, the client gets (a previously added) WASM binary from the server (using the SHA-256 of the WASM binary as a unique identifier). The result includes metadata that includes a protobuf descriptor of the WASM binary’s functions. The descriptor defines gRPC services (that represent the WASM functions) with input (parameters) and output (results) messages.
Rust implementation of Crate Transparency using Google Trillian
I’ve been hacking on a Rust-based transparency application for Google Trillian. As appears to be my fixation, this personality is for another package manager. This time, Rust’s crates, found in crates.io
which is Rust’s Package Registry. I discussed this project earlier this month in Rust Crate Transparency && Rust SDK for Google Trillian and an earlier approach for Python’s packages with pypi-transparency.
This time, of course, I’m using Rust and, by way of a first for me, for the gRPC server implementation (aka “personality”) too. I’ve been lazy thanks to the excellent gRPCurl and have been using it by way of a client. Because I’m more familiar with Golang and because I’ve written (most) other Trillian personalities in Golang, I resorted to quickly implementing Crate Transparency in Golang too in order to uncover bugs in the Rust implementation. I’ll write a follow-up post on the complexity I seem to struggle with when using protobufs and gRPC [in Golang].
Rust Crate Transparency && Rust SDK for Google Trillian
I’m noodling the utility of a Transparency solution for Rust Crates. When developers push crates to Cargo, a bunch of metadata is associated with the crate. E.g. protobuf
. As with Golang Modules, Python packages on PyPi etc., there appears to be utility in making tamperproof recordings of these publications. Then, other developers may confirm that a crate pulled from crates.io is highly unlikely to have been changed.
On Linux, Cargo stores downloaded crates under ${HOME}/.cargo/registry
. In the case of the latest version (2.12.0
) of protobuf
, on my machine, I have:
Google Trillian on Cloud Run
I’ve written previously (Google Trillian for Noobs) about Google’s interesting project Trillian and about some of the “personalities” (e.g. PyPi Transparency) that I’ve built using it.
Having gone slightly cra-cra on Cloud Run and gRPC this week with Golang gRPC Cloud Run and gRPC, Cloud Run & Endpoints, I thought it’d be fun to deploy Trillian and a personality to Cloud Run.
It mostly (!) works :-)
At the end of the post, I’ve summarized creating a Cloud SQL instance to host the Trillian data(base).
gRPC, Cloud Run & Endpoints
<3 Google but there’s quite often an assumption that we’re all sitting around the engineering table and, of course, we’re not.
Cloud Endpoints is a powerful offering but – IMO – it’s super confusing to understand and complex to deploy.
If you’re familiar with the motivations behind service meshes (e.g. Istio), Cloud Endpoints fits in a similar niche (“neesh” or “nitch”?). The underlying ambition is that developers can take existing code and, by adding a proxy (or sidecar), gain general-purpose abstractions such as security, logging etc.
Golang gRPC Cloud Run
Update: 2020-03-24: Since writing this post, I’ve contributed Golang and Rust samples to Google’s project. I recommend you start there.
Google explained how to run gRPC servers with Cloud Run. The examples are good but cover only Python and Node.js:
Missing Golang… until now ;-)
I had problems with 1.14 and so I’m using 1.13.
Project structure
I’ll tidy up my repo but the code may be found:
Cloud Functions Simple(st) HTTP Multi-host Proxy
Tweaked yesterday’s solution so that it will randomly select one from several hosts with which it’s configured.
package proxy
import (
"log"
"math/rand"
"net/http"
"net/url"
"os"
"strings"
"time"
)
func robin() {
hostsList := os.Getenv("PROXY_HOST")
if hostsList == "" {
log.Fatal("'PROXY_HOST' environment variable should contain comma-separated list of hosts")
}
// Comma-separated lists of hosts
hosts := strings.Split(hostsList, ",")
urls := make([]*url.URL, len(hosts))
for i, host := range hosts {
var origin = Endpoint{
Host: host,
Port: os.Getenv("PROXY_PORT"),
}
url, err := origin.URL()
if err != nil {
log.Fatal(err)
}
urls[i] = url
}
s := rand.NewSource(time.Now().UnixNano())
q := rand.New(s)
Handler = func(w http.ResponseWriter, r *http.Request) {
// Pick one of the URLs at random
url := urls[q.Int31n(int32(len(urls)))]
log.Printf("[Handler] Forwarding: %s", url.String())
// Forward to it
reverseproxy(url, w, r)
}
}
This requires a minor tweak to the deployment to escape the commas within the PROXY_HOST
string to disambiguate these for gcloud
:
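gcloud supports an alternative delimiter syntax for list-valued flags (see gcloud topic escaping); a sketch, with the function name, runtime and hosts as placeholders:
# '^|^' switches the list delimiter to '|' so the commas in PROXY_HOST survive
gcloud functions deploy ${NAME} \
--set-env-vars="^|^PROXY_HOST=host1.example.com,host2.example.com|PROXY_PORT=443" \
--runtime=go113 \
--trigger-http \
--region=${REGION} \
--project=${PROJECT}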
Cloud Functions Simple(st) HTTP Proxy
I’m investigating the use of LetsEncrypt for gRPC services. I found this straightforward post by Scott Devoid and am going to try this approach.
Before I can do that, I need to be able to publish services (make them Internet-accessible) and would like to try to continue to use GCP for free.
Some time ago, I wrote about using the excellent Microk8s on GCP. Using an f1-micro
, I’m hoping (!) to stay within the Compute Engine free tier. I’ll also try to be diligent and delete the instance when it’s not needed. This gives me a runtime platform and I can expose services to the Instance’s (Node)Ports but, I’d prefer to not be billed for a simple proxy.
Tag: Probe
Cloud Run with a gRPC probe
Cloud Run supports gRPC startup|liveness probes which I’d not used before.
I’m using Cloud Run v2 and specifically projects.locations.services.create
and Service
PROJECT="..."
REGION="..."
REPO=".."
# Must be in an Artifact Registry repo
IMAGE="${REGION}-docker.pkg.dev/${PROJECT}/${REPO}/..."
# Run v2
ENDPOINT="https://run.googleapis.com/v2"
PARENT="projects/${PROJECT}/locations/${REGION}"
SERVICE="..."
I like to use Jsonnet (specifically go-jsonnet
) to help with templating Kubernetes(-like) deployments.
cloudrun.jsonnet
:
local project = std.extVar("project");
local region = std.extVar("region");
local service = std.extVar("service");
local image = std.extVar("image");
local port = 8080;
local health_checking_service = "foo";
{
  "labels": {
    "type": "test"
  },
  "annotations": {
    "type": "test"
  },
  "template": {
    "containers": [
      {
        "name": service,
        "image": image,
        "args": [],
        "resources": {
          "limits": {
            "cpu": "1000m",
            "memory": "512Mi"
          }
        },
        "ports": [
          {
            "name": "http1",
            "containerPort": port
          }
        ],
        "startupProbe": {
          "grpc": {
            "port": port,
            "service": health_checking_service
          }
        }
      }
    ],
    "scaling": {
      "maxInstanceCount": 1
    }
  }
}
And deploy it using:
Tag: Trivy
Trivy vulnerability scanning
I build (and therefore manage) many container images. It’s easy (common?) to overlook that these images contain vulnerabilities, hopefully ones that have since been fixed, meaning the images must be rebuilt to pick up the fixes.
I have used Google’s very expensive container vulnerability scanning tool but wanted something cheaper. I found this list of open source solutions on Reddit and decided to look into Trivy.
It’s possible to install Trivy via a package manager, as a binary or by building the Go binary locally but I prefer to use containers whenever possible:
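e.g. running Trivy’s official image against one of my images (a sketch; the target image is a placeholder):
# Scan an image using Trivy's container image
docker run \
--rm \
--volume ${HOME}/.cache/trivy:/root/.cache/ \
aquasec/trivy:latest \
image us.gcr.io/${PROJECT}/my-image:latest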
Tag: Vulnerability Scanning
Trivy vulnerability scanning
I build (and therefore manage) many container images. It’s easy (common?) to overlook that these images contain vulnerabilities, hopefully ones that have since been fixed, meaning the images must be rebuilt to pick up the fixes.
I have used Google’s very expensive container vulnerability scanning tool but wanted something cheaper. I found this list of open source solutions on Reddit and decided to look into Trivy.
It’s possible to install Trivy via a package manager, as a binary or by building the Go binary locally but I prefer to use containers whenever possible:
Tag: Google Cloud Functions
XML-RPC in Rust and Python
A lazy Sunday afternoon and my interest was piqued by XML-RPC
Client
A very basic XML-RPC client wrapped in a Cloud Functions function:
main.py
:
import functions_framework
import os
import xmlrpc.client
endpoint = os.getenv("ENDPOINT")
proxy = xmlrpc.client.ServerProxy(endpoint)
@functions_framework.http
def add(request):
    print(request)
    rqst = request.get_json(silent=True)
    resp = proxy.add({
        "x": {
            "real": rqst["x"]["real"],
            "imag": rqst["x"]["imag"]
        },
        "y": {
            "real": rqst["y"]["real"],
            "imag": rqst["y"]["imag"]
        }
    })
    return resp
requirements.txt
:
functions-framework==3.*
Run it:
python3 -m venv venv
source venv/bin/activate
python3 -m pip install --requirement requirements.txt
export ENDPOINT="..."
functions-framework --target=add
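With the framework listening on localhost:8080 (its default), a request matching what the handler expects looks like:
curl \
--request POST \
--header "Content-Type: application/json" \
--data '{"x":{"real":1,"imag":2},"y":{"real":3,"imag":4}}' \
http://localhost:8080/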
Server
Forcing myself to go Rust first and there’s an (old) xml-rpc crate.
Tag: Python
XML-RPC in Rust and Python
A lazy Sunday afternoon and my interest was piqued by XML-RPC
Client
A very basic XML-RPC client wrapped in a Cloud Functions function:
main.py
:
import functions_framework
import os
import xmlrpc.client
endpoint = os.getenv("ENDPOINT")
proxy = xmlrpc.client.ServerProxy(endpoint)
@functions_framework.http
def add(request):
    print(request)
    rqst = request.get_json(silent=True)
    resp = proxy.add({
        "x": {
            "real": rqst["x"]["real"],
            "imag": rqst["x"]["imag"]
        },
        "y": {
            "real": rqst["y"]["real"],
            "imag": rqst["y"]["imag"]
        }
    })
    return resp
requirements.txt
:
functions-framework==3.*
Run it:
python3 -m venv venv
source venv/bin/activate
python3 -m pip install --requirement requirements.txt
export ENDPOINT="..."
functions-framework --target=add
Server
Forcing myself to go Rust first and there’s an (old) xml-rpc crate.
Prost! Tonic w/ a dash of JSON
I naively (!) began exploring JSON marshaling of Protobufs in rust. Other protobuf language SDKs include JSON marshaling making the process straightforward. I was to learn that, in rust, it’s not so simple. Unfortunately, for me, this continues to discourage my further use of rust (rust is just hard).
My goal was to marshal an arbitrary protocol buffer message that included a oneof
feature. I was unable to JSON marshal the rust code generated by tonic
for such a message.
Kubernetes Python SDK w/ CRDs
Responded to Get Custom K8s Resource using Python and found the CustomObjectsApi
documentation unclear.
If you have a cluster and a kubeconfig file with a correctly configured current-context
, so that you can successfully:
PLURAL="checks"
kubectl get ${PLURAL} \
--all-namespaces
NOTE I’m using Ackal’s CRDs in these examples.
Then you can use the following code to access the cluster’s REST API server to enumerate its CRDs:
main.py
:
from __future__ import print_function
from kubernetes import client, config
from kubernetes.client.rest import ApiException
config.load_kube_config()
api = client.CustomObjectsApi()
# Ackal's Group|Version and some Kinds
group = 'ack.al'
version = 'v1'
plurals = ['checks','customers']
for plural in plurals:
    try:
        resp = api.list_cluster_custom_object(
            group,
            version,
            plural,
        )
        for item in resp["items"]:
            spec = item["spec"]
            print(spec)
    except ApiException as e:
        print(e)
python3 -m venv venv
source venv/bin/activate
python3 -m pip install kubernetes==26.1.0
python3 main.py
That’s all!
Python Protobuf changes
Python’s Protocol Buffers code-generation using protoc
has had significant changes that can cause developers… “challenges”. This post summarizes my experience of these, mostly to save me from repeatedly recreating this history for myself when I forget it.
- Version change
- Generated code change
- Implementation Backends
I’ll use this summarized table of proto
and the Pypi library’s history in this post. protoc
refers to the compiler that supports code-generation in multiple languages. protobuf
refers to the corresponding Python (runtime) library on Pypi:
Programmatically deploying Cloud Run services (Golang|Python)
Phew! Programmatically deploying Cloud Run services should be easy but I didn’t find it so.
My issues were that the Cloud Run Admin (!) API is poorly documented and it uses non-standard endpoints (thanks Sal!). Here, for others who may struggle with this, is how I got this to work.
Goal
Programmatically (have Golang, Python, want Rust) deploy services to Cloud Run.
i.e. achieve this:
gcloud run deploy ${NAME} \
--image=${IMAGE} \
--platform=managed \
--no-allow-unauthenticated \
--region=${REGION} \
--project=${PROJECT}
TRICK
--log-http
is your friend
OriginStamp Python|Golang SDK Examples
A friend mentioned OriginStamp to me.
NB There are 2 sites: originstamp.com and originstamp.org.
It’s an interesting project.
It’s a solution for providing auditable proof that you had a(ccess to) some digital thing before a certain date. OriginStamp provides user-|developer-friendly means to submit files|hashes (of your content) and have these bundled into transactions that are submitted to e.g. bitcoin.
I won’t attempt to duplicate the narrative here; review OriginStamp’s site and some of its content.
PyPi Transparency
I’ve been noodling around with another Trillian personality.
Another in a theme that interests me: providing tamperproof logs for the packages in the popular package management registries.
The Golang team recently announced Go Module Mirror which is built atop Trillian. It seems to me that all the package registries (Go Modules, npm, Maven, NuGet etc.) would benefit from tamperproof logs hosted by a trusted 3rd-party.
As you may have guessed, PyPi Transparency is a log for PyPi packages. PyPi is comprehensive, definitive and trusted but, as with Go Module Mirror, it doesn’t hurt to provide a backup of some of its data. In the case of this solution, Trillian hosts a log of self-calculated SHA-256 hashes for Python packages that are added to it.
Tag: Tailscale
XML-RPC in Rust and Python
A lazy Sunday afternoon and my interest was piqued by XML-RPC
Client
A very basic XML-RPC client wrapped in a Cloud Functions function:
main.py
:
import functions_framework
import os
import xmlrpc.client
endpoint = os.getenv("ENDPOINT")
proxy = xmlrpc.client.ServerProxy(endpoint)
@functions_framework.http
def add(request):
    print(request)
    rqst = request.get_json(silent=True)
    resp = proxy.add({
        "x": {
            "real": rqst["x"]["real"],
            "imag": rqst["x"]["imag"]
        },
        "y": {
            "real": rqst["y"]["real"],
            "imag": rqst["y"]["imag"]
        }
    })
    return resp
requirements.txt
:
functions-framework==3.*
Run it:
python3 -m venv venv
source venv/bin/activate
python3 -m pip install --requirement requirements.txt
export ENDPOINT="..."
functions-framework --target=add
Server
Forcing myself to go Rust first and there’s an (old) xml-rpc crate.
Securing gRPC services using Tailscale
This is so useful that it’s worth its own post.
I write many gRPC services. As these generally run securely, it’s best to test them that way too but, even with e.g. Let’s Encrypt, it can be challenging to generate appropriate TLS certs.
Tailscale makes this trivial.
Assuming there’s a gRPC service running on localhost:50051
, we want to avoid -plaintext
:
PORT="50051"
grpcurl \
-plaintext 0.0.0.0:${PORT} \
list
NOTE I’m using
list
and assuming your service has reflection enabled but you can, of course, use relevant methods.
`curl`'ing a Tailscale Webhook
Tailscale is really good. I’ve been using it as a virtual private network to span 2 home networks and to securely (!) access my hosts when I’m remote.
Recently Tailscale added Webhook functionality to permit processing subscribed-to (Tailscale) events. I’m always a sucker for a webhook ;-)
Here’s a curl
command to send a test event to a Tailscale Webhook:
URL=""
# From Tailscale's docs
# https://tailscale.com/kb/1213/webhooks/#events-payload
BODY='
[
{
"timestamp": "2022-09-21T13:37:51.658918-04:00",
"version": 1,
"type": "test",
"tailnet": "example.com",
"message": "This is a test event",
"data": null
}
]
'
T=$(date +%s)
V=$(\
printf "${T}.${BODY}" \
| openssl dgst -sha256 -hmac "${SECRET}" -hex -r \
| head --bytes=64)
curl \
--request POST \
--header "Tailscale-Webhook-Signature:t=${T},v1=${V}" \
--header "Content-Type: application/json" \
--data "${BODY}" \
https://${URL}
There must be a better way of extracting the hashed value from the openssl
output.
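One option (a quick sketch) is to have awk take the first field of the -r output ("<digest> *stdin") instead of counting bytes:
V=$(\
printf "${T}.${BODY}" \
| openssl dgst -sha256 -hmac "${SECRET}" -hex -r \
| awk '{print $1}')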
Tag: XML-RPC
XML-RPC in Rust and Python
A lazy Sunday afternoon and my interest was piqued by XML-RPC
Client
A very basic XML-RPC client wrapped in a Cloud Functions function:
main.py
:
import functions_framework
import os
import xmlrpc.client
endpoint = os.getenv("ENDPOINT")
proxy = xmlrpc.client.ServerProxy(endpoint)
@functions_framework.http
def add(request):
    print(request)
    rqst = request.get_json(silent=True)
    resp = proxy.add({
        "x": {
            "real": rqst["x"]["real"],
            "imag": rqst["x"]["imag"]
        },
        "y": {
            "real": rqst["y"]["real"],
            "imag": rqst["y"]["imag"]
        }
    })
    return resp
requirements.txt
:
functions-framework==3.*
Run it:
python3 -m venv venv
source venv/bin/activate
python3 -m pip install --requirement requirements.txt
export ENDPOINT="..."
functions-framework --target=add
Server
Forcing myself to go Rust first and there’s an (old) xml-rpc crate.
Tag: CRD
Using Rust to generate Kubernetes CRD
For the first time, I chose Rust to solve a problem. Until this, I’ve been trying to use Rust to learn the language and to rewrite existing code. But, this problem led me to Rust because my other tools wouldn’t cut it.
The question was how to represent oneof fields in Kubernetes Custom Resource Definitions (CRDs).
CRDs use OpenAPI schema and the YAML that results can be challenging to grok.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: deploymentconfigs.example.com
spec:
  group: example.com
  names:
    categories: []
    kind: DeploymentConfig
    plural: deploymentconfigs
    shortNames: []
    singular: deploymentconfig
  scope: Namespaced
  versions:
  - additionalPrinterColumns: []
    name: v1alpha1
    schema:
      openAPIV3Schema:
        description: An example schema
        properties:
          spec:
            properties:
              deployment_strategy:
                oneOf:
                - required:
                  - rolling_update
                - required:
                  - recreate
                properties:
                  recreate:
                    properties:
                      something:
                        format: uint16
                        minimum: 0.0
                        type: integer
                    required:
                    - something
                    type: object
                  rolling_update:
                    properties:
                      max_surge:
                        format: uint16
                        minimum: 0.0
                        type: integer
                      max_unavailable:
                        format: uint16
                        minimum: 0.0
                        type: integer
                    required:
                    - max_surge
                    - max_unavailable
                    type: object
                type: object
            required:
            - deployment_strategy
            type: object
        required:
        - spec
        title: DeploymentConfig
        type: object
    served: true
    storage: true
    subresources: {}
I’ve developed several Kubernetes Operators using the Operator SDK in Go (which builds upon Kubebuilder).
Tag: Kubernetes
Using Rust to generate Kubernetes CRD
For the first time, I chose Rust to solve a problem. Until this, I’ve been trying to use Rust to learn the language and to rewrite existing code. But, this problem led me to Rust because my other tools wouldn’t cut it.
The question was how to represent oneof fields in Kubernetes Custom Resource Definitions (CRDs).
CRDs use OpenAPI schema and the YAML that results can be challenging to grok.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: deploymentconfigs.example.com
spec:
  group: example.com
  names:
    categories: []
    kind: DeploymentConfig
    plural: deploymentconfigs
    shortNames: []
    singular: deploymentconfig
  scope: Namespaced
  versions:
  - additionalPrinterColumns: []
    name: v1alpha1
    schema:
      openAPIV3Schema:
        description: An example schema
        properties:
          spec:
            properties:
              deployment_strategy:
                oneOf:
                - required:
                  - rolling_update
                - required:
                  - recreate
                properties:
                  recreate:
                    properties:
                      something:
                        format: uint16
                        minimum: 0.0
                        type: integer
                    required:
                    - something
                    type: object
                  rolling_update:
                    properties:
                      max_surge:
                        format: uint16
                        minimum: 0.0
                        type: integer
                      max_unavailable:
                        format: uint16
                        minimum: 0.0
                        type: integer
                    required:
                    - max_surge
                    - max_unavailable
                    type: object
                type: object
            required:
            - deployment_strategy
            type: object
        required:
        - spec
        title: DeploymentConfig
        type: object
    served: true
    storage: true
    subresources: {}
I’ve developed several Kubernetes Operators using the Operator SDK in Go (which builds upon Kubebuilder).
Using Delve to debug Go containers on Kubernetes
An interesting question on Stack overflow prompted me to understand how to use Visual Studio Code and Delve to remotely debug a Golang app running on Kubernetes (MicroK8s).
The OP is using Gin which was also new to me so the question gave me an opportunity to try out several things.
Sources
A simple healthz
handler:
package main
import (
"flag"
"log/slog"
"net/http"
"github.com/gin-gonic/gin"
)
var (
addr = flag.String("addr", "0.0.0.0:8080", "HTTP server endpoint")
)
func healthz(c *gin.Context) {
c.String(http.StatusOK, "ok")
}
func main() {
flag.Parse()
router := gin.Default()
router.GET("/fib", handler())
router.GET("healthz", healthz)
slog.Info("Server starting")
slog.Info("Server error",
"err", router.Run(*addr),
)
}
Containerfile
:
Fly Kubernetes
Interested to explore Fly Kubernetes after being accepted into the closed beta.
The folks at Fly are innovative in their technology uses and, having been a long-time Kubernetes user, I was intrigued to learn that Fly.io has implemented Kubernetes atop Fly.
My first Deployment failed:
Authentication required to access image "ghcr.io/{image}"
It was confirmed to me that FKS does not support pulling from private registries. The solution is to pull, tag and push images to registry.fly.io
but, Fly’s repository is app-specific and so you need to do some querying to grab the Fly app created by FKS (for your namespace):
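What that boils down to is something like this (a sketch; the app and image names are placeholders and I’m assuming flyctl’s apps list and auth docker subcommands):
# Find the app that FKS created for the namespace
flyctl apps list
# Allow Docker to push to Fly's registry
flyctl auth docker
# pull|tag|push the (otherwise private) image
IMAGE="ghcr.io/example/my-image:latest"
APP="fks-default-12345" # The FKS-created app name
docker pull ${IMAGE}
docker tag ${IMAGE} registry.fly.io/${APP}:latest
docker push registry.fly.io/${APP}:latest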
MicroK8s operability add-on
Spent time today yak-shaving which resulted in an unplanned migration from MicroK8s ‘prometheus’ add-on to the new and not fully-documented ‘observability’ add-on:
sudo microk8s.enable prometheus
Infer repository core for addon prometheus
DEPRECATION WARNING: 'prometheus' is deprecated and will soon be removed. Please use 'observability' instead.
...
The reason for the name change is unclear.
It’s also unclear whether there’s a difference in the primary components that are installed (I’d thought Grafana wasn’t included in ‘prometheus’); (Grafana) Loki and (Grafana) Tempo definitely weren’t included and I don’t want them either.
Kubernetes Python SDK w/ CRDs
Responded to Get Custom K8s Resource using Python and found the CustomObjectsApi
documentation unclear.
If you have a cluster and a kubeconfig file with a correctly configured current-context
, so that you can successfully:
PLURAL="checks"
kubectl get ${PLURAL} \
--all-namespaces
NOTE I’m using Ackal’s CRDs in these examples.
Then you can use the following code to access the cluster’s REST API server to enumerate its CRDs:
main.py
:
from __future__ import print_function
from kubernetes import client, config
from kubernetes.client.rest import ApiException
config.load_kube_config()
api = client.CustomObjectsApi()
# Ackal's Group|Version and some Kinds
group = 'ack.al'
version = 'v1'
plurals = ['checks','customers']
for plural in plurals:
    try:
        resp = api.list_cluster_custom_object(
            group,
            version,
            plural,
        )
        for item in resp["items"]:
            spec = item["spec"]
            print(spec)
    except ApiException as e:
        print(e)
python3 -m venv venv
source venv/bin/activate
python3 -m pip install kubernetes==26.1.0
python3 main.py
That’s all!
Routing Firestore events to GKE with Eventarc
Google announced Firestore … integration with Eventarc. Ackal uses Firestore to persist Customer and Check information and it uses Google Cloud Firestore Triggers to handle events on these document types.
Eventarc feels like the strategic future of eventing in Google Cloud and I’ve been concerned since adopting the technology that Google would abandon Google Cloud Firestore Triggers.
For this reason, when I saw last week’s announcement, I thought I should evaluate the mechanism and this blog post is a summary of that work.
Robusta KRR w/ GMP
I’ve been spending time recently optimizing Ackal’s use of Google Cloud Logging and Cloud Monitoring in posts:
- Filtering metrics w/ Google Managed Prometheus
- Kubernetes metrics, metrics everywhere
- Google Metric Diagnostics and Metric Data Ingested
Yesterday, I read that Robusta has a new open source project Kubernetes Resource Recommendations (KRR) so I took some time to evaluate it.
This post describes the changes I had to make to get KRR working with Google Managed Prometheus (GMP):
Google Metric Diagnostics and Metric Data Ingested
I’ve been on an efficiency drive with Cloud Logging and Cloud Monitoring.
With regard to Cloud Logging, I’m contemplating (!) eliminating almost all log storage. As it is, I’ve buzz-cut log storage with a _Default
sink that has comprehensive sets of NOT LOG_ID(X)
inclusion and exclusion filters. As I was doing so, I began to wonder why I need to pay for the storage of so much logging. There’s the comfort of knowing that everything you may ever need is being logged (at least for 30 days) but there are also the costs that that entails. I use logs exclusively for debugging, which got me thinking: couldn’t I just capture logs when I’m debugging (rather than all the time)? I’ve not taken that leap yet but I’m noodling on it.
Kubernetes metrics, metrics everywhere
I’ve been tinkering with ways to “unit-test” my assumptions when using cloud platforms. I recently wrote about good posts by Google describing how to achieve cost savings with Cloud Monitoring and Cloud Logging:
- How to identify and reduce costs of your Google Cloud observability in Cloud Monitoring
- Cloud Logging pricing for Cloud Admins: How to approach it & save cost
With Cloud Monitoring, I’ve restricted the prometheus.googleapis.com
metrics that are being ingested but realized I wanted to track the number of Pods (and Containers) deployed to a GKE cluster.
Kubernetes Operators
Ackal uses a Kubernetes Operator to orchestrate the lifecycle of its health checks. Ackal’s Operator is written in Go using kubebuilder
.
Yesterday, my interest was piqued by a MetalBear blog post Writing a Kubernetes Operator [in Rust]. I spent some time reimplementing one of Ackal’s CRDs (Check
) using kube-rs
and not only refreshed my Rust knowledge but learned a bunch more about Kubernetes and Operators.
While rummaging around the Kubernetes documentation, I discovered flant’s Shell-operator
and spent some time today exploring its potential.
Secure (TLS) gRPC services with LKE
NOTE
cert-manager
is a better solution to what follows.
I wrote about deploying Secure (TLS) gRPC services with Vultr Kubernetes Engine (VKE). This week, I’ve reproduced this deployment using Linode Kubernetes Engine (LKE).
Thanks to the consistency provided by Kubernetes, the Kubernetes programming is almost identical. The main differences are between the CLIs provided by these platforms. Both are good. They’re just different.
I’m going to include the linode-cli
commands I’m using in this post as I found it slightly quirkier.
Secure (TLS) gRPC services with VKE
NOTE
cert-manager
is a better solution to what follows.
I’ve a need to deploy a Vultr Kubernetes Engine (VKE) cluster on a daily basis (create and delete within a few hours) and expose (securely|TLS) a gRPC service.
I have an existing solution, Automatic Certs w/ Golang gRPC service on Compute Engine, that combines a gRPC Healthchecking service and an ACME service and I decided to reuse this.
In order for it to work, we need:
Vultr CLI and JSON output
I’ve begun exploring Vultr after the company announced a managed Kubernetes offering Vultr Kubernetes Engine (VKE).
In my brief experience, it’s a decent platform and its CLI vultr-cli
is mostly (!) good. The CLI has a limitation in that command output is text formatted and this makes it challenging to parse the output when scripting.
NOTE The Vultr developers have a branch
rewrite
that includes a solution to this problem.
Example
ID 12345678-90ab-cdef-1234-567890abcdef
LABEL test
DATE CREATED 2022-01-01T00:00:00+00:00
CLUSTER SUBNET 255.255.255.255/16
SERVICE SUBNET 255.255.255.255/12
IP 255.255.255.255
ENDPOINT 12345678-90ab-cdef-1234-567890abcdef.vultr-k8s.com
VERSION v1.23.5+3
REGION mars
STATUS pending
NODE POOLS
ID 12345678-90ab-cdef-1234-567890abcdef
DATE CREATED 2022-01-01T00:00:00+00:00
DATE UPDATED 2022-01-01T00:00:00+00:00
LABEL nodepool
TAG foo
PLAN vc2-1c-2gb
STATUS pending
NODE QUANTITY 1
AUTO SCALER false
MIN NODES 1
MAX NODES 1
NODES
ID DATE CREATED LABEL STATUS
12345678-
Until that’s available, I’m being lazy and writing very simple bash scripts to parse vultr-cli
command output as JSON. The repo is vultr-cli-format
.
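As a flavor of those scripts (a minimal sketch rather than the repo’s implementation), awk is enough to pluck single values out of the text output:
# Grab the (first) cluster ID from the text output
ID=$(vultr-cli kubernetes list \
| awk '$1=="ID" {print $2; exit}')
echo ${ID}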
Prometheus VPA Recommendations
Phew!
I was interested in learning how to Manage Resources for Containers. On the way, I learned and discovered:
- kubectl top
- Vertical Pod Autoscaler
- A (valuable) digression through PodMonitor
- kube-state-metrics
- kubectl-patch
- Created a Graph
- References
Kubernetes Resources
Visual Studio Code has begun to bug me (reasonably) to add resources
to Kubernetes manifests.
E.g.:
resources:
  limits:
    cpu: "1"
    memory: "512Mi"
I’ve been spending time with DeisLabs’ Akri and decided to determine whether Akri’s primary resources (Agent, Controller) and some of my creations, HTTP Device and Discovery, were being suitably constrained.
Krustlet on DO Managed Kubernetes
I’ve spent time this week returning to the interesting Deislabs project Krustlet. Since the last time, the bootstrapping process has been simplified using Kubernetes Bootstrap Tokens. I know this updated process works with MicroK8s. Unfortunately, I’m struggling with it on GKE and thought I’d try DigitalOcean Managed Kubernetes.
It worked first time!
In the following, we run both the Kubernetes cluster and the Krustlet Droplet on DigitalOcean but, as long as the cluster and the VM are able to communicate with one another, you should be able to run these anywhere.
Kubernetes cert-manager
I developed an admission webhook for Akri, twice (Golang, Rust). I naively followed other examples for the generation of the certificates, created a 1.20 cluster and broke that process.
I’d briefly considered using cert-manager
recently but quickly abandoned the idea thinking it would be onerous and unnecessary complexity for little-old-me. I was wrong. It’s excellent and I recommend it highly.
I won’t reproduce the v1beta1
and v1
examples from the Stackoverflow question as they should be self-explanatory. I suspect (!?) that I should not have used Kubernetes’ (API Server’s) CA for the Webhook but it could well be that I just don’t understand the correct approach.
Kubernetes Webhooks
I spent some time last week writing my first admission webhook for Kubernetes. I wrote the handler in Golang because I’m most familiar with Golang and because, as Kubernetes’ native language, I was more confident that the necessary SDKs would exist and that the documentation would likely use Golang by default. I struggled to find useful documentation and so this post is to help you (and me!) remember how to do this next time!
Kubernetes Device Plugins
I’m debugging an issue with Akri Zeroconf
protocol in which Instance environment variables are no longer (!) being surfaced within the Broker pods. In my adventures, it seemed useful to better understand how Akri works and specifically, how Akri uses Kubernetes Device Plugins.
IIUC plugins register with the Kubelet (!) via a gRPC service (Registration
) that the Kubelet exposes on a UNIX socket at /var/lib/kubelet/device-plugins/kubelet.sock
Then (!) if successful, devices should be reported by the Node’s metadata (spec) and available to be bound to Pods.
Akri
For the past couple of weeks, I’ve been playing around with Akri, a Microsoft (DeisLabs) project for building a connected edge with Kubernetes. Kubernetes, IoT, Rust (and Golang) make this all compelling to me.
Initially, I deployed an Akri End-to-End to MicroK8s on Google Compute Engine (link) and Digital Ocean (link). But I was interested to create my own example and so have proposed a very (!) simple HTTP-based protocol.
This blog summarizes my thoughts about Akri and an explanation of the HTTP protocol implementation in the hope that this helps others.
akri
I was very interested to read about Microsoft DeisLabs’ latest (rust-based) Kubernetes project: akri. If I understand it correctly, it provides a mechanism to make any (IoT) device accessible to containers running within a cluster. I need to spend more time playing around with it so that I can fully understand it. I had some problems getting the End-to-End demo running on Google Compute Engine (and then DigitalOcean droplet) instances. So, here’s a solution for both to get you going.
Accessing GCR repos from Kubernetes
Until today, I’d not accessed a Google Container Registry repo from a non-GKE Kubernetes deployment.
It turns out that it’s pretty well-documented (link) but, here’s an end-to-end example.
Assuming:
BILLING=[[YOUR-BILLING]]
PROJECT=[[YOUR-PROJECT]]
SERVER="us.gcr.io"
If not already:
gcloud projects create ${PROJECT}
gcloud beta billing projects link ${PROJECT} \
--billing-account=${BILLING}
gcloud services enable containerregistry.googleapis.com \
--project=${PROJECT}
Container Registry
IMAGE="busybox" # Or ...
docker pull ${IMAGE}
docker tag \
${IMAGE} \
${SERVER}/${PROJECT}/${IMAGE}
docker push ${SERVER}/${PROJECT}/${IMAGE}
gcloud container images list-tags ${SERVER}/${PROJECT}/${IMAGE}
Service Account
Create a service account that’s permitted to download (read-only) images from this project’s registry
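A sketch of those steps (the account name is a placeholder and roles/storage.objectViewer at the project level is broader than strictly necessary):
ACCOUNT="gcr-reader"
EMAIL="${ACCOUNT}@${PROJECT}.iam.gserviceaccount.com"
gcloud iam service-accounts create ${ACCOUNT} \
--project=${PROJECT}
gcloud projects add-iam-policy-binding ${PROJECT} \
--member=serviceAccount:${EMAIL} \
--role=roles/storage.objectViewer
gcloud iam service-accounts keys create ${PWD}/${ACCOUNT}.json \
--iam-account=${EMAIL} \
--project=${PROJECT}
# Use the key as an image pull secret in the (non-GKE) cluster
kubectl create secret docker-registry gcr-pull-secret \
--docker-server=${SERVER} \
--docker-username=_json_key \
--docker-password="$(cat ${PWD}/${ACCOUNT}.json)" \
--docker-email=${EMAIL}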
NGINX Ingress
I’ve written a couple of deployment options (Google Compute Engine; Kubernetes) for an open-source project. The Kubernetes deployment provides NodePort
and (TCP) LoadBalancer
options and I’ve been trying (unsuccessfully) to add HTTPS Load-balancing.
I should (!) try to deploy to Google Kubernetes Engine (GKE) but I’ve been using microk8s, Digital Ocean Managed Kubernetes and the Linode LKE Beta. Each of these requires an implementation of an Ingress controller. For GKE, GCP’s HTTP/S Load-balancer (GCLB) is used. But, for the other services, NGINX Ingress is a good option and so I’ve been exploring NGINX Ingress (Controller) (link). This Ingress appears to work except that I’ve been unable to get it to work with mutual TLS between the (NGINX) proxy and my backend services.
Kubernetes Engine and Free Tier
Google Cloud Platform Free Tier appears (please verify this for yourself) to provide the ability to run a(n admittedly minuscule) Kubernetes cluster for free. So, why do this? It provides a definitive Kubernetes (Engine) experience on Google Cloud Platform that you may use for learning and testing.
With Kubernetes Engine, the master node(s) and the control plane are free.
Kubernetes (i.e. Compute Engine) nodes potentially incur charges including for the VM runtime and any attached storage, snapshots etc. However, charges for these resources can be partially covered by the Free Tier.
Tag: Delve
Using Delve to debug Go containers on Kubernetes
An interesting question on Stack overflow prompted me to understand how to use Visual Studio Code and Delve to remotely debug a Golang app running on Kubernetes (MicroK8s).
The OP is using Gin which was also new to me so the question gave me an opportunity to try out several things.
Sources
A simple healthz handler:
package main

import (
    "flag"
    "log/slog"
    "net/http"

    "github.com/gin-gonic/gin"
)

var (
    addr = flag.String("addr", "0.0.0.0:8080", "HTTP server endpoint")
)

func healthz(c *gin.Context) {
    c.String(http.StatusOK, "ok")
}

func main() {
    flag.Parse()

    router := gin.Default()
    // handler() (for /fib) is defined elsewhere in the original post
    router.GET("/fib", handler())
    router.GET("/healthz", healthz)

    slog.Info("Server starting")
    slog.Info("Server error",
        "err", router.Run(*addr),
    )
}
Containerfile:
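The Containerfile itself isn’t reproduced in this excerpt. As a rough sketch of the overall remote-debugging flow, with illustrative binary, port and Deployment names:
# Build without optimizations or inlining so Delve can map source accurately
go build -gcflags="all=-N -l" -o app .
# Inside the container, run the binary under a headless Delve server
dlv exec ./app --headless --listen=:2345 --api-version=2 --accept-multiclient
# Locally, forward the Delve port from the cluster and attach VS Code to localhost:2345
kubectl port-forward deployment/app 2345:2345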
Tag: Dlv
Using Delve to debug Go containers on Kubernetes
An interesting question on Stack overflow prompted me to understand how to use Visual Studio Code and Delve to remotely debug a Golang app running on Kubernetes (MicroK8s).
The OP is using Gin which was also new to me so the question gave me an opportunity to try out several things.
Sources
A simple healthz handler:
package main

import (
    "flag"
    "log/slog"
    "net/http"

    "github.com/gin-gonic/gin"
)

var (
    addr = flag.String("addr", "0.0.0.0:8080", "HTTP server endpoint")
)

func healthz(c *gin.Context) {
    c.String(http.StatusOK, "ok")
}

func main() {
    flag.Parse()

    router := gin.Default()
    // handler() (for /fib) is defined elsewhere in the original post
    router.GET("/fib", handler())
    router.GET("/healthz", healthz)

    slog.Info("Server starting")
    slog.Info("Server error",
        "err", router.Run(*addr),
    )
}
Containerfile:
Tag: Gin
Using Delve to debug Go containers on Kubernetes
An interesting question on Stack overflow prompted me to understand how to use Visual Studio Code and Delve to remotely debug a Golang app running on Kubernetes (MicroK8s).
The OP is using Gin which was also new to me so the question gave me an opportunity to try out several things.
Sources
A simple healthz handler:
package main

import (
    "flag"
    "log/slog"
    "net/http"

    "github.com/gin-gonic/gin"
)

var (
    addr = flag.String("addr", "0.0.0.0:8080", "HTTP server endpoint")
)

func healthz(c *gin.Context) {
    c.String(http.StatusOK, "ok")
}

func main() {
    flag.Parse()

    router := gin.Default()
    // handler() (for /fib) is defined elsewhere in the original post
    router.GET("/fib", handler())
    router.GET("/healthz", healthz)

    slog.Info("Server starting")
    slog.Info("Server error",
        "err", router.Run(*addr),
    )
}
Containerfile:
Tag: JSON-RPC
Using Delve to debug Go containers on Kubernetes
An interesting question on Stack overflow prompted me to understand how to use Visual Studio Code and Delve to remotely debug a Golang app running on Kubernetes (MicroK8s).
The OP is using Gin which was also new to me so the question gave me an opportunity to try out several things.
Sources
A simple healthz handler:
package main

import (
    "flag"
    "log/slog"
    "net/http"

    "github.com/gin-gonic/gin"
)

var (
    addr = flag.String("addr", "0.0.0.0:8080", "HTTP server endpoint")
)

func healthz(c *gin.Context) {
    c.String(http.StatusOK, "ok")
}

func main() {
    flag.Parse()

    router := gin.Default()
    // handler() (for /fib) is defined elsewhere in the original post
    router.GET("/fib", handler())
    router.GET("/healthz", healthz)

    slog.Info("Server starting")
    slog.Info("Server error",
        "err", router.Run(*addr),
    )
}
Containerfile:
Tag: MicroK8s
Using Delve to debug Go containers on Kubernetes
An interesting question on Stack overflow prompted me to understand how to use Visual Studio Code and Delve to remotely debug a Golang app running on Kubernetes (MicroK8s).
The OP is using Gin which was also new to me so the question gave me an opportunity to try out several things.
Sources
A simple healthz handler:
package main

import (
    "flag"
    "log/slog"
    "net/http"

    "github.com/gin-gonic/gin"
)

var (
    addr = flag.String("addr", "0.0.0.0:8080", "HTTP server endpoint")
)

func healthz(c *gin.Context) {
    c.String(http.StatusOK, "ok")
}

func main() {
    flag.Parse()

    router := gin.Default()
    // handler() (for /fib) is defined elsewhere in the original post
    router.GET("/fib", handler())
    router.GET("/healthz", healthz)

    slog.Info("Server starting")
    slog.Info("Server error",
        "err", router.Run(*addr),
    )
}
Containerfile:
MicroK8s operability add-on
Spent time today yak-shaving which resulted in an unplanned migration from MicroK8s ‘prometheus’ add-on to the new and not fully-documented ‘observability’ add-on:
sudo microk8s.enable prometheus
Infer repository core for addon prometheus
DEPRECATION WARNING: 'prometheus' is deprecated and will soon be removed. Please use 'observability' instead.
...
The reason for the name change is unclear.
It’s also unclear whether the set of primary components differs (I’d thought Grafana wasn’t included in ‘prometheus’); (Grafana) Loki and (Grafana) Tempo definitely weren’t included before, and I don’t want them either.
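For the record, the migration itself amounts to the following; the disable step assumes the deprecated add-on was previously enabled:
sudo microk8s.disable prometheus
sudo microk8s.enable observability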
Prometheus VPA Recommendations
Phew!
I was interested in learning how to Manage Resources for Containers. On the way, I learned and discovered:
- kubectl top
- Vertical Pod Autoscaler
- A (valuable) digression through PodMonitor
- kube-state-metrics
- kubectl patch
- Created a Graph
- References
Kubernetes Resources
Visual Studio Code has begun to bug me (reasonably) to add resources to Kubernetes manifests.
E.g.:
resources:
  limits:
    cpu: "1"
    memory: "512Mi"
I’ve been spending time with DeisLabs’ Akri and decided to determine whether Akri’s primary resources (Agent, Controller) and some of my creations (HTTP Device and Discovery) were being suitably constrained.
Dapr
It’s a good name, I read it as “dapper” but I frequently type “darp” :-(
Was interested to read that Dapr is now v1.0 and decided to check it out. I was initially confused between Dapr and service mesh functionality. But, having used Dapr, it appears to be more focused in aiding the development of (cloud-native) (distributed) apps by providing developers with abstractions for e.g. service discovery, eventing, observability whereas service meshes feel (!) more oriented towards simplifying the deployment of existing apps. Both use the concept of proxies, deployed alongside app components (as sidecars on Kubernetes) to provide their functionality to apps.
Kubernetes Device Plugins
I’m debugging an issue with the Akri Zeroconf protocol in which Instance environment variables are no longer (!) being surfaced within the Broker pods. In my adventures, it seemed useful to better understand how Akri works and specifically, how Akri uses Kubernetes Device Plugins.
IIUC plugins register with the Kubelet (!) via a gRPC service (Registration) that the Kubelet exposes on a UNIX socket at /var/lib/kubelet/device-plugins/kubelet.sock
Then (!) if successful, devices should be reported by the Node’s metadata (spec) and available to be bound to Pods.
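A quick way to confirm that the registration took is to inspect the Node’s allocatable resources; NODE (and the resource name your plugin advertises) is whatever applies in your cluster:
# Devices advertised by a registered plugin show up under capacity/allocatable
kubectl get node ${NODE} \
--output=jsonpath='{.status.allocatable}'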
Akri
For the past couple of weeks, I’ve been playing around with Akri, a Microsoft (DeisLabs) project for building a connected edge with Kubernetes. Kubernetes, IoT, Rust (and Golang) make this all compelling to me.
Initially, I deployed an Akri End-to-End to MicroK8s on Google Compute Engine (link) and Digital Ocean (link). But I was interested to create my own example and so have proposed a very (!) simple HTTP-based protocol.
This blog summarizes my thoughts about Akri and an explanation of the HTTP protocol implementation in the hope that this helps others.
akri
I was very interested to read about Microsoft DeisLabs’ latest (rust-based) Kubernetes project: akri. If I understand it correctly, it provides a mechanism to make any (IoT) device accessible to containers running within a cluster. I need to spend more time playing around with it so that I can fully understand it. I had some problems getting the End-to-End demo running on Google Compute Engine (and then DigitalOcean droplet) instances. So, here’s the solution both ways to get you going.
Tag: Dotnet
Google Cloud Translation w/ gRPC 3 ways
General
You’ll need a Google Cloud project with Cloud Translation (translate.googleapis.com) enabled and a Service Account (and key) with suitable permissions in order to run the following.
BILLING="..." # Your Billing ID (gcloud billing accounts list)
PROJECT="..." # Your Project ID
ACCOUNT="tester"
EMAIL="${ACCOUNT}@${PROJECT}.iam.gserviceaccount.com"
ROLES=(
"roles/cloudtranslate.user"
"roles/serviceusage.serviceUsageConsumer"
)
# Create Project
gcloud projects create ${PROJECT}
# Associate Project with your Billing Account
gcloud billing accounts link ${PROJECT} \
--billing-account=${BILLING}
# Enable Cloud Translation
gcloud services enable translate.googleapis.com \
--project=${PROJECT}
# Create Service Account
gcloud iam service-accounts create ${ACCOUNT} \
--project=${PROJECT}
# Create Service Account Key
gcloud iam service-accounts keys create ${PWD}/${ACCOUNT}.json \
--iam-account=${EMAIL} \
--project=${PROJECT}
# Update Project IAM permissions
for ROLE in "${ROLES[@]}"
do
gcloud projects add-iam-policy-binding ${PROJECT} \
--member=serviceAccount:${EMAIL} \
--role=${ROLE}
done
For the code, you’ll need to install protoc and preferably have it in your path.
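With the service enabled, one quick way to smoke-test the API from the shell (not necessarily one of the post’s three ways) is gRPCurl against the googleapis protos; GOOGLEAPIS below is an assumed path to a clone of github.com/googleapis/googleapis, and the access token here comes from gcloud for brevity:
GOOGLEAPIS="${HOME}/googleapis" # illustrative path to a googleapis checkout
grpcurl \
-import-path ${GOOGLEAPIS} \
-proto google/cloud/translate/v3/translation_service.proto \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "X-Goog-User-Project: ${PROJECT}" \
-d "{\"parent\":\"projects/${PROJECT}\",\"contents\":[\"Hello\"],\"target_language_code\":\"fr\"}" \
translate.googleapis.com:443 \
google.cloud.translation.v3.TranslationService/TranslateText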
Tag: Google
Google Cloud Translation w/ gRPC 3 ways
General
You’ll need a Google Cloud project with Cloud Translation (translate.googleapis.com) enabled and a Service Account (and key) with suitable permissions in order to run the following.
BILLING="..." # Your Billing ID (gcloud billing accounts list)
PROJECT="..." # Your Project ID
ACCOUNT="tester"
EMAIL="${ACCOUNT}@${PROJECT}.iam.gserviceaccount.com"
ROLES=(
"roles/cloudtranslate.user"
"roles/serviceusage.serviceUsageConsumer"
)
# Create Project
gcloud projects create ${PROJECT}
# Associate Project with your Billing Account
gcloud billing accounts link ${PROJECT} \
--billing-account=${BILLING}
# Enable Cloud Translation
gcloud services enable translate.googleapis.com \
--project=${PROJECT}
# Create Service Account
gcloud iam service-accounts create ${ACCOUNT} \
--project=${PROJECT}
# Create Service Account Key
gcloud iam service-accounts keys create ${PWD}/${ACCOUNT}.json \
--iam-account=${EMAIL} \
--project=${PROJECT}
# Update Project IAM permissions
for ROLE in "${ROLES[@]}"
do
gcloud projects add-iam-policy-binding ${PROJECT} \
--member=serviceAccount:${EMAIL} \
--role=${ROLE}
done
For the code, you’ll need to install protoc and preferably have it in your path.
Filtering metrics w/ Google Managed Prometheus
Google has published two, very good blog posts on cost management:
- How to identify and reduce costs of your Google Cloud observability in Cloud Monitoring
- Cloud Logging pricing for Cloud Admins: How to approach it & save cost
This post is about my application cost reductions for Cloud Monitoring for Ackal.
I’m pleased with Google Cloud Managed Service for Prometheus (hereinafter GMP). I’ve a strong preference for letting service providers run components of Ackal that I consider important but non-differentiating.
Maintaining Container Images
As I contemplate moving my “thing” into production, I’m anticipating aspects of the application that need maintenance and how this can be automated.
I’d been negligent in the maintenance of some of my container images.
I’m using mostly Go and some Rust as the basis of static(ally-compiled) binaries that run in these containers but not every container has a base image of scratch. scratch is the only base image that doesn’t change and thus the only base image that doesn’t require that container images built FROM it be maintained.
Delegate domain-wide authority using Golang
I’d not used Google’s Domain-wide Delegation from Golang and struggled to find example code.
Google provides Java and Python samples.
Google has a myriad packages implementing its OAuth security and it’s always daunting trying to determine which one to use.
As it happens, I backed into the solution through client.Options
// Assumed imports for this snippet:
//   "context"
//   "golang.org/x/oauth2/google"
//   "google.golang.org/api/option"
//   admin "google.golang.org/api/admin/directory/v1"
ctx := context.Background()
// Google Workspace APIs don't use IAM; they use OAuth scopes.
// Scopes used here must be reflected in the scopes on the
// Google Workspace Domain-wide Delegation client
scopes := []string{ ... }
// Delegates on behalf of this Google Workspace user
subject := "a@google-workspace-email.com"
creds, _ := google.FindDefaultCredentialsWithParams(
ctx,
google.CredentialsParams{
Scopes: scopes,
Subject: subject,
},
)
opts := option.WithCredentials(creds)
service, _ := admin.NewService(ctx, opts)
In this case NewService applies to Google’s Golang Admin SDK API although the pattern of NewService(ctx) or NewService(ctx, opts), where opts is an option.ClientOption, is consistent across Google’s Golang libraries.
Firestore Export & Import
I’m using Firestore to maintain state in my “thing”.
In an attempt to ensure that I’m able to restore the database, I run (Cloud Scheduler) scheduled backups (see Automating Scheduled Firestore Exports) and I’ve been testing imports to ensure that the process works.
It does.
I thought I’d document an important but subtle consideration with Firestore exports (which I’d not initially understood).
Google facilitates that backup process with the sibling commands:
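Most likely the pair in question; the bucket and export prefix here are illustrative:
# Export (backup) to a Cloud Storage bucket
gcloud firestore export gs://${BUCKET} --project=${PROJECT}
# Import (restore) from a previously exported, timestamped prefix
gcloud firestore import gs://${BUCKET}/${EXPORT_PREFIX} --project=${PROJECT}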
Automatic Certs w/ Golang gRPC service on Compute Engine
I needed to deploy a healthcheck-enabled gRPC TLS-enabled service. Fortunately, most (all?) of the SDKs include an implementation, e.g. Golang has grpc-go/health.
I learned in my travels that:
- DigitalOcean [App] platform does not (link) work with TLS-based gRPC apps.
- Fly has a regression (link) that breaks gRPC
So, I resorted to Google Cloud Platform (GCP). Although Cloud Run would be well-suited to running the gRPC app, it uses a proxy|sidecar to provision a cert for the app and I wanted to be able to (easily use a custom domain) and give myself a somewhat general-purpose solution.
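Once such a service is up, one way to exercise its health endpoint from the shell; HOST is illustrative and, for gRPCurl, this assumes server reflection is enabled:
grpcurl ${HOST}:443 grpc.health.v1.Health/Check
# or, using the dedicated probe binary:
grpc_health_probe -addr=${HOST}:443 -tls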
Consul discovers Google Cloud Run
I’ve written a basic discoverer of Google Cloud Run services. This is for a project and it extends work done in some previous posts to Multiplex gRPC and Prometheus with Cloud Run and to use Consul for Prometheus service discovery.
This solution:
- Accepts a set of Google Cloud Platform (GCP) projects
- Trawls them for Cloud Run services
- Assumes that the services expose Prometheus metrics on :443/metrics
- Relabels the services
- Surfaces any discovered Cloud Run services’ metrics in Prometheus
You’ll need Docker and Docker Compose.
Multiplexing gRPC and HTTP (Prometheus) endpoints with Cloud Run
Google Cloud Run is useful but, each service is limited to exposing a single port. This caused me problems with a gRPC service that serves (non-gRPC) Prometheus metrics because customarily, you would serve gRPC on one port and the Prometheus metrics on another.
Fortunately, cmux provides a solution by providing a mechanism that multiplexes both services (gRPC and HTTP) on a single port!
TL;DR See the cmux Limitations and use:
grpcl := m.MatchWithWriters( cmux.HTTP2MatchHeaderFieldSendSettings("content-type", "application/grpc"))
Extending the example from the cmux repo:
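The extended Go example isn’t reproduced in this excerpt but, once the multiplexed server is running, both protocols can be exercised against the single port, e.g. (HOST is illustrative):
# gRPC over the shared port (assumes server reflection; otherwise pass -proto)
grpcurl ${HOST}:443 list
# Plain HTTP (Prometheus metrics) over the same port
curl --silent https://${HOST}/metrics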
Firestore Golang Timestamps & Merging
I’m using Google’s Golang SDK for Firestore. The experience is excellent and I’m quickly becoming a fan of Firestore. However, as a Golang Firestore developer, I’m feeling less loved and some of the concepts in the database were causing me a conundrum.
I’m still not entirely certain that I have Timestamps nailed but… I learned an important lesson on the auto-creation of Timestamps in documents and how to retain these values.
Cloud Firestore Triggers in Golang
I was pleased to discover that Google provides a non-Node.JS mechanism to subscribe to and act upon Firestore triggers, Google Cloud Firestore Triggers. I’ve nothing against Node.JS but, for the project I’m developing, everything else is written in Golang. It’s good to keep it all in one language.
I’m perplexed that Cloud Functions still (!) only supports Go 1.13 (03-Sep-2019). Even Go 1.14 (25-Feb-2020) was released pre-pandemic and we’re now running on 1.16. Come on Google!
Using Golang with the Firestore Emulator
This works great but it wasn’t clearly documented for non-Firebase users. I assume it will work, as well, for any of the client libraries (not just Golang).
Assuming you have some (Golang) code (in this case using the Google Cloud Client Library) that interacts with a Firestore database. Something of the form:
package main

import (
    "context"
    "crypto/sha256"
    "fmt"
    "log"
    "os"
    "time"

    "cloud.google.com/go/firestore"
)

func hash(s string) string {
    h := sha256.New()
    h.Write([]byte(s))
    return fmt.Sprintf("%x", h.Sum(nil))
}

type Dog struct {
    Name    string                 `firestore:"name"`
    Age     int                    `firestore:"age"`
    Human   *firestore.DocumentRef `firestore:"human"`
    Created time.Time              `firestore:"created"`
}

func NewDog(name string, age int, human *firestore.DocumentRef) Dog {
    return Dog{
        Name:    name,
        Age:     age,
        Human:   human,
        Created: time.Now(),
    }
}

func (d *Dog) ID() string {
    return hash(d.Name)
}

type Human struct {
    Name string `firestore:"name"`
}

func (h *Human) ID() string {
    return hash(h.Name)
}

func main() {
    ctx := context.Background()
    project := os.Getenv("PROJECT")

    client, err := firestore.NewClient(ctx, project)
    if value := os.Getenv("FIRESTORE_EMULATOR_HOST"); value != "" {
        log.Printf("Using Firestore Emulator: %s", value)
    }
    if err != nil {
        log.Fatal(err)
    }
    defer client.Close()

    me := Human{
        Name: "me",
    }
    meDocRef := client.Collection("humans").Doc(me.ID())
    if _, err := meDocRef.Set(ctx, me); err != nil {
        log.Fatal(err)
    }

    freddie := NewDog("Freddie", 2, meDocRef)
    freddieDocRef := client.Collection("dogs").Doc(freddie.ID())
    if _, err := freddieDocRef.Set(ctx, freddie); err != nil {
        log.Fatal(err)
    }
}
Then you can interact instead with the Firestore Emulator.
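A minimal way to wire that up locally; the port and project ID are arbitrary (older SDKs may need gcloud beta emulators instead):
# Terminal 1: start the emulator
gcloud emulators firestore start --host-port=localhost:8080
# Terminal 2: point the client library at the emulator and run the code
export FIRESTORE_EMULATOR_HOST=localhost:8080
PROJECT=test-project go run .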
Programmatically deploying Cloud Run services (Golang|Python)
Phew! Programmatically deploying Cloud Run services should be easy but I didn’t find it so.
My issues were that the Cloud Run Admin (!) API is poorly documented and it uses non-standard endpoints (thanks Sal!). Here, for others who may struggle with this, is how I got this to work.
Goal
Programmatically (have Golang, Python, want Rust) deploy services to Cloud Run.
i.e. achieve this:
gcloud run deploy ${NAME} \
--image=${IMAGE} \
--platform=managed \
--no-allow-unauthenticated \
--region=${REGION} \
--project=${PROJECT}
TRICK: --log-http is your friend
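For example, re-running the deploy (or any gcloud run command) with HTTP tracing reveals the endpoints and request bodies that the CLI actually uses:
gcloud run deploy ${NAME} \
--image=${IMAGE} \
--platform=managed \
--no-allow-unauthenticated \
--region=${REGION} \
--project=${PROJECT} \
--log-http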
webmention
Let’s see if this works!?
I’ve added the following link to this site’s baseof.html so that it is now rendered for each page:
<link
href="https://us-central1-webmention.cloudfunctions.net/webmention"
rel="webmention" />
I discovered indieweb yesterday reading about webmention. Who knows what got me to webmention!?
The principles of both are sound. Instead of relying on come-go behemoths to run our digital world, indieweb seeks to provide technologies that enable us to achieve things without them. webmention is a cross-walled-garden mechanism to perform an evolved form of pingbacks. If X references one of Y’s posts, X’s server notifies Y’s server during publication through a webmention service and, Y can then decide to add reference counts of copies of the referring link to their page. Clever.
Hugo and Google Cloud Storage
I’m using Hugo as a static site generator for this blog. I’m using Firebase (for free) to host lefsilver.
I have other domains that I want to promote and decided to use Google Cloud Storage buckets for these sites. Using Google Cloud Storage for Hosting a static website and using Hugo to deploy sites to Google Cloud Storage (GCS) are documented but, I didn’t find a location where this is combined into a single tutorial and I wanted to add an explanation for ensuring your sites are included in Google’s and Bing’s search indexes.
Actions SDK Conversational Quickstart
Google’s tutorial didn’t work for me.
In this post, I’ll help you get this working.
https://developers.google.com/assistant/conversational/quickstart
Create and set up a project
This mostly works.
I recommend using the Actions Console as described to create the project.
I chose “Custom” and “Blank Project”
You need not enable Actions API as this is done automatically:
For the console work, I’m going to use Google’s excellent Cloud Shell. You may access this through the browser or through a terminal:
WASM Cloud Functions
Following on from waPC & Protobufs and a question on Stack Overflow about Cloud Functions, I was compelled to try running WASM on Cloud Functions, no JavaScript required.
I wanted to reuse the WASM waPC functions that I’d written in Rust as described in the other post. Cloud Functions does not (yet!?) provide a Rust runtime and so I’m using the waPC Host for Go in this example.
It works!
PARAMS=$(printf '{"a":{"real":39,"imag":3},"b":{"real":39,"imag":3}}' | base64)
TOKEN=$(gcloud auth print-identity-token)
echo "{
\"filename\":\"complex.wasm\",
\"function\":\"c:mul\",
\"params\":\"${PARAMS}\"
}" |\
curl \
--silent \
--request POST \
--header "Content-Type: application/json" \
--header "Authorization: Bearer ${TOKEN}" \
--data @- \
https://${REGION}-${PROJECT}.cloudfunctions.net/invoker
yields (correctly):
Rust Crate Transparency && Rust SDK for Google Trillian
I’m noodling the utility of a Transparency solution for Rust Crates. When developers push crates to Cargo, a bunch of metadata is associated with the crate, e.g. protobuf. As with Golang Modules, Python packages on PyPi etc., there appears to be utility in making tamperproof recordings of these publications. Then, other developers may confirm that a crate pulled from crates.io is highly unlikely to have been changed.
On Linux, Cargo stores downloaded crates under ${HOME}/.cargo/registry. In the case of the latest version (2.12.0) of protobuf, on my machine, I have:
Google Trillian on Cloud Run
I’ve written previously (Google Trillian for Noobs) about Google’s interesting project Trillian and about some of the “personalities” (e.g. PyPi Transparency) that I’ve built using it.
Having gone slightly cra-cra on Cloud Run and gRPC this week with Golang gRPC Cloud Run and gRPC, Cloud Run & Endpoints, I thought it’d be fun to deploy Trillian and a personality to Cloud Run.
It mostly (!) works :-)
At the end of the post, I’ve summarized creating a Cloud SQL instance to host the Trillian data(base).
gRPC, Cloud Run & Endpoints
<3 Google but there’s quite often an assumption that we’re all sitting around the engineering table and, of course, we’re not.
Cloud Endpoints is a powerful offering but – IMO – it’s super confusing to understand and complex to deploy.
If you’re familiar with the motivations behind service meshes (e.g. Istio), Cloud Endpoints fits in a similar niche (“neesh” or “nitch”?). The underlying ambition is that developers can take existing code and, by adding a proxy (or sidecar), gain general-purpose abstractions for security, logging etc.
Golang gRPC Cloud Run
Update: 2020-03-24: Since writing this post, I’ve contributed Golang and Rust samples to Google’s project. I recommend you start there.
Google explained how to run gRPC servers with Cloud Run. The examples are good but cover only Python and Node.JS:
Missing Golang…. until now ;-)
I had problems with 1.14 and so I’m using 1.13.
Project structure
I’ll tidy up my repo but the code may be found:
Accessing GCR repos from Kubernetes
Until today, I’d not accessed a Google Container Registry repo from a non-GKE Kubernetes deployment.
It turns out that it’s pretty well-documented (link) but, here’s an end-end example.
Assuming:
BILLING=[[YOUR-BILLING]]
PROJECT=[[YOUR-PROJECT]]
SERVER="us.gcr.io"
If not already:
gcloud projects create ${PROJECT}
gcloud beta billing projects link ${PROJECT} \
--billing-account=${BILLING}
gcloud services enable containerregistry.googleapis.com \
--project=${PROJECT}
Container Registry
IMAGE="busybox" # Or ...
docker pull ${IMAGE}
docker tag \
${IMAGE} \
${SERVER}/${PROJECT}/${IMAGE}
docker push ${SERVER}/${PROJECT}/${IMAGE}
gcloud container images list-tags ${SERVER}/${PROJECT}/${IMAGE}
Service Account
Create a service account that’s permitted to download (read-only) images from this project’s registry
Cloud Build wishlist: Mountable Golang Modules Proxy
I think it would be valuable if Google were to provide, in Cloud Build, mountable volumes of package registries (e.g. Go Modules; PyPi; Maven; NPM etc.).
Google provides a mirror of a subset of Docker Hub. This confers several benefits: Google’s imprimatur; speed (latency); bandwidth; and convenience.
The same benefits would apply to package registries.
In the meantime, there’s a hacky way to gain some of the benefits of these when using Cloud Build.
In the following example, I’ll show an approach using Golang Modules and Google’s module proxy aka proxy.golang.org.
Tag: Translation
Google Cloud Translation w/ gRPC 3 ways
General
You’ll need a Google Cloud project with Cloud Translation (translate.googleapis.com) enabled and a Service Account (and key) with suitable permissions in order to run the following.
BILLING="..." # Your Billing ID (gcloud billing accounts list)
PROJECT="..." # Your Project ID
ACCOUNT="tester"
EMAIL="${ACCOUNT}@${PROJECT}.iam.gserviceaccount.com"
ROLES=(
"roles/cloudtranslate.user"
"roles/serviceusage.serviceUsageConsumer"
)
# Create Project
gcloud projects create ${PROJECT}
# Associate Project with your Billing Account
gcloud billing accounts link ${PROJECT} \
--billing-account=${BILLING}
# Enable Cloud Translation
gcloud services enable translate.googleapis.com \
--project=${PROJECT}
# Create Service Account
gcloud iam service-accounts create ${ACCOUNT} \
--project=${PROJECT}
# Create Service Account Key
gcloud iam service-accounts keys create ${PWD}/${ACCOUNT}.json \
--iam-account=${EMAIL} \
--project=${PROJECT}
# Update Project IAM permissions
for ROLE in "${ROLES[@]}"
do
gcloud projects add-iam-policy-binding ${PROJECT} \
--member=serviceAccount:${EMAIL} \
--role=${ROLE}
done
For the code, you’ll need to install protoc and preferably have it in your path.
Tag: Eventarc
Google Cloud Events protobufs and SDKs
I’ve written before about Ackal’s use of Firestore and subscribing to Firestore document CRUD events:
- Routing Firestore events to GKE with Eventarc
- Cloud Firestore Triggers in Golang using Firestore triggers
I find Google’s Eventarc documentation to be confusing and, in typical Google fashion, even though open-sourced, you often need to do some legwork to find relevant sources, viz:
- Google’s Protobufs for Eventarc (using CloudEvents): google-cloudevents
- Convenience, language-specific types generated from the above (since you can generate these using protoc), e.g. google-cloudevents-go; google-cloudevents-python etc.

1 – IIUC Eventarc is the Google service. It carries Google Events that are CloudEvents. These are defined by protocol buffers schemas.
Routing Firestore events to GKE with Eventarc
Google announced Firestore … integration with Eventarc. Ackal uses Firestore to persist Customer and Check information and it uses Google Cloud Firestore Triggers to handle events on these document types.
Eventarc feels like the strategic future of eventing in Google Cloud and I’ve been concerned since adopting the technology that Google would abandon Google Cloud Firestore Triggers.
For this reason, when I saw last week’s announcement, I thought I should evaluate the mechanism and this blog post is a summary of that work.
Tag: Firestore
Google Cloud Events protobufs and SDKs
I’ve written before about Ackal’s use of Firestore and subscribing to Firestore document CRUD events:
- Routing Firestore events to GKE with Eventarc
- Cloud Firestore Triggers in Golang using Firestore triggers
I find Google’s Eventarc documentation to be confusing and, in typical Google fashion, even though open-sourced, you often need to do some legwork to find relevant sources, viz:
- Google’s Protobufs for Eventarc (using CloudEvents): google-cloudevents
- Convenience, language-specific types generated from the above (since you can generate these using protoc), e.g. google-cloudevents-go; google-cloudevents-python etc.

1 – IIUC Eventarc is the Google service. It carries Google Events that are CloudEvents. These are defined by protocol buffers schemas.
Routing Firestore events to GKE with Eventarc
Google announced Firestore … integration with Eventarc. Ackal uses Firestore to persist Customer and Check information and it uses Google Cloud Firestore Triggers to handle events on these document types.
Eventarc feels like the strategic future of eventing in Google Cloud and I’ve been concerned since adopting the technology that Google would abandon Google Cloud Firestore Triggers.
For this reason, when I saw last week’s announcement, I thought I should evaluate the mechanism and this blog post is a summary of that work.
The curious cases of the `deleted:serviceaccount`
While testing Firestore export and import yesterday and checking the IAM permissions on a Cloud Storage Bucket, I noticed some Member (member) values (I think Google refers to these as Principals) were logical but unfamiliar to me:
deleted:serviceAccount:{email}?uid={uid}
I was using gsutil iam get gs://${BUCKET} because I’d realized (and this is another useful lesson) that, as I’ve been creating daily test projects, I’ve been binding each project’s Firestore Service Account (service-{project-number}@gcp-sa-firestore.iam.gserviceaccount.com) to a Bucket owned by another Project but I hadn’t been deleting the binding when I deleted the Project.
Automating Scheduled Firestore Exports
For my “thing”, I use Firestore to persist state. I like Firestore a lot and, having been around Google for almost (!) a decade, I much prefer it to Datastore.
Firestore has a managed export|import service and I use this to backup Firestore collections|documents.
I’d been doing backups manually (using gcloud) and decided today to take the plunge and use Cloud Scheduler for the first time. I’d been reluctant to do this until now because I’d assumed incorrectly that I’d need to write a wrapping service to invoke the export.
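For reference, the manual export is a one-liner and a Cloud Scheduler HTTP job targeting the Firestore export endpoint is roughly of the following shape; the schedule, bucket and service account here are illustrative and not my exact configuration (a --location may also be required depending on your gcloud defaults):
gcloud firestore export gs://${BUCKET} --project=${PROJECT}

gcloud scheduler jobs create http firestore-export \
--project=${PROJECT} \
--schedule="0 6 * * *" \
--uri="https://firestore.googleapis.com/v1/projects/${PROJECT}/databases/(default):exportDocuments" \
--http-method=POST \
--message-body="{\"outputUriPrefix\":\"gs://${BUCKET}\"}" \
--oauth-service-account-email=${ROBOT}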
Firestore Golang Timestamps & Merging
I’m using Google’s Golang SDK for Firestore. The experience is excellent and I’m quickly becoming a fan of Firestore. However, as a Golang Firestore developer, I’m feeling less loved and some of the concepts in the database were causing me a conundrum.
I’m still not entirely certain that I have Timestamps nailed but… I learned an important lesson on the auto-creation of Timestamps in documents and how to retain these values.
Cloud Firestore Triggers in Golang
I was pleased to discover that Google provides a non-Node.JS mechanism to subscribe to and act upon Firestore triggers, Google Cloud Firestore Triggers. I’ve nothing against Node.JS but, for the project I’m developing, everything else is written in Golang. It’s good to keep it all in one language.
I’m perplexed that Cloud Functions still (!) only supports Go 1.13 (03-Sep-2019). Even Go 1.14 (25-Feb-2020) was released pre-pandemic and we’re now running on 1.16. Come on Google!
Using Golang with the Firestore Emulator
This works great but it wasn’t clearly documented for non-Firebase users. I assume it will work, as well, for any of the client libraries (not just Golang).
Assuming you have some (Golang) code (in this case using the Google Cloud Client Library) that interacts with a Firestore database. Something of the form:
package main

import (
    "context"
    "crypto/sha256"
    "fmt"
    "log"
    "os"
    "time"

    "cloud.google.com/go/firestore"
)

func hash(s string) string {
    h := sha256.New()
    h.Write([]byte(s))
    return fmt.Sprintf("%x", h.Sum(nil))
}

type Dog struct {
    Name    string                 `firestore:"name"`
    Age     int                    `firestore:"age"`
    Human   *firestore.DocumentRef `firestore:"human"`
    Created time.Time              `firestore:"created"`
}

func NewDog(name string, age int, human *firestore.DocumentRef) Dog {
    return Dog{
        Name:    name,
        Age:     age,
        Human:   human,
        Created: time.Now(),
    }
}

func (d *Dog) ID() string {
    return hash(d.Name)
}

type Human struct {
    Name string `firestore:"name"`
}

func (h *Human) ID() string {
    return hash(h.Name)
}

func main() {
    ctx := context.Background()
    project := os.Getenv("PROJECT")

    client, err := firestore.NewClient(ctx, project)
    if value := os.Getenv("FIRESTORE_EMULATOR_HOST"); value != "" {
        log.Printf("Using Firestore Emulator: %s", value)
    }
    if err != nil {
        log.Fatal(err)
    }
    defer client.Close()

    me := Human{
        Name: "me",
    }
    meDocRef := client.Collection("humans").Doc(me.ID())
    if _, err := meDocRef.Set(ctx, me); err != nil {
        log.Fatal(err)
    }

    freddie := NewDog("Freddie", 2, meDocRef)
    freddieDocRef := client.Collection("dogs").Doc(freddie.ID())
    if _, err := freddieDocRef.Set(ctx, freddie); err != nil {
        log.Fatal(err)
    }
}
Then you can interact instead with the Firestore Emulator.
Tag: Prost
Google Cloud Events protobufs and SDKs
I’ve written before about Ackal’s use of Firestore and subscribing to Firestore document CRUD events:
- Routing Firestore events to GKE with Eventarc
- Cloud Firestore Triggers in Golang using Firestore triggers
I find Google’s Eventarc documentation to be confusing and, in typical Google fashion, even though open-sourced, you often need to do some legwork to find relevant sources, viz:
- Google’s Protobufs for Eventarc (using CloudEvents): google-cloudevents
- Convenience, language-specific types generated from the above (since you can generate these using protoc), e.g. google-cloudevents-go; google-cloudevents-python etc.

1 – IIUC Eventarc is the Google service. It carries Google Events that are CloudEvents. These are defined by protocol buffers schemas.
Prost! Tonic w/ a dash of JSON
I naively (!) began exploring JSON marshaling of Protobufs in rust. Other protobuf language SDKs include JSON marshaling making the process straightforward. I was to learn that, in rust, it’s not so simple. Unfortunately, for me, this continues to discourage my further use of rust (rust is just hard).
My goal was to marshal an arbitrary protocol buffer message that included a oneof feature. I was unable to JSON marshal the rust generated by tonic for such a message.
Tag: Protobufs
Google Cloud Events protobufs and SDKs
I’ve written before about Ackal’s use of Firestore and subscribing to Firestore document CRUD events:
- Routing Firestore events to GKE with Eventarc
- Cloud Firestore Triggers in Golang using Firestore triggers
I find Google’s Eventarc documentation to be confusing and, in typical Google fashion, even though open-sourced, you often need to do some legwork to find relevant sources, viz:
- Google’s Protobufs for Eventarc (using CloudEvents): google-cloudevents
- Convenience, language-specific types generated from the above (since you can generate these using protoc), e.g. google-cloudevents-go; google-cloudevents-python etc.

1 – IIUC Eventarc is the Google service. It carries Google Events that are CloudEvents. These are defined by protocol buffers schemas.
Prost! Tonic w/ a dash of JSON
I naively (!) began exploring JSON marshaling of Protobufs in rust. Other protobuf language SDKs include JSON marshaling making the process straightforward. I was to learn that, in rust, it’s not so simple. Unfortunately, for me, this continues to discourage my further use of rust (rust is just hard).
My goal was to marshal an arbitrary protocol buffer message that included a oneof feature. I was unable to JSON marshal the rust generated by tonic for such a message.
Prometheus Protobufs and Native Histograms
I responded to a question Prometheus metric protocol buffer in gRPC on Stackoverflow and it piqued my curiosity and got me yak shaving.
Prometheus used to support two exposition formats including Protocol Buffers, then dropped Protocol Buffer and has since re-added it (see Protobuf format). The Protobuf format has returned to support the experimental Native Histograms feature.
I’m interested in adding Native Histogram support to Ackal so thought I’d learn more about this metric.
Gnarly Protocol Buffers compilation
This Stackoverflow question piqued my interest:
retry policy configuration for grpc not working
Service Config in gRPC is new to me but, my initial suspicion (albeit incorrect) was that the JSON types were incorrect.
I decided to try using the Protocol Buffer source service_config.proto
to verify the JSON.
To do so I needed to compile the source…. it was gnarly.
There are 2 repos used:
The service_config.proto includes options for java_package but no go_package.
Python Protobuf changes
Python’s Protocol Buffers code-generation using protoc has had significant changes that can cause developers… “challenges”. This post summarizes my experience of these, mostly to save me from repeatedly recreating this history for myself when I forget it.
- Version change
- Generated code change
- Implementation Backends
I’ll use this summarized table of proto and the Pypi library’s history in this post. protoc refers to the compiler that supports code-generation in multiple languages. protobuf refers to the corresponding Python (runtime) library on Pypi:
Access Google Services using gRPC
Google publishes interface definitions of Google APIs (services) that support REST and gRPC in a repo called Google APIs. Google’s SDKs use gRPC to access these services but, how to do this using e.g. gRPCurl?
I wanted to debug Cloud Profiler, whose agent makes UpdateProfile RPCs to cloudprofiler.googleapis.com. Cloud Profiler is a more challenging service to debug because (a) it’s publicly “write-only”; and (b) it has complex messages. UpdateProfile sends UpdateProfileRequest messages that include Profile messages that include profile_bytes, which are gzip-compressed serialized protos of pprof’s Profile.
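Before sending anything, the service surface can at least be inspected offline from a checkout of the googleapis repo; GOOGLEAPIS below is an assumed path to a clone of github.com/googleapis/googleapis, and describe needs no credentials:
GOOGLEAPIS="${HOME}/googleapis" # illustrative path to a googleapis checkout
grpcurl \
-import-path ${GOOGLEAPIS} \
-proto google/devtools/cloudprofiler/v2/profiler.proto \
describe google.devtools.cloudprofiler.v2.ProfilerService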
WASM Transparency
I’ve been playing around with a proof-of-concept combining WASM and Trillian. The hypothesis was to explore using WASM as a form of chaincode with Trillian. The project works but it’s far from being a chaincode-like solution.
Let’s start with a couple of (trivial) examples and then I’ll explain what’s going on and how it’s implemented.
2020/08/14 18:42:17 [main:loop:dynamic-invoke] Method: mul
2020/08/14 18:42:17 [random:New] Message
2020/08/14 18:42:17 [random:New] Float32
2020/08/14 18:42:17 [random:New] Float32
2020/08/14 18:42:17 [random:New] Message
2020/08/14 18:42:17 [random:New] Float32
2020/08/14 18:42:17 [random:New] Float32
2020/08/14 18:42:17 [Client:Invoke] Metadata: complex.wasm
2020/08/14 18:42:17 [main:loop:dynamic-invoke] Success: result:{real:0.036980484 imag:0.3898267}
After shipping a Rust-sourced WASM solution (complex.wasm) to the WASM transparency server, the client invokes a method mul that’s exposed by it using a dynamically generated request message and outputs the response. Woo hoo! Yes, an expensive way to multiply complex numbers.
Remotely invoking WASM functions using gRPC and waPC
Following on from waPC & Protobufs, I can now remotely invoke (arbitrary) WASM functions:
Client:
The logging isn’t perfectly clear but, the client gets (a previously added) WASM binary from the server (using the SHA-256 of the WASM binary as a unique identifier). The result includes metadata that includes a protobuf descriptor of the WASM binary’s functions. The descriptor defines gRPC services (that represent the WASM functions) with input (parameters) and output (results) messages.
Golang Protobuf APIv2
Google has a new Golang Protobuf API, APIv2 (google.golang.org/protobuf), superseding APIv1 (github.com/golang/protobuf). If your code is importing google.golang.org/protobuf, you’re using APIv2. Otherwise, you should consult the docs because Google reimplemented APIv1 atop APIv2. One challenge this caused me, as someone who does not use protobufs and gRPC on a daily basis, is that gRPC (code-generation) is being removed from the (Golang) protoc-gen-go, the Golang plugin that generates gRPC service bindings.
WASM Cloud Functions
Following on from waPC & Protobufs and a question on Stack Overflow about Cloud Functions, I was compelled to try running WASM on Cloud Functions, no JavaScript required.
I wanted to reuse the WASM waPC functions that I’d written in Rust as described in the other post. Cloud Functions does not (yet!?) provide a Rust runtime and so I’m using the waPC Host for Go in this example.
It works!
PARAMS=$(printf '{"a":{"real":39,"imag":3},"b":{"real":39,"imag":3}}' | base64)
TOKEN=$(gcloud auth print-identity-token)
echo "{
\"filename\":\"complex.wasm\",
\"function\":\"c:mul\",
\"params\":\"${PARAMS}\"
}" |\
curl \
--silent \
--request POST \
--header "Content-Type: application/json" \
--header "Authorization: Bearer ${TOKEN}" \
--data @- \
https://${REGION}-${PROJECT}.cloudfunctions.net/invoker
yields (correctly):
waPC & Protobufs
I’m hacking around with a solution that combines WASM and Google Trillian.
Ultimately, I’d like to be able to ship WASM (binaries) to a Trillian personality and then invoke (exported) functions on them. Some of this was born from the interesting exploration of Krustlet and its application of wascc.
I’m still booting into WASM but it’s a very interesting technology whose most interesting potential lies outside the browser. Some folks have been trailblazing the technology and I have been reading Kevin Hoffman’s medium and wascc (nee waxosuit) work. From this, I stumbled upon Kevin’s waPC and I’m using waPC in this prototyping as a way to exchange data between clients and servers running WASM binaries.
Rust implementation of Crate Transparency using Google Trillian
I’ve been hacking on a Rust-based transparent application for Google Trillian. As appears to be my fixation, this personality is for another package manager. This time, Rust’s Crates, often found in crates.io, which is Rust’s Package Registry. I discussed this project earlier this month in Rust Crate Transparency && Rust SDK for Google Trillian and an earlier approach for Python’s packages with pypi-transparency.
This time, of course, I’m using Rust. And, by way of a first for me, for the gRPC server implementation (aka “personality”). I’ve been lazy thanks to the excellent gRPCurl and have been using it by way of a client. Because I’m more familiar with Golang and because I’ve written (most) other Trillian personalities in Golang, I resorted to quickly implementing Crate Transparency in Golang too in order to uncover bugs with the Rust implementation. I’ll write a follow-up post on the complexity I seem to struggle with when using protobufs and gRPC [in Golang].
gRPC, Cloud Run & Endpoints
<3 Google but there’s quite often an assumption that we’re all sitting around the engineering table and, of course, we’re not.
Cloud Endpoints is a powerful offering but – IMO – it’s super confusing to understand and complex to deploy.
If you’re familiar with the motivations behind service meshes (e.g. Istio), Cloud Endpoints fits in a similar niche (“neesh” or “nitch”?). The underlying ambition is that developers can take existing code and, by adding a proxy (or sidecar), gain general-purpose abstractions for security, logging etc.
Golang gRPC Cloud Run
Update: 2020-03-24: Since writing this post, I’ve contributed Golang and Rust samples to Google’s project. I recommend you start there.
Google explained how to run gRPC servers with Cloud Run. The examples are good but cover only Python and Node.JS:
Missing Golang…. until now ;-)
I had problems with 1.14 and so I’m using 1.13.
Project structure
I’ll tidy up my repo but the code may be found:
Google's New Golang SDK for Protobufs
Google has released a new Golang SDK for protobuf. In the [announcement], a useful tool to redact properties is described. If like me, this is somewhat novel to you, here’s a mashup using Google’s Protocol Buffer Basics w/ redaction.
To be very clear, as it’s an important distinction:
Version | Repo | Docs |
---|---|---|
v2 | google.golang.org/protobuf | Docs |
v1 | github.com/golang/protobuf | Docs |
Project
Here’s my project structure:
.
├── protoc-3.11.4-linux-x86_64
│ ├── bin
│ │ └── protoc
│ ├── include
│ │ └── google
│ └── readme.txt
└── src
├── go.mod
├── go.sum
├── main.go
├── protos
│ ├── addressbook.pb.go
│ └── addressbook.proto
└── README.md
You may structure as you wish.
Tag: Tonic
Google Cloud Events protobufs and SDKs
I’ve written before about Ackal’s use of Firestore and subscribing to Firestore document CRUD events:
- Routing Firestore events to GKE with Eventarc
- Cloud Firestore Triggers in Golang using Firestore triggers
I find Google’s Eventarc documentation to be confusing and, in typical Google fashion, even though open-sourced, you often need to do some legwork to find relevant sources, viz:
- Google’s Protobufs for Eventarc (using CloudEvents): google-cloudevents
- Convenience, language-specific types generated from the above (since you can generate these using protoc), e.g. google-cloudevents-go; google-cloudevents-python etc.

1 – IIUC Eventarc is the Google service. It carries Google Events that are CloudEvents. These are defined by protocol buffers schemas.
Prost! Tonic w/ a dash of JSON
I naively (!) began exploring JSON marshaling of Protobufs in rust. Other protobuf language SDKs include JSON marshaling making the process straightforward. I was to learn that, in rust, it’s not so simple. Unfortunately, for me, this continues to discourage my further use of rust (rust is just hard).
My goal was to marshal an arbitrary protocol buffer message that included a oneof feature. I was unable to JSON marshal the rust generated by tonic for such a message.
Tag: JSON
Prost! Tonic w/ a dash of JSON
I naively (!) began exploring JSON marshaling of Protobufs in rust. Other protobuf language SDKs include JSON marshaling making the process straightforward. I was to learn that, in rust, it’s not so simple. Unfortunately, for me, this continues to discourage my further use of rust (rust is just hard).
My goal was to marshal an arbitrary protocol buffer message that included a oneof feature. I was unable to JSON marshal the rust generated by tonic for such a message.
Vultr CLI and JSON output
I’ve begun exploring Vultr after the company announced a managed Kubernetes offering Vultr Kubernetes Engine (VKE).
In my brief experience, it’s a decent platform and its CLI vultr-cli is mostly (!) good. The CLI has a limitation in that command output is text formatted and this makes it challenging to parse the output when scripting.
NOTE The Vultr developers have a branch rewrite that includes a solution to this problem.
Example
ID 12345678-90ab-cdef-1234-567890abcdef
LABEL test
DATE CREATED 2022-01-01T00:00:00+00:00
CLUSTER SUBNET 255.255.255.255/16
SERVICE SUBNET 255.255.255.255/12
IP 255.255.255.255
ENDPOINT 12345678-90ab-cdef-1234-567890abcdef.vultr-k8s.com
VERSION v1.23.5+3
REGION mars
STATUS pending
NODE POOLS
ID 12345678-90ab-cdef-1234-567890abcdef
DATE CREATED 2022-01-01T00:00:00+00:00
DATE UPDATED 2022-01-01T00:00:00+00:00
LABEL nodepool
TAG foo
PLAN vc2-1c-2gb
STATUS pending
NODE QUANTITY 1
AUTO SCALER false
MIN NODES 1
MAX NODES 1
NODES
ID DATE CREATED LABEL STATUS
12345678-
Until that’s available, I’m lazily writing very simple bash scripts to parse vultr-cli command output as JSON. The repo is vultr-cli-format.
Tag: Protoc
Prost! Tonic w/ a dash of JSON
I naively (!) began exploring JSON marshaling of Protobufs in rust. Other protobuf language SDKs include JSON marshaling making the process straightforward. I was to learn that, in rust, it’s not so simple. Unfortunately, for me, this continues to discourage my further use of rust (rust is just hard).
My goal was to marshal an arbitrary protocol buffer message that included a oneof feature. I was unable to JSON marshal the rust generated by tonic for such a message.
Gnarly Protocol Buffers compilation
This Stackoverflow question piqued my interest:
retry policy configuration for grpc not working
Service Config in gRPC is new to me but, my initial suspicion (albeit incorrect) was that the JSON types were incorrect.
I decided to try using the Protocol Buffer source service_config.proto
to verify the JSON.
To do so I needed to compile the source…. it was gnarly.
There are 2 repos used:
The service_config.proto includes options for java_package but no go_package.
Python Protobuf changes
Python’s Protocol Buffers code-generation using protoc has had significant changes that can cause developers… “challenges”. This post summarizes my experience of these, mostly to save me from repeatedly recreating this history for myself when I forget it.
- Version change
- Generated code change
- Implementation Backends
I’ll use this summarized table of proto and the Pypi library’s history in this post. protoc refers to the compiler that supports code-generation in multiple languages. protobuf refers to the corresponding Python (runtime) library on Pypi:
Google's New Golang SDK for Protobufs
Google has released a new Golang SDK for protobuf. In the [announcement], a useful tool to redact properties is described. If like me, this is somewhat novel to you, here’s a mashup using Google’s Protocol Buffer Basics w/ redaction.
To be very clear, as it’s an important distinction:
Version | Repo | Docs |
---|---|---|
v2 | google.golang.org/protobuf | Docs |
v1 | github.com/golang/protobuf | Docs |
Project
Here’s my project structure:
.
├── protoc-3.11.4-linux-x86_64
│ ├── bin
│ │ └── protoc
│ ├── include
│ │ └── google
│ └── readme.txt
└── src
├── go.mod
├── go.sum
├── main.go
├── protos
│ ├── addressbook.pb.go
│ └── addressbook.proto
└── README.md
You may structure as you wish.
Tag: Fks
Fly Kubernetes
Interested to explore Fly Kubernetes after being accepted into the closed beta.
The folks at Fly are innovative in their technology uses and, having been a long-time Kubernetes user, I was intrigued to learn that Fly.io has implemented Kubernetes atop Fly.
My first Deployment failed:
Authentication required to access image "ghcr.io/{image}"
It was confirmed to me that FKS does not support pulling from private registries. The solution is to pull-tag-push images to registry.fly.io but, Fly’s repository is app-specific and so, you need to do some querying to grab the Fly app created by FKS (for your namespace):
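A rough sketch of that flow; APP stands in for the FKS-created app name and fly apps list is one way to find it:
fly apps list                                   # locate the app FKS created for the namespace
fly auth docker                                 # let docker authenticate to registry.fly.io
docker pull ghcr.io/${OWNER}/${IMAGE}:${TAG}
docker tag ghcr.io/${OWNER}/${IMAGE}:${TAG} registry.fly.io/${APP}:${TAG}
docker push registry.fly.io/${APP}:${TAG}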
Tag: Fly.io
Fly Kubernetes
Interested to explore Fly Kubernetes after being accepted into the closed beta.
The folks at Fly are innovative in their technology uses and, having been a long-time Kubernetes user, I was intrigued to learn that Fly.io has implemented Kubernetes atop Fly.
My first Deployment failed:
Authentication required to access image "ghcr.io/{image}"
It was confirmed to me that FKS does not support pulling from private registries. The solution is to pull-tag-push images to registry.fly.io but, Fly’s repository is app-specific and so, you need to do some querying to grab the Fly app created by FKS (for your namespace):
Prometheus Exporters for fly.io and Vultr
I’ve been on a roll building utilities this week. I developed a Service Health dashboard for my “thing”, a Prometheus Exporter for Fly.io and today, a Prometheus Exporter for Vultr. This is motivated by the fear that I will forget a deployed Cloud resource and incur a horrible bill.
I’ve now written several Prometheus Exporters for cloud platforms:
- Prometheus Exporter for GCP
- Prometheus Exporter for Linode
- Prometheus Exporter for Fly.io
- Prometheus Exporter for Vultr
Each of them monitors resource deployments and produces resource count metrics that can be scraped by Prometheus and alerted with Alertmanager. I have Alertmanager configured to send notifications to Pushover. Last week I wrote an integration between Google Cloud Monitoring to send notifications to Pushover too.
Fly.io
I spent some time over the weekend understanding Fly.io. It’s always fascinating to me how many smart people are building really neat solutions. Fly.io is subtly different to other platforms that I use (Kubernetes, GCP, DO, Linode) and I’ve found the Fly.io team to be highly responsive and helpful to my noob questions.
One of the team’s posts, Docker without Docker surfaced in my Feedly feed (hackernews) and it piqued my interest.
Tag: Prom2json
Prometheus Protobufs and Native Histograms
I responded to a question Prometheus metric protocol buffer in gRPC on Stackoverflow and it piqued my curiosity and got me yak shaving.
Prometheus used to support two exposition formats including Protocol Buffers, then dropped Protocol Buffer and has since re-added it (see Protobuf format). The Protobuf format has returned to support the experimental Native Histograms feature.
I’m interested in adding Native Histogram support to Ackal so thought I’d learn more about this metric.
Tag: Prometheus
Prometheus Protobufs and Native Histograms
I responded to a question Prometheus metric protocol buffer in gRPC on Stackoverflow and it piqued my curiosity and got me yak shaving.
Prometheus used to support two exposition formats including Protocol Buffers, then dropped Protocol Buffer and has since re-added it (see Protobuf format). The Protobuf format has returned to support the experimental Native Histograms feature.
I’m interested in adding Native Histogram support to Ackal so thought I’d learn more about this metric.
Capturing e.g. CronJob metrics with GMP
The deployment of Kube State Metrics for Google Managed Prometheus creates both a PodMonitoring and ClusterPodMonitoring.
The PodMonitoring resource exposes metrics published on metric-self port (8081).
The ClusterPodMonitoring exposes metrics published on metric port (8080) but this doesn’t include cronjob-related metrics:
kubectl get clusterpodmonitoring/kube-state-metrics \
--output=jsonpath="{.spec.endpoints[0].metricRelabeling}" \
| jq -r .
[
{
"action": "keep",
"regex": "kube_(daemonset|deployment|replicaset|pod|namespace|node|statefulset|persistentvolume|horizontalpodautoscaler|job_created)(_.+)?",
"sourceLabels": [
"__name__"
]
}
]
NOTE The regex does not include kube_cronjob and only includes kube_job_created patterns.
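One way to pick up the CronJob metrics is to widen that regex, e.g. with a JSON patch; the replacement value below simply adds cronjob to the keep-list and the path mirrors the jsonpath shown above, so adjust to taste:
kubectl patch clusterpodmonitoring/kube-state-metrics \
--type=json \
--patch='[{"op":"replace","path":"/spec/endpoints/0/metricRelabeling/0/regex","value":"kube_(cronjob|daemonset|deployment|replicaset|pod|namespace|node|statefulset|persistentvolume|horizontalpodautoscaler|job_created)(_.+)?"}]'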
Prometheus Operator support an auth proxy for Service Discovery
CRD linting
Returning to yesterday’s failing tests, it’s unclear how to introspect the E2E tests.
kubectl get namespaces
NAME STATUS AGE
...
allns-s2os2u-0-90f56669 Active 22h
allns-s2qhuw-0-6b33d5eb Active 4m23s
kubectl get all \
--namespace=allns-s2os2u-0-90f56669
No resources found in allns-s2os2u-0-90f56669 namespace.
kubectl get all \
--namespace=allns-s2qhuw-0-6b33d5eb
NAME READY STATUS RESTARTS AGE
pod/prometheus-operator-6c96477b9c-q6qm2 1/1 Running 0 4m12s
pod/prometheus-operator-admission-webhook-68bc9f885-nq6r8 0/1 ImagePullBackOff 0 4m7s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/prometheus-operator ClusterIP 10.152.183.247 <none> 443/TCP 4m9s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/prometheus-operator 1/1 1 1 4m12s
deployment.apps/prometheus-operator-admission-webhook 0/1 1 0 4m7s
NAME DESIRED CURRENT READY AGE
replicaset.apps/prometheus-operator-6c96477b9c 1 1 1 4m13s
replicaset.apps/prometheus-operator-admission-webhook-68bc9f885 1 1 0 4m8s
kubectl logs deployment/prometheus-operator-admission-webhook \
--namespace=allns-s2qhuw-0-6b33d5eb
Error from server (BadRequest): container "prometheus-operator-admission-webhook" in pod "prometheus-operator-admission-webhook-68bc9f885-nq6r8" is waiting to start: trying and failing to pull image
NAME="prometheus-operator-admission-webhook"
FILTER="{.spec.template.spec.containers[?(@.name==\"${NAME}\")].image}"
kubectl get deployment/prometheus-operator-admission-webhook \
--namespace=allns-s2qjz2-0-fad82c03 \
--output=jsonpath="${FILTER}"
quay.io/prometheus-operator/admission-webhook:52d1e55af
Want:
Prometheus Operator support an auth proxy for Service Discovery
For ackalctld to be deployable to Kubernetes with Prometheus Operator, it is necessary to Enable ScrapeConfig to use (discovery|target) proxies #5966. While I’m familiar with Kubernetes, Kubernetes operators (Ackal uses one built with the Operator SDK) and Prometheus Operator, I’m unfamiliar with developing Prometheus Operator. This (and subsequent) posts will document some preliminary work on this.
Cloned Prometheus Operator
Branched scrape-config-url-proxy
I’m unsure how to effect these changes and unsure whether documentation exists.
Clearly, I will need to revise the ScrapeConfig CRD to add the proxy_url fields (one proxy_url defines a proxy for the Service Discovery endpoint; the second defines a proxy for the targets themselves) and it would be useful for this to closely mirror the existing Prometheus HTTP Service Discovery use, namely http_sd_config:
Prometheus Operator `ScrapeConfig`
TL;DR Enable ScrapeConfig to use (discovery|target) proxies
I’ve developed a companion, local daemon (called ackalctld) for Ackal that provides a functionally close version of the service.
One way to deploy ackalctld is to use Kubernetes and it would be convenient if the Prometheus metrics were scrapeable by Prometheus Operator.
In order for this to work, Prometheus Operator needs to be able to scrape Google Cloud Run targets because ackalctld creates Cloud Run services for its health check clients.
Prometheus Exporter for Koyeb
Yet another cloud platform exporter for resource|cost management. This time for Koyeb with Koyeb Exporter.
Deploying resources to cloud platforms generally incurs cost based on the number of resources deployed, the time each resource is deployed and the cost (per period of time) that the resource is deployed. It is useful to be able to automatically measure and alert on all the resources deployed on all the platforms that you’re using and this is an intent of these exporters.
Robusta KRR w/ GMP
I’ve been spending time recently optimizing Ackal’s use of Google Cloud Logging and Cloud Monitoring in posts:
- Filtering metrics w/ Google Managed Prometheus
- Kubernetes metrics, metrics everywhere
- Google Metric Diagnostics and Metric Data Ingested
Yesterday, I read that Robusta has a new open source project Kubernetes Resource Recommendations (KRR) so I took some time to evaluate it.
This post describes the changes I had to make to get KRR working with Google Managed Prometheus (GMP):
Google Metric Diagnostics and Metric Data Ingested
I’ve been on an efficiency drive with Cloud Logging and Cloud Monitoring.
With regard to Cloud Logging, I’m contemplating (!) eliminating almost all log storage. As it is, I’ve buzz-cut log storage with a _Default
sink that has comprehensive sets of NOT LOG_ID(X)
inclusion and exclusion filters. As I was doing so, I began to wonder why I need to pay for the storage of so much logging. There’s the comfort of knowing that everything you may ever need is being logged (at least for 30 days) but there’s also the cost that that entails. I use logs exclusively for debugging, which got me thinking: couldn’t I just capture logs when I’m debugging (rather than all the time)? I’ve not taken that leap yet but I’m noodling on it.
Prometheus Exporter for Azure (Container Apps)
I’ve written Prometheus Exporters for various cloud platforms. My motivation for writing these Exporters is that I want a unified mechanism to track my usage of these platforms’ services. It’s easy to deploy a service on a platform and inadvertently leave it running (up a bill). The set of exporters is:
- Prometheus Exporter for Azure
- Prometheus Exporter for Fly.io
- Prometheus Exporter for GCP
- Prometheus Exporter for Linode
- Prometheus Exporter for Vultr
This post describes the recently-added Azure Exporter which only provides metrics for Container Apps and Resource Groups.
Kubernetes metrics, metrics everywhere
I’ve been tinkering with ways to “unit-test” my assumptions when using cloud platforms. I recently wrote about good posts by Google that describe how to achieve cost savings with Cloud Monitoring and Cloud Logging:
- How to identify and reduce costs of your Google Cloud observability in Cloud Monitoring
- Cloud Logging pricing for Cloud Admins: How to approach it & save cost
With Cloud Monitoring, I’ve restricted the prometheus.googleapis.com
metrics that are being ingested but realized I wanted to track the number of Pods (and Containers) deployed to a GKE cluster.
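As a quick, manual spot-check (the metrics-based approach described here is the durable solution), the current counts can be eyeballed with kubectl:
# Count Pods across all namespaces
kubectl get pods --all-namespaces --no-headers | wc -l
# Count containers across all namespaces
kubectl get pods --all-namespaces --output=jsonpath="{.items[*].spec.containers[*].name}" | wc -w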
Filtering metrics w/ Google Managed Prometheus
Google has published two very good blog posts on cost management:
- How to identify and reduce costs of your Google Cloud observability in Cloud Monitoring
- Cloud Logging pricing for Cloud Admins: How to approach it & save cost
This post is about my application cost reductions for Cloud Monitoring for Ackal.
I’m pleased with Google Cloud Managed Service for Prometheus (hereinafter GMP). I’ve a strong preference for letting service providers run components of Ackal that I consider important but non-differentiating.
Kubernetes Operators
Ackal uses a Kubernetes Operator to orchestrate the lifecycle of its health checks. Ackal’s Operator is written in Go using kubebuilder
.
Yesterday, my interest was piqued by a MetalBear blog post Writing a Kubernetes Operator [in Rust]. I spent some time reimplementing one of Ackal’s CRDs (Check
) using kube-rs
and not only refreshed my Rust knowledge but learned a bunch more about Kubernetes and Operators.
While rummaging around the Kubernetes documentation, I discovered flant’s Shell-operator
and spent some time today exploring its potential.
Authenticate PromLens to Google Managed Prometheus
I’m using Google Managed Service for Prometheus (GMP) and liking it.
Some time ago, I tried using PromLens with GMP but GMP’s Prometheus HTTP API endpoint requires auth and I’ve battled Prometheus’ somewhat limited auth mechanism before (Scraping metrics exposed by Google Cloud Run services that require authentication).
Listening to PromCon EU 2022 videos, I learned that PromLens has been open sourced and contributed to the Prometheus project. Eventually, the functionality of PromLens should be combined into the Prometheus UI.
Prometheus Exporters for fly.io and Vultr
I’ve been on a roll building utilities this week. I developed a Service Health dashboard for my “thing”, a Prometheus Exporter for Fly.io and today, a Prometheus Exporter for Vultr. This is motivated by the fear that I will forget a deployed Cloud resource and incur a horrible bill.
I’ve now written several Prometheus Exporters for cloud platforms:
- Prometheus Exporter for GCP
- Prometheus Exporter for Linode
- Prometheus Exporter for Fly.io
- Prometheus Exporter for Vultr
Each of them monitors resource deployments and produces resource count metrics that can be scraped by Prometheus and alerted on with Alertmanager. I have Alertmanager configured to send notifications to Pushover. Last week I wrote an integration that enables Google Cloud Monitoring to send notifications to Pushover too.
Prometheus HTTP Service Discovery of Cloud Run services
Some time ago, I wrote about using Prometheus Service Discovery w/ Consul for Cloud Run and also Scraping metrics exposed by Google Cloud Run services that require authentication. Both solutions remain viable but they didn’t address another use case for Prometheus and Cloud Run services that I have with a “thing” that I’ve been building.
In this scenario, I want to:
- Configure Prometheus to scrape Cloud Run service metrics
- Discover Cloud Run services dynamically
- Authenticate to Cloud Run using Firebase Auth ID tokens
These requirements and – one other – present several challenges:
Scraping metrics exposed by Google Cloud Run services that require authentication
I’ve written a solution (gcp-oidc-token-proxy
) that can be used in conjunction with Prometheus OAuth2 to authenticate requests so that Prometheus can scrape metrics exposed by e.g. Cloud Run services that require authentication. The solution resulted from my question on Stack overflow.
Problem #1: Endpoint requires authentication
Given a Cloud Run service URL for which:
ENDPOINT="my-server-blahblah-wl.a.run.app"
# Returns 200 when authentication w/ an ID token
TOKEN="$(gcloud auth print-identity-token)"
curl \
--silent \
--request GET \
--header "Authorization: Bearer ${TOKEN}" \
--write-out "%{response_code}" \
--output /dev/null \
https://${ENDPOINT}/metrics
# Returns 403 otherwise
curl \
--silent \
--request GET \
--write-out "%{response_code}" \
--output /dev/null \
https://${ENDPOINT}/metrics
Problem #2: Prometheus OAuth2 configuration is constrained
Consul discovers Google Cloud Run
I’ve written a basic discoverer
of Google Cloud Run services. This is for a project and it extends work done in some previous posts to Multiplex gRPC and Prometheus with Cloud Run and to use Consul for Prometheus service discovery.
This solution:
- Accepts a set of Google Cloud Platform (GCP) projects
- Trawls them for Cloud Run services
- Assumes that the services expose Prometheus metrics on
:443/metrics
- Relabels the services
- Surfaces any discovered Cloud Run services’ metrics in Prometheus
You’ll need Docker and Docker Compose.
Multiplexing gRPC and HTTP (Prometheus) endpoints with Cloud Run
Google Cloud Run is useful but each service is limited to exposing a single port. This caused me problems with a gRPC service that serves (non-gRPC) Prometheus metrics because customarily, you would serve gRPC on one port and the Prometheus metrics on another.
Fortunately, cmux provides a solution by providing a mechanism that multiplexes both services (gRPC and HTTP) on a single port!
TL;DR See the cmux Limitations and use:
grpcl := m.MatchWithWriters(cmux.HTTP2MatchHeaderFieldSendSettings("content-type", "application/grpc"))
Extending the example from the cmux repo:
Prometheus Service Discovery w/ Consul for Cloud Run
I’m working on a project that will programmatically create Google Cloud Run services and I want to be able to dynamically discover these services using Prometheus.
This is one solution.
NOTE Google Cloud Run is the service I’m using, but the principle described herein applies to any runtime service that you’d wish to use.
Why is this challenging? IIUC, it’s primarily because Prometheus has a limited set of plugins for service discovery; see the sections that include _sd_
in the Prometheus Configuration documentation. Unfortunately, Cloud Run is not explicitly supported. The alternative appears to be file-based discovery but this seems ‘challenging’; it requires, for example, reloading Prometheus on file changes.
Prometheus VPA Recommendations
Phew!
I was interested in learning how to Manage Resources for Containers. On the way, I learned and discovered:
- kubectl top
- Vertical Pod Autoscaler
- A (valuable) digression through PodMonitor
- kube-state-metrics
- kubectl patch
- Created a Graph
- References
Kubernetes Resources
Visual Studio Code has begun to bug me (reasonably) to add resources
to Kubernetes manifests.
E.g.:
resources:
  limits:
    cpu: "1"
    memory: "512Mi"
I’ve been spending time with Deislabs’ Akri and decided to determine whether Akri’s primary resources (Agent, Controller) and some of my creations (HTTP Device and Discovery) were being suitably constrained.
Deploying a Rust HTTP server to DigitalOcean App Platform
DigitalOcean launched an App Platform with many Supported Languages and Frameworks. I used Golang first, then wondered how to use non-natively-supported languages, i.e. Rust.
The good news is that Docker is a supported framework and so, you can run pretty much anything.
Repo: https://github.com/DazWilkin/do-apps-rust
Rust
I’m a Rust noob. I’m always receptive to feedback on improvements to the code. I looked to mirror the Golang example. I’m using rocket and rocket-prometheus for the first time:
You will want to install rust nightly (as Rocket has a dependency that requires it) and then you can override the default toolchain for the current project using:
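A sketch of that, assuming rustup manages the toolchain (the original command is elided from this excerpt):
# Install the nightly toolchain and use it for this project only
rustup toolchain install nightly
rustup override set nightly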
Google Home Exporter
I’m obsessing over Prometheus exporters. First came Linode Exporter, then GCP Exporter and, on Sunday, I stumbled upon a reverse-engineered API for Google Home devices and so wrote a very basic Google Home SDK and a similarly basic Google Home Exporter:
The SDK only implements /setup/eureka_info
and then only some of the returned properties. There’s not a lot of metric-like data to use besides SignalLevel
(signal_level
) and NoiseLevel
(noise_level
). I’m not clear on the meaning of some of the properties.
Google Cloud Platform (GCP) Exporter
Earlier this week I discussed a Linode Prometheus Exporter.
I added metrics for Digital Ocean’s Managed Kubernetes service to @metalmatze’s Digital Ocean Exporter.
This left metrics for Google Cloud Platform (GCP), which has, for many years, been my primary cloud platform. So, today I wrote Prometheus Exporter for Google Cloud Platform.
All 3 of these exporters follow the template laid down by @metalmatze and, because each of these services has a well-written Golang SDK, it’s straightforward to implement an exporter for each of them.
Prometheus AlertManager
Yesterday I discussed a Linode Prometheus Exporter and tantalized use of Prometheus AlertManager.
Success:
Configure
The process is straightforward although I found the Prometheus (config) documentation slightly unwieldy to navigate :-(
The overall process is documented.
Here are the steps I took:
Configure Prometheus
Added the following to prometheus.yml
:
rule_files:
  - "/etc/alertmanager/rules/linode.yml"
alerting:
  alertmanagers:
    - scheme: http
      static_configs:
        - targets:
            - "alertmanager:9093"
Rules must be defined in separate rules files. See below for the content of linode.yml
and an explanation.
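Before pointing Prometheus at the rules file, it can be validated with promtool (a quick sketch; promtool isn’t mentioned in the original post):
promtool check rules /etc/alertmanager/rules/linode.yml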
Linode Prometheus Exporter
I enjoy using Prometheus and have toyed around with it for some time particularly in combination with Kubernetes. I signed up with Linode [referral] compelled by the addition of a managed Kubernetes service called Linode Kubernetes Engine (LKE). I have an anxiety that I’ll inadvertently leave resources running (unused) on a cloud platform. Instead of refreshing the relevant billing page, it struck me that Prometheus may (not yet proven) help.
The hypothesis is that a combination of a cloud-specific Prometheus exporter reporting aggregate uses of e.g. Linodes (instances), NodeBalancers, Kubernetes clusters etc., could form the basis of an alert mechanism using Prometheus’ alerting.
Tag: Protobuf Format
Prometheus Protobufs and Native Histograms
I responded to a question Prometheus metric protocol buffer in gRPC on Stackoverflow and it piqued my curiosity and got me yak shaving.
Prometheus used to support two exposition formats including Protocol Buffers, then dropped Protocol Buffers and has since re-added them (see Protobuf format). The Protobuf format has returned in order to support the experimental Native Histograms feature.
I’m interested in adding Native Histogram support to Ackal so thought I’d learn more about this metric.
Tag: Kube-Prometheus-Stack
MicroK8s operability add-on
Spent time today yak-shaving which resulted in an unplanned migration from MicroK8s ‘prometheus’ add-on to the new and not fully-documented ‘observability’ add-on:
sudo microk8s.enable prometheus
Infer repository core for addon prometheus
DEPRECATION WARNING: 'prometheus' is deprecated and will soon be removed. Please use 'observability' instead.
...
The reason for the name change is unclear.
It’s also unclear whether there’s a difference in the primary components that are installed (I’d thought Grafana wasn’t included in ‘prometheus’); (Grafana) Loki and (Grafana) Tempo definitely weren’t included before, and I don’t want them either.
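To see what the ‘observability’ add-on actually deploys (a sketch; the namespace name is an assumption):
sudo microk8s.enable observability
microk8s.kubectl get pods --namespace=observability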
Tag: Koyeb
Navigating Koyeb's API with Rust
I wrote about Navigating Koyeb’s Golang SDK. That client is generated by the OpenAPI Generator project from Koyeb’s Swagger (now OpenAPI) REST API spec.
This post shows how to generate a Rust SDK using the Generator and provides a very basic example of using the SDK.
The Generator will create a Rust library project:
VERS="v7.2.0"
PACKAGE_NAME="koyeb-api-client-rs"
PACKAGE_VERS="1.0.0"
podman run \
--interactive --tty --rm \
--volume=${PWD}:/local \
docker.io/openapitools/openapi-generator-cli:${VERS} \
generate \
-g=rust \
-i=https://developer.koyeb.com/public.swagger.json \
-o=/local/${PACKAGE_NAME} \
--additional-properties=\
packageName=${PACKAGE_NAME},\
packageVersion=${PACKAGE_VERS}
This will create the project in ${PWD}/${PACKAGE_NAME}
including the documentation at:
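For example (the exact paths are assumptions based on the generator’s usual Rust output, not taken from the original post):
# The generated crate typically includes a README and per-API/per-model docs
ls ${PWD}/${PACKAGE_NAME}/README.md
ls ${PWD}/${PACKAGE_NAME}/docs/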
Navigating Koyeb's Golang SDK
Ackal deploys gRPC Health Checking clients in locations around the World in order to health check services that are representative of customer need.
Koyeb offers multiple locations and I spent time today writing a client for Ackal to integrate with Koyeb using the Golang client for the Koyeb API.
The SDK is generated from Koyeb’s OpenAPI (nee Swagger) endpoint using openapi-generator-cli
. This is a smart, programmatic solution to ensuring that the SDK always matches the API definition, but I found the result idiosyncratic and therefore a little gnarly.
Prometheus Exporter for Koyeb
Yet another cloud platform exporter for resource|cost management. This time for Koyeb with Koyeb Exporter.
Deploying resources to cloud platforms generally incurs cost based on the number of resources deployed, the time each resource is deployed and the cost (per period of time) that the resource is deployed. It is useful to be able to automatically measure and alert on all the resources deployed on all the platforms that you’re using and this is an intent of these exporters.
Tag: Openapi
Navigating Koyeb's API with Rust
I wrote about Navigating Koyeb’s Golang SDK. That client is generated by the OpenAPI Generator project from Koyeb’s Swagger (now OpenAPI) REST API spec.
This post shows how to generate a Rust SDK using the Generator and provides a very basic example of using the SDK.
The Generator will create a Rust library project:
VERS="v7.2.0"
PACKAGE_NAME="koyeb-api-client-rs"
PACKAGE_VERS="1.0.0"
podman run \
--interactive --tty --rm \
--volume=${PWD}:/local \
docker.io/openapitools/openapi-generator-cli:${VERS} \
generate \
-g=rust \
-i=https://developer.koyeb.com/public.swagger.json \
-o=/local/${PACKAGE_NAME} \
--additional-properties=\
packageName=${PACKAGE_NAME},\
packageVersion=${PACKAGE_VERS}
This will create the project in ${PWD}/${PACKAGE_NAME}
including the documentation at:
Cloud Endpoints combine OpenAPI and gRPC... or not!
See:
- Multiplexing gRPC and HTTP endpoints with Cloud Run
- gRPC, Cloud Run & Endpoints
- ESPv2: Configure Cloud Endpoints to proxy traffic to a Cloud Run multiplexed (gRPC|HTTP) service
Challenges:
- Cloud Run permits single port
- Cloud Run services publishing e.g. gRPC and Prometheus, must multiplex transports
- Cloud Run services publishing multiplexed transports are challenging to expose using Cloud Endpoints
Hypothesis #1: Multiplexed transports work with Cloud Run
Tag: Swagger
Navigating Koyeb's API with Rust
I wrote about Navigating Koyeb’s Golang SDK. That client is generated by the OpenAPI Generator project from Koyeb’s Swagger (now OpenAPI) REST API spec.
This post shows how to generate a Rust SDK using the Generator and provides a very basic example of using the SDK.
The Generator will create a Rust library project:
VERS="v7.2.0"
PACKAGE_NAME="koyeb-api-client-rs"
PACKAGE_VERS="1.0.0"
podman run \
--interactive --tty --rm \
--volume=${PWD}:/local \
docker.io/openapitools/openapi-generator-cli:${VERS} \
generate \
-g=rust \
-i=https://developer.koyeb.com/public.swagger.json \
-o=/local/${PACKAGE_NAME} \
--additional-properties=\
packageName=${PACKAGE_NAME},\
packageVersion=${PACKAGE_VERS}
This will create the project in ${PWD}/${PACKAGE_NAME}
including the documentation at:
OriginStamp Rust SDK Example
I wrote recently describing Python and Golang clients for OriginStamp based on OriginStamp’s API’s Swagger spec. As a way to pursue learning rust, I’ve been forcing myself to write examples using rust. I’m honestly finding learning rust tough going and think I’d probably be better off reverting to the “Learning Rust” tutorials.
That said, herewith an explanation of building a rust client using an OpenAPI (!) generated SDK from OriginStamp’s swagger spec.
OriginStamp Python|Golang SDK Examples
A friend mentioned OriginStamp to me.
NB There are 2 sites: originstamp.com and originstamp.org.
It’s an interesting project.
It’s a solution for providing auditable proof that you had a(ccess to) some digital thing before a certain date. OriginStamp provides user-|developer-friendly means to submit files|hashes (of your content) and have these bundled into transactions that are submitted to e.g. bitcoin.
I won’t attempt to duplicate the narrative here, review OriginStamp’s site and some of its content.
Tag: Googleapis
Gnarly Protocol Buffers compilation
This Stackoverflow question piqued my interest:
retry policy configuration for grpc not working
Service Config in gRPC is new to me but my initial suspicion (albeit incorrect) was that the JSON types were incorrect.
I decided to try using the Protocol Buffer source service_config.proto
to verify the JSON.
To do so, I needed to compile the source… it was gnarly.
There are 2 repos used:
The service_config.proto
includes options
for java_package
but no go_package
.
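A hedged sketch of one workaround: pass an M-style mapping so protoc-gen-go knows which Go import path to use even though the proto lacks a go_package option (the checkout layout and module name below are assumptions, not the post’s actual values):
# Assumes local clones of grpc-proto and googleapis side by side, and an existing gen/ directory
protoc \
--proto_path=grpc-proto \
--proto_path=googleapis \
--go_out=gen \
--go_opt=Mgrpc/service_config/service_config.proto=example.com/gen/serviceconfig \
grpc-proto/grpc/service_config/service_config.proto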
Listing Cloud Logging log-based metrics using gRPC
Referring to Accessing Google Services using gRPC, I wanted to query a project’s Cloud Logging for log-based metrics using gRPC.
In summary:
ENDPOINT="logging.googleapis.com:443"
ROOT="/path/to/googleapis" # https://github.com/googleapis/googleapis
PACKAGE="google/logging/v2"
# NB Not logging.proto
PROTO="${ROOT}/${PACKAGE}/logging_metrics.proto"
TOKEN=$(gcloud auth print-access-token)
PROJECT="..."
PACKAGE="google.logging.v2"
SERVICE="MetricsServiceV2"
METHOD="${PACKAGE}.${SERVICE}/ListLogMetrics"
# ListLogMetricsRequest fields
PARENT="projects/${PROJECT}"
grpcurl \
--import-path=${ROOT} \
--proto=${PROTO} \
-H "Authorization: Bearer ${TOKEN}" \
-d "{\"parent\": \"${PARENT}\"}" \
${ENDPOINT} ${METHOD}
From APIs Explorer, Cloud Logging API v2, instead of the REST reference, browse the gRPC reference, specifically the package google.logging.v2
which includes MetricsServiceV2
. We’re interested in the ListLogMetrics
method (which, unfortunately, isn’t directly hyperlinkable) and which is defined to be:
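The method can also be inspected locally with grpcurl’s describe verb (a sketch reusing the variables above; it assumes grpcurl can describe from proto source without contacting a server):
grpcurl \
--import-path=${ROOT} \
--proto=${PROTO} \
describe ${PACKAGE}.${SERVICE}.ListLogMetrics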
Access Google Services using gRPC
Google publishes interface definitions of Google APIs (services) that support REST and gRPC in a repo called Google APIs. Google’s SDKs use gRPC to access these services but how can you do this using e.g. gRPCurl?
I wanted to debug Cloud Profiler and its agent makes UpdateProfile
RPCs to cloudprofiler.googleapis.com
. Cloud Profiler is a more challenging service to debug because (a) it’s publicly “write-only”; and (b) it has complex messages. UpdateProfile
sends UpdateProfileRequest
messages that include Profile
messages that include profile_bytes
which are gzip compressed serialized protos of pprof’s Profile
.
Tag: Grpc-Proto
Gnarly Protocol Buffers compilation
This Stackoverflow question piqued my interest:
retry policy configuration for grpc not working
Service Config in gRPC is new to me but my initial suspicion (albeit incorrect) was that the JSON types were incorrect.
I decided to try using the Protocol Buffer source service_config.proto
to verify the JSON.
To do so, I needed to compile the source… it was gnarly.
There are 2 repos used:
The service_config.proto
includes options
for java_package
but no go_package
.
Tag: Google Managed Prometheus
Capturing e.g. CronJob metrics with GMP
The deployment of Kube State Metrics for Google Managed Prometheus creates both a PodMonitoring
and ClusterPodMonitoring
.
The PodMonitoring
resource exposes metrics published on metric-self
port (8081).
The ClusterPodMonitoring
exposes metrics published on metric
port (8080) but this doesn’t include cronjob
-related metrics:
kubectl get clusterpodmonitoring/kube-state-metrics \
--output=jsonpath="{.spec.endpoints[0].metricRelabeling}" \
| jq -r .
[
{
"action": "keep",
"regex": "kube_(daemonset|deployment|replicaset|pod|namespace|node|statefulset|persistentvolume|horizontalpodautoscaler|job_created)(_.+)?",
"sourceLabels": [
"__name__"
]
}
]
NOTE The
regex
does not includekube_cronjob
and only includeskube_job_created
patterns.
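One way to broaden the filter is to patch the regex in place (a sketch only; the JSON-patch path mirrors the jsonpath above, and a GMP-managed resource may well be reconciled back to its defaults):
kubectl patch clusterpodmonitoring/kube-state-metrics \
--type=json \
--patch='[{"op": "replace", "path": "/spec/endpoints/0/metricRelabeling/0/regex", "value": "kube_(cronjob|daemonset|deployment|replicaset|pod|namespace|node|statefulset|persistentvolume|horizontalpodautoscaler|job_created)(_.+)?"}]'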
Robusta KRR w/ GMP
I’ve been spending time recently optimizing Ackal’s use of Google Cloud Logging and Cloud Monitoring in posts:
- Filtering metrics w/ Google Managed Prometheus
- Kubernetes metrics, metrics everywhere
- Google Metric Diagnostics and Metric Data Ingested
Yesterday, I read that Robusta has a new open source project Kubernetes Resource Recommendations (KRR) so I took some time to evaluate it.
This post describes the changes I had to make to get KRR working with Google Managed Prometheus (GMP):
Google Metric Diagnostics and Metric Data Ingested
I’ve been on an efficiency drive with Cloud Logging and Cloud Monitoring.
With regard to Cloud Logging, I’m contemplating (!) eliminating almost all log storage. As it is, I’ve buzz-cut log storage with a _Default
sink that has comprehensive sets of NOT LOG_ID(X)
inclusion and exclusion filters. As I was doing so, I began to wonder why I need to pay for the storage of so much logging. There’s the comfort of knowing that everything you may ever need is being logged (at least for 30 days) but there’s also the cost that that entails. I use logs exclusively for debugging, which got me thinking: couldn’t I just capture logs when I’m debugging (rather than all the time)? I’ve not taken that leap yet but I’m noodling on it.
Kubernetes metrics, metrics everywhere
I’ve been tinkering with ways to “unit-test” my assumptions when using cloud platforms. I recently wrote about good posts by Google that describe how to achieve cost savings with Cloud Monitoring and Cloud Logging:
- How to identify and reduce costs of your Google Cloud observability in Cloud Monitoring
- Cloud Logging pricing for Cloud Admins: How to approach it & save cost
With Cloud Monitoring, I’ve restricted the prometheus.googleapis.com
metrics that are being ingested but realized I wanted to track the number of Pods (and Containers) deployed to a GKE cluster.
Filtering metrics w/ Google Managed Prometheus
Google has published two very good blog posts on cost management:
- How to identify and reduce costs of your Google Cloud observability in Cloud Monitoring
- Cloud Logging pricing for Cloud Admins: How to approach it & save cost
This post is about my application cost reductions for Cloud Monitoring for Ackal.
I’m pleased with Google Cloud Managed Service for Prometheus (hereinafter GMP). I’ve a strong preference for letting service providers run components of Ackal that I consider important but non-differentiating.
Authenticate PromLens to Google Managed Prometheus
I’m using Google Managed Service for Prometheus (GMP) and liking it.
Some time ago, I tried using PromLens with GMP but GMP’s Prometheus HTTP API endpoint requires auth and I’ve battled Prometheus’ somewhat limited auth mechanism before (Scraping metrics exposed by Google Cloud Run services that require authentication).
Listening to PromCon EU 2022 videos, I learned that PromLens has been open sourced and contributed to the Prometheus project. Eventually, the functionality of PromLens should be combined into the Prometheus UI.
Tag: Kube State Metrics
Capturing e.g. CronJob metrics with GMP
The deployment of Kube State Metrics for Google Managed Prometheus creates both a PodMonitoring
and ClusterPodMonitoring
.
The PodMonitoring
resource exposes metrics published on metric-self
port (8081).
The ClusterPodMonitoring
exposes metrics published on metric
port (8080) but this doesn’t include cronjob
-related metrics:
kubectl get clusterpodmonitoring/kube-state-metrics \
--output=jsonpath="{.spec.endpoints[0].metricRelabeling}" \
| jq -r .
[
{
"action": "keep",
"regex": "kube_(daemonset|deployment|replicaset|pod|namespace|node|statefulset|persistentvolume|horizontalpodautoscaler|job_created)(_.+)?",
"sourceLabels": [
"__name__"
]
}
]
NOTE The
regex
does not includekube_cronjob
and only includeskube_job_created
patterns.
Prometheus VPA Recommendations
Phew!
I was interested in learning how to Manage Resources for Containers. On the way, I learned and discovered:
- kubectl top
- Vertical Pod Autoscaler
- A (valuable) digression through PodMonitor
- kube-state-metrics
- kubectl patch
- Created a Graph
- References
Kubernetes Resources
Visual Studio Code has begun to bug me (reasonably) to add resources
to Kubernetes manifests.
E.g.:
resources:
  limits:
    cpu: "1"
    memory: "512Mi"
I’ve been spending time with Deislabs’ Akri and decided to determine whether Akri’s primary resources (Agent, Controller) and some of my creations (HTTP Device and Discovery) were being suitably constrained.
Tag: Prometheus Operator
Capturing e.g. CronJob metrics with GMP
The deployment of Kube State Metrics for Google Managed Prometheus creates both a PodMonitoring
and ClusterPodMonitoring
.
The PodMonitoring
resource exposes metrics published on metric-self
port (8081).
The ClusterPodMonitoring
exposes metrics published on metric
port (8080) but this doesn’t include cronjob
-related metrics:
kubectl get clusterpodmonitoring/kube-state-metrics \
--output=jsonpath="{.spec.endpoints[0].metricRelabeling}" \
| jq -r .
[
{
"action": "keep",
"regex": "kube_(daemonset|deployment|replicaset|pod|namespace|node|statefulset|persistentvolume|horizontalpodautoscaler|job_created)(_.+)?",
"sourceLabels": [
"__name__"
]
}
]
NOTE The
regex
does not includekube_cronjob
and only includeskube_job_created
patterns.
Prometheus Operator support an auth proxy for Service Discovery
CRD linting
Returning to yesterday’s failing tests, it’s unclear how to introspect the E2E tests.
kubectl get namespaces
NAME STATUS AGE
...
allns-s2os2u-0-90f56669 Active 22h
allns-s2qhuw-0-6b33d5eb Active 4m23s
kubectl get all \
--namespace=allns-s2os2u-0-90f56669
No resources found in allns-s2os2u-0-90f56669 namespace.
kubectl get all \
--namespace=allns-s2qhuw-0-6b33d5eb
NAME READY STATUS RESTARTS AGE
pod/prometheus-operator-6c96477b9c-q6qm2 1/1 Running 0 4m12s
pod/prometheus-operator-admission-webhook-68bc9f885-nq6r8 0/1 ImagePullBackOff 0 4m7s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/prometheus-operator ClusterIP 10.152.183.247 <none> 443/TCP 4m9s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/prometheus-operator 1/1 1 1 4m12s
deployment.apps/prometheus-operator-admission-webhook 0/1 1 0 4m7s
NAME DESIRED CURRENT READY AGE
replicaset.apps/prometheus-operator-6c96477b9c 1 1 1 4m13s
replicaset.apps/prometheus-operator-admission-webhook-68bc9f885 1 1 0 4m8s
kubectl logs deployment/prometheus-operator-admission-webhook \
--namespace=allns-s2qhuw-0-6b33d5eb
Error from server (BadRequest): container "prometheus-operator-admission-webhook" in pod "prometheus-operator-admission-webhook-68bc9f885-nq6r8" is waiting to start: trying and failing to pull image
NAME="prometheus-operator-admission-webhook"
FILTER="{.spec.template.spec.containers[?(@.name==\"${NAME}\")].image}"
kubectl get deployment/prometheus-operator-admission-webhook \
--namespace=allns-s2qjz2-0-fad82c03 \
--output=jsonpath="${FILTER}"
quay.io/prometheus-operator/admission-webhook:52d1e55af
Want:
Prometheus Operator support an auth proxy for Service Discovery
For ackalctld
to be deployable to Kubernetes with Prometheus Operator, it is necessary to Enable ScrapeConfig
to use (discovery|target) proxies #5966. While I’m familiar with Kubernetes, Kubernetes operators (Ackal uses one built with the Operator SDK) and Prometheus Operator, I’m unfamiliar with developing Prometheus Operator itself. This post (and subsequent ones) will document some preliminary work on this.
Cloned Prometheus Operator
Branched scrape-config-url-proxy
I’m unsure how to effect these changes and unsure whether documentation exists.
Clearly, I will need to revise the ScrapeConfig
CRD to add the proxy_url
fields (one proxy_url
defines a proxy for the Service Discovery endpoint; the second defines a proxy for the targets themselves) and it would be useful for this to closely mirror the existing Prometheus HTTP Service Discovery use, namely http_sd_config:
Prometheus Operator `ScrapeConfig`
TL;DR Enable ScrapeConfig
to use (discovery|target) proxies
I’ve developed a companion, local daemon (called ackalctld
) for Ackal that provides a functionally close version of the service.
One way to deploy ackalctld
is to use Kubernetes and it would be convenient if the Prometheus metrics were scrapeable by Prometheus Operator.
In order for this to work, Prometheus Operator needs to be able to scrape Google Cloud Run targets because ackalctld
creates Cloud Run services for its health check clients.
Tag: Cloud-Logging
Listing Cloud Logging log-based metrics using gRPC
Referring to Accessing Google Services using gRPC, I wanted to query a project’s Cloud Logging for log-based metrics using gRPC.
In summary:
ENDPOINT="logging.googleapis.com:443"
ROOT="/path/to/googleapis" # https://github.com/googleapis/googleapis
PACKAGE="google/logging/v2"
# NB Not logging.proto
PROTO="${ROOT}/${PACKAGE}/logging_metrics.proto"
TOKEN=$(gcloud auth print-access-token)
PROJECT="..."
PACKAGE="google.logging.v2"
SERVICE="MetricsServiceV2"
METHOD="${PACKAGE}.${SERVICE}/ListLogMetrics"
# ListLogMetricsRequest fields
PARENT="projects/${PROJECT}"
grpcurl \
--import-path=${ROOT} \
--proto=${PROTO} \
-H "Authorization: Bearer ${TOKEN}" \
-d "{\"parent\": \"${PARENT}\"}" \
${ENDPOINT} ${METHOD}
From APIs Explorer, Cloud Logging API v2, instead of the REST reference, browse the gRPC reference, specifically the package google.logging.v2
which includes MetricsServiceV2
. We’re interested in the ListLogMetrics
method (which, unfortunately, isn’t directly hyperlinkable) and which is defined to be:
Golang Structured Logging w/ Google Cloud Logging (2)
UPDATE There’s an issue with my naive implementation of
RenderValuesHook
as described in this post. I summarized the problem in this issue where I’ve outlined (hopefully) a more robust solution.
Recently, I described how to configure Golang logging so that user-defined key-values applied to the logs are parsed when ingested by Google Cloud Logging.
Here’s an example of what we’re trying to achieve. This is an example Cloud Logging log entry that incorporates user-defined labels (see dog:freddie
and foo:bar
) and a readily-queryable jsonPayload
:
Golang Structured Logging w/ Google Cloud Logging
I’ve multiple components in an app and these are deployed across multiple Google Cloud Platform (GCP) services: Kubernetes Engine, Cloud Functions, Cloud Run, etc. Almost everything is written in Golang and I started the project using go-logr
.
logr
is in two parts: a Logger
that you use to write log entries; a LogSink
(adaptor) that consumes log entries and outputs them to a specific log implementation.
Initially, I defaulted to using stdr
which is a LogSink
for Go’s standard logging implementation. Something similar to the module’s example:
Tag: Scrape-Config-Url-Proxy
Prometheus Operator support an auth proxy for Service Discovery
CRD linting
Returning to yesterday’s failing tests, it’s unclear how to introspect the E2E tests.
kubectl get namespaces
NAME STATUS AGE
...
allns-s2os2u-0-90f56669 Active 22h
allns-s2qhuw-0-6b33d5eb Active 4m23s
kubectl get all \
--namespace=allns-s2os2u-0-90f56669
No resources found in allns-s2os2u-0-90f56669 namespace.
kubectl get all \
--namespace=allns-s2qhuw-0-6b33d5eb
NAME READY STATUS RESTARTS AGE
pod/prometheus-operator-6c96477b9c-q6qm2 1/1 Running 0 4m12s
pod/prometheus-operator-admission-webhook-68bc9f885-nq6r8 0/1 ImagePullBackOff 0 4m7s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/prometheus-operator ClusterIP 10.152.183.247 <none> 443/TCP 4m9s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/prometheus-operator 1/1 1 1 4m12s
deployment.apps/prometheus-operator-admission-webhook 0/1 1 0 4m7s
NAME DESIRED CURRENT READY AGE
replicaset.apps/prometheus-operator-6c96477b9c 1 1 1 4m13s
replicaset.apps/prometheus-operator-admission-webhook-68bc9f885 1 1 0 4m8s
kubectl logs deployment/prometheus-operator-admission-webhook \
--namespace=allns-s2qhuw-0-6b33d5eb
Error from server (BadRequest): container "prometheus-operator-admission-webhook" in pod "prometheus-operator-admission-webhook-68bc9f885-nq6r8" is waiting to start: trying and failing to pull image
NAME="prometheus-operator-admission-webhook"
FILTER="{.spec.template.spec.containers[?(@.name==\"${NAME}\")].image}"
kubectl get deployment/prometheus-operator-admission-webhook \
--namespace=allns-s2qjz2-0-fad82c03 \
--output=jsonpath="${FILTER}"
quay.io/prometheus-operator/admission-webhook:52d1e55af
Want:
Prometheus Operator support an auth proxy for Service Discovery
For ackalctld
to be deployable to Kubernetes with Prometheus Operator, it is necessary to Enable ScrapeConfig
to use (discovery|target) proxies #5966. While I’m familiar with Kubernetes, Kubernetes operators (Ackal uses one built with the Operator SDK) and Prometheus Operator, I’m unfamiliar with developing Prometheus Operator itself. This post (and subsequent ones) will document some preliminary work on this.
Cloned Prometheus Operator
Branched scrape-config-url-proxy
I’m unsure how to effect these changes and unsure whether documentation exists.
Clearly, I will need to revise the ScrapeConfig
CRD to add the proxy_url
fields (one proxy_url
defines a proxy for the Service Discovery endpoint; the second defines a proxy for the targets themselves) and it would be useful for this to closely mirror the existing Prometheus HTTP Service Discovery use, namely http_sd_config:
Tag: Exporter
Prometheus Exporter for Koyeb
Yet another cloud platform exporter for resource|cost management. This time for Koyeb with Koyeb Exporter.
Deploying resources to cloud platforms generally incurs cost based on the number of resources deployed, how long each resource is deployed and the cost (per period of time) of each resource. It is useful to be able to automatically measure and alert on all the resources deployed across all the platforms that you’re using, and that is the intent of these exporters.
Prometheus Exporter for Azure (Container Apps)
I’ve written Prometheus Exporters for various cloud platforms. My motivation for writing these Exporters is that I want a unified mechanism to track my usage of these platforms’ services. It’s easy to deploy a service on a platform and inadvertently leave it running (up a bill). The set of exporters is:
- Prometheus Exporter for Azure
- Prometheus Exporter for Fly.io
- Prometheus Exporter for GCP
- Prometheus Exporter for Linode
- Prometheus Exporter for Vultr
This post describes the recently-added Azure Exporter which only provides metrics for Container Apps and Resource Groups.
Prometheus Exporters for fly.io and Vultr
I’ve been on a roll building utilities this week. I developed a Service Health dashboard for my “thing”, a Prometheus Exporter for Fly.io and today, a Prometheus Exporter for Vultr. This is motivated by the fear that I will forget a deployed Cloud resource and incur a horrible bill.
I’ve now written several Prometheus Exporters for cloud platforms:
- Prometheus Exporter for GCP
- Prometheus Exporter for Linode
- Prometheus Exporter for Fly.io
- Prometheus Exporter for Vultr
Each of them monitors resource deployments and produces resource count metrics that can be scraped by Prometheus and alerted on with Alertmanager. I have Alertmanager configured to send notifications to Pushover. Last week I wrote an integration that enables Google Cloud Monitoring to send notifications to Pushover too.
Google Home Exporter
I’m obsessing over Prometheus exporters. First came Linode Exporter, then GCP Exporter and, on Sunday, I stumbled upon a reverse-engineered API for Google Home devices and so wrote a very basic Google Home SDK and a similarly basic Google Home Exporter:
The SDK only implements /setup/eureka_info
and then only some of the returned properties. There’s not a lot of metric-like data to use besides SignalLevel
(signal_level
) and NoiseLevel
(noise_level
). I’m not clear on the meaning of some of the properties.
Google Cloud Platform (GCP) Exporter
Earlier this week I discussed a Linode Prometheus Exporter.
I added metrics for Digital Ocean’s Managed Kubernetes service to @metalmatze’s Digital Ocean Exporter.
This left metrics for Google Cloud Platform (GCP), which has, for many years, been my primary cloud platform. So, today I wrote Prometheus Exporter for Google Cloud Platform.
All 3 of these exporters follow the template laid down by @metalmatze and, because each of these services has a well-written Golang SDK, it’s straightforward to implement an exporter for each of them.
Linode Prometheus Exporter
I enjoy using Prometheus and have toyed around with it for some time particularly in combination with Kubernetes. I signed up with Linode [referral] compelled by the addition of a managed Kubernetes service called Linode Kubernetes Engine (LKE). I have an anxiety that I’ll inadvertently leave resources running (unused) on a cloud platform. Instead of refreshing the relevant billing page, it struck me that Prometheus may (not yet proven) help.
The hypothesis is that a combination of a cloud-specific Prometheus exporter reporting aggregate uses of e.g. Linodes (instances), NodeBalancers, Kubernetes clusters etc., could form the basis of an alert mechanism using Prometheus’ alerting.
Tag: Ackal
Kubernetes Python SDK w/ CRDs
Responded to Get Custom K8s Resource using Python and found the CustomObjectsApi
documentation unclear.
If you have a cluster and a kubeconfig file with a correctly configured current-context
, so that you can successfully:
PLURAL="checks"
kubectl get ${PLURAL} \
--all-namespaces
NOTE I’m using Ackal’s CRDs in these examples.
Then you can use the following code to access the cluster’s REST API server and enumerate instances of its custom resources:
main.py
:
from __future__ import print_function
from kubernetes import client, config
from kubernetes.client.rest import ApiException
config.load_kube_config()
api = client.CustomObjectsApi()
# Ackal's Group|Version and some Kinds
group = 'ack.al'
version = 'v1'
plurals = ['checks','customers']
for plural in plurals:
    try:
        resp = api.list_cluster_custom_object(
            group,
            version,
            plural,
        )
        for item in resp["items"]:
            spec = item["spec"]
            print(spec)
    except ApiException as e:
        print(e)
python3 -m venv venv
source venv/bin/activate
python3 -m pip install kubernetes==26.1.0
python3 main.py
That’s all!
Azure Container Apps
The majority of Ackal’s components are deployed to Google Cloud. However, by its nature, Ackal benefits from deployments that span cloud platforms. I’ve deployed Ackal’s gRPC health checks to Fly, and managed Kubernetes services on Linode and Vultr.
Today, I decided to revisit¹ Azure. Ackal uses Azure (Active Directory) for one of its OAuth providers. This time, I wanted to deploy a containerized gRPC service. Azure provides several container-oriented services. I decided to use Azure Container Apps and, in hindsight, find it analogous to Google Cloud Run.
Tag: Goatcounter
GoatCounter with Hugo with Ananke
Thanks to Joe Mooring for his solution to my question as to how to do this.
I’ve decided to ditch Google Analytics and am evaluating GoatCounter with Hugo and the Ananke theme.
This was the layout of the site:
.
├── archetypes
├── content
│ └── posts
├── go.mod
├── go.sum
├── hugo.toml
├── layouts
├── public
└── static
This is the structure (of layouts
) with the necessary changes:
.
├── archetypes
├── content
│ └── posts
├── go.mod
├── go.sum
├── hugo.toml
├── layouts
│ ├── _default
│ │ └── baseof.html
│ └── partials
│ └── analytics.html
├── public
└── static
I copied /layouts/_default/baseof.html
from the gohugo-ananke-theme
repo into my site’s|repo’s /layouts/_default/baseof.html
.
Tag: Hugo
GoatCounter with Hugo with Ananke
Thanks to Joe Mooring for his solution to my question as to how to do this.
I’ve decided to ditch Google Analytics and am evaluating GoatCounter with Hugo and the Ananke theme.
This was the layout of the site:
.
├── archetypes
├── content
│ └── posts
├── go.mod
├── go.sum
├── hugo.toml
├── layouts
├── public
└── static
This is the structure (of layouts
) with the necessary changes:
.
├── archetypes
├── content
│ └── posts
├── go.mod
├── go.sum
├── hugo.toml
├── layouts
│ ├── _default
│ │ └── baseof.html
│ └── partials
│ └── analytics.html
├── public
└── static
I copied /layouts/_default/baseof.html
from the gohugo-ananke-theme
repo into my site’s|repo’s /layouts/_default/baseof.html
.
Deploying Hugo site to DigitalOcean Apps
I’ve been running a DigitalOcean Apps static site for Hugo using the Hugo Buildpack.
I’ve migrated a set of Hugo sites to use Hugo Modules which includes the addition of go.mod
(and go.sum
) files to the Hugo project in order to manage e.g. themes.
Unfortunately, the Hugo Buildpack used by DigitalOcean Apps does not support Hugo Modules. DigitalOcean support recommended that I switch to use a build with a Dockerfile. Unfortunately (!) the recommended Hugo container image (klakegg/hugo
) is outdated (0.107.0). The current version is 0.111.3
.
Hugo and Google Cloud Storage
I’m using Hugo as a static site generator for this blog. I’m using Firebase (for free) to host lefsilver.
I have other domains that I want to promote and decided to use Google Cloud Storage buckets for these sites. Using Google Cloud Storage for Hosting a static website and using Hugo to deploy sites to Google Cloud Storage (GCS) are documented but I didn’t find anywhere that combines them into a single tutorial, and I wanted to add an explanation of how to ensure your sites are included in Google’s and Bing’s search indexes.
Tag: GKE
Routing Firestore events to GKE with Eventarc
Google announced Firestore … integration with Eventarc. Ackal uses Firestore to persist Customer and Check information and it uses Google Cloud Firestore Triggers to handle events on these document types.
Eventarc feels like the strategic future of eventing in Google Cloud and I’ve been concerned since adopting the technology that Google would abandon Google Cloud Firestore Triggers.
For this reason, when I saw last week’s announcement, I thought I should evaluate the mechanism and this blog post is a summary of that work.
Kubernetes Engine and Free Tier
Google Cloud Platform Free Tier appears (please verify this for yourself) to provide the ability to run a(n admittedly minuscule) Kubernetes cluster for free. So, why do this? It provides a definitive Kubernetes (Engine) experience on Google Cloud Platform that you may use for learning and testing.
With Kubernetes Engine, the master node(s) and the control plane are free.
Kubernetes (i.e. Compute Engine) nodes potentially incur charges including for the VM runtime and any attached storage, snapshots etc. However, charges for these resources can be partially covered by the Free Tier.
Tag: Apps
Deploying Hugo site to DigitalOcean Apps
I’ve been running a DigitalOcean Apps static site for Hugo using the Hugo Buildpack.
I’ve migrated a set of Hugo sites to use Hugo Modules which includes the addition of go.mod
(and go.sum
) files to the Hugo project in order to manage e.g. themes.
Unfortunately, the Hugo Buildpack used by DigitalOcean Apps does not support Hugo Modules. DigitalOcean support recommended that I switch to use a build with a Dockerfile. Unfortunately (!) the recommended Hugo container image (klakegg/hugo
) is outdated (0.107.0). The current version is 0.111.3
.
Tag: Digitalocean
Deploying Hugo site to DigitalOcean Apps
I’ve been running a DigitalOcean Apps static site for Hugo using the Hugo Buildpack.
I’ve migrated a set of Hugo sites to use Hugo Modules which includes the addition of go.mod
(and go.sum
) files to the Hugo project in order to manage e.g. themes.
Unfortunately, the Hugo Buildpack used by DigitalOcean Apps does not support Hugo Modules. DigitalOcean support recommended that I switch to use a build with a Dockerfile. Unfortunately (!) the recommended Hugo container image (klakegg/hugo
) is outdated (0.107.0). The current version is 0.111.3
.
Prometheus Exporters for fly.io and Vultr
I’ve been on a roll building utilities this week. I developed a Service Health dashboard for my “thing”, a Prometheus Exporter for Fly.io and today, a Prometheus Exporter for Vultr. This is motivated by the fear that I will forget a deployed Cloud resource and incur a horrible bill.
I’ve now written several Prometheus Exporters for cloud platforms:
- Prometheus Exporter for GCP
- Prometheus Exporter for Linode
- Prometheus Exporter for Fly.io
- Prometheus Exporter for Vultr
Each of them monitors resource deployments and produces resource count metrics that can be scraped by Prometheus and alerted on with Alertmanager. I have Alertmanager configured to send notifications to Pushover. Last week I wrote an integration that enables Google Cloud Monitoring to send notifications to Pushover too.
Krustlet on DO Managed Kubernetes
I’ve spent time this week returning to the interesting Deislabs project Krustlet. Since the last time, the bootstrapping process has been simplified using Kubernetes Bootstrap Tokens. I know this updated process works with MicroK8s. Unfortunately, I’m struggling with it on GKE and thought I’d try DigitalOcean Managed Kubernetes.
It worked first time!
In the following, we run both the Kubernetes cluster and the Krustlet Droplet on DigitalOcean but, as long as the cluster and the VM are able to communicate with one another, you should be able to run these anywhere.
Deploying a Rust HTTP server to DigitalOcean App Platform
DigitalOcean launched an App Platform with many Supported Languages and Frameworks. I used Golang first, then wondered how to use non-natively-supported languages, i.e. Rust.
The good news is that Docker is a supported framework and so, you can run pretty much anything.
Repo: https://github.com/DazWilkin/do-apps-rust
Rust
I’m a Rust noob. I’m always receptive to feedback on improvements to the code. I looked to mirror the Golang example. I’m using rocket and rocket-prometheus for the first time:
You will want to install rust nightly (as Rocket has a dependency that requires it) and then you can override the default toolchain for the current project using:
Tag: Cloud-Monitoring
Robusta KRR w/ GMP
I’ve been spending time recently optimizing Ackal’s use of Google Cloud Logging and Cloud Monitoring in posts:
- Filtering metrics w/ Google Managed Prometheus
- Kubernetes metrics, metrics everywhere
- Google Metric Diagnostics and Metric Data Ingested
Yesterday, I read that Robusta has a new open source project Kubernetes Resource Recommendations (KRR) so I took some time to evaluate it.
This post describes the changes I had to make to get KRR working with Google Managed Prometheus (GMP):
Google Metric Diagnostics and Metric Data Ingested
I’ve been on an efficiency drive with Cloud Logging and Cloud Monitoring.
With regard to Cloud Logging, I’m contemplating (!) eliminating almost all log storage. As it is, I’ve buzz-cut log storage with a _Default
sink that has comprehensive sets of NOT LOG_ID(X)
inclusion and exclusion filters. As I was doing so, I began to wonder why I need to pay for the storage of so much logging. There’s the comfort of knowing that everything you may ever need is being logged (at least for 30 days) but there’s also the cost that that entails. I use logs exclusively for debugging, which got me thinking: couldn’t I just capture logs when I’m debugging (rather than all the time)? I’ve not taken that leap yet but I’m noodling on it.
Using Google Monitoring Alerting to send Pushover notifications
Table of Contents
Artifacts
- GitHub:
go-gcp-pushover-notificationchannel
- Image:
ghcr.io/dazwilkin/go-gcp-pushover-notificationchannel:220515
Pushover
Logging in to your Pushover account, you will be presented with a summary|dashboard page that includes Your User Key
. Copy the value of this key into a variable called PUSHOVER_USER
Create New Application|API Token
Pushover API has a Pushing Messages method. The documentation describes the format of the HTTP Request. It must be a POST
using TLS (https://
) to https://api.pushover.net/1/messages.json
. The content-type
should be application/json
. In the JSON body of the message, we must include token
(PUSHOVER_TOKEN
), user
(PUSHOVER_USER_KEY
), device
(we’ll use cloud-monitoring
) and a title
and a message
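Pulled together as a quick curl sketch (it reuses the PUSHOVER_TOKEN and PUSHOVER_USER variables from above; the title and message values are placeholders):
curl \
--silent \
--request POST \
--header "Content-Type: application/json" \
--data "{\"token\":\"${PUSHOVER_TOKEN}\",\"user\":\"${PUSHOVER_USER}\",\"device\":\"cloud-monitoring\",\"title\":\"Test\",\"message\":\"Hello from Cloud Monitoring\"}" \
https://api.pushover.net/1/messages.json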
Tag: Robusta
Robusta KRR w/ GMP
I’ve been spending time recently optimizing Ackal’s use of Google Cloud Logging and Cloud Monitoring in posts:
- Filtering metrics w/ Google Managed Prometheus
- Kubernetes metrics, metrics everywhere
- Google Metric Diagnostics and Metric Data Ingested
Yesterday, I read that Robusta has a new open source project Kubernetes Resource Recommendations (KRR) so I took some time to evaluate it.
This post describes the changes I had to make to get KRR working with Google Managed Prometheus (GMP):
Tag: Metrics
Google Metric Diagnostics and Metric Data Ingested
I’ve been on an efficiency drive with Cloud Logging and Cloud Monitoring.
With regard to Cloud Logging, I’m contemplating (!) eliminating almost all log storage. As it is, I’ve buzz-cut log storage with a _Default
sink that has comprehensive sets of NOT LOG_ID(X)
inclusion and exclusion filters. As I was doing so, I began to wonder why I need to pay for the storage of so much logging. There’s the comfort of knowing that everything you may ever need is being logged (at least for 30 days) but there’s also the cost that that entails. I use logs exclusively for debugging, which got me thinking: couldn’t I just capture logs when I’m debugging (rather than all the time)? I’ve not taken that leap yet but I’m noodling on it.
Kubernetes metrics, metrics everywhere
I’ve been tinkering with ways to “unit-test” my assumptions when using cloud platforms. I recently wrote about good posts by Google that describe how to achieve cost savings with Cloud Monitoring and Cloud Logging:
- How to identify and reduce costs of your Google Cloud observability in Cloud Monitoring
- Cloud Logging pricing for Cloud Admins: How to approach it & save cost
With Cloud Monitoring, I’ve restricted the prometheus.googleapis.com
metrics that are being ingested but realized I wanted to track the number of Pods (and Containers) deployed to a GKE cluster.
Prometheus Exporters for fly.io and Vultr
I’ve been on a roll building utilities this week. I developed a Service Health dashboard for my “thing”, a Prometheus Exporter for Fly.io and today, a Prometheus Exporter for Vultr. This is motivated by the fear that I will forget a deployed Cloud resource and incur a horrible bill.
I’ve now written several Prometheus Exporters for cloud platforms:
- Prometheus Exporter for GCP
- Prometheus Exporter for Linode
- Prometheus Exporter for Fly.io
- Prometheus Exporter for Vultr
Each of them monitors resource deployments and produces resource count metrics that can be scraped by Prometheus and alerted on with Alertmanager. I have Alertmanager configured to send notifications to Pushover. Last week I wrote an integration that enables Google Cloud Monitoring to send notifications to Pushover too.
Multiplexing gRPC and HTTP (Prometheus) endpoints with Cloud Run
Google Cloud Run is useful but each service is limited to exposing a single port. This caused me problems with a gRPC service that serves (non-gRPC) Prometheus metrics because customarily, you would serve gRPC on one port and the Prometheus metrics on another.
Fortunately, cmux provides a solution by providing a mechanism that multiplexes both services (gRPC and HTTP) on a single port!
TL;DR See the cmux Limitations and use:
grpcl := m.MatchWithWriters(cmux.HTTP2MatchHeaderFieldSendSettings("content-type", "application/grpc"))
Extending the example from the cmux repo:
Tag: Azure
Prometheus Exporter for Azure (Container Apps)
I’ve written Prometheus Exporters for various cloud platforms. My motivation for writing these Exporters is that I want a unified mechanism to track my usage of these platforms’ services. It’s easy to deploy a service on a platform and inadvertently leave it running (up a bill). The set of exporters is:
- Prometheus Exporter for Azure
- Prometheus Exporter for Fly.io
- Prometheus Exporter for GCP
- Prometheus Exporter for Linode
- Prometheus Exporter for Vultr
This post describes the recently-added Azure Exporter which only provides metrics for Container Apps and Resource Groups.
Azure Container Apps
The majority of Ackal’s components are deployed to Google Cloud. However, by its nature, Ackal benefits from deployments that span cloud platforms. I’ve deployed Ackal’s gRPC health checks to Fly, and managed Kubernetes services on Linode and Vultr.
Today, I decided to revisit¹ Azure. Ackal uses Azure (Active Directory) for one of its OAuth providers. This time, I wanted to deploy a containerized gRPC service. Azure provides several container-oriented services. I decided to use Azure Container Apps and, in hindsight, find it analogous to Google Cloud Run.
Tag: Container Apps
Prometheus Exporter for Azure (Container Apps)
I’ve written Prometheus Exporters for various cloud platforms. My motivation for writing these Exporters is that I want a unified mechanism to track my usage of these platforms’ services. It’s easy to deploy a service on a platform and inadvertently leave it running (up a bill). The set of exporters is:
- Prometheus Exporter for Azure
- Prometheus Exporter for Fly.io
- Prometheus Exporter for GCP
- Prometheus Exporter for Linode
- Prometheus Exporter for Vultr
This post describes the recently-added Azure Exporter which only provides metrics for Container Apps and Resource Groups.
Azure Container Apps
The majority of Ackal’s components are deployed to Google Cloud. However, by its nature, Ackal benefits from deployments that span cloud platforms. I’ve deployed Ackal’s gRPC health checks to Fly, and managed Kubernetes services on Linode and Vultr.
Today, I decided to revisit¹ Azure. Ackal uses Azure (Active Directory) for one of its OAuth providers. This time, I wanted to deploy a containerized gRPC service. Azure provides several container-oriented services. I decided to use Azure Container Apps and, in hindsight, find it analogous to Google Cloud Run.
Tag: Kubelet
Kubernetes metrics, metrics everywhere
I’ve been tinkering with ways to “unit-test” my assumptions when using cloud platforms. I recently wrote about good posts by Google that describe how to achieve cost savings with Cloud Monitoring and Cloud Logging:
- How to identify and reduce costs of your Google Cloud observability in Cloud Monitoring
- Cloud Logging pricing for Cloud Admins: How to approach it & save cost
With Cloud Monitoring, I’ve restricted the prometheus.googleapis.com
metrics that are being ingested but realized I wanted to track the number of Pods (and Containers) deployed to a GKE cluster.
Tag: Apps Script
Apps Script connecting to GCS
I’m building a Google Sheet that interacts with Google Cloud Storage (GCS) objects using Apps Script.
I Googled but found few examples of such integrations beyond out-of-band solutions (e.g. Python solutions) that interact with Google services and program Google Sheets using its library.
In my case, I’m going to bind a Google Sheet to a specific Google Cloud project and my Google (User) account has owner
access to the Google Cloud Storage bucket and its objects.
Tag: Google Cloud Storage
Apps Script connecting to GCS
I’m building a Google Sheet that interacts with Google Cloud Storage (GCS) objects using Apps Script.
I Googled but found few examples of such integrations beyond out-of-band solutions (e.g. Python solutions) that interact with Google services and program Google Sheets using its library.
In my case, I’m going to bind a Google Sheet to a specific Google Cloud project and my Google (User) account has owner
access to the Google Cloud Storage bucket and its objects.
Tag: Bash
Kubernetes Operators
Ackal uses a Kubernetes Operator to orchestrate the lifecycle of its health checks. Ackal’s Operator is written in Go using kubebuilder
.
Yesterday, my interest was piqued by a MetalBear blog post Writing a Kubernetes Operator [in Rust]. I spent some time reimplementing one of Ackal’s CRDs (Check
) using kube-rs
and not only refreshed my Rust knowledge but learned a bunch more about Kubernetes and Operators.
While rummaging around the Kubernetes documentation, I discovered flant’s Shell-operator
and spent some time today exploring its potential.
Infrastructure as Code
Problem
I’m building an application that comprises:
- Kubernetes¹
- Kubernetes Operator
- Cloud Firestore
- Cloud Functions
- Cloud Run
- Cloud Endpoints
- Stripe
- Firebase Authentication
¹ - I’m using Google Kubernetes Engine (GKE) but may include other managed Kubernetes offerings (e.g. Digital Ocean, Linode, Oracle). GKE clusters are manageable by
gcloud
but other platforms require other CLI tools. All are accessible from bash but are these supported by e.g. Terraform (see below)?
Many of the components are packaged as container images and, because I’m using GitHub to host the project’s repos (I’ll leave the monorepo discussion for another post), I’ve become inculcated and use GitHub Container Registry (GHCR) as the container repo.
Tag: Operators
Kubernetes Operators
Ackal uses a Kubernetes Operator to orchestrate the lifecycle of its health checks. Ackal’s Operator is written in Go using kubebuilder
.
Yesterday, my interest was piqued by a MetalBear blog post Writing a Kubernetes Operator [in Rust]. I spent some time reimplementing one of Ackal’s CRDs (Check
) using kube-rs
and not only refreshed my Rust knowledge but learned a bunch more about Kubernetes and Operators.
While rummaging around the Kubernetes documentation, I discovered flant’s Shell-operator
and spent some time today exploring its potential.
Tag: Linode
Secure (TLS) gRPC services with LKE
NOTE
cert-manager
is a better solution to what follows.
I wrote about deploying Secure (TLS) gRPC services with Vultr Kubernetes Engine (VKE). This week, I’ve reproduced this deployment using Linode Kubernetes Engine (LKE).
Thanks to the consistency provided by Kubernetes, the Kubernetes programming is almost identical. The main differences are between the CLIs provided by these platforms. Both are good. They’re just different.
I’m going to include the linode-cli
commands I’m using in this post as I found it slightly more quirky.
Prometheus AlertManager
Yesterday I discussed a Linode Prometheus Exporter and teased the use of Prometheus AlertManager.
Success:
Configure
The process is straightforward although I found the Prometheus (config) documentation slightly unwieldy to navigate :-(
The overall process is documented.
Here are the steps I took:
Configure Prometheus
Added the following to prometheus.yml
:
rule_files:
  - "/etc/alertmanager/rules/linode.yml"

alerting:
  alertmanagers:
    - scheme: http
      static_configs:
        - targets:
            - "alertmanager:9093"
Rules must be defined in separate rules files. See below for the content for linode.yml
and an explanation.
Linode Prometheus Exporter
I enjoy using Prometheus and have toyed around with it for some time particularly in combination with Kubernetes. I signed up with Linode [referral] compelled by the addition of a managed Kubernetes service called Linode Kubernetes Engine (LKE). I have an anxiety that I’ll inadvertently leave resources running (unused) on a cloud platform. Instead of refreshing the relevant billing page, it struck me that Prometheus may (not yet proven) help.
The hypothesis is that a combination of a cloud-specific Prometheus exporter reporting aggregate uses of e.g. Linodes (instances), NodeBalancers, Kubernetes clusters etc., could form the basis of an alert mechanism using Prometheus’ alerting.
Tag: Lke
Secure (TLS) gRPC services with LKE
NOTE
cert-manager
is a better solution to what follows.
I wrote about deploying Secure (TLS) gRPC services with Vultr Kubernetes Engine (VKE). This week, I’ve reproduced this deployment using Linode Kubernetes Engine (LKE).
Thanks to the consistency provided by Kubernetes, the Kubernetes programming is almost identical. The main differences are between the CLIs provided by these platforms. Both are good. They’re just different.
I’m going to include the linode-cli
commands I’m using in this post as I found it slightly more quirky.
Tag: Ssl
Secure (TLS) gRPC services with LKE
NOTE
cert-manager
is a better solution to what follows.
I wrote about deploying Secure (TLS) gRPC services with Vultr Kubernetes Engine (VKE). This week, I’ve reproduced this deployment using Linode Kubernetes Engine (LKE).
Thanks to the consistency provided by Kubernetes, the Kubernetes programming is almost identical. The main differences are between the CLIs provided by these platforms. Both are good. They’re just different.
I’m going to include the linode-cli
commands I’m using in this post as I found it slightly more quirky.
Secure (TLS) gRPC services with VKE
NOTE
cert-manager
is a better solution to what follows.
I’ve a need to deploy a Vultr Kubernetes Engine (VKE) cluster on a daily basis (create and delete within a few hours) and expose (securely|TLS) a gRPC service.
I have an existing solution Automatic Certs w/ Golang gRPC service on Compute Engine that combines a gRPC Healthchecking and an ACME service and decided to reuse this.
In order for it to work, we need:
Tag: Tls
Secure (TLS) gRPC services with LKE
NOTE
cert-manager
is a better solution to what follows.
I wrote about deploying Secure (TLS) gRPC services with Vultr Kubernetes Engine (VKE). This week, I’ve reproduced this deployment using Linode Kubernetes Engine (LKE).
Thanks to the consistency provided by Kubernetes, the Kubernetes programming is almost identical. The main differences are between the CLIs provided by these platforms. Both are good. They’re just different.
I’m going to include the linode-cli
commands I’m using in this post as I found it slightly more quirky.
Secure (TLS) gRPC services with VKE
NOTE
cert-manager
is a better solution to what follows.
I’ve a need to deploy a Vultr Kubernetes Engine (VKE) cluster on a daily basis (create and delete within a few hours) and expose (securely|TLS) a gRPC service.
I have an existing solution Automatic Certs w/ Golang gRPC service on Compute Engine that combines a gRPC Healthchecking and an ACME service and decided to reuse this.
In order for it to work, we need:
Tag: GCP
Authenticate PromLens to Google Managed Prometheus
I’m using Google Managed Service for Prometheus (GMP) and liking it.
Some time ago, I tried using PromLens with GMP but GMP’s Prometheus HTTP API endpoint requires auth and I’ve battled Prometheus’ somewhat limited auth mechanism before (Scraping metrics exposed by Google Cloud Run services that require authentication).
Listening to PromCon EU 2022 videos, I learned that PromLens has been open sourced and contributed to the Prometheus project. Eventually, the functionality of PromLens should be combined into the Prometheus UI.
Prometheus Exporters for fly.io and Vultr
I’ve been on a roll building utilities this week. I developed a Service Health dashboard for my “thing”, a Prometheus Exporter for Fly.io and today, a Prometheus Exporter for Vultr. This is motivated by the fear that I will forget a deployed Cloud resource and incur a horrible bill.
I’ve now written several Prometheus Exporters for cloud platforms:
- Prometheus Exporter for GCP
- Prometheus Exporter for Linode
- Prometheus Exporter for Fly.io
- Prometheus Exporter for Vultr
Each of them monitors resource deployments and produces resource count metrics that can be scraped by Prometheus and alerted on with Alertmanager. I have Alertmanager configured to send notifications to Pushover. Last week I wrote an integration that lets Google Cloud Monitoring send notifications to Pushover too.
Using Google Monitoring Alerting to send Pushover notifications
Table of Contents
Artifacts
- GitHub:
go-gcp-pushover-notificationchannel
- Image:
ghcr.io/dazwilkin/go-gcp-pushover-notificationchannel:220515
Pushover
Logging in to your Pushover account, you will be presented with a summary|dashboard page that includes Your User Key
. Copy the value of this key into a variable called PUSHOVER_USER
Create New Application|API Token
Pushover API has a Pushing Messages method. The documentation describes the format of the HTTP Request. It must be a POST
using TLS (https://
) to https://api.pushover.net/1/messages.json
. The content-type
should be application/json
. In the JSON body of the message, we must include token
(PUSHOVER_TOKEN
), user
(PUSHOVER_USER_KEY
), device
(we’ll use cloud-monitoring
) and a title
and a message
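A minimal sketch of that request in Go (illustrative only; the environment variable names and the message content are placeholders):

// A minimal sketch of the "Pushing Messages" request described above;
// environment variable names and the message content are placeholders.
package main

import (
	"bytes"
	"encoding/json"
	"log"
	"net/http"
	"os"
)

func main() {
	body, _ := json.Marshal(map[string]string{
		"token":   os.Getenv("PUSHOVER_TOKEN"),
		"user":    os.Getenv("PUSHOVER_USER"),
		"device":  "cloud-monitoring",
		"title":   "Test",
		"message": "Hello from Cloud Monitoring",
	})

	resp, err := http.Post(
		"https://api.pushover.net/1/messages.json",
		"application/json",
		bytes.NewReader(body),
	)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	log.Println(resp.Status)
}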
Cloud Run custom domain mappings
I have several Cloud Run services that I want to map to a domain.
During development, I create a Google Cloud Platform (GCP) project each day into which everything is deployed. This means that, every day, the Cloud Run services have newly non-inferable (to me) URLs. I thought this would be tedious to manage because:
- My DNS service isn’t programmable (I know!)
- Cloud Run services have non-inferable (by me) URLs
i.e. I thought I’d have to manually update the DNS entries each day.
Automating Scheduled Firestore Exports
For my “thing”, I use Firestore to persist state. I like Firestore a lot and, having been around Google for almost (!) a decade, I much prefer it to Datastore.
Firestore has a managed export|import service and I use this to backup Firestore collections|documents.
I’d been doing backups manually (using gcloud
) and decided today to take the plunge and use Cloud Scheduler for the first time. I’d been reluctant to do this until now because I’d assumed incorrectly that I’d need to write a wrapping service to invoke the export.
Using Google's Public Certificate Authority with Golang autocert
Last year, I wrote about using Automatic Certs w/ Golang gRPC service on Compute Engine. That solution uses ACME with (the wonderful) Let’s Encrypt. Google is offering a private preview of Automate Public Certificates Lifecycle Management via RFC 8555 (ACME) and, because I’m using Google Cloud Platform extensively to build a “thing” and I think it would be useful to have a backup to Let’s Encrypt, I thought I’d give the solution a try. You’ll need to sign up for the private preview for what follows to work.
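A sketch of how this can look with autocert (illustrative only; the directory URL and the External Account Binding values are assumptions/placeholders rather than the post’s code):

// A sketch of golang.org/x/crypto/acme/autocert with an ACME External Account
// Binding (EAB); the directory URL and EAB values are assumptions/placeholders.
package main

import (
	"encoding/base64"
	"log"
	"net/http"

	"golang.org/x/crypto/acme"
	"golang.org/x/crypto/acme/autocert"
)

func main() {
	// The EAB HMAC key is issued base64url-encoded; decode it for acme.ExternalAccountBinding
	hmacKey, err := base64.RawURLEncoding.DecodeString("...")
	if err != nil {
		log.Fatal(err)
	}

	m := &autocert.Manager{
		Prompt:     autocert.AcceptTOS,
		Cache:      autocert.DirCache("certs"),
		HostPolicy: autocert.HostWhitelist("grpc.example.com"),
		Client: &acme.Client{
			// Assumed Google Public CA production directory
			DirectoryURL: "https://dv.acme-v02.api.pki.goog/directory",
		},
		ExternalAccountBinding: &acme.ExternalAccountBinding{
			KID: "...", // key ID issued alongside the EAB secret
			Key: hmacKey,
		},
	}

	s := &http.Server{
		Addr:      ":443",
		TLSConfig: m.TLSConfig(),
	}
	log.Fatal(s.ListenAndServeTLS("", ""))
}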
Infrastructure as Code
Problem
I’m building an application that comprises:
- Kubernetes¹
- Kubernetes Operator
- Cloud Firestore
- Cloud Functions
- Cloud Run
- Cloud Endpoints
- Stripe
- Firebase Authentication
¹ - I’m using Google Kubernetes Engine (GKE) but may include other managed Kubernetes offerings (e.g. Digital Ocean, Linode, Oracle). GKE clusters are manageable by
gcloud
but other platforms require other CLI tools. All are accessible from bash but are these supported by e.g. Terraform (see below)?
Many of the components are packaged as container images and, because I’m using GitHub to host the project’s repos (I’ll leave the monorepo discussion for another post), I’ve become inculcated and use GitHub Container Registry (GHCR) as the container repo.
Cloud Firestore Triggers in Golang
I was pleased to discover that Google provides a non-Node.JS mechanism to subscribe to and act upon Firestore triggers, Google Cloud Firestore Triggers. I’ve nothing against Node.JS but, for the project I’m developing, everything else is written in Golang. It’s good to keep it all in one language.
I’m perplexed that Cloud Functions still (!) only supports Go 1.13 (03-Sep-2019). Even Go 1.14 (25-Feb-2020) was released pre-pandemic and we’re now running on 1.16. Come on Google!
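The documented pattern is a function that receives the Firestore event payload; a trimmed, purely illustrative sketch (types abbreviated):

// A trimmed, illustrative sketch of a Firestore-triggered Cloud Function in Go.
package p

import (
	"context"
	"log"
	"time"
)

// FirestoreEvent is (part of) the payload of a Firestore event.
type FirestoreEvent struct {
	OldValue FirestoreValue `json:"oldValue"`
	Value    FirestoreValue `json:"value"`
}

// FirestoreValue holds Firestore document data.
type FirestoreValue struct {
	CreateTime time.Time              `json:"createTime"`
	UpdateTime time.Time              `json:"updateTime"`
	Name       string                 `json:"name"`
	Fields     map[string]interface{} `json:"fields"`
}

// OnWrite is triggered by a change to a Firestore document
// (event type providers/cloud.firestore/eventTypes/document.write).
func OnWrite(ctx context.Context, e FirestoreEvent) error {
	log.Printf("document: %s", e.Value.Name)
	return nil
}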
WASM Cloud Functions
Following on from waPC & Protobufs and a question on Stack Overflow about Cloud Functions, I was compelled to try running WASM on Cloud Functions with no JavaScript.
I wanted to reuse the WASM waPC functions that I’d written in Rust as described in the other post. Cloud Functions does not (yet!?) provide a Rust runtime and so I’m using the waPC Host for Go in this example.
It works!
PARAMS=$(printf '{"a":{"real":39,"imag":3},"b":{"real":39,"imag":3}}' | base64)
TOKEN=$(gcloud auth print-identity-token)
echo "{
\"filename\":\"complex.wasm\",
\"function\":\"c:mul\",
\"params\":\"${PARAMS}\"
}" |\
curl \
--silent \
--request POST \
--header "Content-Type: application/json" \
--header "Authorization: Bearer ${TOKEN}" \
--data @- \
https://${REGION}-${PROJECT}.cloudfunctions.net/invoker
yields (correctly):
Setting up a GCE Instance as an Inlets Exit Node
The prolific Alex Ellis has a new project, Inlets.
Here’s a quick tutorial using Google Compute Platform’s (GCP) Compute Engine (GCE).
NB I’m using one of Google’s “Always free” f1-micro instances but you may still pay for network *gress and storage
Assumptions
I’m assuming you’ve a Google account, have used GCP and have a billing account established, i.e. the following returns at least one billing account:
gcloud beta billing accounts list
If you’ve only one billing account and it’s the one you wish to use, then you can:
Google Cloud Platform (GCP) Exporter
Earlier this week I discussed a Linode Prometheus Exporter.
I added metrics for Digital Ocean’s Managed Kubernetes service to @metalmatze’s Digital Ocean Exporter.
This left metrics for Google Cloud Platform (GCP) which has, for many years, been my primary cloud platform. So, today I wrote Prometheus Exporter for Google Cloud Platform.
All 3 of these exporters follow the template laid down by @metalmatze and, because each of these services has a well-written Golang SDK, it’s straightforward to implement an exporter for each of them.
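Very roughly, that template boils down to a prometheus.Collector whose Collect method calls the platform’s SDK and emits counts. An illustrative sketch (the metric name and the stubbed listInstances are made up, not any exporter’s actual code):

// An illustrative sketch of the exporter template: a prometheus.Collector
// that reports a resource count; listInstances stands in for a cloud SDK call.
package main

import (
	"log"
	"net/http"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

type instancesCollector struct {
	count *prometheus.Desc
}

func (c *instancesCollector) Describe(ch chan<- *prometheus.Desc) {
	ch <- c.count
}

func (c *instancesCollector) Collect(ch chan<- prometheus.Metric) {
	// A real exporter would call the platform's Golang SDK here
	ch <- prometheus.MustNewConstMetric(c.count, prometheus.GaugeValue, float64(listInstances()))
}

// listInstances is a stand-in for e.g. listing Linodes, Droplets or GCE instances
func listInstances() int { return 3 }

func main() {
	prometheus.MustRegister(&instancesCollector{
		count: prometheus.NewDesc("cloud_instances_count", "Number of deployed instances", nil, nil),
	})

	http.Handle("/metrics", promhttp.Handler())
	log.Fatal(http.ListenAndServe(":9090", nil))
}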
Tag: PromLens
Authenticate PromLens to Google Managed Prometheus
I’m using Google Managed Service for Prometheus (GMP) and liking it.
Some time ago, I tried using PromLens with GMP but GMP’s Prometheus HTTP API endpoint requires auth and I’ve battled Prometheus’ somewhat limited auth mechanism before (Scraping metrics exposed by Google Cloud Run services that require authentication).
Listening to PromCon EU 2022 videos, I learned that PromLens has been open sourced and contributed to the Prometheus project. Eventually, the functionality of PromLens should be combined into the Prometheus UI.
Tag: Container
Maintaining Container Images
As I contemplate moving my “thing” into production, I’m anticipating aspects of the application that need maintenance and how this can be automated.
I’d been negligent in the maintenance of some of my container images.
I’m using mostly Go and some Rust as the basis of static(ally-compiled) binaries that run in these containers but not every container has a base image of scratch
. scratch
is the only base image that doesn’t change and thus the only base image that doesn’t require that container images built FROM it be maintained.
Playing with GitHub Container Registry REST API
I’ve a day to catch up on blogging. I’m building a “thing” and getting this near to the finish line consumes my time and has meant that I’m not originating anything particularly new. However, there are a couple of tricks in my deployment process that may be of interest to others.
I’ve been a long-term user of Google’s [Cloud Build] and like the simplicity (everything’s a container) a lot! Because I’m using GitHub repos, I’ve been using GitHub Actions to (re)build containers on pushes and GitHub Container Registry (GHCR) to store the results. I know that Google provides analogs for GitHub repos and (forces me to use) Artifact Registry (to deploy my Cloud Run services) but, even though I dislike GitHub Actions, it’s really easy to do everything in one place.
Using Google's Public Certificate Authority with Golang autocert
Last year, I wrote about using Automatic Certs w/ Golang gRPC service on Compute Engine. That solution uses ACME with (the wonderful) Let’s Encrypt. Google is offering a private preview of Automate Public Certificates Lifecycle Management via RFC 8555 (ACME) and, because I’m using Google Cloud Platform extensively to build a “thing” and I think it would be useful to have a backup to Let’s Encrypt, I thought I’d give the solution a try. You’ll need to sign up for the private preview for what follows to work.
Automatic Certs w/ Golang gRPC service on Compute Engine
I needed to deploy a healthcheck-enabled gRPC TLS-enabled service. Fortunately, most (all?) of the SDKs include an implementation, e.g. Golang has grpc-go/health
.
I learned in my travels that:
- DigitalOcean [App] platform does not (link) work with TLS-based gRPC apps.
- Fly has a regression (link) that breaks gRPC
So, I resorted to Google Cloud Platform (GCP). Although Cloud Run would be well-suited to running the gRPC app, it uses a proxy|sidecar to provision a cert for the app and I wanted to be able to (easily use a custom domain) and give myself a somewhat general-purpose solution.
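For reference, registering grpc-go’s stock health service looks roughly like this (a sketch; the port is a placeholder and the empty service name reports overall server health):

// A sketch of grpc-go's stock health-check service; the port is a placeholder.
package main

import (
	"log"
	"net"

	"google.golang.org/grpc"
	"google.golang.org/grpc/health"
	healthpb "google.golang.org/grpc/health/grpc_health_v1"
)

func main() {
	l, err := net.Listen("tcp", ":50051")
	if err != nil {
		log.Fatal(err)
	}

	s := grpc.NewServer()

	h := health.NewServer()
	h.SetServingStatus("", healthpb.HealthCheckResponse_SERVING)
	healthpb.RegisterHealthServer(s, h)

	log.Fatal(s.Serve(l))
}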
Tag: Docker
Maintaining Container Images
As I contemplate moving my “thing” into production, I’m anticipating aspects of the application that need maintenance and how this can be automated.
I’d been negligent in the maintenance of some of my container images.
I’m using mostly Go and some Rust as the basis of static(ally-compiled) binaries that run in these containers but not every container has a base image of scratch
. scratch
is the only base image that doesn’t change and thus the only base image that doesn’t require that container images built FROM it be maintained.
Run cAdvisor when using Docker Compose
cAdvisor
has long been a favorite monitoring tool of mine. I’m using Docker Compose for local testing and have begun including a cAdvisor
in my docker-compose.yaml
files.
cadvisor:
  restart: always
  image: google/cadvisor:${CADVISOR_VERSION}
  container_name: cadvisor
  # command:
  # - --prometheus_endpoint="/metrics" # Default
  volumes:
    - "/:/rootfs:ro"
    - "/var/run:/var/run:rw"
    - "/sys:/sys:ro"
    - "/var/snap/docker/current:/var/lib/docker:ro" #- "/var/lib/docker/:/var/lib/docker:ro"
  expose:
    - "8080"
  ports:
    - 8080:8080
I’d not realized until recently that cAdvisor
also surfaces a Prometheus metrics endpoint and so, if you do follow this path and you’re also using Prometheus, don’t forget to add cAdvisor
to your Prometheus targets.
Tag: Podman
Maintaining Container Images
As I contemplate moving my “thing” into production, I’m anticipating aspects of the application that need maintenance and how this can be automated.
I’d been negligent in the maintenance of some of my container images.
I’m using mostly Go and some Rust as the basis of static(ally-compiled) binaries that run in these containers but not every container has a base image of scratch
. scratch
is the only base image that doesn’t change and thus the only base image that doesn’t require that container images built FROM it be maintained.
Podman
I’ve read about Podman and been intrigued by it but never taken the time to install it and play around. This morning, walking with my dog, I listened to the almost-always-interesting Kubernetes Podcast and two of the principals behind Podman were on the show to discuss it.
I decided to install it and use it in this week’s project.
Here’s a working Podman deployment for gcp-oidc-token-proxy
ACCOUNT="..."
ENDPOINT="..."
# Can't match container name i.e. prometheus
POD="foo"
SECRET="${ACCOUNT}"
podman secret create ${SECRET} ${PWD}/${ACCOUNT}.json
# Pod publishes pod-port:container-port
podman pod create \
--name=${POD} \
--publish=9091:9090 \
--publish=7776:7777
PROMETHEUS=$(mktemp)
# Important
chmod go+r ${PROMETHEUS}
sed \
--expression="s|some-service-xxxxxxxxxx-xx.a.run.app|${ENDPOINT}|g" \
${PWD}/prometheus.yml > ${PROMETHEUS}
# Prometheus
# Requires --tty
# Can't include --publish but exposes 9090
podman run \
--detach --rm --tty \
--pod=${POD} \
--name=prometheus \
--volume=${PROMETHEUS}:/etc/prometheus/prometheus.yml \
docker.io/prom/prometheus:v2.30.2 \
--config.file=/etc/prometheus/prometheus.yml \
--web.enable-lifecycle
# GCP OIDC Token Proxy
# Can't include --publish but exposes 7777
podman run \
--detach --rm \
--pod=${POD} \
--name=gcp-oidc-token-proxy \
--secret=${SECRET} \
--env=GOOGLE_APPLICATION_CREDENTIALS=/run/secrets/${SECRET} \
ghcr.io/dazwilkin/gcp-oidc-token-proxy:ec8fa9d9ab1b7fa47448ff32e34daa0c3d211a8d \
--port=7777
The prometheus
container includes a volume
mount.
Tag: Domain-Wide Delegation
Delegate domain-wide authority using Golang
I’d not used Google’s Domain-wide Delegation from Golang and struggled to find example code.
Google provides Java and Python samples.
Google has a myriad of packages implementing its OAuth security and it’s always daunting trying to determine which one to use.
As it happens, I backed into the solution through client.Options:
ctx := context.Background()

// Google Workspace APIs don't use IAM; they use OAuth scopes
// Scopes used here must be reflected in the scopes on the
// Google Workspace Domain-wide Delegation client
scopes := []string{ ... }

// Delegates on behalf of this Google Workspace user
subject := "a@google-workspace-email.com"

creds, _ := google.FindDefaultCredentialsWithParams(
	ctx,
	google.CredentialsParams{
		Scopes:  scopes,
		Subject: subject,
	},
)

opts := option.WithCredentials(creds)
service, _ := admin.NewService(ctx, opts)
In this case NewService
applies to Google’s Golang Admin SDK API although the pattern of NewService(ctx)
or NewService(ctx, opts)
where opts
is an option.ClientOption
is consistent across Google’s Golang libraries.
Tag: Google Workspace
Delegate domain-wide authority using Golang
I’d not used Google’s Domain-wide Delegation from Golang and struggled to find example code.
Google provides Java and Python samples.
Google has a myriad of packages implementing its OAuth security and it’s always daunting trying to determine which one to use.
As it happens, I backed into the solution through client.Options:
ctx := context.Background()

// Google Workspace APIs don't use IAM; they use OAuth scopes
// Scopes used here must be reflected in the scopes on the
// Google Workspace Domain-wide Delegation client
scopes := []string{ ... }

// Delegates on behalf of this Google Workspace user
subject := "a@google-workspace-email.com"

creds, _ := google.FindDefaultCredentialsWithParams(
	ctx,
	google.CredentialsParams{
		Scopes:  scopes,
		Subject: subject,
	},
)

opts := option.WithCredentials(creds)
service, _ := admin.NewService(ctx, opts)
In this case NewService
applies to Google’s Golang Admin SDK API although the pattern of NewService(ctx)
or NewService(ctx, opts)
where opts
is an option.ClientOption
is consistent across Google’s Golang libraries.
Tag: Curl
`curl`'ing a Tailscale Webhook
[Tailscale] is really good. I’ve been using it as a virtual private network to span 2 home networks and to securely (!) access my hosts when I’m remote.
Recently Tailscale added Webhook functionality to permit processing subscribed-to (Tailscale) events. I’m always a sucker for a webhook ;-)
Here’s a curl
command to send a test event to a Tailscale Webhook:
URL=""
# From Tailscale's docs
# https://tailscale.com/kb/1213/webhooks/#events-payload
BODY='
[
{
"timestamp": "2022-09-21T13:37:51.658918-04:00",
"version": 1,
"type": "test",
"tailnet": "example.com",
"message": "This is a test event",
"data": null
}
]
'
T=$(date +%s)
V=$(\
printf "${T}.${BODY}" \
| openssl dgst -sha256 -hmac "${SECRET}" -hex -r \
| head --bytes=64)
curl \
--request POST \
--header "Tailscale-Webhook-Signature:t=${T},v1=${V}" \
--header "Content-Type: application/json" \
--data "${BODY}" \
https://${URL}
There must be a better way of extracting the hashed value from the openssl
output.
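One alternative (a sketch, not from the post) is to compute the same t.body HMAC-SHA256 in Go rather than shelling out to openssl; the secret and body below are placeholders:

// A sketch: compute the t.body HMAC-SHA256 signature in Go instead of openssl.
package main

import (
	"crypto/hmac"
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"time"
)

func sign(secret, body string, t time.Time) string {
	mac := hmac.New(sha256.New, []byte(secret))
	fmt.Fprintf(mac, "%d.%s", t.Unix(), body)
	return hex.EncodeToString(mac.Sum(nil))
}

func main() {
	t := time.Now()
	body := `[{"type":"test","message":"This is a test event"}]`
	fmt.Printf("Tailscale-Webhook-Signature: t=%d,v1=%s\n", t.Unix(), sign("SECRET", body, t))
}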
Playing with GitHub Container Registry REST API
I’ve a day to catch up on blogging. I’m building a “thing” and getting this near to the finish line consumes my time and has meant that I’m not originating anything particularly new. However, there are a couple of tricks in my deployment process that may be of interest to others.
I’ve been a long-term user of Google’s [Cloud Build] and like the simplicity (everything’s a container) a lot! Because I’m using GitHub repos, I’ve been using GitHub Actions to (re)build containers on pushes and GitHub Container Registry (GHCR) to store the results. I know that Google provides analogs for GitHub repos and (forces me to use) Artifact Registry (to deploy my Cloud Run services) but, even though I dislike GitHub Actions, it’s really easy to do everything in one place.
Tag: Webhook
`curl`'ing a Tailscale Webhook
[Tailscale] is really good. I’ve been using it as a virtual private network to span 2 home networks and to securely (!) access my hosts when I’m remote.
Recently Tailscale added Webhook functionality to permit processing subscribed-to (Tailscale) events. I’m always a sucker for a webhook ;-)
Here’s a curl
command to send a test event to a Tailscale Webhook:
URL=""
# From Tailscale's docs
# https://tailscale.com/kb/1213/webhooks/#events-payload
BODY='
[
{
"timestamp": "2022-09-21T13:37:51.658918-04:00",
"version": 1,
"type": "test",
"tailnet": "example.com",
"message": "This is a test event",
"data": null
}
]
'
T=$(date +%s)
V=$(\
printf "${T}.${BODY}" \
| openssl dgst -sha256 -hmac "${SECRET}" -hex -r \
| head --bytes=64)
curl \
--request POST \
--header "Tailscale-Webhook-Signature:t=${T},v1=${V}" \
--header "Content-Type: application/json" \
--data "${BODY}" \
https://${URL}
There must be a better way of extracting the hashed value from the openssl
output.
Using Google Monitoring Alerting to send Pushover notifications
Table of Contents
Artifacts
- GitHub:
go-gcp-pushover-notificationchannel
- Image:
ghcr.io/dazwilkin/go-gcp-pushover-notificationchannel:220515
Pushover
Logging in to your Pushover account, you will be presented with a summary|dashboard page that includes Your User Key
. Copy the value of this key into a variable called PUSHOVER_USER
Create New Application|API Token
Pushover API has a Pushing Messages method. The documentation describes the format of the HTTP Request. It must be a POST
using TLS (https://
) to https://api.pushover.net/1/messages.json
. The content-type
should be application/json
. In the JSON body of the message, we must include token
(PUSHOVER_TOKEN
), user
(PUSHOVER_USER_KEY
), device
(we’ll use cloud-monitoring
) and a title
and a message
Kubernetes cert-manager
I developed an admission webhook for Akri, twice (Golang, Rust). I naively followed other examples for the generation of the certificates, created a 1.20 cluster and broke that process.
I’d briefly considered using cert-manager
recently but quickly abandoned the idea thinking it would be onerous and unnecessary complexity for little-old-me. I was wrong. It’s excellent and I recommend it highly.
I won’t reproduce the v1beta1
and v1
examples from the Stack Overflow question as they should be self-explanatory. I suspect (!?) that I should not have used Kubernetes’ (API Server’s) CA for the Webhook but it could well be that I just don’t understand the correct approach.
Kubernetes Webhooks
I spent some time last week writing my first admission webhook for Kubernetes. I wrote the handler in Golang because I’m most familiar with Golang and because, as Kubernetes’ native language, I was more confident that the necessary SDKs would exist and that the documentation would likely use Golang by default. I struggled to find useful documentation and so this post is to help you (and me!) remember how to do this next time!
Tag: Google-Cloud-Platform
The curious cases of the `deleted:serviceaccount`
While testing Firestore export and import yesterday and checking the IAM permissions on a Cloud Storage Bucket, I noticed some Member (member
) values (I think Google refers to these as Principals) were logical but unfamiliar to me:
deleted:serviceAccount:{email}?uid={uid}
I was using gsutil iam get gs://${BUCKET}
because I’d realized (and this is another useful lesson) that, as I’ve been creating daily test projects, I’ve been binding each project’s Firestore Service Account (service-{project-number}@gcp-sa-firestore.iam.gserviceaccount.com
) to a Bucket owned by another Project but I hadn’t been deleting the binding when I deleted the Project.
Firestore Export & Import
I’m using Firestore to maintain state in my “thing”.
In an attempt to ensure that I’m able to restore the database, I run (Cloud Scheduler) scheduled backups (see Automating Scheduled Firestore Exports and I’ve been testing imports to ensure that the process works.
It does.
I thought I’d document an important but subtle consideration with Firestore exports (which I’d not initially understood).
Google facilitates that backup process with the sibling commands:
Setting up a GCE Instance as an Inlets Exit Node
The prolific Alex Ellis has a new project, Inlets.
Here’s a quick tutorial using Google Compute Platform’s (GCP) Compute Engine (GCE).
NB I’m using one of Google’s “Always free” f1-micro instances but you may still pay for network *gress and storage
Assumptions
I’m assuming you’ve a Google account, have used GCP and have a billing account established, i.e. the following returns at least one billing account:
gcloud beta billing accounts list
If you’ve only one billing account and it’s the one you wish to use, then you can:
Kubernetes Engine and Free Tier
Google Cloud Platform Free Tier appears (please verify this for yourself) to provide the ability to run a(n admittedly minuscule) Kubernetes cluster for free. So, why do this? It provides a definitive Kubernetes (Engine) experience on Google Cloud Platform that you may use for learning and testing.
With Kubernetes Engine, the master node(s) and the control plane are free.
Kubernetes (i.e. Compute Engine) nodes potentially incur charges including for the VM runtime and any attached storage, snapshots etc. However, charges for these resources can be partially covered by the Free Tier.
Tag: Iam
The curious cases of the `deleted:serviceaccount`
While testing Firestore export and import yesterday and checking the IAM permissions on a Cloud Storage Bucket, I noticed some Member (member
) values (I think Google refers to these as Principals) were logical but unfamiliar to me:
deleted:serviceAccount:{email}?uid={uid}
I was using gsutil iam get gs://${BUCKET}
because I’d realized (and this is another useful lesson) that, as I’ve been creating daily test projects, I’ve been binding each project’s Firestore Service Account (service-{project-number}@gcp-sa-firestore.iam.gserviceaccount.com
) to a Bucket owned by another Project but I hadn’t been deleting the binding when I deleted the Project.
Tag: Service-Accounts
The curious cases of the `deleted:serviceaccount`
While testing Firestore export and import yesterday and checking the IAM permissions on a Cloud Storage Bucket, I noticed some Member (member
) values (I think Google refers to these as Principals) were logical but unfamiliar to me:
deleted:serviceAccount:{email}?uid={uid}
I was using gsutil iam get gs://${BUCKET}
because I’d realized (and this is another useful lesson) that, as I’ve been creating daily test projects, I’ve been binding each project’s Firestore Service Account (service-{project-number}@gcp-sa-firestore.iam.gserviceaccount.com
) to a Bucket owned by another Project but I hadn’t been deleting the binding when I deleted the Project.
Tag: Cloud-Firestore
Firestore Export & Import
I’m using Firestore to maintain state in my “thing”.
In an attempt to ensure that I’m able to restore the database, I run (Cloud Scheduler) scheduled backups (see Automating Scheduled Firestore Exports and I’ve been testing imports to ensure that the process works.
It does.
I thought I’d document an important but subtle consideration with Firestore exports (which I’d not initially understood).
Google facilitates that backup process with the sibling commands:
Tag: Github
Basic programmatic access to GitHub Issues
It’s been a while!
I’ve been spending time writing Bash scripts and a web site but neither has been sufficiently creative that I’ve felt it worth a blog post.
As I’ve been finalizing the web site, I needed an Issue Tracker and decided to leverage GitHub(’s Issues).
As a former Googler, I’m familiar with Google’s (excellent) internal issue tracking tool (Buganizer) and its public manifestation, Issue Tracker. Google documents Issue Tracker and its Issue type which I’ve mercilessly plagiarized in my implementation.
GitHub help with dependency management
This is very useful:
I am building an application that comprises multiple repos. I continue to procrastinate on whether using multiple repos vs. a monorepo was a good idea but an issue that I have (had) is the need to ensure that the repos’ contents are using current|latest modules. GitHub can help.
Most of the application is written in Golang with a smattering of Rust and some JavaScript.
GitHub Actions && GitHub Container Registry
You know when you start something and then regret it!? I think I’ll be sticking with Google Cloud Build; GitHub Actions appears functional and useful but I found the documentation to be confusing and limited and struggled to get a simple container image build|push working.
I’ve long used Docker Hub but am planning to use it less as a result of the planned changes. I want to see Docker succeed and to do so it needs to find a way to make money but, there are free alternatives including the new GitHub Container Registry and the very very cheap Google Container Registry.
Tag: Vke
Secure (TLS) gRPC services with VKE
NOTE
cert-manager
is a better solution to what follows.
I’ve a need to deploy a Vultr Kubernetes Engine (VKE) cluster on a daily basis (create and delete within a few hours) and expose (securely|TLS) a gRPC service.
I have an existing solution Automatic Certs w/ Golang gRPC service on Compute Engine that combines a gRPC Healthchecking and an ACME service and decided to reuse this.
In order for it to work, we need:
Vultr CLI and JSON output
I’ve begun exploring Vultr after the company announced a managed Kubernetes offering Vultr Kubernetes Engine (VKE).
In my brief experience, it’s a decent platform and its CLI vultr-cli
is mostly (!) good. The CLI has a limitation in that command output is text formatted and this makes it challenging to parse the output when scripting.
NOTE The Vultr developers have a branch
rewrite
that includes a solution to this problem.
Example
ID 12345678-90ab-cdef-1234-567890abcdef
LABEL test
DATE CREATED 2022-01-01T00:00:00+00:00
CLUSTER SUBNET 255.255.255.255/16
SERVICE SUBNET 255.255.255.255/12
IP 255.255.255.255
ENDPOINT 12345678-90ab-cdef-1234-567890abcdef.vultr-k8s.com
VERSION v1.23.5+3
REGION mars
STATUS pending
NODE POOLS
ID 12345678-90ab-cdef-1234-567890abcdef
DATE CREATED 2022-01-01T00:00:00+00:00
DATE UPDATED 2022-01-01T00:00:00+00:00
LABEL nodepool
TAG foo
PLAN vc2-1c-2gb
STATUS pending
NODE QUANTITY 1
AUTO SCALER false
MIN NODES 1
MAX NODES 1
NODES
ID DATE CREATED LABEL STATUS
12345678-
Until that’s available, I’m lazily writing very simple bash scripts to parse vultr-cli
command output as JSON. The repo is vultr-cli-format
.
Tag: Vultr
Secure (TLS) gRPC services with VKE
NOTE
cert-manager
is a better solution to what follows.
I’ve a need to deploy a Vultr Kubernetes Engine (VKE) cluster on a daily basis (create and delete within a few hours) and expose (securely|TLS) a gRPC service.
I have an existing solution Automatic Certs w/ Golang gRPC service on Compute Engine that combines a gRPC Healthchecking and an ACME service and decided to reuse this.
In order for it to work, we need:
Vultr CLI and JSON output
I’ve begun exploring Vultr after the company announced a managed Kubernetes offering Vultr Kubernetes Engine (VKE).
In my brief experience, it’s a decent platform and its CLI vultr-cli
is mostly (!) good. The CLI has a limitation in that command output is text formatted and this makes it challenging to parse the output when scripting.
NOTE The Vultr developers have a branch
rewrite
that includes a solution to this problem.
Example
ID 12345678-90ab-cdef-1234-567890abcdef
LABEL test
DATE CREATED 2022-01-01T00:00:00+00:00
CLUSTER SUBNET 255.255.255.255/16
SERVICE SUBNET 255.255.255.255/12
IP 255.255.255.255
ENDPOINT 12345678-90ab-cdef-1234-567890abcdef.vultr-k8s.com
VERSION v1.23.5+3
REGION mars
STATUS pending
NODE POOLS
ID 12345678-90ab-cdef-1234-567890abcdef
DATE CREATED 2022-01-01T00:00:00+00:00
DATE UPDATED 2022-01-01T00:00:00+00:00
LABEL nodepool
TAG foo
PLAN vc2-1c-2gb
STATUS pending
NODE QUANTITY 1
AUTO SCALER false
MIN NODES 1
MAX NODES 1
NODES
ID DATE CREATED LABEL STATUS
12345678-
Until that’s available, I’m lazily writing very simple bash scripts to parse vultr-cli
command output as JSON. The repo is vultr-cli-format
.
Prometheus Exporters for fly.io and Vultr
I’ve been on a roll building utilities this week. I developed a Service Health dashboard for my “thing”, a Prometheus Exporter for Fly.io and today, a Prometheus Exporter for Vultr. This is motivated by the fear that I will forget a deployed Cloud resource and incur a horrible bill.
I’ve now written several Prometheus Exporters for cloud platforms:
- Prometheus Exporter for GCP
- Prometheus Exporter for Linode
- Prometheus Exporter for Fly.io
- Prometheus Exporter for Vultr
Each of them monitors resource deployments and produces resource count metrics that can be scraped by Prometheus and alerted on with Alertmanager. I have Alertmanager configured to send notifications to Pushover. Last week I wrote an integration that lets Google Cloud Monitoring send notifications to Pushover too.
Tag: Cli
Vultr CLI and JSON output
I’ve begun exploring Vultr after the company announced a managed Kubernetes offering Vultr Kubernetes Engine (VKE).
In my brief experience, it’s a decent platform and its CLI vultr-cli
is mostly (!) good. The CLI has a limitation in that command output is text formatted and this makes it challenging to parse the output when scripting.
NOTE The Vultr developers have a branch
rewrite
that includes a solution to this problem.
Example
ID 12345678-90ab-cdef-1234-567890abcdef
LABEL test
DATE CREATED 2022-01-01T00:00:00+00:00
CLUSTER SUBNET 255.255.255.255/16
SERVICE SUBNET 255.255.255.255/12
IP 255.255.255.255
ENDPOINT 12345678-90ab-cdef-1234-567890abcdef.vultr-k8s.com
VERSION v1.23.5+3
REGION mars
STATUS pending
NODE POOLS
ID 12345678-90ab-cdef-1234-567890abcdef
DATE CREATED 2022-01-01T00:00:00+00:00
DATE UPDATED 2022-01-01T00:00:00+00:00
LABEL nodepool
TAG foo
PLAN vc2-1c-2gb
STATUS pending
NODE QUANTITY 1
AUTO SCALER false
MIN NODES 1
MAX NODES 1
NODES
ID DATE CREATED LABEL STATUS
12345678-
Until that’s available, I’m lazily writing very simple bash scripts to parse vultr-cli
command output as JSON. The repo is vultr-cli-format
.
Tag: Api
Automating HackMD documents
I was introduced to HackMD while working on an open-source project. It’s a collaborative editing tool for Markdown documents and there’s an API.
I wanted to be able to programmatically edit one of my documents with a daily update. The API is easy to use and my only challenge was futzing with escape characters in bash strings representing the document Markdown content.
You’ll need an account with HackMD and an API token (Create API Token) that I’ll refer to as TOKEN
.
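A sketch of the daily update call in Go (the base URL, endpoint path and payload shape here are assumptions based on my reading of the API docs; verify them before relying on this):

// A sketch only: the HackMD API base URL, path and payload shape below are
// assumptions; verify against the API documentation before use.
package main

import (
	"bytes"
	"encoding/json"
	"log"
	"net/http"
	"os"
)

func main() {
	token := os.Getenv("TOKEN")
	noteID := "..." // the note (document) ID

	body, _ := json.Marshal(map[string]string{
		"content": "# Daily update\n\nGenerated programmatically.",
	})

	// Assumed endpoint: PATCH /v1/notes/{noteID}
	req, _ := http.NewRequest(http.MethodPatch,
		"https://api.hackmd.io/v1/notes/"+noteID, bytes.NewReader(body))
	req.Header.Set("Authorization", "Bearer "+token)
	req.Header.Set("Content-Type", "application/json")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	log.Println(resp.Status)
}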
Tag: Hackmd
Automating HackMD documents
I was introduced to HackMD while working on an open-source project. It’s a collaborative editing tool for Markdown documents and there’s an API.
I wanted to be able to programmatically edit one of my documents with a daily update. The API is easy to use and my only challenge was futzing with escape characters in bash strings representing the document Markdown content.
You’ll need an account with HackMD and an API token (Create API Token) that I’ll refer to as TOKEN
.
Tag: Markdown
Automating HackMD documents
I was introduced to HackMD while working on an open-source project. It’s a collaborative editing tool for Markdown documents and there’s an API.
I wanted to be able to programmatically edit one of my documents with a daily update. The API is easy to use and my only challenge was futzing with escape characters in bash strings representing the document Markdown content.
You’ll need an account with HackMD and an API token (Create API Token) that I’ll refer to as TOKEN
.
Tag: Pushover
Using Google Monitoring Alerting to send Pushover notifications
Table of Contents
Artifacts
- GitHub:
go-gcp-pushover-notificationchannel
- Image:
ghcr.io/dazwilkin/go-gcp-pushover-notificationchannel:220515
Pushover
Logging in to your Pushover account, you will be presented with a summary|dashboard page that includes Your User Key
. Copy the value of this key into a variable called PUSHOVER_USER
Create New Application|API Token
Pushover API has a Pushing Messages method. The documentation describes the format of the HTTP Request. It must be a POST
using TLS (https://
) to https://api.pushover.net/1/messages.json
. The content-type
should be application/json
. In the JSON body of the message, we must include token
(PUSHOVER_TOKEN
), user
(PUSHOVER_USER_KEY
), device
(we’ll use cloud-monitoring
) and a title
and a message
Tag: Dns
Cloud Run custom domain mappings
I have several Cloud Run services that I want to map to a domain.
During development, I create a Google Cloud Platform (GCP) project each day into which everything is deployed. This means that, every day, the Cloud Run services have newly non-inferable (to me) URLs. I thought this would be tedious to manage because:
- My DNS service isn’t programmable (I know!)
- Cloud Run services have non-inferable (by me) URLs
i.e. I thought I’d have to manually update the DNS entries each day.
Tag: Cloud-Scheduler
Automating Scheduled Firestore Exports
For my “thing”, I use Firestore to persist state. I like Firestore a lot and, having been around Google for almost (!) a decade, I much prefer it to Datastore.
Firestore has a managed export|import service and I use this to backup Firestore collections|documents.
I’d been doing backups manually (using gcloud
) and decided today to take the plunge and use Cloud Scheduler for the first time. I’d been reluctant to do this until now because I’d assumed incorrectly that I’d need to write a wrapping service to invoke the export.
Tag: Ghcr
Playing with GitHub Container Registry REST API
I’ve a day to catch up on blogging. I’m building a “thing” and getting this near to the finish line consumes my time and has meant that I’m not originating anything particularly new. However, there are a couple of tricks in my deployment process that may be of interest to others.
I’ve been a long-term user of Google’s [Cloud Build] and like the simplicity (everything’s a container) a lot! Because I’m using GitHub repos, I’ve been using GitHub Actions to (re)build containers on pushes and GitHub Container Registry (GHCR) to store the results. I know that Google provides analogs for GitHub repos and (forces me to use) Artifact Registry (to deploy my Cloud Run services) but, even though I dislike GitHub Actions, it’s really easy to do everything in one place.
Sigstore
I’ve been on a digression (gcp-oidc-token-proxy
) this week. Yesterday I began exploring Podman and wrote briefly about running gcp-oidc-token-proxy
on my localhost using it.
This morning while walking with my dog, I listened to Google’s Dan Lorenc explain Sigstore (blog: https://blog.sigstore.dev/) on The Kubelist Podcast.
The plan today is to try to sign the gcp-oidc-token-proxy
container images in GitHub Container Registry.
NOTE I decided against trying the hardware key approach. I have a Google Titan key and only Yubikeys are well-tested by
go-piv
GitHub Actions && GitHub Container Registry
You know when you start something and then regret it!? I think I’ll be sticking with Google Cloud Build; GitHub Actions appears functional and useful but I found the documentation to be confusing and limited and struggled to get a simple container image build|push working.
I’ve long used Docker Hub but am planning to use it less as a result of the planned changes. I want to see Docker succeed and to do so it needs to find a way to make money but, there are free alternatives including the new GitHub Container Registry and the very very cheap Google Container Registry.
Tag: Sigstore
Playing with GitHub Container Registry REST API
I’ve a day to catch up on blogging. I’m building a “thing” and getting this near to the finish line consumes my time and has meant that I’m not originating anything particularly new. However, there are a couple of tricks in my deployment process that may be of interest to others.
I’ve been a long-term user of Google’s [Cloud Build] and like the simplicity (everything’s a container) a lot! Because I’m using GitHub repos, I’ve been using GitHub Actions to (re)build containers on pushes and GitHub Container Registry (GHCR) to store the results. I know that Google provides analogs for GitHub repos and (forces me to use) Artifact Registry (to deploy my Cloud Run services) but, even though I dislike GitHub Actions, it’s really easy to do everything in one place.
Sigstore
I’ve been on a digression (gcp-oidc-token-proxy
) this week. Yesterday I began exploring Podman and wrote briefly about running gcp-oidc-token-proxy
on my localhost using it.
This morning while walking with my dog, I listened to Google’s Dan Lorenc explain Sigstore (blog: https://blog.sigstore.dev/) on The Kubelist Podcast.
The plan today is to try to sign the gcp-oidc-token-proxy
container images in GitHub Container Registry.
NOTE I decided against trying the hardware key approach. I have a Google Titan key and only Yubikeys are well-tested by
go-piv
Tag: Acme
Using Google's Public Certificate Authority with Golang autocert
Last year, I wrote about using Automatic Certs w/ Golang gRPC service on Compute Engine. That solution uses ACME with (the wonderful) Let’s Encrypt. Google is offering a private preview of Automate Public Certificates Lifecycle Management via RFC 8555 (ACME) and, because I’m using Google Cloud Platform extensively to build a “thing” and I think it would be useful to have a backup to Let’s Encrypt, I thought I’d give the solution a try. You’ll need to sign up for the private preview for what follows to work.
Automatic Certs w/ Golang gRPC service on Compute Engine
I needed to deploy a healthcheck-enabled gRPC TLS-enabled service. Fortunately, most (all?) of the SDKs include an implementation, e.g. Golang has grpc-go/health
.
I learned in my travels that:
- DigitalOcean [App] platform does not (link) work with TLS-based gRPC apps.
- Fly has a regression (link) that breaks gRPC
So, I resorted to Google Cloud Platform (GCP). Although Cloud Run would be well-suited to running the gRPC app, it uses a proxy|sidecar to provision a cert for the app and I wanted to be able to (easily use a custom domain) and give myself a somewhat general-purpose solution.
Tag: Compute-Engine
Using Google's Public Certificate Authority with Golang autocert
Last year, I wrote about using Automatic Certs w/ Golang gRPC service on Compute Engine. That solution uses ACME with (the wonderful) Let’s Encrypt. Google is offering a private preview of Automate Public Certificates Lifecycle Management via RFC 8555 (ACME) and, because I’m using Google Cloud Platform extensively to build a “thing” and I think it would be useful to have a backup to Let’s Encrypt, I thought I’d give the solution a try. You’ll need to sign up for the private preview for what follows to work.
Automatic Certs w/ Golang gRPC service on Compute Engine
I needed to deploy a healthcheck-enabled gRPC TLS-enabled service. Fortunately, most (all?) of the SDKs include an implementation, e.g. Golang has grpc-go/health
.
I learned in my travels that:
- DigitalOcean [App] platform does not (link) work with TLS-based gRPC apps.
- Fly has a regression (link) that breaks gRPC
So, I resorted to Google Cloud Platform (GCP). Although Cloud Run would be well-suited to running the gRPC app, it uses a proxy|sidecar to provision a cert for the app and I wanted to be able to (easily use a custom domain) and give myself a somewhat general-purpose solution.
Akri
For the past couple of weeks, I’ve been playing around with Akri, a Microsoft (DeisLabs) project for building a connected edge with Kubernetes. Kubernetes, IoT, Rust (and Golang) make this all compelling to me.
Initially, I deployed an Akri End-to-End to MicroK8s on Google Compute Engine (link) and Digital Ocean (link). But I was interested to create my own example and so have proposed a very (!) simple HTTP-based protocol.
This blog summarizes my thoughts about Akri and an explanation of the HTTP protocol implementation in the hope that this helps others.
akri
I was very interested to read about Microsoft’s DeisLab’s latest (rust-based) Kubernetes project: akri. If I understand it correctly, it provides a mechanism to make any (IoT) device accessible to containers running within a cluster. I need to spend more time playing around with it so that I can fully understand it. I had some problems getting the End-to-End demo running on Google Compute Engine (and then I tried DigitalOcean droplet) instances. So, here’s a two-way solution to get you going.
Setting up a GCE Instance as an Inlets Exit Node
The prolific Alex Ellis has a new project, Inlets.
Here’s a quick tutorial using Google Compute Platform’s (GCP) Compute Engine (GCE).
NB I’m using one of Google’s “Always free” f1-micro instances but you may still pay for network *gress and storage
Assumptions
I’m assuming you’ve a Google account, have used GCP and have a billing account established, i.e. the following returns at least one billing account:
gcloud beta billing accounts list
If you’ve only one billing account and it’s the one you wish to use, then you can:
Google Cloud Platform (GCP) Exporter
Earlier this week I discussed a Linode Prometheus Exporter.
I added metrics for Digital Ocean’s Managed Kubernetes service to @metalmatze’s Digital Ocean Exporter.
This left metrics for Google Cloud Platform (GCP) which has, for many years, been my primary cloud platform. So, today I wrote Prometheus Exporter for Google Cloud Platform.
All 3 of these exporters follow the template laid down by @metalmatze and, because each of these services has a well-written Golang SDK, it’s straightforward to implement an exporter for each of them.
Tag: Service-Discovery
Prometheus HTTP Service Discovery of Cloud Run services
Some time ago, I wrote about using Prometheus Service Discovery w/ Consul for Cloud Run and also Scraping metrics exposed by Google Cloud Run services that require authentication. Both solutions remain viable but they didn’t address another use case for Prometheus and Cloud Run services that I have with a “thing” that I’ve been building.
In this scenario, I want to:
- Configure Prometheus to scrape Cloud Run service metrics
- Discover Cloud Run services dynamically
- Authenticate to Cloud Run using Firebase Auth ID tokens
These requirements and – one other – present several challenges:
Consul discovers Google Cloud Run
I’ve written a basic discoverer
of Google Cloud Run services. This is for a project and it extends work done in some previous posts to Multiplex gRPC and Prometheus with Cloud Run and to use Consul for Prometheus service discovery.
This solution:
- Accepts a set of Google Cloud Platform (GCP) projects
- Trawls them for Cloud Run services
- Assumes that the services expose Prometheus metrics on
:443/metrics
- Relabels the services
- Surfaces any discovered Cloud Run services’ metrics in Prometheus
You’ll need Docker and Docker Compose.
Prometheus Service Discovery w/ Consul for Cloud Run
I’m working on a project that will programmatically create Google Cloud Run services and I want to be able to dynamically discover these services using Prometheus.
This is one solution.
NOTE Google Cloud Run is the service I’m using, but the principle described herein applies to any runtime service that you’d wish to use.
Why is this challenging? IIUC, it’s primarily because Prometheus has a limited set of plugins for service discovery, see the sections that include _sd_
in Prometheus Configuration documentation. Unfortunately, Cloud Run is not explicitly supported. The alternative appears to be to use file-based discovery but this seems ‘challenging’; it requires, for example, reloading Prometheus on file changes.
Tag: Lets-Encrypt
Automatic Certs w/ Golang gRPC service on Compute Engine
I needed to deploy a healthcheck-enabled gRPC TLS-enabled service. Fortunately, most (all?) of the SDKs include an implementation, e.g. Golang has grpc-go/health
.
I learned in my travels that:
- DigitalOcean [App] platform does not (link) work with TLS-based gRPC apps.
- Fly has a regression (link) that breaks gRPC
So, I resorted to Google Cloud Platform (GCP). Although Cloud Run would be well-suited to running the gRPC app, it uses a proxy|sidecar to provision a cert for the app and I wanted to be able to (easily use a custom domain) and give myself a somewhat general-purpose solution.
Tag: Firebase-Auth
Firebase Auth authorized domains
I’m using Firebase Authentication in a project to authenticate users of various OAuth2 identity systems. Firebase Authentication requires a set of Authorized Domains.
The (web) app that interacts with Firebase Authentication is deployed to Cloud Run. The Authorized Domains list must include the app’s Cloud Run service URL.
Cloud Run service URLs vary by Project (ID). They are a combination of the service name, a hash (?) of the Project (ID) and .a.run.app
.
Tag: Gcloud
Using `gcloud ... --format` with arbitrary returned data
If you use jq
, you’ll know that its documentation uses examples that you can try locally or using the excellent jqplay
:
printf "[1,2,3]" | jq '.[1:]'
[
  2,
  3
]
And here
If you use Google Cloud Platform (GCP) CLI, gcloud
, this powerful tool includes JSON output formatting of results (--format=json
) and YAML (--format=yaml
) etc. and includes a set of so-called projections that you can use to format the returned data.
There is a comparable slice
projection that you may use with gcloud
and the documentation even includes an example:
Tag: Go-Logr
Golang Structured Logging w/ Google Cloud Logging (2)
UPDATE There’s an issue with my naive implementation of
RenderValuesHook
as described in this post. I summarized the problem in this issue where I’ve outlined (hopefully) a more robust solution.
Recently, I described how to configure Golang logging so that user-defined key-values applied to the logs are parsed when ingested by Google Cloud Logging.
Here’s an example of what we’re trying to achieve. This is an example Cloud Logging log entry that incorporates user-defined labels (see dog:freddie
and foo:bar
) and a readily-queryable jsonPayload
:
Tag: Logr
Golang Structured Logging w/ Google Cloud Logging (2)
UPDATE There’s an issue with my naive implementation of
RenderValuesHook
as described in this post. I summarized the problem in this issue where I’ve outlined (hopefully) a more robust solution.
Recently, I described how to configure Golang logging so that user-defined key-values applied to the logs are parsed when ingested by Google Cloud Logging.
Here’s an example of what we’re trying to achieve. This is an example Cloud Logging log entry that incorporates user-defined labels (see dog:freddie
and foo:bar
) and a readily-queryable jsonPayload
:
Golang Structured Logging w/ Google Cloud Logging
I’ve multiple components in an app and these are deployed across multiple Google Cloud Platform (GCP) services: Kubernetes Engine, Cloud Functions, Cloud Run, etc. Almost everything is written in Golang and I started the project using go-logr.
logr is in two parts: a Logger that you use to write log entries; a LogSink (adaptor) that consumes log entries and outputs them to a specific log implementation.
Initially, I defaulted to using stdr which is a LogSink for Go’s standard logging implementation. Something similar to the module’s example:
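Something like this minimal sketch (the names and key-values are illustrative, not the module's exact example):
package main
import (
    "log"
    "os"
    "github.com/go-logr/stdr"
)
func main() {
    // Wrap Go's standard logger in a logr.Logger via the stdr LogSink.
    logger := stdr.New(log.New(os.Stdout, "", log.LstdFlags))
    logger = logger.WithName("example").WithValues("dog", "freddie")
    logger.Info("checking", "foo", "bar")
    logger.Error(os.ErrNotExist, "something went wrong")
}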
Tag: Oauth
Scraping metrics exposed by Google Cloud Run services that require authentication
I’ve written a solution (gcp-oidc-token-proxy) that can be used in conjunction with Prometheus OAuth2 to authenticate requests so that Prometheus can scrape metrics exposed by e.g. Cloud Run services that require authentication. The solution resulted from my question on Stack Overflow.
Problem #1: Endpoint requires authentication
Given a Cloud Run service URL for which:
ENDPOINT="my-server-blahblah-wl.a.run.app"
# Returns 200 when authenticated w/ an ID token
TOKEN="$(gcloud auth print-identity-token)"
curl \
--silent \
--request GET \
--header "Authorization: Bearer ${TOKEN}" \
--write-out "%{response_code}" \
--output /dev/null \
https://${ENDPOINT}/metrics
# Returns 403 otherwise
curl \
--silent \
--request GET \
--write-out "%{response_code}" \
--output /dev/null \
https://${ENDPOINT}/metrics
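For a Golang client, the equivalent of the authenticated curl is an http.Client that mints and attaches ID tokens for the service URL (the audience); a sketch using google.golang.org/api/idtoken, which assumes service-account credentials are available and is not the proxy's implementation:
package main
import (
    "context"
    "fmt"
    "log"
    "google.golang.org/api/idtoken"
)
func main() {
    ctx := context.Background()
    // The audience is the Cloud Run service URL (placeholder, per ENDPOINT above).
    audience := "https://my-server-blahblah-wl.a.run.app"
    // idtoken.NewClient returns an http.Client that attaches an ID token for the
    // audience; it needs credentials that can mint ID tokens (e.g. a service
    // account), not end-user gcloud credentials.
    client, err := idtoken.NewClient(ctx, audience)
    if err != nil {
        log.Fatal(err)
    }
    resp, err := client.Get(audience + "/metrics")
    if err != nil {
        log.Fatal(err)
    }
    defer resp.Body.Close()
    fmt.Println(resp.StatusCode)
}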
Problem #2: Prometheus OAuth2 configuration is constrained
Tag: Oauth2
Scraping metrics exposed by Google Cloud Run services that require authentication
I’ve written a solution (gcp-oidc-token-proxy) that can be used in conjunction with Prometheus OAuth2 to authenticate requests so that Prometheus can scrape metrics exposed by e.g. Cloud Run services that require authentication. The solution resulted from my question on Stack Overflow.
Problem #1: Endpoint requires authentication
Given a Cloud Run service URL for which:
ENDPOINT="my-server-blahblah-wl.a.run.app"
# Returns 200 when authenticated w/ an ID token
TOKEN="$(gcloud auth print-identity-token)"
curl \
--silent \
--request GET \
--header "Authorization: Bearer ${TOKEN}" \
--write-out "%{response_code}" \
--output /dev/null \
https://${ENDPOINT}/metrics
# Returns 403 otherwise
curl \
--silent \
--request GET \
--write-out "%{response_code}" \
--output /dev/null \
https://${ENDPOINT}/metrics
Problem #2: Prometheus OAuth2 configuration is constrained
Tag: Oidc
Scraping metrics exposed by Google Cloud Run services that require authentication
I’ve written a solution (gcp-oidc-token-proxy) that can be used in conjunction with Prometheus OAuth2 to authenticate requests so that Prometheus can scrape metrics exposed by e.g. Cloud Run services that require authentication. The solution resulted from my question on Stack Overflow.
Problem #1: Endpoint requires authentication
Given a Cloud Run service URL for which:
ENDPOINT="my-server-blahblah-wl.a.run.app"
# Returns 200 when authenticated w/ an ID token
TOKEN="$(gcloud auth print-identity-token)"
curl \
--silent \
--request GET \
--header "Authorization: Bearer ${TOKEN}" \
--write-out "%{response_code}" \
--output /dev/null \
https://${ENDPOINT}/metrics
# Returns 403 otherwise
curl \
--silent \
--request GET \
--write-out "%{response_code}" \
--output /dev/null \
https://${ENDPOINT}/metrics
Problem #2: Prometheus OAuth2 configuration is constrained
Tag: Token
Scraping metrics exposed by Google Cloud Run services that require authentication
I’ve written a solution (gcp-oidc-token-proxy) that can be used in conjunction with Prometheus OAuth2 to authenticate requests so that Prometheus can scrape metrics exposed by e.g. Cloud Run services that require authentication. The solution resulted from my question on Stack Overflow.
Problem #1: Endpoint requires authentication
Given a Cloud Run service URL for which:
ENDPOINT="my-server-blahblah-wl.a.run.app"
# Returns 200 when authenticated w/ an ID token
TOKEN="$(gcloud auth print-identity-token)"
curl \
--silent \
--request GET \
--header "Authorization: Bearer ${TOKEN}" \
--write-out "%{response_code}" \
--output /dev/null \
https://${ENDPOINT}/metrics
# Returns 403 otherwise
curl \
--silent \
--request GET \
--write-out "%{response_code}" \
--output /dev/null \
https://${ENDPOINT}/metrics
Problem #2: Prometheus OAuth2 configuration is constrained
Tag: Zerolog
Golang Structured Logging w/ Google Cloud Logging
I’ve multiple components in an app and these are deployed across multiple Google Cloud Platform (GCP) services: Kubernetes Engine, Cloud Functions, Cloud Run, etc. Almost everything is written in Golang and I started the project using go-logr.
logr is in two parts: a Logger that you use to write log entries; a LogSink (adaptor) that consumes log entries and outputs them to a specific log implementation.
Initially, I defaulted to using stdr which is a LogSink for Go’s standard logging implementation. Something similar to the module’s example:
Tag: Dependabot
GitHub help with dependency management
This is very useful:
I am building an application that comprises multiple repos. I continue to procrastinate on whether using multiple repos vs. a monorepo was a good idea, but an issue that I have (had) is the need to ensure that the repos’ contents are using current|latest modules. GitHub can help.
Most of the application is written in Golang with a smattering of Rust and some JavaScript.
Tag: Dependencies
GitHub help with dependency management
This is very useful:
I am building an application that comprises multiple repos. I continue to procrastinate on whether using multiple repos vs. a monorepo was a good idea, but an issue that I have (had) is the need to ensure that the repos’ contents are using current|latest modules. GitHub can help.
Most of the application is written in Golang with a smattering of Rust and some JavaScript.
Tag: Modules
GitHub help with dependency management
This is very useful:
I am building an application that comprises multiple repos. I continue to procrastinate on whether using multiple repos vs. a monorepo was a good idea, but an issue that I have (had) is the need to ensure that the repos’ contents are using current|latest modules. GitHub can help.
Most of the application is written in Golang with a smattering of Rust and some JavaScript.
Cloud Build wishlist: Mountable Golang Modules Proxy
I think it would be valuable if Google were to provide volumes in Cloud Build of package registries (e.g. Go Modules; PyPi; Maven; NPM etc.).
Google provides a mirror of a subset of Docker Hub. This confers several benefits: Google’s imprimatur; speed (latency); bandwidth; and convenience.
The same benefits would apply to package registries.
In the meantime, there’s a hacky way to gain some of the benefits of these when using Cloud Build.
In the following example, I’ll show an approach using Golang Modules and Google’s module proxy aka proxy.golang.org.
Tag: Kubectl
`gcloud beta run services replace`
TL;DR I’m working on a project that includes multiple Cloud Run services. I’ve been putting my gcloud head on to deploy these services thinking that it’s curious there’s no way to write the specs as YAML configs. Today, I learned that there is: gcloud beta run services replace.
What prompted the discovery was some frustration trying to deploy a JSON-valued environment variable to Cloud Run:
local FIREBASE_CONFIG="{
apiKey: ${FIREBASE_API_KEY},
authDomain: ${FIREBASE_AUTH_DOMAIN},
projectId: ${FIREBASE_PROJECT},
storageBucket: ${FIREBASE_STORAGE_BUCKET},
messagingSenderId: ${FIREBASE_MESSAGING_SENDER},
appId: ${FIREBASE_APP}}"
gcloud run deploy ${SRV_NAME} \
--image=${IMAGE} \
--command="/server" \
--args="--endpoint=:${PORT}" \
--set-env-vars=FIREBASE_CONFIG="${FIREBASE_CONFIG}" \
--max-instances=1 \
--memory=256Mi \
--ingress=all \
--platform=managed \
--port=${PORT} \
--allow-unauthenticated \
--region=${REGION} \
--project=${PROJECT}
gcloud balks at this.
Tag: YAML
`gcloud beta run services replace`
TL;DR I’m working on a project that includes multiple Cloud Run services. I’ve been putting my gcloud head on to deploy these services thinking that it’s curious there’s no way to write the specs as YAML configs. Today, I learned that there is: gcloud beta run services replace.
What prompted the discovery was some frustration trying to deploy a JSON-valued environment variable to Cloud Run:
local FIREBASE_CONFIG="{
apiKey: ${FIREBASE_API_KEY},
authDomain: ${FIREBASE_AUTH_DOMAIN},
projectId: ${FIREBASE_PROJECT},
storageBucket: ${FIREBASE_STORAGE_BUCKET},
messagingSenderId: ${FIREBASE_MESSAGING_SENDER},
appId: ${FIREBASE_APP}}"
gcloud run deploy ${SRV_NAME} \
--image=${IMAGE} \
--command="/server" \
--args="--endpoint=:${PORT}" \
--set-env-vars=FIREBASE_CONFIG="${FIREBASE_CONFIG}" \
--max-instances=1 \
--memory=256Mi \
--ingress=all \
--platform=managed \
--port=${PORT} \
--allow-unauthenticated \
--region=${REGION} \
--project=${PROJECT}
gcloud balks at this.
Tag: Terraform
Infrastructure as Code
Problem
I’m building an application that comprises:
- Kubernetes¹
- Kubernetes Operator
- Cloud Firestore
- Cloud Functions
- Cloud Run
- Cloud Endpoints
- Stripe
- Firebase Authentication
¹ - I’m using Google Kubernetes Engine (GKE) but may include other managed Kubernetes offerings (e.g. Digital Ocean, Linode, Oracle). GKE clusters are manageable by gcloud but other platforms require other CLI tools. All are accessible from bash but are these supported by e.g. Terraform (see below)?
Many of the components are packaged as container images and, because I’m using GitHub to host the project’s repos (I’ll leave the monorepo discussion for another post), I’ve become inculcated and use GitHub Container Registry (GHCR) as the container repo.
Tag: Cloud-Endpoints
Renewing Firebase Authentication ID tokens with gRPC
I’ve written before about a project in which I’m using Firebase Authentication in combination with Google Cloud Endpoints and a gRPC service running on Cloud Run:
- Firebase Authentication, Cloud Endpoints and gRPC (1of2)
- Firebase Authentication, Cloud Endpoints and gRPC (2of2)
This works well with one caveat: the ID tokens (JWTs) minted by Firebase Authentication have a 3600 second (one hour) lifetime.
The user flow in my app is that whenever the user invokes the app’s CLI:
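For context, renewing amounts to exchanging the (long-lived) refresh token at Firebase's Secure Token endpoint for a fresh ID token; a hedged Golang sketch of that exchange (API_KEY and REFRESH_TOKEN are placeholders and this is the general mechanism rather than the post's code):
package main
import (
    "encoding/json"
    "fmt"
    "log"
    "net/http"
    "net/url"
    "os"
)
func main() {
    apiKey := os.Getenv("API_KEY")             // Firebase project's Web API key (placeholder)
    refreshToken := os.Getenv("REFRESH_TOKEN") // previously stored refresh token (placeholder)
    endpoint := "https://securetoken.googleapis.com/v1/token?key=" + apiKey
    resp, err := http.PostForm(endpoint, url.Values{
        "grant_type":    {"refresh_token"},
        "refresh_token": {refreshToken},
    })
    if err != nil {
        log.Fatal(err)
    }
    defer resp.Body.Close()
    var body struct {
        IDToken      string `json:"id_token"`
        RefreshToken string `json:"refresh_token"`
        ExpiresIn    string `json:"expires_in"`
    }
    if err := json.NewDecoder(resp.Body).Decode(&body); err != nil {
        log.Fatal(err)
    }
    fmt.Println("new ID token expires in", body.ExpiresIn, "seconds")
}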
Firebase Authentication, Cloud Endpoints and gRPC (2of2)
Earlier this week, I wrote about using Firebase Authentication, Cloud Endpoints and gRPC (1of2). Since then, I learned some more and added a gRPC interceptor to implement basic authorization for the service.
ESPv2 --allow-unauthenticated
The Cloud Endpoints (ESPv2) proxy must be run as --allow-unauthenticated on Cloud Run to ensure that requests make it to the proxy where the request is authenticated and only authenticated requests make it on to the backend service. Thanks Google’s Teju Nareddy!
Firebase Authentication, Cloud Endpoints and gRPC (1of2)
I’m building a service that requires user authentication. The primary endpoint is a gRPC-based service. I would like to consider using certificate-based auth but this feels… challenging. Instead, I have been aware of, but never used, Firebase Authentication and was interested to see that Cloud Endpoints includes Firebase Authentication as one of its supported auth mechanisms. Curiosity piqued, I confirmed that gRPC supports Google token-based authentication.
The following is a summary of what I did but I’ll leave the extensive documentation to Google, (Google’s) Firebase and gRPC, all of which, in this case, provide really good explanations.
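By way of illustration, the Google token-based authentication piece from a Golang gRPC client reduces to TLS transport credentials plus per-RPC credentials carrying the Firebase ID token; the endpoint, token and generated client below are placeholders:
package main
import (
    "crypto/tls"
    "log"
    "golang.org/x/oauth2"
    "google.golang.org/grpc"
    "google.golang.org/grpc/credentials"
    "google.golang.org/grpc/credentials/oauth"
)
func main() {
    endpoint := "my-service.example.com:443" // placeholder (e.g. the ESPv2 proxy)
    idToken := "REPLACE_WITH_CURRENT_ID_TOKEN"
    conn, err := grpc.Dial(endpoint,
        // TLS to the proxy.
        grpc.WithTransportCredentials(credentials.NewTLS(&tls.Config{})),
        // Attach the ID token as a Bearer token on every RPC.
        grpc.WithPerRPCCredentials(oauth.NewOauthAccess(&oauth2.Token{AccessToken: idToken})),
    )
    if err != nil {
        log.Fatal(err)
    }
    defer conn.Close()
    // client := pb.NewYourServiceClient(conn) // hypothetical generated client
}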
Cloud Endpoints combine OpenAPI and gRPC... or not!
See:
- Multiplexing gRPC and HTTP endpoints with Cloud Run
- gRPC, Cloud Run & Endpoints
- ESPv2: Configure Cloud Endpoints to proxy traffic to a Cloud Run multiplexed (gRPC|HTTP) service
Challenges:
- Cloud Run permits a single port
- Cloud Run services publishing e.g. gRPC and Prometheus must multiplex transports
- Cloud Run services publishing multiplexed transports are challenging to expose using Cloud Endpoints
Hypothesis #1: Multiplexed transports work with Cloud Run
gRPC, Cloud Run & Endpoints
<3 Google but there’s quite often an assumption that we’re all sitting around the engineering table and, of course, we’re not.
Cloud Endpoints is a powerful offering but – IMO – it’s super confusing to understand and complex to deploy.
If you’re familiar with the motivations behind service meshes (e.g. Istio), Cloud Endpoints fits in a similar niche (“neesh” or “nitch”?). The underlying ambition is that, developers can take existing code and by adding a proxy (or sidecar), general-purpose abstractions, security, logging etc. may be added.
Tag: Firebase-Authentication
Renewing Firebase Authentication ID tokens with gRPC
I’ve written before about a project in which I’m using Firebase Authentication in combination with Google Cloud Endpoints and a gRPC service running on Cloud Run:
- Firebase Authentication, Cloud Endpoints and gRPC (1of2)
- Firebase Authentication, Cloud Endpoints and gRPC (2of2)
This works well with one caveat: the ID tokens (JWTs) minted by Firebase Authentication have a 3600 second (one hour) lifetime.
The user flow in my app is that whenever the user invokes the app’s CLI:
Tag: Id-Token
Renewing Firebase Authentication ID tokens with gRPC
I’ve written before about a project in which I’m using Firebase Authentication in combination with Google Cloud Endpoints and a gRPC service running on Cloud Run:
- Firebase Authentication, Cloud Endpoints and gRPC (1of2)
- Firebase Authentication, Cloud Endpoints and gRPC (2of2)
This works well with one caveat: the ID tokens (JWTs) minted by Firebase Authentication have a 3600 second (one hour) lifetime.
The user flow in my app is that whenever the user invokes the app’s CLI:
Tag: Jwt
Renewing Firebase Authentication ID tokens with gRPC
I’ve written before about a project in which I’m using Firebase Authentication in combination with Google Cloud Endpoints and a gRPC service running on Cloud Run:
- Firebase Authentication, Cloud Endpoints and gRPC (1of2)
- Firebase Authentication, Cloud Endpoints and gRPC (2of2)
This works well with one caveat: the ID tokens (JWTs) minted by Firebase Authentication have a 3600 second (one hour) lifetime.
The user flow in my app is that whenever the user invokes the app’s CLI:
Tag: Refresh-Token
Renewing Firebase Authentication ID tokens with gRPC
I’ve written before about a project in which I’m using Firebase Authentication in combination with Google Cloud Endpoints and a gRPC service running on Cloud Run:
- Firebase Authentication, Cloud Endpoints and gRPC (1of2)
- Firebase Authentication, Cloud Endpoints and gRPC (2of2)
This works well with one caveat: the ID tokens (JWTs) minted by Firebase Authentication have a 3600 second (one hour) lifetime.
The user flow in my app is that whenever the user invokes the app’s CLI:
Tag: In-Memory
gRPC Interceptors and in-memory gRPC connections
For… reasons, I wanted to pre-filter gRPC requests to check for authorization. Authorization is implemented as a ‘micro-service’ and I wanted the authorization server to run in the same process as the gRPC client.
TL;DR:
- Shiju’s “Writing gRPC Interceptors in Go” is great
- This Stack Overflow answer, ostensibly for writing unit tests for gRPC, got me an in-process server
What follows stands on these folks’ shoulders…
A key motivator for me to write blog posts is that it helps me ensure that I understand things. Writing this post, I realized I’d not researched gRPC Interceptors and, as luck would have it, I found some interesting content, not on grpc.io but on the grpc-ecosystem repo, specifically Go gRPC middleware. But, I refer again to Shiju’s clear and helpful “Writing gRPC Interceptors in Go”
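Pulling those threads together, a sketch (not the project's code) of a unary server interceptor plus an in-process server over google.golang.org/grpc/test/bufconn; the service registration and client are left as hypotheticals:
package main
import (
    "context"
    "log"
    "net"
    "google.golang.org/grpc"
    "google.golang.org/grpc/credentials/insecure"
    "google.golang.org/grpc/test/bufconn"
)
// authUnaryInterceptor runs before every unary handler; a real version would
// inspect metadata and reject unauthorized requests.
func authUnaryInterceptor(ctx context.Context, req interface{}, info *grpc.UnaryServerInfo, handler grpc.UnaryHandler) (interface{}, error) {
    log.Printf("authorizing: %s", info.FullMethod)
    return handler(ctx, req)
}
func main() {
    // In-memory listener: no TCP port, everything stays in-process.
    listener := bufconn.Listen(1024 * 1024)
    server := grpc.NewServer(grpc.UnaryInterceptor(authUnaryInterceptor))
    // pb.RegisterYourServiceServer(server, &yourService{}) // hypothetical service
    go func() {
        if err := server.Serve(listener); err != nil {
            log.Fatal(err)
        }
    }()
    conn, err := grpc.Dial("bufnet",
        grpc.WithContextDialer(func(ctx context.Context, _ string) (net.Conn, error) {
            return listener.DialContext(ctx)
        }),
        grpc.WithTransportCredentials(insecure.NewCredentials()),
    )
    if err != nil {
        log.Fatal(err)
    }
    defer conn.Close()
    // client := pb.NewYourServiceClient(conn) // hypothetical client
}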
Tag: Interceptor
gRPC Interceptors and in-memory gRPC connections
For… reasons, I wanted to pre-filter gRPC requests to check for authorization. Authorization is implemented as a ‘micro-service’ and I wanted the authorization server to run in the same process as the gRPC client.
TL;DR:
- Shiju’s “Writing gRPC Interceptors in Go” is great
- This Stack Overflow answer, ostensibly for writing unit tests for gRPC, got me an in-process server
What follows stands on these folks’ shoulders…
A key motivator for me to write blog posts is that it helps me ensure that I understand things. Writing this post, I realized I’d not researched gRPC Interceptors and, as luck would have it, I found some interesting content, not on grpc.io but on the grpc-ecosystem repo, specifically Go gRPC middleware. But, I refer again to Shiju’s clear and helpful “Writing gRPC Interceptors in Go”
Tag: Javascript
Stripe
It’s been almost a month since my last post. I’ve been occupied learning Stripe and integrating it into an application that I’m developing. The app benefits from a billing mechanism for prospective customers and, as far as I can tell, Stripe is the solution. I’d be interested in hearing perspectives on alternatives.
As with any platform, there’s good and bad and I’ll summarize my perspective on Stripe here. It’s been some time since I developed in JavaScript and this lack of familiarity has meant that the solution took longer than I wanted to develop. That said, before this component, I developed integration with Firebase Authentication and that required JavaScript’ing too and that was much easier (and more enjoyable).
Tag: Stripe
Stripe
It’s been almost a month since my last post. I’ve been occupied learning Stripe and integrating it into an application that I’m developing. The app benefits from a billing mechanism for prospective customers and, as far as I can tell, Stripe is the solution. I’d be interested in hearing perspectives on alternatives.
As with any platform, there’s good and bad and I’ll summarize my perspective on Stripe here. It’s been some time since I developed in JavaScript and this lack of familiarity has meant that the solution took longer than I wanted to develop. That said, before this component, I developed integration with Firebase Authentication and that required JavaScript’ing too and that was much easier (and more enjoyable).
Tag: Firebase
Firebase Authentication, Cloud Endpoints and gRPC (2of2)
Earlier this week, I wrote about using Firebase Authentication, Cloud Endpoints and gRPC (1of2). Since then, I learned some more and added a gRPC interceptor to implement basic authorization for the service.
ESPv2 --allow-unauthenticated
The Cloud Endpoints (ESPv2) proxy must be run as --allow-unauthenticated on Cloud Run to ensure that requests make it to the proxy where the request is authenticated and only authenticated requests make it on to the backend service. Thanks Google’s Teju Nareddy!
Firebase Authentication, Cloud Endpoints and gRPC (1of2)
I’m building a service that requires user authentication. The primary endpoint is a gRPC-based service. I would like to consider using certificate-based auth but this feels… challenging. Instead, I have been aware of, but never used, Firebase Authentication and was interested to see that Cloud Endpoints includes Firebase Authentication as one of its supported auth mechanisms. Curiosity piqued, I confirmed that gRPC supports Google token-based authentication.
The following is a summary of what I did but I’ll leave the extensive documentation to Google, (Google’s) Firebase and gRPC, all of which, in this case, provide really good explanations.
Tag: Firebase-Authentication
Firebase Authentication, Cloud Endpoints and gRPC (2of2)
Earlier this week, I wrote about using Firebase Authentication, Cloud Endpoints and gRPC (1of2). Since then, I learned some more and added a gRPC interceptor to implement basic authorization for the service.
ESPv2 --allow-unauthenticated
The Cloud Endpoints (ESPv2) proxy must be run as --allow-unauthenticated on Cloud Run to ensure that requests make it to the proxy where the request is authenticated and only authenticated requests make it on to the backend service. Thanks Google’s Teju Nareddy!
Firebase Authentication, Cloud Endpoints and gRPC (1of2)
I’m building a service that requires user authentication. The primary endpoint is a gRPC-based service. I would like to consider using certificate-based auth but this feels… challenging. Instead, I have been aware of, but never used, Firebase Authentication and was interested to see that Cloud Endpoints includes Firebase Authentication as one of its supported auth mechanisms. Curiosity piqued, I confirmed that gRPC supports Google token-based authentication.
The following is a summary of what I did but I’ll leave the extensive documentation to Google, (Google’s) Firebase and gRPC, all of which, in this case, provide really good explanations.
Tag: Consul
Consul discovers Google Cloud Run
I’ve written a basic discoverer of Google Cloud Run services. This is for a project and it extends work done in some previous posts to Multiplex gRPC and Prometheus with Cloud Run and to use Consul for Prometheus service discovery.
This solution:
- Accepts a set of Google Cloud Platform (GCP) projects
- Trawls them for Cloud Run services
- Assumes that the services expose Prometheus metrics on :443/metrics
- Relabels the services
- Surfaces any discovered Cloud Run services’ metrics in Prometheus
You’ll need Docker and Docker Compose.
Prometheus Service Discovery w/ Consul for Cloud Run
I’m working on a project that will programmatically create Google Cloud Run services and I want to be able to dynamically discover these services using Prometheus.
This is one solution.
NOTE Google Cloud Run is the service I’m using, but the principle described herein applies to any runtime service that you’d wish to use.
Why is this challenging? IIUC, it’s primarily because Prometheus has a limited set of plugins for service discovery; see the sections that include _sd_ in the Prometheus Configuration documentation. Unfortunately, Cloud Run is not explicitly supported. The alternative appears to be to use file-based discovery but this seems ‘challenging’; it requires, for example, reloading Prometheus on file changes.
Tag: Http
Multiplexing gRPC and HTTP (Prometheus) endpoints with Cloud Run
Google Cloud Run is useful but, each service is limited to exposing a single port. This caused me problems with a gRPC service that serves (non-gRPC) Prometheus metrics because customarily, you would serve gRPC on one port and the Prometheus metrics on another.
Fortunately, cmux provides a solution by providing a mechanism that multiplexes both services (gRPC and HTTP) on a single port!
TL;DR See the cmux Limitations and use:
grpcl := m.MatchWithWriters(cmux.HTTP2MatchHeaderFieldSendSettings("content-type", "application/grpc"))
Extending the example from the cmux repo:
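Something like the following; the gRPC service registration and the Prometheus handler are assumptions here, the cmux wiring is the point:
package main
import (
    "log"
    "net"
    "net/http"
    "github.com/prometheus/client_golang/prometheus/promhttp"
    "github.com/soheilhy/cmux"
    "google.golang.org/grpc"
)
func main() {
    // Cloud Run gives us exactly one port.
    listener, err := net.Listen("tcp", ":8080")
    if err != nil {
        log.Fatal(err)
    }
    mux := cmux.New(listener)
    // Per the cmux Limitations, match gRPC by its content-type header first.
    grpcListener := mux.MatchWithWriters(
        cmux.HTTP2MatchHeaderFieldSendSettings("content-type", "application/grpc"))
    httpListener := mux.Match(cmux.Any())
    grpcServer := grpc.NewServer()
    // pb.RegisterYourServiceServer(grpcServer, &yourService{}) // hypothetical service
    httpMux := http.NewServeMux()
    httpMux.Handle("/metrics", promhttp.Handler())
    httpServer := &http.Server{Handler: httpMux}
    go grpcServer.Serve(grpcListener)
    go httpServer.Serve(httpListener)
    log.Fatal(mux.Serve())
}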
Tag: Merge
Firestore Golang Timestamps & Merging
I’m using Google’s Golang SDK for Firestore. The experience is excellent and I’m quickly becoming a fan of Firestore. However, as a Golang Firestore developer, I’m feeling less loved and some of the concepts in the database were causing me a conundrum.
I’m still not entirely certain that I have Timestamps nailed but… I learned an important lesson on the auto-creation of Timestamps in documents and how to retain these values.
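One way to get auto-created Timestamps and to retain them on later writes is the serverTimestamp struct-tag option combined with a merge on subsequent Sets; a minimal sketch (not necessarily the post's exact solution; the Dog type is illustrative):
package main
import (
    "context"
    "log"
    "os"
    "time"
    "cloud.google.com/go/firestore"
)
type Dog struct {
    Name string `firestore:"name"`
    // serverTimestamp: Firestore sets this on write when the field is zero-valued.
    Created time.Time `firestore:"created,serverTimestamp"`
}
func main() {
    ctx := context.Background()
    client, err := firestore.NewClient(ctx, os.Getenv("PROJECT"))
    if err != nil {
        log.Fatal(err)
    }
    defer client.Close()
    doc := client.Collection("dogs").Doc("freddie")
    // First write: Created is zero, so the server populates it.
    if _, err := doc.Set(ctx, Dog{Name: "Freddie"}); err != nil {
        log.Fatal(err)
    }
    // Later write: merge only the provided fields so Created is retained.
    if _, err := doc.Set(ctx, map[string]interface{}{
        "name": "Freddie the dog",
    }, firestore.MergeAll); err != nil {
        log.Fatal(err)
    }
}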
Tag: Timestamp
Firestore Golang Timestamps & Merging
I’m using Google’s Golang SDK for Firestore. The experience is excellent and I’m quickly becoming a fan of Firestore. However, as a Golang Firestore developer, I’m feeling less loved and some of the concepts in the database were causing me a conundrum.
I’m still not entirely certain that I have Timestamps nailed but… I learned an important lesson on the auto-creation of Timestamps in documents and how to retain these values.
Tag: Coredns
Prometheus Service Discovery w/ Consul for Cloud Run
I’m working on a project that will programmatically create Google Cloud Run services and I want to be able to dynamically discover these services using Prometheus.
This is one solution.
NOTE Google Cloud Run is the service I’m using, but the principle described herein applies to any runtime service that you’d wish to use.
Why is this challenging? IIUC, it’s primarily because Prometheus has a limited set of plugins for service discovery; see the sections that include _sd_ in the Prometheus Configuration documentation. Unfortunately, Cloud Run is not explicitly supported. The alternative appears to be to use file-based discovery but this seems ‘challenging’; it requires, for example, reloading Prometheus on file changes.
Tag: Cloud-Functions
Cloud Firestore Triggers in Golang
I was pleased to discover that Google provides a non-Node.JS mechanism to subscribe to and act upon Firestore triggers, Google Cloud Firestore Triggers. I’ve nothing against Node.JS but, for the project I’m developing, everything else is written in Golang. It’s good to keep it all in one language.
I’m perplexed that Cloud Functions still (!) only supports Go 1.13 (03-Sep-2019). Even Go 1.14 (25-Feb-2020) was released pre-pandemic and we’re now running on 1.16. Come on Google!
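For the record, a Golang Firestore trigger is just an exported function taking an event payload whose shape is copied from the documentation; a trimmed sketch (the dogs/{dogId} trigger resource is a placeholder):
// Package p contains a Cloud Function triggered by Firestore writes, e.g. deployed
// with --trigger-event=providers/cloud.firestore/eventTypes/document.write and
// --trigger-resource="projects/PROJECT/databases/(default)/documents/dogs/{dogId}".
package p
import (
    "context"
    "log"
    "time"
)
// FirestoreEvent is the payload of a Firestore event (fields trimmed).
type FirestoreEvent struct {
    OldValue   FirestoreValue `json:"oldValue"`
    Value      FirestoreValue `json:"value"`
    UpdateMask struct {
        FieldPaths []string `json:"fieldPaths"`
    } `json:"updateMask"`
}
// FirestoreValue holds Firestore document fields.
type FirestoreValue struct {
    CreateTime time.Time              `json:"createTime"`
    Fields     map[string]interface{} `json:"fields"`
    Name       string                 `json:"name"`
    UpdateTime time.Time              `json:"updateTime"`
}
// OnDogWrite logs the document that changed.
func OnDogWrite(ctx context.Context, e FirestoreEvent) error {
    log.Printf("document: %s", e.Value.Name)
    return nil
}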
webmention
Let’s see if this works!?
I’ve added the following link to this site’s baseof.html so that it is now rendered for each page:
<link
href="https://us-central1-webmention.cloudfunctions.net/webmention"
rel="webmention" />
I discovered indieweb yesterday reading about webmention. Who knows what got me to webmention!?
The principles of both are sound. Instead of relying on come-go behemoths to run our digital world, indieweb seeks to provide technologies that enable us to achieve things without them. webmention is a cross-walled-garden mechanism to perform an evolved form of pingbacks. If X references one of Y’s posts, X’s server notifies Y’s server during publication through a webmention service and, Y can then decide to add reference counts of copies of the referring link to their page. Clever.
WASM Cloud Functions
Following on from waPC & Protobufs and a question on Stack Overflow about Cloud Functions, I was compelled to try running WASM on Cloud Functions (no JavaScript).
I wanted to reuse the WASM waPC functions that I’d written in Rust as described in the other post. Cloud Functions does not (yet!?) provide a Rust runtime and so I’m using the waPC Host for Go in this example.
It works!
PARAMS=$(printf '{"a":{"real":39,"imag":3},"b":{"real":39,"imag":3}}' | base64)
TOKEN=$(gcloud auth print-identity-token)
echo "{
\"filename\":\"complex.wasm\",
\"function\":\"c:mul\",
\"params\":\"${PARAMS}\"
}" |\
curl \
--silent \
--request POST \
--header "Content-Type: application/json" \
--header "Authorization: Bearer ${TOKEN}" \
--data @- \
https://${REGION}-${PROJECT}.cloudfunctions.net/invoker
yields (correctly):
Tag: Healthcheck
Fly.io
I spent some time over the weekend understanding Fly.io. It’s always fascinating to me how many smart people are building really neat solutions. Fly.io is subtly different to other platforms that I use (Kubernetes, GCP, DO, Linode) and I’ve found the Fly.io team to be highly responsive and helpful to my noob questions.
One of the team’s posts, Docker without Docker surfaced in my Feedly feed (hackernews) and it piqued my interest.
Tag: Emulator
Using Golang with the Firestore Emulator
This works great but it wasn’t clearly documented for non-Firebase users. I assume it will work, as well, for any of the client libraries (not just Golang).
Assuming you have some (Golang) code (in this case using the Google Cloud Client Library) that interacts with a Firestore database. Something of the form:
package main
import (
    "context"
    "crypto/sha256"
    "fmt"
    "log"
    "os"
    "time"
    "cloud.google.com/go/firestore"
)
func hash(s string) string {
    h := sha256.New()
    h.Write([]byte(s))
    return fmt.Sprintf("%x", h.Sum(nil))
}
type Dog struct {
    Name string `firestore:"name"`
    Age int `firestore:"age"`
    Human *firestore.DocumentRef `firestore:"human"`
    Created time.Time `firestore:"created"`
}
func NewDog(name string, age int, human *firestore.DocumentRef) Dog {
    return Dog{
        Name: name,
        Age: age,
        Human: human,
        Created: time.Now(),
    }
}
func (d *Dog) ID() string {
    return hash(d.Name)
}
type Human struct {
    Name string `firestore:"name"`
}
func (h *Human) ID() string {
    return hash(h.Name)
}
func main() {
    ctx := context.Background()
    project := os.Getenv("PROJECT")
    client, err := firestore.NewClient(ctx, project)
    if value := os.Getenv("FIRESTORE_EMULATOR_HOST"); value != "" {
        log.Printf("Using Firestore Emulator: %s", value)
    }
    if err != nil {
        log.Fatal(err)
    }
    defer client.Close()
    me := Human{
        Name: "me",
    }
    meDocRef := client.Collection("humans").Doc(me.ID())
    if _, err := meDocRef.Set(ctx, me); err != nil {
        log.Fatal(err)
    }
    freddie := NewDog("Freddie", 2, meDocRef)
    freddieDocRef := client.Collection("dogs").Doc(freddie.ID())
    if _, err := freddieDocRef.Set(ctx, freddie); err != nil {
        log.Fatal(err)
    }
}
Then you can interact instead with the Firestore Emulator.
Tag: Run
Programmatically deploying Cloud Run services (Golang|Python)
Phew! Programmatically deploying Cloud Run services should be easy but I didn’t find it so.
My issues were that the Cloud Run Admin (!) API is poorly documented and it uses non-standard endpoints (thanks Sal!). Here, for others who may struggle with this, is how I got this to work.
Goal
Programmatically (have Golang, Python, want Rust) deploy services to Cloud Run.
i.e. achieve this:
gcloud run deploy ${NAME} \
--image=${IMAGE} \
--platform=managed \
--no-allow-unauthenticated \
--region=${REGION} \
--project=${PROJECT}
TRICK: --log-http is your friend.
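A sketch of the Golang flavor, using google.golang.org/api/run/v1 against the regional (non-standard) endpoint; the project, region and image are placeholders and the field names follow the generated client:
package main
import (
    "context"
    "fmt"
    "log"
    "google.golang.org/api/option"
    run "google.golang.org/api/run/v1"
)
func main() {
    ctx := context.Background()
    project := "my-project"            // placeholder
    region := "us-west1"               // placeholder
    image := "gcr.io/my-project/hello" // placeholder
    // The Cloud Run Admin API uses regional endpoints, not the global default.
    service, err := run.NewService(ctx,
        option.WithEndpoint(fmt.Sprintf("https://%s-run.googleapis.com", region)))
    if err != nil {
        log.Fatal(err)
    }
    svc := &run.Service{
        ApiVersion: "serving.knative.dev/v1",
        Kind:       "Service",
        Metadata:   &run.ObjectMeta{Name: "hello", Namespace: project},
        Spec: &run.ServiceSpec{
            Template: &run.RevisionTemplate{
                Spec: &run.RevisionSpec{
                    Containers: []*run.Container{{Image: image}},
                },
            },
        },
    }
    created, err := service.Namespaces.Services.Create("namespaces/"+project, svc).Do()
    if err != nil {
        log.Fatal(err)
    }
    log.Printf("created: %s", created.Metadata.Name)
}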
Tag: Kube-Metrics
Prometheus VPA Recommendations
Phew!
I was interested in learning how to Manage Resources for Containers. On the way, I learned and discovered:
- kubectl top
- Vertical Pod Autoscaler
- A (valuable) digression through PodMonitor
- kube-state-metrics
- kubectl-patch
- Created a Graph
- References
Kubernetes Resources
Visual Studio Code has begun to bug me (reasonably) to add resources to Kubernetes manifests.
E.g.:
resources:
  limits:
    cpu: "1"
    memory: "512Mi"
I’ve been spending time with Deislab’s Akri and decided to determine whether Akri’s primary resources (Agent, Controller) and some of my creations (HTTP Device and Discovery) were being suitably constrained.
Tag: Virtualpodautoscaler
Prometheus VPA Recommendations
Phew!
I was interested in learning how to Manage Resources for Containers. On the way, I learned and discovered:
- kubectl top
- Vertical Pod Autoscaler
- A (valuable) digression through PodMonitor
- kube-state-metrics
- kubectl-patch
- Created a Graph
- References
Kubernetes Resources
Visual Studio Code has begun to bug me (reasonably) to add resources to Kubernetes manifests.
E.g.:
resources:
  limits:
    cpu: "1"
    memory: "512Mi"
I’ve been spending time with Deislab’s Akri and decided to determine whether Akri’s primary resources (Agent, Controller) and some of my creations (HTTP Device and Discovery) were being suitably constrained.
Tag: Dapr
Dapr
It’s a good name; I read it as “dapper” but I frequently type “darp” :-(
Was interested to read that Dapr is now v1.0 and decided to check it out. I was initially confused between Dapr and service mesh functionality. But, having used Dapr, it appears to be more focused in aiding the development of (cloud-native) (distributed) apps by providing developers with abstractions for e.g. service discovery, eventing, observability whereas service meshes feel (!) more oriented towards simplifying the deployment of existing apps. Both use the concept of proxies, deployed alongside app components (as sidecars on Kubernetes) to provide their functionality to apps.
Tag: Deislabs
Krustlet on DO Managed Kubernetes
I’ve spent time this week returning to the interesting Deislabs project Krustlet. Since the last time, the bootstrapping process has been simplified using Kubernetes Bootstrap Tokens. I know this updated process works with MicroK8s. Unfortunately, I’m struggling with it on GKE and thought I’d try DigitalOcean Managed Kubernetes.
It worked first time!
In the following, we run both the Kubernetes cluster and the Krustlet Droplet on DigitalOcean but, as long as the cluster and the VM are able to communicate with one another, you should be able to run these anywhere.
Tag: Krustlet
Krustlet on DO Managed Kubernetes
I’ve spent time this week returning to the interesting Deislabs project Krustlet. Since the last time, the bootstrapping process has been simplified using Kubernetes Bootstrap Tokens. I know this updated process works with MicroK8s. Unfortunately, I’m struggling with it on GKE and thought I’d try DigitalOcean Managed Kubernetes.
It worked first time!
In the following, we run both the Kubernetes cluster and the Krustlet Droplet on DigitalOcean but, as long as the cluster and the VM are able to communicate with one another, you should be able to run these anywhere.
waPC & Protobufs
I’m hacking around with a solution that combines WASM and Google Trillian.
Ultimately, I’d like to be able to ship WASM (binaries) to a Trillian personality and then invoke (exported) functions on them. Some of this was borne from the interesting exploration of Krustlet and its application of wascc.
I’m still booting into WASM but it’s a very interesting technology that has the most interesting potential outside the browser. Some folks have been trailblazing the technology and I have been reading Kevin Hoffman’s Medium and wascc (nee waxosuit) work. From this, I stumbled upon Kevin’s waPC and I’m using waPC in this prototyping as a way to exchange data between clients and servers running WASM binaries.
Tag: Akri
Kubernetes cert-manager
I developed an admission webhook for Akri, twice (Golang, Rust). I naively followed other examples for the generation of the certificates, created a 1.20 cluster and broke that process.
I’d briefly considered using cert-manager recently but quickly abandoned the idea thinking it would be onerous and unnecessary complexity for little-old-me. I was wrong. It’s excellent and I recommend it highly.
I won’t reproduce the v1beta1 and v1 examples from the Stack Overflow question as they should be self-explanatory. I suspect (!?) that I should not have used Kubernetes’ (API Server’s) CA for the Webhook but it could well be that I just don’t understand the correct approach.
Kubernetes Webhooks
I spent some time last week writing my first admission webhook for Kubernetes. I wrote the handler in Golang because I’m most familiar with Golang and because, as Kubernetes’ native language, I was more confident that the necessary SDKs would exist and that the documentation would likely use Golang by default. I struggled to find useful documentation and so this post is to help you (and me!) remember how to do this next time!
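The core of such a handler is just decoding an AdmissionReview and writing one back with a Response; a skeletal sketch (the /validate route and TLS cert paths are assumptions):
package main
import (
    "encoding/json"
    "log"
    "net/http"
    admissionv1 "k8s.io/api/admission/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)
// handleAdmission decodes an AdmissionReview, allows the request and echoes
// the review back with a populated Response, as the API server expects.
func handleAdmission(w http.ResponseWriter, r *http.Request) {
    var review admissionv1.AdmissionReview
    if err := json.NewDecoder(r.Body).Decode(&review); err != nil {
        http.Error(w, err.Error(), http.StatusBadRequest)
        return
    }
    review.Response = &admissionv1.AdmissionResponse{
        UID:     review.Request.UID,
        Allowed: true,
        Result:  &metav1.Status{Message: "ok"},
    }
    if err := json.NewEncoder(w).Encode(review); err != nil {
        log.Printf("encode: %v", err)
    }
}
func main() {
    http.HandleFunc("/validate", handleAdmission)
    // The API server requires TLS; cert paths are assumptions.
    log.Fatal(http.ListenAndServeTLS(":8443", "tls.crt", "tls.key", nil))
}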
Kubernetes Device Plugins
I’m debugging an issue with the Akri Zeroconf protocol in which Instance environment variables are no longer (!) being surfaced within the Broker pods. In my adventures, it seemed useful to better understand how Akri works and specifically, how Akri uses Kubernetes Device Plugins.
IIUC plugins register with the Kubelet (!) via a gRPC service (Registration) that the Kubelet exposes on a UNIX socket at /var/lib/kubelet/device-plugins/kubelet.sock. Then (!) if successful, devices should be reported by the Node’s metadata (spec) and available to be bound to Pods.
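Roughly, a plugin's registration call looks like the following sketch (using the v1beta1 device plugin API; the resource name and the plugin's own socket are placeholders):
package main
import (
    "context"
    "log"
    "net"
    "time"
    "google.golang.org/grpc"
    "google.golang.org/grpc/credentials/insecure"
    pluginapi "k8s.io/kubelet/pkg/apis/deviceplugin/v1beta1"
)
const kubeletSocket = "/var/lib/kubelet/device-plugins/kubelet.sock"
func main() {
    ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
    defer cancel()
    // Dial the Kubelet's registration socket (a UNIX socket, so use a custom dialer).
    conn, err := grpc.DialContext(ctx, kubeletSocket,
        grpc.WithTransportCredentials(insecure.NewCredentials()),
        grpc.WithBlock(),
        grpc.WithContextDialer(func(ctx context.Context, addr string) (net.Conn, error) {
            return (&net.Dialer{}).DialContext(ctx, "unix", addr)
        }),
    )
    if err != nil {
        log.Fatal(err)
    }
    defer conn.Close()
    client := pluginapi.NewRegistrationClient(conn)
    if _, err := client.Register(ctx, &pluginapi.RegisterRequest{
        Version:      pluginapi.Version,
        Endpoint:     "my-device.sock",        // the plugin's own socket (placeholder)
        ResourceName: "example.com/my-device", // placeholder resource
    }); err != nil {
        log.Fatal(err)
    }
    log.Println("registered with the Kubelet")
}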
Akri
For the past couple of weeks, I’ve been playing around with Akri, a Microsoft (DeisLabs) project for building a connected edge with Kubernetes. Kubernetes, IoT, Rust (and Golang) make this all compelling to me.
Initially, I deployed an Akri End-to-End to MicroK8s on Google Compute Engine (link) and Digital Ocean (link). But I was interested to create my own example and so have proposed a very (!) simple HTTP-based protocol.
This blog summarizes my thoughts about Akri and an explanation of the HTTP protocol implementation in the hope that this helps others.
GitHub Actions && GitHub Container Registry
You know when you start something and then regret it!? I think I’ll be sticking with Google Cloud Build; GitHub Actions appears functional and useful but I found the documentation to be confusing and limited and struggled to get a simple container image build|push working.
I’ve long used Docker Hub but am planning to use it less as a result of the planned changes. I want to see Docker succeed and to do so it needs to find a way to make money but, there are free alternatives including the new GitHub Container Registry and the very very cheap Google Container Registry.
akri
I was very interested to read about Microsoft’s DeisLab’s latest (rust-based) Kubernetes project: akri. If I understand it correctly, it provides a mechanism to make any (IoT) device accessible to containers running within a cluster. I need to spend more time playing around with it so that I can fully understand it. I had some problems getting the End-to-End demo running on Google Compute Engine (and then I tried DigitalOcean droplet) instances. So, here’s a two-ways solution to get you going.
Tag: Cert-Manager
Kubernetes cert-manager
I developed an admission webhook for Akri, twice (Golang, Rust). I naively followed other examples for the generation of the certificates, created a 1.20 cluster and broke that process.
I’d briefly considered using cert-manager recently but quickly abandoned the idea thinking it would be onerous and unnecessary complexity for little-old-me. I was wrong. It’s excellent and I recommend it highly.
I won’t reproduce the v1beta1 and v1 examples from the Stack Overflow question as they should be self-explanatory. I suspect (!?) that I should not have used Kubernetes’ (API Server’s) CA for the Webhook but it could well be that I just don’t understand the correct approach.
Tag: Device-Plugins
Kubernetes Device Plugins
I’m debugging an issue with the Akri Zeroconf protocol in which Instance environment variables are no longer (!) being surfaced within the Broker pods. In my adventures, it seemed useful to better understand how Akri works and specifically, how Akri uses Kubernetes Device Plugins.
IIUC plugins register with the Kubelet (!) via a gRPC service (Registration) that the Kubelet exposes on a UNIX socket at /var/lib/kubelet/device-plugins/kubelet.sock. Then (!) if successful, devices should be reported by the Node’s metadata (spec) and available to be bound to Pods.
Tag: Indieweb
webmention
Let’s see if this works!?
I’ve added the following link to this site’s baseof.html so that it is now rendered for each page:
<link
href="https://us-central1-webmention.cloudfunctions.net/webmention"
rel="webmention" />
I discovered indieweb yesterday reading about webmention. Who knows what got me to webmention!?
The principles of both are sound. Instead of relying on come-go behemoths to run our digital world, indieweb seeks to provide technologies that enable us to achieve things without them. webmention is a cross-walled-garden mechanism to perform an evolved form of pingbacks. If X references one of Y’s posts, X’s server notifies Y’s server during publication through a webmention service and, Y can then decide to add reference counts of copies of the referring link to their page. Clever.
Tag: Webmention
webmention
Let’s see if this works!?
I’ve added the following link to this site’s baseof.html so that it is now rendered for each page:
<link
href="https://us-central1-webmention.cloudfunctions.net/webmention"
rel="webmention" />
I discovered indieweb yesterday reading about webmention. Who knows what got me to webmention!?
The principles of both are sound. Instead of relying on come-go behemoths to run our digital world, indieweb seeks to provide technologies that enable us to achieve things without them. webmention is a cross-walled-garden mechanism to perform an evolved form of pingbacks. If X references one of Y’s posts, X’s server notifies Y’s server during publication through a webmention service and, Y can then decide to add reference counts of copies of the referring link to their page. Clever.
Tag: Pest
pest: parsing in Rust
A Microsoft engineer introduced me to pest as a way to introduce service filtering in a ZeroConf plugin that I’m prototyping for Akri. It’s been fun to learn but I worry that, because I won’t use it frequently, I’m going to quickly forget what I’ve done. So, here are my notes.
Here’s the problem: I’d like to be able to provide users of the ZeroConf plugin with a string-based filter that permits them to filter the services discovered when the Akri agent browses a network.
Tag: Zeroconf
pest: parsing in Rust
A Microsoft engineer introduced me to pest as a way to introduce service filtering in a ZeroConf plugin that I’m prototyping for Akri. It’s been fun to learn but I worry that, because I won’t use it frequently, I’m going to quickly forget what I’ve done. So, here are my notes.
Here’s the problem: I’d like to be able to provide users of the ZeroConf plugin with a string-based filter that permits them to filter the services discovered when the Akri agent browses a network.
Tag: Github-Actions
GitHub Actions && GitHub Container Registry
You know when you start something and then regret it!? I think I’ll be sticking with Google Cloud Build; GitHub Actions appears functional and useful but I found the documentation to be confusing and limited and struggled to get a simple container image build|push working.
I’ve long used Docker Hub but am planning to use it less as a result of the planned changes. I want to see Docker succeed and to do so it needs to find a way to make money but, there are free alternatives including the new GitHub Container Registry and the very very cheap Google Container Registry.
Tag: Microsoft
akri
I was very interested to read about Microsoft’s DeisLab’s latest (rust-based) Kubernetes project: akri. If I understand it correctly, it provides a mechanism to make any (IoT) device accessible to containers running within a cluster. I need to spend more time playing around with it so that I can fully understand it. I had some problems getting the End-to-End demo running on Google Compute Engine (and then I tried DigitalOcean droplet) instances. So, here’s a two-ways solution to get you going.
Tag: Rocket
Deploying a Rust HTTP server to DigitalOcean App Platform
DigitalOcean launched an App Platform with many Supported Languages and Frameworks. I used Golang first, then wondered how to use non-natively-supported languages, i.e. Rust.
The good news is that Docker is a supported framework and so, you can run pretty much anything.
Repo: https://github.com/DazWilkin/do-apps-rust
Rust
I’m a Rust noob. I’m always receptive to feedback on improvements to the code. I looked to mirror the Golang example. I’m using rocket and rocket-prometheus for the first time:
You will want to install rust nightly (as Rocket has a dependency that requires it) and then you can override the default toolchain for the current project using:
Tag: Cloud-Storage
Hugo and Google Cloud Storage
I’m using Hugo as a static site generator for this blog. I’m using Firebase (for free) to host lefsilver.
I have other domains that I want to promote and decided to use Google Cloud Storage buckets for these sites. Using Google Cloud Storage for Hosting a static website and using Hugo to deploy sites to Google Cloud Storage (GCS) are documented but, I didn’t find a location where this is combined into a single tutorial and I wanted to add an explanation for ensuring your sites are included in Google’s and Bing’s search indexes.
Tag: GCS
Hugo and Google Cloud Storage
I’m using Hugo as a static site generator for this blog. I’m using Firebase (for free) to host lefsilver.
I have other domains that I want to promote and decided to use Google Cloud Storage buckets for these sites. Using Google Cloud Storage for Hosting a static website and using Hugo to deploy sites to Google Cloud Storage (GCS) are documented but, I didn’t find a location where this is combined into a single tutorial and I wanted to add an explanation for ensuring your sites are included in Google’s and Bing’s search indexes.
Tag: Cloud Build
Rube Goldberg Cloud Build machine for solving Quadratic equations
Google Cloud Build is described by Google as a “CI/CD platform” but it’s fundamentally a service that permits a series of containers to be chained together in a pipeline, optionally leveraging shared data.
As a CI/CD platform, it can be used to lint, test, compile and build software but, if you were looking for a way to explain its basic awesomeness, you could… I don’t know… build a Rube Goldberg style machine that solves Quadratic equations using it 😄
Cloud Build wishlist: Mountable Golang Modules Proxy
I think it would be valuable if Google were to provide volumes in Cloud Build of package registries (e.g. Go Modules; PyPi; Maven; NPM etc.).
Google provides a mirror of a subset of Docker Hub. This confers several benefits: Google’s imprimatur; speed (latency); bandwidth; and convenience.
The same benefits would apply to package registries.
In the meantime, there’s a hacky way to gain some of the benefits of these when using Cloud Build.
In the following example, I’ll show an approach using Golang Modules and Google’s module proxy aka proxy.golang.org.
Tag: Cloud Shell
Visual Studio Code plus Google Cloud Shell
Update: 2020-09-24
Three updates since I wrote the post:
gcloud alpha cloud-shell get-mount-command ${DIR}
It’s possible to use sshfs to mount the Cloud Shell home directory locally:
DIR=/path/to/dir
gcloud alpha cloud-shell get-mount-command ${DIR}
Which generates something of the form:
sshfs [[USERNAME]]@[[HOST]]: ${DIR} \
-p [[PORT]] \
-oIdentityFile=~/.ssh/google_compute_engine \
-oStrictHostKeyChecking=no
You may then code --new-window ${DIR}
curl command may lack .sshHost
curl’ing the cloudshell.googleapis.com endpoint will result in a null value for .sshHost if the Cloud Shell VM must be recreated. The gcloud command avoids this by using an operation to poll the endpoint until the VM exists. The hacky alternative is to run gcloud alpha cloud-shell ssh to force the VM to be created before running the curl command.
Tag: Visual Studio Code
Visual Studio Code plus Google Cloud Shell
Update: 2020-09-24
Three updates since I wrote the post:
gcloud alpha cloud-shell get-mount-command ${DIR}
It’s possible to use sshfs to mount the Cloud Shell home directory locally:
DIR=/path/to/dir
gcloud alpha cloud-shell get-mount-command ${DIR}
Which generates something of the form:
sshfs [[USERNAME]]@[[HOST]]: ${DIR} \
-p [[PORT]] \
-oIdentityFile=~/.ssh/google_compute_engine \
-oStrictHostKeyChecking=no
You may then code --new-window ${DIR}
curl command may lack .sshHost
curl’ing the cloudshell.googleapis.com endpoint will result in a null value for .sshHost if the Cloud Shell VM must be recreated. The gcloud command avoids this by using an operation to poll the endpoint until the VM exists. The hacky alternative is to run gcloud alpha cloud-shell ssh to force the VM to be created before running the curl command.
Tag: Actions
Actions SDK Conversational Quickstart
Google’s tutorial didn’t work for me.
In this post, I’ll help you get this working.
https://developers.google.com/assistant/conversational/quickstart
Create and set up a project
This mostly works.
I recommend using the Actions Console as described to create the project.
I chose “Custom” and “Blank Project”
You need not enable Actions API as this is done automatically:
For the console work, I’m going to use Google’s excellent Cloud Shell. You may access this through the browser or through a terminal:
Tag: Actions-Sdk
Actions SDK Conversational Quickstart
Google’s tutorial didn’t work for me.
In this post, I’ll help you get this working.
https://developers.google.com/assistant/conversational/quickstart
Create and set up a project
This mostly works.
I recommend using the Actions Console as described to create the project.
I chose “Custom” and “Blank Project”
You need not enable Actions API as this is done automatically:
For the console work, I’m going to use Google’s excellent Cloud Shell. You may access this through the browser or through a terminal:
Tag: Assistant
Actions SDK Conversational Quickstart
Google’s tutorial didn’t work for me.
In this post, I’ll help you get this working.
https://developers.google.com/assistant/conversational/quickstart
Create and set up a project
This mostly works.
I recommend using the Actions Console as described to create the project.
I chose “Custom” and “Blank Project”
You need not enable Actions API as this is done automatically:
For the console work, I’m going to use Google’s excellent Cloud Shell. You may access this through the browser or through a terminal:
Tag: Trillian
Trillian Map Mode
Chatting with one of Google’s Trillian team, I was encouraged to explore Trillian’s Map Mode. The following is the result of some spelunking through this unfamiliar cave. I can’t provide any guarantee that this usage is correct or sufficient.
Here’s the repo: https://github.com/DazWilkin/go-trillian-map
I’ve written about Trillian Log Mode elsewhere.
I uncovered use of Trillian Map Mode through Trillian’s integration tests. I’m unclear on the distinction between TrillianMapClient and TrillianMapWriteClient but the latter served most of my needs.
WASM Transparency
I’ve been playing around with a proof-of-concept combining WASM and Trillian. The hypothesis was to explore using WASM as a form of chaincode with Trillian. The project works but it’s far from being a chaincode-like solution.
Let’s start with a couple of (trivial) examples and then I’ll explain what’s going on and how it’s implemented.
2020/08/14 18:42:17 [main:loop:dynamic-invoke] Method: mul
2020/08/14 18:42:17 [random:New] Message
2020/08/14 18:42:17 [random:New] Float32
2020/08/14 18:42:17 [random:New] Float32
2020/08/14 18:42:17 [random:New] Message
2020/08/14 18:42:17 [random:New] Float32
2020/08/14 18:42:17 [random:New] Float32
2020/08/14 18:42:17 [Client:Invoke] Metadata: complex.wasm
2020/08/14 18:42:17 [main:loop:dynamic-invoke] Success: result:{real:0.036980484 imag:0.3898267}
After shipping a Rust-sourced WASM solution (complex.wasm) to the WASM transparency server, the client invokes a method mul that’s exposed by it using a dynamically generated request message and outputs the response. Woo hoo! Yes, an expensive way to multiply complex numbers.
Rust implementation of Crate Transparency using Google Trillian
I’ve been hacking on a Rust-based transparent application for Google Trillian. As appears to be my fixation, this personality is for another package manager. This time, Rust’s Crates, often found on crates.io, which is Rust’s Package Registry. I discussed this project earlier this month in Rust Crate Transparency && Rust SDK for Google Trillian and an earlier approach for Python’s packages with pypi-transparency.
This time, of course, I’m using Rust and, by way of a first for me, for the gRPC server implementation (aka “personality”). I’ve been lazy thanks to the excellent gRPCurl and have been using it by way of a client. Because I’m more familiar with Golang and because I’ve written (most) other Trillian personalities in Golang, I resorted to quickly implementing Crate Transparency in Golang too in order to uncover bugs with the Rust implementation. I’ll write a follow-up post on the complexity I seem to struggle with when using protobufs and gRPC [in Golang].
Rust Crate Transparency && Rust SDK for Google Trillian
I’m noodling the utility of a Transparency solution for Rust Crates. When developers push crates to Cargo, a bunch of metadata is associated with the crate, e.g. protobuf. As with Golang Modules, Python packages on PyPi etc., there appears to be utility in making tamperproof recordings of these publications. Then, other developers may confirm that a crate pulled from crates.io is highly unlikely to have been changed.
On Linux, Cargo stores downloaded crates under ${HOME}/.cargo/registry. In the case of the latest version (2.12.0) of protobuf, on my machine, I have:
Google Trillian on Cloud Run
I’ve written previously (Google Trillian for Noobs) about Google’s interesting project Trillian and about some of the “personalities” (e.g. PyPi Transparency) that I’ve built using it.
Having gone slightly cra-cra on Cloud Run and gRPC this week with Golang gRPC Cloud Run and gRPC, Cloud Run & Endpoints, I thought it’d be fun to deploy Trillian and a personality to Cloud Run.
It mostly (!) works :-)
At the end of the post, I’ve summarized creating a Cloud SQL instance to host the Trillian data(base).
PyPi Transparency Client (Rust)
I’ve finally been able to hack my way through to a working Rust gRPC client (for PyPi Transparency).
It’s not very good: poorly structured, hacky etc. but it serves the purpose of giving me a foothold into Rust development so that I can evolve it as I learn the language and its practices.
There are several Rust crates (SDK) for gRPC. There’s no sanctioned SDK for Rust on grpc.io.
I chose stepancheg’s grpc-rust because it’s a pure Rust implementation (not built atop the C implementation).
PyPi Transparency
I’ve been noodling around with another Trillian personality.
Another in a theme that interests me in providing tamperproof logs for the packages in the popular package management registries.
The Golang team recently announced Go Module Mirror which is built atop Trillian. It seems to me that all the package registries (Go Modules, npm, Maven, NuGet etc.) would benefit from tamperproof logs hosted by a trusted 3rd-party.
As you may have guessed, PyPi Transparency is a log for PyPi packages. PyPi is comprehensive, definitive and trusted but, as with Go Module Mirror, it doesn’t hurt to provide a backup of some of its data. In the case of this solution, Trillian hosts a log of self-calculated SHA-256 hashes for Python packages that are added to it.
pypi-transparency
The goal of pypi-transparency is very similar to the underlying motivation for the Golang team’s Checksum Database (also built with Trillian).
Even though PyPi provides hashes of the content of packages it hosts, the developer must trust that PyPi’s data is consistent. One ambition with pypi-transparency is to provide a companion, tamperproof log of PyPi package files in order to provide a double-check of these hashes.
It is important to understand what this does (and does not) provide. There’s no validation of a package’s content. The only calculation is that, on first observation, a SHA-256 hash is computed of the package’s content and the hash is recorded. If the package is subsequently altered, it’s very probable that the hash will change and this provides a signal to the user that the package’s contents have changed. Because pypi-transparency uses a tamperproof log, it’s very difficult to update the hash recorded in the tamperproof log to reflect this change. Corollary: pypi-transparency will record the hashes of packages that include malicious code.
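The calculation itself is nothing more exotic than a SHA-256 over the package file's bytes; in Golang terms (the filename is a placeholder):
package main
import (
    "crypto/sha256"
    "fmt"
    "io"
    "log"
    "os"
)
func main() {
    // Hash the package file exactly as downloaded (filename is a placeholder).
    f, err := os.Open("example-1.0.0.tar.gz")
    if err != nil {
        log.Fatal(err)
    }
    defer f.Close()
    h := sha256.New()
    if _, err := io.Copy(h, f); err != nil {
        log.Fatal(err)
    }
    fmt.Printf("%x\n", h.Sum(nil))
}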
Tag: Binaryen
Minimizing WASM binaries
I’ve spent time recently playing around with WebAssembly (WASM) and waPC. Rust and WASM were born at Mozilla and there’s a natural affinity with writing WASM binaries in Rust; Rust is what I’ve been using in the WASM examples for WASM Transparency, waPC and MsgPack and waPC and Protobufs.
I’ve created 3 WASM binaries: complex.wasm, simplex.wasm and fabcar.wasm, and each is about 2.5MB when:
cargo build --target=wasm32-unknown-unknown --release
The Rust and WebAssembly book has an excellent section titled Shrinking .wasm Code Size. So, let’s see what help that provides.
Tag: Twiggy
Minimizing WASM binaries
I’ve spent time recently playing around with WebAssembly (WASM) and waPC. Rust and WASM were born at Mozilla and there’s a natural affinity with writing WASM binaries in Rust; Rust is what I’ve been using in the WASM examples for WASM Transparency, waPC and MsgPack and waPC and Protobufs.
I’ve created 3 WASM binaries: complex.wasm, simplex.wasm and fabcar.wasm, and each is about 2.5MB when:
cargo build --target=wasm32-unknown-unknown --release
The Rust and WebAssembly book has an excellent section titled Shrinking .wasm Code Size. So, let’s see what help that provides.
Tag: Wasm
Minimizing WASM binaries
I’ve spent time recently playing around with WebAssembly (WASM) and waPC. Rust and WASM were born at Mozilla and there’s a natural affinity with writing WASM binaries in Rust; Rust is what I’ve been using in the WASM examples for WASM Transparency, waPC and MsgPack and waPC and Protobufs.
I’ve created 3 WASM binaries: complex.wasm, simplex.wasm and fabcar.wasm, and each is about 2.5MB when:
cargo build --target=wasm32-unknown-unknown --release
The Rust and WebAssembly book has an excellent section titled Shrinking .wasm Code Size. So, let’s see what help that provides.
WASM Transparency
I’ve been playing around with a proof-of-concept combining WASM and Trillian. The hypothesis was to explore using WASM as a form of chaincode with Trillian. The project works but it’s far from being a chaincode-like solution.
Let’s start with a couple of (trivial) examples and then I’ll explain what’s going on and how it’s implemented.
2020/08/14 18:42:17 [main:loop:dynamic-invoke] Method: mul
2020/08/14 18:42:17 [random:New] Message
2020/08/14 18:42:17 [random:New] Float32
2020/08/14 18:42:17 [random:New] Float32
2020/08/14 18:42:17 [random:New] Message
2020/08/14 18:42:17 [random:New] Float32
2020/08/14 18:42:17 [random:New] Float32
2020/08/14 18:42:17 [Client:Invoke] Metadata: complex.wasm
2020/08/14 18:42:17 [main:loop:dynamic-invoke] Success: result:{real:0.036980484 imag:0.3898267}
After shipping a Rust-sourced WASM solution (complex.wasm) to the WASM transparency server, the client invokes a method mul that it exposes, using a dynamically generated request message, and outputs the response. Woo hoo! Yes, an expensive way to multiply complex numbers.
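For context, the guest’s mul amounts to ordinary complex multiplication. A tiny Go check (with made-up inputs; the real request/response types live in the protobuf schema, which isn’t shown here):
package main

import "fmt"

func main() {
	// Hypothetical random inputs of the kind the client generates.
	a := complex(float32(0.21), float32(0.47))
	b := complex(float32(0.78), float32(0.66))

	// (ar+ai*i)(br+bi*i) = (ar*br - ai*bi) + (ar*bi + ai*br)*i
	product := a * b
	fmt.Printf("real:%v imag:%v\n", real(product), imag(product))
}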
waPC and MsgPack (Rust|Golang)
As my reader will know (Hey Mom!), I’ve been noodling around with WASM and waPC. I’ve been exploring ways to pass structured messages across the host:guest boundary.
Protobufs was my first choice. @KevinHoffman created waPC and waSCC and he explained to me that waSCC uses MessagePack.
It’s slightly surprising to me (still) that technologies like this exist, with everyone else seemingly using them, and yet I’d not heard of them. I don’t expect to know everything but I’m surprised I hadn’t stumbled upon msgpack until now.
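To make the host:guest message-passing concrete, here’s a minimal Go sketch of a MessagePack round trip. It assumes the github.com/vmihailenco/msgpack/v5 package, which is not necessarily what waPC/waSCC use internally:
package main

import (
	"fmt"
	"log"

	"github.com/vmihailenco/msgpack/v5" // assumed third-party codec; not part of waPC
)

// A hypothetical payload to pass across the host:guest boundary.
type Complex struct {
	Real float32 `msgpack:"real"`
	Imag float32 `msgpack:"imag"`
}

func main() {
	in := Complex{Real: 39, Imag: 3}

	// Encode on one side of the boundary...
	b, err := msgpack.Marshal(&in)
	if err != nil {
		log.Fatal(err)
	}

	// ...decode on the other.
	var out Complex
	if err := msgpack.Unmarshal(b, &out); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%d bytes on the wire; decoded: %+v\n", len(b), out)
}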
Envoy WASM filters in Rust
A digression thanks to Sal Rashid who’s exploring WASM filters w/ Envoy.
The documentation is sparse but:
There is a Rust SDK but it’s not documented:
I found two useful posts by Rustaceans who were able to make use of it:
Here’s my simple use of the SDK’s examples.
wasme
curl -sL https://run.solo.io/wasme/install | sh
PATH=${PATH}:${HOME}/.wasme/bin
wasme --version
It may be possible to avoid creating an account on WebAssemblyHub if you’re staying local.
Remotely invoking WASM functions using gRPC and waPC
Following on from waPC & Protobufs, I can now remotely invoke (arbitrary) WASM functions:
Client:
The logging isn’t perfectly clear but the client gets a (previously added) WASM binary from the server, using the SHA-256 of the binary as a unique identifier. The result includes metadata containing a protobuf descriptor of the WASM binary’s functions. The descriptor defines gRPC services (representing the WASM functions) with input (parameter) and output (result) messages.
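As a sketch of what can be done with that descriptor (assumed file name and flow; not the project’s actual code), here’s how a serialized FileDescriptorSet can be read in Go to list the gRPC services and methods it defines:
package main

import (
	"fmt"
	"log"
	"os"

	"google.golang.org/protobuf/proto"
	"google.golang.org/protobuf/types/descriptorpb"
)

func main() {
	b, err := os.ReadFile("complex.descriptor.pb") // hypothetical path
	if err != nil {
		log.Fatal(err)
	}

	var fds descriptorpb.FileDescriptorSet
	if err := proto.Unmarshal(b, &fds); err != nil {
		log.Fatal(err)
	}

	// Walk every file, service and method described by the descriptor set.
	for _, file := range fds.GetFile() {
		for _, svc := range file.GetService() {
			for _, m := range svc.GetMethod() {
				fmt.Printf("%s/%s(%s) returns %s\n",
					svc.GetName(), m.GetName(), m.GetInputType(), m.GetOutputType())
			}
		}
	}
}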
WASM Cloud Functions
Following on from waPC & Protobufs and a question on Stack Overflow about Cloud Functions, I was compelled to try running WASM on Cloud Functions (no JavaScript required).
I wanted to reuse the WASM waPC functions that I’d written in Rust as described in the other post. Cloud Functions does not (yet!?) provide a Rust runtime and so I’m using the waPC Host for Go in this example.
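For a rough idea of the shape of that function (hypothetical names; the real handler also loads the WASM binary and calls the waPC host), an HTTP-triggered Go function decoding the request shown below might look like:
package invoker

import (
	"encoding/base64"
	"encoding/json"
	"net/http"
)

// Matches the JSON body shown below: filename, function and base64-encoded params.
type Request struct {
	Filename string `json:"filename"`
	Function string `json:"function"`
	Params   string `json:"params"`
}

func Invoker(w http.ResponseWriter, r *http.Request) {
	var req Request
	if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
		http.Error(w, err.Error(), http.StatusBadRequest)
		return
	}
	params, err := base64.StdEncoding.DecodeString(req.Params)
	if err != nil {
		http.Error(w, err.Error(), http.StatusBadRequest)
		return
	}
	// Here the real function would load req.Filename (e.g. complex.wasm),
	// invoke req.Function (e.g. "c:mul") with params via the waPC host,
	// and write the result; this sketch stops short of that.
	_ = params
	w.WriteHeader(http.StatusNotImplemented)
}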
It works!
PARAMS=$(printf '{"a":{"real":39,"imag":3},"b":{"real":39,"imag":3}}' | base64)
TOKEN=$(gcloud auth print-identity-token)
echo "{
\"filename\":\"complex.wasm\",
\"function\":\"c:mul\",
\"params\":\"${PARAMS}\"
}" |\
curl \
--silent \
--request POST \
--header "Content-Type: application/json" \
--header "Authorization: Bearer ${TOKEN}" \
--data @- \
https://${REGION}-${PROJECT}.cloudfunctions.net/invoker
yields (correctly):
waPC & Protobufs
I’m hacking around with a solution that combines WASM and Google Trillian.
Ultimately, I’d like to be able to ship WASM (binaries) to a Trillian personality and then invoke (exported) functions on them. Some of this was born from the interesting exploration of Krustlet and its application of wascc.
I’m still booting into WASM but it’s a very interesting technology with particularly compelling potential outside the browser. Some folks have been trailblazing the technology and I have been reading Kevin Hoffman’s Medium posts and wascc (née waxosuit) work. From this, I stumbled upon Kevin’s waPC and I’m using waPC in this prototyping as a way to exchange data between clients and servers running WASM binaries.
Google Container Registry w/ OCI
I’ve been spending some time this week with Krustlet.
I’m working on documenting how to run Krustlet(s) alongside GKE. I’ve been running a Krustlet with MicroK8s.
The Krustlet demos reference WASM assemblies stored in Azure Container Registry as OCI containers. Google Container Registry supports the OCI format and so I tried (successfully) using GCR instead of ACR.
There may be an easier approach but this is how I got this working.
Krustlet uses wasm-to-oci. I was challenged by wasm-to-oci authentication. wasm-to-oci uses ORAS. It turns out that, after authenticating using ORAS, I’m able to use wasm-to-oci to authenticate to a GCR registry!
Tag: Grpc
WASM Transparency
I’ve been playing around with a proof-of-concept combining WASM and Trillian. The hypothesis was to explore using WASM as a form of chaincode with Trillian. The project works but it’s far from being a chaincode-like solution.
Let’s start with a couple of (trivial) examples and then I’ll explain what’s going on and how it’s implemented.
2020/08/14 18:42:17 [main:loop:dynamic-invoke] Method: mul
2020/08/14 18:42:17 [random:New] Message
2020/08/14 18:42:17 [random:New] Float32
2020/08/14 18:42:17 [random:New] Float32
2020/08/14 18:42:17 [random:New] Message
2020/08/14 18:42:17 [random:New] Float32
2020/08/14 18:42:17 [random:New] Float32
2020/08/14 18:42:17 [Client:Invoke] Metadata: complex.wasm
2020/08/14 18:42:17 [main:loop:dynamic-invoke] Success: result:{real:0.036980484 imag:0.3898267}
After shipping a Rust-sourced WASM solution (complex.wasm) to the WASM transparency server, the client invokes a method mul that it exposes, using a dynamically generated request message, and outputs the response. Woo hoo! Yes, an expensive way to multiply complex numbers.
Tag: Wapc
WASM Transparency
I’ve been playing around with a proof-of-concept combining WASM and Trillian. The hypothesis was to explore using WASM as a form of chaincode with Trillian. The project works but it’s far from being a chaincode-like solution.
Let’s start with a couple of (trivial) examples and then I’ll explain what’s going on and how it’s implemented.
2020/08/14 18:42:17 [main:loop:dynamic-invoke] Method: mul
2020/08/14 18:42:17 [random:New] Message
2020/08/14 18:42:17 [random:New] Float32
2020/08/14 18:42:17 [random:New] Float32
2020/08/14 18:42:17 [random:New] Message
2020/08/14 18:42:17 [random:New] Float32
2020/08/14 18:42:17 [random:New] Float32
2020/08/14 18:42:17 [Client:Invoke] Metadata: complex.wasm
2020/08/14 18:42:17 [main:loop:dynamic-invoke] Success: result:{real:0.036980484 imag:0.3898267}
After shipping a Rust-sourced WASM solution (complex.wasm) to the WASM transparency server, the client invokes a method mul that it exposes, using a dynamically generated request message, and outputs the response. Woo hoo! Yes, an expensive way to multiply complex numbers.
waPC and MsgPack (Rust|Golang)
As my reader will know (Hey Mom!), I’ve been noodling around with WASM and waPC. I’ve been exploring ways to pass structured messages across the host:guest boundary.
Protobufs was my first choice. @KevinHoffman created waPC and waSCC and he explained to me that waSCC uses MessagePack.
It’s slightly surprising to me (still) that technologies like this exist, with everyone else seemingly using them, and yet I’d not heard of them. I don’t expect to know everything but I’m surprised I hadn’t stumbled upon msgpack until now.
Remotely invoking WASM functions using gRPC and waPC
Following on from waPC & Protobufs, I can now remotely invoke (arbitrary) WASM functions:
Client:
The logging isn’t perfectly clear but the client gets a (previously added) WASM binary from the server, using the SHA-256 of the binary as a unique identifier. The result includes metadata containing a protobuf descriptor of the WASM binary’s functions. The descriptor defines gRPC services (representing the WASM functions) with input (parameter) and output (result) messages.
WASM Cloud Functions
Following on from waPC & Protobufs and a question on Stack Overflow about Cloud Functions, I was compelled to try running WASM on Cloud Functions (no JavaScript required).
I wanted to reuse the WASM waPC functions that I’d written in Rust as described in the other post. Cloud Functions does not (yet!?) provide a Rust runtime and so I’m using the waPC Host for Go in this example.
It works!
PARAMS=$(printf '{"a":{"real":39,"imag":3},"b":{"real":39,"imag":3}}' | base64)
TOKEN=$(gcloud auth print-identity-token)
echo "{
\"filename\":\"complex.wasm\",
\"function\":\"c:mul\",
\"params\":\"${PARAMS}\"
}" |\
curl \
--silent \
--request POST \
--header "Content-Type: application/json" \
--header "Authorization: Bearer ${TOKEN}" \
--data @- \
https://${REGION}-${PROJECT}.cloudfunctions.net/invoker
yields (correctly):
waPC & Protobufs
I’m hacking around with a solution that combines WASM and Google Trillian.
Ultimately, I’d like to be able to ship WASM (binaries) to a Trillian personality and then invoke (exported) functions on them. Some of this was born from the interesting exploration of Krustlet and its application of wascc.
I’m still booting into WASM but it’s a very interesting technology with particularly compelling potential outside the browser. Some folks have been trailblazing the technology and I have been reading Kevin Hoffman’s Medium posts and wascc (née waxosuit) work. From this, I stumbled upon Kevin’s waPC and I’m using waPC in this prototyping as a way to exchange data between clients and servers running WASM binaries.
Tag: Messagepack
waPC and MsgPack (Rust|Golang)
As my reader will know (Hey Mom!), I’ve been noodling around with WASM and waPC. I’ve been exploring ways to pass structured messages across the host:guest boundary.
Protobufs was my first choice. @KevinHoffman created waPC and waSCC and he explained to me that waSCC uses MessagePack.
It’s slightly surprising to me (still) that technologies like this exist, with everyone else seemingly using them, and yet I’d not heard of them. I don’t expect to know everything but I’m surprised I hadn’t stumbled upon msgpack until now.
Tag: Msgpack
waPC and MsgPack (Rust|Golang)
As my reader will know (Hey Mom!), I’ve been noodling around with WASM and waPC. I’ve been exploring ways to pass structured messages across the host:guest boundary.
Protobufs was my first choice. @KevinHoffman created waPC and waSCC and he explained to me that waSCC uses MessagePack.
It’s slightly surprising to me (still) that technologies like this exist, with everyone else seemingly using them, and yet I’d not heard of them. I don’t expect to know everything but I’m surprised I hadn’t stumbled upon msgpack until now.
Tag: Envoy
Envoy WASM filters in Rust
A digression thanks to Sal Rashid who’s exploring WASM filters w/ Envoy.
The documentation is sparse but:
There is a Rust SDK but it’s not documented:
I found two useful posts by Rustaceans who were able to make use of it:
Here’s my simple use of the SDK’s examples.
wasme
curl -sL https://run.solo.io/wasme/install | sh
PATH=${PATH}:${HOME}/.wasme/bin
wasme --version
It may be possible to avoid creating an account on WebAssemblyHub if you’re staying local.
Tag: Wascc
waPC & Protobufs
I’m hacking around with a solution that combines WASM and Google Trillian.
Ultimately, I’d like to be able to ship WASM (binaries) to a Trillian personality and then invoke (exported) functions on them. Some of this was born from the interesting exploration of Krustlet and its application of wascc.
I’m still booting into WASM but it’s a very interesting technology with particularly compelling potential outside the browser. Some folks have been trailblazing the technology and I have been reading Kevin Hoffman’s Medium posts and wascc (née waxosuit) work. From this, I stumbled upon Kevin’s waPC and I’m using waPC in this prototyping as a way to exchange data between clients and servers running WASM binaries.
Tag: GCR
Google Container Registry w/ OCI
I’ve been spending some time this week with Krustlet.
I’m working on documenting how to run Krustlet(s) alongside GKE. I’ve been running a Krustlet with MicroK8s.
The Krustlet demos reference WASM assemblies stored in Azure Container Registry as OCI containers. Google Container Registry supports the OCI format and so I tried (successfully) using GCR instead of ACR.
There may be an easier approach but this is how I got this working.
Krustlet uses wasm-to-oci. I was challenged by wasm-to-oci authentication. wasm-to-oci uses ORAS. It turns out that, after authenticating using ORAS, I’m able to use wasm-to-oci to authenticate to a GCR registry!
Accessing GCR repos from Kubernetes
Until today, I’d not accessed a Google Container Registry repo from a non-GKE Kubernetes deployment.
It turns out that it’s pretty well-documented (link) but here’s an end-to-end example.
Assuming:
BILLING=[[YOUR-BILLING]]
PROJECT=[[YOUR-PROJECT]]
SERVER="us.gcr.io"
If not already:
gcloud projects create ${PROJECT}
gcloud beta billing projects link ${PROJECT} \
--billing-account=${BILLING}
gcloud services enable containerregistry.googleapis.com \
--project=${PROJECT}
Container Registry
IMAGE="busybox" # Or ...
docker pull ${IMAGE}
docker tag \
${IMAGE} \
${SERVER}/${PROJECT}/${IMAGE}
docker push ${SERVER}/${PROJECT}/${IMAGE}
gcloud container images list-tags ${SERVER}/${PROJECT}/${IMAGE}
Service Account
Create a service account that’s permitted to download (read-only) images from this project’s registry
Tag: Oci
Google Container Registry w/ OCI
I’ve been spending some time this week with Krustlet.
I’m working on documenting how to run Krustlet(s) alongside GKE. I’ve been running a Krustlet with MicroK8s.
The Krustlet demos reference WASM assemblies stored in Azure Container Registry as OCI containers. Google Container Registry supports the OCI format and so I tried (successfully) using GCR instead of ACR.
There may be an easier approach but this is how I got this working.
Krustlet uses wasm-to-oci. I was challenged by wasm-to-oci authentication. wasm-to-oci uses ORAS. It turns out that, after authenticating using ORAS, I’m able to use wasm-to-oci to authenticate to a GCR registry!
Tag: Crate
Rust implementation of Crate Transparency using Google Trillian
I’ve been hacking on a Rust-based transparency application for Google Trillian. As appears to be my fixation, this personality is for another package manager. This time, Rust’s Crates, often found in crates.io which is Rust’s Package Registry. I discussed this project earlier this month in Rust Crate Transparency && Rust SDK for Google Trillian and an earlier approach for Python’s packages with pypi-transparency.
This time, of course, I’m using Rust. And, by way of a first for me, using Rust for the gRPC server implementation (aka “personality”). I’ve been lazy thanks to the excellent gRPCurl and have been using it by way of a client. Because I’m more familiar with Golang and because I’ve written (most) other Trillian personalities in Golang, I resorted to quickly implementing Crate Transparency in Golang too in order to uncover bugs with the Rust implementation. I’ll write a follow-up post on the complexity I seem to struggle with when using protobufs and gRPC [in Golang].
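For flavor, here’s a bare-bones Go gRPC server skeleton of the kind a Trillian personality starts from (not the project’s code; the port is arbitrary). Enabling server reflection is what lets gRPCurl discover and call the services without local protos:
package main

import (
	"log"
	"net"

	"google.golang.org/grpc"
	"google.golang.org/grpc/reflection"
)

func main() {
	lis, err := net.Listen("tcp", ":50051") // hypothetical port
	if err != nil {
		log.Fatal(err)
	}

	s := grpc.NewServer()
	// pb.RegisterCrateTransparencyServer(s, &server{}) // hypothetical generated registration
	reflection.Register(s)

	log.Println("listening on", lis.Addr())
	if err := s.Serve(lis); err != nil {
		log.Fatal(err)
	}
}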
Tag: BLE
Golang Xiaomi Bluetooth Temperature|Humidity (LYWSD03MMC) 2nd Gen
Well, this became more of an adventure than I’d originally wanted but, after learning some BLE and with the help of others (thanks Jonathan, JsBergbau), I have sample code that connects to 4 Xiaomi 2nd gen. thermometers, subscribes to readings and publishes the data to MQTT (sketched after the credits below). From there, I’m scraping it into Prometheus using Inuits’ MQTTGateway.
Repo: https://github.com/DazWilkin/gomijia2
Thanks|Credit:
- Jonathan McDowell for gomijia and help
- JsBergbau for help
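A minimal sketch of the MQTT publishing step, assuming github.com/eclipse/paho.mqtt.golang (gomijia2’s actual wiring may differ; broker address and topic are made up):
package main

import (
	"fmt"
	"log"

	mqtt "github.com/eclipse/paho.mqtt.golang"
)

func main() {
	opts := mqtt.NewClientOptions().
		AddBroker("tcp://mosquitto.local:1883"). // hypothetical broker address
		SetClientID("gomijia2-sketch")

	client := mqtt.NewClient(opts)
	if token := client.Connect(); token.Wait() && token.Error() != nil {
		log.Fatal(token.Error())
	}
	defer client.Disconnect(250)

	// Values that would normally come from the LYWSD03MMC notification payload.
	temperature, humidity := 21.4, 48.0
	payload := fmt.Sprintf(`{"temperature":%.1f,"humidity":%.0f}`, temperature, humidity)

	token := client.Publish("home/livingroom/lywsd03mmc", 0, false, payload)
	token.Wait()
	if token.Error() != nil {
		log.Fatal(token.Error())
	}
}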
Background
I’ve been playing around with ESPHome and blogged about my very positive experience in ESPHome, MQTT, Prometheus and almost Cloud IoT. I’ve ordered a couple of ESP32-DevKitC boards and hope this will enable me to connect to Google Cloud IoT.
Tag: LYWSD03MMC
Golang Xiaomi Bluetooth Temperature|Humidity (LYWSD03MMC) 2nd Gen
Well, this became more of an adventure than I’d originally wanted but, after learning some BLE and with the help of others (thanks Jonathan, JsBergbau), I have sample code that connects to 4 Xiaomi 2nd gen. thermometers, subscribes to readings and publishes the data to MQTT. From there, I’m scraping it into Prometheus using Inuits’ MQTTGateway.
Repo: https://github.com/DazWilkin/gomijia2
Thanks|Credit:
- Jonathan McDowell for gomijia and help
- JsBergbau for help
Background
I’ve been playing around with ESPHome and blogged about my very positive experience in ESPHome, MQTT, Prometheus and almost Cloud IoT. I’ve ordered a couple of ESP32-DevKitC boards and hope this will enable me to connect to Google Cloud IoT.
Tag: Xiaomi
Golang Xiaomi Bluetooth Temperature|Humidity (LYWSD03MMC) 2nd Gen
Well, this became more of an adventure than I’d originally wanted but, after learning some BLE and with the help of others (thanks Jonathan, JsBergbau), I have sample code that connects to 4 Xiaomi 2nd gen. thermometers, subscribes to readings and publishes the data to MQTT. From there, I’m scraping it into Prometheus using Inuits’ MQTTGateway.
Repo: https://github.com/DazWilkin/gomijia2
Thanks|Credit:
- Jonathan McDowell for gomijia and help
- JsBergbau for help
Background
I’ve been playing around with ESPHome and blogged about my very positive experience in ESPHome, MQTT, Prometheus and almost Cloud IoT. I’ve ordered a couple of ESP32-DevKitC boards and hope this will enable me to connect to Google Cloud IoT.
Tag: ESP
gRPC, Cloud Run & Endpoints
<3 Google but there’s quite often an assumption that we’re all sitting around the engineering table and, of course, we’re not.
Cloud Endpoints is a powerful offering but – IMO – it’s super confusing to understand and complex to deploy.
If you’re familiar with the motivations behind service meshes (e.g. Istio), Cloud Endpoints fits in a similar niche (“neesh” or “nitch”?). The underlying ambition is that developers can take existing code and, by adding a proxy (or sidecar), gain general-purpose abstractions: security, logging etc.
ESPHome, MQTT, Prometheus and almost Cloud IoT
ESPHome is a very interesting project. I’d not heard of it until this week and am surprised that it isn’t more newsworthy.
I’m always tinkering with IoT stuff and have a couple of Wemos D1 ESP8266s that are brought out occasionally for learning. I’ve been using them this week with ESPHome. I’m looking to buy some Xiaomi BLE temperature sensors and am thinking I could read the temperatures from these using the ESPs (thanks to ESPHome) and publish the data to MQTT. I tried (unsuccessfully) to publish to Google Cloud IoT (not Google’s fault but a limitation in the current ESPHome, I think) but have been successful publishing to a Mosquitto broker and rendering the metric data in Prometheus.
Tag: OriginStamp
OriginStamp Rust SDK Example
I wrote recently describing Python and Golang clients for OriginStamp based on OriginStamp’s API swagger spec. As a way to pursue learning rust, I’ve been forcing myself to write examples using rust. I’m honestly finding learning rust tough going and think I’d probably be better off reverting to the “Learning Rust” tutorials.
That said, herewith an explanation of building a rust client using an OpenAPI (!) generated SDK from OriginStamp’s swagger spec.
OriginStamp: Verifying Proofs
Recently, I wrote about some initial adventures with OriginStamp
Using OriginStamp’s UI or API, submitting a hash results in transactions being submitted to Bitcoin, Ethereum and a German newspaper.
Using the API, it’s possible to query OriginStamp’s service for a proof. This post explains how to verify such a proof.
The diligent reader among you (Hey Mom!) will recall that I submitted a hash for the message:
Frederik Jack is a bubbly Border Collie
The SHA-256 hash of this message is:
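The digest value itself isn’t reproduced in this excerpt, but it can be computed with a few lines of Go (a sketch, not the OriginStamp tooling; note the result differs if the original message included a trailing newline):
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

func main() {
	// The message submitted to OriginStamp, exactly as quoted above.
	message := "Frederik Jack is a bubbly Border Collie"
	digest := sha256.Sum256([]byte(message))
	fmt.Println(hex.EncodeToString(digest[:]))
}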
FreeTSA & Digitorus' Timestamp SDK
I wrote recently about some exploration of Timestamping with OriginStamp. Since writing that post, I had some supportive feedback from the helpful folks at OriginStamp and plan to continue exploring that solution.
Meanwhile, OriginStamp exposed me to timestamping and trusted timestamping and I discovered freeTSA.org.
What’s the point? These services provide authoritative proof of the existence of a digital asset before some point in time; OriginStamp provides a richer service and uses multiple timestamp authorities including Bitcoin, Ethereum and rather interestingly a German Newspaper’s Trusted Timestamp.
OriginStamp Python|Golang SDK Examples
A friend mentioned OriginStamp to me.
NB There are 2 sites: originstamp.com and originstamp.org.
It’s an interesting project.
It’s a solution for providing auditable proof that you had a(ccess to) some digital thing before a certain date. OriginStamp provides user-|developer-friendly means to submit files|hashes (of your content) and have these bundled into transactions that are submitted to e.g. bitcoin.
I won’t attempt to duplicate the narrative here, review OriginStamp’s site and some of its content.
Tag: Rust
OriginStamp Rust SDK Example
I wrote recently describing Python and Golang clients for OriginStamp based on OriginStamp’s API swagger spec. As a way to pursue learning rust, I’ve been forcing myself to write examples using rust. I’m honestly finding learning rust tough going and think I’d probably be better off reverting to the “Learning Rust” tutorials.
That said, herewith an explanation of building a rust client using an OpenAPI (!) generated SDK from OriginStamp’s swagger spec.
Tag: Cloud IoT
ESPHome, MQTT, Prometheus and almost Cloud IoT
ESPHome is a very interesting project. I’d not heard of it until this week and am surprised that it isn’t more newsworthy.
I’m always tinkering with IoT stuff and have a couple of Wemos D1 ESP8266s that are brought out occasionally for learning. I’ve been using them this week with ESPHome. I’m looking to buy some Xiaomi BLE temperature sensors and am thinking I could read the temperatures from these using the ESPs (thanks to ESPHome) and publish the data to MQTT. I tried (unsuccessfully) to publish to Google Cloud IoT (not Google’s fault but a limitation in the current ESPHome, I think) but have been successful publishing to a Mosquitto broker and rendering the metric data in Prometheus.
Tag: Esphome
ESPHome, MQTT, Prometheus and almost Cloud IoT
ESPHome is a very interesting project. I’d not heard of it until this week and am surprised that it isn’t more newsworthy.
I’m always tinkering with IoT stuff and have a couple of Wemos D1 ESP8266s that are brought out occasionally for learning. I’ve been using them this week with ESPHome. I’m looking to buy some Xiaomi BLE temperature sensors and am thinking I could read the temperatures from these using the ESPs (thanks to ESPHome) and publish the data to MQTT. I tried (unsuccessfully) to publish to Google Cloud IoT (not Google’s fault but a limitation in the current ESPHome, I think) but have been successful publishing to a Mosquitto broker and rendering the metric data in Prometheus.
Tag: IoT
ESPHome, MQTT, Prometheus and almost Cloud IoT
ESPHome is a very interesting project. I’d not heard of it until this week and am surprised that it isn’t more newsworthy.
I’m always tinkering with IoT stuff and have a couple of Wemos D1 ESP8266s that are brought out occasionally for learning. I’ve been using them this week with ESPHome. I’m looking to buy some Xiaomi BLE temperature sensors and am thinking I could read the temperatures from these using the ESPs (thanks to ESPHome) and publish the data to MQTT. I tried (unsuccessfully) to publish to Google Cloud IoT (not Google’s fault but a limitation in the current ESPHome, I think) but have been successful publishing to a Mosquitto broker and rendering the metric data in Prometheus.
Tag: Mosquitto
ESPHome, MQTT, Prometheus and almost Cloud IoT
ESPHome is a very interesting project. I’d not heard of it until this week and am surprised that it isn’t more newsworthy.
I’m always tinkering with IoT stuff and have a couple of Wemos D1 ESP8266s that are brought out occasionally for learning. I’ve been using them this week with ESPHome. I’m looking to buy some Xiaomi BLE temperature sensors and am thinking I could read the temperatures from these using the ESPs (thanks to ESPHome) and publish the data to MQTT. I tried (unsuccessfully) to publish to Google Cloud IoT (not Google’s fault but a limitation in the current ESPHome, I think) but have been successful publishing to a Mosquitto broker and rendering the metric data in Prometheus.
Tag: Mqttgateway
ESPHome, MQTT, Prometheus and almost Cloud IoT
ESPHome is a very interesting project. I’d not heard of it until this week and am surprised that it isn’t more newsworthy.
I’m always tinkering with IoT stuff and have a couple of Wemos D1 ESP8266s that are brought out occasionally for learning. I’ve been using them this week with ESPHome. I’m looking to buy some Xiaomi BLE temperature sensors and am thinking I could read the temperatures from these using the ESPs (thanks to ESPHome) and publish the data to MQTT. I tried (unsuccessfully) to publish to Google Cloud IoT (not Google’s fault but a limitation in the current ESPHome, I think) but have been successful publishing to a Mosquitto broker and rendering the metric data in Prometheus.
Tag: Bitcoin
OriginStamp: Verifying Proofs
Recently, I wrote about some initial adventures with OriginStamp
Using OriginStamp’s UI or API, submitting a hash results in transactions being submitted to Bitcoin, Ethereum and a German newspaper.
Using the API, it’s possible to query OriginStamp’s service for a proof. This post explains how to verify such a proof.
The diligent reader among you (Hey Mom!) will recall that I submitted a hash for the message:
Frederik Jack is a bubbly Border Collie
The SHA-256 hash of this message is:
Tag: Merkle
OriginStamp: Verifying Proofs
Recently, I wrote about some initial adventures with OriginStamp
Using OriginStamp’s UI or API, submitting a hash results in transactions being submitted to Bitcoin, Ethereum and a German newspaper.
Using the API, it’s possible to query OriginStamp’s service for a proof. This post explains how to verify such a proof.
The diligent reader among you (Hey Mom!) will recall that I submitted a hash for the message:
Frederik Jack is a bubbly Border Collie
The SHA-256 hash of this message is:
Tag: Timestamping
OriginStamp: Verifying Proofs
Recently, I wrote about some initial adventures with OriginStamp
Using OriginStamp’s UI or API, submitting a hash results in transactions being submitted to Bitcoin, Ethereum and a German newspaper.
Using the API, it’s possible to query OriginStamp’s service for a proof. This post explains how to verify such a proof.
The diligent reader among you (Hey Mom!) will recall that I submitted a hash for the message:
Frederik Jack is a bubbly Border Collie
The SHA-256 hash of this message is:
FreeTSA & Digitorus' Timestamp SDK
I wrote recently about some exploration of Timestamping with OriginStamp. Since writing that post, I had some supportive feedback from the helpful folks at OriginStamp and plan to continue exploring that solution.
Meanwhile, OriginStamp exposed me to timestamping and trusted timestamping and I discovered freeTSA.org.
What’s the point? These services provide authoritative proof of the existence of a digital asset before some point in time; OriginStamp provides a richer service and uses multiple timestamp authorities including Bitcoin, Ethereum and rather interestingly a German Newspaper’s Trusted Timestamp.
Tag: TSA
FreeTSA & Digitorus' Timestamp SDK
I wrote recently about some exploration of Timestamping with OriginStamp. Since writing that post, I had some supportive feedback from the helpful folks at OriginStamp and plan to continue exploring that solution.
Meanwhile, OriginStamp exposed me to timestamping and trusted timestamping and I discovered freeTSA.org.
What’s the point? These services provide authoritative proof of the existence of a digital asset before some point in time; OriginStamp provides a richer service and uses multiple timestamp authorities including Bitcoin, Ethereum and rather interestingly a German Newspaper’s Trusted Timestamp.
Tag: Container Registry
Accessing GCR repos from Kubernetes
Until today, I’d not accessed a Google Container Registry repo from a non-GKE Kubernetes deployment.
It turns out that it’s pretty well-documented (link) but here’s an end-to-end example.
Assuming:
BILLING=[[YOUR-BILLING]]
PROJECT=[[YOUR-PROJECT]]
SERVER="us.gcr.io"
If not already:
gcloud projects create ${PROJECT}
gcloud beta billing projects link ${PROJECT} \
--billing-account=${BILLING}
gcloud services enable containerregistry.googleapis.com \
--project=${PROJECT}
Container Registry
IMAGE="busybox" # Or ...
docker pull ${IMAGE}
docker tag \
${IMAGE} \
${SERVER}/${PROJECT}/${IMAGE}
docker push ${SERVER}/${PROJECT}/${IMAGE}
gcloud container images list-tags ${SERVER}/${PROJECT}/${IMAGE}
Service Account
Create a service account that’s permitted to download (read-only) images from this project’s registry
Tag: Proxy.golang.org
Cloud Build wishlist: Mountable Golang Modules Proxy
I think it would be valuable if Google were to provide mountable volumes of package registries (e.g. Go Modules; PyPi; Maven; NPM etc.) in Cloud Build.
Google provides a mirror of a subset of Docker Hub. This confers several benefits: Google’s imprimatur; speed (latency); bandwidth; and convenience.
The same benefits would apply to package registries.
In the meantime, there’s a hacky way to gain some of the benefits of these when using Cloud Build.
In the following example, I’ll show an approach using Golang Modules and Google’s module proxy aka proxy.golang.org.
Tag: GCE
Setting up a GCE Instance as an Inlets Exit Node
The prolific Alex Ellis has a new project, Inlets.
Here’s a quick tutorial using Google Cloud Platform’s (GCP) Compute Engine (GCE).
NB I’m using one of Google’s “Always Free” f1-micro instances but you may still pay for network egress/ingress and storage.
Assumptions
I’m assuming you’ve a Google account, have used GCP and have a billing account established, i.e. the following returns at least one billing account:
gcloud beta billing accounts list
If you’ve only one billing account and it’s the one you wish to use, then you can:
Tag: Inlets
Setting up a GCE Instance as an Inlets Exit Node
The prolific Alex Ellis has a new project, Inlets.
Here’s a quick tutorial using Google Cloud Platform’s (GCP) Compute Engine (GCE).
NB I’m using one of Google’s “Always Free” f1-micro instances but you may still pay for network egress/ingress and storage.
Assumptions
I’m assuming you’ve a Google account, have used GCP and have a billing account established, i.e. the following returns at least one billing account:
gcloud beta billing accounts list
If you’ve only one billing account and it’s the one you wish to use, then you can:
Tag: DD-WRT
Trendnet TEW-812DRU and DD-WRT
The FBI Portland published an interesting advisory with several, sensible recommendations including firewalling IoT devices from other devices on a home network. I decided to deploy a redundant Trendnet TEW-812DRU version 2.0 for this purpose.
Caveat Developer: Before I go further, I don’t recommend installing DD-WRT on a Trendnet TEW-812DRU unless you’re willing to brick the device irrecoverably.
I read the DD-WRT instructions several times (“peacock thread”, router database – do not use v3.0 beta builds!), thought I’d followed them and still (temporarily) bricked my router using the firmware downloaded from the DD-WRT router database. It is thanks to Justin’s help and his post that I was encouraged to try restoring my device to the Trendnet firmware and then retrying the DD-WRT installation with an older firmware version.
Tag: TEW-812DRU
Trendnet TEW-812DRU and DD-WRT
The FBI Portland published an interesting advisory with several, sensible recommendations including firewalling IoT devices from other devices on a home network. I decided to deploy a redundant Trendnet TEW-812DRU version 2.0 for this purpose.
Caveat Developer: Before I go further, I don’t recommend installing DD-WRT on a Trendnet TEW-812DRU unless you’re willing to brick the device irrecoverably.
I read the DD-WRT instructions several times (“peacock thread”, router database – do not use v3.0 beta builds!), thought I’d followed them and still (temporarily) bricked my router using the firmware downloaded from the DD-WRT router database. It is thanks to Justin’s help and his post that I was encouraged to try restoring my device to the Trendnet firmware and then retrying the DD-WRT installation with an older firmware version.
Tag: Trendnet
Trendnet TEW-812DRU and DD-WRT
The FBI Portland published an interesting advisory with several, sensible recommendations including firewalling IoT devices from other devices on a home network. I decided to deploy a redundant Trendnet TEW-812DRU version 2.0 for this purpose.
Caveat Developer: Before I go further, I don’t recommend installing DD-WRT on a Trendnet TEW-812DRU unless you’re willing to brick the device irrecoverably.
I read the DD-WRT instructions several times (“peacock thread”, router database – do not use v3.0 beta builds!), thought I’d followed them and still (temporarily) bricked my router using the firmware downloaded from the DD-WRT router database. It is thanks to Justin’s help and his post that I was encouraged to try restoring my device to the Trendnet firmware and then retrying the DD-WRT installation with an older firmware version.
Tag: Google Fit
Google Fit
I’ve spent a few days exploring the Google Fit SDK as I try to wean myself from my obsession with metrics (of all forms). A quick Googling got me to Robert’s Exporter Google Fit Daily Steps, Weight and Distance to a Google Sheet. This works and is probably where I should have stopped… avoiding the rabbit hole that I’ve been down…
I threw together a simple Golang implementation of the SDK using Google’s Golang API Client Library. Thanks to Robert’s example, I was able to infer some of the complexity of this API, particularly in its use of data types, data sources and datasets. Having used Stackdriver in my previous life, Google Fit’s structure bears more than a passing resemblance to Stackdriver’s data model and its use of resource types and metric types.
Tag: Google Home
Google Home Exporter
I’m obsessing over Prometheus exporters. First came Linode Exporter, then GCP Exporter and, on Sunday, I stumbled upon a reverse-engineered API for Google Home devices and so wrote a very basic Google Home SDK and a similarly basic Google Home Exporter:
The SDK only implements /setup/eureka_info and then only some of the returned properties. There’s not a lot of metric-like data to use besides SignalLevel (signal_level) and NoiseLevel (noise_level). I’m not clear on the meaning of some of the properties.
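For reference, a minimal Go sketch (not the SDK itself; the device address and port 8008 are assumptions) of querying that endpoint and picking out the two metric-like fields:
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"net/http"
)

// Only the two fields mentioned above; the endpoint returns many more.
type eurekaInfo struct {
	SignalLevel float64 `json:"signal_level"`
	NoiseLevel  float64 `json:"noise_level"`
}

func main() {
	resp, err := http.Get("http://192.168.1.50:8008/setup/eureka_info") // hypothetical device IP and assumed port
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	var info eurekaInfo
	if err := json.NewDecoder(resp.Body).Decode(&info); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("signal_level=%v noise_level=%v\n", info.SignalLevel, info.NoiseLevel)
}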
Tag: BPF
Adventures around BPF
I think (!?) this interesting learning experience started with Envoy Go Extensions.
IIUC, Cilium contributed this mechanism (Envoy Go Extensions) by which it’s possible to extend Envoy using Golang. The documentation references BPF. Cilium and BPF were both unfamiliar technologies to me and so began my learning. This part of the journey concludes with Weave Scope.
I learned that Cilium is a CNI implementation that may be used with Kubernetes. GKE defaults to (but is not limited to) Google’s own CNI implementation (link). From a subsequent brief exploration of CNI, I believe Digital Ocean’s managed Kubernetes service uses Cilium and Linode Kubernetes Engine uses Calico.
Tag: Cilium
Adventures around BPF
I think (!?) this interesting learning experience started with Envoy Go Extensions.
IIUC, Cilium contributed this mechanism (Envoy Go Extensions) by which it’s possible to extend Envoy using Golang. The documentation references BPF. Cilium and BPF were both unfamiliar technologies to me and so began my learning. This part of the journey concludes with Weave Scope.
I learned that Cilium is a CNI implementation that may be used with Kubernetes. GKE defaults to (but is not limited to) Google’s own CNI implementation (link). From a subsequent brief exploration of CNI, I believe Digital Ocean’s managed Kubernetes service uses Cilium and Linode Kubernetes Engine uses Calico.
Tag: Weave Scope
Adventures around BPF
I think (!?) this interesting learning experience started with Envoy Go Extensions.
IIUC, Cilium contributed this mechanism (Envoy Go Extensions) by which it’s possible to extend Envoy using Golang. The documentation references BPF. Cilium and BPF were both unfamiliar technologies to me and so began my learning. This part of the journey concludes with Weave Scope.
I learned that Cilium is a CNI implementation that may be used with Kubernetes. GKE defaults to (but is not limited to) Google’s own CNI implementation (link). From a subsequent brief exploration of CNI, I believe Digital Ocean’s managed Kubernetes service uses Cilium and Linode Kubernetes Engine uses Calico.
Tag: Weaveworks
Adventures around BPF
I think (!?) this interesting learning experience started with Envoy Go Extensions.
IIUC, Cilium contributed this mechanism (Envoy Go Extensions) by which it’s possible to extend Envoy using Golang. The documentation references BPF. Cilium and BPF were both unfamiliar technologies to me and so began my learning. This part of the journey concludes with Weave Scope.
I learned that Cilium is a CNI implementation that may be used with Kubernetes. GKE defaults to (but is not limited to) Google’s own CNI implementation (link). From a subsequent brief exploration of CNI, I believe Digital Ocean’s managed Kubernetes service uses Cilium and Linode Kubernetes Engine uses Calico.
Tag: Kubernetes Engine
Google Cloud Platform (GCP) Exporter
Earlier this week I discussed a Linode Prometheus Exporter.
I added metrics for Digital Ocean’s Managed Kubernetes service to @metalmatze’s Digital Ocean Exporter.
This left metrics for Google Cloud Platform (GCP) which has, for many years, been my primary cloud platform. So, today I wrote a Prometheus Exporter for Google Cloud Platform.
All 3 of these exporters follow the template laid down by @metalmatze and, because each of these services has a well-written Golang SDK, it’s straightforward to implement an exporter for each of them.
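For anyone curious what that template boils down to, here’s a stripped-down Go sketch of the prometheus/client_golang collector pattern these exporters share (names, value and port are illustrative, not taken from the GCP exporter):
package main

import (
	"log"
	"net/http"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

type exampleCollector struct {
	up *prometheus.Desc
}

func (c *exampleCollector) Describe(ch chan<- *prometheus.Desc) {
	ch <- c.up
}

func (c *exampleCollector) Collect(ch chan<- prometheus.Metric) {
	// A real exporter would call the cloud provider's SDK here and emit
	// one metric per resource; this just reports a constant.
	ch <- prometheus.MustNewConstMetric(c.up, prometheus.GaugeValue, 1)
}

func main() {
	c := &exampleCollector{
		up: prometheus.NewDesc("example_up", "Was the last scrape successful.", nil, nil),
	}
	prometheus.MustRegister(c)

	http.Handle("/metrics", promhttp.Handler())
	log.Fatal(http.ListenAndServe(":9402", nil)) // hypothetical port
}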
Kubernetes Engine and Free Tier
Google Cloud Platform Free Tier appears (please verify this for yourself) to provide the ability to run a(n admittedly minuscule) Kubernetes cluster for free. So, why do this? It provides a definitive Kubernetes (Engine) experience on Google Cloud Platform that you may use for learning and testing.
With Kubernetes Engine, the master node(s) and the control plane are free.
Kubernetes (i.e. Compute Engine) nodes potentially incur charges including for the VM runtime and any attached storage, snapshots etc. However, charges for these resources can be partially covered by the Free Tier.
Tag: Ingress
NGINX Ingress
I’ve written a couple of deployment options (Google Compute Engine; Kubernetes) for an open-source project. The Kubernetes deployment provides NodePort and (TCP) LoadBalancer options and I’ve been trying (unsuccessfully) to add HTTPS load-balancing.
I should (!) try to deploy to Google Kubernetes Engine (GKE) but I’ve been using microk8s, Digital Ocean Managed Kubernetes and the Linode LKE Beta. Each of these requires an implementation of an Ingress controller. For GKE, GCP’s HTTP/S Load-balancer (GCLB) is used. But, for the other services, NGINX Ingress is a good option and so I’ve been exploring the NGINX Ingress (Controller) (link). This Ingress appears to work except that I’ve been unable to get it to work with mutual TLS between the (NGINX) proxy and my backend services.
Tag: NGINX
NGINX Ingress
I’ve written a couple of deployment options (Google Compute Engine; Kubernetes) for an open-source project. The Kubernetes deployment provides NodePort and (TCP) LoadBalancer options and I’ve been trying (unsuccessfully) to add HTTPS load-balancing.
I should (!) try to deploy to Google Kubernetes Engine (GKE) but I’ve been using microk8s, Digital Ocean Managed Kubernetes and the Linode LKE Beta. Each of these requires an implementation of an Ingress controller. For GKE, GCP’s HTTP/S Load-balancer (GCLB) is used. But, for the other services, NGINX Ingress is a good option and so I’ve been exploring the NGINX Ingress (Controller) (link). This Ingress appears to work except that I’ve been unable to get it to work with mutual TLS between the (NGINX) proxy and my backend services.
Tag: Cadvisor
Run cAdvisor when using Docker Compose
cAdvisor has long been a favorite monitoring tool of mine. I’m using Docker Compose for local testing and have begun including a cAdvisor in my docker-compose.yaml files.
cadvisor:
restart: always
image: google/cadvisor:${CADVISOR_VERSION}
container_name: cadvisor
# command:
# - --prometheus_endpoint="/metrics" # Default
volumes:
- "/:/rootfs:ro"
- "/var/run:/var/run:rw"
- "/sys:/sys:ro"
- "/var/snap/docker/current:/var/lib/docker:ro" #- "/var/lib/docker/:/var/lib/docker:ro"
expose:
- "8080"
ports:
- 8080:8080
I’d not realized until recently that cAdvisor also surfaces a Prometheus metrics endpoint and so, if you do follow this path and you’re also using Prometheus, don’t forget to add cAdvisor to your Prometheus targets.
Tag: Docker-Compose
Run cAdvisor when using Docker Compose
cAdvisor has long been a favorite monitoring tool of mine. I’m using Docker Compose for local testing and have begun including a cAdvisor in my docker-compose.yaml files.
cadvisor:
restart: always
image: google/cadvisor:${CADVISOR_VERSION}
container_name: cadvisor
# command:
# - --prometheus_endpoint="/metrics" # Default
volumes:
- "/:/rootfs:ro"
- "/var/run:/var/run:rw"
- "/sys:/sys:ro"
- "/var/snap/docker/current:/var/lib/docker:ro" #- "/var/lib/docker/:/var/lib/docker:ro"
expose:
- "8080"
ports:
- 8080:8080
I’d not realized until recently that cAdvisor also surfaces a Prometheus metrics endpoint and so, if you do follow this path and you’re also using Prometheus, don’t forget to add cAdvisor to your Prometheus targets.
Tag: Vs-Code
Visual Studio Code: gopls and YAML
The Go team is developing a Language Server Protocol (LSP) implementation called gopls. Visual Studio Code (and others) support LSP. Other languages (e.g. Python) have LSP implementations too. I’ve been using gopls for some time. It works (mostly) very well and replaces multiple, independent tools with two (gopls and delve).
My Visual Studio Code settings that include gopls are:
"go.autocompleteUnimportedPackages": true,
"go.useLanguageServer": true,
"[go]": {
"editor.snippetSuggestions": "none",
"editor.formatOnSave": true,
"editor.codeActionsOnSave": {
"source.organizeImports": true
}
},
"gopls": {
"usePlaceholders": true,
"wantCompletionDocumentation": true,
},
"go.toolsEnvVars": {
},
"go.languageServerFlags": [
"-rpc.trace",
"serve",
"--debug=localhost:6060",
],
"go.enableCodeLens": {
"references": true,
"runtest": true
},
One of the Google engineers working on gopls gave a comprehensive and interesting overview of the tool at GopherCon 2019.
Tag: Pypi-Transparency
pypi-transparency
The goal of pypi-transparency is very similar to the underlying motivation for the Golang team’s Checksum Database (also built with Trillian).
Even though PyPi provides hashes of the content of packages it hosts, the developer must trust that PyPi’s data is consistent. One ambition with pypi-transparency is to provide a companion, tamperproof log of PyPi package files in order to provide a double-check of these hashes.
It is important to understand what this does (and does not) provide. There’s no validation of a package’s content. The only calculation is that, on first observation, a SHA-256 hash of the package’s content is computed and recorded. If the package is subsequently altered, it’s very probable that the hash will change and this provides a signal to the user that the package’s contents have changed. Because pypi-transparency uses a tamperproof log, it’s very difficult to alter the recorded hash to hide such a change. Corollary: pypi-transparency will record the hashes of packages that include malicious code.