Vultr CLI and JSON output
I’ve begun exploring Vultr after the company announced a managed Kubernetes offering, Vultr Kubernetes Engine (VKE). In my brief experience, it’s a decent platform and its CLI, vultr-cli, is mostly (!) good. One limitation is that command output is text-formatted, which makes it challenging to parse when scripting.
NOTE The Vultr developers have a branch, rewrite, that includes a solution to this problem.
Example
ID 12345678-90ab-cdef-1234-567890abcdef
LABEL test
DATE CREATED 2022-01-01T00:00:00+00:00
CLUSTER SUBNET 255.255.255.255/16
SERVICE SUBNET 255.255.255.255/12
IP 255.255.255.255
ENDPOINT 12345678-90ab-cdef-1234-567890abcdef.vultr-k8s.com
VERSION v1.23.5+3
REGION mars
STATUS pending
NODE POOLS
ID 12345678-90ab-cdef-1234-567890abcdef
DATE CREATED 2022-01-01T00:00:00+00:00
DATE UPDATED 2022-01-01T00:00:00+00:00
LABEL nodepool
TAG foo
PLAN vc2-1c-2gb
STATUS pending
NODE QUANTITY 1
AUTO SCALER false
MIN NODES 1
MAX NODES 1
NODES
ID DATE CREATED LABEL STATUS
12345678-
Until that’s available, I’m being lazy and writing very simple bash scripts to parse vultr-cli command output into JSON. The repo is vultr-cli-format.
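Here’s a minimal sketch of the idea (not the actual vultr-cli-format code): treat each line of output as an UPPERCASE key followed by a value, and emit a flat JSON object. It assumes GNU awk (for match() with a capture array), assumes values never begin with an uppercase letter, and ignores the nested NODE POOLS table.

```bash
#!/usr/bin/env bash
# Sketch: convert "KEY value" lines of vultr-cli output into flat JSON.
# Assumptions: GNU awk; values don't start with an uppercase letter;
# no JSON escaping of values (fine for IDs, dates, subnets, etc.).
vultr-cli kubernetes list | gawk '
  BEGIN { printf "{" }
  match($0, /^([A-Z][A-Z ]*[A-Z])[[:space:]]+([^A-Z].*)$/, m) {
    key = tolower(m[1]); gsub(/ /, "_", key)   # "DATE CREATED" -> "date_created"
    printf "%s\"%s\":\"%s\"", sep, key, m[2]
    sep = ","
  }
  END { print "}" }
'
```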
Automating HackMD documents
I was introduced to HackMD while working on an open-source project. It’s a collaborative editing tool for Markdown documents and there’s an API. I wanted to be able to programmatically edit one of my documents with a daily update. The API is easy to use and my only challenge was futzing with escape characters in bash strings representing the document’s Markdown content.
You’ll need an account with HackMD and to create an API token (Create API Token) that I’ll refer to as TOKEN.
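One way to sidestep the escaping problem is to let jq build the JSON body. Here’s a minimal sketch, assuming the v1 update-note endpoint (PATCH /v1/notes/{noteId}); NOTE_ID is a placeholder for your document’s ID:

```bash
#!/usr/bin/env bash
# Sketch: update a HackMD note's Markdown content via the API.
# NOTE_ID is hypothetical; TOKEN is the API token created above.
# jq handles the JSON escaping (newlines, quotes) in the content.
NOTE_ID="[[YOUR-NOTE-ID]]"
CONTENT=$'# Daily update\n\nHello, HackMD!'

jq --null-input --arg content "${CONTENT}" '{content: $content}' \
| curl \
  --request PATCH \
  --header "Authorization: Bearer ${TOKEN}" \
  --header "Content-Type: application/json" \
  --data @- \
  https://api.hackmd.io/v1/notes/${NOTE_ID}
```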
Prometheus Exporters for fly.io and Vultr
I’ve been on a roll building utilities this week. I developed a Service Health dashboard for my “thing”, a Prometheus Exporter for Fly.io and, today, a Prometheus Exporter for Vultr. This is motivated by the fear that I will forget a deployed cloud resource and incur a horrible bill.
I’ve now written several Prometheus Exporters for cloud platforms:
- Prometheus Exporter for GCP
- Prometheus Exporter for Linode
- Prometheus Exporter for Fly.io
- Prometheus Exporter for Vultr
Each of them monitors resource deployments and produces resource-count metrics that can be scraped by Prometheus and alerted on with Alertmanager. I have Alertmanager configured to send notifications to Pushover. Last week, I wrote an integration that sends Google Cloud Monitoring notifications to Pushover too.
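For illustration, here’s the kind of Prometheus alerting rule these exporters enable. The metric name (vultr_kubernetes_cluster_up) is hypothetical; each exporter publishes its own resource-count metrics.

```bash
#!/usr/bin/env bash
# Sketch: a rule that fires when resources remain deployed for hours,
# i.e. when I've probably forgotten to delete something.
# The metric name below is hypothetical.
cat <<'EOF' > rules.yml
groups:
  - name: vultr
    rules:
      - alert: VultrResourcesDeployed
        expr: vultr_kubernetes_cluster_up > 0
        for: 6h
        annotations:
          summary: Vultr Kubernetes clusters are still deployed
EOF
```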
Using Google Monitoring Alerting to send Pushover notifications
Artifacts
- GitHub: go-gcp-pushover-notificationchannel
- Image: ghcr.io/dazwilkin/go-gcp-pushover-notificationchannel:220515
Pushover
Logging in to your Pushover account, you will be presented with a summary|dashboard page that includes Your User Key. Copy the value of this key into a variable called PUSHOVER_USER.
Create New Application|API Token
Pushover API has a Pushing Messages method. The documentation describes the format of the HTTP request: it must be a POST using TLS (https://) to https://api.pushover.net/1/messages.json. The content-type should be application/json. In the JSON body of the message, we must include token (PUSHOVER_TOKEN), user (PUSHOVER_USER), device (we’ll use cloud-monitoring), a title and a message.
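Putting that together, here’s a minimal sketch using curl and jq (the title and message values are illustrative):

```bash
#!/usr/bin/env bash
# Sketch: push a message using the fields described above.
# PUSHOVER_TOKEN: the application's API token; PUSHOVER_USER: Your User Key.
jq --null-input \
  --arg token "${PUSHOVER_TOKEN}" \
  --arg user "${PUSHOVER_USER}" \
  '{
    token: $token,
    user: $user,
    device: "cloud-monitoring",
    title: "Test",
    message: "Hello from Cloud Monitoring"
  }' \
| curl \
  --request POST \
  --header "Content-Type: application/json" \
  --data @- \
  https://api.pushover.net/1/messages.json
```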
Cloud Run custom domain mappings
I have several Cloud Run services that I want to map to a domain.
During development, I create a new Google Cloud Platform (GCP) project each day into which everything is deployed. This means that, every day, the Cloud Run services have new, non-inferable (to me) URLs. I thought this would be tedious to manage because:
- My DNS service isn’t programmable (I know!)
- Cloud Run services have non-inferable (by me) URLs
i.e. I thought I’d have to manually update the DNS entries each day.
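Cloud Run’s custom domain mappings avoid this: the DNS entry stays stable while the mapping points at whichever service is current. A minimal sketch (the service, domain and region values are illustrative):

```bash
#!/usr/bin/env bash
# Sketch: map a Cloud Run service to a custom domain. The DNS side is a
# one-time CNAME (to ghs.googlehosted.com) rather than a daily update.
gcloud beta run domain-mappings create \
--service=my-service \
--domain=service.example.com \
--region=us-central1 \
--project=${PROJECT}
```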
Automating Scheduled Firestore Exports
For my “thing”, I use Firestore to persist state. I like Firestore a lot and, having been around Google for almost (!) a decade, I much prefer it to Datastore.
Firestore has a managed export|import service and I use this to back up Firestore collections|documents.
I’d been doing backups manually (using gcloud) and decided today to take the plunge and use Cloud Scheduler for the first time. I’d been reluctant to do this until now because I’d incorrectly assumed that I’d need to write a wrapping service to invoke the export.
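It turns out Cloud Scheduler can call the Firestore export endpoint directly. Here’s a minimal sketch (PROJECT, BUCKET and the service account are placeholders; the service account needs permission to export and to write to the bucket):

```bash
#!/usr/bin/env bash
# Sketch: schedule a daily Firestore export with no wrapping service.
# The job POSTs to the exportDocuments method using an OAuth token.
gcloud scheduler jobs create http firestore-export \
--schedule="0 3 * * *" \
--uri="https://firestore.googleapis.com/v1/projects/${PROJECT}/databases/(default):exportDocuments" \
--message-body="{\"outputUriPrefix\":\"gs://${BUCKET}\"}" \
--oauth-service-account-email=${ACCOUNT}@${PROJECT}.iam.gserviceaccount.com \
--project=${PROJECT}
```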
Playing with GitHub Container Registry REST API
I’ve a day to catch up on blogging. I’m building a “thing” and getting it near the finish line consumes my time, which has meant that I’m not originating anything particularly new. However, there are a couple of tricks in my deployment process that may be of interest to others.
I’ve been a long-term user of Google’s Cloud Build and like the simplicity (everything’s a container) a lot! Because I’m using GitHub repos, I’ve been using GitHub Actions to (re)build containers on pushes and GitHub Container Registry (GHCR) to store the results. I know that Google provides analogs for GitHub repos and (forces me to use) Artifact Registry (to deploy my Cloud Run services) but, even though I dislike GitHub Actions, it’s really easy to do everything in one place.
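One of those tricks is inspecting what’s in GHCR via the GitHub Packages REST API. A minimal sketch (the package name is a placeholder; TOKEN needs the read:packages scope):

```bash
#!/usr/bin/env bash
# Sketch: list the tags of a GHCR container package's versions.
PACKAGE="my-container"
curl \
--silent \
--header "Authorization: Bearer ${TOKEN}" \
--header "Accept: application/vnd.github+json" \
https://api.github.com/user/packages/container/${PACKAGE}/versions \
| jq -r '.[].metadata.container.tags[]'
```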
Using Google's Public Certificate Authority with Golang autocert
Last year, I wrote about using Automatic Certs w/ Golang gRPC service on Compute Engine. That solution uses ACME with (the wonderful) Let’s Encrypt. Google is offering a private preview of Automate Public Certificates Lifecycle Management via RFC 8555 (ACME) and, because I’m using Google Cloud Platform extensively to build a “thing” and I think it would be useful to have a backup to Let’s Encrypt, I thought I’d give the solution a try. You’ll need to sign up for the private preview for what follows to work.
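A quick way to confirm you have access is to fetch the service’s ACME directory. The URL below is the one I understand Google’s Public CA to use; treat it as an assumption and confirm it against the preview documentation:

```bash
#!/usr/bin/env bash
# Sketch: fetch the ACME directory to confirm the endpoint is reachable.
# The directory URL is an assumption; check the preview docs.
curl --silent https://dv.acme-v02.api.pki.goog/directory | jq .
```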
Prometheus HTTP Service Discovery of Cloud Run services
Some time ago, I wrote about using Prometheus Service Discovery w/ Consul for Cloud Run and also Scraping metrics exposed by Google Cloud Run services that require authentication. Both solutions remain viable but they didn’t address another use case for Prometheus and Cloud Run services that I have with a “thing” that I’ve been building.
In this scenario, I want to:
- Configure Prometheus to scrape Cloud Run service metrics
- Discover Cloud Run services dynamically
- Authenticate to Cloud Run using Firebase Auth ID tokens
These requirements – and one other – present several challenges.
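The heart of the approach is Prometheus HTTP service discovery (http_sd_configs), where Prometheus polls an HTTP endpoint that returns the current target list. A minimal sketch of the JSON such an endpoint returns (the Cloud Run hostname is illustrative):

```bash
#!/usr/bin/env bash
# Sketch: the JSON shape an HTTP SD endpoint must return to Prometheus.
cat <<'EOF'
[
  {
    "targets": ["my-service-xxxxxxxxxx-uc.a.run.app:443"],
    "labels": {
      "__scheme__": "https"
    }
  }
]
EOF
```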
Automatic Certs w/ Golang gRPC service on Compute Engine
I needed to deploy a healthcheck-enabled, TLS-enabled gRPC service. Fortunately, most (all?) of the SDKs include a health-checking implementation, e.g. Golang has grpc-go/health.
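For checking such a service from the outside, the grpc-ecosystem’s grpc_health_probe tool speaks the standard gRPC health protocol. A minimal sketch (the address is illustrative):

```bash
#!/usr/bin/env bash
# Sketch: probe the standard gRPC health service over TLS.
grpc_health_probe -addr=grpc.example.com:443 -tls
```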
I learned in my travels that:
- DigitalOcean App Platform does not work with TLS-based gRPC apps
- Fly has a regression that breaks gRPC
So, I resorted to Google Cloud Platform (GCP). Although Cloud Run would be well-suited to running the gRPC app, it uses a proxy|sidecar to provision a cert for the app and I wanted to be able to easily use a custom domain and give myself a somewhat general-purpose solution.