Google Cloud Platform (GCP) Exporter
Earlier this week I discussed a Linode Prometheus Exporter.
I added metrics for Digital Ocean’s Managed Kubernetes service to @metalmatze’s Digital Ocean Exporter.
That left metrics for Google Cloud Platform (GCP), which has, for many years, been my primary cloud platform. So, today I wrote a Prometheus Exporter for Google Cloud Platform.
All three of these exporters follow the template laid down by @metalmatze and, because each of these services has a well-written Golang SDK, it’s straightforward to implement an exporter for each of them.
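For illustration, here’s a minimal sketch of the pattern the exporters share: a type satisfying Prometheus’ Collector interface that queries the provider’s API on each scrape. The metric name and the countInstances placeholder are mine, not taken from any of the exporters.

package exporter

import (
	"github.com/prometheus/client_golang/prometheus"
)

// countInstances is a placeholder; a real exporter would call the
// provider's Golang SDK (Linode, Digital Ocean, GCP) here.
func countInstances() int {
	return 0
}

// InstanceCollector reports an aggregate count of instances.
type InstanceCollector struct {
	Instances *prometheus.Desc
}

func NewInstanceCollector() *InstanceCollector {
	return &InstanceCollector{
		Instances: prometheus.NewDesc(
			"cloud_instances_count", // hypothetical metric name
			"Number of instances currently provisioned",
			nil,
			nil,
		),
	}
}

// Describe sends each metric's descriptor to Prometheus.
func (c *InstanceCollector) Describe(ch chan<- *prometheus.Desc) {
	ch <- c.Instances
}

// Collect is invoked on each scrape and emits the current values.
func (c *InstanceCollector) Collect(ch chan<- prometheus.Metric) {
	ch <- prometheus.MustNewConstMetric(c.Instances, prometheus.GaugeValue, float64(countInstances()))
}

The collector is then registered (prometheus.MustRegister) and exposed over HTTP with promhttp.Handler(), much as the template does.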
Prometheus AlertManager
Yesterday I discussed a Linode Prometheus Exporter and teased the use of Prometheus AlertManager. Success!
Configure
The process is straightforward, although I found the Prometheus (config) documentation slightly unwieldy to navigate :-(
The overall process is documented.
Here are the steps I took:
Configure Prometheus
Added the following to prometheus.yml:
rule_files:
  - "/etc/alertmanager/rules/linode.yml"

alerting:
  alertmanagers:
    - scheme: http
      static_configs:
        - targets:
            - "alertmanager:9093"
Rules must be defined in separate rules files. See below for the content of linode.yml and an explanation.
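As an illustration of the shape of such a rules file (the metric name linode_instance_count and the threshold are hypothetical, not necessarily what the exporter exposes):

groups:
  - name: linode
    rules:
      - alert: LinodeInstancesRunning
        # Fires if any Linode instances have been running for 15 minutes
        expr: linode_instance_count > 0
        for: 15m
        labels:
          severity: warning
        annotations:
          summary: "Linode instances are running"
          description: "{{ $value }} Linode instance(s) are currently provisioned."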
Linode Prometheus Exporter
I enjoy using Prometheus and have toyed around with it for some time, particularly in combination with Kubernetes. I signed up with Linode [referral], compelled by the addition of a managed Kubernetes service called Linode Kubernetes Engine (LKE). I have a nagging anxiety that I’ll inadvertently leave resources running (unused) on a cloud platform. Rather than repeatedly refreshing the relevant billing page, it struck me that Prometheus may (not yet proven) help.
The hypothesis is that a cloud-specific Prometheus exporter reporting aggregate usage of e.g. Linodes (instances), NodeBalancers, Kubernetes clusters etc. could form the basis of an alerting mechanism using Prometheus’ alerting.
NGINX Ingress
I’ve written a couple of deployment options (Google Compute Engine; Kubernetes) for an open-source project. The Kubernetes deployment provides NodePort and (TCP) LoadBalancer options, and I’ve been trying (unsuccessfully) to add HTTPS load-balancing.
I should (!) try deploying to Google Kubernetes Engine (GKE) but I’ve been using microk8s, Digital Ocean Managed Kubernetes and the Linode LKE Beta. Each of these requires an implementation of an Ingress controller. For GKE, GCP’s HTTP/S Load-balancer (GCLB) is used. But, for the other services, NGINX Ingress is a good option and so I’ve been exploring the NGINX Ingress (Controller) [link]. This Ingress appears to work, except that I’ve been unable to get it to work with mutual TLS between the (NGINX) proxy and my backend services.
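For reference, this is the direction I’ve been exploring, a sketch only: ingress-nginx documents proxy-ssl-* annotations for re-encrypting and presenting a client certificate to the backend, but the host, service name and secret below are hypothetical and I haven’t yet got this working.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: backend
  annotations:
    # Re-encrypt traffic from the NGINX proxy to the backend
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    # Client certificate (and CA) that NGINX presents to the backend
    nginx.ingress.kubernetes.io/proxy-ssl-secret: "default/backend-client-tls"
    nginx.ingress.kubernetes.io/proxy-ssl-verify: "on"
spec:
  rules:
    - host: backend.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: backend
                port:
                  number: 443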
PyPi Transparency Client (Rust)
I’ve finally been able to hack my way through to a working Rust gRPC client (for PyPi Transparency).
It’s not very good: poorly structured, hacky etc., but it serves the purpose of giving me a foothold into Rust development so that I can evolve it as I learn the language and its practices.
There are several Rust crates (SDKs) for gRPC. There’s no sanctioned SDK for Rust on grpc.io.
I chose stepancheg’s grpc-rust because it’s a pure Rust implementation (not built atop the C implementation).
PyPi Transparency
I’ve been noodling around with another Trillian personality.
Another in a theme that interests me: providing tamperproof logs for the packages in popular package-management registries.
The Golang team recently announced Go Module Mirror, which is built atop Trillian. It seems to me that all the package registries (Go Modules, npm, Maven, NuGet etc.) would benefit from tamperproof logs hosted by a trusted 3rd-party.
As you may have guessed, PyPi Transparency is a log for PyPi packages. PyPi is comprehensive, definitive and trusted but, as with Go Module Mirror, it doesn’t hurt to provide a backup of some of its data. In the case of this solution, Trillian hosts a log of self-calculated SHA-256 hashes for Python packages that are added to it.
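As a rough sketch of the hash that gets logged (the helper name and CLI wrapper are mine, and the Trillian submission itself is omitted), computing the SHA-256 digest of a package archive in Go looks like this:

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"log"
	"os"
)

// hashPackage returns the hex-encoded SHA-256 digest of a package archive,
// e.g. a wheel or sdist downloaded from PyPi.
func hashPackage(path string) (string, error) {
	f, err := os.Open(path)
	if err != nil {
		return "", err
	}
	defer f.Close()

	h := sha256.New()
	if _, err := io.Copy(h, f); err != nil {
		return "", err
	}
	return hex.EncodeToString(h.Sum(nil)), nil
}

func main() {
	if len(os.Args) != 2 {
		log.Fatalf("usage: %s <package-archive>", os.Args[0])
	}
	digest, err := hashPackage(os.Args[1])
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(digest)
}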
Run cAdvisor when using Docker Compose
cAdvisor has long been a favorite monitoring tool of mine. I’m using Docker Compose for local testing and have begun including a cAdvisor service in my docker-compose.yaml files.
cadvisor:
  restart: always
  image: google/cadvisor:${CADVISOR_VERSION}
  container_name: cadvisor
  # command:
  # - --prometheus_endpoint="/metrics" # Default
  volumes:
    - "/:/rootfs:ro"
    - "/var/run:/var/run:rw"
    - "/sys:/sys:ro"
    - "/var/snap/docker/current:/var/lib/docker:ro" # - "/var/lib/docker/:/var/lib/docker:ro"
  expose:
    - "8080"
  ports:
    - 8080:8080
I’d not realized until recently that cAdvisor also surfaces a Prometheus metrics endpoint. So, if you do follow this path and you’re also using Prometheus, don’t forget to add cAdvisor to your Prometheus targets.
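Assuming Prometheus runs in the same Compose project (so the cadvisor service name resolves) and cAdvisor’s default /metrics path, the scrape configuration is just another static target:

scrape_configs:
  - job_name: "cadvisor"
    static_configs:
      - targets:
          - "cadvisor:8080"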
Kubernetes Engine and Free Tier
Google Cloud Platform Free Tier appears (please verify this for yourself) to provide the ability to run a(n admittedly minuscule) Kubernetes cluster for free. So, why do this? It provides a definitive Kubernetes (Engine) experience on Google Cloud Platform that you may use for learning and testing.
With Kubernetes Engine, the master node(s) and the control plane are free.
Kubernetes (i.e. Compute Engine) nodes potentially incur charges, including for the VM runtime and any attached storage, snapshots etc. However, charges for these resources can be partially covered by the Free Tier.
Cloud Functions Simple(st) HTTP Multi-host Proxy
I tweaked yesterday’s solution so that it randomly selects one of the several hosts with which it’s configured.
package proxy

import (
	"log"
	"math/rand"
	"net/http"
	"net/url"
	"os"
	"strings"
	"time"
)

// robin configures Handler (defined in yesterday's solution) to forward each
// request to one of several hosts, chosen at random.
func robin() {
	hostsList := os.Getenv("PROXY_HOST")
	if hostsList == "" {
		log.Fatal("'PROXY_HOST' environment variable should contain comma-separated list of hosts")
	}
	// Comma-separated list of hosts
	hosts := strings.Split(hostsList, ",")
	urls := make([]*url.URL, len(hosts))
	for i, host := range hosts {
		// Endpoint and its URL() method come from yesterday's solution
		var origin = Endpoint{
			Host: host,
			Port: os.Getenv("PROXY_PORT"),
		}
		url, err := origin.URL()
		if err != nil {
			log.Fatal(err)
		}
		urls[i] = url
	}
	s := rand.NewSource(time.Now().UnixNano())
	q := rand.New(s)
	Handler = func(w http.ResponseWriter, r *http.Request) {
		// Pick one of the URLs at random
		url := urls[q.Int31n(int32(len(urls)))]
		log.Printf("[Handler] Forwarding: %s", url.String())
		// Forward to it (reverseproxy is defined in yesterday's solution)
		reverseproxy(url, w, r)
	}
}
This requires a minor tweak to the deployment to escape the commas within the PROXY_HOST string so that gcloud doesn’t treat them as delimiters:
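A sketch of what that deployment might look like, using gcloud’s delimiter-escaping syntax (see gcloud topic escaping); the function name, runtime and hosts below are illustrative:

gcloud functions deploy proxy \
  --trigger-http \
  --runtime=go113 \
  --set-env-vars=^:^PROXY_HOST=host-1.example.com,host-2.example.com:PROXY_PORT=443

The leading ^:^ changes the key/value delimiter from a comma to a colon, so the commas inside PROXY_HOST pass through untouched.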
Cloud Functions Simple(st) HTTP Proxy
I’m investigating the use of LetsEncrypt for gRPC services. I found this straightforward post by Scott Devoid and am going to try this approach.
Before I can do that, I need to be able to publish services (make them Internet-accessible) and would like to try to continue to use GCP for free.
Some time ago, I wrote about using the excellent Microk8s on GCP. Using an f1-micro, I’m hoping (!) to stay within the Compute Engine free tier. I’ll also try to be diligent and delete the instance when it’s not needed. This gives me a runtime platform and I can expose services on the Instance’s (Node)Ports, but I’d prefer not to be billed for a simple proxy.