Below you will find pages that utilize the taxonomy term “Metrics”
Google Metric Diagnostics and Metric Data Ingested
I’ve been on an efficiency drive with Cloud Logging and Cloud Monitoring.
With regard to Cloud Logging, I’m contemplating (!) eliminating almost all log storage. As it is, I’ve buzz-cut log storage with a `_Default` sink that has comprehensive sets of `NOT LOG_ID(X)` inclusion and exclusion filters. As I was doing so, I began to wonder why I need to pay for the storage of so much logging. There’s the comfort of knowing that everything you may ever need is being logged (at least for 30 days), but there’s also the cost that entails. I use logs exclusively for debugging, which got me thinking: couldn’t I just capture logs when I’m debugging (rather than all the time)? I’ve not taken that leap yet, but I’m noodling on it.
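For reference, here’s a sketch of the kind of buzz cut I mean, applied with `gcloud` (the log IDs are placeholders, not my actual exclusions):

```sh
# Keep everything except some hypothetically noisy log IDs in the _Default sink.
gcloud logging sinks update _Default \
--log-filter='NOT LOG_ID("stdout") AND NOT LOG_ID("stderr")'
```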
Kubernetes metrics, metrics everywhere
I’ve been tinkering with ways to “unit-test” my assumptions when using cloud platforms. I recently wrote about good posts by Google describing how to achieve cost savings with Cloud Monitoring and Cloud Logging:
- How to identify and reduce costs of your Google Cloud observability in Cloud Monitoring
- Cloud Logging pricing for Cloud Admins: How to approach it & save cost
With Cloud Monitoring, I’ve restricted the `prometheus.googleapis.com` metrics that are being ingested but realized I wanted to track the number of Pods (and Containers) deployed to a GKE cluster.
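As a sketch of the kind of tracking I mean, assuming kube-state-metrics is running in the cluster and its series are among those being ingested, PromQL along these lines would do:

```promql
count(kube_pod_info)            # Pods in the cluster
count(kube_pod_container_info)  # Containers in the cluster
```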
Prometheus Exporters for fly.io and Vultr
I’ve been on a roll building utilities this week. I developed a Service Health dashboard for my “thing”, a Prometheus Exporter for Fly.io and, today, a Prometheus Exporter for Vultr. This is motivated by the fear that I will forget a deployed Cloud resource and incur a horrible bill.
I’ve now written several Prometheus Exporters for cloud platforms:
- Prometheus Exporter for GCP
- Prometheus Exporter for Linode
- Prometheus Exporter for Fly.io
- Prometheus Exporter for Vultr
Each of them monitors resource deployments and produces resource-count metrics that can be scraped by Prometheus and alerted on with Alertmanager. I have Alertmanager configured to send notifications to Pushover. Last week, I wrote an integration that sends Google Cloud Monitoring notifications to Pushover too.
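As a sketch of the common shape of these exporters (the metric name, label, and port below are placeholders, not any exporter’s actual metrics):

```go
package main

import (
	"log"
	"net/http"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promauto"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

// Hypothetical resource-count gauge; each exporter publishes one per resource type.
var instances = promauto.NewGaugeVec(prometheus.GaugeOpts{
	Name: "cloudprovider_instances",
	Help: "Number of deployed instances.",
}, []string{"region"})

func main() {
	// A real exporter would poll the provider's API on a schedule;
	// a static value stands in here.
	instances.WithLabelValues("lon").Set(2)

	http.Handle("/metrics", promhttp.Handler())
	log.Fatal(http.ListenAndServe(":9402", nil))
}
```

A Prometheus alerting rule can then fire (and Alertmanager route to Pushover) whenever a count is unexpectedly non-zero, which is exactly the forgotten-resource case.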
Multiplexing gRPC and HTTP (Prometheus) endpoints with Cloud Run
Google Cloud Run is useful, but each service is limited to exposing a single port. This caused me problems with a gRPC service that also serves (non-gRPC) Prometheus metrics because, customarily, you would serve gRPC on one port and the Prometheus metrics on another.
Fortunately, cmux provides a solution: it multiplexes both services (gRPC and HTTP) on a single port!
TL;DR See the cmux Limitations and use:

```go
grpcl := m.MatchWithWriters(
	cmux.HTTP2MatchHeaderFieldSendSettings("content-type", "application/grpc"))
```
Extending the example from the cmux repo:
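Here’s a minimal, self-contained sketch of how that might look (the port is hardcoded for brevity, and the gRPC service registration is hypothetical):

```go
package main

import (
	"log"
	"net"
	"net/http"

	"github.com/prometheus/client_golang/prometheus/promhttp"
	"github.com/soheilhy/cmux"
	"google.golang.org/grpc"
)

func main() {
	// Cloud Run provides the single serving port via $PORT; hardcoded for brevity.
	l, err := net.Listen("tcp", ":8080")
	if err != nil {
		log.Fatal(err)
	}

	m := cmux.New(l)

	// Match gRPC connections by their content-type header; the SendSettings
	// variant also sends the server's SETTINGS frame while matching,
	// which gRPC clients wait for (see the cmux Limitations).
	grpcL := m.MatchWithWriters(
		cmux.HTTP2MatchHeaderFieldSendSettings("content-type", "application/grpc"))
	// Everything else (e.g. Prometheus scraping /metrics) is plain HTTP/1.x.
	httpL := m.Match(cmux.HTTP1Fast())

	grpcServer := grpc.NewServer()
	// pb.RegisterSomethingServer(grpcServer, &server{}) // hypothetical registration

	mux := http.NewServeMux()
	mux.Handle("/metrics", promhttp.Handler())
	httpServer := &http.Server{Handler: mux}

	go func() { _ = grpcServer.Serve(grpcL) }()
	go func() { _ = httpServer.Serve(httpL) }()

	log.Fatal(m.Serve())
}
```

`MatchWithWriters` (rather than plain `Match`) matters here because gRPC clients block until they receive the server’s SETTINGS frame before sending requests.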