Kubernetes Engine and Free Tier
Google Cloud Platform Free Tier appears (please verify this for yourself) to provide the ability to run an (admittedly minuscule) Kubernetes cluster for free. So, why do this? It provides a definitive Kubernetes (Engine) experience on Google Cloud Platform that you may use for learning and testing.
With Kubernetes Engine, the master node(s) and the control plane are free.
Kubernetes (i.e. Compute Engine) nodes potentially incur charges, including for the VM runtime and any attached storage, snapshots etc. However, charges for these resources can be partially covered by the Free Tier.
As of writing, Compute Engine under the Free Tier provides:
- 1 f1-micro instance per month (US regions only — excluding Northern Virginia [us-east4])
- 30 GB-months HDD
- 5 GB-months snapshot in select regions
- 1 GB network egress from North America to all region destinations per month (excluding China and Australia)
Curiously, testing confirms that, to be free, this must be a “non-preemptible” instance. The cheaper “preemptible” instances incur charges. Go figure!
The documentation says: “Your Always Free f1-micro instance limit is by time, not by instance. Each month, eligible use of all of your f1-micro instances are free until you have used a number of hours equal to the total hours in the current month. Usage calculations are combined across the supported regions.”
So, you may use 1 non-preemptible f1-micro instance 24x7 or (!) you may use, e.g., 3 f1-micro instances each for 8 hours/day (24/3x7). Or any (!?) multiple thereof, e.g. 24 instances for 1 hour/day (24/24x7).
Similarly, the 30 GB-months of HDD means you may use 3 instances, each with a (paltry) 10 GB HDD, and remain within the allowance.
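As a rough sanity check, here is a minimal bit of shell arithmetic for splitting the month’s free instance-hours across a number of nodes; the node count and the reliance on GNU date are my assumptions, not anything in the Free Tier documentation:
# Sketch only: free f1-micro usage equals the number of hours in the current month
NODES=3 # assumed node count
DAYS=$(date -d "$(date +%Y-%m-01) +1 month -1 day" +%d) # last day of this month (GNU date)
echo "Each of ${NODES} nodes gets roughly $(( DAYS * 24 / NODES )) free hours this month"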
Once again, please do not take my word for this. Try it for yourself and verify.
You will incur charges for TCP Load-balancing (--type=LoadBalancer) and HTTP/S Load-balancing, including load balancers provisioned by Ingress objects.
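For example, a Service of type LoadBalancer provisions a (billable) TCP load balancer; the deployment name my-app below is purely illustrative:
kubectl expose deployment my-app --port=80 --target-port=8080 --type=LoadBalancer
If you want to avoid the charge while testing, a NodePort or ClusterIP Service (accessed with kubectl port-forward) does not provision a load balancer.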
So, you can provision a 3-node Kubernetes cluster in us-west1 (3 zones) using a command of the form:
gcloud beta container clusters create ${CLUSTER} \
--project=${PROJECT} \
--region=us-west1 \
--no-enable-basic-auth \
--no-issue-client-certificate \
--release-channel=rapid \
--machine-type=f1-micro \
--image-type=COS_CONTAINERD \
--disk-type=pd-standard \
--disk-size=10 \
--metadata=disable-legacy-endpoints=true \
--num-nodes=1 \
--enable-stackdriver-kubernetes \
--enable-ip-alias \
--enable-autoupgrade \
--enable-autorepair \
--shielded-secure-boot
You may append --async to run this command asynchronously.
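Once the cluster is ready, you will probably want kubectl configured against it; something like:
gcloud container clusters get-credentials ${CLUSTER} \
--project=${PROJECT} \
--region=us-west1
kubectl get nodes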
Because we’re trying to stay within the Free Tier, it’s very important that you delete this cluster before you exhaust the month’s free quota. Unfortunately, there’s currently (!?) no systematic way to have GCP defer the deletion of a resource to a future point in time. But, with some diligence in ensuring you use the correct command (and hoping that you don’t lose power, Internet access etc.), you can create a compensating gcloud beta container clusters delete command and schedule it using Linux’s at:
echo "gcloud beta container clusters delete ${CLUSTER} --project=${PROJECT} --region=us-west1 --quiet" \
| at -f /dev/stdin now + 2 hour
NB Because at runs commands under its own environment, you need to be careful when referencing environment variables; in the example above, ${CLUSTER} and ${PROJECT} are expanded by your current shell (inside the double quotes) before the job is queued.
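You can confirm exactly what at queued with atq and at -c; the job number 1 below is illustrative:
atq # list pending jobs
at -c 1 | tail -n 5 # the queued command appears near the end of the job script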
NB It doesn’t make a huge difference but you can combine (&&) the create and delete commands.
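A combined version might look something like this (flags abbreviated here; carry over the full set from the create command above, and treat the 2-hour delay as illustrative):
# Create the cluster, then queue its deletion only if the create succeeds
gcloud beta container clusters create ${CLUSTER} \
--project=${PROJECT} \
--region=us-west1 \
--machine-type=f1-micro \
--disk-size=10 \
--num-nodes=1 && \
echo "gcloud beta container clusters delete ${CLUSTER} --project=${PROJECT} --region=us-west1 --quiet" \
| at -f /dev/stdin now + 2 hour
The && means the deletion is only scheduled if the create command itself succeeds.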