Setting up a GCE Instance as an Inlets Exit Node
The prolific Alex Ellis has a new project, Inlets.
Here’s a quick tutorial using Google Cloud Platform’s (GCP) Compute Engine (GCE).
NB I’m using one of Google’s “Always Free” f1-micro instances, but you may still pay for network egress and storage.
Assumptions
I’m assuming you have a Google account, have used GCP, and have a billing account established, i.e. the following returns at least one billing account:
gcloud beta billing accounts list
If you have only one billing account and it’s the one you wish to use, you can capture it with:
BILLING=$(gcloud beta billing accounts list --format="value(name)") && echo ${BILLING}
To proceed, you’ll need a billing account established, and you may incur charges even though the f1-micro instance is free.
Environment Variables
Variable | Description
---|---
`ACCOUNT` | The identity of the service account that owns `${INSTANCE}`
`BILLING` | A Google Billing Account ID (see `gcloud beta billing accounts list`)
`DIGEST` | The SHA-256 hash of an Inlets container image on DockerHub
`INSTANCE` | The name of the Compute Engine instance hosting the Exit Node
`IP` | The public IP address of `${INSTANCE}`
`PORT` | The Inlets Exit Node port
`PROJECT` | A Google Project ID
`REMOTE` | The endpoint of the Inlets Exit Node (`REMOTE=${IP}:${PORT}`)
`RULE` | A name for the firewall rule
`TOKEN` | A shared secret between the Inlets Exit Node and Host Node(s)
`UPSTREAM` | The endpoint of the (local) HTTP server (e.g. `localhost:3000`)
`ZONE` | A Compute Engine zone (see `gcloud compute zones list --project=${PROJECT}`)
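Several of these are derived rather than set directly; in particular, `${REMOTE}` is composed from `${IP}` and `${PORT}`. A quick sketch with hypothetical placeholder values (substitute your own):

```shell
# All values below are hypothetical placeholders -- substitute your own.
PORT="8090"
IP="203.0.113.10"          # normally captured from the instance (see below)
REMOTE="${IP}:${PORT}"     # the Exit Node endpoint the client dials
UPSTREAM="localhost:3000"  # your local HTTP server

echo "client will connect to ${REMOTE} and forward to ${UPSTREAM}"
```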
Create Inlets Exit Node
Google makes it easy to create projects per activity, and I encourage you to create a project solely to host the Inlets Exit Node.
The following creates a project (${PROJECT}) and assigns it a billing account (${BILLING}):
PROJECT=[[YOUR-PROJECT-ID]]
BILLING=[[YOUR-BILLING-ID]]
gcloud projects create ${PROJECT}
gcloud beta billing projects link ${PROJECT} --billing-account=${BILLING}
NB See Environment Variables
For completeness, we’ll enable Compute Engine explicitly (though this may be done by default):
gcloud services enable compute.googleapis.com --project=${PROJECT}
Then we’ll create a Compute Engine instance (${INSTANCE}) of type f1-micro in a zone (${ZONE}) using Google’s Container-Optimized OS, which runs Alex’s Inlets container (see https://hub.docker.com/r/inlets/inlets/tags):
INSTANCE=[[YOUR-INSTANCE-NAME]] # Perhaps 'exit-node'
ZONE="us-west1-c" # Or your preferred zone
DIGEST="sha256:f4e1b7d46c4894930a82cc58ea1d05e987d4d3c9ad1db5b1818cd520c71e8428"
PORT="8090" # Or your preferred port
TOKEN=$(head --bytes=8192 /dev/urandom | sha256sum | head --bytes=64) # Randomly-generated 256-bit SHA-256 hash (64 hex characters)
gcloud beta compute instances create-with-container ${INSTANCE} \
--project=${PROJECT} \
--zone=${ZONE} \
--machine-type=f1-micro \
--image-family=cos-stable \
--image-project=cos-cloud \
--boot-disk-size=10GB \
--container-image=inlets/inlets@${DIGEST} \
--container-restart-policy=always \
--container-arg=server \
--container-arg="--port=${PORT}" \
--container-arg="--token=${TOKEN}" \
--labels=project=inlets,language=golang
NB This uses the default GCE service account. It would be even more secure to generate a service account specifically for this instance. In that case, you would need to create the service account and then pass its email address to the above command using --service-account=${ACCOUNT}
NB This container runs the Inlets server using the command server --port=${PORT} --token=${TOKEN}
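Incidentally, the `${TOKEN}` one-liner above always yields the hex form of a 256-bit digest, i.e. 64 lowercase hex characters. A self-contained sanity check:

```shell
# Generate a token and verify its shape: 64 lowercase hex characters.
TOKEN=$(head --bytes=8192 /dev/urandom | sha256sum | head --bytes=64)
if printf "%s" "${TOKEN}" | grep --quiet --extended-regexp '^[0-9a-f]{64}$'; then
  echo "token OK"
else
  echo "token malformed" >&2
  exit 1
fi
```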
Once this command completes (hopefully successfully), we can grab the instance’s Public IP address:
IP=$(\
gcloud compute instances describe ${INSTANCE} \
--project=${PROJECT} \
--format="value(networkInterfaces[0].accessConfigs[0].natIP)" \
--zone=${ZONE}) && echo ${IP}
Then we will create a firewall rule (${RULE}) for this instance (${INSTANCE}) only, for the port it uses (${PORT}), and we’ll scope the rule by the instance’s identity (${ACCOUNT}):
RULE="inlets-allow-${PORT}"
NAME="projects/${PROJECT}/serviceAccounts/[0-9]{12}-compute@developer.gserviceaccount.com"
ACCOUNT=$(\
gcloud iam service-accounts list \
--project=${PROJECT} \
--filter="name ~\"${NAME}\"" \
--format="value(email)")
gcloud compute firewall-rules create ${RULE} \
--project=${PROJECT} \
--direction=INGRESS \
--action=ALLOW \
--rules=tcp:${PORT} \
--source-ranges=0.0.0.0/0 \
--target-service-accounts=${ACCOUNT}
NB ${ACCOUNT} is determined by filtering the project’s (${PROJECT}) service accounts for the Compute Engine default account. If you created a non-default service account for this instance, you must use that account’s email address here.
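The filter matches the default Compute Engine service account’s email, which always has the form `<project-number>-compute@developer.gserviceaccount.com`. The pattern can be checked locally (the project number below is hypothetical):

```shell
# Hypothetical email; real ones carry your project's 12-digit project number.
EMAIL="123456789012-compute@developer.gserviceaccount.com"
if printf "%s" "${EMAIL}" | grep --quiet --extended-regexp '^[0-9]{12}-compute@developer\.gserviceaccount\.com$'; then
  echo "matches the default service-account pattern"
fi
```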
We now have a Compute Engine instance running an Exit Node. This includes the Inlets container running the server. We have created a firewall rule that permits access to it. The next step is to run a local HTTP server and the Inlets client on our Host (local) Node.
Create Inlets Host Node
You will need an HTTP server running, preferably on your local machine.
If you need an arbitrary HTTP server to test with, Alex provides hash-browns.
You may clone and run the binary, or use Docker:
go get -u github.com/alexellis/hash-browns
port=3000 go run github.com/alexellis/hash-browns
Or:
docker run \
--interactive --tty \
--publish=3000:3000 \
--env=port=3000 \
alexellis2/hashbrowns:1.2.0
NB In both cases, we’re running this server on port :3000; you may change this as you wish.
Now we can run the Inlets client:
REMOTE="${IP}:${PORT}"
UPSTREAM="localhost:3000" # Your local HTTP server's endpoint
docker run \
--interactive --tty \
--net=host \
--env=REMOTE=${REMOTE} \
--env=TOKEN=${TOKEN} \
inlets/inlets@${DIGEST} \
client \
--remote=${REMOTE} \
--upstream=${UPSTREAM} \
--token=${TOKEN}
NB This container runs the Inlets client using the command client --remote=${REMOTE} --upstream=${UPSTREAM} --token=${TOKEN}
Test
TEST="Hello Freddie!"
curl --data "${TEST}" http://${REMOTE}/hash
1c68dba28d1ee45b27c61fbaf2fa0790b93d7b6050a4fcf532b667fc0654d923
printf "%s" "${TEST}" \
| sha256sum \
| head --bytes=64
1c68dba28d1ee45b27c61fbaf2fa0790b93d7b6050a4fcf532b667fc0654d923
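Note the quoting in the local command: ${TEST} contains a space, so it must reach printf as a single "%s" argument; unquoted, it would word-split and the wrong bytes would be hashed. A minimal sketch:

```shell
TEST="Hello Freddie!"
# "%s" with a quoted expansion passes the body as one argument, bytes intact.
HASH=$(printf "%s" "${TEST}" | sha256sum | head --bytes=64)
echo "${#HASH}"  # a SHA-256 digest is always 64 hex characters
```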
Debugging
You may SSH into the instance (${INSTANCE}) using:
gcloud compute ssh ${INSTANCE} \
--project=${PROJECT} \
--zone=${ZONE}
Then you can interact with Docker as you would elsewhere:
docker container ls --format="{{.ID}}\t{{.Image}}"
CONTAINER ID IMAGE
a308bd0fb870 inlets/inlets
a5bbfc4e8306 gcr.io/stackdriver-agents/stackdriver-logging-agent:0.2-1.5.33-1-1
docker logs a308
2020/01/22 21:55:05 Welcome to inlets.dev! Find out more at https://github.com/inlets/inlets
2020/01/22 21:55:05 Starting server - version 2.6.3-14-g8fd1007
2020/01/22 21:55:05 Server token: "[[REDACTED]]"
2020/01/22 21:55:05 Control Plane Listening on :8090
2020/01/22 21:55:05 Data Plane Listening on :8090
time="2020-01-22T21:55:15Z" level=info msg="Handling backend connection request [841f86fbaf5848ff85dabed2f67aec69]"
2020/01/22 21:56:58 [e13ee5823c634a798ca25c120c5581d5] proxy 35.230.74.70:8090 POST /hash
2020/01/22 21:57:00 [97b54d40cbdf4ba2ad1c96be7ab039ba] proxy 35.230.74.70:8090 POST /hash
2020/01/22 21:57:20 [0f6de151cbeb4bdcb0a74b0b397e88cb] proxy 35.230.74.70:8090 POST /hash
2020/01/22 21:57:43 [671174838dfd48e584f25e07f1b771d6] proxy 35.230.74.70:8090 POST /hash
To list the containers, you can pass the docker container ls command to SSH through gcloud:
gcloud compute ssh ${INSTANCE} \
--project=${PROJECT} \
--command="docker container ls --format=\"{{.ID}}\t{{.Image}}\""
No zone specified. Using zone [us-west1-c] for instance: [coffee].
CONTAINER ID IMAGE
a308bd0fb870 inlets/inlets
a5bbfc4e8306 gcr.io/stackdriver-agents/stackdriver-logging-agent:0.2-1.5.33-1-1
NB Container-Optimized OS runs Google’s Stackdriver Logging|Monitoring agent in the second container
Since Stackdriver’s available, we can pull the container’s logs directly using gcloud:
gcloud logging read "resource.type=\"gce_instance\" jsonPayload.container_id:\"a308bd0fb870\"" \
--project=${PROJECT} \
--order=asc \
--format=json \
| jq -r '.[].jsonPayload.message|rtrimstr("\n")'
2020/01/22 21:55:05 Welcome to inlets.dev! Find out more at https://github.com/inlets/inlets
2020/01/22 21:55:05 Starting server - version 2.6.3-14-g8fd1007
2020/01/22 21:55:05 Server token: "[[REDACTED]]"
2020/01/22 21:55:05 Control Plane Listening on :8090
2020/01/22 21:55:05 Data Plane Listening on :8090
time="2020-01-22T21:55:15Z" level=info msg="Handling backend connection request [841f86fbaf5848ff85dabed2f67aec69]"
2020/01/22 21:56:58 [e13ee5823c634a798ca25c120c5581d5] proxy 35.230.74.70:8090 POST /hash
2020/01/22 21:57:00 [97b54d40cbdf4ba2ad1c96be7ab039ba] proxy 35.230.74.70:8090 POST /hash
2020/01/22 21:57:20 [0f6de151cbeb4bdcb0a74b0b397e88cb] proxy 35.230.74.70:8090 POST /hash
2020/01/22 21:57:43 [671174838dfd48e584f25e07f1b771d6] proxy 35.230.74.70:8090 POST /hash
NB The filter’s container_id is followed by a colon (:), meaning “has” (contains), because we only have part of the (short-form) container ID
The logs (for the same period) should match ;-)
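The jq expression strips the trailing newline that Stackdriver preserves in each message. You can try it on hypothetical entries shaped like gcloud logging read --format=json output:

```shell
# Two hypothetical log entries in the shape produced by `gcloud logging read --format=json`.
LOGS='[{"jsonPayload":{"message":"Starting server\n"}},{"jsonPayload":{"message":"Data Plane Listening on :8090\n"}}]'
printf '%s' "${LOGS}" | jq -r '.[].jsonPayload.message|rtrimstr("\n")'
```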
Tear-down
If you created a project specifically to test this out, you can delete everything (irrecoverably) using:
gcloud projects delete ${PROJECT} --quiet
NB The above deletes all the resources contained within the project
Alternatively, you may delete the resources individually:
gcloud compute firewall-rules delete ${RULE} \
--project=${PROJECT}
gcloud compute instances delete ${INSTANCE} \
--project=${PROJECT} \
--zone=${ZONE}
That’s all!
Appendices
Google Secret Manager
Yesterday, Google announced Secret Manager, which provides a more secure way to manage secrets. In this tutorial, the Inlets server and client share a secret (`${TOKEN}`). We can use Secret Manager to host this secret for us:
Enable the service:
gcloud services enable secretmanager.googleapis.com \
--project=${PROJECT}
Then persist the token:
SECRET="inlets-token"
printf "%s" ${TOKEN} \
| gcloud beta secrets create ${SECRET} \
--data-file=- \
--project=${PROJECT} \
--replication-policy=automatic
NB Using printf rather than echo avoids appending a newline (\n) to the secret
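The difference is easy to see by counting characters:

```shell
TOKEN="abc123"  # a hypothetical six-character secret
printf "%s" "${TOKEN}" | wc --chars   # 6: no trailing newline
echo "${TOKEN}" | wc --chars          # 7: echo appends \n
```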
Then, wherever the value (${TOKEN}) is needed, we may replace it with $(gcloud beta secrets versions access 1 --secret=${SECRET} --project=${PROJECT}), i.e.:
gcloud beta compute instances create-with-container ${INSTANCE} \
--project=${PROJECT} \
--zone=${ZONE} \
--machine-type=f1-micro \
--image-family=cos-stable \
--image-project=cos-cloud \
--boot-disk-size=10GB \
--container-image=inlets/inlets@${DIGEST} \
--container-restart-policy=always \
--container-arg=server \
--container-arg="--port=${PORT}" \
--container-arg="--token=$(gcloud beta secrets versions access 1 --secret=${SECRET} --project=${PROJECT})" \
--labels=project=inlets,language=golang
and:
REMOTE="${IP}:${PORT}"
UPSTREAM="localhost:3000" # Your local HTTP server's endpoint
docker run \
--interactive --tty \
--net=host \
--env=REMOTE=${REMOTE} \
--env=TOKEN=$(gcloud beta secrets versions access 1 --secret=${SECRET} --project=${PROJECT}) \
inlets/inlets@${DIGEST} \
client \
--remote=${REMOTE} \
--upstream=${UPSTREAM} \
--token=$(gcloud beta secrets versions access 1 --secret=${SECRET} --project=${PROJECT})
What benefits does this provide? Access to the secret is controlled by IAM and passing the secret is done implicitly.
Google Cloud Run
I was curious to deploy the Inlets server to Cloud Run and use it as the Exit Node, but this is not possible because Cloud Run does not permit HTTP streaming, which the WebSocket protocol requires.
NB The following does not currently work!
Enable Cloud Run and Container Registry (GCR):
for SERVICE in "run" "containerregistry"
do
gcloud services enable ${SERVICE}.googleapis.com \
--project=${PROJECT}
done
NB You can pass --async to gcloud services enable ... but you’ll then need to await completion
Pull the Inlets container image, retag it, and push it to GCR:
docker pull docker.io/inlets/inlets@${DIGEST}
docker tag docker.io/inlets/inlets@${DIGEST} gcr.io/${PROJECT}/inlets # NB this will be untagged
docker push gcr.io/${PROJECT}/inlets # NB this will be tagged `latest` but we'll reference it by SHA256
NB The above is a little messy because we’re trying to use digests instead of tags; there may be a better way to do this?
Then deploy (remember this will succeed but you’ll be unable to use it with WebSocket):
REGION="us-west1" # Or your preferred Cloud Run region; ${REGION} is used below
SERVICE=inlets
gcloud beta run deploy ${SERVICE} \
--image=gcr.io/${PROJECT}/inlets@${DIGEST} \
--args="server","--port=${PORT}","--token=${TOKEN}" \
--project=${PROJECT} \
--platform=managed \
--region=${REGION} \
--allow-unauthenticated
NB Be careful to use gcloud beta run ...; I discovered that gcloud run ... requires the addition of --command="/usr/bin/inlets" even though this should not be required, because the entrypoint is correctly defined by the container.
Then:
gcloud logging read "resource.type=\"cloud_run_revision\" resource.labels.service_name=\"${SERVICE}\"" \
--project=${PROJECT} \
--order=asc \
--format=json \
| jq -r .[].textPayload
2020/01/23 18:42:32 Welcome to inlets.dev! Find out more at https://github.com/inlets/inlets
2020/01/23 18:42:32 Starting server - version 2.6.3-14-g8fd1007
2020/01/23 18:42:32 Server token: "[[REDACTED]]"
2020/01/23 18:42:32 Control Plane Listening on :8080
2020/01/23 18:42:32 Data Plane Listening on :8080
And, if this were working (which it is not!), you could grab the endpoint using either:
gcloud beta run services describe ${SERVICE} \
--project=${PROJECT} \
--platform=managed \
--region=${REGION} \
--format="value(status.address.url)"
Or:
gcloud beta run services describe ${SERVICE} \
--project=${PROJECT} \
--platform=managed \
--region=${REGION} \
--format="json" \
| jq -r .status.address.url
You could then plug this (TLS!) endpoint and the port (${PORT}) in as before.
When you’re done, you may delete the Cloud Run service:
gcloud beta run services delete ${SERVICE} \
--project=${PROJECT} \
--platform=managed \
--region=${REGION} \
--quiet
Or delete the project entirely as before.