Cloud Functions Simple(st) HTTP Proxy
I’m investigating the use of LetsEncrypt for gRPC services. I found this straightforward post by Scott Devoid and am going to try this approach.
Before I can do that, I need to be able to publish services (make them Internet-accessible) and would like to try to continue to use GCP for free.
Some time ago, I wrote about using the excellent Microk8s on GCP. Using an f1-micro, I’m hoping (!) to stay within the Compute Engine free tier. I’ll also try to be diligent and delete the instance when it’s not needed. This gives me a runtime platform, and I can expose services on the instance’s (Node)Ports, but I’d prefer not to be billed for a simple proxy.
It struck me that I could potentially use Google Cloud Functions (2 million free calls per month) as an economical HTTP proxy. I found Ben Church’s Writing a Reverse Proxy in just one line with Go and tweaked that for my simpler needs:
package proxy

import (
	"fmt"
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
	"os"
)

// Endpoint describes the origin (host and port) to which requests are proxied
type Endpoint struct {
	Host string
	Port string
}

// URL validates the Endpoint and renders it as a *url.URL
func (e *Endpoint) URL() (*url.URL, error) {
	if e.Host == "" || e.Port == "" {
		return nil, fmt.Errorf("PROXY_HOST and PROXY_PORT must both be set")
	}
	return url.Parse(fmt.Sprintf("http://%s:%s", e.Host, e.Port))
}

// Handler is a function that proxies incoming requests
var Handler func(w http.ResponseWriter, r *http.Request)

func reverseproxy(url *url.URL, w http.ResponseWriter, r *http.Request) {
	// In Go, the incoming Host header is promoted to r.Host, so capture it
	// before overwriting r.Host with the origin's host
	r.Header.Set("X-Forwarded-Host", r.Host)
	r.URL.Host = url.Host
	r.URL.Scheme = url.Scheme
	r.Host = url.Host
	httputil.NewSingleHostReverseProxy(url).ServeHTTP(w, r)
}

func init() {
	origin := Endpoint{
		Host: os.Getenv("PROXY_HOST"),
		Port: os.Getenv("PROXY_PORT"),
	}
	url, err := origin.URL()
	if err != nil {
		log.Fatal(err)
	}
	// Once the url is determined, it's static for this proxy handler
	Handler = func(w http.ResponseWriter, r *http.Request) {
		reverseproxy(url, w, r)
	}
}
If you’d like to test this locally, you can wrap the function in main:
package main

import (
	"net/http"

	"proxy"
)

func main() {
	http.HandleFunc("/", proxy.Handler)
	if err := http.ListenAndServe(":80", nil); err != nil {
		panic(err)
	}
}
and then simply: PROXY_HOST=[[YOUR-HOST]] PROXY_PORT=[[YOUR-PORT]] go run main.go
If you’re using Microk8s as described in my post, there are additional steps for private and public testing.
For private testing, we can use gcloud SSH port-forwarding to forward requests from a local port to the Microk8s instance. In order to do this, we must first determine the NodePort of our service:
NODEPORT=$(\
gcloud compute ssh ${INSTANCE} \
--zone=${ZONE} \
--project=${PROJECT} \
--command="/snap/bin/microk8s.kubectl get service/nginx --output=jsonpath='{.spec.ports[0].nodePort}'")
gcloud compute ssh ${INSTANCE} \
--zone=${ZONE} \
--project=${PROJECT} \
--ssh-flag="-L ${NODEPORT}:localhost:${NODEPORT}"
NB In the above, I’m assuming your Kubernetes service is called nginx and that the service has only one port definition.
Once the port-forwarding is running, switch to another terminal and then:
PROXY_HOST=localhost PROXY_PORT=${NODEPORT} go run main.go
You should then be able to curl localhost:80 to access the Nginx (or other) service running remotely on Microk8s.
The next step is to then expose the Microk8s service to the Internet. Only proceed with the following steps if you’re confident in what you’re doing and aware that this may incur GCP costs.
FIREWALL=lb
gcloud compute firewall-rules create ${FIREWALL} \
--project=${PROJECT} \
--network=default \
--action=ALLOW \
--rules=tcp:${NODEPORT} \
--target-tags=microk8s
NB The tag microk8s was assigned to the instance when it was created. This lets us scope the firewall rule to that instance alone.
Lastly, we need the instance’s external IP address. Either:
NODEHOST=$(\
gcloud compute instances describe ${INSTANCE} \
--zone=${ZONE} \
--project=${PROJECT} \
--format="value(networkInterfaces[0].accessConfigs[0].natIP)")
or, equivalently, using jq:
NODEHOST=$(\
gcloud compute instances describe ${INSTANCE} \
--zone=${ZONE} \
--project=${PROJECT} \
--format="json" \
| jq --raw-output .networkInterfaces[0].accessConfigs[0].natIP)
And, now we can test the proxy locally using the remote service:
PROXY_HOST=${NODEHOST} PROXY_PORT=${NODEPORT} go run main.go
And, when we’re confident with that, we can – if necessary – enable Cloud Functions:
gcloud services enable cloudfunctions.googleapis.com --project=${PROJECT}
And then deploy the proxy:
FUNCTION=proxy
gcloud functions deploy ${FUNCTION} \
--entry-point=Handler \
--runtime=go111 \
--set-env-vars=PROXY_HOST=${NODEHOST},PROXY_PORT=${NODEPORT} \
--trigger-http \
--project=${PROJECT}
And, to confirm it’s working correctly, curl the Function’s endpoint:
curl $(\
gcloud functions describe ${FUNCTION} \
--project=$PROJECT \
--format="value(httpsTrigger.url)")
Tidy
gcloud functions delete ${FUNCTION} --project=${PROJECT} --quiet
gcloud compute firewall-rules delete ${FIREWALL} --project=${PROJECT} --quiet
gcloud compute instances delete ${INSTANCE} --zone=${ZONE} --project=${PROJECT} --quiet
Or, more emphatically:
gcloud projects delete ${PROJECT} --quiet
Conclusion
There’s likely no better option than Google’s own HTTP(S) Load Balancer, or even just Google’s TCP Load Balancer. But, if you’re trying to save money, it doesn’t get much cheaper than 2 million free calls per month with a Cloud Functions proxy.
One oversight is that, because Cloud Functions automatically serves over TLS (HTTPS), it’s not possible to use this mechanism to expose a service cheaply in order to have LetsEncrypt generate a certificate for it.
Back to the drawing board.