
Connecting to Memorystore from Cloud Run

Arnaud Lachaume · June 8, 2021 · 5 min read

Connect to Memorystore from Cloud Run containers using an SSH tunnel via GCE.


UPDATE: Since the release of this post, GCP has rolled out an official way of connecting to Memorystore from Cloud Run. You can check out the instructions here. The blog post below is still applicable if you need to connect your Cloud Run application to an on-premise/cloud system via a secure tunnel.

In November last year Google Cloud Platform (GCP) released Cloud Run in General Availability (GA) to help companies move to serverless.

As part of the GA release, they introduced an easy way to connect to Cloud SQL via container SQL connections. However, this connection is just an ad-hoc solution and there is currently no generic way to connect Cloud Run services to internal VPC services such as Memorystore.

Early this year, GCP announced the release of Serverless VPC Access connectors for App Engine and Cloud Functions to support direct connections to internal VPC services. This is great news of course, but support for Cloud Run is still "coming soon", with no expected release date.

So what are your options if you are eager to move to Cloud Run but your app is using Memorystore? For now the only advertised solution is to deploy a dedicated Cloud Run cluster via Anthos on GKE within your VPC. A bit of a bazooka to kill a fly I'm afraid....

...unless...

SSH tunnels to the rescue

It is actually possible to create an SSH tunnel from your container to your VPC via Google Compute Engine (GCE) to access Memorystore.

The gcloud CLI actually provides a very easy way to do this:

gcloud compute ssh --zone=us-central1-a gce-instance-name --ssh-flag="-N -L 6379:10.0.0.5:6379"

This command authenticates the current user/service and creates the following SSH tunnel:

localhost:6379 -> gce-instance-name -> 10.0.0.5:6379

Using the command above you can access any privately hosted VPC service from a public network such as Cloud Run. The SSH connection is authenticated by IAM when you run this command.

The only thing you need is to create a GCE instance to act as a forwarder:

gcloud compute instances create gce-instance-name --machine-type=f1-micro --zone=us-central1-a

A simple approach with Cloud Run

First make sure you have created a forwarder instance on GCE:

gcloud compute instances create vpc-forwarder --machine-type=f1-micro --zone=us-central1-a

Now here is how to build your Cloud Run Docker container to make this all work. In this example we're containerizing a Rails app, but feel free to adapt it to the stack you use.

Dockerfile

The file installs the gcloud CLI, generates an SSH key for GCE, adds everything we need for our Rails service and declares our entrypoint.sh script as the container entrypoint.
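
A minimal sketch of such a Dockerfile is shown below. The ruby:2.7 base image, the Rails setup steps and the file paths are assumptions to adapt to your own application.

# Dockerfile (sketch)
FROM ruby:2.7

# Install the gcloud CLI, used by entrypoint.sh to open the SSH tunnel
RUN curl -sSL https://sdk.cloud.google.com | bash -s -- --disable-prompts
ENV PATH="/root/google-cloud-sdk/bin:${PATH}"

# Pre-generate the SSH key that gcloud will register on the GCE forwarder
RUN mkdir -p /root/.ssh && \
    ssh-keygen -q -t rsa -N '' -f /root/.ssh/google_compute_engine

# Install the Rails application
WORKDIR /app
COPY Gemfile Gemfile.lock ./
RUN bundle install
COPY . .

# Start the tunnel, then the app (see entrypoint.sh below)
ENTRYPOINT ["/app/entrypoint.sh"]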

entrypoint.sh

And now here is our entrypoint script. The script starts an SSH tunnel in the background, then starts our application in the foreground.
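
A sketch of such an entrypoint is shown below. The forwarder name, zone and Memorystore IP come from the examples above; the Rails start command is an assumption to adapt to your app.

#!/bin/bash
set -e

# Open the SSH tunnel in the background:
# localhost:6379 -> vpc-forwarder -> 10.0.0.5:6379
gcloud compute ssh vpc-forwarder \
  --zone=us-central1-a \
  --ssh-flag="-N -L 6379:10.0.0.5:6379" &

# Start the Rails application in the foreground (Cloud Run provides $PORT)
bundle exec rails server -b 0.0.0.0 -p ${PORT:-8080}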

With the configuration above, our Memorystore Redis instance will be available on localhost:6379. Simply edit your application config to reflect that.

We can now build and deploy our service:
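
For example, using Cloud Build and a managed Cloud Run service (project, service name and region are placeholders):

gcloud builds submit --tag gcr.io/my-project/my-app
gcloud run deploy my-app \
  --image gcr.io/my-project/my-app \
  --platform managed \
  --region us-central1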


Alright! That was easy!

...

There are a few caveats though:

  • Firewall error: You might end up with an error if your GCE firewall rules prevent SSH connections on port 22. If that's the case you should allow SSH connections in your firewall rules. You can run the entrypoint gcloud command locally on your machine, then use redis-cli to ensure that you can actually connect to Memorystore via this SSH tunnel.
  • Permission error: You might receive a permission error because the service account used by your Cloud Run service is not authorized to add SSH keys to your GCE instance. Adding SSH keys requires the roles/compute.instanceAdmin IAM role, which is quite a powerful role to give to a service account. That's not really acceptable.
  • Security: We're baking the SSH keys inside the container image to speed up container boot time, which is not ideal.
  • Security: The gcloud CLI bypasses SSH strict host checking the first time it runs in the container, which theoretically opens the container to MITM attacks.
  • Build complexity: We're installing the gcloud CLI just for a tunnel? Seems a bit of overkill. This is just smart SSH after all.
  • Failover: What happens if your forwarder crashes? Your app will crash. We need redundancy.

A more complete approach

Let's modify our approach to simplify the build, add failover capabilities and increase the overall security of our setup.

Prepare the forwarders

Here are the preparation steps to generate an SSH key for the tunnel and set up the forwarders.
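
The exact commands depend on your project, but a sketch could look like this (the "tunnel" username and the zones are assumptions):

# Generate a dedicated SSH key for the tunnel
# (this is the key referenced below as ~/.ssh/google_compute_engine)
ssh-keygen -q -t rsa -N '' -f ~/.ssh/google_compute_engine

# Create two small forwarder instances in the VPC for redundancy
gcloud compute instances create vpc-forwarder-1 --machine-type=f1-micro --zone=us-central1-a
gcloud compute instances create vpc-forwarder-2 --machine-type=f1-micro --zone=us-central1-b

# Register the public key on both instances under a "tunnel" user
gcloud compute instances add-metadata vpc-forwarder-1 --zone=us-central1-a \
  --metadata ssh-keys="tunnel:$(cat ~/.ssh/google_compute_engine.pub)"
gcloud compute instances add-metadata vpc-forwarder-2 --zone=us-central1-b \
  --metadata ssh-keys="tunnel:$(cat ~/.ssh/google_compute_engine.pub)"

Also note the external IPs of both forwarders (used as FORWARDER_1_IP and FORWARDER_2_IP below); reserving them as static addresses avoids having to redeploy your service if an instance is recreated.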

Prepare your service

On Cloud Run, create a new environment variable on your service called TUNNEL_PRIVATE_KEY and paste the output of the command below. The command simply replaces newline characters with literal '\n' characters.

echo $(awk '{printf "%s\\n", $0}' ~/.ssh/google_compute_engine)

Now remove this private key from your machine:

rm -f ~/.ssh/google_compute_*

That's all we need to prepare our service. Let's move to the build part now.

Dockerfile

Our Dockerfile is a bit simplified as we removed the steps to set up the gcloud CLI and the SSH key.
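
A possible version, assuming the same Rails layout as before and the known hosts file discussed in the redis-tunnel.sh section below:

# Dockerfile (sketch)
FROM ruby:2.7

# Make sure an SSH client is available for the tunnel
RUN apt-get update -qq && apt-get install -y --no-install-recommends openssh-client

# Install the Rails application
WORKDIR /app
COPY Gemfile Gemfile.lock ./
RUN bundle install
COPY . .

# Pre-approved host keys for the forwarders (used by redis-tunnel.sh)
COPY google_compute_known_hosts /root/.ssh/google_compute_known_hosts

ENTRYPOINT ["/app/entrypoint.sh"]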

entrypoint.sh

Our entrypoint hasn't changed much. Instead of calling the gcloud CLI we simply run redis-tunnel.sh in the background (see next section).
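
For example:

#!/bin/bash
set -e

# Open the SSH tunnel to Memorystore in the background
/app/redis-tunnel.sh &

# Start the Rails application in the foreground (Cloud Run provides $PORT)
bundle exec rails server -b 0.0.0.0 -p ${PORT:-8080}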

redis-tunnel.sh

Now this is where the magic happens.

This script configures the SSH key we stored in our TUNNEL_PRIVATE_KEY environment variable then attempts to open SSH tunnels via vpc-forwarder-1 or vpc-forwarder-2 depending on availability.
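
A sketch of what this script can look like is shown below. The "tunnel" user, file locations and retry loop are assumptions to adapt to your setup.

#!/bin/bash

# Adapt these values or inject them as environment variables on your service
MEMORYSTORE_IP="10.0.0.5"
FORWARDER_1_IP="<static IP of vpc-forwarder-1>"
FORWARDER_2_IP="<static IP of vpc-forwarder-2>"
FORWARDER_1_ALIAS="vpc-forwarder-1"
FORWARDER_2_ALIAS="vpc-forwarder-2"

# Write the private key stored in TUNNEL_PRIVATE_KEY to disk,
# converting the literal \n sequences back to newlines
mkdir -p $HOME/.ssh
echo -e "$TUNNEL_PRIVATE_KEY" > $HOME/.ssh/google_compute_engine
chmod 600 $HOME/.ssh/google_compute_engine

# Common SSH options: identity, pre-approved host keys, keep-alives and the tunnel itself
SSH_OPTS=(
  -i $HOME/.ssh/google_compute_engine
  -o UserKnownHostsFile=$HOME/.ssh/google_compute_known_hosts
  -o ExitOnForwardFailure=yes
  -o ServerAliveInterval=30
  -N -L "6379:${MEMORYSTORE_IP}:6379"
)

# Keep a tunnel open: try forwarder 1 first, fall back to forwarder 2, then retry
while true; do
  ssh "${SSH_OPTS[@]}" -o HostKeyAlias="$FORWARDER_1_ALIAS" "tunnel@$FORWARDER_1_IP" || \
    ssh "${SSH_OPTS[@]}" -o HostKeyAlias="$FORWARDER_2_ALIAS" "tunnel@$FORWARDER_2_IP"
  sleep 1
done

In this sketch, the HostKeyAlias option is what ties the FORWARDER_*_ALIAS values to the entries of the google_compute_known_hosts file mentioned below.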

You should adapt this script by setting the variables below. Alternatively you can remove these variables from the script and instead configure environment variables on your service.

  • MEMORYSTORE_IP: the IP address of your Memorystore instance
  • FORWARDER_1_IP: the static IP of your first forwarder
  • FORWARDER_2_IP: the static IP of your second forwarder

If you edit the FORWARDER_*_ALIAS variables then make sure to also edit the google_compute_known_hosts file.

You can now build and deploy your service:
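
As before, something along these lines (project, service name and region are placeholders):

gcloud builds submit --tag gcr.io/my-project/my-app
gcloud run deploy my-app \
  --image gcr.io/my-project/my-app \
  --platform managed \
  --region us-central1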

And voila! You can now use Cloud Run as much as you like!

Moving to serverless using Rails?

Running background jobs in a serverless context can be a pain.

Fortunately, we open-sourced a library called Cloudtasker which makes using Google Cloud Tasks with Rails and Cloud Run a breeze! If you love Sidekiq then you'll love Cloudtasker.

Feel free to check it out or read our blog post about it.


