In an earlier article,
I explained that Cloud Run implements the Knative API. In this post
I’ll show you how to use Cloud Run’s client libraries in Go to make API calls to
Knative clusters (on Google or not) with code samples. (I’m guessing only like
10 people will ever need this, 9 of them probably at Google, but here we go).
WARNING
I have now moved this guide to the Cloud Run official documentation. Follow that page for the most up-to-date instructions.
You can now route your users to the nearest Google datacenter that has your
Cloud Run application deployed. In this article, I list the commands to set up
a load balancer and deploy an application as “multi-region” with Cloud Run
using gcloud.
Today we’ve launched the most anticipated Cloud Run feature in beta: you can now access VPC networks from Cloud Run. As you can imagine, this is great news for scenarios like running your frontend on Cloud Run and calling other backend microservices running on Kubernetes over their private IPs.
In this article, we will connect to a Kubernetes service running on a private
GKE cluster from Cloud Run. To visualize this, we’ll be configuring
the following architecture:
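At the gcloud level, the setup boils down to two pieces; here is a minimal
sketch (the connector name, region, network, and IP range are placeholders I
made up, not values from the article):

    # Create a Serverless VPC Access connector in the same region and VPC
    # network as the private GKE cluster.
    gcloud compute networks vpc-access connectors create my-connector \
        --region=us-central1 --network=default --range=10.8.0.0/28

    # Attach the connector to the Cloud Run service so its outbound requests
    # can reach private IPs, such as an internal load balancer in front of
    # the Kubernetes service.
    gcloud run deploy frontend --image=gcr.io/myproject/frontend \
        --region=us-central1 --vpc-connector=my-connector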
Many microservices applications are primarily configured through environment
variables nowadays. If you’re deploying to Cloud Run with the gcloud CLI,
specifying a lot of environment variables might look rather painful:
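(The example below is invented; the service name, image, and variable names
are placeholders.) Everything has to be crammed into one long, comma-separated
flag:

    gcloud run deploy myapp --image=gcr.io/myproject/myapp \
        --set-env-vars="DB_HOST=10.0.0.3,DB_NAME=orders,DB_USER=app,CACHE_HOST=10.0.0.4,CACHE_PORT=6379,FEATURE_X_ENABLED=true,LOG_LEVEL=debug"

(Recent gcloud releases also have an --env-vars-file flag that reads the same
key/value pairs from a YAML file, which tends to be easier to maintain.)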
Good news everyone: We finally managed to make deploying serverless containers
as simple as gcloud run deploy --image=[IMAGE]. This command deploys an
application to Cloud Run from the given Docker image, but what actually happens
behind the scenes to make this work?
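Roughly speaking, the CLI turns your flags into a Knative-style Service object,
sends it to the Cloud Run Admin API, and then polls until the new revision
reports ready. If you want to watch this yourself, gcloud’s global --log-http
flag prints every API request the command makes (the service and image names
below are placeholders):

    # Print the HTTP requests gcloud sends to the Cloud Run Admin API
    # while deploying.
    gcloud run deploy myapp --image=gcr.io/myproject/myapp --log-http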
Google’s serverless containers as a service (CaaS) platform Cloud Run claims
to implement the Knative API and its runtime
contract.
If true, this would mean that with the same YAML manifest file, you can run
your apps on Google’s infrastructure or on a Kubernetes cluster anywhere.
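One way to check the claim yourself is to round-trip a manifest: export a
deployed service’s YAML from Cloud Run and apply essentially the same file to a
cluster running Knative Serving. A sketch (service and file names are
placeholders, and in practice a few platform-specific annotations may need
trimming):

    # Export the Knative-style Service manifest of an existing Cloud Run service.
    gcloud run services describe myapp --format=export > service.yaml

    # Re-deploy the manifest to Cloud Run from the YAML file...
    gcloud run services replace service.yaml

    # ...or apply it to any Kubernetes cluster with Knative Serving installed.
    kubectl apply -f service.yaml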
The Knative project is usually explained as a set of building blocks for
“serverless on Kubernetes”. As a result of this framing, most Kubernetes
users are not aware of what Knative can do for their non-serverless workloads:
better autoscaling and networking for stateless microservices on Kubernetes.
Google Cloud Run now has support for unary gRPC requests (i.e.
non-streaming methods). This guide explains how to authenticate to a private
gRPC service running on Cloud Run.
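The article has the details; the basic idea is to attach an identity token to
each call. The sketch below uses grpcurl, which is my choice for illustration
rather than anything the article prescribes, and the host name and method are
placeholders (if the server doesn’t expose gRPC reflection, you’d also pass
-proto with your .proto files):

    # Mint an identity token for the caller and attach it to the gRPC call.
    TOKEN="$(gcloud auth print-identity-token)"
    grpcurl -H "authorization: Bearer ${TOKEN}" \
        myapp-xyz123-uc.a.run.app:443 mypackage.MyService/MyMethod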
WARNING
As of October 2020, there’s now an official feature in Cloud Run to configure static IPs using VPC and NAT. I have written an official guide to set up static outbound IPs. Please do not apply the workaround in this article anymore.
If you are migrating to serverless with Google Cloud Run from your on-premises datacenters or Google Kubernetes Engine (GKE) clusters, you might notice that Cloud Run has a slightly different networking stack than GCE/GKE.
When accessing endpoints that require “IP whitelisting” (such as Cloud Memorystore, or something on your corporate network) from Cloud Run, you can’t easily have static IPs for your Cloud Run applications. This is because you can’t yet configure Cloud NAT or Serverless VPC Access for Cloud Run.
Until we support these on Cloud Run, I want to share a workaround (with example
code) that involves routing the egress traffic of a Cloud Run application
through a GCE instance with a static IP address.
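For reference, the officially supported path that the warning above points to
boils down to routing egress through a Serverless VPC Access connector and
Cloud NAT with a reserved address. A very rough sketch (all names, regions, and
ranges are placeholders; the official guide is authoritative):

    # Reserve a static external IP address.
    gcloud compute addresses create my-static-ip --region=us-central1

    # Create a Serverless VPC Access connector for the Cloud Run service.
    gcloud compute networks vpc-access connectors create my-connector \
        --region=us-central1 --network=default --range=10.8.0.0/28

    # Set up Cloud NAT on the same network, using the reserved address.
    gcloud compute routers create my-router --network=default --region=us-central1
    gcloud compute routers nats create my-nat --router=my-router \
        --region=us-central1 --nat-all-subnet-ip-ranges \
        --nat-external-ip-pool=my-static-ip

    # Deploy with the connector and send all outbound traffic through the VPC,
    # so it egresses via the NAT’s static IP.
    gcloud run deploy myapp --image=gcr.io/myproject/myapp \
        --region=us-central1 --vpc-connector=my-connector \
        --vpc-egress=all-traffic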