Today we’ve launched the most anticipated Cloud Run feature to beta: you can now access VPC networks from Cloud Run. As you can imagine, this is great news for scenarios like running your frontend on Cloud Run and calling other backend microservices running on Kubernetes¹ over their private IPs.
In this article, we will connect to a Kubernetes service running on a private GKE cluster from Cloud Run. To visualize this, we’ll be configuring the following architecture:
The steps will roughly be:
- Create a private GKE cluster and deploy a service.
- Expose the service in the VPC network with an internal load balancer.
- Configure a Cloud Run VPC connector.
- Connect to the private Kubernetes cluster from Cloud Run (fully managed).
Step 1: Deploy a private GKE cluster
Private GKE clusters have the following properties:
- Kubernetes master (API endpoint) can be either on a public or private IP.
- Kubernetes nodes don’t have public IP addresses.
I’ve used this command to create a 1-node private GKE cluster with a public master endpoint (so I can easily access it from my laptop) and private nodes:
gcloud container clusters create "my-private-cluster" \
    --create-subnetwork name=priv-cluster-subnet \
    --enable-master-authorized-networks \
    --master-authorized-networks 0.0.0.0/0 \
    --enable-ip-alias \
    --enable-private-nodes \
    --master-ipv4-cidr 172.16.0.0/28 \
    --no-enable-basic-auth \
    --no-issue-client-certificate \
    --zone=us-central1-b \
    --num-nodes=1 \
    --machine-type=n1-standard-1
Caveat: Because the private Kubernetes nodes don’t have public IPs, they can’t reach the public internet to pull container images. You need to configure a NAT gateway, such as Cloud NAT.
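As a rough sketch, setting up Cloud NAT for the cluster’s region could look like the following (the router and NAT names are placeholders of my choosing; adjust the network and region to match your cluster):

```shell
# Create a Cloud Router in the cluster's region (names are hypothetical).
gcloud compute routers create nat-router \
    --network=default \
    --region=us-central1

# Attach a Cloud NAT configuration to it so that private nodes can reach
# the public internet for image pulls.
gcloud compute routers nats create nat-config \
    --router=nat-router \
    --region=us-central1 \
    --auto-allocate-nat-external-ips \
    --nat-all-subnet-ip-ranges
```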
However, thanks to the GCR registry mirror configured on GKE nodes by default, we can pull popular official Docker images such as nginx without having to connect to the public Internet:
kubectl create deployment nginx --image=nginx
Step 2: Expose Kubernetes app in VPC network
Since we created the cluster with alias IPs (--enable-ip-alias), Kubernetes Pod IPs are natively routable in the VPC network, and therefore reachable from outside the cluster:
$ kubectl get pods -o=wide
NAME                     READY   STATUS    IP
nginx-65f88748fd-5g6p9   1/1     Running   10.36.1.16
Note that Kubernetes Pods are ephemeral (they can disappear and be replaced by new Pods), so their private IP addresses will change over time.
To expose a service outside the cluster in a reliable way, we need to provision a Google Cloud Internal Load Balancer through Kubernetes. You can find how to do that in the GKE documentation.
Basically, we create a simple Kubernetes Service of type: LoadBalancer, but we give it a particular annotation to make it an internal GCLB. I’ll provide a small command to do that, but you should ideally write a YAML manifest:
kubectl create service loadbalancer nginx --tcp=80:80
kubectl annotate service/nginx "cloud.google.com/load-balancer-type=Internal"
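For reference, an equivalent YAML manifest might look like this (it assumes the deployment created earlier carries the default app: nginx label that kubectl create deployment adds):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
  annotations:
    # This annotation makes GKE provision an internal (not public) LB.
    cloud.google.com/load-balancer-type: "Internal"
spec:
  type: LoadBalancer
  selector:
    app: nginx   # default label from `kubectl create deployment nginx`
  ports:
    - port: 80
      targetPort: 80
```

You would apply it with kubectl apply -f service.yaml.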
About a minute later, you’ll see that the load balancer service you created has an internal IP address (in this case, 10.0.16.6):

$ kubectl get service
NAME    TYPE           CLUSTER-IP   EXTERNAL-IP
nginx   LoadBalancer   10.0.42.97   10.0.16.6
                                    ^^^^^^^^^
If you don’t see an external IP assigned, run kubectl describe service nginx to see what’s going on, as you might be getting errors in the background.
Pricing: Note that internal load balancers are not free, since they use resources such as forwarding rules. See the load balancer pricing page.
Step 3: Configure a Cloud Run VPC connector
To access a VPC (virtual private cloud) network from Cloud Run fully managed, you need to create a VPC Access Connector.
Make sure you create the VPC connector in the same region as your Cloud Run app (it does not need to be in the same region as the GKE cluster, since GCP VPC networks support cross-region traffic).
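As a sketch, creating a connector could look like this (the connector name and IP range below are placeholders; the /28 range must not overlap anything else in your VPC):

```shell
# Create a Serverless VPC Access connector in the same region as the
# Cloud Run service. Names and the IP range here are hypothetical.
gcloud beta compute networks vpc-access connectors create my-connector \
    --region=us-central1 \
    --network=default \
    --range=10.8.0.0/28
```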
Pricing: Provisioning a VPC connector will cost you one f1-micro VM (per 100 Mbps of throughput) on GCE, plus GCE’s network transfer rates. Learn more about pricing here.
Step 4: Connect to Kubernetes service from Cloud Run
Finally the moment you’ve been waiting for. To do this, you need to deploy your Cloud Run application with the following option:
gcloud alpha run deploy [...] \
    --vpc-connector="<NAME_OF_YOUR_VPC_CONNECTOR>"
Once deployed, your Cloud Run application can now access 10.x.x.x IP addresses in your VPC. At this point, you should be able to connect to your service’s Internal Load Balancer from your Cloud Run container.
As mentioned earlier, Kubernetes Pod IPs are also accessible here; however, since Pods do not retain their IP addresses when they are recreated, you should not rely on them.
Make a request to your private Kubernetes service over its internal load balancer IP, and enjoy mixing and matching the compute platforms you use on Google Cloud! I’ll be back with more demos and blog posts about this!
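For instance, from code running inside your Cloud Run container, the request is just a plain HTTP call to the internal IP (10.0.16.6 below is the load balancer address from Step 2; substitute your own):

```shell
# Run from inside the Cloud Run container, e.g. shelled out from your app.
# 10.0.16.6 is the internal load balancer IP assigned in Step 2.
curl http://10.0.16.6/
```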
¹ You can also access many other private endpoints, such as Compute Engine VMs, databases like Cloud SQL, and Redis instances on Cloud Memorystore.