I’ve published an article on the Google Cloud Blog about leader election and distributed consensus: what these concepts are, where they’re useful, and why they are non-trivial problems. The article shows how to easily implement your own distributed lock using Google Cloud Storage and the strong consistency guarantees it provides.
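The core trick behind such a lock — acquiring it as an atomic “create this object only if it doesn’t exist yet” write — can be sketched as below. The `FakeBucket` class is a hypothetical in-memory stand-in so the sketch runs anywhere; with the real `google-cloud-storage` library the equivalent call would be roughly `blob.upload_from_string(b"", if_generation_match=0)`, which fails when a live generation of the object already exists.

```python
# Sketch of a GCS-backed mutex: the lock is an object whose creation is
# guarded by a generation precondition, so only one writer can create it.
# FakeBucket is a hypothetical stand-in for the real bucket API.

class FakeBucket:
    def __init__(self):
        self._objects = {}

    def create_if_absent(self, name, data):
        # Atomic with real GCS: pass if_generation_match=0 so the write
        # succeeds only when no live generation of the object exists.
        if name in self._objects:
            return False  # someone else holds the lock
        self._objects[name] = data
        return True

    def delete(self, name):
        self._objects.pop(name, None)


def acquire_lock(bucket, name="my-app/leader-lock"):
    """Returns True if we now hold the lock, False if another holder exists."""
    return bucket.create_if_absent(name, b"")


def release_lock(bucket, name="my-app/leader-lock"):
    bucket.delete(name)  # deleting the object releases the lock
```

With a real bucket you would also want a TTL or heartbeat so a crashed holder doesn’t keep the lock forever; the article discusses these failure modes.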
After my previous article showing how to build a Google Cloud HTTPS load balancer step by step from the ground up, this time I’m announcing a new official GCP Terraform module that I’ve developed to abstract all of this away from developers. The module works with Cloud Run, Cloud Functions (GCF), and App Engine (GAE) services.
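Usage looks roughly like the sketch below — the resource names, project, and domain are made up, and the module’s registry page documents the authoritative set of inputs:

```hcl
# Illustrative sketch only; see the module's README for the real interface.
resource "google_compute_region_network_endpoint_group" "cloudrun_neg" {
  name                  = "myapp-neg"
  network_endpoint_type = "SERVERLESS"
  region                = "us-central1"
  cloud_run {
    service = "myapp" # hypothetical Cloud Run service name
  }
}

module "lb-http" {
  source = "GoogleCloudPlatform/lb-http/google//modules/serverless_negs"

  project = "my-project" # hypothetical
  name    = "myapp-lb"

  ssl                             = true
  managed_ssl_certificate_domains = ["app.example.com"]

  backends = {
    default = {
      groups = [
        { group = google_compute_region_network_endpoint_group.cloudrun_neg.id }
      ]
    }
  }
}
```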
I’ve published an article titled “Serverless load balancing with Terraform: The hard way” on the Google Cloud blog that details building an HTTPS load balancer with automatic TLS certificates for a Cloud Run service. While writing it, I realized this experience is fairly complicated, so I started preparing a new Terraform module to make it easier.
Two years ago I developed grpc_health_probe out of a real necessity: there was no way to health-check gRPC applications on Kubernetes in an idiomatic, standardized way. Fast forward two years: the tool has been downloaded over 1.8 million times, is used internally at Google as well as at many other companies, and has raised awareness of gRPC’s built-in health checking protocol. I wrote about it on the Google Cloud blog.
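As a reminder of what using it looks like, here is an illustrative pod-spec fragment (the image name and port are made up), assuming the grpc_health_probe binary is bundled into the container image:

```yaml
spec:
  containers:
  - name: server
    image: example.com/grpc-server   # hypothetical image with the probe bundled
    ports:
    - containerPort: 5000
    readinessProbe:
      exec:
        # grpc_health_probe speaks gRPC's standard health checking protocol
        command: ["/bin/grpc_health_probe", "-addr=:5000"]
      initialDelaySeconds: 5
```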
I’ve published an article on the Google Cloud blog detailing how to write and deploy custom middleware (such as serverless-registry-proxy) to customize the behavior of gcr.io for tasks like serving a public Docker registry on a custom domain name.
I’ve posted an announcement on the Google Cloud Blog about a new Cloud Run feature we’ve been working on that adds server-side streaming support for serverless containers.
In an earlier article, I explained that Cloud Run implements the Knative API. In this post I’ll show you how to use Cloud Run’s client libraries in Go to make API calls to Knative clusters (on Google Cloud or not), with code samples. (I’m guessing only like 10 people will ever need this, 9 of them probably at Google, but here we go.)
WARNING
I have now moved this guide to the Cloud Run official documentation. Follow that page for the most up-to-date instructions.
You can now route your users to the nearest Google datacenter that has your Cloud Run application deployed. In this article, I list the commands to set up a load balancer and deploy an application as “multi-region” with Cloud Run using gcloud.
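The gist of those commands — names, regions, and the service below are illustrative, and the article has the full ordered list — is to create a serverless NEG per region and attach them all to one global backend service:

```shell
# Illustrative sketch; myapp-neg-us / myapp-backend / myapp are made-up names.
# One serverless NEG per region, pointing at the Cloud Run service there:
gcloud compute network-endpoint-groups create myapp-neg-us \
    --region=us-central1 \
    --network-endpoint-type=serverless \
    --cloud-run-service=myapp

# Attach each regional NEG to a single global backend service; the load
# balancer then routes each user to the nearest deployed region:
gcloud compute backend-services add-backend myapp-backend \
    --global \
    --network-endpoint-group=myapp-neg-us \
    --network-endpoint-group-region=us-central1
```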
Today we’ve launched the most anticipated Cloud Run feature to beta: you can now access VPC networks from Cloud Run. As you can imagine, this is great news for scenarios like running your frontend on Cloud Run and calling other backend microservices running on Kubernetes over their private IPs.
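In gcloud terms, the setup boils down to two steps. The connector name, project, and IP range below are illustrative, and since the feature is in beta the exact flags may change — check the documentation:

```shell
# Create a Serverless VPC Access connector in the VPC your backends use:
gcloud compute networks vpc-access connectors create my-connector \
    --region=us-central1 \
    --network=default \
    --range=10.8.0.0/28

# Deploy the Cloud Run service with the connector attached, so outbound
# requests from it can reach private IPs in that network:
gcloud run deploy frontend \
    --image=gcr.io/my-project/frontend \
    --vpc-connector=my-connector
```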
In this article, we will connect to a Kubernetes service running on a private
GKE cluster from Cloud Run. To visualize this, we’ll be configuring
the following architecture: