Many microservices applications are primarily configured through environment
variables nowadays. If you’re deploying to Cloud Run with the gcloud CLI,
specifying a lot of environment variables might look rather painful:
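For example, a deployment with just a handful of variables already gets unwieldy (a hedged sketch; the service, image, and variable names are hypothetical):

```sh
# Deploying with many environment variables on one line quickly gets painful
# (service, image, and variable names are hypothetical).
gcloud run deploy myapp \
  --image=gcr.io/myproject/myapp \
  --set-env-vars="DB_HOST=10.0.0.3,DB_USER=app,DB_NAME=orders,CACHE_HOST=10.0.0.4,LOG_LEVEL=debug,FEATURE_X=true"
```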
Good news everyone: we finally managed to make deploying serverless containers
as simple as gcloud run deploy --image=[IMAGE]. This command deploys an
application to Cloud Run with the given Docker image, but what really happens
behind the scenes?
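Before digging in, here’s what a typical invocation looks like (a hedged sketch; the project, service, and region names are hypothetical):

```sh
# Deploy a prebuilt container image to Cloud Run (names are hypothetical)
gcloud run deploy myapp \
  --image=gcr.io/myproject/myapp \
  --region=us-central1 \
  --allow-unauthenticated
```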
Google’s serverless containers-as-a-service (CaaS) platform, Cloud Run, claims
to implement the Knative API and its runtime contract.
If true, this would mean that with the same YAML manifest file, you can run your
apps on Google’s infrastructure or on a Kubernetes cluster anywhere.
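To illustrate the idea, here’s a minimal sketch (the image and service names are hypothetical, and the cluster is assumed to have Knative Serving installed); in principle, the same manifest works with both kubectl and gcloud:

```sh
# A minimal Knative Service manifest, deployable to either target
cat > service.yaml <<'EOF'
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: myapp
spec:
  template:
    spec:
      containers:
        - image: gcr.io/myproject/myapp
EOF

kubectl apply -f service.yaml              # a Knative-enabled Kubernetes cluster
gcloud run services replace service.yaml   # managed Cloud Run
```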
I’ve been leading Krew, the “kubectl plugin manager” (a Kubernetes
sub-project). Today, Krew is used to distribute over 70 kubectl plugins.
This week, I finally took some time to write my first proper plugin.
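If you haven’t used Krew before, discovering and installing plugins looks like this (the plugin name below is just an example):

```sh
# Search the centralized plugin index, install a plugin, then run it
kubectl krew search tree
kubectl krew install tree
kubectl tree deployment myapp   # the plugin runs as a kubectl subcommand
```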
The Knative project is usually explained as building blocks for
“serverless on Kubernetes”. As a result, most Kubernetes users are not aware of
what Knative can do for their non-serverless workloads: better autoscaling and
networking for stateless microservices on Kubernetes.
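For example, Knative exposes its autoscaler’s per-revision knobs as plain annotations; a minimal sketch, assuming a cluster with Knative Serving installed (names and values are illustrative):

```sh
cat > myapp.yaml <<'EOF'
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: myapp
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/minScale: "1"   # keep one replica warm
        autoscaling.knative.dev/maxScale: "10"  # cap scale-out
        autoscaling.knative.dev/target: "50"    # ~50 concurrent requests per replica
    spec:
      containers:
        - image: gcr.io/myproject/myapp
EOF
kubectl apply -f myapp.yaml
```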
If you need to inspect kubectl network traffic, you can add verbose logging
options (-v=8 or higher) to any kubectl command and see the URLs along with the
request and response bodies/headers (except authorization). The logged headers
are usually incomplete, because more headers are added after the request is
logged. To get a complete view, you need to intercept the traffic with a local
proxy like mitmproxy.
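For example (the command being inspected is arbitrary):

```sh
# High verbosity makes kubectl log URLs, headers, and bodies
kubectl get pods -v=8

# In one terminal, start mitmproxy (default listen port shown explicitly):
mitmproxy --listen-port 8080

# In another, point kubectl at the proxy; TLS verification must be skipped
# because mitmproxy re-signs the API server's certificate:
HTTPS_PROXY=localhost:8080 kubectl get pods --insecure-skip-tls-verify
```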
Google Cloud Run now has support for unary gRPC requests (i.e.
non-streaming methods). This guide explains how to authenticate to a private
gRPC service running on Cloud Run.
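The gist of it is attaching an identity token to each call; a hedged sketch with grpcurl (the hostname, package, and method names are hypothetical, and the server is assumed to have gRPC reflection enabled):

```sh
# Call a private gRPC service on Cloud Run with an IAM identity token
grpcurl \
  -H "authorization: Bearer $(gcloud auth print-identity-token)" \
  myapp-xyz123-uc.a.run.app:443 \
  mypackage.MyService/MyMethod
```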
WARNING
As of October 2020, there’s now an official feature in Cloud Run to configure static IPs using VPC and NAT. I have written an official guide to set up static outbound IPs. Please do not apply the workaround in this article anymore.
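For reference, the official approach routes egress through a Serverless VPC Access connector and a Cloud NAT gateway holding a reserved address; a hedged sketch (all names, ranges, and regions are hypothetical):

```sh
# 1) Create a VPC connector for Cloud Run egress
gcloud compute networks vpc-access connectors create my-connector \
  --region=us-central1 --network=default --range=10.8.0.0/28

# 2) Reserve a static IP and NAT all egress through it
gcloud compute addresses create my-static-ip --region=us-central1
gcloud compute routers create my-router --network=default --region=us-central1
gcloud compute routers nats create my-nat --router=my-router --region=us-central1 \
  --nat-external-ip-pool=my-static-ip --nat-all-subnet-ip-ranges

# 3) Send all outbound traffic from the service through the connector
gcloud run deploy myapp --image=gcr.io/myproject/myapp --region=us-central1 \
  --vpc-connector=my-connector --vpc-egress=all-traffic
```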
If you are migrating to serverless with Google Cloud Run from your on-premises datacenters, or Google Kubernetes Engine (GKE) clusters, you might notice that Cloud Run has a slightly different networking stack than GCE/GKE.
When accessing endpoints that require “IP whitelisting” (such as Cloud Memorystore, or something on your corporate network) from Cloud Run, you can’t easily have static IPs for your Cloud Run applications. This is because you cannot yet configure Cloud NAT or Serverless VPC Access for Cloud Run.
Until we support these on Cloud Run, I want to share a workaround (with example
code) that involves routing the egress traffic of a Cloud Run application
through a GCE instance with a static IP address.
Many Google Cloud Run users are developing containers for the first time, but often they are migrating their existing applications. Sometimes these apps aren’t designed as microservices that fit the one-process-per-container model, and they require multiple server processes running together in a container.
Often you will hear that “running multiple processes in a container is bad”, although nothing is inherently wrong with doing so, as I explained in my previous article comparing init systems optimized for containers.
In this article, I’ll show a not super production-ready (hence “the lazy way”)
but working solution for running multi-process containers on Cloud Run, and will
provide example code.
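The core of the trick is a tiny entrypoint script that starts every process in the background and exits as soon as any of them dies, so the platform restarts the container; a minimal sketch (the two processes are hypothetical placeholders):

```sh
#!/bin/bash
# start.sh: run multiple servers in one container, exit when the first one exits
# (nginx and ./my-server stand in for your actual processes)
nginx &
./my-server &

# 'wait -n' (bash 4.3+) returns when the first background job exits;
# propagate its exit code so a crashed process takes the container down with it.
wait -n
exit $?
```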
If you are developing containers, you must have heard the “single process per
container” mantra. Inherently, there’s nothing wrong with running multiple
processes in a container, as long as your ENTRYPOINT is a proper init process.
Some use cases are processes that aid each other (such as a sidecar proxy
process) and porting legacy applications.
Recently, I had to spawn a sidecar process inside a container. Docker’s own
tutorial
for running multiple processes in a container is a good place to start, but not
production-ready. So I outsourced my quest on Twitter to find an init
replacement that can carry out PID 1 (init process) responsibilities like
zombie child reaping and signal forwarding.
In this article I explored the pros and cons of some of the options, like
supervisord, runit, monit, tini/dumb-init, s6 (audience favorite),
and the tini + bash 4.x combo (personal favorite).
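As a quick reference, if all you need is tini as PID 1, Docker can inject it without any image changes (a hedged example; the image name is hypothetical):

```sh
# --init runs tini as PID 1, which reaps zombies and forwards signals
docker run --init myimage
```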