I’ve been leading the Krew “kubectl plugin manager” (a Kubernetes sub-project). Today, Krew is used to distribute over 70 kubectl plugins. This week, I finally took some time to write my first proper plugin. Read More →
The Knative project is usually explained as building blocks for “serverless on Kubernetes”. As a result of this framing, most Kubernetes users are not aware of what Knative can do for their non-serverless workloads: better autoscaling and networking for stateless microservices on Kubernetes. Read More →
If you need to inspect kubectl network traffic, you can add verbose logging (-v=8 or higher) to any kubectl command and see the request URLs, plus the request and response bodies/headers (except authorization). These logged headers are usually incomplete, because more headers are added after the request is logged. To get a complete view, you need to intercept the traffic using a local proxy. Read More →
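As a quick illustration of the verbosity flags described above (the resource and the proxy port are arbitrary examples, and the proxy shown is any local intercepting proxy you run yourself):

```shell
# Show the API request URLs and the (partially logged) request/response
# headers and bodies for any kubectl command; -v=8 and higher include bodies.
kubectl get pods -v=8

# Since the logged headers are incomplete, route kubectl through a local
# intercepting proxy to capture the full traffic (port 8080 is illustrative;
# skipping TLS verification is needed because the proxy re-signs the traffic).
HTTPS_PROXY=localhost:8080 kubectl get pods --insecure-skip-tls-verify
```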
As of October 2020, there’s now an official feature in Cloud Run to configure static IPs using VPC and NAT. I have written an official guide to set up static outbound IPs. Please do not apply the workaround in this article anymore.
If you are migrating to serverless with Google Cloud Run from your on-premises datacenters, or Google Kubernetes Engine (GKE) clusters, you might notice that Cloud Run has a slightly different networking stack than GCE/GKE.
When accessing endpoints that require “IP whitelisting” (such as Cloud Memorystore, or something on your corporate network) from Cloud Run, you can’t easily get static IPs for your Cloud Run applications, because you can’t yet configure Cloud NAT or Serverless VPC Access.
Until we support these on Cloud Run, I want to share a workaround (with example code) that involves routing the egress traffic of a Cloud Run application through a GCE instance with a static IP address. Read More →
Many Google Cloud Run users are starting to develop containers for the first time, but often they are migrating their existing applications. Sometimes, these apps aren’t designed as microservices that fit the one-process-per-container model, and they require multiple server processes running together in a container.
Often you will hear “running multiple processes in a container is bad”, although nothing is wrong with doing so, as I explained in my previous article comparing init systems optimized for containers.
If you are developing containers, you must have heard the “single process per
container” mantra. Inherently, there’s nothing wrong with running multiple
processes in a container, as long as your ENTRYPOINT is a proper
init process. Some use cases include processes that aid each other
(such as a sidecar proxy process) or porting legacy applications.
Recently, I had to spawn a sidecar process inside a container. Docker’s own
documentation for running multiple processes in a container is a good place to
start, but it is not production-ready. So I outsourced my quest to Twitter to find an
init replacement that can handle init process responsibilities like zombie child reaping and signal forwarding.
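To make those two init responsibilities concrete, here is a minimal sketch in Python of what an init process has to do: reap zombie children and forward termination signals to its child. This is illustrative only, not one of the purpose-built init replacements the article evaluates, and `main` is a name I chose for the sketch:

```python
import os
import signal
import subprocess
import sys

def main(argv):
    """Run argv as a child process while acting as a minimal init."""
    # Start the actual server process; in a container, argv would be
    # the real workload command.
    child = subprocess.Popen(argv)

    # Signal forwarding: pass SIGTERM/SIGINT on to the child instead of
    # letting the init process die and leave the child orphaned.
    def forward(signum, frame):
        child.send_signal(signum)
    for sig in (signal.SIGTERM, signal.SIGINT):
        signal.signal(sig, forward)

    # Zombie reaping: wait() collects the exit status of every child that
    # terminates (as PID 1 this would include re-parented orphans), so no
    # zombie processes accumulate. A real init would keep running; this
    # sketch exits when the direct child does.
    while True:
        try:
            pid, status = os.wait()
        except ChildProcessError:
            return 0  # no children left to reap
        if pid == child.pid:
            return os.waitstatus_to_exitcode(status)

if __name__ == "__main__" and len(sys.argv) > 1:
    sys.exit(main(sys.argv[1:]))
```

Usage would look like `python3 init_sketch.py my-server --flag`, with the script set as the container’s ENTRYPOINT.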