Ahmet Alp Balkan
  • Effectively specifying environment variables for Cloud Run

    27 April 2020

    Many microservices applications are primarily configured through environment variables nowadays. If you’re deploying to Cloud Run with the gcloud CLI, specifying a lot of environment variables can be rather painful: Read more →
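For context, this is roughly what that looks like on the command line; the flag names are from the gcloud reference, while the service name and variables below are placeholders:

```shell
# All KEY=VALUE pairs are crammed into a single comma-separated flag,
# which gets unwieldy as the number of variables grows.
gcloud run deploy myapp \
  --image=gcr.io/my-project/myapp \
  --set-env-vars="DB_HOST=10.0.0.3,DB_NAME=orders,LOG_LEVEL=debug"
```

gcloud also accepts an `--env-vars-file` flag pointing to a YAML file, which is easier to maintain than one long comma-separated string.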

  • Inside gcloud run deploy

    31 March 2020

    Good news everyone: we finally managed to make deploying serverless containers as simple as gcloud run deploy --image=[IMAGE]. This command deploys an application to Cloud Run from the given Docker image, but what really happens behind the scenes? Read more →

  • Is Google Cloud Run really Knative?

    26 March 2020

    Google’s serverless containers as a service (CaaS) platform Cloud Run claims to implement the Knative API and its runtime contract. If true, this would mean that with the same YAML manifest file, you can run your apps on Google’s infrastructure, or on a Kubernetes cluster anywhere. Read more →
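To illustrate the claim, a minimal Knative Service manifest looks roughly like this (the service and image names are placeholders); the idea is that the same file could be deployed to Cloud Run or to any Knative-enabled Kubernetes cluster:

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello
spec:
  template:
    spec:
      containers:
        - image: gcr.io/my-project/hello  # placeholder image
```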

  • kubectl tree: Visualize Kubernetes object ownership

    02 January 2020

    I’ve been leading Krew, the “kubectl plugin manager” (a Kubernetes sub-project). Today, Krew is used to distribute over 70 kubectl plugins. This week, I finally took some time to write my first proper plugin. Read more →
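Assuming you already have Krew set up, trying the plugin is a two-liner (the deployment name below is a placeholder):

```shell
# Install the tree plugin through Krew, then visualize which objects
# (ReplicaSets, Pods, ...) are owned by a given deployment.
kubectl krew install tree
kubectl tree deployment my-deployment
```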

  • Knative = Kubernetes Networking++

    29 October 2019

    The Knative project is usually explained as building blocks for “serverless on Kubernetes”. As a result, most Kubernetes users are not aware of what Knative can do for their non-serverless workloads: better autoscaling and networking for stateless microservices on Kubernetes. Read more →

  • Inspecting kubectl traffic with mitmproxy

    24 October 2019

    If you need to inspect kubectl network traffic, you can add verbose logging options (-v=8 or higher) to any kubectl command to see the URLs, request/response bodies, and headers (except authorization). These logged headers are usually incomplete, because more headers are added after the request is logged. To get a complete view, you need to intercept the traffic with a local proxy like mitmproxy. Read more →
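One possible setup, sketched below: kubectl (like most Go programs) honors the HTTPS_PROXY environment variable, and mitmproxy listens on port 8080 by default. Skipping TLS verification is acceptable only for this kind of local debugging, since mitmproxy presents its own certificate:

```shell
# Verbose logging alone: prints URLs, bodies, and (incomplete) headers.
kubectl get pods -v=8

# Full interception: start mitmproxy in one terminal...
mitmproxy --listen-port 8080

# ...and route kubectl through it in another.
HTTPS_PROXY=http://localhost:8080 kubectl get pods --insecure-skip-tls-verify
```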

  • gRPC Authentication on Cloud Run

    21 October 2019

    Google Cloud Run now has support for unary gRPC requests (i.e. non-streaming methods). This guide explains how to authenticate to a private gRPC service running on Cloud Run. Read more →
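A hedged sketch of one way to make an authenticated call with grpcurl: attach a Google-signed identity token as a Bearer token. The service URL and method name are placeholders, and the example assumes the server exposes gRPC reflection (otherwise pass -proto):

```shell
# Mint an identity token for the current gcloud account, then attach it
# to the gRPC call as an authorization header. grpcurl uses TLS by default,
# which Cloud Run requires on port 443.
TOKEN="$(gcloud auth print-identity-token)"
grpcurl -H "authorization: Bearer ${TOKEN}" \
  myservice-abc123-uc.a.run.app:443 my.package.MyService/MyMethod
```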

  • Cloud Run applications with static outgoing IPs

    24 July 2019

    WARNING

    As of October 2020, there’s now an official feature in Cloud Run to configure static IPs using VPC and NAT. I have written an official guide to set up static outbound IPs. Please do not apply the workaround in this article anymore.

    If you are migrating to serverless with Google Cloud Run from your on-premises datacenters, or Google Kubernetes Engine (GKE) clusters, you might notice that Cloud Run has a slightly different networking stack than GCE/GKE.

    When accessing endpoints that require “IP whitelisting” (such as Cloud Memorystore, or something on your corporate network) from Cloud Run, you can’t easily have static IPs for your Cloud Run applications, because you can’t yet configure Cloud NAT or Serverless VPC Access for Cloud Run.

    Until we support these on Cloud Run, I want to share a workaround (with example code) that involves routing the egress traffic of a Cloud Run application through a GCE instance with a static IP address. Read more →

  • Cloud Run: multiple processes in a container (the lazy way)

    23 July 2019

    Many Google Cloud Run users are starting to develop containers for the first time, but often they are migrating their existing applications. Sometimes, these apps aren’t designed as microservices that fit the one-process-per-container model, and require multiple server processes running together in a container.

    Often you will hear “running multiple processes in a container is bad”, although nothing is wrong with doing so, as I explained in my previous article comparing init systems optimized for containers.

    In this article, I’ll show a not super production-ready (hence “the lazy way”) but working solution for running multi-process containers on Cloud Run, and will provide example code. Read more →

  • Choosing an init process for multi-process containers

    15 July 2019

    If you are developing containers, you must have heard the “single process per container” mantra. Inherently, there’s nothing wrong with running multiple processes in a container, as long as your ENTRYPOINT is a proper init process. Some use cases include processes that aid each other (such as a sidecar proxy process) or ported legacy applications.

    Recently, I had to spawn a sidecar process inside a container. Docker’s own tutorial for running multiple processes in a container is a good place to start, but not production-ready. So I outsourced my quest on Twitter to find an init replacement that can:

    1. run multiple child processes, but not restart them
    2. exit as soon as a child process terminates (there’s no point in restarting child processes; let the container crash so it’s restarted by Docker or Kubernetes)
    3. fulfill PID 1 (init process) responsibilities like zombie child reaping and signal forwarding.

    In this article, I explore the pros and cons of options like supervisord, runit, monit, tini/dumb-init, s6 (audience favorite), and the tini + bash 4.x combo (personal favorite). Read more →
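As an illustration of requirements 1 and 2, here is a minimal sketch of the tini + bash combo. The `sleep` commands below are stand-ins for real server processes, and the script assumes bash 4.3+ for `wait -n`:

```shell
#!/usr/bin/env bash
# Run under tini so PID 1 duties (zombie reaping, signal forwarding,
# requirement 3) are handled: ENTRYPOINT ["tini", "--", "/entrypoint.sh"]
sleep 1 &     # stand-in for server process A
sleep 60 &    # stand-in for server process B
wait -n       # bash 4.3+: returns as soon as the FIRST child exits
echo "a child exited; terminating container"
```

When the script exits, the container dies with it, and Docker or Kubernetes restarts the whole container rather than an init system restarting individual children.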

  • ««
  • «
  • 2
  • 3
  • 4
  • 5
  • 6
  • »
  • »»

About the Author

I'm a software engineer on LinkedIn's Kubernetes-based compute infrastructure team. I enjoy building tools to orchestrate large-scale server fleets and love digging deep into the Kubernetes and containers space. In my spare time, I maintain several tools in the Kubernetes open source ecosystem.
