Google Cloud Container Builder was announced last month and I have been using it ever since. It has a few features that I really love but that have gone largely unhighlighted. I wrote a short testimonial on Hacker News when it came out, so I am going to elaborate on that here.

Here’s a short list of the cool features:

If you distribute your apps as containers, you don’t need to host a Jenkins instance or a third-party CI service to build/push your images. Cloud Container Builder not only does this, it can run arbitrary build steps to deploy your application, too.

Disclaimer: I work for Google Cloud, but not on the Container Builder.

Dockerfile support

If your source repository or directory has a Dockerfile, Container Builder can build it, period. You have two options to use this feature: either go to the Google Cloud Console and use the UI to import your repository, or build in the cloud with the gcloud command-line tool.

So instead of your favorite command:

docker build -t gcr.io/project-id/app:v1 .

you can run:

gcloud container builds submit --tag gcr.io/project-id/app:v1 .

gcloud packages your source directory and builds it in the cloud by executing the following steps:

  • compresses the source directory as a tarball (.tgz)
  • uploads the source package to Google Cloud Storage
  • starts a build request on Cloud Container Builder
  • streams the build logs to the user’s console
  • tags the built images
  • pushes the images to the gcr.io registry

If the steps above don’t mean much to you, here is the takeaway: you do not need Docker on your machine to build and push Docker images. Here’s a video of this:

GitHub/Bitbucket integration

I host all my stuff in private GitHub repositories, including my blog. It took me less than 60 seconds to set up a build for one of my GitHub repositories. Google Cloud Console has a GitHub integration, so importing your repos is super easy: you just authorize the app to access your repos, then choose the repo to build.

If your repo has a Dockerfile, you don’t need to write a .travis.yml or circle.yml sort of thing.

Multi-step customizable builds

A single Dockerfile doesn’t always get you what you need. Sometimes you need to execute multiple steps to build your image; I suspect this is why a lot of people still use Jenkins.

Container Builder addresses this problem elegantly by providing an option to customize build steps and environments.

Here is what you need to know:

  1. you can customize build steps with a cloudbuild.{yaml,json} file
  2. each build step runs in a container
  3. you can bring your own container images (or use a Google-provided one)
  4. your repository is mounted at /workspace
  5. the contents of /workspace are preserved from one build step to the next (see the sketch below)

This is exactly like Jenkins build pipelines, where your artifacts are stashed/unstashed from one stage of the pipeline to the next.
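As a minimal sketch of how this works: the first step writes a file into /workspace, and the second step, running in a separate container, can still read it. The ubuntu step image and the greeting.txt file are just assumptions for the demo.

steps:
# Step 1 runs in an ubuntu container and writes a file into /workspace ...
- name: 'ubuntu'
  args: ['bash', '-c', 'echo hello > /workspace/greeting.txt']
# ... and step 2, a fresh container, still sees that file.
- name: 'ubuntu'
  args: ['cat', '/workspace/greeting.txt']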

To illustrate this better, assume you have an example application:

  • Step 1: (image=golang) compile your Go application
  • Step 2: (image=binary-signing) sign your application binaries
  • Step 3: (image=docker) package your binary as a Docker image

then Container Builder automatically pushes the tagged image to GCR. A sketch of a cloudbuild.yaml for a pipeline like this follows below; you can see all sorts of examples of this in the cloud-builders repository.
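Here is a rough sketch of what such a cloudbuild.yaml could look like, assuming the Go and Docker step images from the cloud-builders repository. The binary-signing image and the exact builder arguments are made up for illustration; $PROJECT_ID is substituted by Container Builder at build time.

steps:
# Step 1: compile the Go application inside the Go builder image.
- name: 'gcr.io/cloud-builders/go'
  args: ['build', '-o', 'app', '.']
# Step 2: sign the binary with a hypothetical signing image of your own.
- name: 'gcr.io/my-project/binary-signing'
  args: ['sign', 'app']
# Step 3: package the signed binary into a Docker image.
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/app:v1', '.']
# Images listed here are pushed to GCR automatically when the build succeeds.
images: ['gcr.io/$PROJECT_ID/app:v1']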

But before I finish this section, a few more cool things:

  • You don’t need to build containers with Cloud Builder: For example, I wrote a tutorial on how I publish this blog to Google Cloud Storage using Container Builder. I don’t build any containers to do that; my customized build step just runs gsutil rsync to upload my blog. Once you realize this, you can basically use it for anything, such as running tests or making deployments.

  • You can parallelize build steps: Steps you list in the cloudbuild.{yaml,json} file can be executed in parallel. I have this repo where I compile and package 3 different Go binaries in parallel, and it saves a lot of build time (see the sketch after this list).

  • Customizable build notifications: See this tutorial for how you can use Google Cloud Functions and Pub/Sub to deliver the status of your builds to a Slack channel.
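Here is a sketch combining the first two points: two Go binaries are compiled in parallel and then uploaded with gsutil, with no container image being built at all. The package paths, step ids, and bucket name are made up for illustration; waitFor: ['-'] tells Container Builder that a step does not need to wait for the preceding steps, which is what lets the two compile steps run concurrently.

steps:
# Both compile steps start immediately and run in parallel.
- id: 'build-foo'
  name: 'gcr.io/cloud-builders/go'
  args: ['build', '-o', 'bin/foo', './cmd/foo']
  waitFor: ['-']
- id: 'build-bar'
  name: 'gcr.io/cloud-builders/go'
  args: ['build', '-o', 'bin/bar', './cmd/bar']
  waitFor: ['-']
# No container image is built here: the last step just uploads the binaries.
- name: 'gcr.io/cloud-builders/gsutil'
  args: ['-m', 'rsync', '-r', 'bin', 'gs://my-bucket/bin']
  waitFor: ['build-foo', 'build-bar']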

Container Builder is faaaast!

You need to understand that Google has a ton of compute and network power and does not mind allocating that to stuff like Container Builder. The machines running your builds on Container Builder are really fast in terms of CPU, I/O and network.

As you can read in my CircleCI vs Container Builder testimonial, I have seen my 3m30s build come down to 1m10s (that’s 3x faster) when I switched to Container Builder.

Here are some reasons why Google Container Builder is faster:

  • Your source code is mirrored to Google Cloud, so fetching source is faster.
  • Google allocates a lot of network bandwidth to build machines, so your base Docker image is pulled quickly (and popular base images like golang and python are already cached on the build machines!)
  • Google allocates a lot of compute power to build machines, so compilation is faster than it is on a third-party CI service’s free tier (or even its paid plans).
  • The built image is pushed back to Google Container Registry faster, because traffic that stays inside Google’s network is much faster than traffic coming in from outside.

That said, currently you get 2 hours of build time for free and additional build minutes are subject to charge. I expect most small projects to be happy with the free tier.


I am really impressed by Container Builder’s simple build mode. It works out of the box if you just build with Dockerfiles. Yet, if you want to customize, it provides a clean and modular way to do that as well. I already use it to build and deploy my applications and publish my blog.

Definitely check it out and read the docs if you are interested.