Google Container Registry is probably the easiest to use container image storage solution out there. I want to share some tips and tricks I found out about in the past few months.

GCR 101

GCR is Google Cloud Platform’s private Docker image registry offering. It works with Google Container Engine clusters and Google Compute Engine instances out of the box, without setting up any authentication. Each Google Cloud project gets a registry named gcr.io/{PROJECT_ID}. You can pull/push to GCR as follows:

docker build -t gcr.io/{PROJECT_ID}/{image}:tag .
gcloud docker -- push gcr.io/{PROJECT_ID}/{image}:tag
gcloud docker -- pull gcr.io/{PROJECT_ID}/{image}:tag

This is about all you need to know to use GCR, and that’s the beauty of it.

Push/Pull without gcloud

If you want to avoid the gcloud docker -- prefix and push/pull with the docker CLI directly, you can run the following command to get short-lived access to GCR:

gcloud docker -a

Then you can do a:

docker [push/pull] gcr.io/{PROJECT_ID}/{image}:tag

as usual with the docker CLI directly.

Build/push without docker

What if you don’t have Docker installed on your machine? gcloud still lets you build images (on the cloud) and push to GCR automatically. If you have a Dockerfile, you can directly build/push without docker:

gcloud container builds submit -t gcr.io/{PROJECT_ID}/{image}:tag .

When you run this, the source code is compressed into a tar file, uploaded to a GCS bucket and then Cloud Container Builder builds it and pushes the resulting image to GCR. I blogged about it before.

List/search images

You can list all the container images in your registry with:

gcloud container images list

This command only shows the last few images. So try adding --limit=99999 for a longer list.

Or search with keywords, as:

gcloud container images list --filter=blog

This command can also be used to list images in a public registry, by pointing it at the repository:

gcloud container images list --repository=gcr.io/google-containers

For public registries, you can directly use “docker search”, too:

docker search gcr.io/google-containers/{keyword}

Creating public registries

You can easily make your entire container registry available to the public on GCR. Since GCR is currently built on Google Cloud Storage (GCS), running a few commands to make the storage bucket publicly readable is all you need:

gsutil defacl ch -u AllUsers:R gs://artifacts.{PROJECT_ID}.appspot.com
gsutil acl ch -r -u AllUsers:R gs://artifacts.{PROJECT_ID}.appspot.com
gsutil acl ch -u AllUsers:R gs://artifacts.{PROJECT_ID}.appspot.com

The first command updates the default permission so that future objects are publicly readable. The second command makes the existing objects publicly readable, and the third makes the bucket itself publicly readable. This is documented here.

Note: You will be paying for the egress network costs when other people pull images from your public registry. 

Currently, there is no way to keep the registry private by default and make only individual images public, as far as I know.

Faster pulls from Docker Hub

GCR offers a hosted registry mirror for Docker Hub. If you configure your Docker Engine with --registry-mirror=https://mirror.gcr.io, you can pull Docker Hub images via this mirror. It is faster, and you can insulate yourself from Docker Hub outages even further.
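For example, you could point the Docker daemon at the mirror either with a flag or via its config file (a sketch; the daemon must be restarted for the config-file change to take effect):

```shell
# Option 1: pass the flag when starting the Docker daemon
dockerd --registry-mirror=https://mirror.gcr.io

# Option 2: persist it in /etc/docker/daemon.json, then restart Docker
echo '{ "registry-mirrors": ["https://mirror.gcr.io"] }' \
  | sudo tee /etc/docker/daemon.json
```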

Pulling images directly from mirror.gcr.io is not a supported use case, but you still can, with two caveats:

  1. the image has to be one of the official images on Docker Hub (e.g. library/node)
  2. you can only pull the “:latest” tag.


docker pull mirror.gcr.io/library/node

You can find the list of supported images via:

gcloud container images list --repository=mirror.gcr.io/library

Find out the total registry size

Again, since GCR uses GCS for storage currently, you have visibility into the storage bucket.

Something good you can do with it is to calculate how much storage space your images are taking. Run:

$ gsutil du -hs gs://artifacts.{PROJECT_ID}.appspot.com
781.16 MiB  gs://artifacts.{PROJECT_ID}.appspot.com

It’s not easy to find out total storage used by all tags of a particular image since docker-registry stores all image layers in a single flat directory. You’ll have to write code to do that. But you can find per-tag image sizes on Google Cloud Platform Console → Container Registry section.
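If you do want to script it, one hedged sketch (the helper name is my own; it assumes jq is installed and that you fetch each tag’s v2 manifest from the registry API) is to sum the layer sizes a manifest reports:

```shell
# Sketch: sum the layer sizes listed in a Docker v2 image manifest.
# Fetching a manifest from GCR might look like (not run here):
#   curl -s -u "oauth2accesstoken:$(gcloud auth print-access-token)" \
#     -H 'Accept: application/vnd.docker.distribution.manifest.v2+json' \
#     https://gcr.io/v2/{PROJECT_ID}/{image}/manifests/{tag}
# Caveat: layers shared between tags are counted once per tag, so summing
# over all tags overstates the actual storage used.
sum_layer_sizes() {
  jq '[.layers[].size] | add'   # reads manifest JSON on stdin, prints bytes
}

printf '%s' '{"layers":[{"size":100},{"size":23}]}' | sum_layer_sizes
# → 123
```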

Clean up old images

When your Continuous Integration pipeline keeps building and pushing tens of images every day, you will end up with a lot of images after a while. If you no longer need container images older than a period of time, you can delete them with some help from scripting.

To query images pushed before a certain date (e.g. 2017-04-01 below):

gcloud container images list-tags gcr.io/{PROJECT_ID}/{image} \
  --limit=999999 --sort-by=TIMESTAMP \
  --filter="timestamp.datetime < '2017-04-01'"

You can then extract the image digests with the --format flag and pass them to the gcloud command for deletion in a simple for loop:

IMAGE="gcr.io/{PROJECT_ID}/{image}"
DELETE_BEFORE="2017-04-01"
for digest in $(gcloud container images list-tags "${IMAGE}" \
    --limit=999999 --sort-by=TIMESTAMP \
    --filter="timestamp.datetime < '${DELETE_BEFORE}'" \
    --format='get(digest)'); do
  gcloud container images delete -q --force-delete-tags "${IMAGE}@${digest}"
done

I created a better version of this (with proper error checking etc) as a script you can download and use directly:
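That script is not reproduced here, but a rough sketch of the same idea with a bit of structure (the function name and the summary line are my own additions; the gcloud invocations are the ones shown above) might look like:

```shell
# Sketch of a cleanup helper: deletes all images of $1 pushed before $2.
# Assumes gcloud is installed and authenticated.
cleanup_old_images() {
  image="$1"          # e.g. gcr.io/{PROJECT_ID}/{image}
  delete_before="$2"  # e.g. 2017-04-01
  deleted=0
  for digest in $(gcloud container images list-tags "${image}" \
      --limit=999999 --sort-by=TIMESTAMP \
      --filter="timestamp.datetime < '${delete_before}'" \
      --format='get(digest)'); do
    gcloud container images delete -q --force-delete-tags \
      "${image}@${digest}"
    deleted=$((deleted + 1))
  done
  echo "deleted ${deleted} images"
}
```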

Learn more

I hope you enjoyed these tips. If you have more, please let me know in the comments or send an email, and I’ll add it here. Check out the GCR documentation and Cloud Container Builder if you are interested in learning more.

Disclaimer: I work at Google Cloud, but not on the Container Registry product.