It has been over two years since we announced Knative. While the project and its community are going strong, I think some mistakes we made in the early positioning and messaging of Knative prevented the project from becoming a go-to add-on for Kubernetes that is widely adopted.

Because I have never been a decision-maker for the Knative project and its messaging at Google, I can offer an outsider’s perspective, despite having worked on different aspects of Knative during this time. There are many reasons (internal to Google) why Knative was shaped the way it is, and I won’t go into those here.

I strongly believe that with the right positioning and marketing, Knative could have been a tool with high adoption, installed naturally on most Kubernetes clusters. I still think there is no reason not to install Knative Serving into a cluster where you run services: it is a fabulous abstraction layer that simplifies a dozen concerns every service operator deals with.

Knative: Which one?

The first version of Knative came with three parts: Serving, Eventing, and Build. These may sound like three orthogonal concerns, and they really were. Knative Build was the first part to be split out (and became the Tekton project).

It’s worth noting that “serving” is a necessary component for doing serverless, people do “eventing” in serverless environments, and the two shared some core logic. But beyond that, they have nothing in common.

To this day, Knative is still both Serving and Eventing. This creates confusion that likely hurt adoption decisions, because the project really does two things, not one. It’s perfectly normal for a developer trying to learn about Knative to ask questions like “do I have to use both?” and “can I install them separately?”, and end up not using the project due to the perceived complexity.

I think Knative should have been just the Serving component. It would have had a strong brand and a clear message, something like “an add-on for better microservices networking on Kubernetes.”

After all, it brings a ton of features that developers actually care about: request-based autoscaling, concurrency controls, load shedding, reproducible deployments (revisions) with traffic-splitting abilities, rollbacks, out-of-the-box telemetry, scale-to-zero, and so on.
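To make a few of these concrete, here is a sketch of how they surface on a single Knative Service object (the service name, image, and revision names below are hypothetical; the annotations and fields are from the serving.knative.dev/v1 API):

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: checkout            # hypothetical service name
spec:
  template:
    metadata:
      annotations:
        # Request-based autoscaling: target 50 in-flight requests per pod,
        # and allow scaling all the way down to zero when idle.
        autoscaling.knative.dev/target: "50"
        autoscaling.knative.dev/min-scale: "0"
    spec:
      # Hard cap on concurrent requests per container instance.
      containerConcurrency: 100
      containers:
        - image: example.com/checkout:v2   # hypothetical image
  traffic:
    # Gradual rollout: split traffic between two immutable revisions;
    # rolling back is just editing these percentages.
    - revisionName: checkout-v1
      percent: 90
    - revisionName: checkout-v2
      percent: 10
```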

Knative: Is it for me?

Our initial announcement and the early copy on the project website kept saying Knative provides “building blocks for Kubernetes”. While this is technically correct, I think we overestimated how many people on the planet want to build a Heroku-like PaaS layer on top of Knative (for one, we haven’t built and offered such a thing to our customers).

In reality, there are perhaps a couple dozen companies that would build their own Kubernetes-based PaaS internally using Knative (S&P is one example of adopting Knative to simplify their internal stack, and Scaleway builds a public FaaS offering on top of Knative). Yet our messaging revolved around these “platform engineers” or operators who could take Knative and build their own UI/CLI/… experience on top. This was the target audience for those building blocks Knative had to offer. It turned out to be a very small and niche audience.

Yet for every platform builder we targeted, there are at least a dozen Kubernetes users who have no interest in building an internal PaaS and who could totally use Knative Serving to have an easier time running apps on Kubernetes. We just didn’t aim for them directly.

Knative Serving is a lot more useful beyond just being a “building block”: I think it is the missing serving layer for running microservices on Kubernetes. It could have been the first thing people installed after creating a cluster.

If all you are doing on Kubernetes is running services, you can use just the Knative Service API and touch pretty much nothing else in Kubernetes. We accidentally1 created a “single point of interaction” CRD API that worked and saved you from having to create 3-5 Kubernetes objects just to run a service behind a load balancer, but we didn’t talk about it at all. As one of the founders, Evan, puts it: "[it would have been] better to be more crisp and less ambiguous, especially when ambiguity provided space for FUD".
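For illustration, this single object is a complete deployment; Knative expands it into the revision, routing, and autoscaling machinery that would otherwise require you to hand-manage a Deployment, a Service, an HPA, and an Ingress (the image below is Knative’s public hello-world sample):

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello
spec:
  template:
    spec:
      containers:
        - image: gcr.io/knative-samples/helloworld-go
```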

Knative: The unfortunate couplings

Something we were never able to communicate properly is that Knative is a very lightweight serving layer that doesn’t take up much space in your cluster and doesn’t eat up your resources. Heck, it can even run on a Raspberry Pi-based cluster. But there is a perceived complexity and large-footprint requirement that comes with Knative, and it is not unwarranted: for the longest time, Knative required Istio2, and Istio itself has been infamous for installing 50+ CRDs (APIs) into your cluster. Istio’s installation manifests were never meant to be readable to the human eye.

The Istio coupling meant our potential users (who were probably already hesitant about installing yet another component into their cluster) now faced installing half a dozen new components and CRD configs just to get Istio in place. That is a hard sell for cluster admins who want to know precisely what goes into their cluster.

We eventually fixed this over time3, but by then the first ten thousand eyes had already evaluated Knative and seen that it required Istio; perhaps the damage was done. Now that everything is clear in hindsight, the right way to go about this probably would have been to ship Knative with its own ingress from the start (Knative still offers the Istio integration).
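For what it’s worth, once the gateways became pluggable, swapping Istio out became a one-line change in Knative Serving’s networking ConfigMap. The sketch below selects Kourier, the lightweight ingress the project later provided:

```yaml
# Point Knative Serving at the Kourier ingress instead of Istio.
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-network
  namespace: knative-serving
data:
  ingress-class: kourier.ingress.networking.knative.dev
```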

Conclusion

I think Knative wanted to be successful in many areas at the same time and make many stakeholders happy (especially internally at Google). This worked to some extent, but we never invested in developer marketing to show the world that Knative could be a natural add-on to any Kubernetes cluster serving HTTP traffic.

As always, everything is clearer in hindsight. Thankfully, the Knative project is in a great place and the community is going strong. Maybe someday more people, en masse, will see the parts of Knative that are useful to them and adopt it. Regardless, it’s the friends we made along the way that mattered anyway.


This opinion piece was initially shared internally at Google. Thanks to everyone who read drafts of this, expressed their opinions, and gave historical context.


  1. My bad; not so accidental, actually. @steren corrected me that there were UX research and proposals behind the Service abstraction. ↩︎

  2. Funny enough, we never actually needed a full mesh for Knative (we just needed an L7 gateway, and we picked Istio because it was developed at Google), but the users we scared off definitely did not know that. ↩︎

  3. First by offering a slimmer Istio installation, and eventually by decoupling Istio and making the gateways pluggable. ↩︎