Google Cloud Platform Blog
Product updates, customer stories, and tips and tricks on Google Cloud Platform
Kubernetes 1.11: a look from inside Google
Monday, July 2, 2018
By Craig Box, Cloud Native Advocacy Lead, Google
Congratulations to everyone involved in the recent Kubernetes 1.11 release. Now that the core has been stabilized, we here at Google have been focusing our upstream work on increasing Kubernetes' pluggability, i.e., moving more pieces out into other repositories. As the project has matured, adding a plugin no longer means "sending Tim Hockin a pull request," but instead means creating proper, well-defined interfaces with names like CNI, CRI and CSI. In fact, this maturity and extensibility has been one of the things that helps us make Google Kubernetes Engine an enterprise-ready platform.
Back in March, we gave you a look at what was new in Kubernetes 1.10. Now, with the release of 1.11, let's take a look at the core Kubernetes work that Google is driving, as well as some of the innovation we've built on Kubernetes' foundations in the last three months.
New features in 1.11
Priority and preemption
Pod priority and preemption is one of the main features of our internal scheduling system that lets us achieve high resource utilization in our data centers. We wrote about that key use case when we introduced it in Alpha in Kubernetes 1.9, and since then, we've added improved scheduling performance and better support for critical system pods. Now, we're pleased to move it to Beta in this release, meaning it's enabled by default in Kubernetes Engine clusters that run 1.11. This is a feature that many users who run larger clusters have been waiting for!
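To make the idea concrete, here is a minimal sketch of how the feature is used: a PriorityClass object plus a pod that opts into it by name. The class name, value and image below are illustrative, not defaults from the release.

```yaml
# Illustrative PriorityClass and a pod that references it.
apiVersion: scheduling.k8s.io/v1beta1   # Beta API group in Kubernetes 1.11
kind: PriorityClass
metadata:
  name: high-priority              # hypothetical name
value: 1000000                     # higher values schedule first and can preempt lower ones
globalDefault: false
description: "For latency-sensitive workloads."
---
apiVersion: v1
kind: Pod
metadata:
  name: important-app              # hypothetical name
spec:
  priorityClassName: high-priority # opts this pod into the class above
  containers:
  - name: app
    image: gcr.io/example/app:latest   # placeholder image
```

When the cluster is full, the scheduler may evict lower-priority pods to make room for a pending pod with a higher priority value.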
Changes to CRDs
Custom Resource Definitions (CRDs) are one of the most popular extension mechanisms for Kubernetes, and new features in 1.11 make them even more powerful. CRDs are used for a broad array of Kubernetes extensions, for example to enable the use of Spark or Functions natively through the Kubernetes API.
Kubernetes objects have a schema version (e.g. v1beta1 or v1), but we only ever store one version in the etcd database. When you query an object at a particular version, a server-side conversion is done to convert the object to match the schema of the version you request.
Previously, CRD authors had to delete and recreate resources to move them between different versions. In 1.11, you can now define multiple versions for your own resources. The next step will be to enable server-side conversion for CRDs, to allow for schema changes like renaming fields, without breaking existing clients.
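As a rough sketch (the resource names here are made up), a 1.11 CRD can now list several versions, with exactly one marked as the storage version:

```yaml
# Illustrative CRD serving two versions; only one is persisted to etcd.
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: crontabs.example.com       # hypothetical resource
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: crontabs
    singular: crontab
    kind: CronTab
  versions:
  - name: v1beta1
    served: true      # clients can still read and write the old version
    storage: false
  - name: v1
    served: true
    storage: true     # objects are stored in etcd at this version
```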
Cloud Provider plugins
Google continues to invest in the long-term sustainability and multi-cloud portability of core Kubernetes. The Cloud Provider interface allows infrastructure providers to deliver a "batteries-included" experience for user workloads on their platform, powering common services like dynamic provisioning and management of storage and external load balancing for Services.
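For example, the ordinary Service of type LoadBalancer sketched below (names are illustrative) only works because the cloud provider integration provisions and wires up an external load balancer behind the scenes:

```yaml
# A standard LoadBalancer Service; the cloud provider code fulfils the
# external load balancer that backs it.
apiVersion: v1
kind: Service
metadata:
  name: web-frontend        # hypothetical name
spec:
  type: LoadBalancer        # satisfied by the provider's load balancer controller
  selector:
    app: web                # routes to pods labeled app=web
  ports:
  - port: 80                # external port on the provisioned load balancer
    targetPort: 8080        # container port the traffic is forwarded to
```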
This code is currently compiled into the Kubernetes core binaries. Google is leading a long-running effort to extract this functionality into provider-specific repositories, in order to reduce the scope of the Kubernetes core. This will also allow providers to deliver enhancements and fixes to users more quickly than Kubernetes' three-month release cadence. As part of this effort, we're excited to announce the creation of SIG Cloud Provider to provide technical oversight and governance for this work.
New features not in 1.11
That's not a headline you normally see, right?
One thing that is not in 1.11, not even a bit of it, is Server-side Apply, a feature which moves the logic for kubectl apply from the client to the server, making the expected behavior clearer and allowing more clients to take advantage of server-side processing without shelling out to kubectl.
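For context, today's client-side apply records what you last applied in an annotation on the object and computes a three-way merge locally. The sketch below (the object and values are illustrative) shows that annotation, which is the kind of bookkeeping Server-side Apply aims to move into the API server:

```yaml
# What a client-side `kubectl apply` leaves behind today (illustrative object).
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config                  # hypothetical object
  annotations:
    # kubectl stores the last applied manifest here and uses it, together with
    # the live object and your new manifest, to compute a three-way merge.
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"ConfigMap","metadata":{"name":"app-config"},"data":{"key":"value"}}
data:
  key: value
```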
Normally, a feature like this would be committed to the project as it was built. But if a release is due and the feature isn't ready, a large amount of effort has to go into reverting it. Instead, Google has been leading the effort to introduce feature branches in Kubernetes, which let us work on long-running features in parallel to the main codebase. This lets us avoid last-minute scrambles to adjust for surprises, and is an example of how we are working to ensure the stability of the Kubernetes project.
Work on server-side apply is happening in the open in its feature branch, and we look forward to welcoming it into Kubernetes when it's ready, and not a moment before.
Kubernetes ecosystem work
Our work with Kubernetes doesn't stop at releasing core binaries every three months. Some of the work we are most excited about is in the form of extensions we've released since the last Kubernetes release:
Kustomize
We've thought a lot about how to declaratively manage application configuration. A common pattern that we saw was the use of templating solutions such as Helm (based on Google Cloud's Deployment Manager), which require a user to learn a different configuration language than what the API server returns when you query it. A templating approach also means that if you download a YAML example, you have to turn it into a template before you can use it in your environment.
With kustomize, we're introducing a new approach to application definition. Kustomize allows you to apply overlays to existing YAML configurations, so you can customize a forked repository with your local changes, or define 'staging' and 'production' variants with different settings and replica counts.
Kustomize is well suited for a GitOps-style workflow, where there's a common base configuration that is tweaked in various directions with overlays to create different variants. The base and overlays can be managed by separate teams in different repositories; a sketch of that layout follows.
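As a rough sketch of a base plus a production overlay (directory layout and resource names are illustrative, and kustomize field names have shifted between releases), the two kustomization.yaml files might look like this:

```yaml
# base/kustomization.yaml -- the shared configuration
resources:
- deployment.yaml
- service.yaml
commonLabels:
  app: my-app              # hypothetical label applied to every resource
---
# overlays/production/kustomization.yaml -- the production variant
bases:
- ../../base               # start from the shared base
patchesStrategicMerge:
- replica-count.yaml       # e.g. a patch that raises spec.replicas
namePrefix: prod-          # distinguishes this variant's resources
```

Running kustomize against the overlay directory emits plain Kubernetes YAML with the patches and labels applied, which you can pipe straight to kubectl apply.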
Application API
Applications are made up of many services and resources, but the whole is more than the sum of its parts. Once they are created, there is no well-defined way to tell Kubernetes which parts belong to which application. We want cluster users to be able to think in terms of their applications, and to allow tools and UIs to define, update and display an application-centric view of your cluster.
The new Application API provides a way to aggregate Kubernetes components (e.g. Services, Deployments, StatefulSets, Ingresses, CRDs), and manage them as a group.
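As a loose sketch of the idea (the Alpha schema may differ; the API group, fields and names below are assumptions rather than details from this post), an Application resource groups existing components by label and records which kinds belong to it:

```yaml
# Illustrative Application resource; schema details are assumptions.
apiVersion: app.k8s.io/v1beta1            # assumed API group/version
kind: Application
metadata:
  name: wordpress                         # hypothetical application name
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: wordpress   # components carry this label
  componentKinds:                         # the kinds that make up the application
  - group: apps
    kind: Deployment
  - group: ""
    kind: Service
  descriptor:
    type: wordpress
    version: "4.9"
    description: "Blog and CMS"
```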
We have had contributions from friends at Samsung, Bitnami, Heptio, Red Hat and more, and we are looking for more contributions and feedback to ensure that the project adds value across the community.
The Application API is currently in Alpha. We hope to promote it to Beta in the next few weeks, and you'll hear more about it from us then.
Looking forward to Kubernetes Engine
If you'd like to get access to Kubernetes 1.11 on Kubernetes Engine ahead of general availability, please complete this form.
And if you liked reading this post, you'll love the Kubernetes Podcast from Google, which I co-host with Adam Glick. Every Tuesday we take a look at the week's news and talk with Googlers or members of the wider Kubernetes community. So far we've spoken about product launches, processes and community, and this week we talk to the Kubernetes 1.11 release leads. Subscribe now!