BaoYue reading digest
Keywords:
Kubernetes v1.31, Elli, new features, stable, beta
Summary:
This article covers the release of Kubernetes v1.31, including its theme "Elli" and the community spirit it represents. It surveys the new stable, beta, and alpha features, such as AppArmor support graduating to stable and the nftables backend entering beta, and highlights community contributions and the project's continued growth.
Main points:
– Kubernetes v1.31 released
– The theme is "Elli", a cute and joyful dog that symbolizes the community spirit.
– After ten years, the project keeps evolving and moving in new directions.
– Stable features
– AppArmor support is stable.
– kube-proxy improves ingress connectivity reliability.
– The persistent volume last phase transition time feature is stable.
– Beta features
– nftables backend for kube-proxy.
– Changes to the reclaim policy for persistent volumes.
– Bound service account token improvements.
– Support for multiple Service CIDRs.
– Traffic distribution for Services.
– Kubernetes VolumeAttributesClass.
– Alpha features
– New DRA APIs for better management of accelerators and other hardware.
Article URL: https://kubernetes.io/blog/2024/08/13/kubernetes-v1-31-release/
Source: kubernetes.io
Author: Kubernetes Blog
Published: 2024-08-13 00:00
Language: English
Word count: 4,125
Estimated reading time: 17 minutes
Score: 88/100
Tags: Kubernetes, Release, Container Orchestration, AppArmor, kube-proxy
The original article follows.
Kubernetes v1.31: Elli
Editors: Matteo Bianchi, Yigit Demirbas, Abigail McCarthy, Edith Puclla, Rashan Smith
Announcing the release of Kubernetes v1.31: Elli!
Similar to previous releases, the release of Kubernetes v1.31 introduces new stable, beta, and alpha features. The consistent delivery of high-quality releases underscores the strength of our development cycle and the vibrant support from our community. This release consists of 45 enhancements. Of those enhancements, 11 have graduated to Stable, 22 are entering Beta, and 12 have graduated to Alpha.
Release theme and logo
The Kubernetes v1.31 Release Theme is “Elli”.
Kubernetes v1.31’s Elli is a cute and joyful dog, with a heart of gold and a nice sailor’s cap, as a playful wink to the huge and diverse family of Kubernetes contributors.
Kubernetes v1.31 marks the first release after the project has successfully celebrated its first 10 years. Kubernetes has come a very long way since its inception, and it's still moving towards exciting new directions with each release. After 10 years, it is awe-inspiring to reflect on the effort, dedication, skill, wit and tiring work of the countless Kubernetes contributors who have made this a reality.
And yet, despite the herculean effort needed to run the project, there is no shortage of people who show up, time and again, with enthusiasm, smiles and a sense of pride for contributing and being part of the community. This "spirit" that we see from new and old contributors alike is the sign of a vibrant community, a "joyful" community, if we might call it that.
Kubernetes v1.31’s Elli is all about celebrating this wonderful spirit! Here’s to the next decade of Kubernetes!
Highlights of features graduating to Stable
This is a selection of some of the improvements that are now stable following the v1.31 release.
AppArmor support is now stable
Kubernetes support for AppArmor is now GA. Protect your containers using AppArmor by setting the `appArmorProfile.type` field in the container's `securityContext`. Note that before Kubernetes v1.30, AppArmor was controlled via annotations; starting in v1.30 it is controlled using fields. It is recommended that you migrate away from using annotations and start using the `appArmorProfile.type` field.

To learn more read the AppArmor tutorial. This work was done as a part of KEP #24, by SIG Node.
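As a minimal sketch (the Pod name and image are placeholders), a Pod that runs its container under the runtime's default AppArmor profile sets the field like this:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: apparmor-demo            # hypothetical name
spec:
  containers:
    - name: app
      image: registry.k8s.io/pause:3.9
      securityContext:
        appArmorProfile:
          type: RuntimeDefault   # or Localhost (with localhostProfile), or Unconfined
```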
Improved ingress connectivity reliability for kube-proxy
Kube-proxy's improved ingress connectivity reliability is stable in v1.31. One of the common problems with load balancers in Kubernetes is the synchronization between the different components involved, needed to avoid traffic drops. This feature implements a mechanism in kube-proxy for load balancers to do connection draining for terminating Nodes exposed by Services with `type: LoadBalancer` and `externalTrafficPolicy: Cluster`, and establishes some best practices for cloud providers and Kubernetes load balancer implementations.

To use this feature, kube-proxy needs to run as the default service proxy on the cluster and the load balancer needs to support connection draining. There are no specific changes required to use this feature; it has been enabled by default in kube-proxy since v1.30 and was promoted to stable in v1.31.
For more details about this feature please visit the Virtual IPs and Service Proxies documentation page.
This work was done as part of KEP #3836 by SIG Network.
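Connection draining applies to Services like the following sketch (names and ports are illustrative); no new fields are needed, the behaviour comes from kube-proxy itself:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web                          # hypothetical name
spec:
  type: LoadBalancer
  externalTrafficPolicy: Cluster     # draining applies to this combination
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 8080
```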
Persistent Volume last phase transition time
The Persistent Volume last phase transition time feature moved to GA in v1.31. This feature adds a field to `PersistentVolumeStatus` which holds a timestamp of when a PersistentVolume last transitioned to a different phase. With this feature enabled, every PersistentVolume object has a new field, `.status.lastPhaseTransitionTime`, that holds a timestamp of when the volume last transitioned its phase. This change is not immediate; the new field will be populated whenever a PersistentVolume is updated and first transitions between phases (`Pending`, `Bound`, or `Released`) after upgrading to Kubernetes v1.31. This allows you to measure the time between when a PersistentVolume moves from `Pending` to `Bound`, which can also be useful for providing metrics and SLOs.
For more details about this feature please visit the PersistentVolume documentation page.
This work was done as a part of KEP #3762 by SIG Storage.
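For example, after a volume binds you would see something like the following in its status (the timestamp is invented for illustration; in the v1 API the field is spelled `lastPhaseTransitionTime`):

```yaml
# Excerpt of `kubectl get pv <name> -o yaml`
status:
  phase: Bound
  lastPhaseTransitionTime: "2024-08-13T10:21:33Z"   # illustrative value
```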
Highlights of features graduating to Beta
This is a selection of some of the improvements that are now beta following the v1.31 release.
nftables backend for kube-proxy
The nftables backend moves to beta in v1.31, behind the `NFTablesProxyMode` feature gate which is now enabled by default.

The nftables API is the successor to the iptables API and is designed to provide better performance and scalability than iptables. The `nftables` proxy mode is able to process changes to service endpoints faster and more efficiently than the `iptables` mode, and is also able to more efficiently process packets in the kernel (though this only becomes noticeable in clusters with tens of thousands of services).

As of Kubernetes v1.31, the `nftables` mode is still relatively new, and may not be compatible with all network plugins; consult the documentation for your network plugin. This proxy mode is only available on Linux nodes, and requires kernel 5.13 or later. Before migrating, note that some features, especially around NodePort services, are not implemented exactly the same in nftables mode as they are in iptables mode. Check the migration guide to see if you need to override the default configuration.
This work was done as part of KEP #3866 by SIG Network.
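If your cluster meets the kernel and network-plugin requirements, opting in is a one-line change in the kube-proxy configuration (a sketch; how you deliver this config depends on your deployment method, e.g. the kube-proxy ConfigMap with kubeadm):

```yaml
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "nftables"
```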
Changes to reclaim policy for PersistentVolumes
The Always Honor PersistentVolume Reclaim Policy feature has advanced to beta in Kubernetes v1.31.This enhancement ensures that the PersistentVolume (PV) reclaim policy is respected even after the associated PersistentVolumeClaim (PVC) is deleted, thereby preventing the leakage of volumes.
Prior to this feature, the reclaim policy linked to a PV could be disregarded under specific conditions, depending on whether the PV or PVC was deleted first.Consequently, the corresponding storage resource in the external infrastructure might not be removed, even if the reclaim policy was set to “Delete”.This led to potential inconsistencies and resource leaks.
With the introduction of this feature, Kubernetes now guarantees that the “Delete” reclaim policy will be enforced, ensuring the deletion of the underlying storage object from the backend infrastructure, regardless of the deletion sequence of the PV and PVC.
This work was done as a part of KEP #2644 by SIG Storage.
Bound service account token improvements
The `ServiceAccountTokenNodeBinding` feature is promoted to beta in v1.31. This feature allows requesting a token bound only to a node, not to a pod; the token includes node information in its claims, and the node's existence is validated when the token is used. For more information, read the bound service account tokens documentation.
This work was done as part of KEP #4193 by SIG Auth.
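As a sketch of the API shape (audience and node name are placeholders), a node-bound token is requested through the TokenRequest API — submitted against the service account's `token` subresource — with a Node `boundObjectRef`:

```yaml
apiVersion: authentication.k8s.io/v1
kind: TokenRequest
spec:
  audiences:
    - https://kubernetes.default.svc   # illustrative audience
  boundObjectRef:
    apiVersion: v1
    kind: Node
    name: worker-1                     # placeholder node name
```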
Multiple Service CIDRs
Support for clusters with multiple Service CIDRs moves to beta in v1.31 (disabled by default).
There are multiple components in a Kubernetes cluster that consume IP addresses: Nodes, Pods and Services. Node and Pod IP ranges can be changed dynamically because they depend on the infrastructure or the network plugin respectively. However, Service IP ranges are defined during cluster creation as a hardcoded flag on the kube-apiserver. IP exhaustion has been a problem for long-lived or large clusters, as admins needed to expand, shrink or even entirely replace the assigned Service CIDR range. These operations were never supported natively and were performed via complex and delicate maintenance operations, often causing downtime on clusters. This new feature allows users and cluster admins to dynamically modify Service CIDR ranges with zero downtime.
For more details about this feature please visit the Virtual IPs and Service Proxies documentation page.
This work was done as part of KEP #1880 by SIG Network.
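As a sketch, an additional Service IP range can be added by creating a ServiceCIDR object (the name and CIDR value are placeholders):

```yaml
apiVersion: networking.k8s.io/v1beta1
kind: ServiceCIDR
metadata:
  name: extra-service-cidr   # hypothetical name
spec:
  cidrs:
    - 10.96.64.0/18          # placeholder range to extend the Service IP space
```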
Traffic distribution for Services
Traffic distribution for Services moves to beta in v1.31 and is enabled by default.
After several iterations on finding the best user experience and traffic engineering capabilities for Services networking, SIG Networking implemented the `trafficDistribution` field in the Service specification, which serves as a guideline for the underlying implementation to consider while making routing decisions.
For more details about this feature please read the 1.30 Release Blog or visit the Service documentation page.
This work was done as part of KEP #4444 by SIG Network.
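For example, a Service that asks the implementation to prefer topologically close endpoints (the name is a placeholder) sets the field like this:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web                          # hypothetical name
spec:
  selector:
    app: web
  ports:
    - port: 80
  trafficDistribution: PreferClose   # hint: prefer endpoints in the same zone
```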
Kubernetes VolumeAttributesClass ModifyVolume
The VolumeAttributesClass API is moving to beta in v1.31. VolumeAttributesClass provides a generic, Kubernetes-native API for dynamically modifying volume parameters like provisioned IO. This allows workloads to vertically scale their volumes online to balance cost and performance, if supported by their provider. This feature had been alpha since Kubernetes v1.29.
This work was done as a part of KEP #3751, led by SIG Storage.
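A minimal sketch, assuming a CSI driver that understands the parameters (the class name, driver name, and parameter keys are all illustrative):

```yaml
apiVersion: storage.k8s.io/v1beta1
kind: VolumeAttributesClass
metadata:
  name: fast-io                        # hypothetical name
driverName: example.csi.vendor.com     # placeholder CSI driver
parameters:
  provisioned-iops: "3000"             # driver-specific key, illustrative
---
# A PVC opts in (or is later patched) via volumeAttributesClassName:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Gi
  volumeAttributesClassName: fast-io
```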
New features in Alpha
This is a selection of some of the improvements that are now alpha following the v1.31 release.
New DRA APIs for better accelerators and other hardware management
Kubernetes v1.31 brings an updated dynamic resource allocation (DRA) API and design. The main focus of the update is on structured parameters, because they make resource information and requests transparent to Kubernetes and clients, and enable implementing features like cluster autoscaling. DRA support in the kubelet was updated such that version skew between the kubelet and the control plane is possible. With structured parameters, the scheduler allocates ResourceClaims while scheduling a pod. Allocation by a DRA driver controller is still supported through what is now called "classic DRA".

With Kubernetes v1.31, classic DRA has a separate feature gate named `DRAControlPlaneController`, which you need to enable explicitly. With such a control plane controller, a DRA driver can implement allocation policies that are not yet supported through structured parameters.
This work was done as part of KEP #3063 by SIG Node.
Support for image volumes
The Kubernetes community is moving towards fulfilling more Artificial Intelligence (AI) and Machine Learning (ML) use cases in the future.
One of the requirements to fulfill these use cases is to support Open Container Initiative (OCI) compatible images and artifacts (referred to as OCI objects) directly as a native volume source. This allows users to focus on OCI standards and enables them to store and distribute any content using OCI registries.

Given that, v1.31 adds a new alpha feature that allows using an OCI image as a volume in a Pod. This feature allows users to specify an image reference as a volume in a pod and reuse it as a volume mount within containers. You need to enable the `ImageVolume` feature gate to try this out.
This work was done as part of KEP #4639 by SIG Node and SIG Storage.
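With the `ImageVolume` feature gate enabled, a Pod mounting an OCI artifact could look like this sketch (the Pod name and image references are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: image-volume-demo     # hypothetical name
spec:
  containers:
    - name: app
      image: registry.k8s.io/pause:3.9
      volumeMounts:
        - name: model
          mountPath: /models
          readOnly: true
  volumes:
    - name: model
      image:
        reference: quay.io/example/model-artifact:v1   # placeholder OCI reference
        pullPolicy: IfNotPresent
```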
Exposing device health information through Pod status
Exposing device health information through the Pod status is added as a new alpha feature in v1.31, disabled by default.

Before Kubernetes v1.31, the way to find out whether a Pod was associated with a failed device was to use the PodResources API.

By enabling this feature, the field `allocatedResourcesStatus` is added to each container status, within the `.status` for each Pod. The `allocatedResourcesStatus` field reports health information for each device assigned to the container.
This work was done as part of KEP #4680 by SIG Node.
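The reported shape looks roughly like the following status excerpt (the resource name, device ID, and health value are illustrative):

```yaml
# Excerpt of a Pod's .status.containerStatuses[*]
allocatedResourcesStatus:
  - name: vendor.example/gpu        # placeholder device-plugin resource name
    resources:
      - resourceID: GPU-0a1b2c      # illustrative device ID
        health: Unhealthy           # e.g. Healthy or Unhealthy
```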
Finer-grained authorization based on selectors
This feature allows webhook authorizers and future (but not currently designed) in-tree authorizers to allow list and watch requests, provided those requests use label and/or field selectors. For example, it is now possible for an authorizer to express: this user cannot list all pods, but can list all pods where `.spec.nodeName` matches some specific value. Or to allow a user to watch all Secrets in a namespace that are not labelled as `confidential: true`. Combined with CRD field selectors (also moving to beta in v1.31), it is possible to write more secure per-node extensions.
This work was done as part of KEP #4601 by SIG Auth.
Restrictions on anonymous API access
By enabling the `AnonymousAuthConfigurableEndpoints` feature gate, users can now use the authentication configuration file to configure the endpoints that can be accessed by anonymous requests. This allows users to protect themselves against RBAC misconfigurations that can give anonymous users broad access to the cluster.
This work was done as a part of KEP #4633 by SIG Auth.
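A sketch of such an authentication configuration, restricting anonymous access to the health endpoints only (the exact paths you allow are up to you):

```yaml
apiVersion: apiserver.config.k8s.io/v1beta1
kind: AuthenticationConfiguration
anonymous:
  enabled: true
  conditions:          # anonymous requests are allowed only for these paths
    - path: /livez
    - path: /readyz
    - path: /healthz
```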
Graduations, deprecations, and removals in 1.31
Graduations to Stable
This lists all the features that graduated to stable (also known as general availability). For a full list of updates including new features and graduations from alpha to beta, see the release notes.
This release includes a total of 11 enhancements promoted to Stable:
Deprecations and Removals
As Kubernetes develops and matures, features may be deprecated, removed, or replaced with better ones for the project's overall health. See the Kubernetes deprecation and removal policy for more details on this process.
Cgroup v1 enters the maintenance mode
As Kubernetes continues to evolve and adapt to the changing landscape of container orchestration, the community has decided to move cgroup v1 support into maintenance mode in v1.31. This shift aligns with the broader industry's move towards cgroup v2, which offers improved functionality, scalability, and a more consistent interface. Maintenance mode means that no new features will be added to cgroup v1 support. Critical security fixes will still be provided; however, bug fixing is now best-effort, meaning major bugs may be fixed if feasible, but some issues might remain unresolved.
It is recommended that you start switching to cgroup v2 as soon as possible. This transition depends on your architecture, and includes ensuring that the underlying operating systems and container runtimes support cgroup v2, and testing that workloads and applications function correctly with cgroup v2.
Please report any problems you encounter by filing an issue.
This work was done as part of KEP #4569 by SIG Node.
A note about SHA-1 signature support
In go1.18 (released in March 2022), the crypto/x509 library started to reject certificates signed with a SHA-1 hash function. While SHA-1 is established to be unsafe and publicly trusted Certificate Authorities have not issued SHA-1 certificates since 2015, there might still be cases in the context of Kubernetes where user-provided certificates are signed using a SHA-1 hash function through private authorities, with them being used for Aggregated API Servers or webhooks. If you have relied on SHA-1 based certificates, you must explicitly opt back into its support by setting `GODEBUG=x509sha1=1` in your environment.

Given Go's compatibility policy for GODEBUGs, the `x509sha1` GODEBUG and the support for SHA-1 certificates will fully go away in go1.24, which will be released in the first half of 2025. If you rely on SHA-1 certificates, please start moving off them.
Please see Kubernetes issue #125689 to get a better idea of timelines around the support for SHA-1 going away, when Kubernetes releases plans to adopt go1.24, and for more details on how to detect usage of SHA-1 certificates via metrics and audit logging.
Deprecation of status.nodeInfo.kubeProxyVersion field for Nodes
The `.status.nodeInfo.kubeProxyVersion` field of Nodes has been deprecated in Kubernetes v1.31, and will be removed in a later release. It's being deprecated because the value of this field wasn't (and isn't) accurate. This field is set by the kubelet, which does not have reliable information about the kube-proxy version or whether kube-proxy is running.

The `DisableNodeKubeProxyVersion` feature gate is set to `true` by default in v1.31, and the kubelet will no longer attempt to set the `.status.kubeProxyVersion` field for its associated Node.
Removal of all in-tree integrations with cloud providers
As highlighted in a previous article, the last remaining in-tree support for cloud provider integration has been removed as part of the v1.31 release. This doesn't mean you can't integrate with a cloud provider; however, you now must use the recommended approach of an external integration. Some integrations are part of the Kubernetes project and others are third-party software.
This milestone marks the completion of the externalization process for all cloud providers' integrations from the Kubernetes core (KEP-2395), a process started with Kubernetes v1.26. This change helps Kubernetes to get closer to being a truly vendor-neutral platform.
For further details on the cloud provider integrations, read our v1.29 Cloud Provider Integrations feature blog. For additional context about the in-tree code removal, we invite you to check the v1.29 deprecation blog.
The latter blog also contains useful information for users who need to migrate to version v1.29 and later.
Removal of in-tree provider feature gates
In Kubernetes v1.31, the following alpha feature gates have been removed: `InTreePluginAWSUnregister`, `InTreePluginAzureDiskUnregister`, `InTreePluginAzureFileUnregister`, `InTreePluginGCEUnregister`, `InTreePluginOpenStackUnregister`, and `InTreePluginvSphereUnregister`. These feature gates were introduced to facilitate the testing of scenarios where in-tree volume plugins were removed from the codebase, without actually removing them. Since Kubernetes v1.30 deprecated these in-tree volume plugins, these feature gates were redundant and no longer served a purpose. The only CSI migration gate still standing is `InTreePluginPortworxUnregister`, which will remain in alpha until the CSI migration for Portworx is completed and its in-tree volume plugin is ready for removal.
Removal of kubelet --keep-terminated-pod-volumes command line flag
The kubelet flag `--keep-terminated-pod-volumes`, which was deprecated in 2017, has been removed as part of the v1.31 release.
You can find more details in the pull request #122082.
Removal of CephFS volume plugin
The CephFS volume plugin was removed in this release and the `cephfs` volume type became non-functional.
It is recommended that you use the CephFS CSI driver as a third-party storage driver instead. If you were using the CephFS volume plugin before upgrading the cluster version to v1.31, you must re-deploy your application to use the new driver.
CephFS volume plugin was formally marked as deprecated in v1.28.
Removal of Ceph RBD volume plugin
The v1.31 release removes the Ceph RBD volume plugin and its CSI migration support, making the `rbd` volume type non-functional.
It’s recommended that you use the RBD CSI driver in your clusters instead.If you were using Ceph RBD volume plugin before upgrading the cluster version to v1.31, you must re-deploy your application to use the new driver.
The Ceph RBD volume plugin was formally marked as deprecated in v1.28.
Deprecation of non-CSI volume limit plugins in kube-scheduler
The v1.31 release deprecates all non-CSI volume limit scheduler plugins, and removes some already-deprecated plugins from the default plugins, including:
AzureDiskLimits
CinderLimits
EBSLimits
GCEPDLimits
It’s recommended that you use the NodeVolumeLimits
plugin instead because it can handle the same functionality as the removed plugins since those volume types have been migrated to CSI.Please replace the deprecated plugins with the NodeVolumeLimits
plugin if you explicitly use them in the scheduler config.The AzureDiskLimits
, CinderLimits
, EBSLimits
, and GCEPDLimits
plugins will be removed in a future release.
These plugins will be removed from the default scheduler plugins list as they have been deprecated since Kubernetes v1.14.
Release notes and upgrade actions required
Check out the full details of the Kubernetes v1.31 release in our release notes.
Scheduler now uses QueueingHint when SchedulerQueueingHints is enabled
Added support to the scheduler to start using a QueueingHint registered for Pod/Updated events, to determine whether updates to previously unschedulable Pods have made them schedulable. The new support is active when the `SchedulerQueueingHints` feature gate is enabled.
Previously, when unschedulable Pods were updated, the scheduler always put them back into a queue (`activeQ`/`backoffQ`). However, not all updates to Pods make them schedulable, especially considering that many scheduling constraints nowadays are immutable. Under the new behaviour, once unschedulable Pods are updated, the scheduling queue checks with the QueueingHint(s) whether the update may make the pod(s) schedulable, and requeues them to `activeQ` or `backoffQ` only when at least one QueueingHint returns `Queue`.
Action required for custom scheduler plugin developers: plugins have to implement a QueueingHint for the Pod/Update event if a rejection from them could be resolved by updating unscheduled Pods themselves. For example, suppose you develop a custom plugin that denies Pods that have a `schedulable=false` label. Given that Pods with a `schedulable=false` label become schedulable once the label is removed, this plugin would implement a QueueingHint for the Pod/Update event that returns `Queue` when such label changes are made to unscheduled Pods. You can find more details in the pull request #122234.
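The decision logic of such a hint can be sketched with simplified stand-in types (the real scheduler framework types live under `k8s.io/kubernetes/pkg/scheduler/framework`; this self-contained example only illustrates the label check, not the framework wiring):

```go
package main

import "fmt"

// QueueingHint mirrors the scheduler framework's hint values (illustrative only).
type QueueingHint int

const (
	QueueSkip QueueingHint = iota // the update cannot make the Pod schedulable
	Queue                         // the update may make the Pod schedulable; requeue it
)

// Pod is a stand-in for *v1.Pod, reduced to labels for this sketch.
type Pod struct {
	Labels map[string]string
}

// isPodUpdateHint decides whether an update to an unschedulable Pod should
// requeue it, for a plugin that rejects Pods labelled schedulable=false:
// requeue only when that label was present before and is gone (or changed) now.
func isPodUpdateHint(oldPod, newPod *Pod) QueueingHint {
	if oldPod.Labels["schedulable"] == "false" && newPod.Labels["schedulable"] != "false" {
		return Queue
	}
	return QueueSkip
}

func main() {
	old := &Pod{Labels: map[string]string{"schedulable": "false"}}
	updated := &Pod{Labels: map[string]string{}}
	fmt.Println(isPodUpdateHint(old, updated) == Queue)
}
```

In a real plugin this function would be registered via `EventsToRegister` as the hint for the Pod/Update cluster event.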
Removal of kubelet --keep-terminated-pod-volumes command line flag
The kubelet flag `--keep-terminated-pod-volumes`, which was deprecated in 2017, was removed as part of the v1.31 release.
You can find more details in the pull request #122082.
Availability
Kubernetes v1.31 is available for download on GitHub or on the Kubernetes download page.
To get started with Kubernetes, check out these interactive tutorials or run local Kubernetes clusters using minikube. You can also easily install v1.31 using kubeadm.
Release team
Kubernetes is only possible with the support, commitment, and hard work of its community. Each release team is made up of dedicated community volunteers who work together to build the many pieces that make up the Kubernetes releases you rely on. This requires the specialized skills of people from all corners of our community, from the code itself to its documentation and project management.
We would like to thank the entire release team for the hours spent hard at work to deliver the Kubernetes v1.31 release to our community. The Release Team's membership ranges from first-time shadows to returning team leads with experience forged over several release cycles. A very special thanks goes out to our release lead, Angelos Kolaitis, for supporting us through a successful release cycle, advocating for us, making sure that we could all contribute in the best way possible, and challenging us to improve the release process.
Project velocity
The CNCF K8s DevStats project aggregates a number of interesting data points related to the velocity of Kubernetes and various sub-projects. This includes everything from individual contributions to the number of companies that are contributing and is an illustration of the depth and breadth of effort that goes into evolving this ecosystem.
In the v1.31 release cycle, which ran for 14 weeks (May 7th to August 13th), we saw contributions to Kubernetes from 113 different companies and 528 individuals.
In the whole Cloud Native ecosystem we have 379 companies counting 2268 total contributors, which means that, with respect to the previous release cycle, we experienced an astounding 63% increase in individuals contributing!
Source for this data:
By contribution we mean when someone makes a commit, code review, comment, creates an issue or PR, reviews a PR (including blogs and documentation) or comments on issues and PRs.
If you are interested in contributing visit this page to get started.
Check out DevStats to learn more about the overall velocity of the Kubernetes project and community.
Event update
Explore the upcoming Kubernetes and cloud-native events from August to November 2024, featuring KubeCon, KCD, and other notable conferences worldwide. Stay informed and engage with the Kubernetes community.
August 2024
September 2024
- KCD Lahore – Pakistan 2024: September 1, 2024 | Lahore, Pakistan
- KuberTENes Birthday Bash Stockholm: September 5, 2024 | Stockholm, Sweden
- KCD Sydney ’24: September 5-6, 2024 | Sydney, Australia
- KCD Washington DC 2024: September 24, 2024 | Washington, DC, United States
- KCD Porto 2024: September 27-28, 2024 | Porto, Portugal
October 2024
November 2024
Upcoming release webinar
Join members of the Kubernetes v1.31 release team on Thursday, September 12th, 2024, at 10am PT to learn about the major features of this release, as well as deprecations and removals to help plan for upgrades. For more information and registration, visit the event page on the CNCF Online Programs site.
Get involved
The simplest way to get involved with Kubernetes is by joining one of the many Special Interest Groups (SIGs) that align with your interests. Have something you'd like to broadcast to the Kubernetes community? Share your voice at our weekly community meeting, and through the channels below. Thank you for your continued feedback and support.