Quick Facts
- Category: Technology
- Published: 2026-05-04 00:12:51
The Kubernetes community has marked a major milestone: In-Place Pod-Level Resources Vertical Scaling has graduated to Beta in version 1.36, now enabled by default. This means operators can adjust the aggregate resource budget of a running pod without necessarily restarting its containers – a leap forward for dynamic workload management.
“This feature closes a critical gap for complex pods, especially those with sidecars or multiple containers sharing a resource pool,” said a senior Kubernetes SIG Node maintainer. “It offers a safe, automated path to scale up under load while minimizing disruption.”
Background
The journey began in v1.34, when Pod-Level Resources graduated to Beta, allowing an overall resource budget per pod rather than per container. v1.35 made In-Place Vertical Scaling generally available for individual containers. Now v1.36 combines these into a unified capability: in-place scaling of the pod-level budget, often without a container restart.
The feature is controlled by the InPlacePodLevelResourcesVerticalScaling feature gate, which is now turned on by default. This enables updates to .spec.resources at the pod level while the pod is running.
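As a sketch of what this looks like in practice, a pod-level budget is declared directly under the pod's spec (the pod name and image here are illustrative, not from any official example):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-level-demo        # illustrative name
spec:
  # Pod-level budget shared by all containers. With the
  # InPlacePodLevelResourcesVerticalScaling feature gate enabled,
  # this block can be updated while the pod is running.
  resources:
    limits:
      cpu: "2"
      memory: 1Gi
  containers:
    - name: app
      image: registry.k8s.io/pause:3.9
```

Because no per-container limits are set, the containers draw from the shared pod-level pool.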
How It Works
When a pod-level resize is initiated, the kubelet evaluates each container’s resizePolicy. Containers with NotRequired get their cgroup limits updated on the fly via the Container Runtime Interface (CRI). Containers with RestartContainer will be restarted to apply the new boundary safely.
This per-container policy allows operators to mix zero-downtime and disruptive updates within the same pod. For example, a main application may accept live resource changes while a sidecar requires a restart for certain adjustments.
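A mixed-policy pod might be sketched as follows, with the main container accepting live CPU changes while the sidecar is restarted to apply them (container and image names are illustrative):

```yaml
spec:
  containers:
    - name: app
      image: example.com/app:latest        # illustrative
      resizePolicy:
        - resourceName: cpu
          restartPolicy: NotRequired       # cgroup limit updated in place
    - name: log-sidecar
      image: example.com/sidecar:latest    # illustrative
      resizePolicy:
        - resourceName: cpu
          restartPolicy: RestartContainer  # restarted to pick up the new limit
```

Each resizePolicy entry binds a restart behavior to one resource, so a container can, for example, take CPU changes live but require a restart for memory.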
Example: Scaling a Shared Pool
Consider a pod with a 2 CPU limit at the pod level and no per-container limits. Applying a patch to double the CPU to 4 CPUs triggers the kubelet to resize the shared pool. The kubelet first checks node capacity, then updates cgroups for containers that allow non-restart updates, and finally restarts those that require it.
- Initial state: Pod spec with resources.limits.cpu: "2" and two containers, both with restartPolicy: NotRequired for CPU.
- Resize operation: kubectl patch pod ... --subresource resize --patch '{"spec":{"resources":{"limits":{"cpu":"4"}}}}'
- Outcome: Both containers inherit the new 4 CPU limit without a restart, as long as each container's resize policy allows it.
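Putting the pieces together, the starting manifest for this example could be sketched as follows (pod, container, and image names are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: shared-pool-demo     # illustrative
spec:
  resources:
    limits:
      cpu: "2"               # shared pool, later patched to "4"
  containers:
    - name: web
      image: example.com/web:latest      # illustrative
      resizePolicy:
        - resourceName: cpu
          restartPolicy: NotRequired     # allows in-place CPU resize
    - name: worker
      image: example.com/worker:latest   # illustrative
      resizePolicy:
        - resourceName: cpu
          restartPolicy: NotRequired
```

With no per-container CPU limits, both containers share the pod-level pool, and the patch above grows that pool without touching either container's process.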
What This Means
For cluster operators, the beta graduation reduces operational friction. Previously, adjusting a pod’s resource pool often required a rolling update or manual per-container recalculations. Now, a simple API call adjusts the shared budget, and the system handles the rest.
This is particularly powerful for sidecar-heavy deployments, logging aggregators, and service meshes where containers need to flex together under traffic spikes. The kubelet’s built-in safety checks – node capacity, feasibility, and sequence validation – ensure node stability even during rapid scaling events.
Maintainers expect the feature to move toward general availability in a future release, but v1.36 already offers production-grade capabilities for many use cases. “We encourage users to test in non-critical workloads first,” the SIG Node maintainer added, “but the feedback from early adopters has been very positive.”