
Kubernetes v1.36: How to Dynamically Scale Pod Resource Pools Without Restarts

2026-05-01

Introduction

Kubernetes v1.36 brings a powerful new capability: you can now adjust the aggregate Pod-level resource budget (specified in .spec.resources) for a running Pod, often without requiring any container restarts. This feature, called In-Place Pod-Level Resources Vertical Scaling, graduated to Beta and is enabled by default via the InPlacePodLevelResourcesVerticalScaling feature gate. It streamlines management of complex Pods (like those with sidecars) by allowing containers to share a collective pool of CPU and memory. In this guide, you'll learn how to implement this scaling technique step by step.


What You Need

  * A cluster running Kubernetes v1.36 or later (the InPlacePodLevelResourcesVerticalScaling gate is on by default)
  * kubectl v1.36 or later, configured against that cluster
  * jq, used below to parse JSON output
  * Optional: shell access to a node if you want to inspect cgroup files in Step 6

Step 1: Verify the Feature Gate Is Active

Although the feature is enabled by default in v1.36, confirm it's active in your cluster. There is no dedicated API for listing feature gates, but the kube-apiserver publishes a kubernetes_feature_enabled metric (assuming you're allowed to read its /metrics endpoint):

kubectl get --raw /metrics | grep kubernetes_feature_enabled | grep InPlacePodLevelResourcesVerticalScaling

A trailing value of 1 means the gate is on. Alternatively, attempt a resize operation on a test Pod; if the API server accepts it, the feature is active.
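
As a quick probe, assuming a throwaway Pod named test-pod is already running, a server-side dry-run exercises the resize subresource without persisting anything:

kubectl patch pod test-pod --subresource resize --dry-run=server --patch '{"spec":{"resources":{"limits":{"cpu":"1"}}}}'

If the request is rejected, the error message will tell you whether the subresource, the spec.resources field, or the gate itself is the problem.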

Step 2: Define a Pod with Pod-Level Resources and No Per-Container Limits

To take advantage of Pod-level scaling, define a Pod that sets resources at the Pod level (spec.resources) and leaves individual containers without explicit limits. Containers automatically inherit the Pod-level budget. Example YAML:

apiVersion: v1
kind: Pod
metadata:
  name: shared-pool-app
spec:
  resources:
    limits:
      cpu: '2'
      memory: '4Gi'
  containers:
  - name: main-app
    image: my-app:v1
    resizePolicy:
    - resourceName: cpu
      restartPolicy: NotRequired
  - name: sidecar
    image: logger:v1
    resizePolicy:
    - resourceName: cpu
      restartPolicy: NotRequired

Notice the resizePolicy inside each container: restartPolicy: NotRequired tells the kubelet to attempt a non-disruptive update (no restart).
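
In-place memory-limit decreases can be hard to honor safely, so if you expect to shrink memory you might pair the CPU policy with a restart-based policy for memory. This is an optional variant, not required by the feature:

resizePolicy:
- resourceName: cpu
  restartPolicy: NotRequired
- resourceName: memory
  restartPolicy: RestartContainer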

Step 3: Apply the Pod Definition

Create the Pod using:

kubectl apply -f shared-pool-app.yaml
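
If you'd rather block until the Pod is ready than poll its status, kubectl wait does the job:

kubectl wait --for=condition=Ready pod/shared-pool-app --timeout=120s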

Once the Pod is Running, verify the Pod-level budget by reading spec.resources directly:

kubectl get pod shared-pool-app -o jsonpath='{.spec.resources}'

Step 4: Resize the Pod-Level Resource Pool Dynamically

To scale the shared pool (e.g., double CPU from 2 to 4), use the resize subresource of the Pod. The patch must target spec.resources:

kubectl patch pod shared-pool-app --subresource resize --patch '{"spec":{"resources":{"limits":{"cpu":"4"}}}}'

This command updates the aggregate CPU limit. The kubelet immediately evaluates the change.
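
You can also follow a resize through the Pod's status conditions. Recent Kubernetes releases report PodResizePending (with a Deferred or Infeasible reason) and PodResizeInProgress; assuming v1.36 keeps that convention, inspect them with:

kubectl get pod shared-pool-app -o json | jq '.status.conditions'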

Step 5: Understand the Kubelet's Decision Process

The kubelet performs several checks before applying the resize:

  1. Feasibility: Is the new total available on the node? The kubelet reserves the requested amount and ensures it doesn't overcommit the node's capacity.
  2. Container-Level Resize Policy: For each container, the kubelet reads its resizePolicy. If restartPolicy: NotRequired, it attempts to update cgroup limits via the CRI without restarting the container. If RestartContainer, the container is restarted to safely apply the new boundary.
  3. Pod-Level Inheritance: Changes to the Pod-level budget propagate to containers that inherit their limits from that pool. Containers with their own explicit limits are unaffected (see the contrast example below).

In our example, both containers use NotRequired, so the kubelet updates their cgroup limits on the fly.
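
For contrast, here is a hypothetical variant (the names are illustrative) where the sidecar pins its own limits. A Pod-level resize would adjust the shared pool that main-app draws from, but leave the sidecar's ceiling untouched:

apiVersion: v1
kind: Pod
metadata:
  name: mixed-limits-app
spec:
  resources:
    limits:
      cpu: '2'
      memory: '4Gi'
  containers:
  - name: main-app
    image: my-app:v1      # no limits: draws from the Pod-level pool
  - name: sidecar
    image: logger:v1
    resources:
      limits:
        cpu: '500m'       # explicit limit: unaffected by Pod-level resizes
        memory: '256Mi'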

Step 6: Verify the In-Place Resize

Use kubectl describe pod shared-pool-app to confirm the new limits, or read spec.resources directly as in Step 3. Then check each container's restart count and last state:

kubectl get pod shared-pool-app -o json | jq '.status.containerStatuses[] | {name, restartCount, lastState}'

If no restart occurred, restartCount stays at 0 and lastState is empty. You can also inspect cgroup files directly on the node (if you have access) to confirm the CPU limit increased.
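
On a cgroup v2 node, the Pod's aggregate CPU limit is written to the cpu.max file of the Pod's cgroup. The exact path depends on your cgroup driver, the Pod's QoS class, and its UID, so treat the path below as an illustration for the systemd driver:

# Get the Pod UID from the API:
POD_UID=$(kubectl get pod shared-pool-app -o jsonpath='{.metadata.uid}')
# On the node, dashes in the UID become underscores in the slice name (illustrative path):
cat /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod${POD_UID//-/_}.slice/cpu.max
# A limit of 4 CPUs with the default 100ms period reads as: 400000 100000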

Step 7: (Optional) Monitor Node-Level Safety

The kubelet doesn't blindly apply every resize. If the node can't accommodate the new total, the change is left pending rather than forced: it is deferred and retried as resources free up, or marked infeasible if the request could never fit on the node.

You can also watch the Pod's events for resize-related activity:

kubectl get events --field-selector involvedObject.name=shared-pool-app

Tips and Best Practices

This feature greatly simplifies resource management for sidecar‑rich Pods. By adjusting the shared pool on the fly, you can respond to demand spikes without disruptive restarts or manual per‑container recalculation.
