How to Configure Tiered Memory Protection in Kubernetes v1.36 with Memory QoS

2026-05-01 05:40:36

Introduction

Kubernetes v1.36 introduces a refined approach to memory management with the Memory QoS feature, now offering tiered memory protection based on Pod quality-of-service (QoS) classes. This update separates memory throttling from reservation, giving you finer control over how the kernel treats container memory under pressure. Whether you're running guaranteed workloads that need ironclad protection or burstable ones that can tolerate some reclamation, the new memoryReservationPolicy field lets you opt into different protection schemes. This guide walks you through the complete configuration process, from enabling the feature gate to verifying behavior using cgroup v2 files and monitoring metrics. By the end, you'll know how to prevent system-wide OOM kills while maximizing resource utilization.

What You Need

- A cluster running Kubernetes v1.36 and permission to edit kubelet configuration
- Nodes running cgroup v2 (the Memory QoS feature relies on cgroup v2 files)
- SSH access to at least one node for verification
- Linux kernel 5.11 or newer for reliable memory.high throttling (see Step 6)

Step-by-Step Guide

Step 1: Enable the MemoryQoS Feature Gate

Memory QoS is alpha in v1.36, so you must explicitly enable it. Edit your kubelet configuration (or pass a flag) to include:

featureGates:
  MemoryQoS: true

This activates memory.high throttling (default throttle factor 0.9). Note that in v1.36, enabling only the feature gate does not automatically write memory.min or memory.low – those are controlled by the next step.
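To make the throttle factor concrete, here is a small sketch of how a memory.high value can be derived from a container's request and limit. The formula mirrors the one used in the original Memory QoS design (an assumption on my part; the exact v1.36 computation may differ), and `memory_high` is an illustrative helper, not a kubelet API:

```python
# Sketch: derive memory.high from the throttle factor, page-aligned.
# Assumed formula (from the Memory QoS alpha design; v1.36 may differ):
#   memory.high = floor((request + factor * (limit - request)) / page) * page

PAGE_SIZE = 4096  # typical x86-64 page size


def memory_high(request_bytes: int, limit_bytes: int,
                throttle_factor: float = 0.9) -> int:
    """Return the memory.high value the kubelet would write, page-aligned."""
    raw = request_bytes + throttle_factor * (limit_bytes - request_bytes)
    return int(raw // PAGE_SIZE) * PAGE_SIZE


# Example: 512 MiB request, 1 GiB limit, default 0.9 factor
print(memory_high(512 * 1024**2, 1024**3))  # 1020051456 -> throttling starts below the limit
```

The key point: throttling kicks in before the hard limit is reached, so the kernel slows allocations instead of OOM-killing the container outright.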

Step 2: Set memoryReservationPolicy to TieredReservation

To obtain tiered protection, add this field to your kubelet configuration:

memoryReservationPolicy: TieredReservation

If you omit this field or set it to None, only throttling (via memory.high) applies, and no cgroup reservation files are written. Setting TieredReservation tells the kubelet to assign:

- memory.min to Guaranteed Pods (a hard reservation the kernel never reclaims)
- memory.low to Burstable Pods (a soft reservation, reclaimed only under pressure)
- no reservation to BestEffort Pods

Restart the kubelet after changing configuration to apply the new policy.

Step 3: Understand How QoS Classes Map to Protection

With TieredReservation enabled, the values written to the cgroup files depend on each Pod’s memory request (not its limit). Here’s the exact behavior:

- Guaranteed: memory.min is set to the container’s memory request, so that memory is never reclaimed by the kernel.
- Burstable: memory.low is set to the memory request; the kernel reclaims it only when pressure cannot be satisfied elsewhere.
- BestEffort: there is no memory request, so neither reservation file is written.

This is a major improvement over v1.27 behavior, where all QoS classes got memory.min, potentially locking too much memory and causing OOM kills.
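The tiered mapping above can be sketched in a few lines. The function name is illustrative only, not a kubelet internal:

```python
# Minimal sketch of the tiered QoS-to-cgroup mapping described above.

def reservation_files(qos_class: str, memory_request_bytes: int) -> dict:
    """Map a Pod's QoS class to the cgroup reservation file the kubelet writes."""
    if qos_class == "Guaranteed":
        # Hard protection: memory below memory.min is never reclaimed.
        return {"memory.min": memory_request_bytes}
    if qos_class == "Burstable":
        # Soft protection: memory below memory.low is reclaimed only under pressure.
        return {"memory.low": memory_request_bytes}
    # BestEffort: no memory request, so no reservation file at all.
    return {}


print(reservation_files("Guaranteed", 512 * 1024**2))  # {'memory.min': 536870912}
print(reservation_files("BestEffort", 0))              # {}
```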

Step 4: Verify the Cgroup Settings on a Node

After deploying a Pod with a known memory request (e.g., 512 MiB), SSH into a node and check the cgroup path. For example, a Guaranteed Pod:

cat /sys/fs/cgroup/kubepods.slice/kubepods-pod*guaranteed*/memory.min

You should see the value in bytes (e.g., 536870912). For a Burstable Pod, check memory.low. If you see no file or a value of 0, the pod is BestEffort or the reservation policy is not active.
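If you prefer to sanity-check the raw byte values programmatically, here is a small sketch that reads a reservation file and converts it to MiB. It uses a temporary file in place of the real cgroup path, since reading the latter requires node access; `read_reservation_mib` is our own helper:

```python
import os
import tempfile

def read_reservation_mib(path: str) -> float:
    """Read a cgroup reservation file (e.g. memory.min) and return MiB."""
    with open(path) as f:
        return int(f.read().strip()) / (1024 * 1024)

# Mock the cgroup file with a temp file (the real path needs root on a node).
with tempfile.TemporaryDirectory() as d:
    mock = os.path.join(d, "memory.min")
    with open(mock, "w") as f:
        f.write("536870912\n")   # what a 512 MiB Guaranteed request would show
    print(read_reservation_mib(mock))  # 512.0
```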

Step 5: Monitor Observability Metrics

Kubernetes v1.36 exposes two new alpha metrics, both sharing the kubelet_memory_qos prefix, on the kubelet’s /metrics endpoint.

These metrics help you visualize how much memory is “protected” and adjust resource requests accordingly. You can scrape them with Prometheus or use curl locally: curl http://localhost:10250/metrics | grep kubelet_memory_qos (requires authentication).
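The grep above can also be done in code when building dashboards or tests. The sketch below filters Prometheus text-format output for the shared prefix; the sample metric name is hypothetical, since the article only gives us the kubelet_memory_qos prefix:

```python
# Filter Prometheus text-format output for Memory QoS metrics, mimicking
# `curl .../metrics | grep kubelet_memory_qos`. The metric name in SAMPLE
# is hypothetical -- only the prefix comes from the kubelet docs above.

SAMPLE = """\
# HELP kubelet_memory_qos_example_bytes Hypothetical sample metric.
# TYPE kubelet_memory_qos_example_bytes gauge
kubelet_memory_qos_example_bytes{qos_class="Guaranteed"} 5.36870912e+08
kubelet_running_pods 42
"""

def memory_qos_lines(exposition: str) -> list[str]:
    """Return only the sample lines whose metric name has the QoS prefix."""
    return [line for line in exposition.splitlines()
            if line.startswith("kubelet_memory_qos")]

print(memory_qos_lines(SAMPLE))
```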

Step 6: Check Kernel Version Warning for memory.high

The memory.high cgroup file (used for throttling) has known issues on older kernels. In v1.36, the kubelet will log a warning if the kernel version is below 5.11. To check your kernel version, run uname -r on a node. If you see warnings, consider upgrading your kernel to ensure reliable throttling.
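The version check itself is simple to reproduce, for example in a node-preflight script. This is a sketch of the comparison the kubelet performs, with the 5.11 threshold taken from the text; `parse_release` and `memory_high_reliable` are our own helpers, not kubelet APIs:

```python
# Sketch of a kernel-version preflight check for memory.high reliability.
# Threshold (5.11) comes from the kubelet warning described above.

def parse_release(release: str) -> tuple:
    """'5.15.0-91-generic' -> (5, 15)"""
    major, minor = release.split(".")[:2]
    return int(major), int(minor.split("-")[0])

def memory_high_reliable(release: str) -> bool:
    return parse_release(release) >= (5, 11)

print(memory_high_reliable("5.10.0-26-amd64"))  # False -> expect a kubelet warning
print(memory_high_reliable("6.1.0-18-amd64"))   # True
# On a real node, pass platform.release() from the stdlib instead.
```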

Tips and Best Practices

- Set memory requests deliberately: with TieredReservation, the request directly becomes the reservation, so over-requesting locks memory away from other workloads.
- Roll the policy out on a few nodes first and watch the new metrics before enabling it cluster-wide.
- Keep node kernels at 5.11 or newer so memory.high throttling behaves reliably.
