HostingArtisan Community for Web Artisans
Kubernetes (K8s) Orchestration

K8s 1.29 cluster keeps evicting pods on 3rd node, but CPU/mem look fine?

4 replies · 2 views
#1 — Original Post
26 Mar 2026, 17:20
E
ext_guru

hey folks, running a 3-node Hetzner setup with K8s 1.29. everything's been stable for 6 months but last week pods started getting evicted from node-3 constantly. the weird part: kubectl top nodes shows like 40% CPU and 55% memory. no disk pressure either.

i've checked kubelet logs and it's hitting some kind of threshold but the metrics don't match what i'm seeing in prometheus. anyone dealt with this? thinking maybe it's a resource request/limit config issue on my deployments or maybe kubelet is miscalculating something?

using kubeadm for setup, pretty vanilla config. happy to share manifests if someone wants to take a look

Edited at 26 Mar 2026, 18:52

#2
26 Mar 2026, 17:30
J
journalctl

Check the resource requests on those deployments. Kubelet triggers eviction off node-level signals like memory.available, measured against the node's allocatable (not total capacity) minus the eviction thresholds—and when it has to pick victims, it ranks pods by how far their actual usage exceeds their requests. So pods with tiny requests that use way more memory in practice get evicted first, even though kubectl top shows plenty of headroom. Also run kubectl describe node node-3 and look at the eviction thresholds and current allocatable resources. Post that output and we can see if the math is actually adding up.
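e.g. here's a minimal sketch of the "does the math add up" check—paste the Allocated resources section from kubectl describe node node-3 into the heredoc (the values below are made-up placeholders, not your node's real numbers):

```shell
# Save the node's accounting table (placeholder values for illustration):
cat <<'EOF' > /tmp/node3-alloc.txt
  Resource           Requests      Limits
  --------           --------      ------
  cpu                1500m (40%)   2 (53%)
  memory             2Gi (55%)     4Gi (110%)
EOF
# Pull out what kubelet is accounting for memory:
awk '/^  memory/ {print "memory requests:", $2, $3, "limits:", $4, $5}' /tmp/node3-alloc.txt
```

if the requests column is small but your pods' real usage (kubectl top pods) is way above it, that's your mismatch.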

#3
26 Mar 2026, 17:35
E
ext_guru

ah good point, I hadn't thought about that distinction! Yeah looking at my deployments now and I think that's it—most of them have like 256Mi requests but are actually using way more. Let me adjust those and see if the evictions stop. Thanks!
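for anyone finding this later, one way to bump them is a patch file instead of hand-editing every manifest—a sketch, where the container name and sizes are placeholders (size requests to what kubectl top says the pods actually use):

```shell
# Write a resources patch (placeholder container name "app" and sizes):
cat <<'EOF' > /tmp/resources-patch.yaml
spec:
  template:
    spec:
      containers:
      - name: app
        resources:
          requests:
            memory: "512Mi"
          limits:
            memory: "1Gi"
EOF
# Apply it with: kubectl patch deployment <name> --patch-file /tmp/resources-patch.yaml
# Sanity check: the patch should mention memory twice (requests + limits):
grep -c 'memory' /tmp/resources-patch.yaml
```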

#4
26 Mar 2026, 17:45
I
istio_mesh

also check kubectl describe node node-3 for any allocatable resource changes or taints that got added. i've seen Hetzner nodes randomly report different allocatable capacity after kernel updates or cloud-init runs, which throws off kubelet's accounting even though actual resources look fine. run journalctl -u kubelet -n 100 on that specific node and grep for 'eviction' or 'memory.available'—the actual threshold kubelet is hitting should be in there.
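here's the grep run against a saved copy so the pattern is reproducible—note the log lines below are illustrative stand-ins, not real kubelet output:

```shell
# Illustrative sample of a kubelet journal (made-up lines, real-ish shape):
cat <<'EOF' > /tmp/kubelet.log
node-3 kubelet[812]: "Eviction manager: attempting to reclaim" resourceName="memory"
node-3 kubelet[812]: "Threshold met" signal="memory.available" observed="412Mi"
node-3 kubelet[812]: "Syncing pod" pod="default/web-7d4b9"
EOF
# On the live node you'd pipe journalctl instead:
#   journalctl -u kubelet -n 100 --no-pager | grep -Ei 'eviction|memory\.available'
grep -Ei 'eviction|memory\.available' /tmp/kubelet.log
```

the signal= field is the one to look at—it tells you which eviction threshold actually tripped.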

#5
26 Mar 2026, 18:10
F
fiber_patch

also worth checking: run kubectl get events -n <namespace> --sort-by='.lastTimestamp' and grep for the eviction events. the reason field will tell you exactly what triggered it—MemoryPressure vs PIDPressure vs something else entirely. I've seen cases where it looks like memory but it's actually hitting the node's max PIDs limit, especially on Hetzner's smaller instances.
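quick sketch of the PID-side check, run on node-3 itself (numbers obviously vary per box, and kubelet may also enforce its own per-pod limit via podPidsLimit):

```shell
# Compare the node's process ceiling to what's actually running:
echo "pid_max:     $(cat /proc/sys/kernel/pid_max)"
echo "pids in use: $(ls -d /proc/[0-9]* 2>/dev/null | wc -l)"
```

if "in use" is creeping toward the ceiling, PIDPressure is your culprit, not memory.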
