HostingArtisan Community for Web Artisans
Load Balancers & Traffic Management

HAProxy vs nginx for L7 routing – which scales better?

4 replies · 3 views
#1 — Original Post
26 Mar 2026, 03:55
tcpdump

Running ~8k RPS across three app servers and considering a load balancer upgrade. Currently on a single nginx instance (1.25.x) but hitting CPU limits under peak load.

I've read that HAProxy 2.8 handles L7 routing more efficiently, but our team is already comfortable with nginx config. We're targeting p99 latency under 50 ms.

Quick question: Has anyone benchmarked both at similar scale? Is the switch worth the operational overhead, or should we just scale nginx horizontally with Keepalived?

Using Hetzner dedicated servers (2x Intel Xeon Gold). Budget isn't super tight but want to avoid over-engineering.

Edited at 26 Mar 2026, 06:12

#2
26 Mar 2026, 04:15
sre_on_call

Honestly, stick with nginx + Keepalived first. HAProxy isn't magically faster for L7—both are plenty efficient at 8k RPS. The real bottleneck is usually single-instance CPU, not the load balancer choice.

Horizontal nginx scales linearly with Keepalived for failover. You'd get better ROI than switching tooling and retraining. If you hit limits again, then look at tuning: bump worker processes, enable reuseport, check your backend keepalive settings.
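[Editor's sketch of the tuning knobs mentioned above — worker processes, reuseport, and backend keepalive. This is a minimal illustrative fragment, not a drop-in config; the upstream name app_backend and the server IPs/ports are placeholders.]

```nginx
worker_processes auto;            # one worker per CPU core

events {
    worker_connections 8192;
}

http {
    upstream app_backend {
        server 10.0.0.11:8080;    # placeholder app servers
        server 10.0.0.12:8080;
        server 10.0.0.13:8080;
        keepalive 64;             # reuse upstream connections instead of re-handshaking
    }

    server {
        listen 80 reuseport;      # per-worker accept queues, less lock contention

        location / {
            proxy_pass http://app_backend;
            proxy_http_version 1.1;
            proxy_set_header Connection "";   # required for upstream keepalive to work
        }
    }
}
```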

#3
26 Mar 2026, 04:25
tcpdump

Good point, thanks! Yeah, I think you're right—8k RPS shouldn't require a full migration. We'll try the nginx + Keepalived HA setup first and see if that buys us headroom. Appreciate the reality check on the bottleneck piece.
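[Editor's note: the nginx + Keepalived HA setup discussed here pairs two load balancers behind a floating virtual IP via VRRP. A minimal sketch of /etc/keepalived/keepalived.conf for the primary node follows — the interface name, router ID, password, and VIP are all placeholders.]

```nginx
vrrp_script chk_nginx {
    script "/usr/bin/pgrep -x nginx"   # fail over if nginx dies
    interval 2
    weight -20
}

vrrp_instance VI_1 {
    state MASTER             # set BACKUP (and a lower priority) on the standby node
    interface eth0
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass changeme   # placeholder
    }
    virtual_ipaddress {
        192.0.2.10/24        # the floating VIP clients connect to
    }
    track_script {
        chk_nginx
    }
}
```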

#4
26 Mar 2026, 04:55
pipeliner

Before you build out the HA setup, profile your nginx workers during peak load with top -p "$(pgrep -d, nginx)" (top wants a comma-separated PID list, so plain $(pgrep nginx) won't work with multiple workers). If it's truly CPU-bound, check whether you're leaving performance on the table: is worker_processes set to auto? Is worker_connections tuned? Sometimes people max out because of config, not the LB itself. If it's genuinely maxed, horizontal scaling + Keepalived is the pragmatic move. HAProxy shines for complex ACLs and request manipulation, but for pure throughput at 8k RPS, nginx scales fine.
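[Editor's sketch of that profiling step as a small shell helper. The function name per_worker_cpu is made up for illustration; it just wraps pgrep + ps to show CPU% per matching process so a single pegged worker stands out.]

```shell
# Print per-process CPU% for all processes whose name matches exactly,
# e.g. per_worker_cpu nginx during peak load.
per_worker_cpu() {
    name="$1"
    # pgrep -d, gives a comma-separated PID list, which ps -p accepts
    pids=$(pgrep -d, -x "$name") || { echo "no '$name' processes found"; return 1; }
    # pid, last CPU the process ran on, CPU%, command name
    ps -o pid,psr,%cpu,comm -p "$pids"
}
```

Run it a few times during peak traffic; if one PID sits near 100% while the rest idle, you have a distribution problem rather than a capacity problem.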

#5
26 Mar 2026, 05:00
tmux_split

Have you checked whether you're actually maxing out all cores or just hitting a single-core ceiling? With worker_processes auto; nginx runs one worker per core, but the stock default is a single worker, and if requests are unevenly distributed one worker can be pegged while others idle. Try worker_processes auto; + worker_rlimit_nofile 65535; and monitor per-worker CPU with ps -o pid,%cpu,comm -C nginx (or pidstat from sysstat). That alone might buy you another 30-40% of headroom before any infrastructure changes.
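[Editor's sketch showing the two directives above in context. The connection count is an illustrative value; note that each proxied request consumes roughly two file descriptors (client side + upstream side), so worker_connections should stay comfortably below worker_rlimit_nofile.]

```nginx
worker_processes auto;          # one worker per core, instead of the stock default of 1
worker_rlimit_nofile 65535;     # raise the per-worker file-descriptor ceiling

events {
    worker_connections 16384;   # well under the FD limit: ~2 FDs per proxied request
}
```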
