Hetzner Load Balancers vs. a Simple HAProxy Setup

You have a web application running on a couple of Hetzner servers. You need to distribute traffic between them. This is a good problem to have because it means you’re growing. The obvious next step is a load balancer.

Hetzner offers a managed Cloud Load Balancer product. The alternative is to do it yourself with open source software on a cheap cloud instance. The choice between the two isn't as simple as it looks. It's a classic engineering trade-off between convenience, cost, and control. Let's look at both options so you can choose wisely.

The Managed Option: Hetzner Cloud Load Balancer

Hetzner's managed load balancer is exactly what it sounds like. It is a service you configure in the cloud console that distributes traffic to your other servers, which Hetzner calls targets. You pick a size, add your servers, and it starts working.

The main benefit is that high availability is handled for you. Hetzner runs the infrastructure to make sure the load balancer itself doesn't go down. If one of their nodes fails, another takes over. You don't have to think about it. Setup is also incredibly simple: it takes just a few clicks in the web interface.

It integrates cleanly with other Hetzner products. You can add servers as targets using their private IP addresses if you have them connected to a vSwitch. For more on that, read Hetzner Private Networking: The Simple Way. The load balancer can also handle TLS termination for you, which simplifies your backend server configuration.
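To give a sense of how little setup is involved, here is roughly what that configuration looks like with the hcloud CLI. This is a sketch, not a definitive recipe: the names my-lb and web1, the lb11 type, and the fsn1 location are placeholder assumptions.

```shell
# Create a small managed load balancer (lb11 is the smallest type).
hcloud load-balancer create --name my-lb --type lb11 --location fsn1

# Add an HTTP service that forwards port 80 to port 80 on the targets.
hcloud load-balancer add-service my-lb --protocol http --listen-port 80 --destination-port 80

# Attach an existing server as a target over the private network.
hcloud load-balancer add-target my-lb --server web1 --use-private-ip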

The downside is cost and control. A managed load balancer is a recurring monthly fee, and depending on the size, that fee can easily exceed the cost of a small cloud server. You also give up some control: you are limited to the features and configuration options that Hetzner provides. For most web applications this is fine, but if you need custom routing rules or specific logic, you might find it limiting.

The DIY Option: HAProxy on a Small Cloud Server

The do-it-yourself approach is straightforward. You rent the cheapest cloud server you can get, a CPX11 instance for example, and install HAProxy, a powerful and widely used open source load balancer.

This approach has one huge advantage: cost. A small cloud server is significantly cheaper than the managed load balancer. You also get total control. You can configure HAProxy to do almost anything you want: write custom rules, inspect detailed logs, and tune performance to your exact needs. Learning to configure a tool like HAProxy is also a valuable skill.

The main drawback is that you create a single point of failure. If your one cloud server running HAProxy goes down, your entire application is unreachable. For many small to medium sized projects this risk is acceptable: the server is unlikely to fail, and if it does, you can spin up a new one quickly. You are also responsible for managing this server, which means handling system updates and securing it properly.

A Simple HAProxy Setup

Setting up HAProxy is easier than you might think. Let’s say you have two web servers on a private network with IPs 10.0.1.10 and 10.0.1.11. You have a new small cloud server for HAProxy that has a public IP and a private IP of 10.0.1.2.

First, install the software.

sudo apt update
sudo apt install haproxy

Next, create a simple configuration file. Open /etc/haproxy/haproxy.cfg and replace its contents with this.

global
    log /dev/log local0
    chroot /var/lib/haproxy
    user haproxy
    group haproxy
    daemon

defaults
    log global
    mode http
    option httplog
    option dontlognull
    timeout connect 5s
    timeout client  50s
    timeout server  50s

frontend http_front
    bind *:80
    default_backend http_back

backend http_back
    balance roundrobin
    server web1 10.0.1.10:80 check
    server web2 10.0.1.11:80 check

This configuration is simple. The frontend section listens for incoming traffic on port 80. The backend section defines your two web servers as a pool. The balance roundrobin directive tells HAProxy to send requests to each server in turn. The check option tells HAProxy to monitor the health of the web servers and stop sending traffic to any that are down.
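Before applying it, it's worth validating the file. HAProxy can check a configuration for syntax errors without starting, and exits non-zero if the file has problems:

```shell
# Parse the configuration and report any errors without starting HAProxy.
sudo haproxy -c -f /etc/haproxy/haproxy.cfg
```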

Now enable the service and restart it so it picks up the new configuration. On Debian and Ubuntu, HAProxy starts automatically when the package is installed, so a restart is needed rather than a start.

sudo systemctl enable haproxy
sudo systemctl restart haproxy
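A quick sanity check: request the page a few times through the load balancer and confirm that both backends answer in turn. This assumes you can reach the load balancer's private IP and that each web server returns something identifiable.

```shell
# With round robin balancing, responses should alternate between web1 and web2.
for i in 1 2 3 4; do curl -s http://10.0.1.2/; done
```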

And that’s it. You have a functioning load balancer. You would then point your domain’s DNS A record to the public IP of this new server. For HTTPS you could use Certbot or Caddy on the load balancer itself.
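If you terminate TLS on HAProxy itself, the frontend only grows by a few lines. This is a sketch under two assumptions: you have already obtained a certificate (for example with Certbot), and you have concatenated the full chain and private key into a single PEM file, here the placeholder path /etc/haproxy/certs/example.com.pem.

frontend http_front
    bind *:80
    # Redirect plain HTTP to HTTPS.
    http-request redirect scheme https unless { ssl_fc }
    default_backend http_back

frontend https_front
    # HAProxy expects the certificate and private key combined in one PEM file.
    bind *:443 ssl crt /etc/haproxy/certs/example.com.pem
    default_backend http_back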

How to Decide

The decision comes down to two questions: what is your budget, and what is your tolerance for downtime?

You should choose Hetzner's managed load balancer if you need guaranteed high availability. If your business would lose significant money from even a few minutes of downtime, the extra cost is worth it. Choose it too if you value convenience and prefer not to manage another server.

You should choose the DIY HAProxy setup if you are sensitive to cost; for many startups and side projects the monthly savings are meaningful. Choose this path too if a small amount of risk is acceptable, or if you need the power and flexibility that HAProxy provides. If your load balancer server fails, you can get it back online in minutes, and for many applications that is good enough.

For most projects I've worked on, the self-hosted HAProxy setup is the pragmatic choice. It's cheap, surprisingly simple, and provides all the control you need. The managed load balancer is the right tool when the cost of potential downtime becomes greater than the monthly fee for the service.

— Rishi Banerjee
September 2025