Last updated: February 5, 2024
HAProxy (High Availability Proxy) is a critical part of modern systems infrastructure. It’s ideally the first point of contact for users who access your application, and when configured correctly, it improves your app’s performance significantly. Through load balancing, HAProxy keeps each service your application depends on accessible to users, even under heavy load that would otherwise degrade application performance.
I’m going to take you through the process of tuning timeouts with the intent to boost application performance. You’ll see how robust HAProxy logging can help you with troubleshooting timeout issues and improve the performance of your application. I’ll quickly go through some of the HAProxy timeout configurations to lay a foundation.
Before we dive into the overview, let’s go over a few reasons why we need HAProxy and the logic behind it. This should help us visualize the “how” part later and understand why it’s worth going through the tuning processes.
Why You’ll Need HAProxy
The image above shows the basic design of how users access a web application. This method works fine if the application doesn’t get much traffic. Once the application gains traction and the number of users increases, you can see application performance begin to decline. When numerous users access the application at the same time, requests can back up and even overwhelm the application.
A good analogy is a single-lane road going from point A to point B filling up when there are too many cars. Adding HAProxy as a load balancer is like adding lanes to the road. More lanes mean more vehicles can now travel the road.
Additional lanes won’t increase speed, but cars travel down the road without waiting for the car ahead to advance. However, if you add enough vehicles, even these additional lanes will become congested. This is where the concept of timeouts is essential to avoid jams.
Timeouts terminate connections after a client waits for a predetermined amount of time to access the server. This frees up connections, so active users can access the application. It’s as if the stalled cars are taken off the road, so other cars can move freely. Let’s quickly cover the various types of timeouts before we get to the tuning part.
The Three Basic HAProxy Timeouts
Just to show you what timeout configurations look like, here’s a sample including the three basic timeouts.
## based on Mesosphere Marathon’s servicerouter.py haproxy config
global
    daemon
    log 127.0.0.1 local0
    log 127.0.0.1 local1 notice
    maxconn 4096
    tune.ssl.default-dh-param 2048

defaults
    log global
    retries 3
    maxconn 2000
    timeout connect 5s
    timeout client 50s
    timeout server 50s
Source: GitHub
1. Timeout Client
The <timeout client> setting defines the maximum time a client can be inactive when connected to the server. A common value for this timeout is five minutes. You can go with shorter timeouts, even as little as thirty seconds, if you’re attempting to maximize security or the total number of active connections. As the name suggests, this timeout is handled on the client side.
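For instance, a minimal sketch of what this looks like in the defaults section (the values are illustrative, not recommendations):

defaults
    # generous limit for typical interactive traffic
    timeout client 5m
    # or, a stricter limit to free up idle connections sooner:
    # timeout client 30s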
2. Timeout Connect
The <timeout connect> works like a grace period. It sets the maximum time HAProxy waits for a connection attempt to a backend server to succeed. How long that takes varies widely depending on network complexity: the more complex the topology between the proxy and the servers, the longer establishing the connection can take.
If a connection attempt fails or times out, the retries setting controls how many more times HAProxy tries before giving up. The default is three, but you can adjust it to fit your environment.
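As a rough sketch, the two settings sit side by side in the defaults section (illustrative values); option redispatch is an optional companion that lets a retried attempt be sent to a different server:

defaults
    # give each connection attempt to a backend server up to 5 seconds
    timeout connect 5s
    # retry a failed connection attempt up to 3 times (the default)
    retries 3
    # optionally allow a retried attempt to go to a different server
    option redispatch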
3. Timeout Server
When a client sends a request to the server, it expects a response. If the server doesn’t respond within the configured duration, the <timeout server> is triggered. This is akin to the <timeout client>, only in reverse. If a <timeout server> is triggered, HAProxy returns a 504 Gateway Timeout response to the client.
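The setting normally lives in the defaults section (as in the sample above), but it can also be overridden per backend. Here’s a hypothetical sketch for a backend that’s known to respond slowly (the backend name, server name, and address are placeholders):

backend slow_reports
    # give this backend more time before HAProxy answers with a 504
    timeout server 2m
    server reports1 10.0.0.12:8080 check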
HAProxy Timeout Tuning for Good Performance
Just by configuring these three timeout values in your haproxy.cfg file, you can achieve a basic level of performance. If you want to take it up a notch, you can set other timeout settings to enhance performance. While the values you set will vary depending on your traffic load and environment, I’ve listed the most common configurations below. To use these, add the following to the defaults section of your configuration file.
    timeout http-request 10s
    timeout http-keep-alive 2s
    timeout queue 5s
    timeout tunnel 2m
    timeout client-fin 1s
    timeout server-fin 1s
Timeout HTTP-Request
The <timeout http-request> variable limits how long HAProxy waits for a client to send a complete HTTP request. Aside from optimizing request handling, it can also help defend against denial-of-service (DoS) attacks that deliberately hold requests open, by capping how long a single request can last. Usually, ten seconds is a good limit.
The application server may have a similar limit of its own (in php.ini, for example), but because the proxy is the first point of contact with an application, the HAProxy value is usually the one that takes effect. The exception is when the application-level (php.ini) setting is shorter than the one configured at the proxy.
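A minimal sketch of the setting in a frontend (the frontend name, bind port, and backend name are placeholders):

frontend www
    bind :80
    # drop clients that take longer than 10 seconds to send a complete request
    timeout http-request 10s
    default_backend app_servers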
Timeout HTTP-Keep-Alive
As the name suggests, this timeout keeps a single connection between the client and the server “alive” for a set amount of time: it controls how long an idle keep-alive connection is held open while waiting for the next request. While the connection stays open, subsequent requests reuse it instead of negotiating a new connection each time, which makes responses from the server to the client faster.
Keep in mind the <timeout http-request> regulates how long a client has to send a complete request, so these two settings work hand in hand. If the <timeout http-keep-alive> isn’t set, HAProxy falls back to the <timeout http-request> value while waiting for the next request on the connection.
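Here’s a rough sketch of how the two settings might sit together in the defaults section (illustrative values):

defaults
    mode http
    # time a client has to send a complete request
    timeout http-request 10s
    # time an idle keep-alive connection waits for the next request
    timeout http-keep-alive 2s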
Timeout Queue
The <timeout queue> limits how long a request can wait in the queue when the servers behind HAProxy have reached their connection limits. Setting it keeps clients from waiting indefinitely: once a queued request exceeds the timeout, HAProxy rejects it (with a 503 error), and the client can try connecting again. It complements <timeout connect>: one bounds the wait for a free connection slot, the other bounds the connection attempt itself.
If you don’t set the <timeout queue>, HAProxy will default to the <timeout connect> settings to manage the queue.
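As a sketch (the backend name, server names, and addresses are hypothetical): requests beyond each server’s maxconn wait in the backend queue, and <timeout queue> caps that wait.

backend app_servers
    # each server accepts at most 50 concurrent connections;
    # excess requests wait in the backend queue
    server app1 10.0.0.21:8080 check maxconn 50
    server app2 10.0.0.22:8080 check maxconn 50
    # a queued request waiting longer than 5 seconds is rejected with a 503
    timeout queue 5s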
Timeout Tunnel
The <timeout tunnel> variable applies when a connection is switched into tunnel mode, most commonly when you’re working with WebSockets. Essentially, it takes over from the client and server timeouts once the tunnel is established, often with durations measured in minutes rather than seconds. It may seem counterproductive and a potential security risk to keep a connection open for that long. However, when used with other timeout configurations, it’s possible to maintain a safe yet high-performing connection.
For instance, the <timeout http-request> variable would prevent an attack even if the <timeout tunnel> is set at several minutes. Remember, though, that the virtual tunnel you create by implementing this timeout requires you to terminate it at some point using the <timeout client-fin> parameter.
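A sketch of what that combination might look like (the backend name, server name, and address are hypothetical):

defaults
    # don't keep half-closed client connections around for long
    timeout client-fin 1s

backend websocket_servers
    # once the connection is upgraded to a tunnel, allow up to
    # 2 minutes of inactivity before closing it
    timeout tunnel 2m
    server ws1 10.0.0.31:8080 check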
Timeout Client-Fin
Say a connection drops in the middle of a client request; if you look at the HAProxy logs, you’re likely to see the lost connection is a result of client-side network issues. In these situations, the client’s side of the connection is shut down (half-closed) while HAProxy waits for the client to finish closing it. The <timeout client-fin> limits how long such a half-closed client connection is kept around.
The timer starts when the connection enters this half-closed state. Without it, HAProxy would hold on to “maybe they’ll return” connections while other users are denied service. To optimize performance, the values set for this timeout are usually short.
Timeout Server-Fin
Much like the <timeout client-fin> concept, abrupt disconnections can also occur on the server side of the application. The <timeout server-fin> limits how long HAProxy keeps a half-closed server connection open while waiting for the server to finish shutting it down. In a setup with redundant back-end servers, closing these dead connections promptly frees up resources, so overflow traffic can be rerouted to less busy servers and response times stay fast.
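A minimal sketch with both settings in the defaults section (the values are illustrative):

defaults
    # close half-closed client connections after 1 second
    timeout client-fin 1s
    # close half-closed server connections after 1 second
    timeout server-fin 1s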
HAProxy Logging: How to Determine Perfect Timeouts
By now, you can probably see how your environment and traffic patterns help you determine the time allocations for the timeouts above. Finding the perfect balance between peak performance and optimal security is a matter of trial and error. Keeping a close eye on HAProxy logs can help you see the interactions between these different configurations and find the “sweet spot” for your application and environment. You can use HAProxy logs (a minimal logging setup is sketched after the list below) to understand:
- Timestamped metrics about traffic (timing data, connections counters, traffic size)
- Detailed event logs of HAProxy operations (content switching, filtering, persistence)
- Requests and responses (headers, status codes, payloads)
- Session terminations and tracking where failures are occurring
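As a starting point, here’s a minimal logging sketch; it assumes a syslog daemon such as rsyslog is listening on 127.0.0.1:514 and routing the local0 facility to a file:

global
    log 127.0.0.1:514 local0

defaults
    log global
    mode http
    # option httplog adds per-phase timers, status codes, byte counts,
    # and termination flags to every log line
    option httplog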
Knowing when a timeout event occurs and monitoring the events preceding it are the first steps for successfully troubleshooting and tuning timeout settings. If you’re looking for an easy cloud-based log management tool, check out SolarWinds® Papertrail™. Built by engineers for engineers, Papertrail can automatically parse HAProxy logs and help you quickly troubleshoot timeout issues.
It offers a simple search syntax allowing you to search all your logs from a central interface, see events in context, and pinpoint issues. The live tail feature is particularly helpful for real-time troubleshooting. If you want to start HAProxy logging with better results, sign up for a trial or request a demo.
This post was written by Taurai Mutimutema. Taurai is a systems analyst with a knack for writing, which was probably sparked by the need to document technical processes during code and implementation sessions. He enjoys learning new technology and talks about tech even more than he writes.