One of the reasons you’d get a “502 Bad Gateway” page from NGINX is the following error in its error log:
upstream sent too big header while reading response header from upstream
The error is typical in a scenario where you use NGINX for proxying requests elsewhere. A particular case of this is terminating SSL with NGINX and proxying requests to a Varnish server.
As always, you’d head to Google searching for solutions, only to find the wrong advice.
Bad tuning for proxy_buffer_size
Without giving any rationale, every blog seems to copy-paste essentially the same snippet from one another:
proxy_buffers 4 256k;
proxy_buffer_size 128k;
proxy_busy_buffers_size 256k;
Read on to learn why this is wrong, and the right way to tune the only parameter that is essential for fixing the underlying error above.
What is proxy_buffer_size
The proxy_buffer_size is the only directive that needs tuning in order to solve the upstream sent too big header while reading response header from upstream error.
It defines the amount of memory that NGINX allocates for each request to the proxied server. This small chunk of memory is used for reading and storing a tiny fraction of the response – the HTTP headers.
The HTTP headers typically make up only a small portion of the complete request or response payload. An example of HTTP response headers:
HTTP/2 200
date: Sat, 07 Jul 2018 20:54:41 GMT
content-type: text/html; charset=UTF-8
vary: Accept-Encoding, User-Agent
accept-ranges: bytes
strict-transport-security: max-age=31536000; includeSubDomains; preload
As you can see, it makes no sense to allocate as much as 128 kilobytes for buffering HTTP response headers.
The defaults are 4k or 8k, depending on the platform, which is more than enough for buffering the response headers of a typical web application.
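For context, here is a minimal sketch of where the directive lives in a typical SSL termination setup; the server name, certificate paths and the Varnish backend address are placeholders for your own values:
server {
    listen 443 ssl;
    server_name example.com;                              # placeholder
    ssl_certificate     /etc/nginx/ssl/example.com.crt;   # placeholder
    ssl_certificate_key /etc/nginx/ssl/example.com.key;   # placeholder

    location / {
        proxy_pass http://127.0.0.1:80;                   # Varnish listening on port 80
        proxy_set_header Host $host;
        proxy_buffer_size 8k;                             # in line with the defaults; raise only if headers exceed it
    }
}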
When proxy_buffer_size needs to be lifted
When you get the aforementioned error, the reason is exactly what it says: NGINX doesn’t have enough in-memory storage allocated for the transient storage of HTTP headers coming from the upstream server. So the upstream server sends a response with a set of HTTP headers large enough to exceed the default 4k or 8k. But what’s the proper value?
The proper value for proxy_buffer_size should be no less than the maximum possible size of the response HTTP headers emitted by your particular application.
HTTP response headers are typically at their largest during authentication: that is when the application sends a series of long Set-Cookie headers (which, with some apps, may be compounded by duplicated headers).
Calculate proper value for proxy_buffer_size
Now you may be wondering what value exactly you should set for your particular app. Usually, the HTTP headers become bloated due to multiple Set-Cookie headers issued by login pages. There’s no automated way to log the size of all response headers and pick the maximum value, but you can test your login page manually and thus get the theoretically largest chunk of HTTP headers your app may emit.
The following command gets the size of HTTP response headers:
curl -s -w \%{size_header} -o /dev/null https://example.com
Now, the headers returned to the client might differ slightly from what the upstream sends, so you should issue the command against the upstream server instead. Provided that we’re doing SSL termination, the upstream would be a Varnish cache instance listening on port 80. So:
curl -s -w \%{size_header} -o /dev/null http://localhost -H "Host: example.com"
This, however, may not trigger Set-Cookie headers being sent by the upstream, so you should adjust the command with the necessary request method (POST), request Cookie, or authentication credentials. You may also want to supply -H "Accept-Encoding: gzip" to emulate browsers. To expand on that, you can use Chrome’s developer console to copy the request that triggers the error as a cURL command and append its parameters to the base command above.
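For instance, a hypothetical adjusted command against the upstream might look like the following; the /my-account path, cookie value and form fields are placeholders that will differ for your application:
curl -s -w '%{size_header}' -o /dev/null \
  -X POST http://localhost/my-account -H "Host: example.com" \
  -H "Accept-Encoding: gzip" \
  -H "Cookie: PHPSESSID=placeholder" \
  --data "username=demo&password=demo"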
Either way, the result of the command will be the total number of bytes: e.g. for a WooCommerce-powered website with duplicate Set-Cookie headers, I got a value of 1500. So there was no need to adjust anything; in fact, I could even lower the value to improve NGINX’s memory consumption.
For a different website, you might get a different result, e.g. 9000. This is where you need to drop the proxy_buffer_size directive into your NGINX configuration. The value should be aligned with the memory page size, e.g. 4k on most platforms (can be confirmed with getconf PAGESIZE). So, aligning 9000 bytes to 4k chunks, we get 12k:
proxy_buffer_size 12k;
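If you’d rather not do the rounding by hand, a small shell sketch along those lines (9000 stands for the measured header size from above, substitute your own):
HDR=9000                            # measured header size in bytes
PAGE=$(getconf PAGESIZE)            # typically 4096
echo "$(( (HDR + PAGE - 1) / PAGE * PAGE / 1024 ))k"   # prints 12k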
Speaking of memory consumption: while most folks configure proxy buffering and buffer sizes globally, it’s worth noting that buffering can be configured per server block and even per location block, giving you ultimate flexibility. So you may want to keep a low buffer size in general and raise it only for locations that are known to emit Set-Cookie headers extensively, as sketched below.
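A minimal sketch of that approach, assuming Varnish as the upstream on port 80 and /my-account/ as a placeholder for whichever location bloats the headers in your app:
location / {
    proxy_pass http://127.0.0.1:80;
    proxy_buffer_size 4k;            # lean default for regular pages
}
location /my-account/ {
    proxy_pass http://127.0.0.1:80;
    proxy_buffer_size 12k;           # raised only where many Set-Cookie headers are expected
}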
It’s also worth noting that the other proxy buffering sizes are only loosely connected to proxy_buffer_size: they are meant for buffering the response body.
Disable proxy buffering?
There are only a few cases where you would want to disable proxy buffering altogether. And even then, NGINX still has to allocate the proxy_buffer_size chunk of memory for reading the HTTP headers returned by the upstream server.
With proxy_buffering disabled, data received from the upstream is relayed to the client immediately, allowing for a minimal Time To First Byte (TTFB). If TTFB is your goal, make sure that tcp_nodelay is enabled (the default) and that tcp_nopush is disabled (the default).
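Put together, a minimal sketch of a TTFB-oriented proxy location, under the assumption that you don’t need NGINX to shield the upstream from slow clients:
location / {
    proxy_pass http://127.0.0.1:80;   # assumed Varnish backend
    proxy_buffering off;              # relay upstream data to the client as it arrives
    proxy_buffer_size 8k;             # still allocated for reading response headers
    tcp_nodelay on;                   # the default, stated here for clarity
    # tcp_nopush is off by default; leave it that way when chasing low TTFB
}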
The amount of data that is always buffered from the response is controlled by proxy_buffer_size, the only proxy buffering directive that remains relevant when proxy buffering is disabled. Turning off proxy buffering doesn’t mean that proxy_buffer_size has no effect: it is the chunk of memory NGINX uses no matter what.
And finally, going back to where it all started: you might consider disabling proxy_buffering for SSL termination. If NGINX is used only for SSL termination, you don’t really need proxy buffering towards the upstream server, provided that:
- the link between NGINX and the upstream server isn’t a “slow one”
- the upstream can handle slow clients
- keeping both NGINX and the upstream busy while serving a slow client will not consume excessive resources (RAM)
Buffering exists to ensure that the upstream server is set free as soon as it has delivered the response to NGINX, leaving NGINX to deliver the response to the slow client on its own.
But in a LEMP setup, with or without Varnish, the only backend you really want to free up as soon as possible is PHP-FPM, and that is a separate, yet similar, story involving the fastcgi_buffer* family of directives.
proxy_busy_buffers_size
We have to mention this directive here because it is tightly connected to proxy_buffer_size.
The proxy_busy_buffers_size limits the total size of buffers that can be busy sending the response to the client while it has not yet been fully read from the upstream server.
Essentially, its minimum allowed value equals the value you set for proxy_buffer_size. Thus, if you are raising proxy_buffer_size, you need to raise proxy_busy_buffers_size to at least the same value, e.g.:
proxy_buffer_size 16k;
proxy_busy_buffers_size 16k;
# proxy_buffers 8 4k; implied default
The proxy_busy_buffers_size must be equal to or greater than the maximum of the value of proxy_buffer_size and the size of one of the proxy_buffers.
What this means in our tuned config above is that, with the implied default of proxy_buffers 8 4k;, NGINX uses at most 48k (32k for the body and 16k for the headers) for buffering the response from the upstream, out of which 16k are allocated to busy buffers, allowing it to start sending the response back to the client before the full response has been received from the upstream.
The maximum value for proxy_busy_buffers_size must be less than the size of all proxy_buffers minus one buffer. The remaining buffer has to be kept free so that NGINX can use it for reading the response from the upstream.
Increasing proxy_busy_buffers_size closer to its upper cap makes NGINX more responsive towards the client, at the cost of somewhat higher CPU use per connection, because fewer free buffers may be available for reading the remaining response at any given time.
The default value for proxy_busy_buffers_size is calculated dynamically while the configuration is being evaluated: it is set to twice the larger of the size of one proxy_buffers buffer and the value of proxy_buffer_size.
This automatic default is error-prone in the sense that the calculated value can fall outside the allowed range, at which point NGINX refuses to load the configuration. For example, raising proxy_buffer_size on its own, without also adjusting proxy_buffers or proxy_busy_buffers_size, can push the computed default past the upper cap described above.
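To make that concrete, a hypothetical config that would trip over the defaults; the exact figures assume the stock proxy_buffers 8 4k;:
location / {
    proxy_pass http://127.0.0.1:80;
    proxy_buffer_size 128k;
    # proxy_buffers left at the default 8 4k and proxy_busy_buffers_size left unset:
    # the computed busy buffers default (256k) exceeds the cap of 7 * 4k = 28k,
    # so nginx -t would fail with the "must be less than the size of all
    # proxy_buffers minus one buffer" error
}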
To understand the best value for it, we have to touch on proxy_buffers first.
proxy_buffers
The rule of thumb with this setting is that, as long as we make use of buffering, it is best for the complete response from the upstream to fit in memory, to avoid disk I/O.
If the response from upstream arrives fast, and the client is slower, NGINX preserves the response in buffers, allowing it to close the potentially expensive upstream connection quickly.
If the allocated buffers cannot hold the entire response in memory, the remainder will be written to disk (which is slower).
The default for this directive is:
proxy_buffers 8 4k|8k;
Tuning this one depends on a few factors, primarily:
- the “body” response size of your app
- whether you always request compressed resources from upstream (and it supports emitting compressed responses) or not
In a typical scenario, NGINX can request gzipped/non-gzipped pages from upstream, depending on the client’s preference/support, with gzip happening in the upstream.
With this case in mind, NGINX can receive a potentially large chunk of uncompressed data/HTML that can easily exceed the default 32k|64k worth of buffers.
To determine the size of HTML/data returned by a given resource, you can use:
curl -so /dev/null https://example.com/ -w '%{size_download}'
Set proxy_buffers so that their total size equals the maximum size of the response data.
For example, if our uncompressed body size is at most 512K (a mighty Magento 2 theme with lots of HTML tags), then we have to allocate 512K worth of buffers, that is, 128 buffers of 4k each.
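In config form, that hypothetical 512K worst case would translate into something like:
proxy_buffers 128 4k;   # 128 buffers of 4k = 512K of in-memory body buffering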
Tip: if you ensure that compression always happens in the upstream, and decompress in NGINX for clients that don’t support it (hint: the gunzip module), you can reduce those buffers greatly.
To determine the best value for proxy_buffers when you always receive compressed data from the upstream, you can use:
curl --compressed -so /dev/null https://example.com/ -w '%{size_download}'
Back to proxy_busy_buffers_size. It makes sense to allow at least some of the body-specific buffers to be busy, so that they can be sent to the client as soon as they are filled. So you may want its size to “cover” both the entire header buffer and at least some portion of the body buffers.
Say we go back to our Magento 2 example with a 16K maximum header size and a 512K maximum response body size, and where compression by the upstream is optional (we may get an uncompressed response).
A good starting point for proxy_busy_buffers_size is 16K + 4K * 2 = 24k, that is, the entire maximum header size plus two of the smaller “body” buffers.
Recommendation:
proxy_buffering off; # for a single server setup (SSL termination of Varnish), where no caching is done in NGINX itself
proxy_buffer_size 16k; # should be enough for most PHP websites, or adjust as above
proxy_busy_buffers_size 24k; # essentially, proxy_buffer_size + 2 small buffers of 4k
proxy_buffers 64 4k; # should be enough for most PHP websites, adjust as above to get an accurate value
There are apps out there that emit a lot of HTTP headers and others that emit just a few. In many cases you will be just fine with the default proxy_buffer_size of 4k | 8k, while for some apps it has to be adjusted, so your mileage may vary. There is no perfect value of proxy_buffer_size that fits all, and that is exactly why this parameter is essential for fine-tuning your entire NGINX-powered stack.