3 quick steps to optimize the performance of your NGINX server

Jack Wallen offers up three easy to configure tips for helping you get the most out of NGINX.


NGINX is one of the fastest growing web servers on the planet, and with good reason. It's blazing fast, reliable, and very easy to get up and running. But as with every piece of open source software, you can easily tweak it to best fit your needs. I'm going to preface this by saying your mileage may vary, especially depending upon how you use your NGINX server. Even with a wide variety of tasks thrown at NGINX, there are some fairly universal ways you can optimize it to eke out as much performance as possible.

I have three quick ways you can achieve this. Although these shouldn't be considered a definitive collection (as there are always more things you can do to further optimize), you'll find these three quick tips will give your NGINX server a boost.

With that said, let's make with the tips.

1. Upping worker processes and connections

There are two configurations that can give you immediate results:

  • worker_processes - tells NGINX how many worker processes to spawn.
  • worker_connections - sets how many simultaneous connections each worker process can handle.

These are both found in /etc/nginx/nginx.conf. Open up that file for editing with the command:

sudo nano /etc/nginx/nginx.conf 

Out of the box, worker_processes is set to auto. You can tune that, but the right value depends upon how many cores your server CPU has. Your best bet is to set worker_processes to 1. However, if your NGINX installation is going to be handling CPU-intensive work (such as SSL termination or gzip compression), and your server has multiple cores, you could change that. To find out how many cores your server has, issue the command:

grep processor /proc/cpuinfo | wc -l

The output of that command (Figure A) will tell you how many cores are available on your server.
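If GNU coreutils is installed, the nproc command is a shorter way to get the same count. A quick sanity check comparing the two (note that nproc reports the cores available to the current process, so it can report fewer if CPU affinity is restricted):

```shell
# Count cores two ways: grep /proc/cpuinfo (as above) and nproc,
# a shorter equivalent from GNU coreutils.
cores_cpuinfo=$(grep -c ^processor /proc/cpuinfo)
cores_nproc=$(nproc)
echo "cpuinfo: ${cores_cpuinfo} cores, nproc: ${cores_nproc} cores"
```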

Figure A: A server with 8 cores at the ready.

If you know NGINX is going to be needing a lot of horsepower, set worker_processes to equal the number of cores on your server. Otherwise, set it to 1.

As for worker_connections, you need to think about this: The maximum number of clients NGINX can handle is calculated with:

worker_processes * worker_connections = max clients

So if you set worker_processes to 1 and worker_connections to 512, you'll only be able to serve 512 clients. If you set worker_processes to 2 and worker_connections to 512, you'll be able to serve 1024 clients. That's some fairly easy math to figure out. The default value for worker_connections is 768, but I suggest setting worker_connections to 1024. Those values will look like (inside of /etc/nginx/nginx.conf):

worker_processes 1;

events {
    worker_connections 1024;
}

Note the NGINX syntax here: directives end with a semicolon, and worker_connections lives inside the events {} block, while worker_processes sits at the top level of the file.

Adjust the above values to suit your needs.
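The formula above is easy to sanity-check in the shell; this sketch simply multiplies two example values (2 workers, 512 connections each):

```shell
# max clients = worker_processes * worker_connections
worker_processes=2
worker_connections=512
echo "max clients: $((worker_processes * worker_connections))"
# prints: max clients: 1024
```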

2. Buffer sizes

Another very important optimization comes in the way of buffer sizes. These are also configured within the /etc/nginx/nginx.conf file, but must be placed in a specific location. There are four directives to consider:

  • client_body_buffer_size - sets the buffer size for the client request body (POST actions, such as form submissions) sent to NGINX.
  • client_header_buffer_size - sets the buffer size for the client request header.
  • client_max_body_size - sets the maximum allowed size of the client request body.
  • large_client_header_buffers - sets the maximum number and size of buffers for large client headers.

Again, this will vary, depending upon your needs. However, if you open up the NGINX configuration file and place the following within the http {} section, you should be good to go:

client_body_buffer_size 10k;
client_header_buffer_size 1k;
client_max_body_size 8m;
large_client_header_buffers 4 4k;

Again, if you have bigger needs, you could increase those values.
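For context, here is a minimal sketch of how those directives sit inside the http {} block of /etc/nginx/nginx.conf, using the same values suggested above:

```nginx
http {
    # Buffer sizes for client requests
    client_body_buffer_size 10k;      # POST bodies, e.g. form submissions
    client_header_buffer_size 1k;     # typical request headers
    client_max_body_size 8m;          # larger requests are rejected with a 413 error
    large_client_header_buffers 4 4k; # up to 4 buffers of 4k each for oversized headers

    # ... the rest of your http configuration ...
}
```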

3. Timeouts

Next we're going to look at directives that can seriously improve your NGINX server's performance. There are two related directives that are responsible for the time a server will wait for either a client body or a client header to be sent after a request. Those two directives are:

  • client_body_timeout
  • client_header_timeout

There are also two other timeout directives:

  • keepalive_timeout - assigns the timeout value for keep-alive connections with the client.
  • send_timeout - sets a timeout for transmitting a response to the client.

We're going to set these options in the same configuration file. Out of the box, those values are pretty high. Let's take those down a notch with the following configurations:

client_body_timeout 12;
client_header_timeout 12;
keepalive_timeout 15;
send_timeout 10;

Save and close the NGINX configuration file. Check the syntax with the command sudo nginx -t and, if no errors are reported, restart the web server with the command sudo systemctl restart nginx. You should now see a noticeable improvement in NGINX performance.
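Putting all three tips together, a trimmed-down /etc/nginx/nginx.conf might look something like this. This is a sketch, not a drop-in file; your distribution's stock configuration will contain more than this:

```nginx
worker_processes 1;  # or match your core count for CPU-heavy work

events {
    worker_connections 1024;
}

http {
    # Tip 2: buffer sizes
    client_body_buffer_size 10k;
    client_header_buffer_size 1k;
    client_max_body_size 8m;
    large_client_header_buffers 4 4k;

    # Tip 3: timeouts
    client_body_timeout 12;
    client_header_timeout 12;
    keepalive_timeout 15;
    send_timeout 10;

    # ... includes, logging, and server blocks go here ...
}
```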

So much more to do

NGINX offers quite a few options to help optimize your server, but these three general tips should go a long way toward improving basic performance. We'll come back to this topic again to get even more performance from that server.

About Jack Wallen

Jack Wallen is an award-winning writer for TechRepublic. He's an avid promoter of open source and the voice of The Android Expert. For more news about Jack Wallen, visit his website.
