Nginx and Rails: Key Nginx Settings For Production

| Shey Sewani | Toronto

I’ve always been a fan of Nginx. It’s fast, stable, feature-rich, and challenging to configure at times. I learned most of what I know about Nginx by reading other configs and making a few mistakes in production (thankfully, we have config management to handle those moments).

In this post, I want to share the Nginx config and customizations I use for my Rails app, HTTPScout.io. The config isn’t perfect and has a few quirks, but it’s been reliable in production and handles traffic spikes well.

Rate Limiting

I use a dual-zone approach to rate limiting. It won’t stop a determined DoS attack, but it helps mitigate the risk of cascading failures during legitimate traffic surges: the per-second limit absorbs brief spikes without blocking users, while the per-minute limit reins in sustained high traffic.


################################################################################
## Rate limiting Zone Definition
################################################################################
limit_req_zone $binary_remote_addr zone=zone_request_limit_second:10m rate=15r/s;
limit_req_zone $binary_remote_addr zone=zone_request_limit_minute:10m rate=45r/m;

################################################################################
## Rate limiting
################################################################################
limit_req zone=zone_request_limit_second burst=20 nodelay;
limit_req zone=zone_request_limit_minute burst=30 nodelay;
limit_req_status 429;
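
The limit_req_zone definitions live in the http context, while the limit_req directives apply wherever requests should actually be throttled. As a rough sketch (the upstream name and socket path here are placeholders, not my actual setup), they end up inside the location that proxies to Rails:

upstream rails_app {
  # Placeholder: point this at your Puma/Unicorn socket or port
  server unix:/home/rails/lrt/shared/sockets/puma.sock fail_timeout=0;
}

server {
  listen 443 ssl;
  server_name httpscout.io;
  # (ssl_certificate and friends omitted here; see the TLS section below)

  location / {
    limit_req zone=zone_request_limit_second burst=20 nodelay;
    limit_req zone=zone_request_limit_minute burst=30 nodelay;
    limit_req_status 429;

    proxy_set_header Host $http_host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_pass http://rails_app;
  }
}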

Avoiding Sending Unnecessary Traffic to Rails

Since Nginx is much faster than Rails at serving static assets, I configure it to handle those requests directly. By filtering out certain types of traffic at the Nginx level, I also avoid sending Rails requests it can’t do anything with. This keeps the app responsive, even during traffic spikes.


# Avoid sending unnecessary requests upstream to the app.
location ~ (\.php|\.aspx|\.asp|myadmin) {
  return 404;
}

# Serve robots.txt, favicon, etc., directly from nginx
location ~ ^/(robots\.txt|sitemap\.xml\.gz|favicon\.ico) {
  root /home/rails/lrt/current/public;
}

# Serve precompiled assets directly
location ~ ^/(assets)/ {
  root /home/rails/lrt/current/public;
  expires max;
  # browser cache only
  add_header Cache-Control private;
}
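
In the same spirit, it can be worth blocking requests for dotfiles before they ever reach Rails. This isn’t part of my config above, just an optional addition:

# Block dotfiles (.env, .git, etc.). If you rely on Certbot's webroot
# challenge, carve out /.well-known/ first so renewals keep working.
location ~ /\. {
  return 404;
}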

Avoid Buffering

By default, Nginx is tuned for low memory usage: if a response is too large to fit in its buffers, the excess is written to disk. This is buffering, and it’s a common issue for apps that serve large API responses. To avoid it, I set proxy_buffers 4 256k, which lets Nginx hold up to 1MB of a response in memory, and pair it with proxy_buffering off so responses are passed to the client as soon as they arrive from Rails (with buffering off, proxy_buffer_size still controls how much of the response Nginx reads from the upstream at a time). Skipping the unnecessary disk I/O improves overall response times.


#########################################################
## Buffers
#########################################################
proxy_buffer_size 128k;
proxy_buffers 4 256k;
proxy_busy_buffers_size 256k;
proxy_buffering off;
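
If turning buffering off globally feels too aggressive, the same directives can be scoped to just the locations that need it. A sketch, assuming a hypothetical /api/ prefix and the same placeholder upstream as above:

location /api/ {
  # Stream large API responses straight to the client; everything else
  # keeps the default buffering behaviour.
  proxy_buffering off;
  proxy_pass http://rails_app;
}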

SSL/TLS

My TLS configuration follows Mozilla’s guidelines for a modern and secure setup, which also allows the site to score an “A” rating from SSL Labs. I’ve enabled OCSP stapling for a slight performance boost as well.


ssl_protocols                   TLSv1.3;
ssl_session_cache               shared:SSL:10m;
ssl_session_timeout             10m;
ssl_certificate                 /etc/letsencrypt/live/httpscout.io/fullchain.pem;
ssl_certificate_key             /etc/letsencrypt/live/httpscout.io/privkey.pem;
ssl_session_tickets             off;
ssl_prefer_server_ciphers       off;
ssl_stapling                    on;
ssl_stapling_verify             on;
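
One thing to note: for stapling to work, Nginx needs to resolve the OCSP responder’s hostname, so a resolver should be defined somewhere in the config. Which resolver you use is up to you; the public ones below are purely an example:

resolver           1.1.1.1 8.8.8.8 valid=300s;
resolver_timeout   5s;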

Logging

To make logs easier to parse and search, I’ve implemented JSON logging, which is particularly useful for troubleshooting and gathering statistics. This was something I picked up from velebit.ai.


# Modified version of JSON logging: https://www.velebit.ai/blog/nginx-json-logging/
log_format custom escape=json '{"source": "nginx", "time": "$time_iso8601", "resp_body_size": $body_bytes_sent, "host": "$http_host", "address": "$remote_addr", "request_length": $request_length, "method": "$request_method", "uri": "$request_uri", "status": $status, "user_agent": "$http_user_agent", "referrer" : "$http_referer", "resp_time": "$request_time", "upstream_addr": "$upstream_addr"}';
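
The format only takes effect once an access_log directive references it by name; the path below is just an example:

access_log /var/log/nginx/access.log custom;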

Final Thoughts

This Nginx config has served me well in production, and while it might need slight adjustments for your environment, it should be a solid starting point for most setups. If you’re automating Certbot deployment, don’t forget to copy the nginx-reload.sh script into the Let’s Encrypt renewal hooks directory; that way Nginx reloads and picks up the new certs whenever they’re renewed.
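
A minimal version of such a hook, dropped into Certbot’s renewal-hooks/deploy directory, might look like this (the actual nginx-reload.sh may differ):

#!/bin/sh
# Validate the config before reloading so a bad cert path doesn't take the
# site down, then reload Nginx to pick up the renewed certificates.
nginx -t && systemctl reload nginx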

Anyway, I hope you find this config helpful!