When Microservices Systems Start to Scale
During the process of upgrading infrastructure from CentOS 7 to CentOS Stream 9, I realized that installing the OS is the easy part—managing traffic is the real challenge. When you have dozens of Node.js, Python, or Go services running on scattered ports like 3000, 8000, or 5000, exposing those ports directly to clients is a major security risk.
This is where Nginx acts as a reliable gatekeeper. After six months of running it in production with roughly 50,000 requests per minute, I’ve found Nginx on CentOS Stream 9 to be incredibly stable. It doesn’t just mask your backend; it handles Load Balancing and SSL encryption, freeing up your backend servers to focus on business logic.
Why Choose Nginx Over Apache?
Many developers still prefer Apache due to familiarity. However, if your priority is handling thousands of concurrent connections with extremely low RAM usage, Nginx remains the “king.” On CentOS Stream 9, Nginx integrates seamlessly with dnf and systemd. Notably, its combination with SELinux creates a very tight security layer for production environments.
The workflow is simple: User → Internet → Nginx (Port 80/443) → Backend Services. Nginx sits in the middle, acting as the orchestrator for all data flow.
Step 1: Installation and Environment Preparation
First, update the system to ensure all security libraries are up to date.
sudo dnf update -y
sudo dnf install nginx -y
Enable the service so Nginx starts automatically on system boot:
sudo systemctl enable --now nginx
sudo systemctl status nginx
Don’t forget to open the firewall. On CentOS Stream 9, firewalld blocks HTTP/HTTPS by default, and forgetting this is the most common oversight that prevents website access even when the service is running.
sudo firewall-cmd --permanent --add-service=http
sudo firewall-cmd --permanent --add-service=https
sudo firewall-cmd --reload
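Before testing from the outside, it’s worth confirming both the firewall state and that Nginx actually answers locally. A quick sanity check:

```shell
# Confirm the services are now allowed through firewalld
sudo firewall-cmd --list-services     # should include: http https

# Check that Nginx answers on the loopback interface
curl -I http://127.0.0.1              # expect an HTTP/1.1 200 OK for the default page
```

If the local curl works but remote access still fails, the problem is outside the server (cloud security groups, DNS), not Nginx or firewalld.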
Step 2: Practical Reverse Proxy Configuration
Instead of modifying the main nginx.conf file, I usually create separate files in /etc/nginx/conf.d/. This approach allows you to manage dozens of different domains without overlapping configurations.
For example, to route traffic to a Node.js app on port 3000, create the file /etc/nginx/conf.d/app.conf:
server {
    listen 80;
    server_name app.itfromzero.com;

    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
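After saving the file, validate the configuration and apply it. With a matching Host header you can exercise the new vhost even before DNS points at the server (the domain below is the example one from the config):

```shell
sudo nginx -t                    # catch syntax errors before they take the site down
sudo systemctl reload nginx      # graceful reload, no dropped connections

# Hit the new vhost locally; the Host header selects the right server block
curl -I -H "Host: app.itfromzero.com" http://127.0.0.1
```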
Pro tip: the proxy_set_header lines are crucial. Without them, the backend sees every request as coming from 127.0.0.1 (the proxy itself) rather than the real client, which breaks anything that depends on the visitor’s IP, such as identification or rate limiting.
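To see exactly which headers the backend receives, you can temporarily stand in for the app with a one-shot dummy backend. This is a sketch, assuming the ncat-flavored nc that CentOS Stream 9 ships (flags differ between netcat implementations):

```shell
# One-shot dummy backend on the app's port: it replies "ok" and prints the
# raw request Nginx forwarded, proxy headers included
printf 'HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nok' | nc -l 127.0.0.1 3000
```

Then request the site through Nginx from another terminal, and you should see X-Real-IP and X-Forwarded-For carrying the client’s real address.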
Step 3: Optimizing Load Balancing for High Traffic
When user traffic spikes, a single backend can easily become overloaded. This is where the upstream directive comes into play, allowing Nginx to automatically distribute requests across a list of specified servers.
upstream backend_servers {
    least_conn;   # Prioritize the server with the fewest connections
    server 192.168.1.10:3000 weight=3;
    server 192.168.1.11:3000;
    server 192.168.1.12:3000 backup;
}

server {
    listen 80;
    server_name app.itfromzero.com;

    location / {
        proxy_pass http://backend_servers;
    }
}
In practice, I prefer least_conn over the default Round Robin. It handles requests more intelligently when workloads vary in intensity. Additionally, the weight parameter helps you leverage servers with more powerful CPU/RAM resources.
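To confirm the distribution actually matches your weights, you can log the chosen upstream and tally it. This sketch assumes a log_format whose last field is $upstream_addr; the file below is a small illustrative sample rather than a real access log:

```shell
# Illustrative sample of a log whose last field is $upstream_addr
# (in production this would be /var/log/nginx/access.log with a custom log_format)
cat > /tmp/upstream_sample.log <<'EOF'
GET / 200 192.168.1.10:3000
GET / 200 192.168.1.10:3000
GET / 200 192.168.1.10:3000
GET / 200 192.168.1.11:3000
EOF

# Tally requests per backend: the weight=3 server should dominate
awk '{print $NF}' /tmp/upstream_sample.log | sort | uniq -c | sort -rn
```

On real traffic, a tally that diverges wildly from the configured weights usually means health checks are kicking servers out of rotation.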
Step 4: Install Let’s Encrypt SSL in 30 seconds
Running a public website without HTTPS is simply not viable today. Nginx handles the TLS handshake and decryption (SSL termination), saving your backend approximately 20-30% in CPU resources by offloading the encryption work.
Using Certbot is the fastest way to automate certificates:
sudo dnf install epel-release -y
sudo dnf install certbot python3-certbot-nginx -y
sudo certbot --nginx -d app.itfromzero.com
Certbot will automatically modify the Nginx configuration and set up a renewal mechanism. You no longer need to keep track of certificate expiration dates.
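You can verify that the renewal machinery works without touching your real certificates:

```shell
# Simulate a full renewal against Let's Encrypt's staging environment
sudo certbot renew --dry-run

# Confirm the systemd timer that drives automatic renewal is scheduled
systemctl list-timers 'certbot*'
```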
Resolving “502 Bad Gateway” Errors Caused by SELinux
This is the most frustrating part for CentOS newcomers. Even if Nginx is configured correctly, the firewall is open, and the backend is running, you might still get a 502 error. The Nginx error log (/var/log/nginx/error.log) will show “Permission denied.”
The main culprit is SELinux. By default, it blocks Nginx from making outbound network connections. To resolve this permanently, run the following command:
sudo setsebool -P httpd_can_network_connect 1
The -P flag ensures this configuration persists after a server reboot. Never disable SELinux outright (setenforce 0 merely switches it to permissive mode, and setting SELINUX=disabled in /etc/selinux/config is worse), as that would weaken one of your system’s most vital defense layers.
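To check whether this boolean is the culprit before and after the fix, and to dig into any remaining denials:

```shell
# Inspect the current value (expect "off" on a fresh install, "on" after the fix)
getsebool httpd_can_network_connect

# If 502s persist, look for recent AVC denials involving nginx
sudo ausearch -m avc -ts recent 2>/dev/null | grep nginx
```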
Conclusion
Nginx on CentOS Stream 9 is incredibly powerful if you master upstream configurations and SELinux management. Decoupling the Proxy layer makes your infrastructure more flexible, allowing you to easily maintain old servers or add new backends without touching application code. One final note: Always run nginx -t before restarting to avoid system downtime due to syntax errors.