Supercharge Website Speed with Varnish Cache and Nginx on CentOS Stream 9

CentOS tutorial - IT technology blog

Boost Website Speed with Varnish Cache: A Practical Solution for CentOS Stream 9

A slow-loading website is the fastest way to lose users and get downranked by Google. I once managed a high-traffic e-commerce system running on CentOS 7. Despite optimizing Nginx and PHP-FPM, the server would still struggle during peak sale seasons. After placing Varnish Cache in front of Nginx, the results were night and day. Server CPU usage dropped from 80% to just 5-10%, while the Time to First Byte (TTFB) was reduced to milliseconds.

Here is how I implemented this model on CentOS Stream 9. It’s currently the most stable choice to replace the now-discontinued CentOS 7.

Quick Start: 5-Minute Setup

If you have a fresh CentOS Stream 9 server, run the following commands to get a working stack up in minutes.

Step 1: Install Nginx and Varnish

# Install EPEL repository
sudo dnf install epel-release -y

# Install Nginx and Varnish
sudo dnf install nginx varnish -y

# Enable services
sudo systemctl enable --now nginx
sudo systemctl enable --now varnish

Step 2: Move Nginx to Port 8080

Typically, Nginx occupies port 80, but we need to reserve this for Varnish. Open your Nginx configuration file and update the port:

# Change both the IPv4 and IPv6 listen directives
sudo sed -i -e 's/listen       80;/listen       8080;/g' \
            -e 's/listen       \[::\]:80;/listen       [::]:8080;/g' /etc/nginx/nginx.conf
sudo nginx -t && sudo systemctl restart nginx
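Before moving on, it's worth confirming that Nginx actually came up on the new port (this assumes the stock CentOS config and a server listening on 8080):

```shell
# Check the config syntax, then confirm something answers on 8080
sudo nginx -t
curl -sI http://127.0.0.1:8080 | head -n 1   # expect an HTTP status line from Nginx
```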

Step 3: Configure Varnish to Listen on Port 80

To have Varnish act as the “frontline,” you need to change its default port:

sudo systemctl edit --full varnish.service

Find the ExecStart line and change port 6081 to 80. Keep the other parameters as they are:

ExecStart=/usr/sbin/varnishd -a :80 -f /etc/varnish/default.vcl -s malloc,256m

Finally, reload the configuration and restart the service:

sudo systemctl daemon-reload
sudo systemctl restart varnish

Now, Varnish is ready to receive requests from users and forward them to Nginx on port 8080 when necessary.
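A quick way to confirm the whole chain works is to request the site twice through Varnish and inspect the response headers (run on the server itself; the page must be cacheable for the second request to show an `Age` greater than zero):

```shell
# First request: Varnish fetches from Nginx (cache miss)
curl -sI http://127.0.0.1/ | grep -iE 'via|x-varnish|age'

# Second request: on a cache hit, Age grows and X-Varnish
# shows two IDs (this request plus the original backend fetch)
curl -sI http://127.0.0.1/ | grep -iE 'via|x-varnish|age'
```

A `Via: 1.1 varnish` header on either response confirms Varnish is in front.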

Why Does This Model Reduce Server Load?

Varnish Cache acts as an ultra-fast buffer. Instead of Nginx constantly calling PHP to process requests and querying the database, Varnish stores a copy of the results in RAM. When the next user visits, Varnish serves the result instantly. This allows the system to handle tens of thousands of requests per second without needing expensive hardware upgrades.
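You can put a rough number on that claim yourself with a load generator. Here is a sketch using `ab` from the `httpd-tools` package (the request counts are arbitrary examples); comparing port 80 (through Varnish) against port 8080 (Nginx direct) makes the difference visible:

```shell
# Install ApacheBench
sudo dnf install httpd-tools -y

# 1000 requests, 50 concurrent, through Varnish...
ab -n 1000 -c 50 http://127.0.0.1/

# ...and the same load straight at Nginx for comparison
ab -n 1000 -c 50 http://127.0.0.1:8080/
```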

Optimizing Logic with VCL (Varnish Configuration Language)

The true power of Varnish lies in the /etc/varnish/default.vcl file. Here is the configuration I typically use to optimize news or e-commerce websites:

vcl 4.1;

backend default {
    .host = "127.0.0.1";
    .port = "8080";
}

sub vcl_recv {
    # Only cache GET and HEAD requests for safety
    if (req.method != "GET" && req.method != "HEAD") {
        return (pass);
    }

    # Strip cookies for static files to increase Cache Hit Rate
    if (req.url ~ "\.(jpg|jpeg|gif|png|css|js|ico|gz|svg|woff2)$") {
        unset req.http.Cookie;
        return (hash);
    }

    # Bypass cache for admin pages
    if (req.url ~ "^/wp-admin" || req.url ~ "^/wp-login\.php") {
        return (pass);
    }
}

A small note: pay close attention to cookies. If they are not handled carefully, Varnish might cache one user's logged-in page and serve it to another.
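A common pattern for this (a sketch, not a drop-in config; the cookie names are WordPress examples) is to pass logged-in users straight to the backend and strip harmless tracking cookies from everyone else inside `vcl_recv`:

```vcl
sub vcl_recv {
    # Never cache for logged-in WordPress users or active PHP sessions
    if (req.http.Cookie ~ "(wordpress_logged_in|PHPSESSID)") {
        return (pass);
    }

    # Strip common analytics cookies so they don't fragment the cache
    if (req.http.Cookie) {
        set req.http.Cookie = regsuball(req.http.Cookie,
            "(^|; ) *(_ga|_gid|_gat|_fbp)=[^;]+;? *", "\1");
        if (req.http.Cookie == "") {
            unset req.http.Cookie;
        }
    }
}
```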

Solving the HTTPS Problem (SSL Termination)

Varnish has one weakness: it doesn’t support the HTTPS protocol directly. To solve this, we use a “Sandwich” model: Client (443) -> Nginx (SSL Decryption) -> Varnish (80) -> Nginx (8080).

Don’t worry too much about the complexity; the Nginx configuration for port 443 is quite simple:

server {
    listen 443 ssl http2;
    server_name example.com;

    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    location / {
        proxy_pass http://127.0.0.1:80; # Forward decrypted requests to Varnish
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
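One subtlety with this sandwich: Varnish sees every request as plain HTTP, so the HTTP and HTTPS versions of a page share one cache entry by default. If your backend behaves differently per scheme (for example, it issues HTTP-to-HTTPS redirects), a common fix is to add the `X-Forwarded-Proto` header to the cache key, a minimal sketch:

```vcl
sub vcl_hash {
    # Keep separate cache objects for http:// and https:// variants
    if (req.http.X-Forwarded-Proto) {
        hash_data(req.http.X-Forwarded-Proto);
    }
}
```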

Real-World Operational Tips

1. Monitor Hit Rate: Use the varnishstat command regularly. A well-configured system typically serves over 80% of requests from cache (compare the MAIN.cache_hit and MAIN.cache_miss counters). If this number is too low, you should review your Cookie-stripping logic in the VCL file.

2. Handle SELinux Barriers: SELinux is very strict on CentOS Stream 9. If Varnish cannot connect to the backend, don’t rush to disable SELinux. Instead, run sudo setsebool -P httpd_can_network_connect 1 to allow them to communicate.

3. Cache Invalidation (Purge): When you update a post, Varnish will still display the old content. You should install a plugin (like Proxy Cache Purge for WordPress) to automatically clear the cache as soon as you hit the “Update” button.
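If you would rather purge by hand, or your plugin sends `PURGE` requests, Varnish needs explicit VCL support for them. A minimal sketch, restricted to localhost:

```vcl
# Only these addresses may purge
acl purgers {
    "127.0.0.1";
}

sub vcl_recv {
    if (req.method == "PURGE") {
        if (!client.ip ~ purgers) {
            return (synth(405, "PURGE not allowed"));
        }
        return (purge);
    }
}
```

With this in place, `curl -X PURGE http://127.0.0.1/some/path/` evicts that URL from the cache.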

Combining Varnish and Nginx is like adding a turbocharger to an engine. It not only makes your website faster but also acts as a shield for your server against sudden traffic spikes. Good luck with your implementation!
