The Story of Logs and the SSH Nightmare
We’ve all been there: your app throws a 500 error, and the first thing you do is open a terminal, SSH into the server, and run `tail -f` on a mess of log files. This works fine for one or two servers. But as your system grows into dozens of VPS nodes running microservices, jumping between terminal windows to find a root cause becomes a real nightmare.
I used to manage a 15-server cluster with Prometheus and Grafana. While this setup is great for quick detection, it only tells you when something is wrong (like a CPU spike to 90%). To understand why an app crashed, I still had to manually dig through logs. That’s when I realized I needed a proper Centralized Logging system.
Why Choose Fluentd and OpenSearch Over ELK Stack?
When it comes to logging, the ELK Stack (Elasticsearch – Logstash – Kibana) is the gold standard. But there’s a catch: ELK is resource-heavy. Logstash runs on the JVM, and just starting it up can consume 500MB to 1GB of RAM.
After struggling to maintain ELK on cheap VPS instances, I switched to the Fluentd and OpenSearch combo. Here’s why it’s a better choice:
- Fluentd: Written in C and Ruby, it’s incredibly lightweight. It only uses about 40MB – 100MB of RAM to process the same log volume as Logstash.
- OpenSearch: An open-source fork of Elasticsearch. It retains that blazing-fast search power but with a more friendly license (Apache 2.0).
- Ecosystem: Fluentd boasts over 500 plugins. You can route logs anywhere—from S3 to Slack or Telegram—with just a few lines of config.
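As a taste of that flexibility, here is a minimal sketch of routing matched logs to S3. It assumes the `fluent-plugin-s3` gem is installed; the bucket name, region, and credentials are placeholders:

```
<match archive.**>
  @type s3                      # provided by fluent-plugin-s3
  aws_key_id YOUR_ACCESS_KEY    # placeholder credentials
  aws_sec_key YOUR_SECRET_KEY
  s3_bucket my-log-archive      # placeholder bucket name
  s3_region us-east-1
  path logs/
</match>
```

Swapping the output is just a matter of changing the `@type` and its plugin-specific options; the rest of the pipeline stays untouched.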
System Preparation
To give you a clear picture, we’ll use Docker Compose to spin up a practical environment consisting of:
- OpenSearch: The heart for data storage and search.
- OpenSearch Dashboards: A visual interface for viewing logs (replacing Kibana).
- Fluentd: The agent that collects and ships logs to the storage engine.
Installing OpenSearch and Fluentd with Docker Compose
First, create a project directory with a `docker-compose.yml` file inside it. I prefer Docker because scaling or migrating servers is as simple as copying the folder and running it.
```yaml
version: '3'
services:
  opensearch:
    image: opensearchproject/opensearch:latest
    container_name: opensearch
    environment:
      - cluster.name=opensearch-cluster
      - discovery.type=single-node
      - bootstrap.memory_lock=true
      - "OPENSEARCH_JAVA_OPTS=-Xms512m -Xmx512m" # Limit to 512MB RAM for efficiency
      # Note: OpenSearch images >= 2.12 also require OPENSEARCH_INITIAL_ADMIN_PASSWORD
    ulimits:
      memlock:
        soft: -1
        hard: -1
    ports:
      - 9200:9200
  opensearch-dashboards:
    image: opensearchproject/opensearch-dashboards:latest
    container_name: opensearch-dashboards
    ports:
      - 5601:5601
    environment:
      OPENSEARCH_HOSTS: '["https://opensearch:9200"]'
  fluentd:
    build: ./fluentd
    container_name: fluentd
    volumes:
      - ./fluentd/conf:/fluentd/etc
    links:
      - "opensearch"
    ports:
      - "24224:24224"
      - "24224:24224/udp"
```
Important Note: The base Fluentd image doesn’t include the OpenSearch plugin by default. We need to create a custom Dockerfile to install it:
```dockerfile
# ./fluentd/Dockerfile
FROM fluent/fluentd:v1.16-debian-1
USER root
RUN gem install fluent-plugin-opensearch
USER fluent
```
Configuring Fluentd to “Ingest” Logs
This is where we define the data flow. We’ll instruct Fluentd to listen on port 24224 and forward logs to OpenSearch. Create the ./fluentd/conf/fluent.conf file:
```
<source>
  @type forward
  port 24224
  bind 0.0.0.0
</source>

<match *.**>
  @type opensearch
  host opensearch
  port 9200
  scheme https
  user admin
  password admin
  ssl_verify false
  logstash_format true
  logstash_prefix itfromzero-logs
  flush_interval 5s
</match>
```
In this configuration, I’ve set `flush_interval 5s`, so logs are pushed every five seconds: close enough to real time for debugging without the wait.
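One caveat: with no buffer section, pending chunks live in memory. A file-backed `<buffer>` inside the `<match>` block keeps logs on disk until OpenSearch acknowledges them. A minimal sketch, where the path and size limits are illustrative assumptions (`flush_interval` moves inside the buffer section):

```
<match *.**>
  @type opensearch
  # ... connection settings as above ...
  <buffer>
    @type file
    path /fluentd/buffer/opensearch  # mount this path as a volume so chunks survive restarts
    flush_interval 5s
    chunk_limit_size 8M
    total_limit_size 512M
    retry_max_interval 30
    overflow_action block            # apply backpressure instead of dropping logs
  </buffer>
</match>
```

With this in place, a temporary OpenSearch outage just delays delivery rather than losing data.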
Verifying the Results
Run `docker-compose up -d` and wait about a minute, then open http://your-ip:5601 and log in with the admin/admin credentials.
To see your logs, follow these three steps:
- Go to Stack Management -> Index Patterns.
- Create a new index pattern: `itfromzero-logs-*`.
- Open Discover to see your logs in action.
Let’s try sending a simulated log entry:
echo '{"message": "Test log from IT From Zero", "level": "info"}' | fluent-cat debug.test
The log line will appear on the web interface instantly. This feeling is far more satisfying than staring at a screen while running grep through thousands of lines of black-and-white text.
Hard-Learned Lessons from the Field
While deploying this system, I picked up a few valuable lessons for you:
- Buffer is your insurance: Always configure a `<buffer>` section. If OpenSearch goes down temporarily, Fluentd will store logs on disk and resync them later, preventing the loss of critical data.
- Permission errors: If you let Fluentd read files directly from `/var/log`, ensure the `fluent` user has read permissions. I once wasted an entire afternoon just because I forgot to `chmod` a log file.
- Leverage Docker log drivers: You can configure all Docker containers on other servers to ship logs directly to your central Fluentd instance. This keeps your setup lean by removing the need for agents on every single node.
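For that last lesson, here is a minimal sketch of a remote node’s `/etc/docker/daemon.json` pointing Docker’s fluentd log driver at the central instance (the address is a placeholder for your own Fluentd host):

```json
{
  "log-driver": "fluentd",
  "log-opts": {
    "fluentd-address": "central-fluentd.example.com:24224",
    "tag": "docker.{{.Name}}",
    "fluentd-async": "true"
  }
}
```

Restart the Docker daemon after editing the file. The same options also work per container: `docker run --log-driver fluentd --log-opt fluentd-address=<host>:24224 ...`.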
Building a centralized logging system is a mandatory step if you want to advance in DevOps or SRE. With Fluentd and OpenSearch, you have a weapon that is powerful enough yet light enough to run smoothly on even the cheapest VPS.

