Why PostgreSQL Default Logs Aren’t Enough
As a DevOps engineer or Database Administrator (DBA), you’ve likely faced a frustrating situation: waking up to find a critical users table missing or order data modified, only to find vague information in the system logs. PostgreSQL has a log_statement parameter, but setting it to all can cause log files to hit tens of GBs within hours because it records everything, including internal system commands.
The problem is that we need detail with selectivity. We need to know exactly Who deleted a table, When they did it, and Which records were affected. This is a mandatory requirement for auditing and meeting strict security standards like PCI-DSS or SOC2. That’s why I choose pgaudit (PostgreSQL Audit Extension) as the solution.
pgaudit – The “Divine Eye” for Your Database
Unlike standard logging mechanisms, pgaudit allows for extremely detailed recording of database actions through PostgreSQL’s standard logging infrastructure. In my experience, pgaudit typically incurs only about 5-7% performance overhead—a perfectly acceptable trade-off for the security benefits it provides.
There are two core modes you should distinguish between:
- Session Audit Logging: Records all activities of a specific user during their session. This is the most common way to monitor high-privileged accounts (Superusers).
- Object Audit Logging: Focuses on sensitive tables. For example, you might only want to track who touched the credit_cards table while ignoring less critical logs to save resources.
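For object-level auditing, pgaudit uses a dedicated "audit role": whatever privileges you grant to that role on a table determine which operations on it get logged. A minimal sketch, assuming a role named auditor and the credit_cards table mentioned above (adjust names to your environment):

```sql
-- Create a dedicated, non-login role that only exists to define audit scope
CREATE ROLE auditor NOLOGIN;
ALTER SYSTEM SET pgaudit.role = 'auditor';
SELECT pg_reload_conf();

-- From now on, any SELECT/UPDATE/DELETE touching credit_cards is audited,
-- regardless of which user actually runs the statement
GRANT SELECT, UPDATE, DELETE ON credit_cards TO auditor;
```

Because logging follows the grants rather than a global setting, other tables stay out of the log entirely.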
A major plus is that pgaudit doesn’t replace the old logging mechanism but runs alongside it, making centralized log management through tools like ELK incredibly simple.
Getting Started: Installing pgaudit on Your Server
Here are the steps I performed on Ubuntu 22.04 and PostgreSQL 15. For other versions, simply change the version number accordingly.
1. Install the Package
First, download the extension from the official OS repository:
sudo apt update
sudo apt install postgresql-15-pgaudit
2. Configure System Settings
Once installed, pgaudit isn’t active yet. You need to force Postgres to “load” it on startup by editing the postgresql.conf file (usually located at /etc/postgresql/15/main/postgresql.conf).
Find the shared_preload_libraries line and add pgaudit:
shared_preload_libraries = 'pgaudit'
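If shared_preload_libraries already loads other libraries (pg_stat_statements is a common example), append pgaudit with a comma instead of replacing the line:

```
shared_preload_libraries = 'pg_stat_statements,pgaudit'
```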
Then, restart the service for the changes to take effect. Remember to check the service status after restarting:
sudo systemctl restart postgresql
sudo systemctl status postgresql
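Once the service is back up, you can also confirm from psql that the library was actually loaded:

```sql
-- The output should include 'pgaudit'
SHOW shared_preload_libraries;
```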
3. Enable the Extension in the Database
The final step is to log into psql and create the extension for the target database:
CREATE EXTENSION pgaudit;
Run the \dx command to ensure pgaudit appears in the list of installed extensions.
Detailed Configuration: Auditing the Right Actions
This is the most crucial part to avoid being flooded with logs. pgaudit categorizes commands into several groups: READ, WRITE, FUNCTION, ROLE, DDL, MISC, and more.
A practical use case I often employ is logging only schema and data changes to save disk space:
-- Only log DDL commands (create/drop tables), WRITE (insert/update), and ROLE (create user)
ALTER SYSTEM SET pgaudit.log = 'write, ddl, role';
-- Don't forget to reload for Postgres to apply the new configuration
SELECT pg_reload_conf();
Note: If you enable READ for tables with millions of queries per minute, your log files will balloon rapidly. Consider this carefully!
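If a single chatty application account is the source of the flood, note that pgaudit settings can also be scoped per role or per database instead of system-wide. A sketch, where app_user and app_db are hypothetical names:

```sql
-- Silence session audit logging for a trusted, high-volume app account
ALTER ROLE app_user SET pgaudit.log = 'none';

-- Or narrow the audit scope to a single database
ALTER DATABASE app_db SET pgaudit.log = 'write, ddl';
```

Role- and database-level settings take effect on the next connection, so existing sessions keep their old configuration until they reconnect.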
Pro tip: When dealing with massive logs or needing to convert customer data from CSV to JSON to debug query errors, I often use the tool at toolcraft.app/en/tools/data/csv-to-json. It runs entirely in the browser, so sensitive data isn’t uploaded to a server—extremely safe for DB pros.
Testing: Is the “Camera” Rolling?
Let’s try creating a table and deleting it to see what the logs show:
CREATE TABLE secret_info (id serial, content text);
INSERT INTO secret_info (content) VALUES ('Sensitive customer data');
DROP TABLE secret_info;
Open the log file at /var/log/postgresql/postgresql-15-main.log, and you’ll see these “golden” lines:
AUDIT: SESSION,1,1,DDL,CREATE TABLE,TABLE,public.secret_info,CREATE TABLE secret_info (id serial, content text);,<not logged>
AUDIT: SESSION,2,1,WRITE,INSERT,TABLE,public.secret_info,INSERT INTO secret_info (content) VALUES ('Sensitive customer data');,<not logged>
AUDIT: SESSION,3,1,DDL,DROP TABLE,TABLE,public.secret_info,DROP TABLE secret_info;,<not logged>
The information is crystal clear: you know exactly which command dropped the secret_info table and when. No more excuses!
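Since the payload after "AUDIT: " is plain CSV, a quick shell one-liner is often enough to pull out just the statement class, command, and object. This is only a sketch for simple entries like the ones above; statements whose text contains commas would need a real CSV parser:

```shell
# Sample pgaudit entry as shown above; in practice you would
# grep 'AUDIT:' from /var/log/postgresql/postgresql-15-main.log
log_line='AUDIT: SESSION,3,1,DDL,DROP TABLE,TABLE,public.secret_info,DROP TABLE secret_info;,<not logged>'

# Fields 4, 5, and 7 of the CSV payload are the class, command, and object name
echo "$log_line" | sed 's/^AUDIT: //' | awk -F',' '{print $4, $5, $7}'
# prints: DDL DROP TABLE public.secret_info
```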
Survival Tips for Performance and Storage
Don’t let security enthusiasm crash your server. Continuous logging can exhaust disk I/O if not optimized:
- Smart Log Filtering: Only enable pgaudit.log = 'all' when strictly necessary for debugging serious issues, and only for short periods.
- Configure Log Rotation: Ensure PostgreSQL automatically rotates log files by date or size (e.g., 100MB). Otherwise, the server might freeze due to a full disk overnight.
- Offload Logs: For production systems, use rsyslog or Fluentd to centralize logs in ELK or Grafana Loki. This allows for fast searching without taxing the database server.
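The rotation advice above maps to a handful of postgresql.conf settings; the values here just mirror the example thresholds from this section, not universal recommendations:

```
logging_collector = on       # let Postgres manage its own log files
log_rotation_age = 1d        # start a new file every day...
log_rotation_size = 100MB    # ...or once the current file reaches 100MB
```

Reload the configuration afterwards (pg_reload_conf() or systemctl reload) for these to take effect.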
Final Thoughts
Implementing pgaudit is a professional step toward upgrading your PostgreSQL security. It removes ambiguity and makes incident investigation many times faster than sifting through default logs.
I hope this article gives you more confidence when working with Database Auditing. If you encounter any issues during installation, feel free to leave a comment, and I’ll help you out!

