The Problem Without Logging
My automation project started at just 200 lines — everything worked fine, and debugging with print() was enough. Then the project grew to 2,000 lines and started running as a 24/7 service on a VPS. That’s when it hit me: when an error occurs at 3 AM, there’s nothing to look back on except… a black screen.
print() doesn’t save to a file. No timestamps. No idea which module the error came from. And more importantly — when production crashes, you need to know what happened before the crash, not just the final error message.
Python’s standard library logging module was built to solve exactly that problem. Learn it once, use it forever.
Core Concepts to Understand
What Are Log Levels?
Python logging has 5 severity levels, from lowest to highest:
- DEBUG (10) — Technical details, used during development
- INFO (20) — Normal operational information
- WARNING (30) — Something unexpected, not an error yet but worth attention
- ERROR (40) — An error occurred, but the program can still continue
- CRITICAL (50) — A serious error that may cause the program to stop
A logger only records messages with a level equal to or higher than the configured threshold. Set it to WARNING and both DEBUG and INFO messages are silently ignored — useful for reducing noise in production.
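This threshold behavior is easy to verify with a throwaway logger (the name `threshold_demo` is purely illustrative):

```python
import logging

# A logger set to WARNING drops anything below that level
logger = logging.getLogger("threshold_demo")
logger.addHandler(logging.StreamHandler())
logger.setLevel(logging.WARNING)

logger.debug("invisible")           # 10 < 30, silently dropped
logger.info("invisible")            # 20 < 30, silently dropped
logger.warning("this one appears")  # 30 >= 30, emitted

# isEnabledFor() reports whether a given level would pass the threshold
assert not logger.isEnabledFor(logging.INFO)
assert logger.isEnabledFor(logging.ERROR)
```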
Three Components to Know
These three work together — missing any one of them means your logs either won’t appear or will end up in the wrong place:
- Logger — Where you call `log.info()`, `log.error()`, etc. Each logger has a name, typically set to the module name.
- Handler — Determines where the log goes: console, file, email, HTTP endpoint, etc.
- Formatter — Determines what the log looks like: whether it includes a timestamp, module name, and how it’s formatted.
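A minimal sketch wiring the three components together by hand (logger name and format string are just examples):

```python
import logging

# Logger: the named object you call log methods on
log = logging.getLogger("demo")
log.setLevel(logging.INFO)

# Handler: routes records to a destination (the console here)
handler = logging.StreamHandler()

# Formatter: controls the shape of each line
handler.setFormatter(logging.Formatter("%(levelname)s %(name)s: %(message)s"))

log.addHandler(handler)
log.info("all three components wired")  # prints "INFO demo: all three components wired"
```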
Step-by-Step Walkthrough
Step 1: Basic Setup in 5 Lines
```python
import logging

logging.basicConfig(
    level=logging.DEBUG,
    format="%(asctime)s - %(levelname)s - %(message)s"
)

logging.info("Script started")
logging.warning("Warning: config file not found, using defaults")
logging.error("Database connection failed")
```
Output:

```
2026-03-25 10:15:32,104 - INFO - Script started
2026-03-25 10:15:32,105 - WARNING - Warning: config file not found, using defaults
2026-03-25 10:15:32,106 - ERROR - Database connection failed
```
Works immediately, but logs only appear on the console. Close the terminal and they’re gone.
Step 2: Writing Logs to a File
```python
import logging

logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s - %(name)s - %(levelname)s - %(message)s",
    handlers=[
        logging.FileHandler("app.log", encoding="utf-8"),
        logging.StreamHandler()  # Still prints to console
    ]
)

log = logging.getLogger(__name__)
log.info("Database connection established successfully")
```
Naming your logger with __name__ is the standard approach — when your project has dozens of modules, you can tell at a glance exactly which file a log message came from, no guessing required. This pattern becomes especially valuable in larger projects; for a practical example of how it scales, see automating workflows with Python smtplib where per-module logging helps trace exactly where delivery failures occur.
Step 3: Log Rotation — Prevent Log Files From Bloating
Running a 24/7 service for months without rotation can easily result in log files consuming several gigabytes. On a 20GB VPS, that’s a real problem — and if you’re running that service on Linux, it’s worth reviewing how to optimize Linux server performance for production to keep disk I/O from becoming a bottleneck. RotatingFileHandler handles this cleanly:
```python
import logging
from logging.handlers import RotatingFileHandler

log = logging.getLogger("myapp")
log.setLevel(logging.DEBUG)

# Max 5MB per file, keep 3 backup files
handler = RotatingFileHandler(
    "app.log",
    maxBytes=5 * 1024 * 1024,  # 5MB
    backupCount=3,
    encoding="utf-8"
)

formatter = logging.Formatter(
    "%(asctime)s - %(name)s - %(levelname)s - %(message)s"
)
handler.setFormatter(formatter)
log.addHandler(handler)

log.info("Logger is ready")
```
Result: app.log, app.log.1, app.log.2, app.log.3 — a maximum of about 20MB total, automatically rotating when full.
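If rotating by time suits your service better than rotating by size, the same `logging.handlers` module provides `TimedRotatingFileHandler`. A sketch with daily rotation (the retention of 7 backups is an example choice, not a requirement):

```python
import logging
from logging.handlers import TimedRotatingFileHandler

log = logging.getLogger("myapp.timed")
log.setLevel(logging.INFO)

# Rotate at midnight, keep 7 daily backups (suffixed like app.log.2026-03-25)
# delay=True postpones opening the file until the first record is written
handler = TimedRotatingFileHandler(
    "app.log", when="midnight", backupCount=7, encoding="utf-8", delay=True
)
handler.setFormatter(
    logging.Formatter("%(asctime)s - %(name)s - %(levelname)s - %(message)s")
)
log.addHandler(handler)
```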
Step 4: Organizing Logging by Module
As the project grows, each module needs its own logger. Don’t copy-paste setup code everywhere — create a centralized setup function you can call from anywhere:
```python
# logger.py
import logging
from logging.handlers import RotatingFileHandler

def setup_logger(name: str, log_file: str = "app.log", level=logging.INFO):
    logger = logging.getLogger(name)
    if logger.handlers:  # Avoid adding duplicate handlers
        return logger

    logger.setLevel(level)
    fmt = logging.Formatter(
        "%(asctime)s [%(name)s] %(levelname)s: %(message)s",
        datefmt="%Y-%m-%d %H:%M:%S"
    )

    fh = RotatingFileHandler(log_file, maxBytes=5_000_000, backupCount=3)
    fh.setFormatter(fmt)
    ch = logging.StreamHandler()
    ch.setFormatter(fmt)

    logger.addHandler(fh)
    logger.addHandler(ch)
    return logger
```
Each module only needs two lines:
```python
# database.py
from logger import setup_logger

log = setup_logger(__name__)

def connect(url):
    log.info(f"Connecting to {url}")
    # ...
    log.debug("Connection pool initialized")
```
Step 5: Logging Exceptions the Right Way
The most common mistake: logging an exception but forgetting the traceback, leaving you with no idea which line the error occurred on.
```python
# WRONG — only the message, traceback is lost
try:
    result = 10 / 0
except ZeroDivisionError as e:
    log.error(f"Error: {e}")

# CORRECT — full traceback preserved
try:
    result = 10 / 0
except ZeroDivisionError:
    log.exception("Calculation failed")  # Automatically attaches stack trace

# Alternative, equivalent approach
try:
    result = 10 / 0
except ZeroDivisionError:
    log.error("Calculation failed", exc_info=True)
```
log.exception() can only be used inside an except block. It’s equivalent to log.error() with a stack trace attached — when you’re debugging a production issue at 2 AM, a full traceback is the most valuable thing you can have. This is the same lesson behind securing AI API keys at 2 AM: proper observability is what separates a quick recovery from hours of blind guessing.
Step 6: Adding Context to Logs — Tracing Concurrent Tasks
A service handling 10 simultaneous tasks, all writing to the same log file — it becomes a mess fast. LoggerAdapter lets you attach context (such as a task ID) to every log line within a scope:
```python
import logging

# Formatter must include %(task_id)s to display context
fmt = logging.Formatter("%(asctime)s [%(task_id)s] %(levelname)s: %(message)s")
handler = logging.StreamHandler()
handler.setFormatter(fmt)

base_log = logging.getLogger("worker")
base_log.addHandler(handler)
base_log.setLevel(logging.DEBUG)

def process_task(task_id: str, data: dict):
    log = logging.LoggerAdapter(base_log, {"task_id": task_id})
    log.info("Starting task")
    log.info(f"Processing {len(data)} items")
    # Output:
    # 2026-03-25 10:00:01 [abc-123] INFO: Starting task
    # 2026-03-25 10:00:01 [abc-123] INFO: Processing 0 items

process_task("abc-123", {})
process_task("xyz-456", {})
```
Even when log lines from two tasks are interleaved, you can still filter down to exactly the task you need — just run grep abc-123 app.log. If you’re building more complex automation pipelines in Python, the Python requests tutorial for API calls covers how to log HTTP interactions at the right verbosity level for each environment.
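For a one-off call you don't need a full adapter: the `extra=` argument to a log method injects the same fields into a single record. A sketch reusing the hypothetical `task_id` field from above:

```python
import logging

fmt = logging.Formatter("%(asctime)s [%(task_id)s] %(levelname)s: %(message)s")
handler = logging.StreamHandler()
handler.setFormatter(fmt)

log = logging.getLogger("worker.extra")
log.addHandler(handler)
log.setLevel(logging.INFO)

# extra= adds attributes to this one record only; the formatter then renders them
log.info("Starting task", extra={"task_id": "abc-123"})
```

One caveat of this approach: every record passing through that formatter must supply `task_id`, or formatting will fail, which is exactly the gap `LoggerAdapter` closes for a whole scope.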
Conclusion
Looking back at my automation project — from 200 lines of print() to 2,000 lines with a full logging system — the biggest difference isn’t felt while coding, it’s felt when something goes wrong. Log rotation at 5MB × 3 backups lets you look back over a full week of activity. log.exception() ensures you never lose a traceback again.
A few key things to remember:
- Use `logging.getLogger(__name__)` instead of `print()` from day one — low cost, high payoff
- Configure `RotatingFileHandler` so logs don't grow indefinitely
- Always use `log.exception()` inside `except` blocks — never lose a traceback
- Extract the logger setup into its own module to avoid copy-pasting
- Use `INFO` in production, `DEBUG` in development — don't let DEBUG slip into production
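One simple way to enforce that last rule is to read the level from an environment variable; the variable name `LOG_LEVEL` here is a common convention, not a standard:

```python
import logging
import os

# Fall back to INFO when LOG_LEVEL is unset or unrecognized
level_name = os.environ.get("LOG_LEVEL", "INFO")
level = getattr(logging, level_name.upper(), logging.INFO)

logging.basicConfig(
    level=level,
    format="%(asctime)s - %(levelname)s - %(message)s",
)
```

Then `LOG_LEVEL=DEBUG python app.py` gives verbose output locally while production, with the variable unset, stays at `INFO`.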
On your next project, set up logging before writing any business logic. Pair it with good code review practices to catch issues before they reach production. When something goes wrong — and it will — you’ll thank yourself for it.