Guide to Using SQLite with Python: Lightweight Database for Automation Scripts and Small Applications

Database tutorial - IT technology blog

Quick Start (Get started in 5 minutes)

At 2 AM, the server threw an error: a nightly batch script had stopped abruptly because of a missing config. I needed a quick place to temporarily store some parameters without touching the production database. At that moment, SQLite was the savior. No installation, no configuration needed, just a single file. Here’s how I usually get a database up and running in under 5 minutes:

Step 1: Connect to (or create a new) database

import sqlite3

# Connect to the database; if 'my_app.db' doesn't exist, it will be created automatically
# Use ':memory:' if you want the database to exist only in RAM
conn = sqlite3.connect('my_app.db')
cursor = conn.cursor()
print("Database connection successful!")

Step 2: Create a table

A table is needed to store data. Suppose I want to store script error information:

cursor.execute('''
    CREATE TABLE IF NOT EXISTS error_logs (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        timestamp TEXT NOT NULL,
        error_message TEXT NOT NULL,
        severity TEXT
    )
''')
conn.commit()
print("Table 'error_logs' created successfully!")

Step 3: Insert data

Log errors. Always use parameterization to prevent SQL injection. I once spent half a day debugging because I forgot this:

import datetime

now = datetime.datetime.now().isoformat()
error_msg = "Error reading config file: file not found"
severity = "CRITICAL"

cursor.execute("INSERT INTO error_logs (timestamp, error_message, severity) VALUES (?, ?, ?)",
               (now, error_msg, severity))
conn.commit()
print("Data inserted successfully!")

Step 4: Query data

Review the logged errors:

cursor.execute("SELECT * FROM error_logs")
rows = cursor.fetchall()

for row in rows:
    print(row)

print("\nCRITICAL errors:")
cursor.execute("SELECT * FROM error_logs WHERE severity = ?", ("CRITICAL",))
critical_errors = cursor.fetchall()
for error in critical_errors:
    print(error)

Step 5: Don’t forget to close the connection

After finishing, always close the connection to release resources and avoid file locks or incomplete writes:

conn.close()
print("Database connection closed.")

In just a few lines of Python code, you have a fully functional database: quick and easy enough to save the day in the middle of the night.

Detailed Explanation (Dissecting Each Piece)

I’ve worked with MySQL, PostgreSQL, and MongoDB — each with its own strengths. MySQL is fast and easy for web apps. PostgreSQL is more advanced with enterprise features and high reliability. MongoDB is flexible with unstructured data. But there are times, like at 2 AM, when you need a quick place to temporarily store some config data or logs without touching a large production database, and that’s when SQLite truly becomes a lifesaver.

What is SQLite and why use it?

SQLite is a compact, self-contained C library that requires no separate server or complex configuration. The entire database lives in a single file on disk (or in RAM). Despite its size, it is a full-featured, ACID-compliant database.

Key Advantages:

  • Zero-configuration: Just import sqlite3 in Python.
  • File-based: The database is contained in a single file. Easy to back up, move, and share.
  • Compact and fast: Lightweight library, good performance for single-user tasks.
  • Free and open source.
  • Built-in to Python.

When does SQLite shine?

  • Automation scripts/Internal tools: Store logs, cache, configurations for Python scripts.
  • Desktop/mobile applications: Store offline data.
  • Development and testing: A quick and easy database for testing data features.
  • Lightweight websites: Low-traffic sites that don’t need many users writing data concurrently.
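The “development and testing” case is where ':memory:' shines: the whole database lives in RAM and disappears when the connection closes. A minimal sketch (the cache table and values are just illustrative):

```python
import sqlite3

# ':memory:' creates a throwaway database in RAM; nothing touches disk,
# so it's ideal for unit tests and quick experiments
conn = sqlite3.connect(':memory:')
cursor = conn.cursor()
cursor.execute("CREATE TABLE cache (key TEXT PRIMARY KEY, value TEXT)")
cursor.execute("INSERT INTO cache VALUES (?, ?)", ("api_token", "abc123"))

value = cursor.execute("SELECT value FROM cache WHERE key = ?",
                       ("api_token",)).fetchone()[0]
print(value)
conn.close()  # the database vanishes when the connection closes
```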

Table Structure and Data Types

SQLite supports basic data types: NULL, INTEGER, REAL, TEXT, BLOB. The familiar table creation syntax:

CREATE TABLE IF NOT EXISTS users (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    username TEXT NOT NULL UNIQUE,
    email TEXT
);

  • PRIMARY KEY, AUTOINCREMENT, NOT NULL, UNIQUE: Basic constraints. (Note: in SQLite, an INTEGER PRIMARY KEY column auto-increments on its own; AUTOINCREMENT only guarantees that rowids are never reused and adds overhead, so it’s rarely required.)
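Python’s sqlite3 module maps types for you automatically: None ↔ NULL, int ↔ INTEGER, float ↔ REAL, str ↔ TEXT, bytes ↔ BLOB. A quick sketch (the samples table is just for illustration):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
cur = conn.cursor()
# the last column has no declared type -- SQLite allows that
cur.execute("CREATE TABLE samples (n INTEGER, r REAL, t TEXT, b BLOB, extra)")

# each Python type maps onto the matching SQLite storage class
cur.execute("INSERT INTO samples VALUES (?, ?, ?, ?, ?)",
            (42, 3.14, "hello", b"\x00\x01", None))

row = cur.execute("SELECT * FROM samples").fetchone()
print([type(v).__name__ for v in row])  # ['int', 'float', 'str', 'bytes', 'NoneType']
conn.close()
```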

Basic Operations with Python (CRUD: Create, Read, Update, Delete)

Connecting and Closing Connections

Always remember conn.commit() to save changes. Better yet, use a with statement, which commits on success and rolls back on error:

import sqlite3

try:
    with sqlite3.connect('my_app.db') as conn:
        cursor = conn.cursor()
        # Database operations...
        cursor.execute("INSERT INTO error_logs (timestamp, error_message, severity) VALUES (?, ?, ?)",
                       ("2026-03-22T14:30:00", "Test Error", "INFO"))
        # commit/rollback are handled automatically when exiting the 'with' block
        # (note: the connection itself is NOT closed; call conn.close() separately)
except sqlite3.Error as e:
    print(f"An error occurred: {e}")

Inserting Data (INSERT)

Use ? or :placeholder for values to prevent SQL injection:

cursor.execute("INSERT INTO users (username, email) VALUES (?, ?)", ("nguyenvana", "[email protected]"))
cursor.execute("INSERT INTO users (username, email) VALUES (:user, :email)",
               {"user": "lethib", "email": "[email protected]"})
conn.commit()
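For bulk inserts, executemany() runs one parameterized statement over a whole sequence of tuples, which is both safer and faster than building strings in a Python loop. A self-contained sketch (the in-memory database and sample names are illustrative):

```python
import sqlite3

conn = sqlite3.connect(':memory:')  # in-memory just for this sketch
cursor = conn.cursor()
cursor.execute("""CREATE TABLE users (
    id INTEGER PRIMARY KEY,
    username TEXT NOT NULL UNIQUE,
    email TEXT
)""")

# executemany repeats one parameterized INSERT for every tuple in the list
new_users = [
    ("tranvanc", "[email protected]"),
    ("phamthid", "[email protected]"),
]
cursor.executemany("INSERT INTO users (username, email) VALUES (?, ?)", new_users)
conn.commit()
print(cursor.rowcount)  # 2
```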

Querying Data (SELECT)

Results are returned as a list of tuples by default:

cursor.execute("SELECT id, username, email FROM users WHERE username LIKE 'nguyen%'")
users = cursor.fetchall()
for user in users:
    print(f"ID: {user[0]}, Username: {user[1]}, Email: {user[2]}")

Updating Data (UPDATE)

cursor.execute("UPDATE users SET email = ? WHERE username = ?",
               ("[email protected]", "nguyenvana"))
conn.commit()

Deleting Data (DELETE)

cursor.execute("DELETE FROM users WHERE username = ?", ("lethib",))
conn.commit()

Always remember conn.commit() after every data modification operation.
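After an UPDATE or DELETE, cursor.rowcount tells you how many rows were actually touched, which is a cheap sanity check before committing. A sketch with an illustrative table:

```python
import sqlite3

conn = sqlite3.connect(':memory:')
cursor = conn.cursor()
cursor.execute("CREATE TABLE users (username TEXT UNIQUE, email TEXT)")
cursor.execute("INSERT INTO users VALUES (?, ?)", ("nguyenvana", "[email protected]"))

cursor.execute("UPDATE users SET email = ? WHERE username = ?",
               ("[email protected]", "nguyenvana"))
affected = cursor.rowcount  # how many rows matched the WHERE clause
if affected == 0:
    conn.rollback()  # nothing matched: don't silently commit a no-op
else:
    conn.commit()
print(affected)
conn.close()
```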

Advanced (So you don’t have to stay up late again)

As scripts become more complex or data volume grows, you’ll need a few techniques to keep things smooth. Don’t let your database become sluggish and force you into late-night debugging sessions.

Optimizing Queries with Indexes

Indexes help the database search data faster, especially on columns used in WHERE or ORDER BY clauses. If a query runs slowly, it’s very likely due to a missing index.

CREATE INDEX idx_error_severity ON error_logs (severity);
CREATE INDEX idx_users_username ON users (username);

Indexes aren’t free: every INSERT, UPDATE, and DELETE has to maintain them, and they take disk space. Only create indexes on columns you actually filter or sort by.
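You can verify that a query actually uses an index with EXPLAIN QUERY PLAN. A self-contained sketch (the table mirrors the error_logs example from earlier):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
cur = conn.cursor()
cur.execute("""CREATE TABLE error_logs (
    id INTEGER PRIMARY KEY,
    timestamp TEXT,
    error_message TEXT,
    severity TEXT
)""")
cur.execute("CREATE INDEX idx_error_severity ON error_logs (severity)")

# EXPLAIN QUERY PLAN describes how SQLite will execute the query;
# the plan text mentions idx_error_severity when the index is used
plan = cur.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM error_logs WHERE severity = ?",
    ("CRITICAL",)
).fetchall()
for row in plan:
    print(row)
conn.close()
```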

Connection Mode and Journal Mode (Performance when needed)

By default, SQLite ensures high data integrity, but this can sometimes impact performance. You can fine-tune it when you’re sure what you’re doing (e.g., importing a large amount of offline data):

cursor.execute("PRAGMA journal_mode = WAL;") # Write-Ahead Logging
cursor.execute("PRAGMA synchronous = NORMAL;") # Or OFF (be careful!)

  • journal_mode = WAL: Increases write performance and allows multiple readers to access the database simultaneously with a writer. Recommended for most cases requiring higher performance.
  • synchronous = OFF: Disables immediate data synchronization to disk. Extremely dangerous in case of power loss; data may be lost/corrupted. Only use if you understand the risks. NORMAL offers a better balance.
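PRAGMA journal_mode returns the mode now in effect, so you can confirm the switch actually took. Note that WAL requires a file-based database; an in-memory database stays in 'memory' mode. A sketch:

```python
import sqlite3

conn = sqlite3.connect('my_app.db')  # WAL needs a file, not ':memory:'
cursor = conn.cursor()

# the PRAGMA statement itself returns the resulting mode as a row
mode = cursor.execute("PRAGMA journal_mode = WAL;").fetchone()[0]
print(mode)  # 'wal'
conn.close()
```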

Using Row objects (Accessing data by column name)

By default, cursor.fetchone()/fetchall() return tuples (row[0]). Use sqlite3.Row to access data by column name (row['column_name']), which makes code more readable and maintainable:

with sqlite3.connect('my_app.db') as conn:
    conn.row_factory = sqlite3.Row
    cursor = conn.cursor()

    cursor.execute("SELECT id, username, email FROM users WHERE id = 1")
    user = cursor.fetchone()

    if user:
        print(f"User ID: {user['id']}, Username: {user['username']}, Email: {user['email']}")

Practical Tips (Hard-earned lessons)

After many nights struggling with various databases, I’ve learned a few important lessons when working with SQLite:

  • Database path: Always be careful with the database file path. If you just use sqlite3.connect('my_db.db'), the my_db.db file is created in the process’s current working directory, which is not necessarily the directory containing your script. It’s best to use an absolute path or define it clearly in your config to avoid the “database is nowhere to be found” situation.
  • Easy Backup: SQLite’s biggest strength is backup. Just copy the .db file for a complete backup. But remember to ensure no processes are writing to the database during backup, otherwise, the file could be corrupted. Alternatively, use SQLite’s Online Backup API if you need to back up while the database is active.
  • Concurrency Handling: This is an inherent weakness of SQLite. It only supports a single writer at a time. This means that if multiple processes/threads write to the database concurrently, the others must wait or receive a “database is locked” error. If an application needs hundreds or thousands of concurrent write requests, SQLite is not the right choice. Consider proper database servers like MySQL, PostgreSQL, or MongoDB, mentioned earlier.
  • Management Tools: I often use DBeaver or DB Browser for SQLite. They help you easily view table structures, data, and run queries without writing Python code.
  • When NOT to use SQLite?
    • Web applications requiring high scalability and many concurrent writers: Traditional database servers like MySQL, PostgreSQL handle these better.
    • Very large data (terabytes): Managing a .db file of hundreds of GBs/TBs is more difficult.
    • Needing network data access: SQLite does not support remote network access. If multiple servers need to access a single database, you need a database server.
  • Integration into automation scripts: I often use SQLite to store script states (e.g., which record has been processed), log detailed errors, or cache expensive API calls. It is very efficient and reliable.
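On the backup point: Python exposes SQLite’s Online Backup API as Connection.backup(), which safely copies a live database even while other connections are using it. A minimal sketch (file names are illustrative):

```python
import sqlite3

# Connection.backup() uses SQLite's Online Backup API, so the source
# database can remain in active use while the copy is made
src = sqlite3.connect('my_app.db')
dst = sqlite3.connect('my_app_backup.db')
src.backup(dst)  # copies the whole source database into the target
dst.close()
src.close()
print("Backup complete")
```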

Remembering these points will help you avoid many troubles and optimize your use of SQLite in your Python projects. Don’t let it become a trap that forces you to stay up late again!
