After nearly six months of testing all kinds of AI assistants — from cloud APIs to self-hosted web UIs — I realized I spend more time in the terminal than in any other interface. That’s what led me to OpenClaw: an open-source CLI assistant that’s lightweight, works offline, and doesn’t send your data anywhere when paired with a local model.
Comparing Approaches to a Personal AI Assistant
Before settling on my current stack, I tried all three approaches. Each one gave me a reason to start — and a reason to move on.
Calling Cloud APIs Directly
OpenAI, Claude, Gemini — the models are powerful, but using them daily is inconvenient. Every query meant running a script, there was no memory between sessions, and costs added up faster than expected — I once spent $15/month just asking trivial debugging questions with GPT-4o-mini. Not to mention that pasting internal code or production configs into the cloud is a risk that's simply not worth taking.
Self-Hosted Web UI (Open WebUI, LibreChat)
If you need a polished interface for demos or for sharing with a team, Open WebUI and LibreChat do the job well. But for personal use they're overkill: Docker alone consumes 400–600MB of RAM even when nobody's using it, and every quick question means opening a browser tab and switching focus away from your terminal.
CLI Assistants Like OpenClaw
Type your question right in the terminal — without ever leaving where you’re working. No Docker, no browser tabs, session memory stored locally per project. For developers and sysadmins who live on the command line, this is the most natural workflow.
An Honest Look at OpenClaw’s Pros and Cons
Advantages
- Complete privacy: Paired with Ollama, all your data stays on your machine — nothing leaves your network
- Lightweight: No Docker, no web server — just a Python package under 50MB
- Session memory: Stores conversation history in context, with sessions named per project
- Multi-backend: Supports Ollama, OpenAI API, and Anthropic Claude — switch backends without reconfiguring everything
- Pipe-friendly: Reads from stdin, pipes output — integrates smoothly into shell scripts and automated workflows
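Since `openclaw ask` reads from stdin, the pipe-friendly point above can be turned into a reusable helper. A minimal sketch (`summarize_log` and the `AI_CMD` override are hypothetical names of mine, not part of OpenClaw; only `openclaw ask` reading stdin is taken from the tool itself):

```shell
# Sketch: one-call log triage from any script. AI_CMD is overridable
# so the function can be dry-run without the assistant installed;
# summarize_log is a hypothetical helper name, not an OpenClaw command.
AI_CMD=${AI_CMD:-"openclaw ask"}

summarize_log() {
  # $1 = log file, $2 = trailing lines to send (default 50)
  tail -n "${2:-50}" "$1" | $AI_CMD "Summarize these errors and suggest fixes"
}
```

With `AI_CMD` left at its default, `summarize_log /var/log/nginx/error.log` sends the last 50 lines to the local model in a single call.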
Drawbacks to Know Upfront
- No web UI — not suitable if you need to share with people who aren’t comfortable in a terminal
- Initial configuration takes some time, especially setting up the backend and model
- Multimodal support (images, PDFs) is limited compared to full-featured web UIs
- Smaller community than larger projects like Open WebUI
When to Use OpenClaw — and When Not To
OpenClaw is a great fit if you work primarily in the terminal and need quick answers without losing focus. It’s especially useful when handling sensitive data — internal code, database schemas, production configs — that you can’t paste into a cloud service. On the other hand, if you need to share with non-technical users or frequently work with images and PDFs, Open WebUI is still the stronger choice.
Installing OpenClaw
System Requirements
- Python 3.10+
- Ollama installed and running (if using a local model) — see the guide on running LLMs locally with Ollama
- Or an OpenAI / Anthropic API key if using a cloud backend
Install via pip
# Install OpenClaw into a venv (recommended)
python -m venv ~/.openclaw-venv
source ~/.openclaw-venv/bin/activate
pip install openclaw
# Verify the installation
openclaw --version
Install from Source for the Latest Features
git clone https://github.com/openclaw/openclaw.git
cd openclaw
pip install -e .
# Verify
openclaw --version
Initial Configuration
After installing, run the init command to create the default config file:
openclaw init
# Creates file at: ~/.config/openclaw/config.yaml
Open the config file and adjust it for the backend you’re using:
# ~/.config/openclaw/config.yaml
# Default backend: Ollama (local, best privacy)
backend: ollama

ollama:
  url: http://localhost:11434
  model: llama3.2  # or qwen2.5, mistral, phi4

# To use OpenAI API:
# backend: openai
# openai:
#   api_key: sk-...
#   model: gpt-4o-mini

# To use Anthropic:
# backend: anthropic
# anthropic:
#   api_key: sk-ant-...
#   model: claude-haiku-4-5-20251001

# Memory: how many messages to keep in context
memory:
  enabled: true
  max_history: 20

# Directory for storing sessions
sessions:
  dir: ~/.openclaw/sessions
Test the Backend Connection
openclaw check
# Output when successful:
# ✓ Backend: ollama
# ✓ Model: llama3.2 (4.7GB)
# ✓ Connection: OK
# ✓ Response time: 320ms
Using OpenClaw in a Real Workflow
Quick One-Off Questions Without Context (One-Shot Mode)
# Simple question
openclaw ask "Which command shows listening ports on Linux?"
# Analyze logs via pipe
tail -n 50 /var/log/nginx/error.log | openclaw ask "Analyze these errors and suggest fixes"
# Review a code file
openclaw ask --file main.py "Review this code and identify any security issues"
Chat Sessions With Per-Project Memory
# Start a new session
openclaw chat
# Resume a named session
openclaw chat --session wp-project
# List all existing sessions
openclaw sessions list
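Because sessions are named per project, a small wrapper can derive the session name automatically. A sketch, assuming only the `openclaw chat --session` flag shown above (`proj_session_name` and `aiproj` are hypothetical helper names, not OpenClaw commands):

```shell
# Sketch: open a chat session named after the current project.
# Uses the git repo name when inside a repo, else the directory name.
proj_session_name() {
  local root
  root=$(git rev-parse --show-toplevel 2>/dev/null) || root=$PWD
  basename "$root"
}

# aiproj is a hypothetical wrapper around OpenClaw's own chat command.
aiproj() {
  openclaw chat --session "$(proj_session_name)"
}
```

Dropped into `~/.bashrc`, a bare `aiproj` then resumes the right session from anywhere inside the project tree.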
Integrating Into Your Shell Workflow
# Analyze Kubernetes pod status
kubectl get pods --all-namespaces | openclaw ask "Which pods are having issues?"
# Auto-generate a git commit message
git diff --staged | openclaw ask "Write a concise git commit message in English"
# Explain a complex command before running it
openclaw ask "Explain what this command does: find / -name '*.conf' -mtime -7 -exec grep -l 'password' {} +"
Tips After 6 Months of Real-World Use
Per-Project System Prompts
Each of my projects has its own session with a system prompt containing the stack’s context — PHP version, database, WordPress version. No need to re-explain things every time; open the session and get straight to work:
openclaw chat --session wp-project \
--system "You are a WordPress developer. Stack: PHP 8.2, MySQL 8.0, WP 6.4. \
Keep answers concise and prioritize practical code snippets."
Aliases for Faster Typing
# Add to ~/.bashrc or ~/.zshrc
alias ai='openclaw ask'
alias aic='openclaw chat'
alias explain='openclaw ask "Explain this command: "'
# Apply changes
source ~/.bashrc
# Usage
ai "Regex syntax for validating email in Python"
explain "awk '{sum += \$1} END {print sum}'"
Export Sessions to Markdown for Later Reference
# Export an important conversation to a file
openclaw sessions export wp-project --format markdown > notes/wp-project-notes.md
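To back up everything at once, a loop over the session list works, assuming `openclaw sessions list` prints one bare session name per line (verify against your version's output; `backup_sessions` and the `OPENCLAW` override are my own naming):

```shell
# Sketch: export every session to markdown. Assumes `openclaw sessions
# list` prints one bare session name per line -- check your version.
# backup_sessions and the OPENCLAW variable are hypothetical helpers.
OPENCLAW=${OPENCLAW:-openclaw}

backup_sessions() {
  outdir=${1:-notes}
  mkdir -p "$outdir"
  $OPENCLAW sessions list | while read -r session; do
    $OPENCLAW sessions export "$session" --format markdown \
      > "$outdir/${session}-notes.md"
  done
}
```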
This setup has been running smoothly on two Ubuntu 22.04 VPS instances for six months, with no issues and no memory leaks. The pipeline I use most feeds a service's stderr into OpenClaw for real-time error analysis, which is much faster than copy-pasting into a web UI.
# Watch logs and ask about errors as they appear
journalctl -u myapp -f | grep ERROR | while read -r line; do
  echo "$line" | openclaw ask "What does this error mean and how do I fix it?"
done
Conclusion
I still keep both Open WebUI and OpenClaw around, but each serves a different purpose: Open WebUI for when I need a graphical interface or need to handle complex files, and OpenClaw for every quick question I have in the terminal day to day. If you want to avoid cloud dependency and don’t want to run extra Docker containers just to talk to an AI, this is a great starting point — lightweight, flexible, and your data stays on your machine.

