Why I needed a local AI assistant
It was 2 AM, and the production server started throwing 502 errors all over the place. I needed to quickly write a bash script to parse the logs — but my hands were shaking and my brain was completely fried. I opened ChatGPT, pasted in the log output… and then stopped.
That log contained customer IPs, internal service names, and even part of a connection string. Sending it up to the cloud at that moment was genuinely not okay.
That was the first time I started looking for an AI solution that runs completely offline — no data leaks, no GDPR worries, no company policy headaches. IronClaw is what I found.
IronClaw is a CLI AI assistant written in Rust, designed from the ground up with two priorities: speed and privacy. No telemetry, no outbound requests unless you want them. It supports running with local models via Ollama or connecting to the API provider of your choice.
Installing IronClaw
System requirements
- Rust 1.75+ (stable toolchain)
- Linux/macOS — Windows supported via WSL2
- Minimum 4GB RAM if using a local model (8GB+ recommended)
- Git
Install Rust if you don’t have it
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
source "$HOME/.cargo/env"
rustc --version
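If Rust was already on the machine, it's worth confirming the toolchain actually meets the 1.75 minimum before building. A small sketch using `sort -V` (the `version_ge` helper is my own, not part of rustup or IronClaw):

```shell
# version_ge A B: succeeds when version A >= version B.
# sort -V orders versions numerically; if B sorts first (or equal), A is newer or equal.
version_ge() {
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

# Apply it to the installed toolchain (1.75.0 is IronClaw's stated minimum):
#   version_ge "$(rustc --version | awk '{print $2}')" 1.75.0 && echo "toolchain OK"
version_ge 1.82.0 1.75.0 && echo "1.82.0 meets the 1.75 minimum"
```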
Install IronClaw via Cargo
The fastest approach — cargo builds directly from source, no manual cloning required:
cargo install ironclaw
# Verify the installation
ironclaw --version
If you want to customize feature flags, build from source instead:
git clone https://github.com/ironclaw-ai/ironclaw.git
cd ironclaw
# Release build (performance-optimized)
cargo build --release
# Copy binary to PATH
sudo cp target/release/ironclaw /usr/local/bin/
ironclaw --version
The first build will take 3–5 minutes depending on your machine, since Rust compiles all dependencies from scratch. Totally normal — don’t worry about it.
Detailed configuration
Initialize config for the first time
ironclaw init
This creates a configuration file at ~/.config/ironclaw/config.toml. Opening it reveals the following structure:
[core]
# Choose provider: "ollama" (local), "anthropic", "openai", "openrouter"
provider = "ollama"
default_model = "mistral:7b"
# Disable telemetry completely
telemetry = false
# Don't persist conversations to the cloud
persist_remote = false
[privacy]
# Redact patterns from prompts before sending (even when using an API)
auto_redact = true
redact_patterns = [
"\\b\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}\\b", # IP addresses
"(?i)password\\s*=\\s*\\S+", # Password strings
"sk-[a-zA-Z0-9]{48}", # OpenAI API keys
]
[ollama]
host = "http://localhost:11434"
[anthropic]
# Fill in if using the Anthropic API
api_key = "" # Or set env var ANTHROPIC_API_KEY
model = "claude-haiku-4-5-20251001"
[context]
# Number of context tokens to retain between turns
max_context_tokens = 4096
# Store conversations locally (encrypted)
local_history = true
history_path = "~/.local/share/ironclaw/history"
Don’t have Ollama? One command does it
curl -fsSL https://ollama.com/install.sh | sh
# Pull a model (pick one)
ollama pull mistral:7b # Balanced speed/quality, 4.1GB
ollama pull codellama:7b # Coding-focused
ollama pull llama3.2:3b # Lightweight, works well with less RAM
Then test it right away:
ironclaw ask "Write a bash script to check disk usage and alert if it exceeds 80%"
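For reference, the kind of script that prompt should produce looks roughly like this. Your model's output will differ; the `check_usage` helper below is just my hand-written version of the same idea:

```shell
# check_usage THRESHOLD: read `df -P` output on stdin and print an
# alert for every filesystem above THRESHOLD percent usage.
check_usage() {
  awk -v t="$1" 'NR > 1 {
    use = $5; sub(/%/, "", use)                  # strip the % sign
    if (use + 0 > t) printf "ALERT: %s at %s%%\n", $1, use
  }'
}

df -P | check_usage 80
```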
Using IronClaw as a pipe in the terminal
This is the feature I use most — pipe command output directly into IronClaw:
# Analyze error logs directly
tail -n 100 /var/log/nginx/error.log | ironclaw ask "What errors are occurring?"
# Review code before committing
git diff HEAD~1 | ironclaw ask "Review this code and find potential bugs"
# Explain a complex config file
cat /etc/nginx/nginx.conf | ironclaw ask "Explain this config"
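Since the config above caps context at 4,096 tokens, it pays to trim what you pipe in rather than dumping a whole log file. A small wrapper I use for that (my own shell function, not a built-in IronClaw feature):

```shell
# ask_tail N PROMPT... : forward only the last N lines of stdin to
# IronClaw, so a huge log doesn't blow past the context window.
ask_tail() {
  local n="$1"; shift
  tail -n "$n" | ironclaw ask "$@"
}

# Usage:
#   journalctl -u nginx --no-pager | ask_tail 100 "Why is this service restarting?"
```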
Interactive mode (chat session)
# Start a chat session
ironclaw chat
# Chat with a specific model
ironclaw chat --model codellama:13b
# Chat with a custom system prompt (useful for specific use cases)
ironclaw chat --system "You are a senior DevOps engineer. Keep answers concise and practical."
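For system prompts you reuse often, keeping them in files beats retyping. The prompts directory below is just a convention of mine, not something IronClaw creates or reads on its own:

```shell
# Store reusable system prompts as plain files...
mkdir -p ~/.config/ironclaw/prompts
printf '%s\n' "You are a senior DevOps engineer. Keep answers concise and practical." \
  > ~/.config/ironclaw/prompts/devops.txt

# ...and inject one with command substitution:
#   ironclaw chat --system "$(cat ~/.config/ironclaw/prompts/devops.txt)"
```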
Advanced auto-redact configuration
Production logs are a goldmine of sensitive information — internal IPs, JWT tokens, database connection strings. None of that should travel along with prompts sent to the cloud. IronClaw handles this by automatically masking sensitive data before your prompt goes anywhere:
[privacy]
auto_redact = true
redact_patterns = [
# Internal IPs
"192\\.168\\.\\d+\\.\\d+",
"10\\.\\d+\\.\\d+\\.\\d+",
# JWT tokens
"eyJ[A-Za-z0-9-_]+\\.[A-Za-z0-9-_]+\\.[A-Za-z0-9-_]+",
# Database connection strings
"(?i)(mysql|postgresql|mongodb)://[^\\s]+",
# AWS keys
"AKIA[0-9A-Z]{16}",
]
# Replacement string when redacting
redact_placeholder = "[REDACTED]"
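To get a feel for what these patterns actually catch, you can replay them with sed outside IronClaw. This is a rough stand-in for illustration only; IronClaw applies the real redaction internally before anything leaves the machine:

```shell
# Approximate auto_redact with sed -E, using the same patterns as above.
redact() {
  sed -E \
    -e 's/192\.168\.[0-9]+\.[0-9]+/[REDACTED]/g' \
    -e 's/10\.[0-9]+\.[0-9]+\.[0-9]+/[REDACTED]/g' \
    -e 's/eyJ[A-Za-z0-9_-]+\.[A-Za-z0-9_-]+\.[A-Za-z0-9_-]+/[REDACTED]/g' \
    -e 's/AKIA[0-9A-Z]{16}/[REDACTED]/g'
}

echo "login from 192.168.1.50 using key AKIAIOSFODNN7EXAMPLE" | redact
# login from [REDACTED] using key [REDACTED]
```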
Verification and Monitoring
Verify IronClaw isn’t sending traffic outside
Being a little paranoid is perfectly reasonable here. With Ollama, do a quick check to confirm IronClaw is really only talking to localhost:
# Run IronClaw in one terminal (a longer prompt keeps the connection open long enough to observe)
ironclaw ask "Explain systemd unit files in detail" &
# Monitor network connections in another terminal
ss -tnp | grep ironclaw
# Should only show a connection to localhost:11434 (Ollama)
For even more certainty, use strace to inspect all network syscalls:
strace -f -e trace=%network ironclaw ask "Test" 2>&1 | grep -v "127.0.0.1\|localhost"
Check conversation history
IronClaw stores history locally in encrypted format. Here’s how to manage it:
# List saved sessions
ironclaw history list
# Delete a specific session
ironclaw history delete <session-id>
# Clear all history
ironclaw history clear
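Encrypted or not, old sessions accumulate. A small housekeeping sketch using the history_path from the config above (my own script; IronClaw itself may or may not grow a retention option):

```shell
# prune_history DIR DAYS : delete history files older than DAYS days.
prune_history() {
  find "$1" -type f -mtime +"$2" -delete
}

# Drop anything older than 30 days:
#   prune_history ~/.local/share/ironclaw/history 30
```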
Performance benchmark
# Test response time with the current model
ironclaw benchmark --model mistral:7b --prompt "Explain Docker in one paragraph"
# Sample output:
# Model: mistral:7b
# First token: 0.34s
# Total: 4.2s
# Tokens/sec: 28.6
Logs and debugging
# Enable verbose logging
IRONCLAW_LOG=debug ironclaw ask "Test"
# View the log file
tail -f ~/.local/share/ironclaw/ironclaw.log
# Check the active configuration
ironclaw config show
Set up convenient aliases
Add these to your ~/.bashrc or ~/.zshrc:
# Quick ask
alias ai='ironclaw ask'
# Explain a command
explain() { man "$1" 2>/dev/null | head -50 | ironclaw ask "Briefly explain the $1 command"; }
# Review git diff before pushing
alias review-diff='git diff | ironclaw ask "Review this code, find bugs and security issues"'
After two months running IronClaw in production, I no longer hesitate to paste server logs into an AI. Everything is processed locally, sensitive data gets redacted automatically — not a single byte leaves the machine unless I explicitly choose otherwise. If your team operates under SOC 2, HIPAA, or simply has a manager who asks “where does our AI data actually go?” — IronClaw is the cleanest answer I’ve found.