Why I Switched to ComfyUI Instead of Using the Cloud
About 8 months ago, I was using Midjourney and Leonardo AI to generate images for my blog. The cost wasn’t a big deal, but the real frustration was this: whenever I needed images in a specific style — command-line illustrations, simulated terminal screenshots — no cloud tool could get it right.
After trying ComfyUI locally, I realized this was exactly what I had been looking for. Not because it’s free (though that’s a nice bonus), but because of the complete workflow control: pick your model, customize the sampler, chain multiple steps together — all visualized as a node graph.
ComfyUI has a steep learning curve if you’re new to Stable Diffusion. But after a day or two of getting comfortable with it, it blows AUTOMATIC1111 out of the water in terms of flexibility. This guide covers exactly what I did — nothing more, nothing less.
Hardware and Software Requirements
The minimum setup to get ComfyUI running — plus my actual configuration for reference:
- NVIDIA GPU: Minimum 4GB VRAM (8GB+ recommended). I use an RTX 3060 12GB — runs very smoothly.
- RAM: 16GB or more. Some large models may require swap space if RAM is limited.
- Storage: Reserve at least 20GB for models. Checkpoints are typically 2–7GB per file.
- Python: 3.10 or 3.11. Avoid 3.12 as some dependencies aren’t fully compatible yet.
- Git: For cloning the repo and pulling updates.
No NVIDIA GPU? It still works with CPU or AMD (ROCm), but the speed difference is significant — 5–10 minutes per image versus 10–30 seconds on GPU.
Installing ComfyUI
Step 1: Clone the Repository
git clone https://github.com/comfyanonymous/ComfyUI.git
cd ComfyUI
Step 2: Create a Virtual Environment and Install Dependencies
# Create venv
python -m venv venv
# Activate (Linux/macOS)
source venv/bin/activate
# Activate (Windows PowerShell)
.\venv\Scripts\Activate.ps1
# Install PyTorch with CUDA support (NVIDIA GPU)
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121
# Install remaining dependencies
pip install -r requirements.txt
Note: the CUDA version in the URL (cu121) must match your installed CUDA toolkit. Check with nvidia-smi — look at the CUDA Version column in the top right.
# Check CUDA version
nvidia-smi
# If using CUDA 12.4, use cu124 instead
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu124
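The wheel tag is just `cu` plus the CUDA version digits with the dot removed. A tiny hypothetical helper (the function is mine, not part of pip or PyTorch) makes the mapping explicit:

```python
# Hypothetical helper: build the PyTorch wheel index URL for the
# CUDA version reported by nvidia-smi (e.g. "12.1" -> ".../whl/cu121").
def torch_index_url(cuda_version: str) -> str:
    major, minor = cuda_version.split(".")[:2]
    return f"https://download.pytorch.org/whl/cu{major}{minor}"

print(torch_index_url("12.1"))  # https://download.pytorch.org/whl/cu121
print(torch_index_url("12.4"))  # https://download.pytorch.org/whl/cu124
```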
Step 3: Download a Stable Diffusion Model
ComfyUI doesn’t ship with any models — you download them from Hugging Face or CivitAI. To get started: pick SDXL Base 1.0 for high-quality output, or Realistic Vision if you want photorealistic images.
# Create model directories
mkdir -p models/checkpoints
mkdir -p models/vae
mkdir -p models/loras
# Download model from Hugging Face (SDXL example)
# You need a HuggingFace token for gated models
wget -P models/checkpoints/ https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0/resolve/main/sd_xl_base_1.0.safetensors
Use huggingface-cli for easier model management — especially when downloading multiple files:
pip install huggingface_hub
huggingface-cli login # enter your token from https://huggingface.co/settings/tokens
# Download a specific model
huggingface-cli download stabilityai/stable-diffusion-xl-base-1.0 \
sd_xl_base_1.0.safetensors \
--local-dir models/checkpoints/
Step 4: Start ComfyUI
python main.py
The output will look like:
Starting server
To see the GUI go to: http://127.0.0.1:8188
Open your browser to http://127.0.0.1:8188 — the ComfyUI node graph interface will appear.
Detailed Configuration
Configuring extra_model_paths.yaml
Already using AUTOMATIC1111 and have models downloaded? No need to copy them — just point to the path:
# extra_model_paths.yaml (create in the ComfyUI root directory)
a1111:
    base_path: /home/username/stable-diffusion-webui/
    checkpoints: models/Stable-diffusion
    loras: models/Lora
    vae: models/VAE
    embeddings: embeddings
Running with Useful Flags
# Allow access from the local network (e.g., from another laptop)
python main.py --listen 0.0.0.0
# Reduce VRAM usage if your GPU has limited memory
python main.py --lowvram
# Use CPU (no GPU or for testing)
python main.py --cpu
# Automatically open the browser
python main.py --auto-launch
# Run on a different port
python main.py --port 8080
On my server, ComfyUI runs with the --listen 0.0.0.0 flag behind an nginx reverse proxy — accessible remotely over HTTPS, stable for about 6 months with no issues. One important note: ComfyUI has no built-in authentication. You need to set up basic auth or an IP whitelist at the nginx level, otherwise anyone can generate images on your machine.
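As a sketch, a minimal HTTPS server block with basic auth in front of ComfyUI might look like the following. The domain and certificate paths are placeholders; the password file is created with `htpasswd -c /etc/nginx/.htpasswd youruser` (from apache2-utils):

```nginx
# /etc/nginx/sites-available/comfyui (example only; adjust domain and paths)
server {
    listen 443 ssl;
    server_name comfyui.example.com;

    ssl_certificate     /etc/letsencrypt/live/comfyui.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/comfyui.example.com/privkey.pem;

    location / {
        auth_basic           "ComfyUI";
        auth_basic_user_file /etc/nginx/.htpasswd;

        proxy_pass http://127.0.0.1:8188;
        proxy_http_version 1.1;
        # ComfyUI's frontend uses a WebSocket for progress updates
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
    }
}
```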
Installing ComfyUI Manager (Essential)
ComfyUI Manager is an extension for managing custom nodes — practically essential if you want to use complex workflows:
cd custom_nodes
git clone https://github.com/ltdrdata/ComfyUI-Manager.git
cd ..
# Restart ComfyUI
python main.py
After installation, the Manager button will appear in the top right of the interface. From here you can install/update custom nodes, models, and missing dependencies in just a few clicks.
Basic Workflow: Text to Image
ComfyUI comes with a default workflow. Key nodes to understand:
- Load Checkpoint: select your model (.safetensors)
- CLIP Text Encode: enter your positive and negative prompts
- KSampler: adjust steps, CFG scale, and sampler (euler, dpm++ 2m, …)
- VAE Decode: convert latent space back to a real image
- Save Image: save the output
My go-to settings: DPM++ 2M Karras, 25 steps, CFG 7.0. On an RTX 3060 12GB, a 1024×1024 image completes in about 18–22 seconds — fast enough to iterate continuously.
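For reference, a workflow exported in API format (via Save (API Format) in the menu, after enabling dev mode in the settings) is just a JSON map of node IDs to inputs. A trimmed sketch of the default text-to-image graph above — the node IDs, prompts, and checkpoint filename here are illustrative:

```json
{
  "1": {"class_type": "CheckpointLoaderSimple",
        "inputs": {"ckpt_name": "sd_xl_base_1.0.safetensors"}},
  "2": {"class_type": "CLIPTextEncode",
        "inputs": {"text": "a terminal window, flat illustration", "clip": ["1", 1]}},
  "3": {"class_type": "CLIPTextEncode",
        "inputs": {"text": "blurry, low quality", "clip": ["1", 1]}},
  "4": {"class_type": "EmptyLatentImage",
        "inputs": {"width": 1024, "height": 1024, "batch_size": 1}},
  "5": {"class_type": "KSampler",
        "inputs": {"model": ["1", 0], "positive": ["2", 0], "negative": ["3", 0],
                   "latent_image": ["4", 0], "seed": 42, "steps": 25, "cfg": 7.0,
                   "sampler_name": "dpmpp_2m", "scheduler": "karras", "denoise": 1.0}},
  "6": {"class_type": "VAEDecode",
        "inputs": {"samples": ["5", 0], "vae": ["1", 2]}},
  "7": {"class_type": "SaveImage",
        "inputs": {"images": ["6", 0], "filename_prefix": "ComfyUI"}}
}
```

Each `["1", 1]` pair is a connection: node ID plus output slot index (CheckpointLoaderSimple outputs MODEL, CLIP, VAE in slots 0, 1, 2).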
Checking and Monitoring
Verifying GPU Usage
# Terminal 1: run ComfyUI
python main.py
# Terminal 2: monitor GPU usage in real time
watch -n 1 nvidia-smi
While generating images, the GPU-Util column should hit 80–100%. If you only see 0–5%, ComfyUI is falling back to CPU — check your CUDA installation.
Checking the Python Environment
# Run inside the activated venv: start `python`, then paste the lines below
import torch
print(torch.__version__)              # Check version
print(torch.cuda.is_available())      # Should return True
print(torch.cuda.get_device_name(0))  # Your GPU name
Logging and Debugging
ComfyUI logs directly to the terminal. To save the output:
python main.py 2>&1 | tee comfyui.log
Common errors and how to fix them:
- CUDA out of memory: add --lowvram or reduce the resolution to 512×512 first.
- Module not found: re-run pip install -r requirements.txt inside your active venv.
- Model not showing in dropdown: check the file path in models/checkpoints/, refresh your browser, or restart ComfyUI.
- Workflow reports missing nodes: open ComfyUI Manager → Install Missing Custom Nodes.
Auto-Restart with systemd (Linux)
Running ComfyUI on a VPS or Linux machine 24/7? Create a systemd service so it automatically restarts on crash:
# /etc/systemd/system/comfyui.service
[Unit]
Description=ComfyUI Stable Diffusion
After=network.target
[Service]
User=ubuntu
WorkingDirectory=/home/ubuntu/ComfyUI
ExecStart=/home/ubuntu/ComfyUI/venv/bin/python main.py --listen 0.0.0.0
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
sudo systemctl daemon-reload
sudo systemctl enable comfyui
sudo systemctl start comfyui
sudo systemctl status comfyui
A Few Tips After 8 Months of Real-World Use
- Save your workflow as JSON (Save from the menu) before making major changes — rolling back is easy.
- Install ControlNet via Manager if you need precise pose or composition control.
- SDXL Turbo and FLUX.1 schnell generate images much faster (4–8 steps instead of 25+) — great for rapid iteration on lower-end hardware.
- The output/ folder fills up fast — after a month of use, I had over 3GB of images. Set up a cron job to clean it out periodically.
- The API endpoint http://localhost:8188/prompt accepts jobs via POST — you can call it from a Python script for automated batch generation, no browser needed.
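To flesh out that last tip, here is a minimal sketch for queuing a job against /prompt using only the standard library. It assumes you have exported a workflow in API format to workflow_api.json; the helper names are mine:

```python
import json
import urllib.request

def build_prompt_payload(workflow: dict, client_id: str = "batch-script") -> dict:
    """Wrap an API-format workflow in the envelope the /prompt endpoint expects."""
    return {"prompt": workflow, "client_id": client_id}

def queue_prompt(workflow: dict, host: str = "127.0.0.1", port: int = 8188) -> dict:
    """POST a workflow to a running ComfyUI server and return its JSON response."""
    data = json.dumps(build_prompt_payload(workflow)).encode("utf-8")
    req = urllib.request.Request(
        f"http://{host}:{port}/prompt",
        data=data,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Usage (with the server running):
#   with open("workflow_api.json") as f:
#       print(queue_prompt(json.load(f)))
```

The response includes a prompt_id you can later look up via the /history endpoint to fetch results.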
