ComfyUI Setup Guide for Absolute Beginners (2026 Edition)
Step-by-step guide to installing and using ComfyUI for AI image generation with Flux and Stable Diffusion. No command line required.

ComfyUI has become the industry standard for generating high-quality images locally with Flux and Stable Diffusion. But the first time you see it, it can feel intimidating — lots of nodes, wiring, technical lingo.
This guide strips that away. By the end, you'll have ComfyUI installed and running, and you'll understand the three core pieces you actually need to know.
What Is ComfyUI?
ComfyUI is a node-based interface for running image generation models locally. Think of it like Photoshop for AI images — instead of typing a prompt into ChatGPT, you're building a visual "recipe" that tells the model what to do.
Why use ComfyUI instead of online tools like Midjourney or DALL-E?
- Free — no monthly subscription once it's set up
- Fast — runs on your own GPU instead of waiting in a queue
- Full control — you can tweak every part of the generation process
- Works offline — no internet required after setup
- Flux and Stable Diffusion support — the two leading open-source model families
The trade-off: it requires a local GPU (graphics card) to run smoothly. If you have a high-end gaming PC or Mac with Apple Silicon, you're in good shape.
What GPU Do You Actually Need?
ComfyUI works best with:
- NVIDIA GPUs (GTX 1660 or newer) — 6GB VRAM minimum, 12GB ideal
- AMD GPUs (RX 6600 or newer) — 6GB VRAM minimum
- Mac with Apple Silicon (M1 or newer) — runs surprisingly well; 16GB unified memory is comfortable, and 8GB works for smaller Stable Diffusion models
You can run it with less, but expect slower generation times (5–15 seconds per image instead of 2–4 seconds).
Don't have a GPU? Check out Ampere.sh — cloud GPU rental starting at $0.15/hour, perfect for testing.
Step 1: Download and Install Python
ComfyUI runs on Python. If you don't have it installed, start here.
Windows
- Go to python.org/downloads
- Download Python 3.11 (ComfyUI is most stable on 3.11)
- Run the installer
- IMPORTANT: Check the box "Add Python to PATH" during installation
- Click Install
Mac
- Go to python.org/downloads
- Download the Mac installer for Python 3.11
- Run the installer and follow prompts
- After installation, open Terminal and type:
python3 --version — you should see "3.11.x"
Verify Installation
Open Command Prompt (Windows) or Terminal (Mac) and type:
python --version (on Mac, use python3 --version)
You should see Python 3.11.x. If you see an error, Python isn't in your PATH — go back to the installer and make sure you checked "Add Python to PATH."
Step 2: Download ComfyUI
ComfyUI lives on GitHub. Here's the easy way to get it:
Windows
- Go to github.com/comfyanonymous/ComfyUI/releases
- Find the latest release (at the top)
- Look for "portable_windows_nvidia.7z" (if you have NVIDIA) or "portable_windows_cpu.7z" (if you don't have a GPU)
- Download the
.7zfile - Extract it to a folder like
C:\ComfyUI
Mac
- Go to the same page: github.com/comfyanonymous/ComfyUI/releases
- There is no separate macOS build on the releases page; download the "Source code (zip)" for the latest release (or clone the repository with git)
- Extract it to your Applications folder (or anywhere convenient), then open Terminal in that folder and install the dependencies:
pip3 install -r requirements.txt
Step 3: Download Models
ComfyUI is just the interface — you also need the AI models (Flux, Stable Diffusion, etc.).
This is the part people get confused about. Models are stored in the models/ folder inside ComfyUI. You don't install them like software — you just drop the files in.
Download Flux (Recommended for Beginners)
Flux generates higher-quality images than Stable Diffusion 3.5. It's the industry standard right now.
- Go to huggingface.co/black-forest-labs/FLUX.1-dev (the repo is gated, so you'll need a free Hugging Face account and to accept the license on the model page)
- Click "Files and versions"
- Download flux1-dev.safetensors (about 23 GB — this will take 10–30 minutes depending on your internet)
- Once downloaded, move the file to:
ComfyUI/models/checkpoints/
Note: flux1-dev.safetensors is the diffusion model only. The simple "Load Checkpoint" workflow below is easiest with an all-in-one Flux checkpoint that bundles the text encoders and VAE (search Hugging Face for "flux1-dev fp8 checkpoint"); if you stick with the raw file, you'll also need the separate text encoder and VAE files.
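If you'd rather script the download (handy on cloud GPUs), here's a minimal sketch using the huggingface_hub package (pip install huggingface_hub). It assumes you've accepted the license on the model page and logged in with huggingface-cli login, and that your ComfyUI folder sits in the current directory; adjust the paths to match your setup.

```python
# Sketch: fetch the Flux weights straight into ComfyUI's checkpoints folder.
from huggingface_hub import hf_hub_download

hf_hub_download(
    repo_id="black-forest-labs/FLUX.1-dev",   # gated repo: accept the license first
    filename="flux1-dev.safetensors",         # ~23 GB download
    local_dir="ComfyUI/models/checkpoints",   # adjust if your ComfyUI folder is elsewhere
)
```

The same snippet works for Stable Diffusion 3.5 below by swapping repo_id and filename (that repo is gated too).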
Download Stable Diffusion 3.5 (Optional)
If you want a lighter, faster alternative (better than Stable Diffusion 3, not quite Flux quality):
- Go to huggingface.co/stabilityai/stable-diffusion-3.5-large (also a gated repo; accept the license with your Hugging Face account)
- Download sd3.5_large.safetensors
- Move it to:
ComfyUI/models/checkpoints/
- SD 3.5 ships its text encoders separately; if your workflow asks for them, grab the files from the repo's text_encoders folder and put them in ComfyUI/models/clip/
Step 4: Launch ComfyUI
Windows
- Open File Explorer
- Go to your ComfyUI folder
- Double-click run_nvidia_gpu.bat (if you have NVIDIA) or run_cpu.bat (if you don't)
- A command window will open and you'll see lines scrolling
- Wait about 30 seconds, then open your web browser and go to
http://localhost:8188
Mac
- Open Terminal
- Type: cd /Applications/ComfyUI (adjust the path if you put it elsewhere)
- Type: python3 main.py
- After a few seconds, open your browser and go to http://localhost:8188
You should see a dark interface with nodes on the left side. Congrats — ComfyUI is running.
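If you like double-checking things from a script, the server also exposes a small status API. Here's a minimal liveness check in Python, assuming the default port 8188 (the exact fields in the response vary between ComfyUI versions):

```python
# Quick liveness check against a locally running ComfyUI server.
import json
import urllib.request

with urllib.request.urlopen("http://localhost:8188/system_stats") as resp:
    stats = json.load(resp)

# Prints OS/Python info and the compute device ComfyUI detected (CUDA, MPS, or CPU).
print(json.dumps(stats, indent=2))
```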
Step 5: Generate Your First Image
Now the fun part. Here's the simplest workflow:
- Right-click in the empty canvas area and choose Add Node → loaders → Load Checkpoint. This loads your model; pick your checkpoint file from the node's dropdown.
- Add Node → conditioning → CLIP Text Encode (Prompt). This is your positive prompt. In the text box, type something like:
"a red fox running through snow, detailed, sharp focus"
- Add a second CLIP Text Encode (Prompt) node for the negative prompt (leaving it empty is fine)
- Add Node → latent → Empty Latent Image. This is the blank canvas the sampler draws onto; set the width and height (1024x1024 is a good start)
- Add Node → sampling → KSampler. This is the node that actually generates the image; it has a seed (any random number, like 42), steps, and cfg
- Add Node → latent → VAE Decode (this converts the result into a viewable image)
- Add a final node: Add Node → image → Save Image
Now wire them together (this looks confusing but is simple):
- Load Checkpoint's MODEL output → KSampler's "model" input, and its CLIP output → the "clip" input on both CLIP Text Encode nodes
- Positive prompt's CONDITIONING output → KSampler's "positive" input; the negative prompt's → KSampler's "negative" input
- Empty Latent Image's LATENT output → KSampler's "latent_image" input
- KSampler's LATENT output → VAE Decode's "samples" input, and Load Checkpoint's VAE output → VAE Decode's "vae" input
- VAE Decode's IMAGE output → Save Image
Click the "Queue Prompt" button and watch it generate. The first run usually takes noticeably longer (often 30–60 seconds or more) because the model has to load into memory.
Tip: recent versions of ComfyUI open with a default text-to-image workflow that is almost exactly this graph. If it's already on your canvas, just pick your checkpoint in Load Checkpoint, edit the prompt, and queue.
What All These Nodes Actually Mean
You don't need to memorize this, but here's a quick cheat sheet:
| Node | What It Does |
|---|---|
| Load Checkpoint | Loads the AI model (Flux, Stable Diffusion) |
| KSampler | The brain — runs the actual image generation |
| Positive Prompt | What you want the image to show |
| Negative Prompt | What you DON'T want (optional) |
| Empty Latent Image | The blank canvas; sets the resolution and batch size |
| VAE Decode | Converts the model's math into an actual image |
| Save Image | Saves the final image to your computer |
Once you understand these, you've got 90% of what you need.
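For the curious: the exact same graph can be written as JSON and queued through ComfyUI's HTTP API, which is how the "Save (API Format)" export and most automation tools talk to it. A minimal sketch, assuming the server is running on localhost:8188 and that your checkpoint file is named flux1-dev.safetensors (swap in whatever actually sits in your checkpoints folder):

```python
# Sketch: the text-to-image graph from Step 5, sent to ComfyUI's /prompt endpoint.
import json
import urllib.request

graph = {
    "1": {"class_type": "CheckpointLoaderSimple",          # Load Checkpoint
          "inputs": {"ckpt_name": "flux1-dev.safetensors"}},
    "2": {"class_type": "CLIPTextEncode",                  # positive prompt
          "inputs": {"text": "a red fox running through snow, detailed, sharp focus",
                     "clip": ["1", 1]}},
    "3": {"class_type": "CLIPTextEncode",                  # negative prompt (empty)
          "inputs": {"text": "", "clip": ["1", 1]}},
    "4": {"class_type": "EmptyLatentImage",                # blank canvas
          "inputs": {"width": 1024, "height": 1024, "batch_size": 1}},
    "5": {"class_type": "KSampler",                        # the actual generation step
          "inputs": {"model": ["1", 0], "positive": ["2", 0], "negative": ["3", 0],
                     "latent_image": ["4", 0], "seed": 42, "steps": 20,
                     "cfg": 1.0,                           # ~1.0 for Flux, ~7.0 for Stable Diffusion
                     "sampler_name": "euler", "scheduler": "normal", "denoise": 1.0}},
    "6": {"class_type": "VAEDecode",                       # latent -> viewable image
          "inputs": {"samples": ["5", 0], "vae": ["1", 2]}},
    "7": {"class_type": "SaveImage",                       # writes to ComfyUI/output/
          "inputs": {"images": ["6", 0], "filename_prefix": "fox"}},
}

req = urllib.request.Request(
    "http://localhost:8188/prompt",
    data=json.dumps({"prompt": graph}).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
print(urllib.request.urlopen(req).read().decode())  # returns a prompt_id once queued
```

You don't need this to use ComfyUI; it's just a useful mental model: every node in the UI is one entry in this dictionary, and every wire is one of those ["node_id", output_index] references.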
Pro Tips for Beginners
Tip 1: Use a fixed seed for testing. If you use the same seed number, you get the same image every time. Useful for tweaking prompts and seeing the difference without randomness.
Tip 2: Start simple. "a red fox in snow" generates way faster and better than "a red fox, photorealistic, cinematography, shot on 35mm, bokeh background, depth of field, volumetric lighting, shadow detail..."
Tip 3: Save your workflows. In ComfyUI, you can save the entire workflow (all the nodes + wiring). Top-left menu → "Save" saves it as a .json file. Later, you can load it back and just change the prompt.
Tip 4: Download community workflows. The ComfyUI community shares workflows on GitHub and Reddit. Search "ComfyUI Flux workflow" — many are pre-built and ready to drop in.
Tip 5: Batch generation. You can generate several images in one go by setting "batch_size" in the Empty Latent Image node (for example, 4 or 10). Great for exploring variations of the same prompt.
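Tips 1, 3, and 5 combine nicely with the API sketch above: export your workflow with "Save (API Format)" (enable dev mode in the settings to see that option), then re-queue it from a script with a fixed prompt and a handful of seeds. A rough sketch, assuming your export is named workflow_api.json and that nodes "2" and "5" are your prompt and KSampler nodes (open the file to check the actual ids):

```python
# Sketch: re-queue a saved API-format workflow with one prompt and several seeds.
import json
import urllib.request

with open("workflow_api.json") as f:
    graph = json.load(f)

graph["2"]["inputs"]["text"] = "a red fox curled up in snow, soft morning light"

for seed in (42, 43, 44, 45):
    graph["5"]["inputs"]["seed"] = seed              # fixed prompt, varying seed
    req = urllib.request.Request(
        "http://localhost:8188/prompt",
        data=json.dumps({"prompt": graph}).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)                      # each call queues one image
```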
Common Issues & Fixes
"Failed to load model"
The model file might not be fully downloaded, or it's in the wrong folder. Check ComfyUI/models/checkpoints/ and make sure your .safetensors file is there.
"CUDA out of memory" Your GPU ran out of VRAM. Try lowering the resolution (from 1024x1024 to 512x512) or using a smaller model. If it keeps happening, you might need more GPU RAM.
"Connection refused (localhost:8188)"
ComfyUI isn't running. Go back and make sure you ran the .bat file (Windows) or python3 main.py (Mac).
"Very slow generation (5+ minutes per image)" This is normal on CPU-only. If you have a GPU, make sure ComfyUI detected it. Watch the startup log for "Using CUDA" or "Using MPS" (Mac).
Next Steps
Once you're comfortable with basic generation:
- Explore ControlNet — guides image generation by hand-drawn sketches or reference images
- Try LoRA models — add specific styles (anime, photorealistic, etc.) on top of Flux
- Upscaling — use upscaler nodes to turn 512x512 images into 2K/4K
- Join the ComfyUI community — r/StableDiffusion and the ComfyUI Discord have thousands of pre-built workflows
FAQ
Is ComfyUI free? Yes. The interface is free and open-source, and the models are free downloads. The only thing you might pay for is GPU time if you rent it in the cloud.
Can I use ComfyUI without a GPU? Technically yes, but image generation will be very slow (minutes per image instead of seconds). For serious use, a GPU is worth it. Ampere.sh is an affordable cloud option.
Which model should I start with — Flux or Stable Diffusion? Flux. It generates higher quality images and is easier for beginners to learn on. Stable Diffusion is good if you want faster generation or lower VRAM usage.
Can I use my Mac's GPU? If you have an M1 or newer Mac, yes — ComfyUI will automatically use the GPU through Apple's Metal backend (MPS). For Intel Macs, it defaults to CPU (slow).
What's the difference between Flux.1-dev and Flux.1-pro? Flux.1-dev has freely downloadable weights (under a non-commercial license for the model itself). Flux.1-pro is the commercial, API-only version with higher quality, but for beginners, dev is plenty powerful.
Where do my generated images go?
ComfyUI saves them in ComfyUI/output/ by default. You can also change the save location in the "Save Image" node.
Can I sell images I generate with ComfyUI? Yes, with caveats. Each model ships with its own license: Flux.1-dev uses Black Forest Labs' non-commercial license for the model weights, and Stable Diffusion 3.5 uses Stability's Community License. Check the specific license for the model you use; the terms cover both how you may run the model and what you may do with the images it produces.
How do I update ComfyUI? If you installed from source, run git pull in the ComfyUI folder; the Windows portable build includes an update script in its update folder. Your downloaded models stay where they are. ComfyUI is actively developed, so new features appear regularly.

Alex the Engineer • Founder & AI Architect
Senior software engineer turned AI agency owner. I build massive, scalable AI workflows and share the exact blueprints, financial models, and code I use to generate automated revenue in 2026.