Productivity · 9 min read · April 19, 2026

How to Install ComfyUI for Flux Image Generation: The Complete Beginner's Guide

Learn how to install ComfyUI on Windows, Mac, and Linux for AI image generation with Flux, SDXL, and Stable Diffusion 3. Step-by-step setup, GPU optimization, and workflow tips.

ComfyUI is the professional-grade interface for AI image generation. If you use Stable Diffusion, Flux, or SDXL, ComfyUI gives you pixel-perfect control over every generation step — something web UIs like Civitai and Hugging Face can't touch.

The downside? ComfyUI's node-based workflow looks intimidating at first. This guide walks you through installation, basic setup, and your first Flux image generation in under 30 minutes.

Why ComfyUI for Image Generation?

Speed: SDXL generates in 2–5 seconds (with NVIDIA RTX). Web UIs: 30–60 seconds.

Cost: Free and local. No credit card, no queues, no per-image fees.

Control: Every parameter exposed — prompt weighting, LoRA blending, upscaling, inpainting, batch generation.

Monetization: Generate unlimited images for affiliate YouTube videos, product mockups, social media content, client work.

Beginner-Friendly: Templates and workflow presets remove the "scary node setup" barrier.

The downside? You need a GPU. NVIDIA RTX 30/40/50-series cards are recommended; AMD works but is slower, and Apple Silicon runs through PyTorch's MPS backend, which is slower still.

Before You Install: System Requirements

Minimum GPU Requirements

GPU                        VRAM         Speed        Recommendation
RTX 4090                   24GB         2–4 sec      Best (pro-grade)
RTX 4080                   16GB         3–5 sec      Excellent
RTX 4070 Ti                12GB         4–6 sec      Good (all models)
RTX 4070                   12GB         6–10 sec     Good (SDXL, smaller Flux)
RTX 3090                   24GB         5–8 sec      Usable (2022 standard)
RTX 4060                   8GB          15–30 sec    Marginal (memory optimizations needed)
Apple Silicon (M1/M2/M3)   Shared RAM   Much slower  Works via MPS; a web UI is often easier

CPU + RAM: 16GB RAM minimum, any modern CPU (Intel i7+, Ryzen 5+).

Storage: 50GB free (for models, ComfyUI, dependencies).

Internet: 20–50 GB download (full model suite; can start with just Flux or SDXL).
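The storage requirement above is easy to verify before you download anything. A quick stdlib-only sketch (`required_gb` is my assumption; tune it to the models you plan to keep):

```python
import shutil

def enough_disk_space(path=".", required_gb=50):
    """Return (free_gb, ok): free space on the drive holding `path`,
    and whether it meets the requirement."""
    free_gb = shutil.disk_usage(path).free / (1024 ** 3)
    return round(free_gb, 1), free_gb >= required_gb
```

Point `path` at the drive you'll install ComfyUI on, e.g. `enough_disk_space("C:/")` on Windows.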

Do You Have Enough VRAM?

Run this command to check:

nvidia-smi

Check the memory column in the output (e.g. 0MiB / 24576MiB). If you have 8GB+ VRAM, you're good to start.

Not enough VRAM? Use memory optimization flags (we'll cover this in Step 5).
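If you'd rather script the check, nvidia-smi can print just the memory figure — running `nvidia-smi --query-gpu=memory.total --format=csv,noheader` emits one line per GPU like `24576 MiB`, which is easy to parse. A sketch, assuming that output format:

```python
def parse_vram_mib(smi_output):
    """Parse lines like '24576 MiB' (one per GPU) from
    nvidia-smi --query-gpu=memory.total --format=csv,noheader."""
    return [int(line.split()[0])
            for line in smi_output.strip().splitlines() if line.strip()]

def vram_ok(smi_output, required_gib=8):
    """True only if every GPU has at least `required_gib` GiB of VRAM."""
    totals = parse_vram_mib(smi_output)
    return bool(totals) and all(mib >= required_gib * 1024 for mib in totals)
```

Feed it the command's output (e.g. via `subprocess.check_output`), and it tells you whether you clear the 8GB bar.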

Step-by-Step ComfyUI Installation

Step 1: Install NVIDIA CUDA Toolkit (Windows/Linux only)

ComfyUI needs CUDA 11.8 or 12.x to run on NVIDIA cards.

For Windows:

  1. Go to https://developer.nvidia.com/cuda-12-1-0-download-archive
  2. Select: Windows → x86_64 → Windows 10/11 → exe (network)
  3. Download (~2.5GB)
  4. Run installer, choose "Express Installation"
  5. Restart your computer
  6. Verify: Open PowerShell, type nvidia-smi, you should see your GPU listed

For Linux (Ubuntu 22.04):

# Add NVIDIA repo
wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/x86_64/cuda-ubuntu2204.pin
sudo mv cuda-ubuntu2204.pin /etc/apt/preferences.d/cuda-ubuntu2204.pin
sudo apt-key adv --fetch-keys https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/x86_64/3bf863cc.pub
sudo add-apt-repository "deb https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/x86_64/ /"
sudo apt-get update
sudo apt-get install cuda-toolkit-12-1

# Add to PATH
echo 'export PATH=/usr/local/cuda-12.1/bin:$PATH' >> ~/.bashrc
source ~/.bashrc

For Mac: Skip CUDA (it's NVIDIA-only). Apple Silicon runs ComfyUI through PyTorch's MPS backend instead; no extra toolkit is needed, but expect much slower generation than on NVIDIA. If that's too slow, use the Hugging Face web UI or Replicate API.

Step 2: Install Python 3.11

ComfyUI works best with Python 3.11. Do not use 3.12 (some dependencies fail).

Windows:

  1. Go to https://www.python.org/downloads/release/python-3118/
  2. Download "Windows installer (64-bit)"
  3. Run installer
  4. IMPORTANT: Check "Add Python 3.11 to PATH"
  5. Click Install Now
  6. Verify: Open PowerShell, type python --version → should show Python 3.11.x

Mac/Linux:

# Using Homebrew (Mac) or apt (Linux)
# Mac:
brew install python@3.11

# Linux:
sudo apt install python3.11 python3.11-venv

# Verify
python3.11 --version

Step 3: Download ComfyUI

Clone the official ComfyUI repository:

Windows (PowerShell):

cd Desktop
git clone https://github.com/comfyanonymous/ComfyUI
cd ComfyUI

Mac/Linux:

cd ~
git clone https://github.com/comfyanonymous/ComfyUI
cd ComfyUI

If you don't have Git installed, install it first:

# Windows (PowerShell):
winget install --id Git.Git -e

# Mac:
brew install git

# Linux:
sudo apt install git

Step 4: Set Up Python Virtual Environment

# Create virtual environment
# Windows:
python -m venv venv
# Mac/Linux:
python3.11 -m venv venv

# Activate it
# Windows:
.\venv\Scripts\activate
# Mac/Linux:
source venv/bin/activate

After activation, your terminal prompt should start with (venv).

Step 5: Install ComfyUI Dependencies

pip install -r requirements.txt

This installs PyTorch, Pillow, NumPy, and other libraries (~3–5 GB download, 5–10 min).

If you have 8GB VRAM or less, install the CUDA build of PyTorch explicitly before the rest of the requirements:

pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121
pip install -r requirements.txt
# Then add --lowvram to your launch command (next step)

Step 6: Launch ComfyUI

Windows — In PowerShell, inside the ComfyUI folder:

.\venv\Scripts\activate
python main.py

(If you used the standalone portable build instead of git, double-click run_nvidia_gpu.bat.)

Mac/Linux — Open a terminal in the ComfyUI folder and run:

source venv/bin/activate
python main.py

(Add --listen 0.0.0.0 only if you want to reach ComfyUI from other machines on your network.)

You should see:

ComfyUI running on: http://127.0.0.1:8188

Open http://127.0.0.1:8188 in your browser. You should see the ComfyUI node interface.

Step 7: Download Your First Model (Flux or SDXL)

ComfyUI doesn't ship with models. You need to download them:

For Flux (best results, 1.7 min/image on RTX 4090):

  1. Open a terminal in the ComfyUI folder
  2. Accept the FLUX.1-dev license on its Hugging Face page (it's a gated model), then authenticate: huggingface-cli login
  3. Download the Flux model:
# Using HF CLI (install with: pip install huggingface-hub)
huggingface-cli download black-forest-labs/FLUX.1-dev --local-dir models/checkpoints/flux

OR manually:

  1. Go to https://huggingface.co/black-forest-labs/FLUX.1-dev
  2. Download flux1-dev.safetensors (23.5 GB)
  3. Place in ComfyUI/models/checkpoints/

For SDXL (faster, smaller, 5–10 sec):

huggingface-cli download stabilityai/stable-diffusion-xl-base-1.0 --local-dir models/checkpoints/sdxl

For Stable Diffusion 3.5 (balanced):

huggingface-cli download stabilityai/stable-diffusion-3-5-large --local-dir models/checkpoints/sd3

Download sizes:

  • Flux: 23.5 GB (best quality, 1–2 min per image)
  • SDXL: 6.9 GB (good quality, 5–10 sec per image)
  • SD 3.5: 8.5 GB (good quality, 10–15 sec)

Pick one to start. You can add more later.
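A partial download is the usual cause of cryptic load errors later, so it's worth confirming the file landed where ComfyUI looks and is roughly the right size. A quick sketch (a Flux checkpoint far under 23.5 GB means a truncated download):

```python
import os

def list_checkpoints(comfy_root):
    """Return (filename, size_in_GB) for every model file found under
    <comfy_root>/models/checkpoints/, including subfolders."""
    ckpt_dir = os.path.join(comfy_root, "models", "checkpoints")
    found = []
    for root, _dirs, files in os.walk(ckpt_dir):
        for name in files:
            if name.endswith((".safetensors", ".ckpt")):
                size_gb = os.path.getsize(os.path.join(root, name)) / 1024 ** 3
                found.append((name, round(size_gb, 2)))
    return found
```

Run it with your ComfyUI folder path and compare the sizes against the list above.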

GPU performance comparison for Flux

Your First Image Generation

Once ComfyUI is running (http://127.0.0.1:8188):

  1. Load a template workflow: Click "Add Node" → Search "Load Checkpoint" → Select Flux or SDXL
  2. Write a prompt: Click the text box, type: "A professional product photo of a MacBook Pro on a wooden desk, studio lighting, 4K"
  3. Set steps: 20–30 steps (higher = better quality, slower)
  4. Click "Queue Prompt"
  5. Wait 1–3 minutes (depends on GPU and model)
  6. Image appears in "Gallery" panel on right

Result: Your first AI image, generated locally, at zero cost.
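Everything the Queue Prompt button does also works over HTTP: ComfyUI serves a small API on the same port, and POSTing an API-format workflow to /prompt queues it. (Enable Dev mode in the UI settings to get the "Save (API Format)" export button.) A minimal sketch, assuming the default address:

```python
import json
import urllib.request

SERVER = "http://127.0.0.1:8188"

def build_prompt_payload(workflow):
    """Wrap an API-format workflow dict in the JSON body /prompt expects."""
    return json.dumps({"prompt": workflow}).encode("utf-8")

def queue_prompt(workflow, server=SERVER):
    """POST the workflow to ComfyUI and return its JSON reply
    (which includes the prompt_id of the queued job)."""
    req = urllib.request.Request(
        server + "/prompt",
        data=build_prompt_payload(workflow),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```

Usage: export your working graph as workflow_api.json, then `queue_prompt(json.load(open("workflow_api.json")))` while the server is running.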

7-step ComfyUI installation guide

Optimizing for Speed & Quality

For Fast Generations (8–10 sec, SDXL)

Use these settings:

  • Model: SDXL
  • Steps: 20
  • Sampler: DPM++ 2M Karras
  • CFG: 7.5
  • Resolution: 768×512 (instead of 1024×1024)

For Best Quality (1–2 min, Flux)

  • Model: Flux
  • Steps: 30–40
  • Sampler: Euler (Flux recommends this)
  • CFG: 7
  • Resolution: 1024×1024
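If you drive ComfyUI over its API for batch work, the two presets above are handy to keep as plain data. The key names below are meant to mirror KSampler's inputs (sampler_name, scheduler, etc.); treat the exact identifiers as assumptions to check against your own workflow export:

```python
# The "fast" and "quality" presets from the lists above, as data.
PRESETS = {
    "fast_sdxl": {"steps": 20, "sampler_name": "dpmpp_2m", "scheduler": "karras",
                  "cfg": 7.5, "width": 768, "height": 512},
    "quality_flux": {"steps": 35, "sampler_name": "euler", "scheduler": "normal",
                     "cfg": 7.0, "width": 1024, "height": 1024},
}

def apply_preset(sampler_inputs, name):
    """Return a copy of a KSampler-style inputs dict with a preset merged in."""
    merged = dict(sampler_inputs)
    merged.update(PRESETS[name])
    return merged
```

Switching between fast drafts and final-quality renders then becomes a one-word change instead of retyping five settings.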

For Memory-Constrained GPUs (RTX 3060, 4060)

Add flags to your launch command (or to run_nvidia_gpu.bat if you use the portable build):

python main.py --lowvram --preview-method auto

The --lowvram flag cuts VRAM usage sharply; generation is slower, but it lets 6GB cards run.

Installing LoRAs & Custom Models

LoRA = small add-ons that fine-tune generation (style, subject, consistency).

  1. Download LoRA from https://civitai.com (search "SDXL LoRA" or "Flux LoRA")
  2. Save to ComfyUI/models/loras/
  3. In ComfyUI, add node: "Load LoRA" → select your LoRA → strength 0.5–1.0

Example:

  • Download "Cinematic Lighting LoRA"
  • Add to your workflow
  • Generate: "A cyberpunk city, cinematic lighting" + LoRA
  • Result: Professional cinematic images

Common Issues & Fixes

"CUDA out of memory"

Fix: Add the --lowvram flag to your launch command (or use SDXL instead of Flux).

"Model not found" error

Fix: Verify model is in ComfyUI/models/checkpoints/ and file name matches in ComfyUI node.

"Port 8188 already in use"

Fix: Close other ComfyUI instances, or change port:

python main.py --listen 0.0.0.0 --port 8189

Then visit http://127.0.0.1:8189
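To see whether a port is actually taken before you start switching numbers, a tiny stdlib check (my helper, not part of ComfyUI):

```python
import socket

def port_in_use(port, host="127.0.0.1"):
    """True if something is already listening on host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(0.5)
        return s.connect_ex((host, port)) == 0
```

`port_in_use(8188)` returning True means another ComfyUI instance (or something else) already owns the port.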

"Very slow generation (1–2 min for SDXL)"

Fix: Check GPU usage (nvidia-smi in another terminal). If not 100%, try:

  • Use SDXL instead of Flux
  • Lower resolution to 512×512
  • Reduce steps to 15

"PyTorch not finding CUDA"

Fix: Reinstall PyTorch with explicit CUDA:

pip uninstall torch torchvision torchaudio
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121

# Verify:
python -c "import torch; print(torch.cuda.is_available())"

Monetization Ideas with ComfyUI

YouTube Automation: Batch generation at a few seconds per image yields hundreds of images per hour, enough raw material for 12–24 YouTube Shorts a day. Realistic AdSense range once a channel gains traction: $100–$500/mo.

Fiverr Gigs: Sell "AI product mockups" at $30–$100 per image. ComfyUI can batch-generate a client's set in under 30 minutes, so a handful of orders a day can reach $150–$500.

Stock Photo Sites: Upload Flux images to stock sites that accept AI content (check each site's AI policy first; policies vary, and some sites ban AI-generated uploads entirely). Earn $0.10–$5 per download.

Social Media: 1 high-quality image/day = 30 images/month. Post to Pinterest (1M+ views possible) → link to blog = traffic + affiliate revenue.

FAQ

Is ComfyUI legal?

Yes. Flux, SDXL, and SD3 are all open-source. You can generate, sell, and monetize images.

Do I need an internet connection?

No. After downloading models, ComfyUI runs fully local and offline.

Can I use ComfyUI on Mac?

Yes, on Apple Silicon via PyTorch's MPS backend, though generation is much slower than on NVIDIA. If that's too slow, use the Hugging Face web UI or DiffusionBee instead.

How long before my GPU pays for itself?

  • RTX 4070 (~$600) earning $500/mo from Fiverr gigs pays for itself in about 6 weeks
  • YouTube automation (3,600 images/month) = 12–24 Shorts/day = $100–$500/mo

Can I batch-generate 100 images overnight?

Yes. Use ComfyUI's "Queue" feature to add 100 prompts, then let it run. Process overnight, wake up to 100 images.
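That queue can be scripted: start from one API-format workflow export, give each copy a different seed on its sampler node, and POST each to the server's /prompt endpoint. A sketch; the node id ("3" below) is whatever your own export uses for its KSampler, which is an assumption to check:

```python
import json

def batch_workflows(base_workflow, sampler_node_id, count, start_seed=0):
    """Copy an API-format workflow `count` times, giving each copy a
    unique seed, ready to POST one-by-one to ComfyUI's /prompt endpoint."""
    batch = []
    for i in range(count):
        wf = json.loads(json.dumps(base_workflow))  # cheap deep copy
        wf[sampler_node_id]["inputs"]["seed"] = start_seed + i
        batch.append(wf)
    return batch
```

Loop over the result, submit each workflow, and the server grinds through the queue while you sleep.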

What's the difference between Flux, SDXL, and SD3?

  • Flux: Best quality, slowest (1.7 min on RTX 4090), 23.5 GB
  • SDXL: Balanced, fast (8 sec), 6.9 GB
  • SD3: Good quality, medium speed (12 sec), 8.5 GB

Start with SDXL. Upgrade to Flux if you need magazine-quality images.

Can I use ComfyUI for face generation?

Yes, but results look better with LoRAs. For consistent faces, use the FaceDetailer node from the ComfyUI-Impact-Pack custom node collection.

Where do I get prompts?

Browse Civitai: most posted images list the full prompt used to generate them. Copy one, tweak it, regenerate.

Alex the Engineer


Founder & AI Architect

Senior software engineer turned AI Agency owner. I build massive, scalable AI workflows and share the exact blueprints, financial models, and code I use to generate automated revenue in 2026.
