# Portable Package
Install ComfyUI using the official portable package — no installer needed.
The portable package is a pre-built, self-contained version of ComfyUI. Download, extract, and run — no Python installation or system changes required.
This method supports NVIDIA, AMD, and CPU modes. See GPU Compatibility for details on each.
## Step 1: Download
Go to the ComfyUI GitHub releases page and download the portable package:
- `ComfyUI_windows_portable_nvidia.7z` — For NVIDIA GPUs (CUDA 13.0, Python 3.13)
- `ComfyUI_windows_portable_nvidia_cu126.7z` — For older NVIDIA GPUs (CUDA 12.6)
The file is around 1-2 GB.
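The choice between the two archives can be sketched as a small helper (the filenames are the ones listed above; the `older_gpu` parameter is an illustrative name, not an official term):

```python
def choose_package(nvidia: bool, older_gpu: bool = False) -> str:
    """Pick the portable archive name for your hardware."""
    if nvidia and older_gpu:
        # Older NVIDIA cards need the CUDA 12.6 build
        return "ComfyUI_windows_portable_nvidia_cu126.7z"
    # CPU-only and DirectML users also download the standard build
    # and simply launch run_cpu.bat instead of run_nvidia_gpu.bat.
    return "ComfyUI_windows_portable_nvidia.7z"
```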
## Step 2: Extract
> **Use 7-Zip:** You must use 7-Zip to extract the archive. The Windows built-in zip extractor may fail due to long file paths.
- Right-click the downloaded `.7z` file
- Select 7-Zip → Extract to "ComfyUI_windows_portable/"
- Wait for extraction to complete (may take a few minutes)
If extraction fails: Right-click the .7z file → Properties → check Unblock at the bottom → click Apply, then try again.
**Path rules (important!):**
- Choose a location with at least 50 GB of free space
- Avoid spaces and special characters in the path (e.g., `D:\ComfyUI` is good, `D:\My Programs\ComfyUI!` is bad)
- Do NOT extract to system folders like `C:\Program Files`, `C:\Windows`, or `C:\root`
- Do NOT run as Administrator, as this can cause permission issues with Python packages
- Keep the path short to avoid Windows 260-character path limit (see Long Paths fix)
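The path rules above can be checked mechanically. A minimal sketch (the character set and the sample model filename used for the length estimate are assumptions, not part of ComfyUI itself):

```python
import re

MAX_PATH = 260  # classic Windows path limit

def check_install_path(path: str) -> list[str]:
    """Return a list of warnings for a proposed ComfyUI install location."""
    problems = []
    # Only allow drive letters, slashes, and simple filename characters
    if re.search(r"[^A-Za-z0-9_\\:/.-]", path):
        problems.append("path contains spaces or special characters")
    # Estimate the deepest path a model file would produce
    if len(path) + len(r"\ComfyUI\models\checkpoints\model.safetensors") > MAX_PATH:
        problems.append("path may exceed the 260-character limit once models are added")
    lowered = path.lower()
    if any(p in lowered for p in (r"c:\program files", r"c:\windows")):
        problems.append("do not extract into a system folder")
    return problems
```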
## Step 3: Download a Model
Before you can generate images, you need at least one checkpoint model. Download one and place it in the correct folder:
ComfyUI_windows_portable/
└── ComfyUI/
└── models/
└── checkpoints/ ← Put your .safetensors file here
Popular starter models:
- SD 1.5 (~2 GB) — fast, low VRAM, huge community
- SDXL (~7 GB) — higher quality, needs 8 GB+ VRAM
- Flux Schnell (~12 GB) — latest generation, needs 12 GB+ VRAM
See Download Models for download links and details.
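To confirm the model landed in the right place, you can list what ComfyUI will find at startup. A small sketch (the folder layout matches the tree above; `.ckpt` is included since ComfyUI also loads that legacy format):

```python
from pathlib import Path

def find_checkpoints(portable_root: str) -> list[Path]:
    """List checkpoint files in the portable install's checkpoints folder."""
    ckpt_dir = Path(portable_root) / "ComfyUI" / "models" / "checkpoints"
    return sorted(ckpt_dir.glob("*.safetensors")) + sorted(ckpt_dir.glob("*.ckpt"))
```

If this returns an empty list, the model is in the wrong folder (a common mistake is dropping it directly into `models/` instead of `models/checkpoints/`).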
## Step 4: Run
Double-click the appropriate batch file in the root folder:
| File | When to Use |
|---|---|
| `run_nvidia_gpu.bat` | NVIDIA GPU (most users) |
| `run_cpu.bat` | No dedicated GPU, or AMD GPU with DirectML |
A console window will open showing startup logs. Once you see:
Starting server
To see the GUI go to: http://127.0.0.1:8188
Open http://127.0.0.1:8188 in your browser (Chrome recommended).
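If the browser shows nothing, a quick way to check whether the server is actually listening is a plain socket probe (a minimal sketch; host and port are the defaults shown in the log above):

```python
import socket

def comfyui_is_up(host: str = "127.0.0.1", port: int = 8188,
                  timeout: float = 2.0) -> bool:
    """Return True if something is accepting connections on the ComfyUI port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Connection refused or timed out: server not (yet) running
        return False
```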
## Updating
To update ComfyUI:
- Navigate to the `update/` folder inside the portable directory
- Double-click `update_comfyui.bat`
- Wait for the update to complete
- Restart ComfyUI
To update Python packages (PyTorch, etc.), use the batch files in the update/ folder.
## Folder Structure
ComfyUI_windows_portable/
├── python_embeded/ # Bundled Python (do NOT modify)
├── update/ # Update scripts
│ ├── update_comfyui.bat
│ └── update_comfyui_stable.bat
├── ComfyUI/
│ ├── models/ # AI models
│ │ ├── checkpoints/ # Main models (.safetensors)
│ │ ├── loras/ # LoRA files
│ │ ├── vae/ # VAE files
│ │ ├── controlnet/ # ControlNet models
│ │ ├── upscale_models/ # Upscaler models
│ │ └── clip/ # CLIP models
│ ├── custom_nodes/ # Community plugins
│ ├── input/ # Input images for img2img
│ ├── output/ # Generated images
│ └── extra_model_paths.yaml # Share models with other tools
├── run_nvidia_gpu.bat
└── run_cpu.bat
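After extracting (or after an update), the layout above can be sanity-checked in a few lines. A sketch; the list of folders is taken from the tree, not from any official manifest:

```python
from pathlib import Path

# Folders the tree above says a healthy portable install should contain
EXPECTED = [
    "python_embeded",
    "update",
    "ComfyUI/models/checkpoints",
    "ComfyUI/custom_nodes",
    "ComfyUI/output",
]

def verify_layout(root: str) -> list[str]:
    """Return the expected folders missing from a portable install."""
    return [p for p in EXPECTED if not (Path(root) / p).is_dir()]
```

An empty result means the extraction completed; missing entries usually indicate a partial extraction (re-extract with 7-Zip).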
## Sharing Models with A1111 / Forge
If you already have models from other AI tools, you don't need to download them again.
### Method 1: extra_model_paths.yaml
Edit ComfyUI/extra_model_paths.yaml to point to your existing model folders:
a111:
    base_path: D:/stable-diffusion-webui/  # Your A1111 path
    checkpoints: models/Stable-diffusion
    vae: models/VAE
    loras: models/Lora
    upscale_models: models/ESRGAN
    controlnet: models/ControlNet
Restart ComfyUI after saving.
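A typo in `base_path` or a subfolder fails silently (the models simply don't appear), so it can help to verify the mappings before restarting. A sketch, assuming the YAML section has already been loaded into a dict (e.g. with pyyaml); the `D:/` paths are illustrative:

```python
import os

# The a111 section expressed as a dict; in practice, parse the YAML file
A111 = {
    "base_path": "D:/stable-diffusion-webui/",  # illustrative path
    "checkpoints": "models/Stable-diffusion",
    "vae": "models/VAE",
    "loras": "models/Lora",
}

def missing_model_dirs(cfg: dict) -> list[str]:
    """Return the mapped folders that do not exist on disk."""
    base = cfg["base_path"]
    return [
        os.path.join(base, sub)
        for key, sub in cfg.items()
        if key != "base_path" and not os.path.isdir(os.path.join(base, sub))
    ]
```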
### Method 2: Symbolic Links (Symlinks)
If you want ComfyUI to use an existing model folder directly, you can create a Windows symbolic link. This is more efficient than copying files:
mklink /D "ComfyUI\models\checkpoints" "D:\my-models\checkpoints"
This makes ComfyUI see models in D:\my-models\checkpoints as if they were in its own folder, without duplicating files.
> **Note:** Creating symlinks on Windows may require Developer Mode to be enabled, or running the command prompt as Administrator (just for the `mklink` command, not for running ComfyUI itself).
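The same link can also be created from Python, which is handy if you script your setup. A sketch using the standard library (`os.symlink` needs the same Windows privileges as `mklink`; the paths are illustrative):

```python
import os

def link_model_dir(target: str, link_path: str) -> None:
    """Create a directory symlink so ComfyUI sees an external model folder.

    Roughly equivalent to:  mklink /D link_path target
    """
    if os.path.lexists(link_path):
        # Refuse to clobber an existing folder or link
        raise FileExistsError(f"{link_path} already exists")
    os.symlink(target, link_path, target_is_directory=True)
```

Remember to remove (or rename) the empty `checkpoints` folder ComfyUI ships with before linking over it.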
## Troubleshooting
| Problem | Solution |
|---|---|
| Browser shows title but no UI | Update to the latest version of Chrome |
| `CUDA out of memory` error | Close other GPU apps, or add `--lowvram` to the bat file |
| `torch is not compiled with CUDA` | You downloaded the wrong package. Use the NVIDIA version |
| Red nodes in workflows | Missing custom nodes. See Install Custom Nodes |
| Slow generation on NVIDIA GPU | Make sure you're running `run_nvidia_gpu.bat`, not `run_cpu.bat` |
## Adding Launch Arguments
To add flags like `--lowvram`, edit the `.bat` file with a text editor. Find the line containing `main.py` and append your flags:
.\python_embeded\python.exe -s ComfyUI\main.py --lowvram
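If you edit bat files from a script, the append step can be expressed as a tiny helper that also avoids duplicating a flag (a sketch; the function name is illustrative):

```python
def add_flag(bat_line: str, flag: str) -> str:
    """Append a launch flag to the main.py line of a run_*.bat file.

    Lines that don't launch main.py, and flags already present,
    are left unchanged.
    """
    if "main.py" not in bat_line:
        return bat_line
    if flag.split()[0] in bat_line.split():
        return bat_line  # flag already present
    return f"{bat_line.rstrip()} {flag}"
```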
Common flags:
| Flag | Effect |
|---|---|
| `--lowvram` | Use less GPU memory (slower) |
| `--cpu` | Run entirely on CPU |
| `--listen` | Allow access from other devices on your network |
| `--port 8189` | Change the web server port |
| `--highvram` | Keep models in VRAM (faster if you have enough) |