Download Models
Where to find and how to install AI models for ComfyUI.
ComfyUI needs AI model files to generate images and videos. This guide covers where to download them and where to put them.
Model Types
| Type | Folder | What It Does | File Size |
|---|---|---|---|
| Checkpoint | `models/checkpoints/` | The main generation model | 2-12 GB |
| LoRA | `models/loras/` | Fine-tuned style/concept add-on | 10-200 MB |
| VAE | `models/vae/` | Image encoder/decoder | 300-800 MB |
| ControlNet | `models/controlnet/` | Guides generation with reference images | 700 MB-1.5 GB |
| Upscaler | `models/upscale_models/` | Increases image resolution | 20-200 MB |
| CLIP | `models/clip/` | Text encoder | 200 MB-2 GB |
| UNET | `models/diffusion_models/` | Core diffusion model (Flux, etc.) | 5-24 GB |
Where to Download
Hugging Face
The largest open-source model hub. Most official model releases happen here.
- Browse: huggingface.co/models
- Search for models by name and download `.safetensors` files
CivitAI
Community-driven platform with thousands of fine-tuned models, LoRAs, and embeddings.
- Browse: civitai.com
- Filter by Type (Checkpoint, LoRA, etc.) and Base Model (SD 1.5, SDXL, Flux)
Recommended Starter Models
For Image Generation
| Model | Base | VRAM Needed | Best For |
|---|---|---|---|
| Stable Diffusion 1.5 | SD 1.5 | 4 GB+ | Fast generation, huge plugin ecosystem |
| RealVisXL | SDXL | 8 GB+ | Photorealistic images |
| Flux Schnell | Flux | 12 GB+ | Fast, high-quality generation |
| Flux Dev | Flux | 12 GB+ | Higher quality, slower |
For Video Generation
| Model | VRAM Needed | Notes |
|---|---|---|
| Wan 2.1 | 12 GB+ (quantized) | Text-to-video and image-to-video |
| HunyuanVideo | 16 GB+ | High-quality video generation |
| LTX-Video | 8 GB+ | Lightweight video model |
How to Install
1. Download the `.safetensors` file (avoid `.ckpt` files when possible; `.safetensors` is safer)
2. Place it in the correct subfolder under `models/`:

```
ComfyUI/
└── models/
    ├── checkpoints/      ← Main models go here
    ├── loras/            ← LoRA files go here
    ├── vae/              ← VAE files go here
    ├── controlnet/       ← ControlNet models go here
    └── upscale_models/   ← Upscaler models go here
```

3. Restart ComfyUI (or click Refresh in the model dropdown)
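The refresh step works because ComfyUI rescans its model folders, including nested subfolders, each time the list is rebuilt. A minimal sketch of that lookup (the `list_models` helper is illustrative, not ComfyUI's actual API):

```python
from pathlib import Path

def list_models(models_dir, folder, exts=(".safetensors", ".ckpt")):
    """Recursively collect model files under one models/ subfolder."""
    root = Path(models_dir) / folder
    if not root.is_dir():
        return []
    # Nested subdirectories (e.g. checkpoints/sdxl/) are picked up too.
    return sorted(p.relative_to(root).as_posix()
                  for p in root.rglob("*") if p.suffix in exts)
```

After dropping a file into `models/checkpoints/`, a call like `list_models("ComfyUI/models", "checkpoints")` would include it; non-model files are ignored.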
Using GGUF Quantized Models
If you have limited VRAM, you can use GGUF quantized versions of large models (like Flux). These use less memory at a small quality cost.
Quantized models require the ComfyUI-GGUF custom node:
1. Install the ComfyUI-GGUF node (see Install Custom Nodes)
2. Download a GGUF version of your model (e.g., `flux1-schnell-Q4_K_S.gguf`)
3. Place it in `models/diffusion_models/`
4. Use the GGUF Loader node instead of the regular Load Checkpoint node
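A quick way to confirm a download is really a GGUF file (and not, say, an HTML error page saved with the wrong name) is to check the magic bytes: every GGUF file starts with the 4-byte ASCII marker `GGUF`. A small sketch:

```python
def looks_like_gguf(path):
    """Check the 4-byte 'GGUF' magic at the start of the file."""
    with open(path, "rb") as f:
        return f.read(4) == b"GGUF"
```

If this returns `False` for a file you just downloaded, re-download it before troubleshooting the loader node.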
Tips
- `.safetensors` vs `.ckpt`: always prefer `.safetensors`. The `.ckpt` format can contain executable code and is a security risk.
- Check the base model: a LoRA trained on SD 1.5 won't work with SDXL checkpoints. Always match the base model.
- Organize subfolders: you can create subfolders inside `models/checkpoints/` (e.g., `models/checkpoints/sdxl/`). ComfyUI scans subdirectories automatically.
- Share models: if you also use A1111 or Forge, configure `extra_model_paths.yaml` to share model files. See Portable Package setup.
- Check your VRAM: not sure if a model will run on your GPU? See System Requirements and GPU Compatibility for VRAM guidance.
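The reason `.safetensors` is safer is its layout: an 8-byte little-endian length `N`, then `N` bytes of UTF-8 JSON describing tensor names, dtypes, shapes, and offsets, then raw tensor data. Nothing in the file is executed when it loads, unlike pickled `.ckpt` files. A sketch that reads just the header (standard library only):

```python
import json
import struct

def read_safetensors_header(path):
    """Read the JSON header of a .safetensors file.

    Layout: 8-byte little-endian length N, then N bytes of UTF-8 JSON.
    """
    with open(path, "rb") as f:
        (n,) = struct.unpack("<Q", f.read(8))
        return json.loads(f.read(n))
```

This is also handy for inspecting a model's tensors (and the optional `__metadata__` block) without loading any weights.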
Next Steps
- Generate your first image
- Install custom nodes
- Common issues — troubleshooting model loading errors