Your GPU is there, but ComfyUI refuses to use it. Here's every reason torch.cuda.is_available() returns False.
You have an NVIDIA GPU, but ComfyUI generates at a crawl. Check:
```bash
python -c "import torch; print(torch.cuda.is_available())"
```
Prints `False`. Your GPU is idle.
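A quick way to see *which* build you ended up with is the suffix of `torch.__version__`: CPU-only wheels report something like `2.5.1+cpu`, CUDA wheels something like `2.5.1+cu126`. A minimal sketch of that check (the `build_tag` helper is my name for it, not a PyTorch API):

```python
def build_tag(version: str) -> str:
    """Extract the build tag from a torch.__version__ string.

    CPU-only wheels report e.g. '2.5.1+cpu'; CUDA wheels e.g. '2.5.1+cu126'.
    """
    return version.split("+", 1)[1] if "+" in version else "unknown"

print(build_tag("2.5.1+cpu"))    # → cpu   (the CPU-only wheel is installed)
print(build_tag("2.5.1+cu126"))  # → cu126 (a CUDA 12.6 wheel)
```

In practice you'd just run `python -c "import torch; print(torch.__version__)"` and read the suffix yourself; anything ending in `+cpu` explains the `False` above.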
A plain `pip install torch` gives you the CPU-only build. You need:
```bash
pip install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu126
```
Check your driver with `nvidia-smi` first, then match the CUDA version:
| Driver | CUDA | URL suffix |
|---|---|---|
| 560+ | 12.6 | cu126 |
| 570+ | 12.8 | cu128 |
| 575+ | 13.0 | cu130 |
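The driver-to-wheel mapping above can be sketched as a small helper that reads the version string `nvidia-smi` reports (the function name `cuda_wheel_suffix` is illustrative, not part of any tool):

```python
def cuda_wheel_suffix(driver_version: str) -> str:
    """Pick a PyTorch wheel index suffix from an nvidia-smi driver version.

    The thresholds follow the table above.
    """
    major = int(driver_version.split(".")[0])
    if major >= 575:
        return "cu130"
    if major >= 570:
        return "cu128"
    if major >= 560:
        return "cu126"
    raise ValueError(f"Driver {driver_version} is too old; update it first")

print(cuda_wheel_suffix("572.16"))  # → cu128
```

Newer drivers are backward compatible, so a higher-than-needed driver is fine; the failure mode to avoid is installing a `cu130` wheel on a driver older than its row in the table.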
Some plugins' requirements silently replace your CUDA PyTorch with the CPU-only build. Fix: reinstall CUDA PyTorch, and use `pip install --no-deps` for risky plugins.
Portable package: use `run_nvidia_gpu.bat`, not `run_cpu.bat`.
Update your NVIDIA driver from nvidia.com.
```bash
pip install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu130
```
See GPU Compatibility for the full matrix.
```bash
python -c "import torch; print(torch.cuda.is_available()); print(torch.cuda.get_device_name(0))"
```
Should print `True` plus your GPU name. Still `False`? Book a remote session.