ComfyUI Multi GPU: How to Use Multiple GPUs Without Breaking Workflows
Learn what ComfyUI can and cannot do with multiple GPUs, how to choose a CUDA device, and how to run separate ComfyUI instances on separate ports.
ComfyUI can run on a specific GPU, and you can run multiple ComfyUI instances on different GPUs.
But a normal ComfyUI workflow does not automatically become twice as fast just because your machine has two GPUs. Most users get the best result by running one ComfyUI instance per GPU.
Quick Answer
| Goal | Best Setup | Example |
|---|---|---|
| Use GPU 1 instead of GPU 0 | Start ComfyUI with --cuda-device 1 | python main.py --cuda-device 1 |
| Run two queues at the same time | Start two ComfyUI instances on different ports | --cuda-device 0 --port 8188, --cuda-device 1 --port 8189 |
| Keep all GPUs visible but prefer one | Use --default-device only if you know why | python main.py --default-device 1 |
| Make one KSampler split across GPUs | Usually not supported by default | Use a model or node that explicitly supports it |
| Fix VRAM pressure | Use VRAM flags or smaller workflows | --lowvram, smaller resolution, quantized models |
What Multi GPU Means in ComfyUI
There are three different ideas people mix together:
- Choosing one GPU: ComfyUI runs on a selected device.
- Running multiple instances: two browser sessions, two ports, two queues, two GPUs.
- Splitting one workflow across GPUs: a single model or workflow uses more than one GPU at the same time.
The first two are practical. The third is not something most standard ComfyUI workflows do automatically.
Step 1: Check Your GPUs
On Windows or Linux with NVIDIA GPUs, run:
nvidia-smi

Look for the GPU index in the left column. The first card is usually 0, the second is usually 1.
Then confirm PyTorch can see CUDA:
python -c "import torch; print(torch.cuda.device_count()); print(torch.cuda.get_device_name(0))"If torch.cuda.device_count() returns 0, you do not have a multi-GPU problem yet. You have a PyTorch/CUDA install problem. Start with GPU Compatibility.
Step 2: Choose One GPU With --cuda-device
For a manual install:
python main.py --cuda-device 1

For the Windows portable package, edit or copy your launch .bat file and add the flag after main.py:
.\python_embeded\python.exe -s ComfyUI\main.py --windows-standalone-build --cuda-device 1

The official ComfyUI argument description says --cuda-device sets the CUDA device this instance will use and hides the others from that process.
As a result, ComfyUI may still print cuda:0 even after you select GPU 1. This is normal: the chosen GPU is the only device visible to that process, so it is renumbered as device 0.
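Hiding devices from a process is what the standard CUDA_VISIBLE_DEVICES environment variable does, so you can get a similar effect by hand if passing the flag is awkward (shown here for a Linux shell; treat the equivalence as an assumption about how the flag is implemented):

```bash
# Expose only physical GPU 1 to the process.
# Inside the process, that GPU then appears as cuda:0.
CUDA_VISIBLE_DEVICES=1 python main.py
```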
Step 3: Run Two Instances on Two GPUs
Use different ports so the two servers do not collide.
Instance A:
python main.py --cuda-device 0 --port 8188

Instance B:
python main.py --cuda-device 1 --port 8189

Then open:
http://127.0.0.1:8188
http://127.0.0.1:8189

This is the most reliable multi-GPU pattern for production work: one queue per GPU.
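On Linux, a small wrapper script can start both instances together. This is a sketch; adapt the paths to your install, and on the Windows portable package use two copies of the launch .bat instead:

```bash
#!/usr/bin/env bash
# Sketch: one ComfyUI queue per GPU, one port per queue.
python main.py --cuda-device 0 --port 8188 &
python main.py --cuda-device 1 --port 8189 &
wait   # keep the script alive until both servers exit
```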
Use separate user directories for serious parallel work
If both instances are active every day, consider separate --user-directory paths so browser state, workflow tabs, and user settings do not fight each other.
Example:
python main.py --cuda-device 0 --port 8188 --user-directory user-gpu0
python main.py --cuda-device 1 --port 8189 --user-directory user-gpu1

Step 4: Understand --default-device
ComfyUI also has a --default-device option. It sets the default device while keeping other devices visible.
That sounds attractive, but it is not the same as automatic multi-GPU execution. Use it only when a workflow, custom node, or advanced setup needs other devices to remain visible.
For most users, --cuda-device is easier to reason about.
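To see why visibility matters, here is a minimal PyTorch sketch (not ComfyUI code) of what a custom node can do when more than one device stays visible:

```bash
python - <<'EOF'
import torch

# With several GPUs visible, code can pin work to a named device.
# Under --cuda-device only one device is visible, so "cuda:1" would fail.
n = torch.cuda.device_count()
device = torch.device("cuda:1" if n > 1 else "cuda:0")
x = torch.randn(4, 4, device=device)  # tensor allocated on the chosen GPU
print(x.device, "of", n, "visible device(s)")
EOF
```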
Step 5: Do Not Use Multi GPU to Hide a VRAM Problem
If a workflow fails because one GPU runs out of VRAM, adding a second GPU usually will not make the same model magically fit. Try these first:
- reduce image resolution
- reduce batch size
- close other GPU-heavy apps
- use a quantized model when appropriate
- start ComfyUI with --lowvram
- use a smaller model or workflow
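Before blaming GPU count, it also helps to confirm how much VRAM is actually free. A minimal PyTorch check, again a sketch rather than ComfyUI tooling:

```bash
python - <<'EOF'
import torch

# Free vs. total VRAM for each visible GPU, in GB.
for i in range(torch.cuda.device_count()):
    free, total = torch.cuda.mem_get_info(i)
    print(f"cuda:{i}: {free / 1024**3:.1f} GB free of {total / 1024**3:.1f} GB")
EOF
```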
See Python Out of Memory in ComfyUI for a deeper diagnosis path.
Common Mistakes
| Mistake | Why It Fails | Better Approach |
|---|---|---|
| Opening two browser tabs on the same server | Both tabs still use the same ComfyUI process | Start a second process on another port |
| Adding --cuda-device 1 in the wrong place | The flag is not passed to main.py | Put it after main.py in the launch command |
| Expecting one workflow to split across two GPUs | Standard nodes usually run on one selected device | Use separate queues or specialized nodes |
| Running two instances on port 8188 | Only one process can own the port | Use 8188 and 8189 |
| Installing CPU-only PyTorch | ComfyUI cannot use any NVIDIA GPU | Reinstall PyTorch with CUDA |
How to Verify the Setup
Start ComfyUI and read the startup log. You want to see your GPU name near the device line:
Device: cuda:0 NVIDIA GeForce ...

Then watch nvidia-smi while generating. If you run two instances, queue a small workflow in each one and confirm each GPU shows activity.
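For a continuously refreshing per-GPU view, nvidia-smi can poll utilization and memory directly:

```bash
# Per-GPU utilization and memory, refreshed every second.
nvidia-smi --query-gpu=index,name,utilization.gpu,memory.used --format=csv -l 1
```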
If only one GPU is busy, check:
- which port your browser is using
- whether both processes are still running
- whether each command has a different --cuda-device
- whether the second process failed to start because the port was already occupied
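You can also confirm each server answers HTTP before digging into GPU behavior. ComfyUI exposes a small web API, and /system_stats is one endpoint that reports the device an instance is using (verify the path against your ComfyUI version):

```bash
# Each instance should respond on its own port.
curl -s http://127.0.0.1:8188/system_stats
curl -s http://127.0.0.1:8189/system_stats
```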
How Wonderful Launcher Helps
Wonderful Launcher is a good fit when you want multiple stable ComfyUI environments instead of one fragile folder:
- keep a clean GPU 0 environment for normal images
- keep a separate GPU 1 environment for experiments or video nodes
- avoid custom node dependency conflicts between queues
- preserve working workflows before changing launch flags
If you are still choosing hardware, start with GPU Compatibility. If ComfyUI launches but the browser keeps disconnecting, see ComfyUI Reconnecting Error.
Related Guides
- GPU Compatibility
- Python Out of Memory in ComfyUI
- ComfyUI Reconnecting Error
- ComfyUI Dependency Conflicts
Source References
- ComfyUI Dependency Conflicts: Fix Them Without Reinstalling: diagnose and fix Python dependency conflicts in ComfyUI without destroying a working environment.
- Troubleshooting Decision Tree: a systematic diagnosis guide for identifying whether a ComfyUI problem is a node, dependency, compatibility, or model issue.
- Wonderful Launcher Docs