ONNX / ONNXRuntime Missing in ComfyUI
How to fix "No module named onnx" or "onnxruntime" errors when starting ComfyUI.
Symptoms
When ComfyUI starts, the terminal log shows warnings or errors like these:
WanVideoWrapper WARNING: FantasyPortrait nodes not available: No module named 'onnx'
No module named 'onnxruntime'
DWPose: Onnxruntime not found or doesn't come with acceleration providers, switch to OpenCV with CPU device
Cause
onnx is the Python library for ONNX (Open Neural Network Exchange), an open model interchange format; it is used to load and manipulate .onnx model files. onnxruntime is Microsoft's inference engine that actually runs those models. The two are often needed together.
The following ComfyUI custom nodes depend on onnx or onnxruntime:
- ComfyUI-WanVideoWrapper — FantasyPortrait stylization
- comfyui_controlnet_aux — DWPose pose detection (requires onnxruntime)
- ComfyUI-ReActor — face swap (requires onnxruntime + insightface)
- ComfyUI_IPAdapter_plus — FaceID (indirect dependency via insightface)
- ComfyUI_InstantID — identity-consistent face generation (indirect dependency via insightface)
- ComfyUI-PuLID — face preservation (indirect dependency via insightface)
When these libraries are missing, nodes that depend on them will be unavailable, but ComfyUI itself and other custom nodes will still work normally.
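To see which of these libraries are importable in the ComfyUI environment, a short stdlib-only Python snippet is enough. The helper name `is_installed` is just for illustration; the module names checked are the real import names:

```python
from importlib.util import find_spec

def is_installed(module_name: str) -> bool:
    """Return True if the module can be imported in this environment."""
    return find_spec(module_name) is not None

# Report the status of each library the nodes above depend on
for name in ("onnx", "onnxruntime"):
    status = "installed" if is_installed(name) else "MISSING"
    print(f"{name}: {status}")
```

Run this with the Python interpreter of the ComfyUI environment (for example from the launcher's terminal), not your system Python, or the result will not reflect what ComfyUI actually sees.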
Severity
Medium — You only need to install these if you use the features listed above. If you don't, the warning can be safely ignored.
Solutions
Option 1: Install the CPU version (recommended for most users)
- Open the Environments page in Wonderful Launcher
- Find the Terminal area
- Enter the following command and press Enter:
pip install onnx onnxruntime
- Wait for the installation to finish, then restart ComfyUI
This is sufficient for most use cases. Features like DWPose work fine in CPU mode — just a bit slower.
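After installing, you can confirm that onnxruntime loads and see which execution providers it offers. A minimal check (it only reports; it changes nothing):

```python
def runtime_status() -> str:
    """Report the installed onnxruntime version and providers, or a missing-module message."""
    try:
        import onnxruntime
    except ImportError:
        return "onnxruntime is not installed"
    providers = ", ".join(onnxruntime.get_available_providers())
    return f"onnxruntime {onnxruntime.__version__} (providers: {providers})"

print(runtime_status())
```

With the CPU package you should see CPUExecutionProvider in the list; CUDAExecutionProvider only appears with a working onnxruntime-gpu install.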
Option 2: Install the GPU-accelerated version (NVIDIA GPU users)
If you have an NVIDIA GPU, you can install the GPU version for faster inference:
pip install onnx onnxruntime-gpu
Important: Do not install both onnxruntime and onnxruntime-gpu at the same time. onnxruntime-gpu already includes CPU inference support. If you previously installed onnxruntime, uninstall it first:
pip uninstall onnxruntime -y
pip install onnxruntime-gpu
Common installation issues
GPU not working after installing onnxruntime-gpu
Error message:
[W:onnxruntime:Default, onnxruntime_pybind_state.cc] LoadLibrary failed with error 126 when trying to load onnxruntime_providers_cuda.dll
Cause: The CUDA/cuDNN version required by onnxruntime-gpu does not match the version used by PyTorch in your environment.
Starting from onnxruntime-gpu 1.19, CUDA 12.x + cuDNN 9.x is required by default. If your PyTorch uses the older cuDNN 8.x (PyTorch 2.3 and below), this conflict will occur.
Solution: For most ComfyUI users, the CPU version of onnxruntime is sufficient. If you do need GPU acceleration, make sure your PyTorch version and onnxruntime-gpu version use the same major CUDA version:
| PyTorch version | cuDNN version | Compatible onnxruntime-gpu |
|---|---|---|
| 2.4 and above | cuDNN 9.x | 1.19+ (default pip install) |
| 2.3 and below | cuDNN 8.x | 1.18.x |
Example of installing a specific version:
pip install onnxruntime-gpu==1.18.1
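The rule in the table above can be sketched as a small helper. `pick_onnxruntime_gpu` is a hypothetical name, and the threshold simply encodes the table (PyTorch 2.3 and below → the 1.18 line, PyTorch 2.4 and above → the default pip install):

```python
def pick_onnxruntime_gpu(torch_version: str) -> str:
    """Return a pip requirement for onnxruntime-gpu matching the given PyTorch version."""
    # Works for version strings like "2.3.1" or "2.4.0+cu121"
    major, minor = (int(part) for part in torch_version.split(".")[:2])
    if (major, minor) <= (2, 3):
        # PyTorch <= 2.3 ships cuDNN 8.x -> stay on the 1.18.x line
        return "onnxruntime-gpu==1.18.1"
    # PyTorch >= 2.4 ships cuDNN 9.x -> default pip install (1.19+) is fine
    return "onnxruntime-gpu"

print(pick_onnxruntime_gpu("2.3.1"))  # onnxruntime-gpu==1.18.1
```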
Both onnxruntime and onnxruntime-gpu installed
Symptom: GPU inference fails to activate, or unpredictable behavior occurs.
Solution: Keep only one. Check what is currently installed:
pip list | findstr onnxruntime
(On macOS/Linux, use grep instead of findstr.)
If both are present, uninstall and reinstall the version you need:
pip uninstall onnxruntime onnxruntime-gpu -y
pip install onnxruntime-gpu
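The same double-install check can be done programmatically. This sketch uses the stdlib importlib.metadata module, with the conflict rule in a small pure helper:

```python
from importlib.metadata import distributions

def conflicting(installed: set) -> bool:
    """True when both the CPU and GPU onnxruntime packages are present."""
    return {"onnxruntime", "onnxruntime-gpu"} <= installed

# Collect the names of all installed distributions in this environment
names = {d.metadata["Name"] for d in distributions()}
if conflicting({n.lower() for n in names if n}):
    print("Both onnxruntime and onnxruntime-gpu are installed; uninstall one.")
else:
    print("No onnxruntime package conflict detected.")
```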
protobuf version conflict
Error message:
TypeError: Descriptors cannot be created directly.
Solution:
pip install "protobuf==3.20.*"
(The quotes keep the shell from interpreting special characters in the version specifier. Downgrading to the 3.20.x line is the workaround the error message itself suggests; alternatively, set the environment variable PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=python.)
Still not resolved?
If you encounter other errors during installation, contact support through the Contact Us button in the app and include the full error message from the terminal.