ComfyUI 3D Generation Guide: Create 3D Models with Hunyuan3D
How to generate 3D models from images in ComfyUI using Tencent's Hunyuan3D 2.0 — native workflow and Kijai wrapper setup, model downloads, and tips.
What is Hunyuan3D?
Hunyuan3D 2.0 is Tencent's open-source 3D generation model that creates high-fidelity 3D models from images, text descriptions, or sketches. It uses a two-stage process:
- Geometry generation (Hunyuan3D-DiT) — creates the 3D shape from your input
- Texture synthesis (Hunyuan3D-Paint) — adds high-resolution (4K) textures to the geometry
The model can generate a complete 3D model in as little as 30 seconds.
Model Variants
| Model | Parameters | Purpose |
|---|---|---|
| Hunyuan3D-DiT-v2-0 | 1.1B | Single-view image to 3D shape |
| Hunyuan3D-DiT-v2-mv | 1.1B | Multi-view images to 3D shape |
| Hunyuan3D-DiT-v2-mv-turbo | 1.1B | Fast multi-view (distilled) |
| Hunyuan3D-DiT-v2-mini | 0.6B | Lightweight version |
| Hunyuan3D-Paint-v2-0 | 1.3B | Texture generation |
Two Approaches in ComfyUI
Option A: ComfyUI Native Support (Simpler)
ComfyUI now natively supports Hunyuan3D. No extra plugins needed — just update to the latest version. Generates geometry without textures (voxel-style output).
Best for: Quick experiments, users who want simple setup.
Option B: ComfyUI-Hunyuan3DWrapper (Full Featured)
Kijai's wrapper plugin provides complete geometry + texture generation. Requires compiling an additional component.
Best for: Production-quality 3D models with textures.
Option A: Native ComfyUI Workflow
Setup
Update ComfyUI to the latest version. Find the Hunyuan3D workflow templates in Workflows → Browse Templates → 3D.
Model Download
| Model | Location | Download |
|---|---|---|
| hunyuan3d-dit-v2-mv.safetensors | models/checkpoints/ | HuggingFace (rename after download) |
For faster generation, use the turbo variant (hunyuan3d-dit-v2-mv-turbo) instead.
Running the Workflow
- Load the model in the Image Only Checkpoint Loader node
- Load your input image(s) — use images with clean backgrounds (white or transparent)
- Run the workflow
- Output: `.glb` files are saved to `ComfyUI/output/mesh/`
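If you want to pick up the latest export from a script (for example, to hand it to a viewer or a game-engine import step), a minimal stdlib-only sketch; the `newest_glb` helper name is illustrative and not part of ComfyUI's API, and the default path assumes the output location above:

```python
from pathlib import Path

def newest_glb(output_dir="ComfyUI/output/mesh"):
    """Return the most recently written .glb file in output_dir, or None."""
    meshes = sorted(Path(output_dir).glob("*.glb"),
                    key=lambda p: p.stat().st_mtime)
    return meshes[-1] if meshes else None
```

Point it at a different directory (e.g. `ComfyUI/output/3D/` for the Kijai wrapper) as needed.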
For best results, remove the background from your input images before feeding them to Hunyuan3D. White or transparent backgrounds produce cleaner geometry.
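If you don't want to reach for a dedicated background-removal tool, even a crude white-to-alpha pass can help. A minimal sketch in pure Python on raw RGB tuples; the function name and the 240 threshold are assumptions for illustration, and a real image would go through a library such as PIL or rembg instead:

```python
def white_to_transparent(pixels, threshold=240):
    """Map near-white RGB pixels to fully transparent RGBA.

    pixels: iterable of (r, g, b) tuples; returns a list of (r, g, b, a).
    A pixel counts as background when all three channels are >= threshold.
    """
    out = []
    for r, g, b in pixels:
        alpha = 0 if min(r, g, b) >= threshold else 255
        out.append((r, g, b, alpha))
    return out
```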
Option B: Kijai Wrapper (With Textures)
Plugin Installation
Install via ComfyUI Manager: search for ComfyUI-Hunyuan3DWrapper, install it, and restart ComfyUI.
Texture Component Compilation
After installing the plugin, you need to compile the texture rendering component. This requires basic familiarity with terminal/command line.
For ComfyUI Desktop:
- Open the terminal panel in Desktop (toggle button → bottom panel → terminal tab)
- Navigate to the wrapper directory and install the wheel:
```
cd custom_nodes/ComfyUI-Hunyuan3DWrapper/wheels
pip install custom_rasterizer-0.1-cp312-cp312-win_amd64.whl
```
For Portable ComfyUI:
```
python_embeded\python.exe -m pip install ComfyUI\custom_nodes\ComfyUI-Hunyuan3DWrapper\wheels\custom_rasterizer-0.1-cp312-cp312-win_amd64.whl
```
Choose the wheel file that matches your Python/CUDA version. If no pre-compiled wheel matches, you'll need to compile from source — see the plugin README for instructions.
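Wheel filenames encode the interpreter they were built for (cp312 means CPython 3.12). To check which tag your ComfyUI Python expects before picking a wheel, a quick stdlib-only check; run it with the same interpreter ComfyUI uses (for Portable, `python_embeded\python.exe`):

```python
import sys

def wheel_python_tag():
    """Return the cpXY tag (e.g. 'cp312') for the running interpreter."""
    return f"cp{sys.version_info.major}{sys.version_info.minor}"

# The printed tag must match the cpXY-cpXY part of the wheel filename.
print(wheel_python_tag())
```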
Model Download
| Model | Location | Download |
|---|---|---|
| hunyuan3d-dit-v2-0-fp16.safetensors | models/diffusion_models/ | Kijai |
The texture model and delight model are downloaded automatically by the workflow.
Running the Workflow
- Load `hunyuan3d-dit-v2-0-fp16.safetensors` in the Hy3DModelLoader node
- Load your input image
- Run — the workflow generates both geometry and textured output
- Output saved to `ComfyUI/output/3D/`
Input Image Tips
The quality of your 3D output depends heavily on your input image:
- Clean backgrounds — white or transparent backgrounds produce the best results
- Clear subject — the object should be well-lit and clearly visible
- Single object — one subject per image works best
- Multiple angles (for multi-view) — front, side, and back views improve accuracy
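To sanity-check the "subject fills most of the frame" rule before queueing a generation, a rough sketch that measures the bounding box of non-white pixels as a fraction of the frame; pure Python on raw RGB tuples, and the helper name and 240 threshold are illustrative assumptions:

```python
def subject_coverage(pixels, width, threshold=240):
    """Fraction of the frame covered by the bounding box of non-white pixels.

    pixels: flat row-major list of (r, g, b) tuples, len == width * height.
    """
    height = len(pixels) // width
    # Collect coordinates of every pixel darker than the background threshold.
    coords = [(i % width, i // width)
              for i, (r, g, b) in enumerate(pixels)
              if min(r, g, b) < threshold]
    if not coords:
        return 0.0
    xs = [x for x, _ in coords]
    ys = [y for _, y in coords]
    box = (max(xs) - min(xs) + 1) * (max(ys) - min(ys) + 1)
    return box / (width * height)
```

A value well below 0.5 suggests cropping closer to the subject before generation.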
Generating from Text
These ComfyUI workflows take image input, not text. To generate 3D from text:
- Generate an image from your text prompt using any text-to-image workflow
- Feed that image into the Hunyuan3D workflow
For multi-view input, the ComfyUI-MVAdapter plugin can generate multi-angle images from a single image.
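The two-step text-to-3D pipeline can also be chained from a script against ComfyUI's HTTP API. A hedged stdlib-only sketch: the `/prompt` endpoint and default port 8188 are ComfyUI's standard server API, but the helper names here are illustrative, and the workflow JSON itself must be exported from your own graph (Save in API format):

```python
import json
import urllib.request

def build_prompt_payload(workflow: dict) -> bytes:
    """Wrap an API-format workflow graph in the request body /prompt expects."""
    return json.dumps({"prompt": workflow}).encode("utf-8")

def queue_workflow(workflow: dict, server: str = "http://127.0.0.1:8188") -> dict:
    """POST a workflow to a running ComfyUI instance and return its reply."""
    req = urllib.request.Request(
        f"{server}/prompt",
        data=build_prompt_payload(workflow),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```

In practice you would queue the text-to-image graph first, wait for its output image, then queue the Hunyuan3D graph pointing at that image.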
Common Issues and Fixes
custom_rasterizer error (Kijai wrapper)
- The texture component isn't compiled — follow the compilation steps above
- Make sure the wheel file matches your Python and CUDA versions
Output geometry is distorted
- Use a cleaner input image with a white/transparent background
- Ensure the subject fills most of the frame
- Try the multi-view workflow with images from multiple angles
Slow generation
- Use the turbo model variant (hunyuan3d-dit-v2-mv-turbo)
- The fast variant (hunyuan3d-dit-v2-0-fast) halves inference time
- Mini variant (v2-mini, 0.6B parameters) is fastest but lower quality
3D Pack plugin won't install
- ComfyUI 3D Pack has dependency conflicts with the latest ComfyUI
- Use the native workflow or Kijai wrapper instead
- If you need 3D Pack specifically, try Comfy3D-WinPortable
Online Alternatives
If you want to try Hunyuan3D without local setup:
Related Guides
- Text to Image — Generate input images for 3D conversion
- Install Custom Nodes — Plugin installation guide