ComfyUI Canny ControlNet: Edge-Based Image Control Guide
How to use Canny ControlNet in ComfyUI to generate images that follow the edge structure of a reference photo — with parameter tuning tips and troubleshooting.
What is Canny ControlNet?
Canny ControlNet uses the Canny edge detection algorithm to extract outlines from an image, then uses those outlines to guide AI generation. The result is an image that follows the structural contours of your reference while applying a completely different style, subject, or mood.
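Under the hood, Canny keeps a pixel as an edge when its gradient magnitude exceeds the high threshold, discards it below the low threshold, and treats the band in between as "weak" edges that survive only if connected to a strong edge. A minimal sketch of that double-threshold classification (plain Python for illustration, not ComfyUI's actual implementation):

```python
def classify_edges(grad_mags, low, high):
    """Classify normalized gradient magnitudes the way Canny's
    double threshold does: strong, weak, or suppressed."""
    labels = []
    for g in grad_mags:
        if g >= high:
            labels.append("strong")      # always kept as an edge
        elif g >= low:
            labels.append("weak")        # kept only if linked to a strong edge
        else:
            labels.append("suppressed")  # discarded
    return labels

# Gradient magnitudes for a row of pixels, thresholds 0.3 / 0.7
print(classify_edges([0.1, 0.35, 0.9, 0.5], low=0.3, high=0.7))
# → ['suppressed', 'weak', 'strong', 'weak']
```

Raising `low` suppresses more of the weak band (fewer edges); lowering `high` promotes more pixels to strong edges (more detail kept).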
It's the most popular ControlNet type because it works well for nearly any subject — architecture, characters, products, landscapes.
| Strength | Description |
|---|---|
| Structure preservation | The output closely matches the outlines of the original image |
| High flexibility | Adjust edge detection thresholds to capture more or fewer details |
| Wide application | Works with sketches, photos, architectural plans, product shots |
| Predictable results | More stable and consistent than most other ControlNet types |
Prerequisites
Models
| Model | File | Download |
|---|---|---|
| SD1.5 checkpoint | dreamshaper_8.safetensors | Civitai |
| Canny ControlNet | control_v11p_sd15_canny.pth | HuggingFace |
| VAE (optional) | vae-ft-mse-840000-ema-pruned.safetensors | HuggingFace |
File Placement
ComfyUI/
├── models/
│ ├── checkpoints/
│ │ └── dreamshaper_8.safetensors
│ ├── controlnet/
│ │ └── control_v11p_sd15_canny.pth
│ └── vae/
│       └── vae-ft-mse-840000-ema-pruned.safetensors

ComfyUI includes a built-in Canny node, so you don't need any extra plugins for this workflow. For other ControlNet types (Depth, OpenPose), you'll need the ControlNet Auxiliary Preprocessors plugin.
Building the Workflow
Node Setup
- Load Image — load your reference photo
- Canny — extracts edge lines from the image
- Preview Image — (optional) preview the edge detection result before generating
- Load Checkpoint — loads the SD1.5 model
- Load ControlNet Model — loads control_v11p_sd15_canny.pth
- Apply ControlNet (Advanced) — connects the Canny output to your prompt conditioning
- CLIP Text Encode (x2) — positive and negative prompts
- Empty Latent Image — sets the output resolution (512x512 for SD1.5)
- KSampler — generates the image
- VAE Decode → Save Image
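If you drive ComfyUI through its HTTP API instead of the graph editor, the node setup above can be sketched in API-format JSON, here as a Python dict. The `class_type` names are ComfyUI's built-in node types; the node IDs, prompts, seed, and file names are illustrative placeholders.

```python
# Abbreviated ComfyUI API-format workflow for the node setup above.
# Keys are arbitrary node IDs; ["<id>", <slot>] pairs are connections.
workflow = {
    "1": {"class_type": "LoadImage",
          "inputs": {"image": "reference.png"}},
    "2": {"class_type": "Canny",
          "inputs": {"image": ["1", 0],
                     "low_threshold": 0.35, "high_threshold": 0.65}},
    "3": {"class_type": "CheckpointLoaderSimple",
          "inputs": {"ckpt_name": "dreamshaper_8.safetensors"}},
    "4": {"class_type": "ControlNetLoader",
          "inputs": {"control_net_name": "control_v11p_sd15_canny.pth"}},
    "5": {"class_type": "CLIPTextEncode",  # positive prompt
          "inputs": {"clip": ["3", 1], "text": "a modern glass house"}},
    "6": {"class_type": "CLIPTextEncode",  # negative prompt
          "inputs": {"clip": ["3", 1], "text": "blurry, low quality"}},
    "7": {"class_type": "ControlNetApplyAdvanced",
          "inputs": {"positive": ["5", 0], "negative": ["6", 0],
                     "control_net": ["4", 0], "image": ["2", 0],
                     "strength": 0.9, "start_percent": 0.0,
                     "end_percent": 0.8}},
    "8": {"class_type": "EmptyLatentImage",
          "inputs": {"width": 512, "height": 512, "batch_size": 1}},
    "9": {"class_type": "KSampler",
          "inputs": {"model": ["3", 0], "positive": ["7", 0],
                     "negative": ["7", 1], "latent_image": ["8", 0],
                     "seed": 0, "steps": 25, "cfg": 7.5,
                     "sampler_name": "dpmpp_2m", "scheduler": "karras",
                     "denoise": 1.0}},
    "10": {"class_type": "VAEDecode",
           "inputs": {"samples": ["9", 0], "vae": ["3", 2]}},
    "11": {"class_type": "SaveImage",
           "inputs": {"images": ["10", 0], "filename_prefix": "canny"}},
}
```

Note that Apply ControlNet (Advanced) sits between the text encoders and the KSampler: it takes both conditionings in and hands modified conditionings out.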
Connections
Load Image → Canny → Apply ControlNet (image input)
Load ControlNet Model → Apply ControlNet (control_net input)
CLIP Text Encode (positive) → Apply ControlNet (positive input)
CLIP Text Encode (negative) → Apply ControlNet (negative input)
Apply ControlNet outputs → KSampler conditioning inputs
Key Parameters
Canny Node
| Parameter | Range | Recommended | Effect |
|---|---|---|---|
| low_threshold | 0.0–1.0 | 0.3–0.4 | Lower = more edges detected. Too low creates noise |
| high_threshold | 0.0–1.0 | 0.6–0.7 | Higher = only strong edges kept. Too high loses detail |
Always preview the Canny output before generating. If the edge map is too noisy, raise the thresholds. If important structures are missing, lower them.
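That tuning loop can be reduced to a simple heuristic on the previewed edge map: if too large a fraction of pixels are edges, the map is noisy and thresholds should go up; if too few, detail is being lost. The target band below is an assumption for illustration, not a ComfyUI setting.

```python
def suggest_threshold_change(edge_density, target=(0.05, 0.15)):
    """edge_density: fraction of pixels marked as edges in the
    Canny preview (0.0-1.0). The target band is a rough heuristic."""
    low, high = target
    if edge_density > high:
        return "raise thresholds"   # edge map is noisy
    if edge_density < low:
        return "lower thresholds"   # important structure missing
    return "keep thresholds"

print(suggest_threshold_change(0.30))  # → raise thresholds
print(suggest_threshold_change(0.02))  # → lower thresholds
```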
Apply ControlNet (Advanced)
| Parameter | Recommended | Effect |
|---|---|---|
| strength | 0.7–1.0 | How strictly the output follows the edges |
| start_percent | 0.0 | When ControlNet begins influencing the generation |
| end_percent | 0.8–1.0 | When ControlNet stops. Lowering gives more creative freedom |
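To see what start_percent and end_percent mean in practice, it helps to map them onto sampler steps. ComfyUI applies these percentages along the noise schedule; the linear step mapping below is a simplification for illustration, not the exact internal computation.

```python
def controlnet_active_steps(steps, start_percent, end_percent):
    """Approximate which sampler steps fall inside the ControlNet
    guidance window, assuming a linear percent-to-step mapping."""
    start = round(steps * start_percent)
    end = round(steps * end_percent)
    return list(range(start, end))

# With 20 steps and end_percent=0.8, guidance covers steps 0-15,
# leaving the last 4 steps free for the model to refine details.
active = controlnet_active_steps(20, 0.0, 0.8)
print(len(active), active[0], active[-1])  # → 16 0 15
```

This is why lowering end_percent "gives more creative freedom": the final denoising steps, where fine detail is resolved, run without edge guidance.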
KSampler
| Parameter | Recommended |
|---|---|
| steps | 20–30 |
| cfg | 7–8 |
| sampler_name | dpmpp_2m |
| scheduler | karras |
Tips for Better Results
Prompt alignment — Write prompts that relate to the content of your reference image. If the reference is a building, prompt for architecture. Conflicting prompts and references produce poor results.
Threshold tuning — For detailed subjects (jewelry, machinery), use lower thresholds (0.2/0.5) to capture fine detail. For simple subjects (portraits, landscapes), use higher thresholds (0.4/0.7) for cleaner edges.
Strength adjustment — Start at 1.0, then lower to 0.7–0.8 if the result feels too constrained. Lower strength lets the AI interpret the edges more freely.
Common Issues and Fixes
Output looks like a traced outline with no creativity
- Lower strength to 0.6–0.7
- Set end_percent to 0.7 so the AI adds its own detail in later steps
- Use more descriptive prompts to give the AI room for stylistic interpretation
Too many edges — output is messy
- Increase both low_threshold and high_threshold on the Canny node
- Preview the edge map — it should show clean structure lines, not noise
Important edges are missing
- Lower the threshold values
- Use a higher-resolution input image — blurry photos produce poor edges
Output ignores the edges completely
- Verify you're using the correct ControlNet model (control_v11p_sd15_canny.pth for SD1.5)
- Check that the Apply ControlNet node is connected correctly to both the conditioning and the KSampler
Related Guides
- ControlNet Overview — All ControlNet types explained
- Depth ControlNet — Spatial depth control
- OpenPose ControlNet — Human pose control