ComfyUI Inpainting Guide: Edit Parts of an Image with AI
Learn how to use inpainting in ComfyUI to selectively modify regions of an image — change expressions, remove objects, swap clothing, and more.
What is Inpainting?
Inpainting lets you modify a specific area of an image while keeping everything else unchanged. You paint a mask over the region you want to change, describe what should go there, and the AI regenerates only that area.
Common uses:
- Fix faces — correct expressions, eye color, or facial features
- Remove objects — erase unwanted items from a scene
- Swap clothing — change outfits on a character
- Add elements — insert new objects into an existing image
- Fix artifacts — repair AI generation errors in specific areas
Prerequisites
You need a checkpoint model and a source image to inpaint. No extra plugins are required — all necessary nodes are built into ComfyUI.
| Model | File | Download |
|---|---|---|
| SD1.5 checkpoint | dreamshaper_8.safetensors | Civitai |
You can use any checkpoint model for inpainting. Some models have dedicated inpainting variants (e.g., `sd-v1-5-inpainting.ckpt`) that produce smoother blending, but the standard model works for most cases.
Building the Inpainting Workflow
Core Nodes
- Load Checkpoint — loads your base model
- Load Image — loads the image you want to edit
- CLIP Text Encode (positive) — describes what you want in the masked area
- CLIP Text Encode (negative) — describes what to avoid
- VAE Encode — converts the image to latent space
- Set Latent Noise Mask — applies the mask to define the editable region
- KSampler — regenerates the masked area
- VAE Decode → Save Image
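The node graph above can be sketched in ComfyUI's API (JSON) format, the structure accepted by its `/prompt` HTTP endpoint. This is an illustrative sketch: the node ids, filenames, and sampler settings are placeholder assumptions, while the `class_type` names match the built-in nodes listed above.

```python
import json

# API-format graph: {node_id: {"class_type": ..., "inputs": ...}}.
# Links between nodes are ["source_node_id", output_index] pairs.
# Node ids and filenames below are placeholders for illustration.
workflow = {
    "1": {"class_type": "CheckpointLoaderSimple",
          "inputs": {"ckpt_name": "dreamshaper_8.safetensors"}},
    "2": {"class_type": "LoadImage",            # outputs: image (0), mask (1)
          "inputs": {"image": "portrait.png"}},
    "3": {"class_type": "CLIPTextEncode",       # positive prompt
          "inputs": {"text": "bright green eyes, detailed iris",
                     "clip": ["1", 1]}},
    "4": {"class_type": "CLIPTextEncode",       # negative prompt
          "inputs": {"text": "blurry, low quality, deformed",
                     "clip": ["1", 1]}},
    "5": {"class_type": "VAEEncode",
          "inputs": {"pixels": ["2", 0], "vae": ["1", 2]}},
    "6": {"class_type": "SetLatentNoiseMask",   # mask painted in the Mask Editor
          "inputs": {"samples": ["5", 0], "mask": ["2", 1]}},
    "7": {"class_type": "KSampler",
          "inputs": {"model": ["1", 0], "positive": ["3", 0],
                     "negative": ["4", 0], "latent_image": ["6", 0],
                     "seed": 42, "steps": 25, "cfg": 7.0,
                     "sampler_name": "euler", "scheduler": "normal",
                     "denoise": 0.4}},
    "8": {"class_type": "VAEDecode",
          "inputs": {"samples": ["7", 0], "vae": ["1", 2]}},
    "9": {"class_type": "SaveImage",
          "inputs": {"images": ["8", 0], "filename_prefix": "inpaint"}},
}

payload = json.dumps({"prompt": workflow})
print(len(workflow), "nodes in the graph")
```

On a running ComfyUI instance, POSTing this payload to `http://127.0.0.1:8188/prompt` queues the job, which is handy for repeating the same masked edit with different settings.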
Drawing the Mask
After loading your image in the Load Image node:
- Right-click the Load Image node
- Select Open in MaskEditor
- Paint over the area you want to change — white areas will be regenerated, black areas stay untouched
- Click Save to apply the mask
For best results, paint the mask slightly larger than the area you want to change. This gives the AI room to blend the new content smoothly with the surrounding pixels.
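If you export the mask as an image, the "paint slightly larger" tip can also be applied programmatically. A minimal sketch using Pillow (an assumption; any image library with a dilation filter works): `ImageFilter.MaxFilter` grows the white (editable) region by a few pixels in every direction.

```python
from PIL import Image, ImageFilter

def expand_mask(mask: Image.Image, pixels: int = 8) -> Image.Image:
    """Dilate the white (editable) region of an inpainting mask.

    MaxFilter replaces each pixel with the local maximum, so white
    areas grow by roughly `pixels` in every direction, giving the
    sampler surrounding context to blend against.
    """
    size = 2 * pixels + 1  # MaxFilter requires an odd kernel size
    return mask.convert("L").filter(ImageFilter.MaxFilter(size))

# Demo with a synthetic 64x64 mask: a 16x16 white square on black.
demo = Image.new("L", (64, 64), 0)
demo.paste(255, (24, 24, 40, 40))
grown = expand_mask(demo, pixels=4)

white_before = sum(p == 255 for p in demo.getdata())
white_after = sum(p == 255 for p in grown.getdata())
print(white_before, white_after)  # 256 -> 576 (square grows to 24x24)
```

Eight to sixteen pixels of growth is usually enough; much more than that and the regenerated region starts visibly diverging from the original image.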
The Denoise Parameter
The most important inpainting parameter is `denoise` on the KSampler node:
| Denoise Value | Effect |
|---|---|
| 0.1–0.3 | Subtle changes — adjusts colors, slight expression shifts. Keeps most of the original content |
| 0.4–0.6 | Moderate changes — can alter features while maintaining overall consistency |
| 0.7–1.0 | Major changes — generates almost entirely new content in the masked area |
Start low (0.3–0.5) and increase gradually until you get the result you want.
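The "start low and increase" loop is easy to automate once you have an API-format workflow: queue one copy per denoise value and compare the results side by side. A sketch, assuming the KSampler lives at node id "7" (a placeholder; use the id from your own exported workflow):

```python
import copy

# Minimal stand-in for an exported API-format workflow; only the
# KSampler node matters here, and the node id "7" is an assumption.
base = {
    "7": {"class_type": "KSampler",
          "inputs": {"seed": 42, "steps": 25, "cfg": 7.0,
                     "sampler_name": "euler", "scheduler": "normal",
                     "denoise": 0.3}},
}

def denoise_sweep(workflow, ksampler_id, values):
    """Yield independent copies of the workflow, one per denoise value,
    so the same masked edit can be queued at increasing strengths."""
    for value in values:
        variant = copy.deepcopy(workflow)  # leave the original untouched
        variant[ksampler_id]["inputs"]["denoise"] = round(value, 2)
        yield variant

variants = list(denoise_sweep(base, "7", [0.3, 0.4, 0.5, 0.6]))
print([v["7"]["inputs"]["denoise"] for v in variants])  # [0.3, 0.4, 0.5, 0.6]
```

Keeping the seed fixed across the sweep makes the comparison fair: the only thing that changes between outputs is how much of the original content survives.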
Step-by-Step Example: Changing Eye Color
- Load a portrait image
- Open the Mask Editor and paint over both eyes
- Set the positive prompt to: `bright green eyes, detailed iris`
- Set the negative prompt to: `blurry, low quality, deformed`
- Set `denoise` to 0.4
- Generate — the eyes change color while the rest of the face stays identical
Writing Good Inpainting Prompts
Unlike text-to-image, inpainting prompts should describe the masked region specifically, not the entire image:
| Goal | Prompt Approach |
|---|---|
| Change eye color | blue eyes, detailed iris, sharp focus |
| Remove an object | Describe what should be there instead: clean background, empty wall, wooden floor |
| Change clothing | red silk dress, elegant, detailed fabric |
| Fix a face | beautiful face, symmetrical, detailed skin, natural lighting |
Avoid prompts that describe the entire image — they can cause the AI to try changing areas outside the mask. Focus your prompt on what belongs inside the masked region.
Alternative Method: VAE Inpainting Encoder
ComfyUI also has a VAE Encode (for Inpainting) node that handles masking differently. Instead of using Set Latent Noise Mask, you connect the image and mask directly to this specialized encoder.
Both methods work. The Set Latent Noise Mask approach described above is simpler and more flexible for beginners. The VAE Inpainting Encoder can produce better edge blending in some cases, especially with dedicated inpainting models.
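In API-format terms, the two methods differ only in the encoding step. A sketch of both wirings (node ids are placeholder assumptions, and the exact mask pre-processing inside the inpainting encoder is ComfyUI's internal behavior):

```python
# Method 1: standard encode, mask applied to the latent afterwards.
set_mask_wiring = {
    "5": {"class_type": "VAEEncode",
          "inputs": {"pixels": ["2", 0], "vae": ["1", 2]}},
    "6": {"class_type": "SetLatentNoiseMask",
          "inputs": {"samples": ["5", 0], "mask": ["2", 1]}},
}

# Method 2: the inpainting encoder takes the mask directly and handles
# the masked pixels specially before encoding; grow_mask_by expands
# the mask a few pixels for smoother edges.
inpaint_encode_wiring = {
    "5": {"class_type": "VAEEncodeForInpaint",
          "inputs": {"pixels": ["2", 0], "vae": ["1", 2],
                     "mask": ["2", 1], "grow_mask_by": 6}},
}

# Either way, the KSampler's latent_image input connects to the last
# node shown: "6" for method 1, "5" for method 2.
print(sorted(inpaint_encode_wiring["5"]["inputs"]))
```

Note that method 2 removes the need for a separate Set Latent Noise Mask node, which is why dedicated inpainting checkpoints are usually paired with it.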
Common Issues and Fixes
Inpainted area doesn't blend with the rest of the image
- Lower the `denoise` value — high denoise generates content that looks disconnected
- Make the mask slightly larger to include some surrounding context
- Use a dedicated inpainting model (e.g., `sd-v1-5-inpainting.ckpt`)
Output changes areas outside the mask
- This usually means `denoise` is too high — lower it to 0.3–0.5
- Make sure the mask is properly saved in the Mask Editor
Inpainted area is blurry
- Increase `steps` to 25–30
- Raise `cfg` slightly (7–9)
- Add quality keywords to your prompt: `sharp, detailed, high quality`
Nothing changes at all
- `denoise` is too low — increase to at least 0.3
- Verify the mask exists — check that white areas are visible in the Mask Editor
- Make sure the mask is connected to the Set Latent Noise Mask node
Seam visible around the inpainted area
- Expand the mask to include more surrounding pixels
- Lower `denoise` to 0.3–0.4 for smoother blending
- Try the VAE Inpainting Encoder method instead
Related Guides
- Text to Image — Basic image generation
- Image to Image — Transform entire images
- Outpainting — Extend images beyond their borders
- Upscaling — Best upscale models, workflow settings, and fixing out-of-memory errors on low-VRAM GPUs