ComfyUI Outpainting Guide: Extend Images Beyond Their Borders
How to use outpainting in ComfyUI to expand an image's canvas — extend the field of view, fix cropped compositions, or change the aspect ratio.
What is Outpainting?
Outpainting extends an image beyond its original borders. The AI generates new content in the expanded area that seamlessly blends with the existing image. This is useful for:
- Expanding the field of view — turn a tight crop into a wider scene
- Fixing composition — add headroom, extend a landscape, or fill missing edges
- Changing aspect ratio — convert a square image to widescreen or portrait
Outpainting works on the same principle as inpainting — the extended area is treated as a masked region that the AI fills in.
Prerequisites
You need a checkpoint model. For best results, use a dedicated inpainting model:
| Model | Purpose | Download |
|---|---|---|
| v1-5-pruned-emaonly.safetensors | Generate initial image | HuggingFace |
| sd-v1-5-inpainting.ckpt | Outpainting (better blending) | HuggingFace |
Place both in ComfyUI/models/checkpoints/.
A dedicated inpainting model produces smoother blending at the edges. You can use a regular model too, but the seam between original and generated content may be more visible.
Building the Outpainting Workflow
The workflow has two stages:
Stage 1: Prepare the Expanded Canvas
- Load Image — load the image you want to extend
- Image Pad for Outpaint — adds blank padding around the image
The Image Pad for Outpaint node has four parameters controlling how many pixels to add in each direction:
| Parameter | Description |
|---|---|
| left | Pixels to extend left |
| top | Pixels to extend upward |
| right | Pixels to extend right |
| bottom | Pixels to extend downward |
Set nonzero values for the directions you want to extend. For example, to make a landscape wider, increase left and right. The node also outputs a mask covering the padded area, which tells the sampler where to generate.
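Conceptually, the node grows the canvas by the four padding amounts and marks the new pixels in a mask. The sketch below (plain Python, no ComfyUI dependency, illustrative only — the real node operates on image tensors) shows that relationship:

```python
# Conceptual sketch of Image Pad for Outpaint: grow the canvas by the four
# padding amounts and build a mask where 1 = area to generate, 0 = original.

def pad_for_outpaint(width, height, left, top, right, bottom):
    """Return the padded canvas size and a binary mask over the new canvas."""
    new_w = width + left + right
    new_h = height + top + bottom
    # A pixel belongs to the original image if it falls inside the offset box.
    mask = [
        [0 if (left <= x < left + width and top <= y < top + height) else 1
         for x in range(new_w)]
        for y in range(new_h)
    ]
    return new_w, new_h, mask

# Example: widen a 512x512 image by 256 px on each side.
w, h, mask = pad_for_outpaint(512, 512, left=256, top=0, right=256, bottom=0)
print(w, h)          # 1024 512
print(mask[0][0])    # 1  (padded area on the left — will be generated)
print(mask[0][600])  # 0  (inside the original image — kept as-is)
```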
Stage 2: Generate the Extended Content
- Load Checkpoint — load an inpainting model
- VAE Encode (for Inpainting) — encodes the padded image and mask into latent space
- CLIP Text Encode (positive) — describe the scene (should match the original image's content)
- CLIP Text Encode (negative) — elements to avoid
- KSampler — generates content in the masked area
- VAE Decode → Save Image
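The two stages can be wired together in ComfyUI's API (JSON) format. The node class names below are ComfyUI's built-ins; the node IDs, filenames, prompts, and parameter values are illustrative placeholders you would adapt:

```python
# A minimal outpainting graph in ComfyUI API format, written as a Python dict.
# Connections are [source_node_id, output_index].
workflow = {
    "1": {"class_type": "LoadImage",
          "inputs": {"image": "input.png"}},
    "2": {"class_type": "ImagePadForOutpaint",
          "inputs": {"image": ["1", 0],
                     "left": 256, "top": 0, "right": 256, "bottom": 0,
                     "feathering": 40}},
    "3": {"class_type": "CheckpointLoaderSimple",
          "inputs": {"ckpt_name": "sd-v1-5-inpainting.ckpt"}},
    "4": {"class_type": "CLIPTextEncode",   # positive prompt
          "inputs": {"clip": ["3", 1],
                     "text": "wide mountain landscape, golden hour"}},
    "5": {"class_type": "CLIPTextEncode",   # negative prompt
          "inputs": {"clip": ["3", 1],
                     "text": "blurry, seam, artifacts"}},
    "6": {"class_type": "VAEEncodeForInpaint",
          "inputs": {"pixels": ["2", 0], "mask": ["2", 1],
                     "vae": ["3", 2], "grow_mask_by": 8}},
    "7": {"class_type": "KSampler",
          "inputs": {"model": ["3", 0], "positive": ["4", 0],
                     "negative": ["5", 0], "latent_image": ["6", 0],
                     "seed": 42, "steps": 28, "cfg": 7.0,
                     "sampler_name": "euler_ancestral",
                     "scheduler": "normal", "denoise": 1.0}},
    "8": {"class_type": "VAEDecode",
          "inputs": {"samples": ["7", 0], "vae": ["3", 2]}},
    "9": {"class_type": "SaveImage",
          "inputs": {"images": ["8", 0], "filename_prefix": "outpaint"}},
}

# Sanity check: every connection points at a node that exists in the graph.
links = [v for node in workflow.values()
         for v in node["inputs"].values() if isinstance(v, list)]
assert all(src in workflow for src, _ in links)
```

Note that the checkpoint loader feeds three consumers: the model to the KSampler, the CLIP to both text encoders, and the VAE to both the encode and decode nodes.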
Key Principle
Use the same prompt style as the original image. If the original is a landscape photo, prompt for landscape elements. Consistency between the prompt and the existing image is what makes the extension look natural.
Tips for Natural-Looking Results
Keep prompts consistent — The extended area should match the original image's style, subject, and mood. Don't prompt for completely different content.
Use an inpainting model — Dedicated inpainting models handle edge blending much better than standard models.
Start small — Extend by 128–256 pixels at a time rather than trying to double the image in one pass. Multiple small extensions produce more coherent results.
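The "start small" advice amounts to a loop: each pass pads, encodes, samples, and decodes, then feeds its output into the next pass. A sketch of the arithmetic (the step size and target are illustrative):

```python
# Grow a canvas to a target width in small per-side increments, one
# outpainting pass per step, instead of a single large extension.

def extension_passes(width, target, step=256):
    """Return the canvas width after each pass (step px added per side)."""
    widths = []
    while width < target:
        width = min(width + 2 * step, target)
        widths.append(width)
    return widths

# Two passes take a 512-px-wide image to 1536 px.
print(extension_passes(512, 1536))  # [1024, 1536]
```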
Adjust KSampler parameters:
| Parameter | Recommended | Effect |
|---|---|---|
| steps | 25–30 | More steps = better blending |
| cfg | 6–8 | Too high = visible seam between old and new |
| denoise | 0.8–1.0 | High denoise for the blank area |
Common Issues and Fixes
Visible seam between original and extended area
- Use a dedicated inpainting model
- Lower cfg to 5–7
- Extend in smaller increments
- Adjust the feathering parameter on the pad node if available
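Feathering softens the seam by ramping the mask from 1 (fully regenerate) to 0 (keep the original pixel) over a band of pixels, rather than switching abruptly. A one-row miniature of that idea (a linear ramp is an assumption for illustration; the node's actual falloff may differ):

```python
# One row of a feathered outpainting mask: padded area on the left,
# original image on the right, with a linear ramp over `feather` pixels.

def feathered_row(width, pad_left, feather):
    """Mask values for one row; 1.0 = generate fully, 0.0 = keep original."""
    row = []
    for x in range(width):
        if x < pad_left:
            row.append(1.0)                                   # new area
        elif x < pad_left + feather:
            row.append(1.0 - (x - pad_left + 1) / (feather + 1))  # blend band
        else:
            row.append(0.0)                                   # untouched
    return row

row = feathered_row(8, pad_left=3, feather=2)
print([round(v, 2) for v in row])  # [1.0, 1.0, 1.0, 0.67, 0.33, 0.0, 0.0, 0.0]
```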
Extended area doesn't match the original style
- Rewrite your prompt to describe the entire scene, not just the new area
- Use similar quality keywords as the original image
- Try a different sampler (euler_ancestral often blends well)
Extended area is blurry compared to the original
- Increase steps to 30+
- Increase the resolution of the padded canvas
- Consider running a second pass with inpainting to refine details
Colors don't match between original and extended areas
- The inpainting model typically handles color matching well — if colors diverge, try a different model
- Add color/lighting keywords to your prompt that match the original
Related Guides
- Inpainting Guide — Selectively edit parts of an image
- Image to Image — Transform existing images
- Upscale Guide — Enlarge images with AI detail enhancement