Sketch Filling Fusion for Multimodal Inpainting
- Sketch Filling Fusion (SFF) is a multi-input image composition framework that fuses binary sketches and reference images to guide precise, user-driven inpainting.
- It employs a structure-aware UNet with dual conditioning branches, using FiLM and CLIP-based cross-attention to enforce both structural and content fidelity.
- Quantitative evaluations show SFF reduces pixel errors and FID scores, demonstrating superior performance over traditional inpainting methods.
Sketch Filling Fusion (SFF) is a multi-input-conditioned image composition framework designed to enable precise, user-driven image manipulation and inpainting by fusing sketch-based structural guidance with reference image-based content transfer. SFF fine-tunes a pre-trained latent diffusion model, integrating a binary sketch and a reference exemplar image to control the completion of missing regions in images at both the structural and textural levels. This approach achieves enhanced editability and fine-grained control, demonstrated by superior quantitative and qualitative results in targeted inpainting and composition tasks (Kim et al., 2023).
1. Model Inputs, Preprocessing, and Latent Encoding
SFF operates on four distinct inputs: a partially observed source image $x$ with a masked region, a binary mask $m$ designating the region to fill, a binary sketch $s$ providing edge-level structure, and a reference exemplar image $r$ offering content fidelity. Sketches are extracted via PiDiNet edge detection and binarized. Masks, sampled as rectangles or free-form shapes, are used both during training and for user-driven inference. For training, the exemplar crop is taken from the ground-truth image according to the mask; at inference, users provide custom reference images.
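The exact mask sampler and binarization threshold are not specified above, so the following is a minimal sketch under assumed conventions (a 0.5 threshold, rectangular masks only); PiDiNet is treated as an external edge detector whose output is a soft edge map in $[0, 1]$.

```python
import numpy as np

def binarize_sketch(edge_map: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Threshold a soft edge map (e.g., PiDiNet output in [0, 1]) into a binary sketch."""
    return (edge_map >= threshold).astype(np.float32)

def sample_rect_mask(h, w, min_area=0.05, max_area=0.30, rng=None):
    """Sample a rectangular mask covering 5-30% of the image area (1 = region to fill)."""
    rng = rng or np.random.default_rng()
    area = rng.uniform(min_area, max_area) * h * w
    aspect = rng.uniform(0.5, 2.0)                    # height/width ratio of the rectangle
    mh = int(np.clip(round(np.sqrt(area * aspect)), 1, h))
    mw = int(np.clip(round(area / mh), 1, w))
    top = int(rng.integers(0, h - mh + 1))
    left = int(rng.integers(0, w - mw + 1))
    mask = np.zeros((h, w), dtype=np.float32)
    mask[top:top + mh, left:left + mw] = 1.0
    return mask
```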
All images are encoded using an autoencoder (taken from Stable Diffusion) into downsampled latent representations, with $z_0$ denoting the latent of the source image. The forward noising process in latent space is defined as:

$$z_t = \sqrt{\bar{\alpha}_t}\, z_0 + \sqrt{1 - \bar{\alpha}_t}\, \epsilon, \qquad \epsilon \sim \mathcal{N}(0, I),$$

where $\bar{\alpha}_t = \prod_{i=1}^{t} \alpha_i$ and $\alpha_i = 1 - \beta_i$, with $\{\beta_i\}$ the variance schedule. At each synthesis step, the mask $m$ and sketch $s$ are concatenated with the noisy latent $z_t$ to form the input to the core model.
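A hedged sketch of the latent encoding and the closed-form forward noising step, assuming a Stable-Diffusion-style VAE as exposed by the `diffusers` library (`AutoencoderKL.encode(...).latent_dist`); the linear-schedule endpoints and the 0.18215 latent scaling are common conventions, not values stated above.

```python
import torch

def linear_alpha_bar(T: int = 1000, beta_1: float = 1e-4, beta_T: float = 2e-2) -> torch.Tensor:
    """Cumulative product of alphas for a linear beta schedule (T = 1,000 here)."""
    betas = torch.linspace(beta_1, beta_T, T)
    return torch.cumprod(1.0 - betas, dim=0)

@torch.no_grad()
def encode_and_noise(vae, image: torch.Tensor, t: torch.Tensor, alpha_bar: torch.Tensor):
    """z_t = sqrt(abar_t) * z_0 + sqrt(1 - abar_t) * eps, computed in latent space."""
    z0 = vae.encode(image).latent_dist.sample() * 0.18215   # downsampled latent z_0
    eps = torch.randn_like(z0)
    ab = alpha_bar.to(z0.device)[t].view(-1, 1, 1, 1)
    z_t = ab.sqrt() * z0 + (1.0 - ab).sqrt() * eps
    return z_t, eps
```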
2. Structure-aware UNet Architecture
The central building block of SFF is a structure-aware UNet ($\epsilon_\theta$), which augments standard denoising UNet architectures with dual conditioning branches:
- Reference Branch: A frozen CLIP image encoder (ResNet-50 or ViT) processes $r$ into an embedding $e_r$, followed by a 2-layer MLP that yields a conditioning vector $c_r$. This vector is injected into all UNet blocks via cross-attention, replacing the text cross-attention tokens with $c_r$.
- Sketch Branch: A shallow CNN (stacked convolutions with ReLU) lifts the binary sketch $s$ to a feature map $F_s$. This map modulates the convolutional features $h$ at every layer via FiLM:

  $$\hat{h} = \gamma(F_s) \odot h + \beta(F_s),$$

  where $\gamma$ and $\beta$ are convolutions over $F_s$.
The mask $m$ is concatenated both to the input tensor and to early UNet feature maps, explicitly marking which regions to fill versus preserve.
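The following is a minimal PyTorch sketch of the two conditioning paths, assuming hypothetical module names (`SketchFiLM`, `ref_mlp`) and illustrative layer widths; only the FiLM rule $\hat{h} = \gamma(F_s) \odot h + \beta(F_s)$ and the CLIP-embedding-to-MLP-to-cross-attention path follow the description above.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SketchFiLM(nn.Module):
    """Lift a binary sketch to a feature map F_s and FiLM-modulate UNet activations."""
    def __init__(self, feat_ch: int, unet_ch: int):
        super().__init__()
        self.encoder = nn.Sequential(                      # shallow CNN over the sketch
            nn.Conv2d(1, feat_ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat_ch, feat_ch, 3, padding=1), nn.ReLU(),
        )
        self.to_gamma = nn.Conv2d(feat_ch, unet_ch, 3, padding=1)   # gamma(F_s)
        self.to_beta = nn.Conv2d(feat_ch, unet_ch, 3, padding=1)    # beta(F_s)

    def forward(self, h: torch.Tensor, sketch: torch.Tensor) -> torch.Tensor:
        f = self.encoder(sketch)
        f = F.interpolate(f, size=h.shape[-2:], mode="nearest")     # match feature resolution
        return self.to_gamma(f) * h + self.to_beta(f)               # FiLM modulation

# Reference branch: frozen CLIP image embedding -> 2-layer MLP -> cross-attention token c_r.
ref_mlp = nn.Sequential(nn.Linear(1024, 1024), nn.GELU(), nn.Linear(1024, 768))
# c_r = ref_mlp(clip_image_encoder(r))   # c_r replaces the text tokens in cross-attention
```

In practice one such module (or at least separate $\gamma$/$\beta$ heads) would be needed per UNet resolution; the single-module form above is kept for brevity.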
3. Conditioning Mechanisms and Sampling Schedule
Reference and sketch conditioning act independently and jointly within the UNet, enabling the fusion of high-level appearance cues and low-level structure control:
- Reference Embedding: Enables pixel-wise content transfer from $r$ to the masked region via CLIP-based cross-attention at every UNet block.
- Sketch Fusion: Enforces local, edge-level fidelity through FiLM modulation at all convolutional layers, ensuring the output respects user-defined structure.
- Mask Guidance: Guides the model to localize inpainting strictly to the user-specified mask.
- Sketch Plug-and-Drop: Optionally disables FiLM modulation after a specified timestep during DDPM sampling, which improves naturalness by partially relaxing rigid sketch constraints when the provided sketch is too coarse.
During sampling, the reverse denoising process is conditioned on the mask, sketch, and reference embedding:

$$p_\theta(z_{t-1} \mid z_t, m, s, c_r),$$

realized through the noise prediction $\epsilon_\theta(z_t, t, m, s, c_r)$. The input to the UNet is the channel-wise concatenation $[z_t;\, m;\, s]$, with $c_r$ supplied through cross-attention.
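A hedged DDPM-style sampling loop illustrating both the channel-wise concatenation above and the sketch plug-and-drop switch; `unet`, its `use_sketch_film` flag, and the cutoff direction (dropping FiLM for the final steps $t < \tau$) are assumptions for illustration, not the released interface.

```python
import torch

@torch.no_grad()
def sample_sff(unet, z_T, mask, sketch, c_ref, betas, tau: int = 0):
    """Reverse diffusion with optional sketch plug-and-drop (FiLM off once t < tau)."""
    alphas = 1.0 - betas
    alpha_bar = torch.cumprod(alphas, dim=0)
    z = z_T
    for t in reversed(range(len(betas))):
        x_in = torch.cat([z, mask, sketch], dim=1)        # mask/sketch at latent resolution
        eps = unet(x_in, t, context=c_ref, use_sketch_film=(t >= tau))
        # Standard DDPM mean; beta_t is used as a simplified posterior variance.
        mean = (z - (1.0 - alphas[t]) / (1.0 - alpha_bar[t]).sqrt() * eps) / alphas[t].sqrt()
        z = mean + (betas[t].sqrt() * torch.randn_like(z) if t > 0 else 0.0)
    return z
```

With the default `tau = 0`, FiLM stays active for the whole trajectory, i.e., plug-and-drop is disabled.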
4. Training Protocol and Hyperparameters
SFF is trained on the Danbooru cartoon subset, comprising 55,104 training and 13,775 test samples, where edge maps for sketches are generated using PiDiNet. Masks cover 5–30% of image area per sample, with both rectangular and free-form shapes.
Initialization uses Paint-by-Example weights based on Stable Diffusion. Training is performed for 40 epochs on 4 NVIDIA V100 GPUs (roughly 2 days). The model is optimized using AdamW with weight decay $0.01$ and a batch size of 4 (learning rate and image resolution as reported in Kim et al., 2023). The number of diffusion steps is set to 1,000 with a linear schedule.
The training objective is a noise prediction loss:

$$\mathcal{L} = \mathbb{E}_{z_0,\, \epsilon,\, t}\left[\, \lVert \epsilon - \epsilon_\theta(z_t, t, m, s, c_r) \rVert_2^2 \,\right],$$

with $\epsilon \sim \mathcal{N}(0, I)$ and $z_t$ given by the forward process above.
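A minimal training-step sketch of this objective, assuming the components from Sections 1–2 are available as callables (`vae`, `clip_ref_encoder`, `ref_mlp`, `unet`); the data layout and argument names are illustrative, not prescribed by the paper.

```python
import torch
import torch.nn.functional as F

def training_step(unet, vae, clip_ref_encoder, ref_mlp, batch, alpha_bar, optimizer):
    x, mask, sketch, ref = batch                           # source image, mask, sketch, exemplar
    with torch.no_grad():                                  # VAE and CLIP stay frozen
        z0 = vae.encode(x).latent_dist.sample() * 0.18215
        e_ref = clip_ref_encoder(ref)
    c_ref = ref_mlp(e_ref)                                 # trainable 2-layer MLP -> c_r
    t = torch.randint(0, alpha_bar.numel(), (z0.shape[0],), device=z0.device)
    eps = torch.randn_like(z0)
    ab = alpha_bar.to(z0.device)[t].view(-1, 1, 1, 1)
    z_t = ab.sqrt() * z0 + (1.0 - ab).sqrt() * eps         # forward process, closed form
    x_in = torch.cat([z_t, mask, sketch], dim=1)           # mask/sketch at latent resolution
    loss = F.mse_loss(unet(x_in, t, context=c_ref), eps)   # noise-prediction loss
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    return loss.item()
```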
5. Quantitative and Qualitative Evaluation
SFF demonstrates quantifiable improvements over single-modality and multimodal inpainting baselines. Ablations comparing reference-only (Paint-by-Example), text+sketch (Paint-by-T+S), and reference+sketch (SFF) show significant performance gains:
| Metric | Paint-by-E | Paint-by-T+S | SFF (Ref+Sketch) |
|---|---|---|---|
| L₁ error | 0.0866 | 0.0851 | 0.0680 |
| L₂ error | 0.0380 | 0.0313 | 0.0239 |
| FID | 6.314 | 6.314 | 5.716 |
Sketch guidance reduces pixel error by 20–30% and lowers FID by ~10%. LPIPS scores for SFF (0.15) also outperform reference-only baselines (0.20), indicating greater perceptual fidelity.
Qualitative highlights include:
- Precise edge placement (hair and clothing boundaries) dictated by the sketch.
- Consistency and transfer of patterns and colors from the reference image.
- Flexible editing: swapping sketches and reference exemplars enables the synthesis of arbitrary objects or scene modifications.
- Relaxed sampling (sketch plug-and-drop) yields visually plausible backgrounds even when sketches are coarse.
Use cases demonstrated include background scene extension in Webtoon panels, local object shape editing (hair, beard), and multi-reference object replacement (e.g., swapping shirt patterns).
6. Applications, Extensibility, and Implications
SFF supports a range of use cases for controllable image manipulation—especially in multimodal inpainting scenarios—without sacrificing edge or content fidelity. The fusion of sketch and reference conditioning enables practitioners to reproduce the approach, integrate novel sketch or exemplar encoders, and extend the framework to new tasks in composition and manipulation. A plausible implication is that structure-aware fusion modules like FiLM on sketch features may generalize to other domains requiring strict spatial control. The “plug-and-drop” mechanism offers adaptable structure enforcement, balancing rigidity and realism in composition.
Practitioners can adopt the SFF pipeline for tasks requiring fine-grained user-driven edits, compositional inpainting, or synthesis of new objects and backgrounds via interactive sketch and reference fusion (Kim et al., 2023).