Match-and-Fuse: Consistent Generation from Unstructured Image Sets

Weizmann Institute of Science

Given a source set of images depicting shared objects in varied settings (e.g., pose, environment, viewpoint), our method, Match-and-Fuse, jointly generates an output set in which consistency of the shared content is preserved. The output adheres to user-provided prompts describing the target shared content \(\mathcal{P}^{shared}\) and the scene's style/theme \(\mathcal{P}^{theme}\).

Abstract

We present Match-and-Fuse – a zero-shot, training-free method for consistent, controlled generation of unstructured image sets – collections that share a common visual element, yet differ in viewpoint, time of capture, and surrounding content. Unlike existing methods that operate on individual images or densely sampled videos, our framework performs set-to-set generation: given a source set and user prompts, it produces a new set that preserves cross-image consistency of shared content. Our key idea is to model the task as a graph, where each node corresponds to an image and each edge triggers a joint generation of an image pair. This formulation consolidates all pairwise generations into a unified framework, enforcing their local consistency while ensuring global coherence across the entire set. This is achieved by fusing internal features across image pairs, guided by dense input correspondences, without requiring masks or manual supervision. It also allows us to leverage an emergent prior in text-to-image models that encourages coherent generation when multiple views share a single canvas. Match-and-Fuse achieves state-of-the-art consistency and visual quality, and unlocks new capabilities for content creation from image collections.

Method

The input to our method is an unstructured set of \(N\) images along with user-provided prompts: \(\mathcal{P}^{shared}\) and \(\mathcal{P}^{theme}\), describing the target shared content and general style or theme, respectively. Our method outputs \(N\) images that preserve the source semantic layout while ensuring visual consistency across shared elements.

We build on a pre-trained, frozen, depth-conditioned T2I model. Although designed for single-image generation, these models have been shown to produce image grids when prompted with joint layouts (e.g., "Side-by-side views of..."), establishing cross-image relationships as demonstrated in recent work [1, 2]. However, this emergent capability, which we refer to as the grid prior, exhibits several key limitations: (i) it provides only partial consistency in appearance, shape, and semantics; (ii) the consistency deteriorates rapidly as more images are composed; and (iii) generating a single canvas is bounded by the model's native resolution, limiting scalability.
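To make the grid prior concrete, the snippet below prompts a depth-conditioned pipeline with a side-by-side layout built from two depth maps. This is a minimal sketch: the public ControlNet checkpoints and file names used here are stand-ins for illustration, not the depth-conditioned DiT our method actually builds on.

```python
# Illustration of the "grid prior": a depth-conditioned T2I model prompted with a
# joint side-by-side layout tends to produce two loosely consistent views.
# NOTE: the checkpoints below are illustrative stand-ins, not the model used in the paper.
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-depth", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

# Two depth maps of the source images, concatenated into a single canvas.
depth_a = Image.open("depth_a.png").resize((512, 512))
depth_b = Image.open("depth_b.png").resize((512, 512))
grid = Image.new("RGB", (1024, 512))
grid.paste(depth_a, (0, 0))
grid.paste(depth_b, (512, 0))

out = pipe(
    "Side-by-side views of the same ceramic teapot, watercolor style",
    image=grid, height=512, width=1024, num_inference_steps=30,
).images[0]
out.save("two_view_grid.png")
```

In practice, such a single-canvas generation yields only partial consistency and cannot scale beyond a few views, which motivates the pairwise formulation described next.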

Our method leverages the grid prior while overcoming its core limitations. Specifically, we model the image set as a Pairwise Consistency Graph, comprising all possible two-image grid generations. This allows us to exploit the strong inductive bias of the grid prior while eliminating its scalability limitation. To enhance visual consistency both within each image grid and across grids, we perform joint feature manipulation across all pairwise generations. To this end, we utilize dense 2D correspondences from the source set to automatically identify shared regions – without requiring object masks – and enforce fine-grained alignment. We find that feature-space similarity along these matches correlates strongly with visual coherence, motivating the use of Multiview Feature Fusion. We further refine details via Feature Guidance, using a feature-matching objective.
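The sketch below outlines one denoising step over the Pairwise Consistency Graph. It assumes a hypothetical `denoise_grid_step` that jointly denoises a two-image grid latent with the frozen DiT (with Multiview Feature Fusion applied inside selected blocks); function names, tensor shapes, and the width-wise concatenation are schematic choices, not the released implementation.

```python
# Schematic of one denoising step over the Pairwise Consistency Graph.
# Each edge (i, j) holds the latent of a two-image grid; after the step,
# the two halves are split and averaged back into per-image latents.
from itertools import combinations
import torch

def denoise_set_step(z, t, denoise_grid_step, prompts):
    """z: dict {i: latent of image i at step t}; returns per-image latents at step t-1."""
    n = len(z)
    accum = {i: [] for i in z}
    for i, j in combinations(range(n), 2):
        # Edge latent: the two per-image latents placed side by side on one canvas.
        z_ij = torch.cat([z[i], z[j]], dim=-1)           # concatenate along width
        z_ij = denoise_grid_step(z_ij, t, prompts[i], prompts[j])
        half = z_ij.shape[-1] // 2
        accum[i].append(z_ij[..., :half])                # half belonging to image i
        accum[j].append(z_ij[..., half:])                # half belonging to image j
    # Aggregate: average each image's latent over all of its adjacent edges.
    return {i: torch.stack(accum[i]).mean(dim=0) for i in z}
```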

Method pipeline (example with four images). During pre-processing, pairwise matches are computed between all inputs, and per-image prompts are generated from the set-level prompts. At each denoising step, the noisy image latents form a Pairwise Consistency Graph, whose edge latents \(z^{t}_{ij}\) are jointly denoised with Multiview Feature Fusion (MFF) and aggregated back into per-image latents \(z^{t-1}_i\) by averaging over adjacent edges. The latents are further refined with Feature Guidance via a feature-level matching objective.
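The caption above refers to Feature Guidance; a minimal sketch of one such guidance update is shown below. The `extract_features` callable, the match format, the squared-error loss, and the step size are all assumptions for illustration.

```python
# Schematic Feature Guidance step: nudge a pair of per-image latents so that
# intermediate features sampled at matched locations agree across the two images.
import torch

def feature_guidance_step(z_i, z_j, matches, extract_features, lr=0.1):
    z_i = z_i.detach().requires_grad_(True)
    z_j = z_j.detach().requires_grad_(True)
    f_i = extract_features(z_i)            # (num_tokens, dim) intermediate features
    f_j = extract_features(z_j)
    idx_i, idx_j = matches                 # indices of corresponding tokens
    # Feature-matching objective: pull matched features toward each other.
    loss = torch.nn.functional.mse_loss(f_i[idx_i], f_j[idx_j])
    loss.backward()
    with torch.no_grad():
        z_i -= lr * z_i.grad
        z_j -= lr * z_j.grad
    return z_i.detach(), z_j.detach()
```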
MFF denoising step. (a) The two-image grids on all edges are denoised with a frozen DiT. In selected blocks, the keys and values (K, V) of each image are averaged over its adjacent edges into \(\bar{\mathbf{f}}_i\), which are then fused via the source matches (b). All images are fused jointly; the arrows illustrate the fusion for \(i{=}1\).
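The sketch below illustrates this fusion inside a selected attention block: each image's K (or V) tokens are first averaged over its adjacent edges, then blended with its counterparts at matched token locations. The data layout, blending weight, and match representation are illustrative assumptions.

```python
# Schematic Multiview Feature Fusion inside a selected attention block.
# kv_per_edge[(i, j)]: (tokens of image i, tokens of image j) on edge (i, j).
# matches[(i, j)]: corresponding token indices between images i and j,
# derived from dense 2D correspondences on the source set.
import torch

def multiview_feature_fusion(kv_per_edge, matches, alpha=0.5):
    # 1) Average each image's tokens over all adjacent edges -> \bar{f}_i.
    per_image = {}
    for (i, j), (kv_i, kv_j) in kv_per_edge.items():
        per_image.setdefault(i, []).append(kv_i)
        per_image.setdefault(j, []).append(kv_j)
    f_bar = {i: torch.stack(ts).mean(dim=0) for i, ts in per_image.items()}

    # 2) Fuse at matched locations: blend each image's tokens with the
    #    corresponding tokens of its counterpart on every edge.
    fused = {i: f.clone() for i, f in f_bar.items()}
    for (i, j), (idx_i, idx_j) in matches.items():
        fused[i][idx_i] = (1 - alpha) * f_bar[i][idx_i] + alpha * f_bar[j][idx_j]
        fused[j][idx_j] = (1 - alpha) * f_bar[j][idx_j] + alpha * f_bar[i][idx_i]
    return fused
```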

Results

Match-and-Fuse generates consistent content for both rigid and non-rigid shared elements, single- and multi-subject scenes, and shared or varying backgrounds, preserving fine-grained consistency in textures, small details, and typography. Notably, it can generate long sequences consistently.

Sketch to Storyboard

Match-and-Fuse generalizes to sketched inputs, enabling controlled storyboard generation.

Localized Editing

Match-and-Fuse enables consistent, localized editing without \(\mathcal{P}^{theme}\), achieved through integration with FlowEdit [3].

Comparisons

Use the toggle to compare our results to the baselines.

References


[1] Lianghua Huang, Wei Wang, Zhi-Fan Wu, Yupeng Shi, Huanzhang Dou, Chen Liang, Yutong Feng, Yu Liu, and Jingren Zhou. In-context LoRA for diffusion transformers. 2024.
[2] Chaehun Shin, Jooyoung Choi, Heeseung Kim, and Sungroh Yoon. Large-scale text-to-image model with inpainting is a zero-shot subject-driven image generator. 2024.
[3] Vladimir Kulikov, Matan Kleiner, Inbar Huberman-Spiegelglas, and Tomer Michaeli. FlowEdit: Inversion-free text-based editing using pre-trained flow models. arXiv preprint arXiv:2412.08629, 2024.

BibTeX


@article{matchandfuse2025,
  title={Match-and-Fuse: Consistent Generation from Unstructured Image Sets},
  author={Feingold, Kate and Kaduri, Omri and Dekel, Tali},
  year={2025}
}