ImagerySearch: Adaptive Test-Time Search for Video Generation Beyond Semantic Dependency Constraints (2510.14847v1)

Published 16 Oct 2025 in cs.CV

Abstract: Video generation models have achieved remarkable progress, particularly excelling in realistic scenarios; however, their performance degrades notably in imaginative scenarios. These prompts often involve rarely co-occurring concepts with long-distance semantic relationships, falling outside training distributions. Existing methods typically apply test-time scaling for improving video quality, but their fixed search spaces and static reward designs limit adaptability to imaginative scenarios. To fill this gap, we propose ImagerySearch, a prompt-guided adaptive test-time search strategy that dynamically adjusts both the inference search space and reward function according to semantic relationships in the prompt. This enables more coherent and visually plausible videos in challenging imaginative settings. To evaluate progress in this direction, we introduce LDT-Bench, the first dedicated benchmark for long-distance semantic prompts, consisting of 2,839 diverse concept pairs and an automated protocol for assessing creative generation capabilities. Extensive experiments show that ImagerySearch consistently outperforms strong video generation baselines and existing test-time scaling approaches on LDT-Bench, and achieves competitive improvements on VBench, demonstrating its effectiveness across diverse prompt types. We will release LDT-Bench and code to facilitate future research on imaginative video generation.

Summary

  • The paper introduces an adaptive, semantic-aware test-time search strategy that dynamically adjusts candidate sampling and reward functions.
  • It significantly improves video quality and semantic alignment under imaginative, long-distance prompts, outperforming the baseline by 8.83% on LDT-Bench.
  • The study establishes LDT-Bench and ImageryQA for rigorous benchmarking, laying a foundation for future research in multimodal generative models.

Adaptive Test-Time Search for Imaginative Video Generation: The ImagerySearch Framework

Motivation and Problem Setting

Text-to-video (T2V) generative models have achieved high fidelity in realistic scenarios, but their performance degrades sharply when tasked with imaginative prompts involving rarely co-occurring concepts and long-distance semantic dependencies. This limitation is rooted in both the semantic dependency constraints of current models and the scarcity of imaginative training data. Existing test-time scaling (TTS) methods, such as Best-of-N, particle sampling, and beam search, use static search spaces and reward functions, which restrict their adaptability to open-ended, creative scenarios.

Figure 1: Illustration of semantic dependency scenarios; models struggle with long-distance semantics, but ImagerySearch generates coherent, context-aware motions.

ImagerySearch introduces a prompt-guided, adaptive test-time search strategy for video generation, comprising two principal components:

  1. Semantic-distance-aware Dynamic Search Space (SaDSS): The search space is modulated according to the semantic span of the prompt. Semantic distance is computed as the average embedding distance between key entities (objects and actions) in the prompt, using a text encoder (e.g., T5 or CLIP). The number of candidates sampled at each denoising step is dynamically adjusted:

N_t = N_{\text{base}} \cdot \left(1 + \lambda \cdot \bar{\mathcal{D}}_{\text{sem}}(\mathbf{p})\right)

where $N_{\text{base}}$ is the base sample count, $\lambda$ is a scaling factor, and $\bar{\mathcal{D}}_{\text{sem}}(\mathbf{p})$ is the average semantic distance of prompt $\mathbf{p}$.

  2. Adaptive Imagery Reward (AIR): The reward function incorporates semantic distance as a soft re-weighting factor, incentivizing outputs that better align with the intended semantics. The reward for each candidate video is:

R_{\text{AIR}}(\hat{\mathbf{x}}_0) = \left(\alpha \cdot \mathrm{MQ} + \beta \cdot \mathrm{TA} + \gamma \cdot \mathrm{VQ} + \omega \cdot R_{\text{any}}\right) \cdot \bar{\mathcal{D}}_{\text{sem}}(\hat{\mathbf{x}}_0)

where MQ, TA, and VQ are VideoAlign's motion-quality, text-alignment, and visual-quality scores, and $R_{\text{any}}$ is an extensible reward slot (e.g., VideoScore, VMBench).
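
To make the two components concrete, here is a minimal Python sketch that computes a prompt's semantic distance with a stand-in sentence-transformer encoder (the paper uses T5 or CLIP) and then applies the SaDSS and AIR formulas. The function names, encoder choice, and default weights are illustrative assumptions, not the authors' implementation; note also that the paper scores the semantic distance of the decoded video for AIR, whereas the prompt's distance stands in here.

```python
import itertools

import numpy as np
from sentence_transformers import SentenceTransformer  # stand-in; the paper uses T5/CLIP

encoder = SentenceTransformer("all-MiniLM-L6-v2")

def semantic_distance(entities):
    """Average pairwise cosine distance between key entities (objects and actions)."""
    embs = encoder.encode(entities, normalize_embeddings=True)
    pairs = itertools.combinations(range(len(embs)), 2)
    dists = [1.0 - float(np.dot(embs[i], embs[j])) for i, j in pairs]
    return float(np.mean(dists))

def candidate_count(d_sem, n_base=4, lam=2.0):
    """SaDSS: N_t = N_base * (1 + lambda * D_sem); defaults are hypothetical."""
    return int(round(n_base * (1.0 + lam * d_sem)))

def air_reward(mq, ta, vq, r_any, d_sem,
               alpha=1.0, beta=1.0, gamma=1.0, omega=1.0):
    """AIR: weighted sum of per-video metrics, re-weighted by semantic distance."""
    return (alpha * mq + beta * ta + gamma * vq + omega * r_any) * d_sem

# Example: a long-distance prompt pairing an unlikely object and action.
d = semantic_distance(["octopus", "ice skating"])
print(candidate_count(d))             # more candidates for semantically distant prompts
print(air_reward(0.7, 0.6, 0.8, 0.5, d))
```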

Figure 2: Overview of ImagerySearch; prompts are scored for semantic distance, guiding both candidate sampling and reward evaluation at key denoising steps.

The search is triggered at a limited set of denoising steps (the "Imagery Schedule"), focusing computational resources on pivotal stages where semantic correspondence is most efficiently captured.

Figure 3: Visualization of attention at successive denoising steps; only key steps exhibit pronounced changes, justifying the Imagery Schedule.
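
The gating logic itself can be sketched as a denoising loop that branches only at scheduled steps; `denoise_step`, the scoring callable, and the specific step indices below are placeholders rather than values from the paper.

```python
import random

def imagery_search(x_init, total_steps, schedule, n_t, denoise_step, score):
    """Branch into n_t candidates only at scheduled ("imagery") steps and keep
    the best-scoring one; elsewhere, denoise a single trajectory (sketch only)."""
    x = x_init
    for t in range(total_steps):
        if t in schedule:
            candidates = [denoise_step(x, t, stochastic=True) for _ in range(n_t)]
            x = max(candidates, key=score)    # score plays the role of R_AIR
        else:
            x = denoise_step(x, t, stochastic=False)
    return x

# Dummy stand-ins to show the control flow; a real system would call the
# diffusion model here and score decoded videos with the AIR reward.
dummy_denoise = lambda x, t, stochastic: x + (random.random() if stochastic else 0.5)
result = imagery_search(0.0, total_steps=50, schedule={0, 10, 25}, n_t=6,
                        denoise_step=dummy_denoise, score=lambda v: v)
```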

LDT-Bench: Benchmarking Long-Distance Semantic Prompts

To rigorously evaluate generative models under imaginative scenarios, the paper introduces LDT-Bench, a benchmark specifically designed for long-distance semantic prompts. LDT-Bench comprises 2,839 prompts constructed by maximizing semantic distance across object–action and action–action pairs, derived from large-scale recognition datasets (ImageNet-1K, COCO, ActivityNet, UCF101, Kinetics-600).
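
A plausible sketch of this pair-construction step, reusing the `semantic_distance` helper from the earlier sketch: score every object–action pairing and keep the most distant ones. The mini concept pools here are hypothetical; the real pools are drawn from the recognition datasets listed above.

```python
import itertools

def long_distance_pairs(objects, actions, top_k=3):
    """Rank object-action pairs by embedding distance; keep the most distant,
    i.e., most 'imaginative', pairs (illustrative guess at the procedure)."""
    scored = [(semantic_distance([o, a]), o, a)
              for o, a in itertools.product(objects, actions)]
    scored.sort(key=lambda s: s[0], reverse=True)
    return scored[:top_k]

# Hypothetical mini-pools; the real object/action pools come from ImageNet-1K,
# COCO, ActivityNet, UCF101, and Kinetics-600.
print(long_distance_pairs(["penguin", "cello"], ["surfing", "welding"]))
```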

Prompt generation leverages GPT-4o for fluency, with subsequent filtering by DeepSeek and human annotators. The benchmark includes an automated evaluation protocol, ImageryQA, which quantifies creative generation via:

  • ElementQA: Targeted questions about object and action presence.
  • AlignQA: Assessment of visual quality and aesthetics.
  • AnomalyQA: Detection of visual anomalies.
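
As a rough illustration, the protocol can be pictured as templated probes posed to a video question-answering model; the question wording below is an assumption, not the benchmark's actual templates.

```python
def imagery_qa_probes(obj: str, action: str) -> dict:
    """Assemble the three ImageryQA probe families for one concept pair.
    Question wording is hypothetical; the benchmark defines its own templates."""
    return {
        "ElementQA": [f"Does the video contain a {obj}?",
                      f"Is the action '{action}' performed?"],
        "AlignQA": ["Is the video visually high quality and aesthetically coherent?"],
        "AnomalyQA": ["Do any frames show visual anomalies such as distorted "
                      "or physically impossible objects?"],
    }

probes = imagery_qa_probes("penguin", "welding")
```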

Figure 4: Construction and analysis of LDT-Bench; prompts cover a wide variety of categories and exhibit a semantic-distance distribution shifted toward longer ranges.

Figure 5: Analysis of LDT-Bench prompt suite; distributions and word clouds highlight the diversity and semantic span of prompts.

Experimental Results and Analysis

Quantitative and Qualitative Performance

ImagerySearch is evaluated on both LDT-Bench and VBench, using Wan2.1 as the backbone. It consistently outperforms general models (Wan2.1, Hunyuan, CogVideoX, Open-Sora) and TTS baselines (Video-T1, EvoSearch) in terms of imaging quality and semantic alignment, especially under long-distance semantic prompts.

Figure 6: Qualitative comparison; ImagerySearch produces more vivid actions under long-distance semantic prompts than other methods.

Figure 7: (a) ImagerySearch maintains stable performance as semantic distance increases; (b-e) AIR delivers superior scaling behavior as inference-time computation increases.

ImagerySearch achieves an 8.83% improvement over the baseline on LDT-Bench and the highest average score on VBench, with pronounced gains in dynamic degree and subject consistency. Robustness analysis shows that ImagerySearch maintains nearly constant scores as semantic distance increases, while other methods exhibit greater variance and degradation.

Figure 8: Error analysis; ImagerySearch attains the highest mean with the tightest spread on VBench scores for long-distance semantic prompts.

Ablation and Module Analysis

Ablation studies confirm the complementary benefits of SaDSS and AIR. Dynamic search space adjustment yields higher scores than static configurations, and the ImagerySearch search strategy outperforms alternatives such as Best-of-N and particle sampling. Reward-weight analysis demonstrates that dynamic adjustment achieves optimal performance across varying weights.

Figure 9: Reward-weight analysis; MQ and VQ trends are stable, while TA varies, supporting dynamic adjustment for imaginative scenarios.

Qualitative Examples

Additional examples on LDT-Bench and VBench further illustrate ImagerySearch's capacity to generate coherent, contextually accurate videos for imaginative, long-distance prompts.

Figure 10: More examples on LDT-Bench; frame sampling demonstrates vivid and coherent video generation.

Figure 11: More examples on VBench (Part I); ImagerySearch maintains quality across diverse prompts.

Figure 12: More examples on VBench (Part II); consistent performance on complex action–action scenarios.

Implications and Future Directions

The adaptive test-time search paradigm introduced by ImagerySearch demonstrates that semantic-distance-aware modulation of both search space and reward functions is critical for advancing video generation beyond the constraints of training data. The strong numerical results on LDT-Bench and VBench indicate that dynamic adaptation at inference can substantially improve model robustness and creativity, even without additional training.

The release of LDT-Bench provides a standardized testbed for evaluating imaginative video generation, enabling more rigorous benchmarking and facilitating future research on open-ended generative tasks. The modularity of the semantic scorer and reward function allows for integration with alternative encoders and metrics, supporting extensibility.

Future work may explore more flexible and context-sensitive reward mechanisms, integration with reinforcement learning-based fine-tuning, and scaling to longer video sequences and more complex compositional prompts. The approach also suggests broader applicability to other generative modalities (e.g., text-to-image, multimodal synthesis) where semantic distance and compositionality are key challenges.

Conclusion

ImagerySearch establishes a robust framework for adaptive test-time search in video generation, effectively addressing the limitations of semantic dependency and data scarcity in imaginative scenarios. By dynamically adjusting the inference search space and reward function according to prompt semantics, ImagerySearch achieves state-of-the-art results on both LDT-Bench and VBench, with especially strong gains for long-distance semantic prompts. The methodology and benchmark set a new standard for evaluating and improving creative generative models, with significant implications for future research in open-ended video synthesis and multimodal generation.
