
VASCAR: Content-Aware Layout Generation via Visual-Aware Self-Correction

Published 5 Dec 2024 in cs.CV | (2412.04237v3)

Abstract: LLMs have proven effective for layout generation due to their ability to produce structure-description languages, such as HTML or JSON. In this paper, we argue that while LLMs can perform reasonably well in certain cases, their intrinsic limitation of not being able to perceive images restricts their effectiveness in tasks requiring visual content, e.g., content-aware layout generation. Therefore, we explore whether large vision-language models (LVLMs) can be applied to content-aware layout generation. To this end, inspired by the iterative revision and heuristic evaluation workflow of designers, we propose the training-free Visual-Aware Self-Correction LAyout GeneRation (VASCAR). VASCAR enables LVLMs (e.g., GPT-4o and Gemini) to iteratively refine their outputs with reference to rendered layout images, which are visualized as colored bounding boxes on the poster background (i.e., the canvas). Extensive experiments and a user study demonstrate VASCAR's effectiveness, achieving state-of-the-art (SOTA) layout generation quality. Furthermore, the generalizability of VASCAR across GPT-4o and Gemini demonstrates its versatility.
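The abstract's render-and-refine loop can be sketched minimally in Python. This is an illustrative assumption, not the paper's implementation: `query_lvlm` is a hypothetical stand-in for a real GPT-4o or Gemini call (here it merely clamps boxes inside the canvas to mimic one self-correction step), and `render_layout` returns a structured stand-in for the rasterized layout image.

```python
def render_layout(canvas_size, boxes):
    # Stand-in for rendering: a real run would rasterize colored bounding
    # boxes onto the poster canvas and pass the image to the LVLM.
    return {"canvas": canvas_size, "boxes": boxes}

def query_lvlm(prompt, rendered):
    # Hypothetical stub for an LVLM call (e.g., GPT-4o / Gemini).
    # Here it simply clamps each box inside the canvas, mimicking one
    # visual-aware correction; a real call would return a revised layout
    # proposed by the model after inspecting the rendered image.
    w, h = rendered["canvas"]
    fixed = []
    for x, y, bw, bh in rendered["boxes"]:
        x = min(max(x, 0), w - bw)
        y = min(max(y, 0), h - bh)
        fixed.append([x, y, bw, bh])
    return fixed

def vascar_loop(initial_boxes, canvas_size=(100, 150), n_iters=3):
    # Training-free iterative refinement: render the current layout,
    # show it to the LVLM, and replace the layout with its revision.
    boxes = initial_boxes
    for _ in range(n_iters):
        rendered = render_layout(canvas_size, boxes)
        boxes = query_lvlm("Refine this layout.", rendered)
    return boxes
```

For example, boxes that start partially off-canvas are pulled back inside after the loop runs, illustrating the kind of visually grounded fix the iteration is meant to produce.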
