VASCAR: Content-Aware Layout Generation via Visual-Aware Self-Correction
Abstract: LLMs have proven effective for layout generation due to their ability to produce structure-description languages, such as HTML or JSON. In this paper, we argue that while LLMs can perform reasonably well in certain cases, their intrinsic inability to perceive images restricts their effectiveness in tasks requiring visual content, e.g., content-aware layout generation. We therefore explore whether large vision-language models (LVLMs) can be applied to content-aware layout generation. To this end, inspired by the iterative revision and heuristic evaluation workflow of designers, we propose the training-free Visual-Aware Self-Correction LAyout GeneRation (VASCAR). VASCAR enables LVLMs (e.g., GPT-4o and Gemini) to iteratively refine their outputs with reference to rendered layout images, visualized as colored bounding boxes on a poster background (i.e., the canvas). Extensive experiments and a user study demonstrate VASCAR's effectiveness, achieving state-of-the-art (SOTA) layout generation quality. Furthermore, VASCAR's generalizability across GPT-4o and Gemini demonstrates its versatility.
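The render-then-refine loop described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: `render_layout` and `lvlm_refine` are hypothetical stand-ins for the actual rendering step and the LVLM API call (e.g., to GPT-4o or Gemini), and the mock "refinement" simply clips boxes to the canvas.

```python
# Hypothetical sketch of a visual-aware self-correction loop in the
# spirit of VASCAR. A layout is a list of (label, (x, y, w, h)) boxes
# placed on a poster canvas.

def render_layout(layout, canvas_size=(512, 768)):
    """Stand-in for rendering colored bounding boxes on the canvas.

    A real implementation would draw the boxes onto the poster
    background image; here we return a summary the mock LVLM inspects."""
    return {"canvas": canvas_size, "boxes": list(layout)}

def lvlm_refine(rendered, layout):
    """Hypothetical LVLM call: given the rendered layout, propose a revision.

    This mock clamps every box inside the canvas, mimicking one kind of
    fix an LVLM might suggest after seeing boxes overflow the poster."""
    cw, ch = rendered["canvas"]
    refined = []
    for label, (x, y, w, h) in layout:
        x = max(0, min(x, cw - w))
        y = max(0, min(y, ch - h))
        refined.append((label, (x, y, w, h)))
    return refined

def self_correction_loop(initial_layout, max_iters=3):
    """Iteratively render the layout and ask the (mock) LVLM to refine it,
    stopping early once the proposal no longer changes."""
    layout = initial_layout
    for _ in range(max_iters):
        rendered = render_layout(layout)
        refined = lvlm_refine(rendered, layout)
        if refined == layout:  # converged: LVLM proposes no further edits
            break
        layout = refined
    return layout
```

For example, a title box that starts partly off-canvas, `("title", (600, -20, 200, 50))` on a 512x768 canvas, is pulled back inside after one refinement round; a real LVLM would of course propose richer edits (alignment, spacing, salience-aware placement) than this clipping mock.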