
Text-Aware Image Restoration with Diffusion Models (2506.09993v1)

Published 11 Jun 2025 in cs.CV, cs.AI, and cs.LG

Abstract: Image restoration aims to recover degraded images. However, existing diffusion-based restoration methods, despite great success in natural image restoration, often struggle to faithfully reconstruct textual regions in degraded images. Those methods frequently generate plausible but incorrect text-like patterns, a phenomenon we refer to as text-image hallucination. In this paper, we introduce Text-Aware Image Restoration (TAIR), a novel restoration task that requires the simultaneous recovery of visual contents and textual fidelity. To tackle this task, we present SA-Text, a large-scale benchmark of 100K high-quality scene images densely annotated with diverse and complex text instances. Furthermore, we propose a multi-task diffusion framework, called TeReDiff, that integrates internal features from diffusion models into a text-spotting module, enabling both components to benefit from joint training. This allows for the extraction of rich text representations, which are utilized as prompts in subsequent denoising steps. Extensive experiments demonstrate that our approach consistently outperforms state-of-the-art restoration methods, achieving significant gains in text recognition accuracy. See our project page: https://cvlab-kaist.github.io/TAIR/

Text-Aware Image Restoration with Diffusion Models

The paper addresses a significant gap in image restoration: the faithful reconstruction of textual regions using diffusion models. Despite advances in generative modeling, accurate text reconstruction remains underexplored, largely because of limitations in existing datasets and methodologies. These shortcomings lead to text-image hallucination, where generative models produce plausible yet incorrect text-like patterns.

Core Contributions

  1. Introduction of TAIR: The authors propose a novel restoration task called Text-Aware Image Restoration (TAIR) that focuses on simultaneously enhancing both visual content and textual fidelity. This approach diverges from traditional image restoration methodologies that primarily improve general perceptual quality without specifically accounting for textual legibility.
  2. SA-Text Benchmark: The paper introduces a comprehensive benchmark, SA-Text, consisting of 100K high-quality scene images densely annotated with diverse and complex text instances. This dataset serves as a foundational resource for the TAIR task, enabling rigorous evaluation and further research in text-conditioned image restoration.
  3. Multi-task Diffusion Framework: A new model, TeReDiff, integrates diffusion U-Net features with a text-spotting module. This multi-task framework allows rich text representations to be used as prompts in subsequent denoising steps, effectively enhancing text recognition accuracy alongside visual restoration.

Methodology and Technical Insights

The methodology centers on coupling diffusion models with text-spotting modules so that internal diffusion features can accurately guide the restoration of text regions. Using a degradation removal module and a text-spotting transformer, the framework learns to detect and recover textual information embedded in complex visual environments.
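To make the joint design concrete, the following is a minimal sketch of a training step in this style, assuming a PyTorch-like setup; the module names, signatures, and loss weighting are illustrative assumptions, not the authors' implementation.

```python
import torch.nn as nn
import torch.nn.functional as F

# Illustrative sketch only: module names, signatures, and the loss
# weighting below are assumptions for exposition, not the paper's code.

class TeReDiffSketch(nn.Module):
    def __init__(self, degradation_remover, diffusion_unet, text_spotter,
                 spotting_weight=1.0):
        super().__init__()
        self.degradation_remover = degradation_remover  # cleans the LQ input
        self.diffusion_unet = diffusion_unet            # denoising backbone
        self.text_spotter = text_spotter                # reads U-Net features
        self.spotting_weight = spotting_weight

    def training_step(self, lq_image, noisy_latent, timestep, noise, text_targets):
        # 1) Condition the diffusion model on degradation-removed features.
        cond = self.degradation_remover(lq_image)

        # 2) Predict the noise and expose intermediate U-Net features.
        noise_pred, unet_feats = self.diffusion_unet(
            noisy_latent, timestep, cond, return_features=True)

        # 3) The text-spotting head consumes internal diffusion features,
        #    so both branches shape the shared representation.
        spotting_loss = self.text_spotter(unet_feats, text_targets)

        # 4) Joint multi-task objective: denoising + text spotting.
        diffusion_loss = F.mse_loss(noise_pred, noise)
        return diffusion_loss + self.spotting_weight * spotting_loss
```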

  • Dataset Curation:

The authors devised an automated pipeline for curating SA-Text, combining text detection and recognition with vision-language model (VLM) verification to ensure high-resolution images paired with reliable text annotations. This addresses critical issues prevalent in existing datasets, such as low resolution and annotation inaccuracies.
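A minimal sketch of what such a curation pipeline can look like is given below; `detect_text`, `recognize_text`, and `vlm_verify` are hypothetical placeholders for the detector, recognizer, and VLM check, and the thresholds are illustrative, not the paper's settings.

```python
# Sketch of an automated curation pipeline of this kind. The injected
# callables and thresholds are illustrative assumptions.

def curate(images, detect_text, recognize_text, vlm_verify,
           min_side=512, min_confidence=0.9):
    dataset = []
    for image in images:
        # Keep only sufficiently high-resolution images (PIL-style .size).
        if min(image.size) < min_side:
            continue

        # Detect candidate text regions and read them.
        annotations = []
        for box in detect_text(image):
            text, confidence = recognize_text(image, box)
            if confidence < min_confidence:
                continue
            # Cross-check the recognized string with a VLM to filter
            # out unreliable annotations.
            if vlm_verify(image, box, text):
                annotations.append((box, text))

        # Discard images that end up with no reliable text instances.
        if annotations:
            dataset.append((image, annotations))
    return dataset
```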

  • Textual Prompting:

The TeReDiff model employs recognized text as prompts during the denoising steps, significantly improving both text detection and recognition performance compared to contemporary diffusion-based methods.
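The feedback loop between spotting and denoising can be sketched as follows; the helper names (`denoise_step`, `spot_text`, `encode_prompt`) are assumptions for exposition and do not reflect the actual API.

```python
# Sketch of prompt-conditioned sampling: text read from the current
# estimate is fed back as the prompt for the next denoising step.
# All helper names here are illustrative assumptions.

def restore_with_text_prompts(latent, lq_cond, timesteps,
                              denoise_step, spot_text, encode_prompt):
    prompt = encode_prompt("")  # start from an empty / generic prompt
    for t in timesteps:
        # One denoising step, conditioned on the degraded input and on
        # whatever text has been recognized so far.
        latent, unet_feats = denoise_step(latent, t, lq_cond, prompt)

        # Read text from the internal features and refresh the prompt
        # used by the next step.
        recognized = spot_text(unet_feats)
        prompt = encode_prompt(" ".join(recognized))
    return latent
```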

Results and Implications

Extensive experiments demonstrate that TeReDiff consistently surpasses existing state-of-the-art image restoration methods in terms of both text recognition accuracy and image quality metrics across multiple degradation levels. The model shows notable efficacy in reducing the rate of incorrect text patterns and maintaining the integrity of text in challenging conditions.
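For context, evaluations of this kind typically pair a text-fidelity score, such as word-level recognition accuracy, with standard image quality metrics such as PSNR; the sketch below shows both in simplified form (the paper's exact protocol may differ).

```python
import math

# Simplified metric sketches; real text-spotting evaluation matches
# predictions to ground truth by box overlap rather than by position.

def word_accuracy(predicted_words, ground_truth_words):
    matched = sum(1 for p, g in zip(predicted_words, ground_truth_words) if p == g)
    return matched / max(len(ground_truth_words), 1)

def psnr(restored, reference, max_val=255.0):
    # Mean squared error over flattened pixel sequences.
    mse = sum((r - g) ** 2 for r, g in zip(restored, reference)) / len(reference)
    return float("inf") if mse == 0 else 10 * math.log10(max_val ** 2 / mse)
```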

These results imply significant practical applications, particularly in document digitization, street-sign recognition, and augmented-reality navigation. The introduced framework sets a new direction for image restoration research, emphasizing the integration of semantic textual cues into visual reconstruction processes.

Future Prospects

The paper suggests potential avenues for future research, such as further refining the text-spotting capabilities of diffusion models, expanding TAIR to even more diverse datasets, and exploring advanced prompting techniques. With ongoing advances in AI and computer vision, the effective restoration of textual content alongside visual quality could radically improve the automated interpretation of complex imagery.

This work contributes a vital perspective to the ongoing evolution of image restoration, particularly the accurate and legible reconstruction of text, paving the way for tighter integration of textual cues into visual reconstruction tasks.

Authors (9)
  1. Jaewon Min (6 papers)
  2. Jin Hyeon Kim (2 papers)
  3. Paul Hyunbin Cho (1 paper)
  4. Jaeeun Lee (1 paper)
  5. Jihye Park (10 papers)
  6. Minkyu Park (6 papers)
  7. Sangpil Kim (34 papers)
  8. Hyunhee Park (11 papers)
  9. Seungryong Kim (103 papers)