
Emerging from Water: Underwater Image Color Correction Based on Weakly Supervised Color Transfer (1710.07084v3)

Published 19 Oct 2017 in cs.CV

Abstract: Underwater vision suffers from severe degradation due to selective attenuation and scattering when light propagates through water. Such degradation not only reduces the quality of underwater images but also limits the performance of vision tasks. Unlike existing methods, which either ignore the wavelength dependency of the attenuation or assume a specific spectral profile, we tackle the color distortion problem of underwater images from a new perspective. In this letter, we propose a weakly supervised color transfer method to correct color distortion, which relaxes the need for paired underwater images during training and allows the method to handle underwater images taken in unknown locations. Inspired by Cycle-Consistent Adversarial Networks, we design a multi-term loss function including adversarial loss, cycle consistency loss, and SSIM (Structural Similarity Index Measure) loss, which keeps the content and structure of the corrected result the same as the input while making the color appear as if the image had been taken without the water. Experiments on underwater images captured under diverse scenes show that our method produces visually pleasing results and even outperforms state-of-the-art methods. Moreover, our method can improve the performance of vision tasks.

Citations (399)

Summary

  • The paper introduces a weakly supervised CycleGAN framework for translating underwater images to air-like images without paired datasets.
  • It combines adversarial, cycle consistency, and SSIM losses to preserve image structure while correcting color distortions.
  • Experimental results show superior visual quality and improved performance on tasks like saliency detection compared to state-of-the-art methods.

Underwater Image Color Correction Using Weakly Supervised Color Transfer

The paper "Emerging from Water: Underwater Image Color Correction Based on Weakly Supervised Color Transfer" by Chongyi Li, Jichang Guo, and Chunle Guo presents a novel approach to addressing the issue of color distortion in underwater images. Underwater imaging is challenging due to problems such as selective attenuation, scattering, and wavelength-dependent light absorption, which significantly degrade image quality and impede computer vision tasks.

The authors propose a weakly supervised learning model to perform color correction in underwater images. The model relies on Cycle-Consistent Adversarial Networks (CycleGANs) and avoids the need for paired underwater-air image datasets during training. Instead, the model learns a cross-domain mapping function between underwater images (the source domain) and air images (the target domain), preserving the content and structure of the input while refining their color appearance.

Methodology

The approach builds on recent advances in image-to-image translation networks and integrates adversarial loss, cycle consistency loss, and Structural Similarity Index Measure (SSIM) loss into a multi-term loss function. These components work synergistically to ensure that the translated image maintains the structural integrity of the input while appearing as if it were captured outside the aquatic environment. Critically, the model does not require explicit paired labels, giving it the flexibility to handle images taken at unknown underwater locations.
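The arithmetic of these loss terms can be sketched as follows. This is an illustrative simplification, not the authors' implementation: it computes a single global SSIM over a flat grayscale patch rather than the sliding windowed SSIM used in practice, and the weighting constants `lam_cyc` and `lam_ssim` are hypothetical placeholders.

```python
# Sketch of the SSIM term and the multi-term loss combination.
# Assumes pixel values in [0, 1]; C1 and C2 are the standard SSIM
# stability constants for that range.
C1 = 0.01 ** 2
C2 = 0.03 ** 2

def ssim(x, y):
    """Global (single-window) SSIM between two equal-length grayscale patches."""
    n = len(x)
    mu_x = sum(x) / n
    mu_y = sum(y) / n
    var_x = sum((v - mu_x) ** 2 for v in x) / n
    var_y = sum((v - mu_y) ** 2 for v in y) / n
    cov = sum((a - mu_x) * (b - mu_y) for a, b in zip(x, y)) / n
    return ((2 * mu_x * mu_y + C1) * (2 * cov + C2)) / (
        (mu_x ** 2 + mu_y ** 2 + C1) * (var_x + var_y + C2)
    )

def ssim_loss(x, y):
    """Structure-preservation term: 1 - SSIM, so identical patches cost 0."""
    return 1.0 - ssim(x, y)

def total_loss(adv, cyc, ssim_term, lam_cyc=10.0, lam_ssim=1.0):
    """Weighted sum of the three terms; the weights here are illustrative only."""
    return adv + lam_cyc * cyc + lam_ssim * ssim_term
```

Because SSIM of a patch with itself is exactly 1, `ssim_loss(x, x)` is 0, which is what lets the SSIM term penalize structural drift without constraining color.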

The network's architecture includes two generative mapping functions, G: X → Y and F: Y → X, which translate between the domains of underwater and air images. Additionally, two discriminators, D_X and D_Y, differentiate between real and generated images in their respective domains. The adversarial objectives are optimized with a least-squares loss, which stabilizes training.
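The roles of the least-squares adversarial objective and the cycle constraint can be illustrated with the loss arithmetic alone, treating the discriminator scores and reconstructions as plain lists of numbers. This is a minimal sketch under stated assumptions, not the authors' network; the function names are hypothetical.

```python
def lsgan_d_loss(d_real, d_fake):
    """Least-squares discriminator loss: push scores on real samples
    toward 1 and scores on generated samples toward 0."""
    n = len(d_real)
    return (sum((s - 1.0) ** 2 for s in d_real)
            + sum(s ** 2 for s in d_fake)) / n

def lsgan_g_loss(d_fake):
    """Least-squares generator loss: push the discriminator's scores
    on generated samples toward 1 (i.e., fool it)."""
    return sum((s - 1.0) ** 2 for s in d_fake) / len(d_fake)

def cycle_loss(x, fgx, y, gfy):
    """L1 cycle consistency: F(G(x)) should recover x (underwater -> air
    -> underwater), and G(F(y)) should recover y in the other direction."""
    fwd = sum(abs(a - b) for a, b in zip(x, fgx)) / len(x)
    bwd = sum(abs(a - b) for a, b in zip(y, gfy)) / len(y)
    return fwd + bwd
```

A perfect discriminator (scoring 1 on real and 0 on fake) drives `lsgan_d_loss` to 0, a perfectly fooled one drives `lsgan_g_loss` to 0, and exact reconstructions drive `cycle_loss` to 0; training pushes the generators and discriminators toward these competing optima.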

Experiments and Results

The proposed model was tested against several state-of-the-art methods, including CycleGAN, Gray World, image enhancement techniques, and traditional underwater image restoration approaches. The experiments demonstrated that the presented model achieves visually superior results by effectively removing the bluish and greenish tones typical of underwater images, outperforming other methods.

The authors conducted a user study to evaluate the model's perceptual quality, wherein trained evaluators assessed the visual fidelity of the corrected images. The results confirmed the subjective superiority of the proposed method over existing alternatives. Furthermore, applications in saliency detection and keypoint matching were tested, revealing improved performance on vision tasks after color correction.

Implications and Future Directions

The proposed approach is significant as it offers a robust framework for underwater image enhancement without the need for comprehensive and annotated datasets, presenting a practical solution in real-world scenarios where data collection is constrained. The work could inform the design of more sophisticated underwater imaging systems, enhancing capabilities in areas such as marine biology, underwater robotics, and autonomous navigation.

Moreover, this research opens new avenues for further exploration of weakly supervised learning techniques in other image processing domains, particularly in conditions where data pairing is impractical. Future developments might see extensions into real-time implementations on onboard systems for autonomous underwater vehicles, where real-time data processing correlates with mission success.

In summary, by advancing the weak supervision paradigm and integrating multi-term loss constructs, the paper introduces an effective solution to the ubiquitous problem of underwater image degradation, setting a foundation for future research and practical applications in underwater computer vision.