
Breaking of brightness consistency in optical flow with a lightweight CNN network (2310.15655v2)

Published 24 Oct 2023 in cs.CV

Abstract: Sparse optical flow is widely used in various computer vision tasks; however, assuming brightness consistency limits its performance in High Dynamic Range (HDR) environments. In this work, a lightweight network is used to extract illumination-robust convolutional features and corners with strong invariance. Modifying the typical brightness consistency of the optical flow method to convolutional feature consistency yields a light-robust hybrid optical flow method. The proposed network runs at 190 FPS on a commercial CPU because it uses only four convolutional layers to extract feature maps and score maps simultaneously. Since the shallow network is difficult to train directly, a deep network is designed to compute a reliability map that helps train it. An end-to-end unsupervised training mode is used for both networks. To validate the proposed method, we compare corner repeatability and matching performance with the original optical flow method under dynamic illumination. In addition, a more accurate visual-inertial system is constructed by replacing the optical flow method in VINS-Mono. On a public HDR dataset, it reduces translation errors by 93%. The code is publicly available at https://github.com/linyicheng1/LET-NET.

Citations (2)

Summary

  • The paper introduces a hybrid CNN method that breaks traditional brightness consistency assumptions for robust HDR optical flow estimation.
  • It employs a lightweight four-layer network with a shared encoder-decoder design, achieving up to 190 FPS on a commercial CPU.
  • The approach reduces translation errors by 93% on public HDR datasets and improves corner tracking under dynamic lighting conditions.

Breaking of Brightness Consistency in Optical Flow with a Lightweight CNN Network

The paper introduces a novel approach to optical flow estimation, aiming to improve performance in High Dynamic Range (HDR) conditions by challenging traditional brightness consistency assumptions. Recognizing the limitations of current optical flow methods, especially under dynamic lighting conditions, the authors propose a hybrid approach by leveraging convolutional neural networks (CNNs) to enhance illumination robustness in optical flow computation.

Methodological Innovations

  1. Hybrid Optical Flow Method: The proposed method deviates from traditional brightness consistency assumptions, instead employing a hybrid approach that combines convolutional feature consistency with traditional techniques. By utilizing a lightweight CNN with only four convolutional layers, the model extracts illumination-invariant features and score maps from images. This adaptation allows the method to achieve stable corner tracking even under dynamic lighting conditions.
  2. Network Design: The architecture consists of a shared encoder that converts the input image into feature maps, followed by a decoder that outputs a score map for corner extraction and a feature map for pyramidal optical flow tracking. This design focuses on efficiency, achieving 190 frames per second on a commercial CPU.
  3. Training Strategy: Since shallow networks can be challenging to train, the authors propose a strategy in which a deep network computes a reliability map that aids in training the shallow network. Training follows an unsupervised end-to-end approach that extracts local invariant features and refines corner detection through loss functions such as the mask Neural Reprojection Error (mNRE) and a newly introduced peak loss.
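The idea behind item 1 can be sketched numerically: the standard Lucas-Kanade update is kept, but the scalar brightness residual is replaced by a residual over CNN feature channels, so consistency is enforced on illumination-robust features rather than raw intensity. The sketch below is a minimal single-iteration, single-level illustration in NumPy; the function name, window size, and signature are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def lk_step_features(F0, F1, x, y, win=7):
    """One Lucas-Kanade step using feature-channel consistency
    instead of raw brightness. F0, F1: (H, W, C) feature maps.
    Returns an estimated displacement (dx, dy) for the point (x, y)."""
    r = win // 2
    P0 = F0[y - r:y + r + 1, x - r:x + r + 1]   # reference patch (win, win, C)
    P1 = F1[y - r:y + r + 1, x - r:x + r + 1]   # target patch at the same location
    # Spatial gradients of the reference patch, computed per channel.
    gy, gx = np.gradient(P0, axis=(0, 1))
    gx, gy = gx.ravel(), gy.ravel()
    # Feature residual replaces the brightness residual of classic LK.
    e = (P1 - P0).ravel()
    # Normal equations of the linearized objective sum ||F1(p+d) - F0(p)||^2.
    A = np.array([[gx @ gx, gx @ gy],
                  [gx @ gy, gy @ gy]])
    b = -np.array([gx @ e, gy @ e])
    dx, dy = np.linalg.solve(A, b)
    return dx, dy
```

On a synthetic feature map shifted by one pixel, a single step of this update recovers the displacement to within a few percent, which is the same behavior one iterates inside a pyramid for larger motions.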
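The network design in item 2 might be pictured as a small backbone whose final convolution emits feature channels and a corner-score channel together, which is how "feature maps and score maps simultaneously" can come from only four layers. The exact LET-NET architecture is not reproduced here; the NumPy sketch below only assumes four valid-padding convolutions, with the last layer split into a feature map and a sigmoid score map (all layer counts, channel widths, and kernel sizes are assumptions for illustration).

```python
import numpy as np

def conv2d(x, w, relu=True):
    """Minimal valid-padding 2D convolution: x is (H, W, Cin),
    w is (k, k, Cin, Cout). Loops kept explicit for clarity."""
    k = w.shape[0]
    H, W = x.shape[:2]
    out = np.zeros((H - k + 1, W - k + 1, w.shape[3]))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.tensordot(x[i:i + k, j:j + k], w, axes=3)
    return np.maximum(out, 0.0) if relu else out

def let_net_sketch(img, ws):
    """Hypothetical four-conv sketch of a shared backbone with two
    outputs: a multi-channel feature map and a per-pixel score map."""
    h = img
    for w in ws[:3]:                     # three ReLU convs (assumed split)
        h = conv2d(h, w)
    out = conv2d(h, ws[3], relu=False)   # last conv emits features + score jointly
    feat = out[..., :-1]                 # feature map for LK-style tracking
    score = 1.0 / (1.0 + np.exp(-out[..., -1]))  # sigmoid corner-score map
    return feat, score
```

Sharing all four convolutions between the two heads is what keeps the parameter count, and hence the CPU runtime, small enough for the reported 190 FPS.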
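For item 3, one plausible shape of a reliability-weighted objective is a feature reprojection error weighted per pixel by the reliability map from the deep helper network; to be clear, this is only a hedged sketch of the weighting idea, not the paper's actual mNRE or peak loss, both of which are more involved.

```python
import numpy as np

def weighted_reprojection_loss(F0, F1w, R):
    """Hypothetical reliability-weighted feature reprojection error.
    F0:  (H, W, C) reference feature map from the shallow network.
    F1w: (H, W, C) feature map of the second image, warped back by the
         current flow estimate (warping omitted here for brevity).
    R:   (H, W) reliability map from the deep helper network."""
    resid = np.linalg.norm(F1w - F0, axis=-1)          # per-pixel feature distance
    return float(np.sum(R * resid) / (np.sum(R) + 1e-8))  # reliability-weighted mean
```

The weighting lets the (easier-to-train) deep network down-weight unreliable regions, so the shallow network's gradients come mostly from pixels where the reprojection signal is trustworthy.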

Experimental Results and Discussion

The paper reports significant improvements in optical flow estimation in HDR scenarios, validated through a series of experiments. On a public HDR dataset, the method demonstrates a 93% reduction in translation error when integrated into a visual-inertial navigation system (VINS-Mono). Furthermore, its corner repeatability and matching performance outperform traditional optical flow methods under varying illumination conditions. The method also maintains real-time performance, demonstrating its efficiency and applicability in practical scenarios.

Implications and Future Work

The research highlights important implications for improving optical flow estimation in dynamic lighting environments. By breaking away from brightness consistency assumptions, the proposed method paves the way for more robust vision systems in real-time applications like autonomous navigation and augmented reality in HDR settings.

Future work may explore further refinements in network architecture to enhance performance while maintaining computational efficiency. Additionally, expanding the training strategy to incorporate diverse environmental conditions could further solidify the method's versatility. The integration of this approach with other state-of-the-art vision systems may yield new insights into developing more sophisticated and resilient optical flow estimation techniques.

Overall, the paper contributes a significant advancement in the field of computer vision by introducing a novel hybrid approach, serving as a foundation for ongoing research in robust optical flow estimation.
