DCVNet: Dilated Cost Volume Networks for Fast Optical Flow (2103.17271v2)

Published 31 Mar 2021 in cs.CV

Abstract: The cost volume, which captures the similarity of possible correspondences across two input images, is a key ingredient in state-of-the-art optical flow approaches. When sampling correspondences to build the cost volume, a large neighborhood radius is required to handle large displacements, introducing a significant computational burden. To address this, coarse-to-fine or recurrent processing of the cost volume is usually adopted, so that correspondence sampling in a local neighborhood with a small radius suffices. In this paper, we propose an alternative: constructing cost volumes with different dilation factors to capture small and large displacements simultaneously. A U-Net with skip connections is employed to convert the dilated cost volumes into interpolation weights over all captured displacements to obtain the optical flow. Our proposed model, DCVNet, processes the cost volume only once in a simple feedforward manner and does not rely on sequential processing strategies. DCVNet obtains accuracy comparable to existing approaches and achieves real-time inference (30 fps on a mid-range GTX 1080 Ti GPU). The code and model weights are available at https://github.com/neu-vi/ezflow.
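As a rough illustration of the idea described in the abstract, the sketch below shows one way a dilated cost volume could be built in PyTorch. It is a minimal sketch, not the authors' implementation (which is available at the linked repository); the function name, the chosen radius, and the dilation factors are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def dilated_cost_volume(feat1, feat2, radius=4, dilation=1):
    """Correlate feat1 against feat2 over a (2*radius+1)^2 neighborhood
    whose displacement offsets are scaled by `dilation`.

    feat1, feat2: (B, C, H, W) feature maps from a shared encoder.
    Returns: a cost volume of shape (B, (2*radius+1)**2, H, W).
    """
    b, c, h, w = feat1.shape
    pad = radius * dilation
    # Zero-pad feat2 so every dilated shift stays within bounds.
    feat2_pad = F.pad(feat2, (pad, pad, pad, pad))
    costs = []
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            oy, ox = pad + dy * dilation, pad + dx * dilation
            shifted = feat2_pad[:, :, oy:oy + h, ox:ox + w]
            # Normalized dot-product similarity at every pixel for this displacement.
            costs.append((feat1 * shifted).sum(dim=1, keepdim=True) / c ** 0.5)
    return torch.cat(costs, dim=1)

# Cost volumes built with several dilation factors cover small and large
# displacements at once; concatenated, they form the input to a U-Net-style
# decoder that predicts interpolation weights over the sampled displacements.
feat1 = torch.randn(1, 128, 48, 64)
feat2 = torch.randn(1, 128, 48, 64)
volumes = [dilated_cost_volume(feat1, feat2, radius=4, dilation=d) for d in (1, 2, 4, 8)]
cost = torch.cat(volumes, dim=1)  # shape: (1, 4 * 81, 48, 64)
```

Because each dilated volume keeps the same small sampling radius, the combined representation covers large displacements without the quadratic cost of enlarging the neighborhood, which is what allows a single feedforward pass instead of coarse-to-fine or recurrent refinement.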
