Riesz networks: scale invariant neural networks in a single forward pass (2305.04665v2)

Published 8 May 2023 in cs.CV and eess.IV

Abstract: Scale invariance of an algorithm refers to its ability to treat objects equally, independently of their size. For neural networks, scale invariance is typically achieved by data augmentation. However, when presented with a scale far outside the range covered by the training set, neural networks may fail to generalize. Here, we introduce the Riesz network, a novel scale-invariant neural network. Instead of standard 2D or 3D convolutions for combining spatial information, the Riesz network is based on the Riesz transform, which is a scale-equivariant operation. As a consequence, this network naturally generalizes to unseen or even arbitrary scales in a single forward pass. As an application example, we consider detecting and segmenting cracks in tomographic images of concrete. In this context, 'scale' refers to the crack thickness, which may vary strongly even within the same sample. To prove its scale invariance, the Riesz network is trained on one fixed crack width. We then validate its performance in segmenting simulated and real tomographic images featuring a wide range of crack widths. An additional experiment is carried out on the MNIST Large Scale data set.
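
The property the abstract relies on is that the Riesz transform commutes with dilations: its Fourier multiplier -i ξ_j/|ξ| depends only on the direction of the frequency ξ, not on its magnitude, so it carries no intrinsic length scale. The sketch below is a minimal NumPy illustration of the 2D Riesz transform computed via the FFT; the function name and normalization conventions are our own assumptions for illustration, not code from the paper.

```python
import numpy as np

def riesz_transform_2d(f):
    """Both components of the 2D Riesz transform of an image f,
    computed in the Fourier domain. The multiplier -i*xi_j/|xi| is
    homogeneous of degree 0, so the operation commutes with rescaling;
    this is the scale equivariance the Riesz network builds on.
    (Illustrative sketch, not the authors' implementation.)
    """
    h, w = f.shape
    fy = np.fft.fftfreq(h)[:, None]   # vertical frequency grid, shape (h, 1)
    fx = np.fft.fftfreq(w)[None, :]   # horizontal frequency grid, shape (1, w)
    norm = np.sqrt(fx**2 + fy**2)
    norm[0, 0] = 1.0                  # avoid 0/0 at DC; the multiplier there is 0 anyway
    F = np.fft.fft2(f)
    r1 = np.real(np.fft.ifft2(-1j * fx / norm * F))  # first Riesz component
    r2 = np.real(np.fft.ifft2(-1j * fy / norm * F))  # second Riesz component
    return r1, r2

# Toy usage: respond to a square "object". Doubling the square's size
# rescales r1 and r2 identically (up to discretization effects).
img = np.zeros((64, 64))
img[24:40, 24:40] = 1.0
r1, r2 = riesz_transform_2d(img)
```

Because the frequency multiplier is scale-free, rescaling the input rescales the responses identically; this is the property that lets a network built on such operations, trained at one crack width, generalize to unseen widths in a single forward pass.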
