Fast algorithm for Multiple-Circle detection on images using Learning Automata (1405.5531v1)

Published 21 May 2014 in cs.CV

Abstract: The Hough transform (HT) has been the most common method for circle detection; it is robust but demands a considerable computational load and large storage. Alternative heuristic approaches employ iterative optimization procedures to detect multiple circles, with the inconvenience that only one circle can be marked per optimization cycle, which lengthens execution time. Learning Automata (LA), on the other hand, is a heuristic method for solving complex multi-modal optimization problems. Although LA converges to a single global minimum, the final probability distribution holds valuable information about other local minima that emerged during the optimization process. The detection task is therefore posed as a multi-modal optimization problem, allowing multiple circular shapes to be detected in a single optimization procedure. The algorithm uses a combination of three edge points as parameters to determine candidate circles. A reinforcement signal indicates whether such candidate circles are actually present in the image. Guided by the values of this reinforcement signal, the set of encoded candidate circles is evolved by the LA so that the circles fit actual circular shapes over the edge-only map of the image. The overall approach is a fast multiple-circle detector, even under complicated conditions.
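The following Python sketch is not the authors' implementation; it only illustrates, under assumed names, the two building blocks the abstract mentions: fitting a candidate circle through three edge points and scoring it with a reinforcement signal, here taken as the fraction of sampled perimeter points that land on edge pixels of the edge-only map.

```python
import numpy as np

def circle_from_three_points(p1, p2, p3):
    """Return (cx, cy, r) of the circle through three non-collinear points."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    d = 2.0 * (x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2))
    if abs(d) < 1e-9:          # collinear points define no circle
        return None
    cx = ((x1**2 + y1**2) * (y2 - y3)
          + (x2**2 + y2**2) * (y3 - y1)
          + (x3**2 + y3**2) * (y1 - y2)) / d
    cy = ((x1**2 + y1**2) * (x3 - x2)
          + (x2**2 + y2**2) * (x1 - x3)
          + (x3**2 + y3**2) * (x2 - x1)) / d
    r = np.hypot(x1 - cx, y1 - cy)
    return cx, cy, r

def reinforcement_signal(edge_map, cx, cy, r, n_test=100):
    """Score a candidate circle: fraction of n_test perimeter samples that hit edge pixels."""
    h, w = edge_map.shape
    angles = np.linspace(0.0, 2.0 * np.pi, n_test, endpoint=False)
    xs = np.round(cx + r * np.cos(angles)).astype(int)
    ys = np.round(cy + r * np.sin(angles)).astype(int)
    inside = (xs >= 0) & (xs < w) & (ys >= 0) & (ys < h)
    hits = edge_map[ys[inside], xs[inside]] > 0
    return hits.sum() / float(n_test)
```

A Learning Automata search over triplets of edge-point indices would then reinforce the selection probabilities of triplets whose candidates score highly; per the abstract, local maxima of the final probability distribution supply the additional circles beyond the single global optimum.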

Authors (5)
  1. Erik Cuevas (25 papers)
  2. Fernando Wario (4 papers)
  3. Valentin Osuna (3 papers)
  4. Daniel Zaldivar (20 papers)
  5. Marco Perez (15 papers)
Citations (34)
