
A System View of the Recognition and Interpretation of Observed Human Shape, Pose and Action (1503.08223v1)

Published 27 Mar 2015 in cs.CV

Abstract: There is physiological evidence that our ability to interpret human pose and action from 2D visual imagery (binocular or monocular) engages the circuitry of the motor cortices as well as the visual areas of the brain. This implies that the capability of the motor cortices to solve inverse kinematics is flexible enough both to plan motion and to serve as a generative model for the visual processing of human figures, despite the differing functional requirements of the two tasks. This paper provides a computational model of the cooperation between visual and motor areas: in other words, a system view of an important class of brain computations. The model unifies the solution of the separate inverse problems involved in the task — visual transformation discovery, inverse kinematics, and adaptation to morphology variations — using several instances of the Map-Seeking Circuit algorithm. While the paper is weighted toward the exposition of a neurobiological hypothesis, from mathematical formalization of the problem to neuronal circuitry, the algorithmic expression of the solution is also a functional machine vision system for human figure recognition, and for 3D pose and body morphology reconstruction from monocular, perspective-less input imagery. With an inverse kinematic generative model capable of imposing a variety of endogenous and exogenous constraints, the machine vision implementation acquires characteristics currently unique among such systems.
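The Map-Seeking Circuit named in the abstract can be sketched in miniature. The following is an illustrative single-layer, single-memory example, not the paper's implementation: all names, the choice of cyclic 1-D shifts as the transformation set, and the exact culling rule are assumptions. One MSC layer searches a discrete set of candidate transformations for the one that maps the input onto a stored memory pattern, by forming a gated superposition of all candidates and iteratively culling the poorly matching ones.

```python
import numpy as np

def msc_layer(inp, memory, shifts, iters=50, k_cull=0.1):
    """Illustrative single-layer Map-Seeking Circuit (assumed update rule).

    inp     -- input signal, assumed to be a transformed copy of `memory`
    memory  -- stored pattern (the backward signal in this one-layer sketch)
    shifts  -- candidate transformations, here cyclic shifts of the input
    """
    g = np.ones(len(shifts))  # gating coefficient per candidate transform
    for _ in range(iters):
        # Match each transformed input against the backward (memory) signal.
        # In a multi-layer MSC the backward signal itself evolves, so the
        # match scores are recomputed each iteration.
        q = np.array([np.dot(np.roll(inp, s), memory) for s in shifts])
        # Cull: candidates that match worse than the current best lose gain;
        # the best candidate (q == q.max()) is never decremented.
        g = np.clip(g - k_cull * (1.0 - q / q.max()), 0.0, 1.0)
    return g

memory = np.arange(16.0)          # stored pattern
inp = np.roll(memory, 3)          # input = memory transformed by a shift of 3
shifts = list(range(16))
g = msc_layer(inp, memory, shifts)
print(shifts[int(np.argmax(g))])  # prints 13, the inverse of the shift by 3
```

The surviving gate identifies the transformation whose inverse generated the input (a shift of 13 undoes a cyclic shift of 3 on length-16 signals), which is the sense in which the circuit solves an inverse problem by generative search rather than by direct inversion.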

Authors (1)
  1. David W. Arathorn
Citations (1)
