A computational model for dynamic vision

This paper describes a novel computational model for dynamic vision that promises to be both powerful and robust. Furthermore, the paradigm is ideal for an active vision system in which camera vergence changes dynamically. Its basis is a retinotopically indexed, object-centered encoding of early visual information. Specifically, the relative distances of objects to a set of referents are encoded in image-registered maps. To illustrate the efficacy of the method, it is applied to the problem of dynamic stereo vision. Integrating depth information over multiple frames obtained by a moving robot generally requires precise information about the relative camera position from frame to frame. Usually, this information can only be approximated. The method facilitates the integration of depth information without direct use or knowledge of camera motion.
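The key property of the encoding can be illustrated with a minimal toy sketch: if each pixel of an image-registered depth map stores its depth *relative to a referent point* rather than its absolute depth, the map is unchanged by camera translation along the optical axis, so maps from different frames can be fused without estimating that motion. All names here (`relative_map`, the array sizes, the offsets) are invented for illustration, and the sketch assumes pure depth-axis translation with fixed image registration — far simpler than the paper's full model.

```python
import numpy as np

rng = np.random.default_rng(0)

# True scene depths on an image-registered 4x4 grid.
scene = rng.uniform(2.0, 10.0, size=(4, 4))

# Referent: a tracked scene point whose depth anchors the encoding.
ref = (0, 0)

def relative_map(depth, referent):
    """Encode each pixel's depth relative to the referent's depth."""
    return depth - depth[referent]

# Two frames from a camera that translated an unknown amount along
# the optical axis: absolute depths shift, relative depths do not.
frame1 = scene + 1.3   # unknown camera offset in frame 1
frame2 = scene + 0.4   # different unknown offset in frame 2

r1 = relative_map(frame1, ref)
r2 = relative_map(frame2, ref)

# The two relative maps agree, so integration across frames needs
# no camera-motion estimate — here, a simple average.
fused = (r1 + r2) / 2.0
```

In this idealized setting `r1` and `r2` are identical, so any per-pixel fusion rule (averaging, robust voting, Kalman-style updating) operates directly on the maps; the paper's contribution is making this kind of integration work for real dynamic stereo.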
Document ID
19910036504
Acquisition Source
Legacy CDMS
Document Type
Conference Paper
Authors
Moezzi, Saied (University of Michigan, Ann Arbor, MI, United States)
Weymouth, Terry E. (University of Michigan, Ann Arbor, MI, United States)