Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Oleg Naroditsky is active.

Publication


Featured research published by Oleg Naroditsky.


Journal of Field Robotics | 2006

Visual odometry for ground vehicle applications

David Nistér; Oleg Naroditsky; James R. Bergen

We present a system that estimates the motion of a stereo head, or a single moving camera, based on video input. The system operates in real time with low delay, and the motion estimates are used for navigational purposes. The front end of the system is a feature tracker. Point features are matched between pairs of frames and linked into image trajectories at video rate. Robust estimates of the camera motion are then produced from the feature tracks using a geometric hypothesize-and-test architecture. This generates motion estimates from visual input alone. No prior knowledge of the scene or the motion is necessary. The visual estimates can also be used in conjunction with information from other sources, such as a global positioning system, inertial sensors, wheel encoders, etc. The pose estimation method has been applied successfully to video from aerial, automotive, and handheld platforms. We focus on results obtained with a stereo head mounted on an autonomous ground vehicle. We give examples of camera trajectories estimated in real time purely from images over previously unseen distances (600 m) and periods of time.
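
The hypothesize-and-test architecture described above is the core pattern behind most visual odometry frontends. A minimal monocular sketch of that loop using OpenCV follows; it illustrates the general technique, not the authors' implementation, and the camera intrinsics matrix K is an assumed input (the paper's stereo head additionally resolves the translation scale that a single camera cannot).

```python
# Minimal monocular visual-odometry step: track point features between two
# frames, then robustly estimate relative camera motion with RANSAC.
# Illustrative sketch using OpenCV, not the paper's system.
import cv2
import numpy as np

def vo_step(prev_gray, curr_gray, K):
    # Front end: detect point features and track them at video rate.
    pts_prev = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500,
                                       qualityLevel=0.01, minDistance=7)
    pts_curr, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray,
                                                   pts_prev, None)
    good = status.ravel() == 1
    p0, p1 = pts_prev[good], pts_curr[good]

    # Hypothesize-and-test: RANSAC fits an essential matrix while rejecting
    # outlier tracks, so the motion estimate comes from visual input alone.
    E, inliers = cv2.findEssentialMat(p0, p1, K, method=cv2.RANSAC,
                                      prob=0.999, threshold=1.0)
    # Decompose E into rotation R and unit-scale translation t.
    _, R, t, _ = cv2.recoverPose(E, p0, p1, K, mask=inliers)
    return R, t  # translation scale must come from stereo or another sensor
```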


Proceedings of the IEEE | 2006

Iris on the Move: Acquisition of Images for Iris Recognition in Less Constrained Environments

James R. Matey; Oleg Naroditsky; Keith J. Hanna; Raymond J. Kolczynski; Dominick Loiacono; Shakuntala Mangru; Michael Tinker; Thomas Zappia; Wenyi Y. Zhao

Iris recognition is one of the most powerful techniques for biometric identification ever developed. Commercial systems based on the algorithms developed by John Daugman have been available since 1995 and have been used in a variety of practical applications. However, all currently available systems impose substantial constraints on subject position and motion during the recognition process. These constraints are largely driven by the image acquisition process, rather than the particular pattern-matching algorithm used for the recognition process. In this paper, we present results of our efforts to substantially reduce constraints on position and motion by means of a new image acquisition system based on high-resolution cameras, video-synchronized strobed illumination, and specularity-based image segmentation. We discuss the design tradeoffs we made in developing the system and the performance we have been able to achieve when the image acquisition system is combined with a standard iris recognition algorithm. The Iris on the Move (IOM) system is the first system to enable capture of iris images of sufficient quality for iris recognition while the subject is moving at a normal walking pace through a minimally confining portal.
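
The specularity-based segmentation mentioned in the abstract exploits the small, near-saturated corneal reflections that the synchronized strobe produces. A hedged sketch of that cue, with assumed thresholds and blob sizes rather than the IOM system's actual values:

```python
# Locate candidate eye regions by finding small, saturated specular
# highlights produced by synchronized strobed illumination.
# Illustrative sketch only; thresholds are assumed, not the IOM values.
import cv2

def find_specularities(gray, min_area=2, max_area=100, thresh=250):
    # Specular reflections of the strobe sit near sensor saturation.
    _, bright = cv2.threshold(gray, thresh, 255, cv2.THRESH_BINARY)
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(bright)
    eyes = []
    for i in range(1, n):  # label 0 is the background component
        area = stats[i, cv2.CC_STAT_AREA]
        if min_area <= area <= max_area:  # small compact blob = candidate
            eyes.append(tuple(centroids[i]))
    return eyes  # centroids seed iris localization in the full-res image
```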


Computer Vision and Pattern Recognition | 2009

VideoTrek: A vision system for a tag-along robot

Oleg Naroditsky; Zhiwei Zhu; Aveek Das; Supun Samarasekera; Taragay Oskiper; Rakesh Kumar

We present a system that combines multiple visual navigation techniques to achieve GPS-denied, non-line-of-sight SLAM capability for heterogeneous platforms. Our approach builds on several layers of vision algorithms, including sparse frame-to-frame structure from motion (visual odometry), a Kalman filter for fusion with inertial measurement unit (IMU) data, and a distributed visual landmark matching capability with geometric consistency verification. We apply these techniques to implement a tag-along robot, where a human operator leads the way and a robot autonomously follows. We show results for a real-time implementation of such a system under real field constraints on CPU power and network resources.
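
The fusion layer follows the standard Kalman predict/update cycle: high-rate IMU data drives the prediction, and lower-rate visual odometry supplies the correction. A toy single-axis linear filter illustrating the pattern (the real system estimates full 6-DOF pose; the noise parameters here are assumptions):

```python
# Toy 1-axis Kalman filter fusing IMU acceleration (prediction) with
# visual-odometry position fixes (correction). Illustrative only.
import numpy as np

class VoImuFilter:
    def __init__(self, accel_var=0.5, vo_var=0.05):
        self.x = np.zeros(2)        # state: [position, velocity]
        self.P = np.eye(2)          # state covariance
        self.accel_var = accel_var  # IMU noise level (assumed)
        self.vo_var = vo_var        # VO measurement noise (assumed)

    def predict(self, accel, dt):
        # High-rate step driven by each IMU sample.
        F = np.array([[1.0, dt], [0.0, 1.0]])  # constant-velocity model
        B = np.array([0.5 * dt * dt, dt])      # acceleration input
        self.x = F @ self.x + B * accel
        Q = np.outer(B, B) * self.accel_var    # process noise from the IMU
        self.P = F @ self.P @ F.T + Q

    def update(self, vo_pos):
        # Lower-rate correction from a visual-odometry position fix.
        H = np.array([[1.0, 0.0]])             # VO observes position only
        S = H @ self.P @ H.T + self.vo_var
        K = (self.P @ H.T) / S                 # Kalman gain
        self.x = self.x + (K * (vo_pos - H @ self.x)).ravel()
        self.P = (np.eye(2) - K @ H) @ self.P
```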


International Conference on Robotics and Automation | 2010

Robust visual path following for heterogeneous mobile platforms

Aveek Das; Oleg Naroditsky; Zhiwei Zhu; Supun Samarasekera; Rakesh Kumar

We present an innovative path following system based upon multi-camera visual odometry and visual landmark matching. This technology enables reliable mobile robot navigation in real-world scenarios, including GPS-denied environments both indoors and outdoors. We recover paths in full 3D, making our approach applicable to both on- and off-road ground vehicles. Our controller relies on pose updates from visual odometry, allowing us to achieve path following even when only a joystick drive interface to the base robot platform is available. We experimentally investigate two specific applications of our technology to autonomous navigation on ground vehicles: non-line-of-sight leader-following (between heterogeneous platforms) and retro-traverse to home base. For safety and reliability, we add dynamic short-range obstacle detection and reactive avoidance capabilities to our controller. We show results for an end-to-end real-time implementation of this technology using current off-the-shelf computing and network resources in challenging environments.
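
A path follower of this kind turns pose updates into velocity commands that a joystick-level drive interface can accept. The sketch below uses pure pursuit, a common choice for this task though not necessarily the paper's exact control law; the lookahead distance and speed are assumed parameters:

```python
# Pure-pursuit style follower: given the robot pose from visual odometry
# and a recorded path, emit (linear, angular) velocity commands suitable
# for a joystick-level drive interface. Illustrative technique only.
import math

def follow_step(pose, path, lookahead=1.0, v_cmd=0.5):
    """pose = (x, y, heading); path = list of (x, y) waypoints."""
    x, y, theta = pose
    # Pick the first waypoint at least one lookahead distance away.
    target = path[-1]
    for wx, wy in path:
        if math.hypot(wx - x, wy - y) >= lookahead:
            target = (wx, wy)
            break
    # Transform the target into the robot frame.
    dx, dy = target[0] - x, target[1] - y
    lx = math.cos(theta) * dx + math.sin(theta) * dy
    ly = -math.sin(theta) * dx + math.cos(theta) * dy
    # Pure-pursuit curvature and the resulting turn rate.
    dist = math.hypot(lx, ly)
    curvature = 2.0 * ly / (dist * dist) if dist > 1e-6 else 0.0
    return v_cmd, v_cmd * curvature  # (linear velocity, angular velocity)
```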


Proceedings of SPIE | 2009

Infrastructure free 6 DOF location and pose estimation for mixed reality systems

Rakesh Kumar; Supun Samarasekera; Taragay Oskiper; Zhiwei Zhu; Oleg Naroditsky; R. Villamil; J. G. Kim

Mixed reality training systems using Head Mounted Displays (HMDs) require very precise knowledge of the 3D location and 3D orientation of the user's head. The system needs this to know where to insert the synthetic actors and objects in the HMD. The inserted objects must appear stable and not jitter or drift. Moreover, latency of less than 5 milliseconds for pose estimation is required for lag-free see-through HMD operation. We describe how to achieve this performance using a multi-camera visual navigation system mounted on the HMD. A Kalman filter is used to integrate high-rate estimates from an IMU with a visual odometry system and to predict head motion. Landmark matching and GPS, when available, are used to correct any drift.
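
Hiding display latency comes down to predicting the head pose a few milliseconds ahead before rendering. A minimal sketch of gyro-based forward prediction of orientation, assuming a (w, x, y, z) quaternion convention; this illustrates the idea, not the paper's filter:

```python
# Forward-predict head orientation by the display latency so inserted
# objects appear stable. Quaternion convention (w, x, y, z) is assumed.
import numpy as np

def predict_orientation(q, gyro, latency_s=0.005):
    """q: current orientation quaternion; gyro: angular rate (rad/s)."""
    # Integrate the gyro rate over the latency window (small-angle step).
    dtheta = np.asarray(gyro) * latency_s
    angle = np.linalg.norm(dtheta)
    if angle < 1e-9:
        return q
    axis = dtheta / angle
    dq = np.concatenate(([np.cos(angle / 2.0)], np.sin(angle / 2.0) * axis))
    # Hamilton product q_pred = q * dq applies the predicted increment.
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = dq
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])
```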


International Conference on Pattern Recognition | 2004

3D scanning using spatiotemporal orientation

Oleg Naroditsky; Kostas Daniilidis

We present a new approach to volumetric scene reconstruction which can produce accurate models from turntable image sequences. Instead of an epipolar plane image (EPI) volume, we consider a function on the 4D spatiotemporal volume whose value is the intensity back-projected from the camera (over time) to the particular voxel. Using an optical flow technique, we compute the local orientation of this spatiotemporal image and decide on occupancy based on the relative orientation between the viewing ray of the voxel at the particular time and the local image structure. Our method does not require background compensation as silhouette-based methods do, and it is comparable in performance with space carving.
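
The occupancy decision can be read as a consistency test: a voxel on the object's surface should move through the image sequence exactly as the turntable rotation predicts. An illustrative sketch of that test, with project() and the optical-flow stack as assumed inputs rather than the paper's implementation:

```python
# Sketch of the occupancy test: compare each voxel's predicted image motion
# under the known turntable rotation against measured optical flow at its
# projection. Illustrative reading of the method; helpers are assumed.
import numpy as np

def voxel_occupied(voxel, flows, project, thresh=0.5):
    """flows[t]: HxWx2 optical flow from frame t to t+1;
    project(voxel, t): voxel's 2D image projection at time t."""
    votes = 0
    for t, flow in enumerate(flows):
        u, v = project(voxel, t)
        measured = flow[int(round(v)), int(round(u))]
        # Predicted flow: finite difference of the voxel's projection
        # between consecutive frames of the turntable sequence.
        u2, v2 = project(voxel, t + 1)
        predicted = np.array([u2 - u, v2 - v])
        if np.linalg.norm(measured - predicted) < thresh:
            votes += 1
    # Occupied if the local spatiotemporal orientation agrees in most frames.
    return votes > len(flows) // 2
```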


Computer Vision and Pattern Recognition | 2004

Visual odometry

David Nistér; Oleg Naroditsky; James R. Bergen


Archive | 2006

Method and apparatus for providing strobed image capture

Dominick Loiacono; James R. Matey; Oleg Naroditsky; Michael Tinker; Thomas Zappia


Archive | 2005

Method and apparatus for visual odometry

James R. Bergen; Oleg Naroditsky; David Nistér


Archive | 2006

Method and apparatus for designing iris biometric systems for use in minimally constrained settings

Robert Amantea; James R. Bergen; Dominick Loiacono; James R. Matey; Oleg Naroditsky; Michael Tinker; Thomas Zappia

Collaboration


Dive into Oleg Naroditsky's collaborations.

Top Co-Authors


Zhiwei Zhu

Rensselaer Polytechnic Institute
