Publication


Featured research published by Taragay Oskiper.


Computer Vision and Pattern Recognition | 2007

Visual Odometry System Using Multiple Stereo Cameras and Inertial Measurement Unit

Taragay Oskiper; Zhiwei Zhu; Supun Samarasekera; Rakesh Kumar

Over the past decade, a tremendous amount of research activity has focused on the problem of localization in GPS-denied environments. The challenges of localization are highlighted in human-wearable systems, where the operator can move freely both indoors and outdoors. In this paper, we present a robust method that addresses these challenges using a human-wearable system with two pairs of backward- and forward-looking stereo cameras together with an inertial measurement unit (IMU). The algorithm runs in real time with a 15 Hz update rate on a dual-core 2 GHz laptop PC, and it is designed to be a highly accurate local (relative) pose estimation mechanism acting as the front-end to a simultaneous localization and mapping (SLAM) type method capable of global corrections through landmark matching. Extensive tests of our prototype system so far reveal that, without any global landmark matching, we achieve between 0.5% and 1% accuracy in localizing a person over 500 meters of travel indoors and outdoors. To our knowledge, such performance results with a real-time system have not been reported before.
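
As a rough illustration of how such a relative pose front-end behaves, the sketch below (a minimal Python fragment under our own naming, not the authors' implementation) chains frame-to-frame motion estimates into a running global pose and computes the reported accuracy figure as endpoint error over distance traveled:

import numpy as np

def compose(T_world_prev, T_prev_curr):
    """Chain a frame-to-frame relative pose (4x4 homogeneous transform)
    onto the running world pose; small per-frame errors accumulate."""
    return T_world_prev @ T_prev_curr

def drift_percent(est_positions, true_positions):
    """Endpoint error as a percentage of total distance traveled, e.g.
    a 3 m endpoint error over 500 m of travel gives 0.6%."""
    end_error = np.linalg.norm(est_positions[-1] - true_positions[-1])
    distance = np.sum(np.linalg.norm(np.diff(true_positions, axis=0), axis=1))
    return 100.0 * end_error / distance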


Computer Vision and Pattern Recognition | 2008

Real-time global localization with a pre-built visual landmark database

Zhiwei Zhu; Taragay Oskiper; Supun Samarasekera; Rakesh Kumar; Harpreet S. Sawhney

In this paper, we study how to build a vision-based system for global localization with accuracies within 10 cm for robots and humans operating both indoors and outdoors over wide areas covering many square kilometers. In particular, we study the parameters of building a landmark database rapidly and utilizing that database online for real-time, accurate global localization. Although the accuracy of traditional short-term, motion-based visual odometry systems has improved significantly in recent years, these systems alone cannot solve the drift problem over large areas. Landmark-based localization combined with visual odometry is a viable solution to the large-scale localization problem. However, a systematic study of the specification and use of such a landmark database has not been undertaken. We propose techniques to build and optimize a landmark database systematically and efficiently using visual odometry. First, topology inference is utilized to find overlapping images in the database. Second, bundle adjustment is used to refine the accuracy of each 3D landmark. Finally, the database is optimized to balance its size against achievable accuracy. Once the landmark database is obtained, a new real-time global localization methodology that works both indoors and outdoors is proposed. We present results of our study on both synthetic and real datasets that help us determine critical design parameters for the landmark database and the achievable accuracies of our proposed system.
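
To make the bundle adjustment step concrete, here is a hedged sketch of the reduced problem of refining a single 3D landmark against its image observations while holding the VO-derived camera poses fixed; the pinhole camera model and all names are our assumptions, not the paper's code:

import numpy as np
from scipy.optimize import least_squares

def reprojection_residuals(X, poses, observations, K):
    """X: 3D landmark in world coordinates; poses: list of (R, t)
    world-to-camera transforms; observations: measured pixel locations;
    K: 3x3 camera intrinsics."""
    res = []
    for (R, t), uv in zip(poses, observations):
        p_cam = R @ X + t                  # world point into camera frame
        uvw = K @ p_cam                    # pinhole projection
        res.extend(uvw[:2] / uvw[2] - uv)  # pixel-space residual
    return np.asarray(res)

def refine_landmark(X0, poses, observations, K):
    """Nonlinear least-squares refinement of one landmark position."""
    return least_squares(reprojection_residuals, X0,
                         args=(poses, observations, K)).x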


International Conference on Computer Vision | 2007

Ten-fold Improvement in Visual Odometry Using Landmark Matching

Zhiwei Zhu; Taragay Oskiper; Supun Samarasekera; Rakesh Kumar; Harpreet S. Sawhney

Our goal is to create a visual odometry system for robots and wearable systems such that localization accuracies of centimeters can be obtained over hundreds of meters of distance traveled. Existing systems have achieved approximately a 1% to 5% localization error rate, whereas our proposed system achieves close to a 0.1% error rate, a ten-fold reduction. Traditional visual odometry systems drift over time as frame-to-frame errors accumulate. In this paper, we propose to improve visual odometry using visual landmarks in the scene. First, a dynamic local landmark tracking technique is proposed to track a set of local landmarks across image frames and select an optimal set of tracked local landmarks for pose computation. As a result, the error associated with each pose computation is minimized, reducing the drift significantly. Second, a global landmark based drift correction technique is proposed to recognize previously visited locations and use them to correct drift accumulated during motion. At each visited location along the route, a set of distinctive visual landmarks is automatically extracted and inserted into a landmark database dynamically. We integrate the landmark-based approach into a navigation system with two stereo pairs and a low-cost inertial measurement unit (IMU) for increased robustness. We demonstrate that a real-time visual odometry system using local and global landmarks can locate a user to within 1 meter over 1000 meters in unknown indoor/outdoor environments, in challenging situations such as climbing stairs, opening doors, and moving foreground objects.
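
The global drift-correction idea can be pictured with a short sketch (illustrative Python under our own naming, not the paper's code): when the current view is recognized against a stored global landmark with a known pose, the accumulated visual odometry pose is snapped, or partially blended, toward the pose recovered from the match:

import numpy as np

def correct_drift(T_vo, T_landmark, blend=1.0):
    """T_vo: accumulated (drifting) world pose from visual odometry;
    T_landmark: world pose recovered from the global landmark match;
    blend=1.0 snaps fully, smaller values apply a partial translational
    correction to avoid visible jumps. Both are 4x4 transforms."""
    T = T_vo.copy()
    T[:3, 3] += blend * (T_landmark[:3, 3] - T_vo[:3, 3])
    if blend >= 1.0:
        T[:3, :3] = T_landmark[:3, :3]
    return T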


International Symposium on Mixed and Augmented Reality | 2012

Multi-sensor navigation algorithm using monocular camera, IMU and GPS for large scale augmented reality

Taragay Oskiper; Supun Samarasekera; Rakesh Kumar

We describe a camera tracking system for augmented reality applications that can operate both indoors and outdoors. The system uses a monocular camera, a MEMS-type inertial measurement unit (IMU) with 3-axis gyroscopes and accelerometers, and a GPS unit to accurately and robustly track the camera motion in 6 degrees of freedom (with correct scale) in arbitrary indoor or outdoor scenes. IMU and camera fusion is performed in a tightly coupled manner by an error-state extended Kalman filter (EKF), such that each visually tracked feature contributes as an individual measurement, as opposed to more traditional approaches where camera pose estimates are first extracted by means of feature tracking and then used as measurement updates in a filter framework. Robustness in feature tracking, and hence in visual measurement generation, is achieved by IMU-aided feature matching and a two-point relative pose estimation method that removes outliers from the raw feature point matches. A landmark matching mechanism that contains long-term drift in orientation via on-the-fly, user-generated geo-tiepoints is also described.
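
A minimal sketch of the two-point idea (our illustration, with assumed names): once the inter-frame rotation R is supplied by IMU gyro integration, the epipolar constraint x2' [t]x R x1 = 0 becomes linear in the translation t, so two correspondences fix its direction and the remaining raw matches can be gated on epipolar error:

import numpy as np

def translation_from_two_points(R, x1_pair, x2_pair):
    """x1_pair, x2_pair: two unit bearing vectors per frame, shape (2, 3).
    Each match contributes one row (x2 x (R x1))' t = 0; t spans the
    null space of the resulting 2x3 matrix."""
    A = np.stack([np.cross(x2, R @ x1) for x1, x2 in zip(x1_pair, x2_pair)])
    _, _, Vt = np.linalg.svd(A)
    return Vt[-1]  # unit translation direction, up to sign

def epipolar_errors(R, t, x1, x2):
    """Residuals of the epipolar constraint for RANSAC-style gating."""
    E = np.cross(np.eye(3), t) @ R  # essential matrix [t]x R
    return np.abs(np.einsum('ij,ij->i', x2, (E @ x1.T).T))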


IEEE Virtual Reality Conference | 2011

Stable vision-aided navigation for large-area augmented reality

Taragay Oskiper; Han-Pang Chiu; Zhiwei Zhu; Supun Samarasekera; Rakesh Kumar

In this paper, we present a unified approach for a drift-free and jitter-reduced vision-aided navigation system. The approach is based on an error-state Kalman filter algorithm using both relative (local) measurements obtained from image-based motion estimation through visual odometry, and global measurements obtained by landmark matching against a pre-built visual landmark database. To improve the accuracy of pose estimation for augmented reality applications, we capture the 3D local reconstruction uncertainty of each landmark point as a covariance matrix and implicitly rely more on closer points in the filter. We conduct a number of experiments aimed at evaluating different aspects of our Kalman filter framework, and show that our approach provides highly accurate and stable pose both indoors and outdoors over large areas. The results demonstrate both the long-term stability and the overall accuracy of our algorithm as a solution to the camera tracking problem in augmented reality applications.
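
One simple way to realize "rely more on closer points" in such a filter, sketched under our own assumptions rather than the paper's exact formulation, is to propagate each landmark's 3D reconstruction covariance through the projection Jacobian into the image-space measurement noise, so poorly triangulated distant points receive a small Kalman gain:

import numpy as np

def measurement_noise(J_proj, Sigma_landmark, pixel_sigma=1.0):
    """J_proj: 2x3 Jacobian of the projection wrt the 3D point;
    Sigma_landmark: 3x3 covariance of the reconstructed landmark.
    Distant points have large covariance along the viewing ray, which
    inflates R and down-weights their measurement in the filter."""
    R_pixel = (pixel_sigma ** 2) * np.eye(2)
    return R_pixel + J_proj @ Sigma_landmark @ J_proj.T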


International Symposium on Mixed and Augmented Reality | 2013

Augmented Reality Binoculars

Taragay Oskiper; Mikhail Sizintsev; Vlad Branzoi; Supun Samarasekera; Rakesh Kumar

In this paper, we present an augmented reality binocular system that allows long-range, high-precision augmentation of live telescopic imagery with aerial and terrain-based synthetic objects, vehicles, people, and effects. The inserted objects must appear stable in the display and must not jitter or drift as the user pans around and examines the scene with the binoculars. The system is designed around two cameras, one with a wide field-of-view lens and one with a narrow field-of-view lens, enclosed in a binocular-shaped shell. The wide field of view gives us context and enables us to recover the 3D location and orientation of the binoculars much more robustly, whereas the narrow field of view is used for the actual augmentation as well as to increase precision in tracking. We present our navigation algorithm, which combines the two cameras with an IMU and GPS in an extended Kalman filter (EKF) and provides jitter-free, robust, and real-time pose estimation for precise augmentation. We have demonstrated successful use of our system as part of a live simulated training system for observer training, in which fixed- and rotary-wing aircraft, ground vehicles, and weapon effects are combined with real-world scenes.
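
A hedged sketch of the two-camera arrangement (the camera model and names are our assumptions): the filter tracks the wide field-of-view camera, a fixed extrinsic calibration maps that pose to the narrow field-of-view camera, and synthetic points are then rendered by standard pinhole projection:

import numpy as np

def narrow_pose(T_world_wide, T_wide_narrow):
    """T_wide_narrow: fixed extrinsic calibration between the rigidly
    mounted cameras; both arguments are 4x4 homogeneous transforms."""
    return T_world_wide @ T_wide_narrow

def project(T_world_cam, X_world, K):
    """Render a geo-registered synthetic point into the narrow-FOV image;
    returns pixel coordinates, or None if the point is behind the camera."""
    T_cam_world = np.linalg.inv(T_world_cam)
    p_cam = T_cam_world[:3, :3] @ X_world + T_cam_world[:3, 3]
    if p_cam[2] <= 0:
        return None
    uvw = K @ p_cam
    return uvw[:2] / uvw[2]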


Computer Vision and Pattern Recognition | 2009

VideoTrek: A vision system for a tag-along robot

Oleg Naroditsky; Zhiwei Zhu; Aveek Das; Supun Samarasekera; Taragay Oskiper; Rakesh Kumar

We present a system that combines multiple visual navigation techniques to achieve GPS-denied, non-line-of-sight SLAM capability for heterogeneous platforms. Our approach builds on several layers of vision algorithms, including sparse frame-to-frame structure from motion (visual odometry), a Kalman filter for fusion with inertial measurement unit (IMU) data and a distributed visual landmark matching capability with geometric consistency verification. We apply these techniques to implement a tag-along robot, where a human operator leads the way and a robot autonomously follows. We show results for a real-time implementation of such a system with real field constraints on CPU power and network resources.
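
The geometric consistency verification can be illustrated as follows (a hypothetical sketch; thresholds and names are ours): a candidate landmark match is kept only if enough of its 3D-to-2D correspondences reproject close to their measured image locations under the hypothesized pose:

import numpy as np

def count_consistent_matches(points_3d, points_2d, R, t, K, thresh_px=2.0):
    """Count correspondences whose reprojection error under pose (R, t)
    is below thresh_px; a low count flags a bad landmark match."""
    inliers = 0
    for X, uv in zip(points_3d, points_2d):
        p_cam = R @ X + t
        if p_cam[2] <= 0:
            continue  # behind the camera, cannot be consistent
        uvw = K @ p_cam
        if np.linalg.norm(uvw[:2] / uvw[2] - uv) < thresh_px:
            inliers += 1
    return inliers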


Computer Vision and Pattern Recognition | 2011

High-precision localization using visual landmarks fused with range data

Zhiwei Zhu; Han-Pang Chiu; Taragay Oskiper; Saad Ali; Raia Hadsell; Supun Samarasekera; Rakesh Kumar

Visual landmark matching against a pre-built landmark database is a popular technique for localization. Traditionally, the landmark database is built with a visual odometry system, and the 3D information for each visual landmark is reconstructed from video. Due to the drift of the visual odometry system, a globally consistent landmark database is difficult to build, and the inaccuracy of each 3D landmark limits the performance of landmark matching. In this paper, we demonstrate that with the use of precise 3D lidar range data, we are able to build a globally consistent database of high-precision 3D visual landmarks, which improves landmark matching accuracy dramatically. To further improve accuracy and robustness, landmark matching is fused with a multi-stereo visual odometry system to estimate the camera pose in two ways. First, a consistency check against the local visual odometry trajectory is performed to reject landmark matches with large errors, and then Kalman filtering is used to further smooth out landmark matching errors. Finally, a disk-cache mechanism is proposed to maintain real-time performance as the landmark database grows to cover a large-scale area. Week-long, real-time, live marine training experiments have demonstrated the high precision and robustness of our proposed system.
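
A disk-cache mechanism of this flavor can be sketched as a spatial tiling of the landmark database with least-recently-used eviction; the file layout and naming below are hypothetical, not the paper's implementation:

import os
import pickle
from collections import OrderedDict

class LandmarkTileCache:
    """Keep only the landmark tiles near the current position in RAM."""

    def __init__(self, tile_dir, max_tiles=16):
        self.tile_dir = tile_dir
        self.max_tiles = max_tiles
        self.cache = OrderedDict()  # (ix, iy) -> loaded tile

    def get(self, ix, iy):
        key = (ix, iy)
        if key in self.cache:
            self.cache.move_to_end(key)       # mark as recently used
            return self.cache[key]
        path = os.path.join(self.tile_dir, f"tile_{ix}_{iy}.pkl")
        with open(path, "rb") as f:
            tile = pickle.load(f)             # fetch landmarks from disk
        self.cache[key] = tile
        if len(self.cache) > self.max_tiles:
            self.cache.popitem(last=False)    # evict least recently used
        return tile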


Intelligent Robots and Systems | 2010

Multi-modal sensor fusion algorithm for ubiquitous infrastructure-free localization in vision-impaired environments

Taragay Oskiper; Han-Pang Chiu; Zhiwei Zhu; Supun Samarasekera; Rakesh Kumar

In this paper, we present a unified approach for a camera tracking system based on an error-state Kalman filter algorithm. The filter uses relative (local) measurements obtained from image-based motion estimation through visual odometry, as well as global measurements produced by landmark matching against a pre-built visual landmark database and range measurements obtained from radio frequency (RF) ranging radios. We show our results by using the camera poses output by our system to render views from a 3D graphical model built in the same coordinate frame as the landmark database, which also forms the global coordinate system, and comparing them to the actual video images. These results demonstrate both the long-term stability and the overall accuracy of our algorithm as a solution to the GPS-denied ubiquitous camera tracking problem under both vision-aided and vision-impaired conditions.
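
As an illustration of how an RF range measurement can enter such a filter, here is a reduced, position-only sketch (our assumptions, not the paper's error-state formulation): the predicted measurement is the distance to the fixed radio, and the measurement Jacobian is the unit vector toward it:

import numpy as np

def range_update(p_est, P, anchor, z_range, sigma_range=0.3):
    """p_est: 3-vector position estimate; P: 3x3 position covariance;
    anchor: known position of the ranging radio; z_range: measured
    distance in meters; sigma_range: assumed ranging noise."""
    diff = p_est - anchor
    predicted = np.linalg.norm(diff)
    H = (diff / predicted).reshape(1, 3)   # Jacobian of ||p - anchor||
    S = H @ P @ H.T + sigma_range ** 2     # innovation covariance
    K = (P @ H.T) / S                      # Kalman gain (3x1)
    p_new = p_est + (K * (z_range - predicted)).ravel()
    P_new = (np.eye(3) - K @ H) @ P
    return p_new, P_new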


Defense and Security Symposium | 2007

Precise visual navigation using multi-stereo vision and landmark matching

Zhiwei Zhu; Taragay Oskiper; Supun Samarasekera; Rakesh Kumar

Traditional vision-based navigation systems often drift over time during navigation. In this paper, we propose a set of techniques that greatly reduce long-term drift and improve robustness to many failure conditions. In our approach, two pairs of stereo cameras are integrated to form a forward/backward multi-stereo camera system. As a result, the field of view of the system is extended significantly to capture more natural landmarks from the scene. This helps increase pose estimation accuracy and reduces failure situations. Second, a global landmark matching technique is used to recognize previously visited locations during navigation. Using the matched landmarks, a pose correction technique eliminates the accumulated navigation drift. Finally, to further improve the robustness of the system, measurements from a low-cost inertial measurement unit (IMU) and Global Positioning System (GPS) sensors are integrated with the visual odometry in an extended Kalman filtering framework. Our system is significantly more accurate and robust than previously published techniques (1∼5% localization error) over long-distance navigation both indoors and outdoors. Real-world experiments on a human-worn system show that location can be estimated to within 1 meter over 500 meters (around 0.1% localization error on average) without the use of GPS information.
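
For completeness, GPS can enter the same extended Kalman filtering framework as a direct, noisy position measurement; the fragment below is a minimal position-only sketch under our own assumptions, not the paper's filter:

import numpy as np

def gps_update(p_est, P, z_gps, sigma_gps=3.0):
    """p_est: 3-vector position estimate; P: 3x3 position covariance;
    z_gps: GPS position fix; sigma_gps: assumed GPS noise in meters.
    H is the identity because GPS observes position directly."""
    H = np.eye(3)
    S = H @ P @ H.T + (sigma_gps ** 2) * np.eye(3)
    K = P @ H.T @ np.linalg.inv(S)
    p_new = p_est + K @ (z_gps - p_est)
    P_new = (np.eye(3) - K @ H) @ P
    return p_new, P_new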

Collaboration


Dive into Taragay Oskiper's collaborations.

Top Co-Authors

Zhiwei Zhu

Rensselaer Polytechnic Institute
