Publication


Featured research published by Zhiwei Zhu.


International Conference on Computer Vision | 2007

Ten-fold Improvement in Visual Odometry Using Landmark Matching

Zhiwei Zhu; Taragay Oskiper; Supun Samarasekera; Rakesh Kumar; Harpreet S. Sawhney

Our goal is to create a visual odometry system for robots and wearable systems such that localization accuracies of centimeters can be obtained for hundreds of meters of distance traveled. Existing systems have achieved approximately a 1% to 5% localization error rate, whereas our proposed system achieves close to a 0.1% error rate, a ten-fold reduction. Traditional visual odometry systems drift over time as frame-to-frame errors accumulate. In this paper, we propose to improve visual odometry using visual landmarks in the scene. First, a dynamic local landmark tracking technique is proposed to track a set of local landmarks across image frames and select an optimal set of tracked local landmarks for pose computation. As a result, the error associated with each pose computation is minimized to reduce the drift significantly. Second, a global landmark based drift correction technique is proposed to recognize previously visited locations and use them to correct drift accumulated during motion. At each visited location along the route, a set of distinctive visual landmarks is automatically extracted and inserted into a landmark database dynamically. We integrate the landmark based approach into a navigation system with two stereo pairs and a low-cost inertial measurement unit (IMU) for increased robustness. We demonstrate that a real-time visual odometry system using local and global landmarks can precisely locate a user within 1 meter over 1000 meters in unknown indoor/outdoor environments, under challenging conditions such as climbing stairs, opening doors, and moving foreground objects.
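The two-level landmark scheme can be made concrete with a toy sketch. In the snippet below (all names are illustrative, not the paper's actual API), places are represented as sets of feature IDs and poses as 2D positions; drift accumulates through noisy dead reckoning and is snapped back whenever a stored place is re-recognized:

```python
import numpy as np

# Toy sketch of the global-landmark drift correction described above.
# Feature-ID sets stand in for the paper's stereo feature tracks.

rng = np.random.default_rng(0)

class LandmarkDatabase:
    """Maps a visited place to the pose recorded when it was first seen."""
    def __init__(self, min_overlap=30):
        self.places = []                      # (feature-ID set, pose)
        self.min_overlap = min_overlap

    def query_or_insert(self, feature_ids, pose):
        for ids, stored in self.places:
            if len(ids & feature_ids) >= self.min_overlap:
                return stored                 # revisit recognized
        self.places.append((frozenset(feature_ids), pose.copy()))
        return None

def integrate(steps, db=None):
    """Dead-reckon noisy frame-to-frame steps; correct pose at revisits."""
    pose = np.zeros(2)
    for step, feature_ids in steps:
        pose = pose + step + rng.normal(0.0, 0.01, 2)   # drift accrues
        if db is not None:
            stored = db.query_or_insert(feature_ids, pose)
            if stored is not None:
                pose = stored.copy()          # snap accumulated drift back
    return pose
```

Running integrate on a loop trajectory whose first and last frames share feature IDs, with and without the database, makes the drift-correction effect visible in miniature.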


IEEE Virtual Reality Conference | 2011

Stable vision-aided navigation for large-area augmented reality

Taragay Oskiper; Han-Pang Chiu; Zhiwei Zhu; Supun Samarasekera; Rakesh Kumar

In this paper, we present a unified approach for a drift-free and jitter-reduced vision-aided navigation system. This approach is based on an error-state Kalman filter algorithm using both relative (local) measurements obtained from image based motion estimation through visual odometry, and global measurements obtained by landmark matching against a pre-built visual landmark database. To improve the accuracy of pose estimation for augmented reality applications, we capture the 3D local reconstruction uncertainty of each landmark point as a covariance matrix and implicitly rely more on closer points in the filter. We conduct a number of experiments aimed at evaluating different aspects of our Kalman filter framework, and show that our approach can provide highly accurate and stable pose estimates both indoors and outdoors over large areas. The results demonstrate both the long-term stability and the overall accuracy of our algorithm as a solution to the camera tracking problem in augmented reality applications.
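The covariance-weighted fusion of relative and global measurements can be sketched in a few lines. The filter below is a heavily simplified stand-in for the paper's error-state formulation (position-only state, identity measurement model; all names are illustrative):

```python
import numpy as np

# Simplified fusion of relative and global measurements: visual odometry
# supplies relative motion (prediction), landmark matches supply global
# fixes (update), and each landmark's 3D reconstruction covariance
# enters as R, so better-reconstructed points are trusted more.

class PoseFilter:
    def __init__(self):
        self.x = np.zeros(3)          # position estimate
        self.P = np.eye(3) * 1e-4     # its covariance

    def predict(self, vo_delta, Q):
        """Relative measurement from visual odometry."""
        self.x = self.x + vo_delta
        self.P = self.P + Q           # VO noise accumulates (drift)

    def update(self, global_fix, R):
        """Global measurement from a matched landmark, with that
        landmark's reconstruction covariance R."""
        S = self.P + R
        K = self.P @ np.linalg.inv(S)         # Kalman gain
        self.x = self.x + K @ (global_fix - self.x)
        self.P = (np.eye(3) - K) @ self.P
```

Because R grows with landmark depth, distant points pull the estimate less, which is the "rely more on closer points" behavior described above.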


International Symposium on Mixed and Augmented Reality | 2014

AR-mentor: Augmented reality based mentoring system

Zhiwei Zhu; Vlad Branzoi; Michael Wolverton; Glen Murray; Nicholas Vitovitch; Louise Yarnall; Girish Acharya; Supun Samarasekera; Rakesh Kumar

AR-Mentor is a wearable real-time augmented reality (AR) mentoring system configured to assist in maintenance and repair tasks on complex machinery, such as vehicles, appliances, and industrial machinery. The system combines a wearable optical see-through (OST) display device and high-precision 6-degree-of-freedom (DOF) pose tracking with a virtual personal assistant (VPA) offering natural-language, verbal conversational interaction, providing guidance to the user in the form of visual, audio, and locational cues. The system is designed to be heads-up and hands-free, allowing the user to move freely about the maintenance or training environment and receive globally aligned, context-aware visual and audio instructions (animations, symbolic icons, text, multimedia content, speech). The user can interact with the system, ask questions, and get clarifications and specific guidance for the task at hand. A pilot application of AR-Mentor was successfully built to instruct a novice to perform an advanced 33-step maintenance task on a training vehicle. Initial live training tests demonstrate that AR-Mentor can serve as an assistant to an instructor, freeing the instructor to cover more students and to focus on higher-order teaching.
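The "globally aligned" cues hinge on re-projecting world-anchored content with the tracked 6-DOF pose every frame. A minimal pinhole-projection sketch (intrinsics and poses invented for illustration, not the system's calibration):

```python
import numpy as np

# A world-anchored instruction marker stays glued to the machine part
# only if it is re-projected with the current head pose every frame.

K = np.array([[800.0,   0.0, 640.0],   # hypothetical intrinsics of the
              [  0.0, 800.0, 360.0],   # OST display's virtual camera
              [  0.0,   0.0,   1.0]])

def project(point_world, R_wc, t_wc):
    """Project a 3D world point into pixel coordinates, given the
    camera-from-world rotation R_wc and translation t_wc."""
    p_cam = R_wc @ point_world + t_wc
    if p_cam[2] <= 0:
        return None                    # behind the viewer: do not draw
    uv = K @ (p_cam / p_cam[2])
    return uv[:2]

# A marker anchored to, e.g., a bolt on the training vehicle:
bolt = np.array([1.2, 0.3, 0.0])
pixel = project(bolt, np.eye(3), np.array([0.0, 0.0, 2.0]))
print(pixel)   # where the overlay icon is drawn this frame
```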


Computer Vision and Pattern Recognition | 2011

High-precision localization using visual landmarks fused with range data

Zhiwei Zhu; Han-Pang Chiu; Taragay Oskiper; Saad Ali; Raia Hadsell; Supun Samarasekera; Rakesh Kumar

Visual landmark matching against a pre-built landmark database is a popular technique for localization. Traditionally, the landmark database is built with a visual odometry system, and the 3D information of each visual landmark is reconstructed from video. Due to the drift of the visual odometry system, a globally consistent landmark database is difficult to build, and the inaccuracy of each 3D landmark limits the performance of landmark matching. In this paper, we demonstrate that with the use of precise 3D lidar range data, we are able to build a globally consistent database of high-precision 3D visual landmarks, which improves landmark matching accuracy dramatically. To further improve accuracy and robustness, landmark matching is fused with a multi-stereo visual odometry system to estimate the camera pose in two ways. First, a consistency check against the local visual odometry trajectory rejects bad landmark matches or those with large errors; then Kalman filtering further smooths out remaining landmark matching errors. Finally, a disk-cache mechanism is proposed to maintain real-time performance as the landmark database grows to cover a large-scale area. Week-long real-time live marine training experiments have demonstrated the high precision and robustness of our proposed system.
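The disk-cache mechanism can be sketched as an LRU cache over spatial tiles of the landmark database, so only landmarks near the current position estimate occupy memory. Tile size, capacity, and the loader below are illustrative assumptions, not the paper's design:

```python
import collections

# LRU cache over spatial tiles of a large landmark database: tiles near
# the current position stay in RAM; the least recently used tile is
# evicted when the cache is full.

class LandmarkTileCache:
    def __init__(self, capacity=16, tile_size=50.0):
        self.cache = collections.OrderedDict()   # tile key -> landmarks
        self.capacity = capacity
        self.tile_size = tile_size

    def _key(self, x, y):
        return (int(x // self.tile_size), int(y // self.tile_size))

    def get(self, x, y):
        key = self._key(x, y)
        if key in self.cache:
            self.cache.move_to_end(key)           # mark most recently used
        else:
            self.cache[key] = self._load_tile(key)
            if len(self.cache) > self.capacity:
                self.cache.popitem(last=False)    # evict LRU tile
        return self.cache[key]

    def _load_tile(self, key):
        # Stand-in for reading a tile of 3D landmarks from disk.
        return []
```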


Intelligent Robots and Systems | 2010

Multi-modal sensor fusion algorithm for ubiquitous infrastructure-free localization in vision-impaired environments

Taragay Oskiper; Han-Pang Chiu; Zhiwei Zhu; Supun Samarasekera; Rakesh Kumar

In this paper, we present a unified approach for a camera tracking system based on an error-state Kalman filter algorithm. The filter uses relative (local) measurements obtained from image-based motion estimation through visual odometry, as well as global measurements produced by landmark matching against a pre-built visual landmark database and range measurements obtained from radio frequency (RF) ranging radios. We show our results by using the camera poses output by our system to render views from a 3D graphical model built upon the same coordinate frame as the landmark database, which also forms the global coordinate system, and comparing them to the actual video images. These results demonstrate both the long-term stability and the overall accuracy of our algorithm as a solution to the GPS-denied ubiquitous camera tracking problem under both vision-aided and vision-impaired conditions.
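The RF ranging update is the nonlinear counterpart of a landmark update: a known anchor position and a measured distance constrain the pose even when vision is impaired. A position-only sketch (anchor location, noise level, and state layout are assumptions for illustration):

```python
import numpy as np

# Extended Kalman filter update for one RF range measurement to a
# ranging radio at a known anchor position. State x and covariance P
# are position-only for simplicity.

def range_update(x, P, anchor, measured_range, sigma=0.5):
    diff = x - anchor
    predicted = np.linalg.norm(diff)
    H = (diff / predicted).reshape(1, 3)     # Jacobian of ||x - anchor||
    S = H @ P @ H.T + sigma ** 2
    K = P @ H.T / S                          # Kalman gain (3x1)
    x = x + (K * (measured_range - predicted)).ravel()
    P = (np.eye(3) - K @ H) @ P
    return x, P

x, P = np.array([10.0, 2.0, 0.0]), np.eye(3)
x, P = range_update(x, P, anchor=np.zeros(3), measured_range=10.5)
```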


British Machine Vision Conference | 2014

Virtual Insertion: Robust Bundle Adjustment over Long Video Sequences

Ziyan Wu; Zhiwei Zhu; Han-Pang Chiu

Our goal is to circumvent one of the roadblocks to using existing bundle adjustment algorithms for achieving satisfactory large-area structure from motion over long video sequences, namely the need for sufficient visual features tracked across consecutive frames. We accomplish this with a novel "virtual insertion" scheme, which constructs virtual points and virtual frames to handle visual landmark link outages, namely "visual breaks" that occur when no common features are observed across neighboring camera views in challenging environments. We show how to insert virtual point correspondences at each break position and its neighboring frames, by transforming initial motion estimates from non-vision sensors into 3D-to-2D projection constraints on virtual scene landmarks. We also show how to add virtual frames to bridge the gap of non-overlapping fields of view (FOV) across sequential frames. Experiments are conducted on several challenging real-world video sequences collected by multi-sensor visual odometry systems. We demonstrate that our proposed scheme significantly improves bundle adjustment performance in both drift correction and reconstruction accuracy.
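A toy version of the virtual-insertion step: when no real features are shared across a break, synthetic 3D points are placed in front of the first camera and projected into both frames using the relative pose reported by non-vision sensors, yielding 3D-to-2D correspondences that bridge the gap. Intrinsics and the sampling scheme below are invented for illustration:

```python
import numpy as np

# Synthesize virtual 3D points and their 2D observations in two frames
# on either side of a "visual break", given the relative pose (R_rel,
# t_rel) of frame B with respect to frame A from IMU/odometry.

K = np.array([[700.0,   0.0, 320.0],
              [  0.0, 700.0, 240.0],
              [  0.0,   0.0,   1.0]])   # hypothetical intrinsics

def project(K, R, t, X):
    p = K @ (R @ X + t)
    return p[:2] / p[2]

def virtual_correspondences(R_rel, t_rel, n=20, depth=5.0, seed=0):
    rng = np.random.default_rng(seed)
    points, obs_a, obs_b = [], [], []
    for _ in range(n):
        # Virtual landmark in front of camera A, at an assumed depth.
        X = np.array([*rng.uniform(-1.0, 1.0, 2), depth])
        points.append(X)
        obs_a.append(project(K, np.eye(3), np.zeros(3), X))  # in frame A
        obs_b.append(project(K, R_rel, t_rel, X))            # in frame B
    return points, obs_a, obs_b
```

These synthetic correspondences enter bundle adjustment like ordinary feature tracks, constraining the two camera poses to stay consistent with the non-vision motion estimate across the break.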


Archive | 2009

System and method for generating a mixed reality environment

Rakesh Kumar; Taragay Oskiper; Oleg Naroditsky; Supun Samarasekera; Zhiwei Zhu; Janet Kim


Archive | 2007

Unified framework for precise vision-aided navigation

Supun Samarasekera; Rakesh Kumar; Taragay Oskiper; Zhiwei Zhu; Oleg Naroditsky; Harpreet S. Sawhney


Archive | 2010

Food recognition using visual analysis and speech recognition

Manika Puri; Zhiwei Zhu; Jeffrey Lubin; Tom Pschar; Ajay Divakaran; Harpreet S. Sawhney


Archive | 2012

Method and apparatus for mentoring via an augmented reality assistant

Rakesh Kumar; Supun Samarasekera; Girish Acharya; Michael Wolverton; Necip Fazil Ayan; Zhiwei Zhu; Ryan Villamil
