
Publications


Featured research published by Adnan Ansar.


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2003

Linear pose estimation from points or lines

Adnan Ansar; Konstantinos Daniilidis

Estimation of camera pose from an image of n points or lines with known correspondence is a thoroughly studied problem in computer vision. Most solutions are iterative and depend on nonlinear optimization of some geometric constraint, either on the world coordinates or on the projections to the image plane. For real-time applications, we are interested in linear or closed-form solutions free of initialization. We present a general framework which allows for a novel set of linear solutions to the pose estimation problem for both n points and n lines. We then analyze the sensitivity of our solutions to image noise and show that the sensitivity analysis can be used as a conservative predictor of error for our algorithms. We present a number of simulations which compare our results to two other recent linear algorithms, as well as to iterative approaches. We conclude with tests on real imagery in an augmented reality setup.
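The paper's linear framework is its own contribution, but the flavor of a linear (SVD-based, initialization-free) solution can be illustrated with the classical Direct Linear Transform, a related but simpler method that recovers the full 3×4 projection matrix from n ≥ 6 point correspondences. This is a minimal numpy sketch, not the paper's algorithm:

```python
import numpy as np

def dlt_projection(X, x):
    """Estimate a 3x4 camera projection matrix from n >= 6 non-coplanar
    3D-2D point correspondences via the Direct Linear Transform.
    X: (n, 3) world points; x: (n, 2) image points."""
    A = []
    for (Xw, Yw, Zw), (u, v) in zip(X, x):
        P3 = [Xw, Yw, Zw, 1.0]
        A.append([*P3, 0, 0, 0, 0, *[-u * c for c in P3]])
        A.append([0, 0, 0, 0, *P3, *[-v * c for c in P3]])
    # The least-squares solution of A p = 0 is the right singular
    # vector associated with the smallest singular value.
    _, _, Vt = np.linalg.svd(np.asarray(A))
    return Vt[-1].reshape(3, 4)

def project(P, X):
    """Project 3D points X through projection matrix P."""
    Xh = np.hstack([X, np.ones((len(X), 1))])
    xh = (P @ Xh.T).T
    return xh[:, :2] / xh[:, 2:3]
```

Like the paper's method, this requires no initial guess; unlike it, the DLT estimates twelve parameters without enforcing the rigid-motion structure of the pose.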


IEEE Transactions on Robotics | 2009

Vision-Aided Inertial Navigation for Spacecraft Entry, Descent, and Landing

Anastasios I. Mourikis; Nikolas Trawny; Stergios I. Roumeliotis; Andrew Edie Johnson; Adnan Ansar; Larry H. Matthies

In this paper, we present the vision-aided inertial navigation (VISINAV) algorithm that enables precision planetary landing. The vision front-end of the VISINAV system extracts 2-D-to-3-D correspondences between descent images and a surface map (mapped landmarks), as well as 2-D-to-2-D feature tracks through a sequence of descent images (opportunistic features). An extended Kalman filter (EKF) tightly integrates both types of visual feature observations with measurements from an inertial measurement unit. The filter computes accurate estimates of the lander's terrain-relative position, attitude, and velocity, in a resource-adaptive and hence real-time capable fashion. In addition to the technical analysis of the algorithm, the paper presents validation results from a sounding-rocket test flight, showing estimation errors of only 0.16 m/s for velocity and 6.4 m for position at touchdown. These results vastly improve on the current state of the art for terminal descent navigation without visual updates, and meet the requirements of future planetary exploration missions.
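The VISINAV filter maintains a full pose/velocity/bias state; the predict/update structure it relies on can be shown with a deliberately tiny one-axis toy filter, where an IMU acceleration drives the prediction and a position fix (standing in for a mapped-landmark observation) drives the update. All names and noise values below are illustrative, not from the paper:

```python
import numpy as np

def ekf_predict(x, P, a_meas, dt, q):
    """Propagate a [position, velocity] state with a measured
    acceleration (the IMU side of the filter)."""
    F = np.array([[1.0, dt], [0.0, 1.0]])
    x = F @ x + np.array([0.5 * dt**2, dt]) * a_meas
    P = F @ P @ F.T + q * np.eye(2)  # inflate covariance by process noise
    return x, P

def ekf_update(x, P, z, r):
    """Fuse a scalar position measurement z, e.g. a mapped-landmark fix."""
    H = np.array([[1.0, 0.0]])            # we observe position only
    S = H @ P @ H.T + r                   # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)        # Kalman gain
    x = x + (K @ (z - H @ x)).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P
```

Note that velocity is never measured directly, yet it converges because position fixes make it observable through the motion model, which is the same mechanism that lets VISINAV bound velocity error during descent.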


Intelligent Robots and Systems | 2003

Real-time detection of moving objects in a dynamic scene from moving robotic vehicles

Ashit Talukder; Steve B. Goldberg; Larry H. Matthies; Adnan Ansar

Dynamic scene perception is currently limited to detection of moving objects from a static platform or scenes with flat backgrounds. We discuss novel real-time methods to segment moving objects in the motion field formed by a moving camera/robotic platform with mostly translational motion. Our solution does not explicitly require any egomotion knowledge, thereby making the solution applicable to mobile outdoor robot problems where no IMU information is available. We address two problems in dynamic scene perception on the move, first using only 2D monocular grayscale images, and second where 3D range information from stereo is also available. Our solution involves real-time optical flow computations, followed by optical flow field preprocessing to highlight moving object boundaries. In the case where range data from stereo is computed, a 3D optical flow field is estimated by combining range information with 2D optical flow estimates, followed by a similar 3D flow field preprocessing step. A segmentation of the flow field using fast flood filling then identifies every moving object in the scene with a unique label. This novel algorithm is expected to be the critical first step in robust recognition of moving vehicles and people from mobile outdoor robots, and therefore offers a robust solution to the problem of dynamic scene perception in the presence of certain kinds of robot motion. It is envisioned that our algorithm will benefit robot scene perception in urban environments for scientific, commercial and defense applications. Results of our real-time algorithm on a mobile robot in a scene with a single moving vehicle are presented.
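The final segmentation step described above, labeling each connected region of the preprocessed flow field, can be sketched as a threshold on flow magnitude followed by a queue-based flood fill that assigns every moving object a unique label. The threshold value and 4-connectivity here are illustrative choices, not the paper's:

```python
from collections import deque
import numpy as np

def label_moving_objects(flow_mag, thresh):
    """Threshold a flow-magnitude field and give each 4-connected
    region of 'moving' pixels a unique integer label via flood fill.
    Returns (label image, number of objects found)."""
    moving = flow_mag > thresh
    labels = np.zeros(flow_mag.shape, dtype=int)
    next_label = 0
    H, W = flow_mag.shape
    for i in range(H):
        for j in range(W):
            if moving[i, j] and labels[i, j] == 0:
                next_label += 1            # start a new object
                labels[i, j] = next_label
                q = deque([(i, j)])
                while q:                   # grow the region
                    r, c = q.popleft()
                    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        rr, cc = r + dr, c + dc
                        if (0 <= rr < H and 0 <= cc < W
                                and moving[rr, cc] and labels[rr, cc] == 0):
                            labels[rr, cc] = next_label
                            q.append((rr, cc))
    return labels, next_label
```

In the 3D case the same labeling is applied after range data is folded into the flow field; only the field being thresholded changes.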


International Symposium on 3D Data Processing, Visualization and Transmission | 2004

Enhanced real-time stereo using bilateral filtering

Adnan Ansar; Andres Castano; Larry H. Matthies

In recent years, there have been significant strides in increasing quality of range from stereo using global techniques such as energy minimization. These methods cannot yet achieve real-time performance. However, the need to improve range quality for real-time applications persists. All real-time stereo implementations rely on a simple correlation step which employs some local similarity metric between the left and right image. Typically, the correlation takes place on an image pair modified in some way to compensate for photometric variations between the left and right cameras. Improvements and modifications to such algorithms tend to fall into one of two broad categories: those which address the correlation step itself (e.g., shiftable windows, adaptive windows) and those which address the preprocessing of input imagery (e.g., band-pass filtering, Rank, Census). Our efforts lie in the latter area. We present in this paper a modification of the standard band-pass filtering technique used by many SSD- and SAD-based correlation algorithms. By using the bilateral filter of Tomasi and Manduchi (1998), we minimize blurring at the filtering stage. We show that in conjunction with SAD correlation, our new method improves stereo quality at range discontinuities while maintaining real-time performance.
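The key property the paper exploits is that the bilateral filter smooths within regions but not across intensity edges, so range discontinuities survive preprocessing. A direct (unoptimized) numpy sketch of the Tomasi-Manduchi filter, with Gaussian spatial and range kernels:

```python
import numpy as np

def bilateral_filter(img, radius, sigma_s, sigma_r):
    """Edge-preserving smoothing: each output pixel is a weighted mean
    of its neighborhood, with weights that fall off with both spatial
    distance (sigma_s) and intensity difference (sigma_r)."""
    H, W = img.shape
    pad = np.pad(img, radius, mode='edge')
    out = np.zeros((H, W), dtype=float)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(ys**2 + xs**2) / (2.0 * sigma_s**2))
    for i in range(H):
        for j in range(W):
            patch = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            # Range kernel: neighbors with very different intensity
            # (i.e. across an edge) get near-zero weight.
            rangew = np.exp(-(patch - img[i, j])**2 / (2.0 * sigma_r**2))
            w = spatial * rangew
            out[i, j] = (w * patch).sum() / w.sum()
    return out
```

A plain Gaussian (band-pass style) filter would smear the step edge in the test below across several pixels; the bilateral filter leaves both sides essentially untouched, which is what preserves disparity quality at object boundaries. A real-time implementation would use a separable or quantized approximation rather than this per-pixel loop.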


European Conference on Computer Vision | 2002

Linear Pose Estimation from Points or Lines

Adnan Ansar; Konstantinos Daniilidis

Estimation of camera pose from an image of n points or lines with known correspondence is a thoroughly studied problem in computer vision. Most solutions are iterative and depend on nonlinear optimization of some geometric constraint, either on the world coordinates or on the projections to the image plane. For real-time applications we are interested in linear or closed-form solutions free of initialization. We present a general framework which allows for a novel set of linear solutions to the pose estimation problem for both n points and n lines. We present a number of simulations which compare our results to two other recent linear algorithms, as well as to iterative approaches. We conclude with tests on real imagery in an augmented reality setup. We also present an analysis of the sensitivity of our algorithms to image noise.


AIAA Infotech@Aerospace 2007 Conference and Exhibit | 2007

A General Approach to Terrain Relative Navigation for Planetary Landing

Andrew Edie Johnson; Adnan Ansar; Larry H. Matthies; Nikolas Trawny; Anastasios I. Mourikis; Stergios I. Roumeliotis

We describe an algorithm for navigation state estimation during planetary descent to enable precision landing. The algorithm automatically produces 2D-to-3D correspondences between descent images and a surface map and 2D-to-2D correspondences through a sequence of descent images. These correspondences are combined with inertial measurements in an extended Kalman filter that estimates lander position, velocity and attitude as well as the time-varying biases of the inertial measurements. The filter tightly couples inertial and camera measurements in a resource-adaptive and hence real-time capable fashion. Results from a sounding rocket test, covering the dynamic profile of typical planetary landing scenarios, show estimation errors of magnitude 0.16 m/s in velocity and 6.4 m in position at touchdown. These results vastly improve on the current state of the art and meet the requirements of future planetary exploration missions.


Systems, Man and Cybernetics | 2005

Performance Analysis and Validation of a Stereo Vision System

Won S. Kim; Adnan Ansar; Robert D. Steele; Robert Steinke

This paper presents an in-depth performance analysis and validation of a correlation-based stereo vision system being used as part of the ongoing 2003 Mars Exploration Rover flight mission. Our analysis includes the effects of correlation window size, pyramidal image down-sampling, vertical misalignment, focus, maximum disparity, stereo baseline, and range ripples. A key element of validation is to determine the stereo localization error both analytically and experimentally. We study both down-range and cross-range error and verify that while camera calibration inaccuracy contributes to both, stereo correlation error affects only the former. Error contributions of subpixel interpolation, vertical misalignment, and foreshortening on stereo correlation are examined carefully. A novel method using bricks with reflective metrology targets and a mast-mounted stereo camera system enabled experimental measurements of the stereo disparity error. The standard deviation of the down-range disparity error was measured at σ = 0.32 pixel for high-resolution 1024×768 camera images. The result is critical in evaluating accurate rover navigation and instrument placement within given error budgets.
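The link between the measured disparity error and down-range error follows from the standard stereo range equation Z = fB/d: to first order, a disparity error dd maps to a range error of roughly (Z²/fB)·dd, which is why down-range error grows quadratically with distance. A one-line sketch of that propagation (the focal length and baseline below are made-up rig parameters, not the MER camera values):

```python
def downrange_error(Z, focal_px, baseline_m, sigma_d_px):
    """First-order propagation of disparity error into down-range error
    for a standard stereo rig: Z = f*B/d implies dZ ~ (Z**2 / (f*B)) * dd.
    Z in meters, focal length in pixels, baseline in meters,
    disparity sigma in pixels; returns range sigma in meters."""
    return (Z**2 / (focal_px * baseline_m)) * sigma_d_px
```

For example, with a hypothetical 1000-pixel focal length and 0.2 m baseline, the paper's measured σ = 0.32 pixel corresponds to about 4 cm of down-range uncertainty at 5 m; at 10 m the same disparity error costs four times as much.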


International Conference on Advanced Robotics | 2005

Rover mast calibration, exact camera pointing, and camera handoff for visual target tracking

Won S. Kim; Adnan Ansar; Robert D. Steele

This paper presents three technical elements that we have developed to improve the accuracy of visual target tracking for single-sol approach-and-instrument placement in future Mars rover missions. An accurate, straightforward method of rover mast calibration is achieved by using a total station, a camera calibration target, and four prism targets mounted on the rover. The method was applied to Rocky8 rover mast calibration and yielded a 1.1-pixel rms residual error. Camera pointing requires inverse kinematic solutions for mast pan and tilt angles such that the target image appears right at the center of the camera image. Two issues were raised. Mast camera frames are in general not parallel to the masthead base frame. Further, the optical axis of the camera model in general does not pass through the center of the image. Despite these issues, we managed to derive non-iterative closed-form exact solutions, which were verified with MATLAB routines. Actual camera pointing experiments over 50 random target image points yielded less than 1.3-pixel rms pointing error. Finally, a purely geometric method for camera handoff using stereo views of the target has been developed. Experimental test runs show less than 2.5 pixels error on high-resolution Navcam for Pancam-to-Navcam handoff, and less than 4 pixels error on lower-resolution Hazcam for Navcam-to-Hazcam handoff.
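In the idealized case the paper starts from, where the optical axis is aligned with the pan/tilt axes and passes through the image center, the pointing solution reduces to two arctangents. The sketch below shows only that degenerate case; the paper's contribution is the closed-form solution when those alignment assumptions do not hold:

```python
import math

def pan_tilt_to_target(x, y, z):
    """Closed-form pan/tilt angles (radians) that point the optical axis
    at a target (x, y, z) expressed in the masthead base frame, assuming
    the camera axis coincides with the pan/tilt axes. x is forward,
    y is left, z is up."""
    pan = math.atan2(y, x)                   # rotate about the vertical axis
    tilt = math.atan2(z, math.hypot(x, y))   # elevate toward the target
    return pan, tilt
```

With a camera frame offset from the masthead frame and a decentered optical axis, these two equations become coupled, which is what makes the paper's non-iterative exact solution nontrivial.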


ACM Transactions on Intelligent Systems and Technology | 2012

Dynamic Landmarking for Surface Feature Identification and Change Detection

Kiri L. Wagstaff; Julian Panetta; Adnan Ansar; Ronald Greeley; Mary Pendleton Hoffer; Melissa Bunte; Norbert Schorghofer

Given the large volume of images being sent back from remote spacecraft, there is a need for automated analysis techniques that can quickly identify interesting features in those images. Feature identification in individual images and automated change detection in multiple images of the same target are valuable for scientific studies and can inform subsequent target selection. We introduce a new approach to orbital image analysis called dynamic landmarking. It focuses on the identification and comparison of visually salient features in images. We have evaluated this approach on images collected by five Mars orbiters. These evaluations were motivated by three scientific goals: to study fresh impact craters, dust devil tracks, and dark slope streaks on Mars. In the process we also detected a different kind of surface change that may indicate seasonally exposed bedforms. These experiences also point the way to how this approach could be used in an onboard setting to analyze and prioritize data as it is collected.


International Symposium on Visual Computing | 2009

Robust Registration of Aerial Image Sequences

Clark F. Olson; Adnan Ansar; Curtis Padgett

We describe techniques for registering sequences of aerial images of the same terrain captured on different days. The techniques are robust to changes in weather, including variable lighting conditions, shadows, and sparse intervening clouds. The primary underlying technique is robust feature matching between images, which is performed using both robust template matching and SIFT-like feature matching. Outlier rejection is performed in multiple stages to remove incorrect matches. With the remaining matches, we can compute homographies between images or use non-linear optimization to update the external camera parameters. We give results on real aerial image sequences.
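The match-then-reject-outliers-then-fit-a-homography pipeline described above is commonly realized with RANSAC around a four-point DLT fit. This is a generic sketch of that standard pipeline, not the paper's multi-stage rejection scheme; iteration count and inlier tolerance are illustrative:

```python
import numpy as np

def fit_homography(src, dst):
    """DLT: 3x3 homography mapping src -> dst from >= 4 point pairs."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    return Vt[-1].reshape(3, 3)

def apply_h(H, pts):
    """Apply homography H to an (n, 2) array of points."""
    ph = np.hstack([pts, np.ones((len(pts), 1))]) @ H.T
    return ph[:, :2] / ph[:, 2:3]

def ransac_homography(src, dst, iters=200, tol=2.0, seed=0):
    """Fit homographies to random 4-point samples, keep the model with
    the most inliers, then refit on all of its inliers."""
    rng = np.random.default_rng(seed)
    best_inliers = None
    for _ in range(iters):
        idx = rng.choice(len(src), 4, replace=False)
        H = fit_homography(src[idx], dst[idx])
        err = np.linalg.norm(apply_h(H, src) - dst, axis=1)
        inliers = err < tol
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return fit_homography(src[best_inliers], dst[best_inliers]), best_inliers
```

The surviving inlier set is exactly what a subsequent non-linear refinement of the external camera parameters would be run on.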

Collaboration


Top co-authors of Adnan Ansar:

Larry H. Matthies, California Institute of Technology
Won S. Kim, California Institute of Technology
Curtis Padgett, California Institute of Technology
Robert D. Steele, California Institute of Technology
Ronald Greeley, Arizona State University
Yang Cheng, California Institute of Technology
Andrew Edie Johnson, California Institute of Technology