Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Hatem Alismail is active.

Publication


Featured research published by Hatem Alismail.


International Conference on 3D Imaging, Modeling, Processing, Visualization & Transmission | 2012

Automatic Calibration of a Range Sensor and Camera System

Hatem Alismail; L. Douglas Baker; Brett Browning

We propose an automated method to recover the full calibration parameters between a 3D range sensor and a monocular camera system. Our method is not only accurate and fully automated, but also relies on a simple calibration target consisting of a single circle. This allows the algorithm to be suitable for applications requiring in-situ calibration. We demonstrate the effectiveness of the algorithm on a camera-lidar system and show results on 3D mapping tasks.


International Conference on Robotics and Automation | 2011

Monocular visual odometry for robot localization in LNG pipes

Peter Hansen; Hatem Alismail; Peter Rander; Brett Browning

Regular inspection for corrosion of the pipes used in Liquefied Natural Gas (LNG) processing facilities is critical for safety. We argue that a visual perception system equipped on a pipe-crawling robot can improve on existing techniques (Magnetic Flux Leakage, radiography, ultrasound) by producing high-resolution registered appearance maps of the internal surface. To achieve this capability, it is necessary to estimate the pose of the sensors as the robot traverses the pipes. We have explored two monocular visual odometry algorithms (dense and sparse) that can be used to estimate sensor pose. Both algorithms use a single easily made measurement of the scene structure to resolve the monocular scale ambiguity in their visual odometry estimates. We have obtained pose estimates using these algorithms with image sequences captured from cameras mounted on different robots as they moved through two pipes having diameters of 152 mm (6″) and 406 mm (16″), and lengths of 6 and 4 meters respectively. Accurate pose estimates were obtained whose errors were consistently less than 1 percent of the distance traveled down the pipe.
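The scale-resolution step described above, where one known scene measurement fixes the scale of an up-to-scale monocular estimate, can be sketched as follows. The function name and the choice of the pipe diameter as the measured quantity are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def resolve_monocular_scale(translations, estimated_diameter, true_diameter):
    """Rescale up-to-scale monocular VO translations using a single known
    scene measurement (here, an assumed known pipe diameter)."""
    scale = true_diameter / estimated_diameter  # one measured ratio fixes scale
    return [scale * t for t in translations]

# Up-to-scale frame-to-frame translations from a hypothetical monocular VO run.
t_up_to_scale = [np.array([0.0, 0.0, 1.0]), np.array([0.0, 0.0, 0.9])]
# The reconstructed pipe measures 2.0 VO units across; the real pipe is 152 mm.
t_metric = resolve_monocular_scale(t_up_to_scale,
                                   estimated_diameter=2.0,
                                   true_diameter=0.152)
```

Once the ratio is known, every translation (and every reconstructed point) is multiplied by the same factor, so a single measurement suffices for the whole sequence.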


IEEE/RSJ International Conference on Intelligent Robots and Systems | 2013

Pipe mapping with monocular fisheye imagery

Peter Hansen; Hatem Alismail; Peter Rander; Brett Browning

We present a vision-based mapping and localization system for operations in pipes such as those found in Liquefied Natural Gas (LNG) production. A forward-facing fisheye camera mounted on a prototype robot collects imagery as it is teleoperated through a pipe network. The images are processed offline to estimate camera pose and sparse scene structure, where the results can be used to generate 3D renderings of the pipe surface. The method extends state-of-the-art visual odometry and mapping for fisheye systems to incorporate geometric constraints based on prior knowledge of the pipe components into a Sparse Bundle Adjustment framework. These constraints significantly reduce inaccuracies resulting from the limited spatial resolution of the fisheye imagery, limited image texture, and visual aliasing. Preliminary results are presented for datasets collected in our fiberglass pipe network which demonstrate the validity of the approach.
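A pipe-geometry constraint of the kind folded into bundle adjustment above can be sketched as a point-to-cylinder residual; the exact residual form used in the paper is not given here, so this parameterization is an illustrative assumption:

```python
import numpy as np

def cylinder_residuals(points, axis_point, axis_dir, radius):
    """Signed distance of 3-D points from a cylinder surface, usable as a
    pipe-geometry prior term added to a sparse bundle adjustment cost
    (illustrative residual form, not the paper's exact formulation)."""
    d = np.asarray(axis_dir, dtype=float)
    d /= np.linalg.norm(d)
    rel = np.asarray(points, dtype=float) - np.asarray(axis_point, dtype=float)
    # Remove the component along the axis, leaving the radial offset.
    radial = rel - np.outer(rel @ d, d)
    return np.linalg.norm(radial, axis=1) - radius

# Points lying on a 0.2 m-radius pipe around the z-axis have zero residual.
pts = [[0.2, 0.0, 1.0], [0.0, 0.2, 5.0]]
r = cylinder_residuals(pts, axis_point=[0, 0, 0], axis_dir=[0, 0, 1], radius=0.2)
```

Penalizing these residuals pulls reconstructed points toward the known pipe surface, which is one way such priors counteract the weak texture and limited resolution the abstract mentions.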


IEEE Conference on Computer Vision and Pattern Recognition | 2012

Online continuous stereo extrinsic parameter estimation

Peter Hansen; Hatem Alismail; Peter Rander; Brett Browning

Stereo visual odometry and dense scene reconstruction depend critically on accurate calibration of the extrinsic (relative) stereo camera poses. We present an algorithm for continuous, online stereo extrinsic re-calibration operating only on sparse stereo correspondences on a per-frame basis. We obtain the five-degree-of-freedom extrinsic pose for each frame, with a fixed baseline, making it possible to model time-dependent variations. The initial extrinsic estimates are found by minimizing epipolar errors, and are refined via a Kalman Filter (KF). Observation covariances are derived from the Cramér-Rao lower bound of the solution uncertainty. The algorithm operates at frame rate with unoptimized Matlab code with over 1000 correspondences per frame. We validate its performance using a variety of real stereo datasets and simulations.
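The epipolar errors minimized in the initialization step can be sketched as the algebraic constraint x_r^T E x_l, with the essential matrix built from the 5-DoF relative pose (the baseline magnitude is unobservable from epipolar geometry and is held fixed, as in the abstract). This is a minimal sketch, not the paper's estimator:

```python
import numpy as np

def skew(t):
    """Skew-symmetric cross-product matrix [t]_x."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

def epipolar_errors(R, t_dir, x_left, x_right, baseline=1.0):
    """Algebraic epipolar errors x_r^T E x_l for normalized image points,
    where E = [t]_x R. Fixing the baseline length leaves the 5-DoF
    extrinsic pose described in the abstract."""
    E = skew(baseline * t_dir) @ R
    return np.einsum('ij,jk,ik->i', x_right, E, x_left)

# Perfectly rectified stereo: identity rotation, translation along x.
R = np.eye(3)
t_dir = np.array([1.0, 0.0, 0.0])
xl = np.array([[0.1, 0.2, 1.0]])
xr = np.array([[0.05, 0.2, 1.0]])  # same image row -> zero epipolar error
err = epipolar_errors(R, t_dir, xl, xr)
```

For a rectified rig the error vanishes exactly when corresponding points share a row; any drift in the extrinsics shows up as nonzero residuals, which is the signal the re-calibration exploits.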


International Conference on Robotics and Automation | 2014

Continuous trajectory estimation for 3D SLAM from actuated lidar

Hatem Alismail; L. Douglas Baker; Brett Browning

We extend the Iterative Closest Point (ICP) algorithm to obtain a method for continuous-time trajectory estimation (CICP) suitable for SLAM from actuated lidar. Traditional solutions to SLAM from actuated lidar rely heavily on the accuracy of an auxiliary pose sensor to form rigid frames. These frames are then used with ICP to obtain accurate pose estimates. However, since lidar records a single range sample at a time, any error in inter-sample sensor motion must be accounted for. This is not possible if the frame is treated as a rigid point cloud. In this work, instead of ICP we estimate a continuous-time trajectory that takes into account inter-sample pose errors. The trajectory is represented as a linear combination of basis functions and formulated as a solution to a (sparse) linear system without restrictive assumptions on sensor motion. We evaluate the algorithm on synthetic and real data and show improved accuracy in open-loop SLAM in comparison to state-of-the-art rigid registration methods.
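The basis-function trajectory representation above can be sketched in one dimension: the trajectory is a linear combination of bases, so fitting it to per-sample observations reduces to a linear least-squares system. Simple polynomial bases are used here as an illustrative choice, not the paper's basis:

```python
import numpy as np

def fit_trajectory(times, positions, n_basis=5):
    """Fit a continuous 1-D trajectory x(t) = sum_k c_k * B_k(t) by linear
    least squares, mirroring the basis-function parameterization in the
    abstract (with B_k(t) = t^k as an illustrative basis)."""
    A = np.vander(times, n_basis, increasing=True)  # columns are t^0, t^1, ...
    coeffs, *_ = np.linalg.lstsq(A, positions, rcond=None)
    def trajectory(t):
        return np.vander(np.atleast_1d(t), n_basis, increasing=True) @ coeffs
    return trajectory

# Per-sample timestamps, as for a lidar that records one range sample at a time.
t = np.linspace(0.0, 1.0, 20)
x = 2.0 * t + 0.5 * t**2          # smooth ground-truth motion
traj = fit_trajectory(t, x, n_basis=3)
```

Because the model is linear in the coefficients, the normal equations stay sparse when each basis function has local support (e.g. splines), which is what makes the paper's sparse linear-system formulation tractable.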


IEEE/RSJ International Conference on Intelligent Robots and Systems | 2011

Stereo visual odometry for pipe mapping

Peter Hansen; Hatem Alismail; Brett Browning; Peter Rander

Pipe inspection is a critical activity in gas production facilities and many other industries. In this paper, we contribute a stereo visual odometry system for creating high resolution, sub-millimeter maps of pipe surfaces. Such maps provide both 3D structure and appearance information that can be used for visualization, cross registration with other sensor data, inspection and corrosion detection tasks. We present a range of optical configuration and visual odometry techniques that we use to achieve high accuracy while minimizing specular reflections. We show empirical results from a range of datasets to demonstrate the performance of our approach.


The International Journal of Robotics Research | 2015

Visual mapping for natural gas pipe inspection

Peter Hansen; Hatem Alismail; Peter Rander; Brett Browning

Validating the integrity of pipes is an important task for safe natural gas production and many other operations (e.g. refineries, sewers, etc.). Indeed, there is a growing industry of actuated, actively driven mobile robots that are used to inspect pipes. Many rely on a remote operator to inspect data from a fisheye camera to perform manual inspection and provide no localization or mapping capability. In this work, we introduce a visual odometry-based system using calibrated fisheye imagery and sparse structured lighting to produce high-resolution 3D textured surface models of the inner pipe wall. Our work extends state-of-the-art visual odometry and mapping for fisheye systems to incorporate weak geometric constraints based on prior knowledge of the pipe components into a sparse bundle adjustment framework. These constraints prove essential for obtaining high-accuracy solutions given the limited spatial resolution of the fisheye system and challenging raw imagery. We show that sub-millimeter resolution modeling is viable even in pipes which are 400 mm (16”) in diameter, and that sparse range measurements from a structured lighting solution can be used to avoid the inevitable monocular scale drift. Our results show that practical, high-accuracy pipe mapping from a single fisheye camera is within reach.


Journal of Field Robotics | 2015

Automatic Calibration of Spinning Actuated Lidar Internal Parameters

Hatem Alismail; Brett Browning

Actuated lidar, where a scanning lidar is combined with an actuation mechanism to scan a three-dimensional volume rather than a single line, has been used heavily in a wide variety of field robotics applications. Common examples of actuated lidar include spinning/rolling and nodding/pitching configurations. Due to the construction of actuated lidar, the center of rotation of the lidar mirror may not coincide with the center of rotation of the actuation mechanism. To triangulate a precise point cloud representation of the environment, the centers of rotation must be brought into alignment using a suitable calibration procedure. We refer to this problem as estimating the internal parameters of actuated lidar. In this work, we focus on spinning/rolling lidar and present a fully automated algorithm for calibration using generic scenes without the need for specialized calibration targets. The algorithm is evaluated on a range of real and synthetic data and is shown to be robust and accurate, with a large basin of convergence.
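The role of the internal parameters can be sketched by projecting a single range sample into the sensor frame: the sample is first placed in the lidar scan plane, shifted by the mirror-to-axis offset, then rotated by the actuation angle. The offset parameterization below is an illustrative simplification of the paper's internal-parameter model:

```python
import numpy as np

def lidar_point(range_m, mirror_angle, spin_angle, offset):
    """Project one lidar range sample into the sensor frame for a spinning
    actuated lidar, modeling the internal offset between the lidar mirror
    center and the actuation (spin) axis (illustrative parameterization)."""
    # Point in the planar lidar scan frame.
    p_lidar = np.array([range_m * np.cos(mirror_angle),
                        range_m * np.sin(mirror_angle),
                        0.0])
    # Internal parameters: translate by the mirror-to-axis offset...
    p_shifted = p_lidar + np.asarray(offset, dtype=float)
    # ...then rotate about the spin (z) axis of the actuation mechanism.
    c, s = np.cos(spin_angle), np.sin(spin_angle)
    Rz = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    return Rz @ p_shifted

# With zero offset and zero spin, the point lies on the lidar x-axis.
p = lidar_point(2.0, 0.0, 0.0, offset=(0.0, 0.0, 0.0))
```

A wrong offset bends straight surfaces in the triangulated cloud as the spin angle varies, which is the misalignment the calibration procedure corrects.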


International Conference on 3D Vision | 2016

Robust Tracking in Low Light and Sudden Illumination Changes

Hatem Alismail; Brett Browning; Simon Lucey

We present an algorithm for robust and real-time visual tracking under challenging illumination conditions characterized by poor lighting as well as sudden and drastic changes in illumination. Robustness is achieved by adapting illumination-invariant binary descriptors to dense image alignment using the Lucas and Kanade algorithm. The proposed adaptation preserves the Hamming distance under least-squares minimization, thus preserving the photometric invariance properties of binary descriptors. Due to the compactness of the descriptor, the algorithm runs in excess of 400 fps on laptops and 100 fps on mobile devices.
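The key property used above, that binary comparisons are invariant to illumination and that Hamming distances over 0/1 channels equal sums of squared differences, can be sketched with a census-style decomposition of an image into bit planes. This is a minimal sketch of the descriptor idea, not the paper's exact descriptor:

```python
import numpy as np

def census_bit_planes(img):
    """Decompose an image into 8 binary neighbor-comparison channels.
    For 0/1 values, |a - b| = (a - b)^2, so Hamming distance over the
    channels equals an SSD, which is what lets binary descriptors plug
    into least-squares Lucas-Kanade alignment."""
    h, w = img.shape
    out = np.zeros((8, h, w), dtype=np.float32)
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
              (0, 1), (1, -1), (1, 0), (1, 1)]
    for k, (dy, dx) in enumerate(shifts):
        # Compare each interior pixel with its k-th neighbor.
        out[k, 1:h-1, 1:w-1] = (
            img[1+dy:h-1+dy, 1+dx:w-1+dx] > img[1:h-1, 1:w-1]
        ).astype(np.float32)
    return out

img = np.random.default_rng(0).random((16, 16))
planes = census_bit_planes(img)
# A global brightness change leaves every comparison, hence every plane, intact.
planes_bright = census_bit_planes(img + 0.3)
```

Treating each bit plane as an image channel and minimizing SSD over all channels then yields the illumination-robust dense alignment the abstract describes.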


Asian Conference on Computer Vision | 2016

Photometric Bundle Adjustment for Vision-Based SLAM

Hatem Alismail; Brett Browning; Simon Lucey

We propose a novel algorithm for the joint refinement of structure and motion parameters from image data directly, without relying on fixed and known correspondences. In contrast to traditional bundle adjustment (BA), where the optimal parameters are determined by minimizing the reprojection error using tracked features, the proposed algorithm relies on maximizing the photometric consistency and estimates the correspondences implicitly. Since the proposed algorithm does not require correspondences, its application is not limited to corner-like structure; any pixel with nonvanishing gradient could be used in the estimation process. Furthermore, we demonstrate the feasibility of refining the motion and structure parameters simultaneously using the photometric objective in unconstrained scenes and without requiring restrictive assumptions such as planarity. The proposed algorithm is evaluated on a range of challenging outdoor datasets, and it is shown to improve upon the accuracy of state-of-the-art VSLAM methods that minimize the reprojection error using traditional BA as well as loop closure.
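The photometric objective above can be sketched as the residual between a reference pixel's intensity and the intensity at its warped location in another frame: back-project with the current depth, transform by the current pose, reproject, and compare. Nearest-neighbor sampling is used here for brevity where a real system would interpolate; the function is a sketch, not the paper's implementation:

```python
import numpy as np

def photometric_residuals(ref_img, cur_img, ref_pixels, depths, K, R, t):
    """Photometric residuals I_ref(u) - I_cur(warp(u)) for pixels with
    estimated depth: the quantity minimized (over poses and depths) in
    photometric bundle adjustment."""
    K_inv = np.linalg.inv(K)
    res = []
    for (u, v), z in zip(ref_pixels, depths):
        X = z * (K_inv @ np.array([u, v, 1.0]))   # back-project to 3-D
        x = K @ (R @ X + t)                        # reproject into cur frame
        u2, v2 = x[0] / x[2], x[1] / x[2]
        res.append(ref_img[int(v), int(u)]
                   - cur_img[int(round(v2)), int(round(u2))])
    return np.array(res)

# Sanity check: identity motion on the same image gives zero residuals.
K = np.array([[100.0, 0.0, 8.0], [0.0, 100.0, 8.0], [0.0, 0.0, 1.0]])
img = np.random.default_rng(1).random((16, 16))
r = photometric_residuals(img, img, [(8, 8), (5, 9)], [2.0, 3.0],
                          K, np.eye(3), np.zeros(3))
```

Because the residual is defined at any pixel with image gradient, no feature tracking is needed; correspondences emerge implicitly as the parameters that drive these residuals to zero.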

Collaboration


Dive into Hatem Alismail's collaborations.

Top Co-Authors

Brett Browning, Advanced Technologies Center
Peter Rander, Carnegie Mellon University
Peter Hansen, Carnegie Mellon University
Simon Lucey, Carnegie Mellon University
Bradley Hall, Carnegie Mellon University
Daniel Nuffer, Carnegie Mellon University
Ermine A. Teves, Carnegie Mellon University
L. Douglas Baker, Carnegie Mellon University