Publication


Featured research published by Ayman Habib.


Photogrammetric Engineering and Remote Sensing | 2005

Photogrammetric and Lidar Data Registration Using Linear Features

Ayman Habib; Mwafag Ghanma; Michel Morgan; Rami Al-Ruzouq

The enormous increase in the volume of datasets acquired by lidar systems is leading towards their extensive exploitation in a variety of applications, such as surface reconstruction, city modeling, and generation of perspective views. Although lidar is a fairly new technology, it has been influenced by and had a significant impact on photogrammetry. Such an influence or impact can be attributed to the complementary nature of the information provided by the two systems. For example, photogrammetric processing of imagery produces accurate information regarding object space break lines (discontinuities). On the other hand, lidar provides accurate information describing homogeneous physical surfaces. Hence, it proves logical to combine data from the two sensors to arrive at a more robust and complete reconstruction of 3D objects. This paper introduces alternative approaches for the registration of data captured by photogrammetric and lidar systems to a common reference frame. The first approach incorporates lidar features as control for establishing the datum in the photogrammetric bundle adjustment. The second approach starts by manipulating the photogrammetric imagery to produce a 3D model, including a set of linear features along object space discontinuities, relative to an arbitrarily chosen coordinate system. Afterwards, conjugate photogrammetric and lidar straight-line features are used to establish the transformation between the arbitrarily chosen photogrammetric coordinate system and the lidar reference frame. The second approach (bundle adjustment, followed by similarity transformation) is general enough to be applied for the co-registration of multiple three-dimensional datasets regardless of their origin (e.g., adjacent lidar strips, surfaces in GIS databases, and temporal elevation data). The registration procedure would allow for the identification of inconsistencies between the surfaces in question. Such inconsistencies might arise from changes taking place within the object space or inaccurate calibration of the internal characteristics of the lidar and the photogrammetric systems. Therefore, the proposed methodology is useful for change detection and system calibration applications. Experimental results from aerial and terrestrial datasets proved the feasibility of the suggested methodologies.
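The second approach rests on a 3D similarity transformation between the arbitrarily defined photogrammetric frame and the lidar frame. Below is a minimal Python sketch of such a transformation estimated from conjugate points rather than the paper's straight-line features; the function name, the point-based simplification, and all numerical values are illustrative assumptions, not the authors' implementation.

# Minimal sketch (not the paper's line-based algorithm): estimating the 3D
# similarity transformation that maps one dataset into the other, using
# conjugate POINTS instead of straight-line features for brevity.
import numpy as np

def similarity_transform(src, dst):
    """Least-squares scale, rotation, translation so that dst ~ s * R @ src + t.

    src, dst: (N, 3) arrays of conjugate points (N >= 3, non-collinear).
    """
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    # Rotation from the SVD of the cross-covariance matrix (Procrustes/Horn).
    U, S, Vt = np.linalg.svd(dst_c.T @ src_c)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])  # guard against reflections
    R = U @ D @ Vt
    s = (S * np.diag(D)).sum() / (src_c ** 2).sum()          # least-squares scale
    t = dst.mean(axis=0) - s * R @ src.mean(axis=0)
    return s, R, t

# Toy usage: recover a known transformation from noisy conjugate points.
rng = np.random.default_rng(0)
pts = rng.uniform(0, 100, (20, 3))
R_true, s_true, t_true = np.eye(3), 1.01, np.array([5.0, -3.0, 10.0])
obs = s_true * pts @ R_true.T + t_true + rng.normal(0, 0.05, (20, 3))
s, R, t = similarity_transform(pts, obs)
print(s, t)  # approximately 1.01 and [5, -3, 10]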


Photogrammetric Engineering and Remote Sensing | 2007

New Methodologies for True Orthophoto Generation

Ayman Habib; Eui-Myoung Kim; Changjae Kim

Orthophoto production aims at the elimination of sensor tilt and terrain relief effects from captured perspective imagery. Uniform scale and the absence of relief displacement in orthophotos make them an important component of GIS databases, where the user can directly determine geographic locations, measure distances, compute areas, and derive other useful information about the area in question. Differential rectification has been traditionally used for orthophoto generation. For large scale imagery over urban areas, differential rectification produces serious artifacts in the form of double mapped areas at object space locations with sudden relief variations, e.g., in the vicinity of buildings. Such artifacts are removed through true orthophoto generation methodologies, which are based on the identification of occluded portions of the object space in the involved imagery. Existing methodologies suffer from several problems, such as their sensitivity to the sampling interval of the digital surface model (DSM) as it relates to the ground sampling distance (GSD) of the imaging sensor. Moreover, current methodologies rely on the availability of a digital building model (DBM), which requires additional and expensive pre-processing. This paper presents new methodologies for true orthophoto generation while circumventing the problems associated with existing techniques. The feasibility and performance of the suggested techniques are verified through experimental results with simulated and real data.
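Differential rectification itself boils down to projecting each DSM cell into the image with the collinearity equations and copying the grey value found there into the orthophoto cell. The following Python fragment is only a sketch of that projection step under simplified assumptions (nadir-looking frame camera, made-up exterior orientation); the occlusion detection that distinguishes true orthophoto generation is deliberately left out.

# Minimal sketch of the projection step behind differential rectification:
# each DSM ground cell (X, Y, Z) is mapped into the perspective image with the
# collinearity equations.  Occlusion handling (the subject of the paper) is
# omitted; all symbols and values below are illustrative.
import numpy as np

def collinearity(ground_xyz, X0, R, focal_length):
    """Image coordinates (x, y) of ground points for one exposure.

    ground_xyz: (N, 3) ground coordinates.
    X0: (3,) perspective centre; R: (3, 3) rotation from ground to image frame.
    """
    d = (ground_xyz - X0) @ R.T            # vectors expressed in the image frame
    x = -focal_length * d[:, 0] / d[:, 2]
    y = -focal_length * d[:, 1] / d[:, 2]
    return np.column_stack([x, y])

# Toy usage: a nadir-looking image taken 1000 m above a small DSM patch.
dsm_cells = np.array([[500.0, 500.0, 100.0], [510.0, 500.0, 120.0]])
xy = collinearity(dsm_cells, X0=np.array([500.0, 500.0, 1100.0]),
                  R=np.eye(3), focal_length=0.153)   # 153 mm principal distance
print(xy)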


Optical Engineering | 2003

Automatic calibration of low-cost digital cameras

Ayman Habib; Michel Morgan

Recent developments of digital cameras in terms of the size of charge-coupled device (CCD) arrays and reduced costs are leading to their application in traditional as well as new photogrammetric, surveying, and mapping functions. Digital cameras intended to replace conventional film-based mapping cameras are becoming available, along with many smaller-format cameras capable of precise measurement applications. All such cameras require careful calibration to determine their metric characteristics, which are essential to carrying out photogrammetric activities. We introduce a new approach for incorporating straight lines in a bundle adjustment for calibrating off-the-shelf, low-cost digital cameras. The optimal configuration for successfully deriving the distortion parameters is considered when establishing the required test field. Moreover, a framework for automatic extraction of the straight lines in the images is presented and tested. The developed calibration procedure can be used as an efficient tool to investigate the most appropriate model that compensates for various distortions associated with the camera being calibrated. Experiments performed to compare line-based with traditional point-based self-calibration methods prove the feasibility of the suggested approach.
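The principle behind line-based calibration is that image points observed along an object-space straight line should again lie on a straight line once lens distortion has been removed, so a straightness residual can score a candidate set of distortion parameters. The Python sketch below illustrates that idea using the radial terms of Brown's model as a common stand-in; the coefficients, point coordinates, and function names are invented for the example.

# Minimal sketch of the straight-line constraint used in line-based calibration:
# after removing radial distortion with candidate parameters, points along an
# object-space line should be collinear again, and the line-fit residual scores
# the parameters.  Values of k1, k2 and the points are made up.
import numpy as np

def undistort(points, k1, k2, principal_point=(0.0, 0.0)):
    """First-order removal of radial distortion: x_u = x + x * (k1*r^2 + k2*r^4)."""
    xy = points - np.asarray(principal_point)
    r2 = (xy ** 2).sum(axis=1, keepdims=True)
    return points + xy * (k1 * r2 + k2 * r2 ** 2)

def straightness_residual(points):
    """RMS distance of 2D points from their best-fit straight line."""
    centred = points - points.mean(axis=0)
    # The smallest singular value measures the spread normal to the fitted line.
    return np.linalg.svd(centred, compute_uv=False)[-1] / np.sqrt(len(points))

# Toy usage: points along an image line, bent by simulated radial distortion.
t = np.linspace(-1.0, 1.0, 15)
line = np.column_stack([1000.0 * t, 200.0 + 50.0 * t])
distorted = line - line * (1e-8 * (line ** 2).sum(axis=1, keepdims=True))
print(straightness_residual(distorted))                        # clearly bent
print(straightness_residual(undistort(distorted, 1e-8, 0.0)))  # nearly straight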


Photogrammetric Engineering and Remote Sensing | 2005

Stability Analysis and Geometric Calibration of Off-the-Shelf Digital Cameras

Ayman Habib; Michel Morgan

Recent developments of digital cameras in terms of the size of Charge-Coupled Device (CCD) and Complementary Metal-Oxide-Semiconductor (CMOS) arrays, as well as reduced costs, are leading to their application in traditional and new photogrammetric, surveying, and mapping functions. Such cameras require careful calibration to determine their metric characteristics, as defined by the Interior Orientation Parameters (IOP), which are essential for any photogrammetric activity. Moreover, the stability of the estimated IOP of these cameras over short and long time periods has to be analyzed and quantified. This paper outlines the incorporation of straight lines in a bundle adjustment procedure for calibrating off-the-shelf/low-cost digital cameras. A framework for automatic extraction of the straight lines in the images is also presented and tested. In addition, the research introduces new approaches for testing camera stability, where the degree of similarity between reconstructed bundles using two sets of IOP is quantitatively evaluated. Experimental results with real data proved the feasibility of the line-based self-calibration approach. Analysis of the estimated IOP from various calibration sessions over long time periods revealed the stability of the implemented camera.
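The stability test compares the bundles of rays reconstructed from two IOP sets. As a hedged illustration of that idea (not the paper's specific similarity measures), the Python sketch below builds both bundles over a grid of image points and reports the largest angular difference between corresponding rays; the IOP values and grid extent are made up.

# Minimal sketch of the bundle-comparison idea behind stability analysis: build
# the bundle of rays defined by a grid of image points under two IOP sets and
# report the largest angular difference between corresponding rays.
import numpy as np

def bundle(iop, grid_xy):
    """Unit direction vectors of the rays through a grid of image points."""
    xp, yp, c = iop                         # principal point offsets and principal distance
    rays = np.column_stack([grid_xy[:, 0] - xp, grid_xy[:, 1] - yp,
                            -c * np.ones(len(grid_xy))])
    return rays / np.linalg.norm(rays, axis=1, keepdims=True)

def max_angular_difference(iop_a, iop_b, half_format=18.0, n=11):
    """Largest angle (radians) between corresponding rays of the two bundles."""
    g = np.linspace(-half_format, half_format, n)
    grid = np.array([(x, y) for x in g for y in g])
    cosines = (bundle(iop_a, grid) * bundle(iop_b, grid)).sum(axis=1)
    return np.arccos(np.clip(cosines, -1.0, 1.0)).max()

# Toy usage: two calibration sessions of the same camera (values in mm).
print(max_angular_difference((0.05, -0.02, 35.10), (0.06, -0.03, 35.12)))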


Remote Sensing | 2010

Alternative Methodologies for LiDAR System Calibration

Ayman Habib; Ki-In Bang; Ana Paula Kersting; Jacky Chow

Over the last few years, LiDAR has become a popular technology for the direct acquisition of topographic information. In spite of the increasing utilization of this technology in several applications, its accuracy potential has not been fully explored. Most current LiDAR calibration techniques are based on empirical and proprietary procedures that demand the system's raw measurements, which may not always be available to the end-user. As a result, we can still observe systematic discrepancies between conjugate surface elements in overlapping LiDAR strips. In this paper, two alternative calibration procedures that overcome the existing limitations are introduced. The first procedure, denoted as the "Simplified method", makes use of the LiDAR point cloud from parallel LiDAR strips acquired by a steady platform (e.g., fixed wing aircraft) over an area with moderately varying elevation. The second procedure, denoted as the "Quasi-rigorous method", can deal with non-parallel strips, but requires a time-tagged LiDAR point cloud and navigation data (trajectory position only) acquired by a steady platform. With the widespread adoption of the LAS format and easy access to trajectory information, this data requirement is not a problem. The proposed methods can be applied in any type of terrain coverage without the need for control surfaces and are relatively easy to implement. Therefore, they can be used in every flight mission if needed. In addition, the proposed procedures require minimal interaction from the user, which could be eliminated entirely with a minor extension of the suggested procedure.
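Both procedures ultimately reason about the LiDAR point positioning (georeferencing) equation, in which a ground coordinate is assembled from the trajectory position, the lever arm, the boresight rotation, and the laser range and scan angle. The Python fragment below is a minimal sketch of that equation with illustrative symbol names, rotation parameterisation, and numbers; it is not the paper's calibration algorithm.

# Minimal sketch of the LiDAR georeferencing equation: a ground point is the
# GNSS/INS position plus the lever arm and the laser vector, each rotated into
# the mapping frame.  Parameterisation and values are illustrative.
import numpy as np

def rot_z(a):
    """Rotation about the z-axis by angle a (radians)."""
    return np.array([[np.cos(a), -np.sin(a), 0.0],
                     [np.sin(a),  np.cos(a), 0.0],
                     [0.0,        0.0,       1.0]])

def lidar_ground_point(gnss_pos, R_body, lever_arm, R_boresight, scan_angle, rng):
    """Ground coordinates of one laser return.

    gnss_pos: (3,) trajectory position; R_body: (3, 3) body-to-mapping rotation;
    lever_arm: (3,) sensor offset in the body frame; R_boresight: (3, 3) boresight
    rotation; scan_angle, rng: laser beam direction and measured range.
    """
    laser_vec = np.array([0.0, rng * np.sin(scan_angle), -rng * np.cos(scan_angle)])
    return gnss_pos + R_body @ (lever_arm + R_boresight @ laser_vec)

# Toy usage: a near-nadir return from roughly 1000 m flying height.
p = lidar_ground_point(np.array([0.0, 0.0, 1000.0]), np.eye(3),
                       np.array([0.1, 0.0, -0.2]), rot_z(0.0005),
                       np.deg2rad(10.0), 1015.0)
print(p)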


ISPRS Journal of Photogrammetry and Remote Sensing | 2001

Automatic relative orientation of large scale imagery over urban areas using Modified Iterated Hough Transform

Ayman Habib; Devin Kelley

The automation of relative orientation (RO) has been the major focus of the photogrammetric research community over the last decade. Despite the reported progress, there is no reliable (robust) approach that can perform automatic relative orientation (ARO) using large-scale imagery over urban areas. A reliable and general method for solving matching problems in various photogrammetric activities has been developed at The Ohio State University. This approach has been used to solve single photo resection using free-form linear features, surface matching, and relative orientation. The approach estimates the parameters of a mathematical model relating the entities of two datasets when the correspondence of the involved entities is unknown. When applied to relative orientation, the coplanarity model is used to relate extracted edge pixels and/or feature points from a stereo-pair. In its execution, the relative orientation parameters are solved sequentially, using the coplanarity model to evaluate all possible pairings of the input primitives and choosing the most probable solution. As a result of this technique, the matched entities that correspond to the parameter solution are implicitly determined. Experiments using real data confirm that this is a robust method for relative orientation in both urban and rural scenes.
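The core of the voting scheme is the coplanarity condition: for a correct pairing, the stereo base and the two conjugate imaging rays lie in one epipolar plane, so their scalar triple product vanishes. The Python sketch below evaluates that residual for a correct and an incorrect candidate pairing; the relative orientation values and image measurements are illustrative only, and the full Modified Iterated Hough Transform voting is not shown.

# Minimal sketch of the coplanarity condition used to score candidate matches:
# for a correct pairing, the scalar triple product of the base and the two
# imaging rays is (close to) zero.  ROP values and measurements are made up.
import numpy as np

def coplanarity_residual(base, R_right, ray_left, ray_right):
    """Scalar triple product  base . (ray_left x (R_right @ ray_right))."""
    return float(np.dot(base, np.cross(ray_left, R_right @ ray_right)))

def image_ray(x, y, c):
    """Direction of the ray through image point (x, y) for principal distance c."""
    return np.array([x, y, -c])

# Toy usage: a correct pairing gives a near-zero residual, a wrong one does not.
base = np.array([1.0, 0.0, 0.0])               # normalised stereo base
R_right = np.eye(3)                             # relative rotation of the right image
left = image_ray(0.010, 0.004, 0.153)
right_good = image_ray(-0.002, 0.004, 0.153)    # conjugate point, no y-parallax
right_bad = image_ray(-0.002, 0.009, 0.153)
print(coplanarity_residual(base, R_right, left, right_good))  # ~0
print(coplanarity_residual(base, R_right, left, right_bad))   # clearly non-zero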


Photogrammetric Engineering and Remote Sensing | 2007

Comprehensive Analysis of Sensor Modeling Alternatives for High Resolution Imaging Satellites

Ayman Habib; Sung Woong Shin; Kyung-Ok Kim; Changjae Kim; Ki-In Bang; Eui-Myoung Kim; Dong-Cheon Lee

High-resolution imaging satellites are a valuable and cost-effective data acquisition tool for a variety of mapping and GIS applications such as topographic mapping, map updating, orthophoto generation, environmental monitoring, and change detection. Sensor modeling that describes the mathematical relationship between corresponding scene and object coordinates is a prerequisite procedure prior to manipulating the acquired imagery from such systems for mapping purposes. Rigorous and approximate sensor models are the two alternatives for describing the mathematics of the involved imaging process. The former explicitly involves the internal and external characteristics of the imaging sensor to faithfully represent the geometry of the scene formation. On the other hand, approximate modeling can be divided into two categories. The first category simplifies the rigorous model after making some assumptions about the system's trajectory and/or object space. Gupta and Hartley's model, parallel projection, self-calibrating direct linear transformation, and modified parallel projection are examples of this category. Other approximate models are based on empirical formulation of the scene-to-ground mathematical relationship. This category includes, among others, the well-known Rational Function Model (RFM). This paper addresses several aspects of sensor modeling. Namely, it deals with the expected accuracy from rigorous modeling of imaging satellites as it relates to the number of available ground control points, comparative analysis of approximate and rigorous sensor models, robustness of the reconstruction process against biases in the available sensor characteristics, and the impact of incorporating multi-source imagery in a single triangulation mechanism. Following a brief theoretical background, these issues will be presented through experimental results from real datasets captured by satellite and aerial imaging platforms.
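The empirical RFM mentioned above expresses each normalised image coordinate as a ratio of polynomials in normalised ground coordinates. The Python fragment below sketches that form with a first-order numerator and denominator and made-up coefficients; operational RPC files use third-order, 20-term polynomials with a fixed coefficient ordering, which is omitted here for brevity.

# Minimal sketch of the Rational Function Model: a normalised image coordinate
# as a ratio of polynomials in normalised ground coordinates.  First-order
# polynomials and all coefficients below are illustrative only.
import numpy as np

def rfm_first_order(coeffs_num, coeffs_den, lat, lon, h):
    """Evaluate (a0 + a1*lat + a2*lon + a3*h) / (b0 + b1*lat + b2*lon + b3*h)."""
    terms = np.array([1.0, lat, lon, h])
    return np.dot(coeffs_num, terms) / np.dot(coeffs_den, terms)

# Toy usage with normalised (offset and scaled) coordinates in [-1, 1].
row = rfm_first_order(np.array([0.01, 0.98, 0.02, -0.05]),
                      np.array([1.00, 0.00, 0.00, 0.01]),
                      lat=0.3, lon=-0.1, h=0.05)
print(row)   # normalised row coordinate; denormalise with the RPC offsets/scales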


IEEE Transactions on Geoscience and Remote Sensing | 2010

Alternative Methodologies for the Internal Quality Control of Parallel LiDAR Strips

Ayman Habib; Ana Paula Kersting; Ki-In Bang; Dong-Cheon Lee

Light Detection and Ranging (LiDAR) systems have been widely adopted for the acquisition of dense and accurate topographic data over extended areas. Although the utilization of this technology has increased in different applications, the development of standard methodologies for the quality control (QC) of LiDAR data has not followed the same trend. In other words, a lack of reliable, practical, cost-effective, and commonly acceptable QC procedures is evident. A frequently adopted procedure for QC is comparing the LiDAR data to ground control points. Aside from being expensive, this approach is not accurate enough for the verification of horizontal accuracy, unless specifically designed LiDAR targets are used. This paper is dedicated to providing accurate, economical, and convenient internal QC procedures for the evaluation of LiDAR data captured from parallel flight lines. The underlying concept of the proposed methodologies is that, in the absence of systematic and random errors in system parameters and measurements, conjugate surface elements in overlapping strips should perfectly match each other. Consistent incompatibilities and the quality of fit between conjugate surface elements in overlapping strips can be used to detect systematic errors in the system parameters/measurements and to evaluate the noise level in the LiDAR point cloud, respectively. Experimental results from real data demonstrate that all the proposed methods, with one exception, produce compatible estimates of systematic discrepancies between the involved data sets, as well as good quantification of inherent noise.
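As a hedged, much-simplified illustration of the underlying idea (not the paper's surface-element matching), the Python sketch below compares heights of one strip against the nearest points of an overlapping strip: the mean of the differences hints at a systematic discrepancy, their spread at the noise level. All data are simulated.

# Minimal sketch of an internal strip-to-strip check: height differences to the
# nearest point of the overlapping strip.  This point-to-nearest-point version
# is a simplification of the paper's surface-element matching; data are simulated.
import numpy as np

def strip_height_discrepancies(strip_a, strip_b):
    """For each point of strip_b, height difference to the planimetrically nearest strip_a point."""
    diffs = []
    for xb, yb, zb in strip_b:
        d2 = (strip_a[:, 0] - xb) ** 2 + (strip_a[:, 1] - yb) ** 2
        diffs.append(zb - strip_a[np.argmin(d2), 2])
    return np.asarray(diffs)

# Toy usage: strip_b sits about 0.08 m above strip_a, plus noise.
rng = np.random.default_rng(1)
xy = rng.uniform(0, 50, (500, 2))
strip_a = np.column_stack([xy, 10.0 + 0.02 * xy[:, 0] + rng.normal(0, 0.03, 500)])
strip_b = strip_a + [0.0, 0.0, 0.08] + rng.normal(0, 0.03, (500, 3))
d = strip_height_discrepancies(strip_a, strip_b)
print(d.mean(), d.std())   # systematic offset near 0.08 m, noise near 0.04 m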


Photogrammetric Engineering and Remote Sensing | 2009

Error Budget of Lidar Systems and Quality Control of the Derived Data

Ayman Habib; Ki-In Bang; Ana Paula Kersting; Dong-Cheon Lee

Lidar systems have been widely adopted for the acquisition of dense and accurate topographic data over extended areas. Although the utilization of this technology has increased in different applications, the development of standard methodologies for the quality assurance of lidar systems and quality control of the derived data has not followed the same trend. In other words, a lack of reliable, practical, cost-effective, and commonly acceptable methods for quality evaluation is evident. A frequently adopted procedure for quality evaluation is the comparison between lidar data and ground control points. Besides being expensive, this approach is not accurate enough for the verification of the horizontal accuracy, which is known to be worse than the vertical accuracy. This paper is dedicated to providing an accurate, economical, and convenient quality control methodology for the evaluation of lidar data. The paper starts with a brief discussion of the lidar mathematical model, which is followed by an analysis of possible random and systematic errors and their impact on the resulting surface. Based on the discussion of error sources and their impact, a tool for evaluating the quality of the derived surface is proposed. In addition to the verification of the data quality, the proposed method can be used for evaluating the system parameters and measurements. Experimental results from simulated and real data demonstrate the feasibility of the proposed tool.
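One simple way to reason about such an error budget is Monte Carlo propagation through the lidar positioning equation: perturb each measurement with an assumed noise level and observe the spread of the computed ground coordinates. The Python sketch below does exactly that for a highly simplified scanner geometry; the noise magnitudes, geometry, and function names are assumptions for illustration, not values from the paper.

# Minimal sketch of a Monte Carlo error-budget experiment: perturb trajectory,
# attitude, and range with assumed noise and observe the ground-coordinate
# spread.  Geometry is deliberately simplified; all noise values are illustrative.
import numpy as np

def ground_point(pos, attitude_err, scan_angle, rng_m):
    """Simplified scanner geometry: an attitude error tilts the beam directly."""
    angle = scan_angle + attitude_err
    return pos + np.array([0.0, rng_m * np.sin(angle), -rng_m * np.cos(angle)])

rng = np.random.default_rng(2)
n = 10_000
pos = np.array([0.0, 0.0, 1000.0]) + rng.normal(0, 0.05, (n, 3))   # 5 cm GNSS noise
att = rng.normal(0, np.deg2rad(0.005), n)                          # 0.005 deg attitude noise
rho = 1015.0 + rng.normal(0, 0.02, n)                              # 2 cm range noise
pts = np.array([ground_point(p, a, np.deg2rad(10.0), r) for p, a, r in zip(pos, att, rho)])
print(pts.std(axis=0))   # per-axis ground-coordinate standard deviations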


IEEE Geoscience and Remote Sensing Letters | 2007

Adjustment of Discrepancies Between LIDAR Data Strips Using Linear Features

Jaebin Lee; Kiyun Yu; Yong-Il Kim; Ayman Habib

Despite the recent developments in light detection and ranging systems, discrepancies between strips on overlapping areas persist due to systematic errors. This letter presents an algorithm that can be used to detect and adjust such discrepancies. To achieve this, extracting conjugate features from the strips is a prerequisite step. In this letter, linear features are chosen as conjugate features because they can be accurately extracted from man-made structures in urban areas and are more easily extracted than point features. Based on such a selection strategy, a simple and robust algorithm is proposed that is generally applicable for extracting such features. The algorithm includes methods that can be used to establish observation equations from similarity measurements of the extracted features. Then, several transformations are selected and used to adjust the strips. Following the transformation, the fitness of linear features is tested to determine whether the discrepancies have been resolved; the results are then evaluated statistically. The results demonstrate that the algorithm is effective in reducing the discrepancies between the strips.
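One natural fitness measure for conjugate linear features is the perpendicular distance from the endpoints of a segment in one strip to the infinite line defined by its conjugate in the other strip, evaluated before and after the adjustment. The Python sketch below computes that measure; it is offered as an illustration rather than the letter's exact similarity measurement, and the two roof-edge segments are made up.

# Minimal sketch of a line-fit measure between conjugate 3D linear features:
# perpendicular distances from one segment's endpoints to its conjugate line.
# Segments below are invented for the example.
import numpy as np

def point_to_line_distance(p, a, b):
    """Perpendicular distance from point p to the infinite 3D line through a and b."""
    direction = (b - a) / np.linalg.norm(b - a)
    offset = p - a
    return np.linalg.norm(offset - np.dot(offset, direction) * direction)

def line_fit_residuals(seg_1, seg_2):
    """Distances of both endpoints of seg_1 to the line of seg_2 (each seg is a 2x3 array)."""
    return [point_to_line_distance(p, seg_2[0], seg_2[1]) for p in seg_1]

# Toy usage: a building roof edge extracted from two overlapping strips.
edge_strip_a = np.array([[10.0, 5.0, 22.0], [30.0, 5.1, 22.1]])
edge_strip_b = np.array([[10.2, 5.3, 22.25], [29.8, 5.4, 22.35]])
print(line_fit_residuals(edge_strip_a, edge_strip_b))   # residual discrepancies in metres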

Collaboration

Top Co-Authors

Z. Lari

University of Calgary
