Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Ahmad Kamal Aijazi is active.

Publication


Featured research published by Ahmad Kamal Aijazi.


Remote Sensing | 2013

Segmentation Based Classification of 3D Urban Point Clouds: A Super-Voxel Based Approach with Evaluation

Ahmad Kamal Aijazi; Paul Checchin; Laurent Trassoudaine

Segmentation and classification of urban range data into different object classes pose several challenges due to certain properties of the data, such as density variation, inconsistencies due to missing data, and a large data size that requires heavy computation and large memory. A method to classify urban scenes based on a super-voxel segmentation of sparse 3D data obtained from LiDAR sensors is presented. The 3D point cloud is first segmented into voxels, which are then characterized by several attributes, transforming them into super-voxels. These are joined together using a link-chain method, rather than the usual region-growing algorithm, to create objects. These objects are then classified using geometrical models and local descriptors. To evaluate the results, a new metric that combines segmentation and classification results simultaneously is presented. The effects of voxel size and of incorporating RGB color and laser reflectance intensity on the classification results are also discussed. The method is evaluated on standard data sets using different metrics to demonstrate its efficacy.
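The voxel-to-super-voxel pipeline and the link-chain grouping described above can be pictured with a short sketch. The Python below is only illustrative: the helper names (voxelize, link_chain), the chosen attributes and all thresholds are assumptions, not the published implementation.

```python
# Illustrative sketch of voxel -> super-voxel -> link-chain grouping (not the authors' code).
import numpy as np
from collections import defaultdict

def voxelize(points, voxel_size=0.3):
    """Group 3D points into cubic voxels and compute per-voxel attributes
    (centroid and mean intensity), turning each voxel into a 'super-voxel'."""
    keys = np.floor(points[:, :3] / voxel_size).astype(int)
    voxels = defaultdict(list)
    for key, p in zip(map(tuple, keys), points):
        voxels[key].append(p)
    supervoxels = {}
    for key, pts in voxels.items():
        pts = np.asarray(pts)
        supervoxels[key] = {
            "centroid": pts[:, :3].mean(axis=0),
            "intensity": pts[:, 3].mean() if pts.shape[1] > 3 else 0.0,
        }
    return supervoxels

def link_chain(supervoxels, max_dist=0.45, max_dint=0.5):
    """Chain together neighbouring super-voxels whose attributes are similar,
    instead of growing regions from seeds; each chain becomes one object."""
    keys = list(supervoxels)
    parent = {k: k for k in keys}

    def find(k):
        while parent[k] != k:
            parent[k] = parent[parent[k]]
            k = parent[k]
        return k

    for i, a in enumerate(keys):
        for b in keys[i + 1:]:
            sa, sb = supervoxels[a], supervoxels[b]
            if (np.linalg.norm(sa["centroid"] - sb["centroid"]) < max_dist
                    and abs(sa["intensity"] - sb["intensity"]) < max_dint):
                parent[find(a)] = find(b)

    objects = defaultdict(list)
    for k in keys:
        objects[find(k)].append(k)
    return list(objects.values())

# Toy usage: two well-separated clusters of points (x, y, z, intensity).
pts = np.vstack([np.random.rand(50, 4), np.random.rand(50, 4) + [5, 0, 0, 0]])
print(len(link_chain(voxelize(pts))), "objects found")
```

In this sketch the similarity test uses only centroid distance and mean intensity; the colour and reflectance attributes mentioned in the abstract would enter the same test.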


Remote Sensing | 2013

Automatic Removal of Imperfections and Change Detection for Accurate 3D Urban Cartography by Classification and Incremental Updating

Ahmad Kamal Aijazi; Paul Checchin; Laurent Trassoudaine

In this article, we present a new method of automatic 3D urban cartography in which different imperfections are progressively removed by incremental updating, exploiting the concept of multiple passages and using specialized functions. In the proposed method, the 3D point clouds are first classified into three main object classes: permanently static, temporarily static and mobile, using a new point-matching technique. The temporarily static and mobile objects are then removed from the 3D point clouds, leaving behind a perforated 3D point cloud of the urban scene. These perforated 3D point clouds, obtained from successive passages (in the same place) on different days and at different times, are then matched together to complete the 3D urban landscape. The changes occurring in the urban landscape over this period of time are detected and analyzed using cognitive functions of similarity, and the resulting 3D cartography is progressively modified accordingly. The specialized functions introduced help to remove the different imperfections due to occlusions, misclassifications and changes occurring in the environment over time, thus increasing the robustness of the method. The results, evaluated on real data, demonstrate that not only is the resulting 3D cartography accurate, containing only the exact permanent features free from imperfections, but the method is also suitable for handling large urban scenes.
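As a rough illustration of the incremental-updating idea (not the authors' algorithm), the sketch below merges occupancy grids from several passages and flags cells that appear only in the latest one; the grid rasterization, keep ratio and thresholds are simplified stand-ins for the paper's point matching and cognitive similarity functions.

```python
# Hedged sketch: merge "perforated" occupancy grids from several passages and flag changes.
import numpy as np

def to_grid(points, cell=0.5, extent=20):
    """Rasterize a 2D slice of the point cloud into an occupancy grid."""
    n = int(extent / cell)
    grid = np.zeros((n, n), dtype=bool)
    idx = np.floor(points[:, :2] / cell).astype(int)
    idx = idx[(idx >= 0).all(axis=1) & (idx < n).all(axis=1)]
    grid[idx[:, 0], idx[:, 1]] = True
    return grid

def incremental_update(passages, keep_ratio=0.8):
    """Keep cells occupied in at least `keep_ratio` of the passages (treated as
    permanent structure); everything else seen in the latest passage is a change."""
    stack = np.stack([to_grid(p) for p in passages])
    frequency = stack.mean(axis=0)
    permanent = frequency >= keep_ratio
    changes = stack[-1] & ~permanent   # present now but not permanent
    return permanent, changes

# Toy usage: three passages over the same street, the last one with a parked car.
base = np.random.rand(200, 2) * 20
passages = [base, base, np.vstack([base, np.random.rand(20, 2) * 2 + 10])]
permanent, changes = incremental_update(passages)
print(permanent.sum(), "permanent cells,", changes.sum(), "changed cells")
```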


Journal of Remote Sensing | 2014

Automatic detection and feature estimation of windows in 3D urban point clouds exploiting façade symmetry and temporal correspondences

Ahmad Kamal Aijazi; Paul Checchin; Laurent Trassoudaine

Due to the ever-increasing demand for more realistic three-dimensional (3D) urban models, coupled with recent advancements in ground-based light detection and ranging (lidar) technologies, recovering details of building façade structures, such as windows, has gained considerable attention. However, fewer laser points are usually available for windows, as window frames occupy only small parts of building façades while window glass offers limited reflectivity. This insufficient raw laser information makes it very difficult to detect and recover reliable window geometry without human interaction. In this article, we present a new method that automatically detects windows of different shapes in 3D lidar point clouds obtained from mobile terrestrial data acquisition systems in the urban environment. The proposed method first segments out 3D points belonging to the building façade from the 3D urban point cloud and then projects them onto a two-dimensional (2D) plane parallel to the building façade. After point inversion within a watertight boundary, windows are segmented out based on geometrical information. The window features/parameters are then estimated by exploiting both symmetrically corresponding windows in the façade and temporally corresponding windows in successive passages, based on analysis-of-variance measurements. This unique fusion of information not only compensates for a lack of symmetry but also helps complete features missing due to occlusions. The estimated windows are then used to refine the 3D point cloud of the building façade. The results, evaluated on real data using different standard evaluation metrics, demonstrate not only the efficacy (with standard accuracy) but also the technical edge of the proposed method.
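A minimal sketch of the façade-projection step is given below, assuming a PCA plane fit and a density threshold for "hole" cells; neither detail is taken from the paper, which uses point inversion within a watertight boundary.

```python
# Illustrative sketch (not the published implementation): project façade points onto a
# 2D plane and locate windows as low-density holes in the projection.
import numpy as np

def project_to_facade_plane(points):
    """Fit the dominant façade plane with PCA and express the points in its
    2D in-plane coordinates (width along the street, height)."""
    centered = points - points.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    # The first two principal directions span the façade plane.
    return centered @ vt[:2].T

def window_candidates(plane_pts, cell=0.25, density_thresh=1):
    """Bin the projected points into a 2D grid; cells with few returns are
    candidate window regions (window glass returns little laser energy)."""
    mins = plane_pts.min(axis=0)
    idx = np.floor((plane_pts - mins) / cell).astype(int)
    shape = idx.max(axis=0) + 1
    density = np.zeros(shape, dtype=int)
    np.add.at(density, (idx[:, 0], idx[:, 1]), 1)
    return density <= density_thresh     # boolean mask of "hole" cells

# Toy usage: a dense synthetic wall with one rectangular gap cut out.
wall = np.random.rand(5000, 3) * [10, 0.05, 6]
keep = ~((wall[:, 0] > 4) & (wall[:, 0] < 6) & (wall[:, 2] > 2) & (wall[:, 2] < 4))
holes = window_candidates(project_to_facade_plane(wall[keep]))
print("hole cells:", holes.sum())
```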


Field and Service Robotics | 2016

Segmentation and Classification of 3D Urban Point Clouds: Comparison and Combination of Two Approaches

Ahmad Kamal Aijazi; Andrés Serna; Beatriz Marcotegui; Paul Checchin; Laurent Trassoudaine

Segmentation and classification of 3D urban point clouds is a complex task, making it very difficult for any single method to overcome all of the diverse challenges involved. This sometimes requires combining several techniques to obtain the desired results for different applications. This work presents and compares two different approaches for segmenting and classifying 3D urban point clouds. In the first approach, detection, segmentation and classification of urban objects from 3D point clouds, converted into elevation images, are performed using mathematical morphology. First, the ground is segmented and objects are detected as discontinuities on the ground. Then, connected objects are segmented using a watershed approach. Finally, objects are classified using an SVM (Support Vector Machine) with geometrical and contextual features. The second method employs a super-voxel-based approach in which the 3D urban point cloud is first segmented into voxels that are then converted into super-voxels. These are clustered together using an efficient link-chain method to form objects. The segmented objects are then classified into basic object classes using local descriptors and geometrical features. Evaluated on a common data set (real data), both methods are thoroughly compared on three different levels: detection, segmentation and classification. Following this analysis, simple strategies are also presented to combine the two methods, exploiting their complementary strengths and weaknesses, to improve the overall segmentation and classification results.
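The starting point of the first approach, converting the 3D point cloud into an elevation image, can be sketched as follows; the cell size and the choice of maximum height per cell are assumptions for illustration, not details from the paper.

```python
# Simplified sketch: rasterize a 3D point cloud into an elevation image on which
# morphological operations (e.g. watershed) could then be applied.
import numpy as np

def elevation_image(points, cell=0.2):
    """Project points onto the ground plane and keep the maximum height per cell,
    producing a 2D elevation image suitable for morphology / watershed processing."""
    mins = points[:, :2].min(axis=0)
    idx = np.floor((points[:, :2] - mins) / cell).astype(int)
    shape = idx.max(axis=0) + 1
    img = np.full(shape, -np.inf)
    np.maximum.at(img, (idx[:, 0], idx[:, 1]), points[:, 2])
    img[np.isinf(img)] = 0.0      # empty cells set to ground level
    return img

# Toy usage: flat ground with a 2 m high box-shaped object on top.
ground = np.c_[np.random.rand(2000, 2) * 10, np.zeros(2000)]
box = np.c_[np.random.rand(200, 2) * 1 + 4, np.full(200, 2.0)]
img = elevation_image(np.vstack([ground, box]))
print("image shape:", img.shape, "max elevation:", img.max())
```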


Remote Sensing | 2017

Automatic Detection and Parameter Estimation of Trees for Forest Inventory Applications Using 3D Terrestrial LiDAR

Ahmad Kamal Aijazi; Paul Checchin; Laurent Malaterre; Laurent Trassoudaine

Forest inventory plays an important role in the management and planning of forests. In this study, we present a method for the automatic detection and parameter estimation of trees, especially in forest environments, using 3D terrestrial LiDAR data. The proposed method does not rely on any predefined tree shape or model. It uses the vertical distribution of the 3D points, partitioned in a gridded Digital Elevation Model (DEM), to extract out ground points. The cells of the DEM are then clustered together to form super-clusters representing potential tree objects. The 3D points contained in each of these super-clusters are then classified into trunk and vegetation classes using a super-voxel-based segmentation method. Different attributes (such as diameter at breast height, basal area, height and volume) are then estimated at the individual tree level and aggregated to generate metrics for forest inventory applications. The method is validated and evaluated on three different data sets obtained from three different types of terrestrial sensors (vehicle-borne, handheld and static) to demonstrate its applicability and feasibility for a wide range of applications. The results are evaluated by comparing the estimated parameters with real field observations/measurements to demonstrate the efficacy of the proposed method. Overall segmentation and classification accuracies greater than 84% were observed, while the average parameter estimation error ranged from 1.6% to 9%.
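One of the per-tree attributes, diameter at breast height, can be illustrated with a simple circle fit to a trunk slice; the algebraic fit and the slice thickness below are illustrative choices, not necessarily those used in the paper.

```python
# Hedged sketch: estimate DBH and basal area from trunk points near 1.3 m above ground.
import numpy as np

def fit_circle(xy):
    """Algebraic (Kasa) least-squares circle fit; returns (cx, cy, radius)."""
    A = np.c_[2 * xy[:, 0], 2 * xy[:, 1], np.ones(len(xy))]
    b = (xy ** 2).sum(axis=1)
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    return cx, cy, np.sqrt(c + cx ** 2 + cy ** 2)

def tree_metrics(trunk_points, breast_height=1.3, slice_thickness=0.1):
    """Slice the trunk around breast height, fit a circle to the slice and
    derive DBH, basal area and total tree height."""
    z = trunk_points[:, 2]
    band = trunk_points[np.abs(z - z.min() - breast_height) < slice_thickness / 2]
    _, _, r = fit_circle(band[:, :2])
    return {"dbh_m": 2 * r, "basal_area_m2": np.pi * r ** 2, "height_m": z.max() - z.min()}

# Toy usage: a synthetic 10 m tall cylindrical trunk of 0.3 m diameter.
theta = np.random.rand(3000) * 2 * np.pi
z = np.random.rand(3000) * 10
trunk = np.c_[0.15 * np.cos(theta), 0.15 * np.sin(theta), z]
print(tree_metrics(trunk))
```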


Field and Service Robotics | 2014

Super-Voxel Based Segmentation and Classification of 3D Urban Landscapes with Evaluation and Comparison

Ahmad Kamal Aijazi; Paul Checchin; Laurent Trassoudaine

Classification of urban range data into different object classes offers several challenges due to certain properties of the data, such as density variation, inconsistencies due to holes and a large data size that requires heavy computation and large memory. A method to classify urban scenes based on a super-voxel segmentation of sparse 3D data obtained from LiDAR sensors is presented. The 3D point cloud is first segmented into voxels, which are then characterized by several attributes transforming them into super-voxels. These are joined together using a link-chain method rather than the usual region-growing algorithm to create objects. These objects are then classified using geometrical models and local descriptors. To evaluate the results, a new metric is presented that combines segmentation and classification results simultaneously. The proposed method is evaluated on standard datasets using three different evaluation metrics.
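As a toy illustration of scoring segmentation and classification jointly, the snippet below blends a per-object overlap with a classification outcome; this weighted sum is a placeholder, not the metric defined in the paper.

```python
# Toy joint score: NOT the metric defined in the paper, only the general idea.
def combined_score(seg_overlap, correct_label, w_seg=0.5):
    """Blend a per-object segmentation overlap (e.g. IoU in [0, 1]) with a 0/1
    classification outcome into a single per-object score."""
    return w_seg * seg_overlap + (1 - w_seg) * (1.0 if correct_label else 0.0)

# Example: an object segmented with 0.8 overlap but misclassified.
print(combined_score(0.8, correct_label=False))   # 0.4
```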


International Conference on 3D Imaging, Modeling, Processing, Visualization & Transmission | 2012

Handling Occlusions for Accurate 3D Urban Cartography: A New Approach Based on Characterization and Multiple Passages

Ahmad Kamal Aijazi; Paul Checchin; Laurent Trassoudaine

In this paper, we present a new occlusion-handling technique that successfully addresses the intricate problem of extracting occluded features for urban landscape analysis and cartography. This new method is based on temporal integration, in which multiple sessions or passages are used to complete occluded features in a 3D cartographic image. The 3D image obtained from each passage is first characterized and classified into three main object classes: permanently static, temporarily static and mobile, using inference based on basic reasoning and a new point-matching technique that intelligently exploits the different viewing angles of the mounted LiDAR sensors. All temporarily static and mobile objects, considered as occluding objects, are removed from the image/scene, leaving behind a perforated 3D image of the cartography. This perforated image is then updated with similar subsequent perforated images, obtained on different days and at different hours of the day, filling in the holes and completing the missing features of the urban cartography. This ensures that the resulting 3D cartographic image is highly accurate, containing only the actual permanent features. Separate update and reset functions are added to increase the robustness of the method. The proposed method is evaluated on a standard data set, demonstrating its efficacy and suitability for different applications.
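The update and reset functions can be pictured as a per-cell confidence that grows with repeated observations and is cleared after prolonged absence; the sketch below, with its arbitrary gains and thresholds, is only an assumption about how such functions might look, not the paper's formulation.

```python
# Minimal sketch of an update/reset mechanism over a map of cells.
import numpy as np

class CellConfidence:
    def __init__(self, shape, gain=0.25, reset_after=3):
        self.score = np.zeros(shape)          # confidence that a cell is permanent
        self.misses = np.zeros(shape, int)    # consecutive passages without support
        self.gain, self.reset_after = gain, reset_after

    def update(self, observed):
        """`observed` is a boolean grid from one passage (True = surface seen)."""
        self.score[observed] = np.minimum(1.0, self.score[observed] + self.gain)
        self.misses[observed] = 0
        self.misses[~observed] += 1
        # Reset cells that have not been supported for several passages.
        stale = self.misses >= self.reset_after
        self.score[stale] = 0.0
        self.misses[stale] = 0

    def permanent(self, threshold=0.5):
        return self.score >= threshold

# Toy usage: a 4-cell map; cell 0 is seen every passage, cell 3 only once.
conf = CellConfidence((4,))
for obs in [np.array([True, True, False, True]),
            np.array([True, False, False, False]),
            np.array([True, True, False, False]),
            np.array([True, False, False, False])]:
    conf.update(obs)
print(conf.permanent())
```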


International Conference on Control and Automation | 2017

Multi-sensorial data fusion for efficient detection and tracking of road obstacles for inter-distance and anti-collision safety management

Ahmad Kamal Aijazi; Paul Checchin; Laurent Trassoudaine

In this paper, we present an automatic obstacle detection and tracking system for efficient inter-distance and anti-collision management that fuses 3D LiDAR and 2D image data. Obstacles are first detected both in LiDAR scans and in camera images, and the data are then fused together. Even though LiDAR-based detections are very accurate, they are slower than image-based detections; the proposed method therefore obtains state estimates more quickly with good accuracy. The fusion technique presented uses the detected objects' geometrical information to extract depth information at each image frame, which is then corrected at each LiDAR scan. The results, evaluated on real data, demonstrate both the performance and the applicability of the proposed method, which can be used for different vehicle safety applications.
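A minimal sketch of the fusion idea follows, assuming a simple one-dimensional Kalman filter in which frequent but noisy camera-based depths are corrected by sparse, accurate LiDAR ranges; the filter structure and its noise values are illustrative, not the paper's estimator.

```python
# Hedged sketch: fuse fast camera-based depth estimates with slower LiDAR corrections.
class DepthFuser:
    def __init__(self, initial_depth, var=4.0):
        self.depth, self.var = initial_depth, var

    def _update(self, measurement, meas_var):
        gain = self.var / (self.var + meas_var)
        self.depth += gain * (measurement - self.depth)
        self.var *= (1 - gain)

    def camera_update(self, image_depth):
        """Depth inferred from the detected object's apparent size: frequent
        but noisy, so it gets a large measurement variance."""
        self.var += 0.5                      # process noise between updates
        self._update(image_depth, meas_var=4.0)

    def lidar_update(self, lidar_depth):
        """Direct range measurement: sparse in time but accurate."""
        self.var += 0.5
        self._update(lidar_depth, meas_var=0.05)

# Toy usage: several camera frames between two LiDAR scans of a car ~20 m ahead.
f = DepthFuser(initial_depth=22.0)
for d in [21.0, 19.5, 20.8]:
    f.camera_update(d)
f.lidar_update(20.1)
print(round(f.depth, 2), "m")
```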


IEEE Intelligent Vehicles Symposium | 2016

Automatic detection of vehicles at road intersections using a compact 3D Velodyne sensor mounted on traffic signals

Ahmad Kamal Aijazi; Paul Checchin; Laurent Malaterre; Laurent Trassoudaine

Real-time traffic monitoring can play an important role in efficient traffic management and in increasing road capacity. In this paper, we present a new method for the automatic detection of vehicles using a compact 3D Velodyne sensor mounted on traffic signals in the urban environment. Different aspects of the new Velodyne sensor are first studied, and its data are characterized for effective use in our application. The sensor is then mounted on top of a traffic signal to detect vehicles at road intersections. The 3D point cloud obtained from the sensor is first over-segmented into super-voxels, and objects are then extracted using a link-chain method. The segmented objects are classified as vehicles or non-vehicles using geometrical models and local descriptors. The results, evaluated on real data, demonstrate not only the efficacy but also the suitability of the proposed solution for such traffic monitoring applications.
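The final classification step can be illustrated with a crude geometric test on each segmented object's bounding box; the size thresholds below are rough assumptions, not the geometrical models used in the paper.

```python
# Illustrative vehicle / non-vehicle check from bounding-box dimensions.
import numpy as np

def is_vehicle(object_points):
    """Check whether the object's bounding-box footprint and height fall in a
    plausible car/van range."""
    extents = object_points.max(axis=0) - object_points.min(axis=0)
    length, width = sorted(extents[:2], reverse=True)
    height = extents[2]
    return (2.5 < length < 12.0) and (1.2 < width < 3.0) and (0.8 < height < 4.0)

# Toy usage: a car-sized box of points vs. a pedestrian-sized one.
car = np.random.rand(500, 3) * [4.2, 1.8, 1.5]
person = np.random.rand(100, 3) * [0.5, 0.5, 1.8]
print(is_vehicle(car), is_vehicle(person))    # expected: True False
```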


IAS | 2016

Automatic Detection and Feature Estimation of Windows from Mobile Terrestrial LiDAR Data

Ahmad Kamal Aijazi; Paul Checchin; Laurent Trassoudaine

This work presents a new method of automatic window detection in 3D LiDAR point clouds obtained from mobile terrestrial data acquisition systems in the urban environment. The proposed method first segments out 3D points belonging to the building facade from the 3D urban point cloud and then projects them onto a 2D plane parallel to the building facade. After point inversion within a watertight boundary, windows are segmented out based on geometrical information. The window features are then estimated by exploiting both symmetrically corresponding windows in the facade and temporally corresponding windows in successive passages, based on ANOVA measurements. This unique fusion of information not only compensates for a lack of symmetry but also helps complete features missing due to occlusions. The estimated windows are then used to refine the 3D point cloud of the building facade. The results, evaluated on real data using different standard evaluation metrics, demonstrate the efficacy of the method.
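A small sketch of the ANOVA-based fusion is shown below, assuming SciPy's one-way test is used to check that symmetric and temporal width measurements agree before pooling them; the significance level and the sample values are illustrative only.

```python
# Hedged sketch: decide with a one-way ANOVA whether two groups of window-width
# measurements are consistent, and pool them into one refined estimate if so.
import numpy as np
from scipy.stats import f_oneway

def fuse_window_width(symmetric, temporal, alpha=0.05):
    """If the two groups of measurements are statistically consistent (ANOVA
    p-value above alpha), return their pooled mean; otherwise keep them separate."""
    result = f_oneway(symmetric, temporal)
    if result.pvalue > alpha:
        return float(np.mean(np.concatenate([symmetric, temporal])))
    return None   # inconsistent measurements, fall back to per-passage estimates

# Toy usage: widths (m) of the same window seen via symmetry and in two passages.
symmetric = np.array([1.18, 1.22, 1.20])
temporal = np.array([1.21, 1.19])
print(fuse_window_width(symmetric, temporal))
```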

Collaboration


Dive into Ahmad Kamal Aijazi's collaborations.

Top Co-Authors

Paul Checchin
Blaise Pascal University

Laurent Malaterre
Centre national de la recherche scientifique

M. L. Tazir
Centre national de la recherche scientifique