Mostapha Harb
University of Pavia
Publications
Featured research published by Mostapha Harb.
IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing | 2016
Mostapha Harb; Paolo Gamba; Fabio Dell'Acqua
The presence of clouds and their shadows is an obvious problem for maps obtained from multispectral images. Clouds and their shadows create occluded and obscured areas, hence information gaps that need to be filled. The usual approach, pixel substitution, first requires recognizing the cloud/shadow pixels. This work presents a cloud/shadow delineation algorithm, the cloud/shadow delineation tool (CSDT), designed for Landsat and CBERS medium-resolution multispectral data. The algorithm uses a set of literature indices, as well as a set of mathematical operations on the spectral bands, to enhance the visibility of cloud/shadow objects. The performance of CSDT was tested on a set of scenes from the Landsat and CBERS catalogues; the results were more accurate and stable on Landsat data. To validate the proposed approach, this work also presents a comparison with the F-mask algorithm on Landsat scenes. Results show that F-mask tends to overestimate the cloud cover, while CSDT slightly underestimates it. Accuracy measures, however, show significantly better performance of the proposed method than of F-mask in our investigation.
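The abstract does not spell out which literature indices CSDT combines, so the following is only a minimal sketch of index-based cloud/shadow screening on Landsat-like reflectance bands, using common stand-in tests (visible brightness and whiteness for clouds, NIR/SWIR darkness for shadows); all thresholds are illustrative assumptions, not the paper's values.

```python
# Hedged sketch: stand-in indices for cloud/shadow screening, not CSDT's
# actual recipe. Band arrays are assumed to be top-of-atmosphere
# reflectance in [0, 1].
import numpy as np

def rough_cloud_shadow_mask(blue, green, red, nir, swir1):
    """Return (cloud_mask, shadow_mask) as boolean arrays."""
    visible_mean = (blue + green + red) / 3.0
    # Clouds: bright and spectrally flat ("white") in the visible bands.
    whiteness = (np.abs(blue - visible_mean)
                 + np.abs(green - visible_mean)
                 + np.abs(red - visible_mean)) / (visible_mean + 1e-9)
    cloud = (visible_mean > 0.3) & (whiteness < 0.7) & (swir1 > 0.1)
    # Shadows: dark in NIR/SWIR but not uniformly dark like deep water.
    shadow = (nir < 0.12) & (swir1 < 0.1) & (visible_mean < 0.25) & ~cloud
    return cloud, shadow
```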
IEEE Geoscience and Remote Sensing Magazine | 2017
Mostapha Harb; Fabio Dell'Acqua
Scientific literature reports several possible ways for remote sensing (RS) to contribute to risk assessment for natural disasters, not only from a theoretical perspective but also in concrete applications. However, the typical RS scientist's approach to risk assessment has so far reflected one of the main limitations of the general risk-assessment process where several natural disasters are concerned: to avoid the sometimes unmanageable complexities arising from interhazard or vulnerability dependencies, risk-assessment activities tend to focus on one hazard at a time, sometimes leaving dangerous gaps in understanding the real risk for a community or an economic system. Given the current trend in the risk-assessment community to move from a sum of hazards to a multihazard approach, this article builds on previous scientific literature to bring the same perspective to RS. The importance of the subject is supported and explained, a comprehensive review of existing multirisk assessment approaches is provided, and tangible contributions of space-based Earth observation are highlighted across the different phases of the disaster-management cycle. Different strategies are discussed, and one of the most promising approaches is presented in depth through a specific example.
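As a toy illustration of why a sum-of-hazards view can understate risk when hazards interact, consider the following back-of-the-envelope comparison (all numbers are invented, and the simple cascading model is an assumption, not taken from the article):

```python
# Illustrative only: expected annual loss under an independent
# "sum of hazards" view vs. a model where an earthquake can trigger
# a subsequent landslide.
p_eq, p_ls = 0.02, 0.05          # annual probability of each hazard alone
loss_eq, loss_ls = 1.0e6, 4.0e5  # expected loss per event
p_ls_given_eq = 0.40             # landslide probability triggered by a quake

sum_of_hazards = p_eq * loss_eq + p_ls * loss_ls
multi_hazard = (p_eq * loss_eq
                + p_ls * loss_ls
                + p_eq * p_ls_given_eq * loss_ls)  # cascading term: 3200

print(sum_of_hazards, multi_hazard)  # 40000.0 vs 43200.0
```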
Urban Remote Sensing Joint Event | 2015
Mostapha Harb; D. De Vecchi; Fabio Dell'Acqua
The rapid expansion of human activities in time and space is tangible in different parts of the world. Urban sprawl is one of the phenomena that need to be controlled to ensure the sustainable development of communities. Satellite remote sensing has provided a repository of Earth observations, and thus information on surface changes, since the early seventies. This research proposes a hybrid method applied to Landsat acquisitions for highlighting built-up areas and their changes, in order to estimate their age. The results are also useful products for risk assessment and urban planning.
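As one hedged illustration of the built-up highlighting step (the paper's actual index set is not reported in the abstract), the widely used NDBI could serve as a starting ingredient:

```python
# Minimal sketch, not the paper's method: NDBI-based built-up highlighting
# from Landsat SWIR1 and NIR reflectance bands. The threshold is an
# illustrative assumption.
import numpy as np

def ndbi(swir1, nir):
    """Normalized Difference Built-up Index; positive values tend to be built-up."""
    return (swir1 - nir) / (swir1 + nir + 1e-9)

def builtup_mask(swir1, nir, threshold=0.0):
    return ndbi(swir1, nir) > threshold
```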
International Geoscience and Remote Sensing Symposium (IGARSS) | 2015
Mostapha Harb; Daniele De Vecchi; Paolo Gamba; Fabio Dell'Acqua; Raul Queiroz Feitosa
Satellite acquisitions from the LANDSAT (LS) and CBERS programs are widely used in monitoring land cover dynamics. In the acquired products, clouds form opaque objects that obscure parts of the scene and prevent a reliable extraction of information from these areas. Cloud shadows create similar problems, as the reflected intensity of the shadowed areas is strongly reduced, generating additional information gaps. The problem can be handled by replacing cloud/shadow pixels with pixels from other close-date acquisitions, but this assumes prior knowledge of the spatial distribution of clouds and their corresponding shadows in a scene. This research introduces a method that provides the cloud/shadow layers and their percentage in LS (TM & ETM+) and CBERS (HRCC) scenes. The approach relies on a set of literature indicators to create a composite image that enhances the visual differentiation of clouds/shadows from other objects. The resulting RGB composite is then mapped to a relative luminance raster calculated from the linear band components. The raster is then processed by a K-means unsupervised classifier with a fixed number of classes in order to isolate the target-layer pixels. Next, the statistical mode of each class population is calculated, compared, and used to select the cloud/shadow class automatically; finally, the results are refined by a set of morphological filters. The processing chain avoids the use of thresholds and greatly reduces user intervention. The outcomes achieved on various test cases are promising and stable, and encourage further development.
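A minimal sketch of the described chain follows, with several assumptions: the Rec. 709 luminance weights, the number of classes k, and the use of a per-class median as a robust stand-in for the statistical mode are all illustrative choices, not the paper's exact settings.

```python
# Hedged sketch: luminance -> K-means -> automatic class pick ->
# morphological refinement, for a cloud layer.
import numpy as np
from sklearn.cluster import KMeans
from scipy import ndimage

def cloud_layer(rgb, k=4):
    """rgb: float array (H, W, 3) of linear band components in [0, 1]."""
    # Relative luminance from linear components (Rec. 709 weights assumed).
    lum = rgb @ np.array([0.2126, 0.7152, 0.0722])
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(
        lum.reshape(-1, 1)).reshape(lum.shape)
    # Select the brightest class as "cloud"; the per-class median stands in
    # for the statistical mode used in the paper.
    stats = [np.median(lum[labels == c]) for c in range(k)]
    cloud = labels == int(np.argmax(stats))
    # Morphological refinement: remove speckle, close small holes.
    cloud = ndimage.binary_opening(cloud, iterations=2)
    cloud = ndimage.binary_closing(cloud, iterations=2)
    return cloud
```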
2014 conference on Big Data from Space (BiDS’14) | 2014
D. De Vecchi; Mostapha Harb; Fabio Dell'Acqua
This paper presents a near-real-time multi-GPU accelerated solution of the ωk algorithm for focusing Synthetic Aperture Radar (SAR) data acquired in Stripmap SAR mode. Starting from the input raw data, the algorithm subdivides it into a grid with a configurable number of bursts along track. Multithreaded CPU-side support is provided to handle each graphics device in parallel. Each burst is then assigned to a separate GPU and processed through the range compression, Stolt mapping via Chirp-Z, and azimuth compression steps. We demonstrate the efficiency of our algorithm on Sentinel-1 raw data (approx. 3.3 GB) using a commodity graphics card: the single-GPU solution is approximately 4x faster than the industrial multi-core CPU implementation (General ACS SAR Processor, GASP), without significant loss of quality. Using a multi-GPU system, the algorithm is approximately 6x faster than the CPU processor.
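Structurally, the burst-splitting and per-device dispatch could look like the following sketch, with the actual ωk stages reduced to a placeholder transform (real implementations run range compression, Stolt mapping, and azimuth compression as GPU kernels; every name below is invented for illustration):

```python
# Rough structural sketch only, not the paper's implementation.
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def focus_burst(burst, device_id):
    # Placeholder for: range compression -> Stolt mapping via Chirp-Z
    # -> azimuth compression, each run on GPU `device_id`.
    return np.fft.ifft2(np.fft.fft2(burst))

def focus_stripmap(raw, n_bursts, n_devices):
    bursts = np.array_split(raw, n_bursts, axis=0)  # split along track
    # One CPU thread per graphics device, as described above.
    with ThreadPoolExecutor(max_workers=n_devices) as pool:
        focused = list(pool.map(
            lambda ib: focus_burst(ib[1], ib[0] % n_devices),
            enumerate(bursts)))
    return np.concatenate(focused, axis=0)
```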
Urban Remote Sensing Joint Event | 2015
Daniele De Vecchi; Mostapha Harb; Gianni Cristian Iannelli; Paolo Gamba; Fabio Dell'Acqua; Raul Queiroz Feitosa
The open data policy, the availability of high-resolution imagery, and the capability to cover fast-growing economies are among the main advantages of CBERS. Unfortunately, data produced by this satellite suffer from geographic misplacement, which forces the application of pre-processing techniques to stabilize the imagery. This paper introduces a feature-based technique developed to pre-process CBERS imagery over an area of interest. In particular, the algorithm is able to fix the shift between the HRC high-resolution (2.5 m) and CCD medium-resolution (20 m) acquisitions. The final goal is to combine the advantages of high spatial resolution and good radiometric properties for built-up area extraction.
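The abstract does not state which features the technique matches; as a hedged sketch, a robust shift between an HRC tile and the corresponding CCD area could be estimated from matched ORB keypoints:

```python
# Illustrative sketch, assuming ORB features; not the paper's detector.
import cv2
import numpy as np

def estimate_shift(ref_img, mov_img):
    """8-bit grayscale inputs; returns the median (dx, dy) shift in pixels."""
    orb = cv2.ORB_create(nfeatures=2000)
    k1, d1 = orb.detectAndCompute(ref_img, None)
    k2, d2 = orb.detectAndCompute(mov_img, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(d1, d2)
    shifts = np.array([np.array(k2[m.trainIdx].pt) - np.array(k1[m.queryIdx].pt)
                       for m in matches])
    return np.median(shifts, axis=0)  # median is robust to mismatches
```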
International Geoscience and Remote Sensing Symposium (IGARSS) | 2015
Daniele De Vecchi; Mostapha Harb; Fabio Dell'Acqua
Urban expansion monitoring and management can be performed through space-based observation thanks to the revisit time and level of detail guaranteed by satellite remote sensing. In particular, Landsat mission products are the most widely used thanks to their long time coverage and open access policy. This paper proposes a hybrid method, a combination of pixel- and object-based analysis, to automatically extract built-up areas from Landsat imagery. Segments are delineated from spectral indices computed to increase the spectral distance among the different land cover classes. Principal component analysis (PCA) is applied to the original bands and constitutes the pixel-based side of the method. Segments and PCA components are then combined and classified using an unsupervised approach. Results were quite satisfactory, with an average Kappa value above 0.5 in both case studies.
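A minimal sketch of such a pixel/object combination, under the assumption that a labeled segmentation raster is already available, might stack PCA components with per-segment index means and cluster them:

```python
# Hedged sketch of a hybrid pixel/object classification; the feature
# stacking and cluster count are illustrative assumptions.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

def hybrid_classify(bands, index_img, segments, n_classes=4):
    """bands: (H, W, B); index_img: (H, W) spectral index; segments: (H, W) ints."""
    h, w, b = bands.shape
    pcs = PCA(n_components=3).fit_transform(bands.reshape(-1, b))  # pixel side
    # Object side: replace each pixel's index value by its segment mean.
    seg_mean = np.zeros(segments.max() + 1)
    for s in np.unique(segments):
        seg_mean[s] = index_img[segments == s].mean()
    obj_feat = seg_mean[segments].reshape(-1, 1)
    feats = np.hstack([pcs, obj_feat])
    labels = KMeans(n_clusters=n_classes, n_init=10).fit_predict(feats)
    return labels.reshape(h, w)
```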
International Geoscience and Remote Sensing Symposium (IGARSS) | 2015
Daniele De Vecchi; Daniel Aurelio Galeazzo; Mostapha Harb; Fabio Dell'Acqua
Change detection is by definition the capability to detect and highlight changes occurring in space and time. Earth observation satellites represent a fundamental source of information for this task thanks to their repeat coverage and spatial resolution. In this paper, we propose an unsupervised change detection technique that processes a series of single-date built-up area extractions with two main goals: determining the age of different parts of an urban area and correcting errors introduced by the automatic extraction methods proposed in our group's previous papers. Results show a general stabilization of the Kappa value, but further investigation is still necessary. The proposed algorithm is available to the general public as part of a QGIS plugin named SENSUM Earth Observation (EO) tools.
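One hedged way to realize both goals (age estimation and error correction) from a stack of per-date built-up masks is a temporal-consistency rule such as "once built-up, always built-up, if confirmed by the next date"; the details below are assumptions, not the plugin's exact logic:

```python
# Illustrative sketch: construction age from a time series of built-up masks.
import numpy as np

def builtup_age(masks, years):
    """masks: (T, H, W) boolean stack ordered by time; years: length-T sequence."""
    masks = np.asarray(masks, dtype=bool)
    # Require confirmation at the following date to drop one-off errors.
    confirmed = masks.copy()
    confirmed[:-1] &= masks[1:]
    # Temporal fix: once built-up, always built-up (cumulative OR forward).
    stable = np.logical_or.accumulate(confirmed, axis=0)
    first = np.argmax(stable, axis=0)            # index of first True per pixel
    age = np.where(stable.any(axis=0), np.take(years, first), -1)
    return age                                   # -1 where never built-up
```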
International Geoscience and Remote Sensing Symposium (IGARSS) | 2014
Gholam Reza Dini; Gianni Lisini; Mostapha Harb; Paolo Gamba
In this paper, two different approaches are proposed for estimating building footprints and numbers of stories from high-resolution space-borne images. To this aim, semiglobal matching (SGM) is used to generate normalized digital surface models (nDSMs) from stereo pairs. Alternatively, a height-from-shadow approach (called "shadow-raiser") is implemented by detecting building rooftops and their related shadow regions; from the associated shadow lengths, building heights are computed based on sun elevation and azimuth. Experiments on IKONOS and GeoEye images show promising results for SGM, although building dimensions are usually overestimated. In contrast, shadow-raiser delivers good results only if the building-shadow pair is correctly detected; moreover, it overestimates building heights when shadow areas are mixed with occluded areas, vegetation, or roads.
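The height-from-shadow relation that shadow-raiser relies on is simple trigonometry: for a shadow of length L cast on flat ground, h = L * tan(sun elevation). A worked example with illustrative numbers:

```python
# Height from shadow length and sun elevation (flat-terrain assumption).
import math

def height_from_shadow(shadow_len_m, sun_elev_deg):
    return shadow_len_m * math.tan(math.radians(sun_elev_deg))

print(height_from_shadow(18.0, 35.0))  # ~12.6 m building
```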
IEEE Geoscience and Remote Sensing Magazine | 2015
Mostapha Harb; Daniele De Vecchi; Fabio Dell'Acqua