Publication


Featured research published by Fabio Pacifici.


IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing | 2014

Hyperspectral and LiDAR Data Fusion: Outcome of the 2013 GRSS Data Fusion Contest

Christian Debes; Andreas Merentitis; Roel Heremans; Jürgen T. Hahn; Nikolaos Frangiadakis; Tim Van Kasteren; Wenzhi Liao; Rik Bellens; Aleksandra Pizurica; Sidharta Gautama; Wilfried Philips; Saurabh Prasad; Qian Du; Fabio Pacifici

The 2013 Data Fusion Contest organized by the Data Fusion Technical Committee (DFTC) of the IEEE Geoscience and Remote Sensing Society aimed at investigating the synergistic use of hyperspectral and Light Detection And Ranging (LiDAR) data. The data sets distributed to the participants during the Contest, a hyperspectral image and the corresponding LiDAR-derived digital surface model (DSM), were acquired by the NSF-funded Center for Airborne Laser Mapping over the University of Houston campus and its neighboring area in the summer of 2012. This paper highlights the two awarded research contributions, which investigated different approaches for the fusion of hyperspectral and LiDAR data, including a combined unsupervised and supervised classification scheme and a graph-based method for the fusion of spectral, spatial, and elevation information.
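As a rough illustration of the kind of data involved, the sketch below stacks hyperspectral bands with a LiDAR-derived DSM into a single feature vector per pixel and trains a generic supervised classifier. The array shapes, placeholder data, and the choice of a random forest are illustrative assumptions, not the awarded contest methods.

```python
# Minimal sketch of pixel-level hyperspectral + LiDAR (DSM) feature fusion.
# Shapes, placeholder data, and the random forest are illustrative assumptions,
# not the awarded contest algorithms.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

hsi = np.random.rand(100, 100, 144)           # placeholder hyperspectral cube (rows, cols, bands)
dsm = np.random.rand(100, 100)                # placeholder LiDAR-derived digital surface model
labels = np.random.randint(0, 5, (100, 100))  # placeholder ground-truth class map

# Stack spectral bands and elevation into one feature vector per pixel
features = np.concatenate([hsi, dsm[..., None]], axis=-1).reshape(-1, 145)
y = labels.ravel()

# Train on a random subset of labeled pixels and predict the full map
train_idx = np.random.choice(features.shape[0], 2000, replace=False)
clf = RandomForestClassifier(n_estimators=100).fit(features[train_idx], y[train_idx])
class_map = clf.predict(features).reshape(100, 100)
```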


IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing | 2012

Multi-Modal Change Detection, Application to the Detection of Flooded Areas: Outcome of the 2009–2010 Data Fusion Contest

Nathan Longbotham; Fabio Pacifici; Taylor C. Glenn; Alina Zare; Michele Volpi; Devis Tuia; Emmanuel Christophe; Julien Michel; Jordi Inglada; Jocelyn Chanussot; Qian Du

The 2009–2010 Data Fusion Contest organized by the Data Fusion Technical Committee of the IEEE Geoscience and Remote Sensing Society was focused on the detection of flooded areas using multi-temporal and multi-modal images. Both high spatial resolution optical and synthetic aperture radar data were provided. The goal was not only to identify the best algorithms (in terms of accuracy), but also to investigate the further improvement derived from decision fusion. This paper presents the four awarded algorithms and the conclusions of the contest, investigating both supervised and unsupervised methods and the use of multi-modal data for flood detection. Interestingly, a simple unsupervised change detection method provided accuracy similar to that of the supervised approaches, and a digital elevation model-based predictive method yielded a comparable projected change detection map without using post-event data.
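For readers unfamiliar with the idea, a minimal sketch of simple unsupervised change detection by image differencing is given below; the synthetic inputs and the Otsu threshold are assumptions for illustration, not the contest submissions.

```python
# Sketch of unsupervised change detection by differencing two co-registered images.
# The synthetic inputs and the Otsu threshold are illustrative assumptions.
import numpy as np
from skimage.filters import threshold_otsu

pre = np.random.rand(256, 256, 3)    # placeholder pre-event image (rows, cols, bands)
post = np.random.rand(256, 256, 3)   # placeholder post-event image

# Per-pixel change magnitude across bands
magnitude = np.linalg.norm(post - pre, axis=-1)

# Automatic threshold separating "change" from "no change"
change_mask = magnitude > threshold_otsu(magnitude)
print("Flagged pixels:", int(change_mask.sum()))
```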


IEEE Geoscience and Remote Sensing Letters | 2008

Urban Mapping Using Coarse SAR and Optical Data: Outcome of the 2007 GRSS Data Fusion Contest

Fabio Pacifici; F. Del Frate; William J. Emery; Paolo Gamba; Jocelyn Chanussot

The 2007 Data Fusion Contest, organized by the Data Fusion Technical Committee of the IEEE Geoscience and Remote Sensing Society, dealt with the extraction of land use/land cover maps in and around an urban area, exploiting multitemporal and multisource coarse-resolution data sets. In particular, synthetic aperture radar and optical data from satellite sensors were considered. The top teams obtained excellent mapping accuracy. The best algorithm is based on a neural network classification enhanced by preprocessing and postprocessing steps.


IEEE Transactions on Geoscience and Remote Sensing | 2014

SVM Active Learning Approach for Image Classification Using Spatial Information

Edoardo Pasolli; Farid Melgani; Devis Tuia; Fabio Pacifici; William J. Emery

In the last few years, active learning has gained growing interest in the remote sensing community for optimizing the process of training sample collection for supervised image classification. Current strategies formulate the active learning problem in the spectral domain only. However, remote sensing images are intrinsically defined in both the spectral and spatial domains. In this paper, we exploit this fact by proposing a new active learning approach for support vector machine classification. In particular, we suggest combining spectral and spatial information directly in the iterative process of sample selection. For this purpose, three criteria are proposed to favor the selection of samples distant from those already composing the current training set. In the first strategy, the Euclidean distances in the spatial domain from the training samples are explicitly computed, whereas the second one is based on the Parzen window method in the spatial domain. Finally, the last criterion involves the concept of spatial entropy. Experiments on two very high resolution images show the effectiveness of regularization in the spatial domain for active learning purposes.
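A minimal sketch of the general idea, combining spectral uncertainty with spatial distance from the current training set, is given below. The margin-sampling heuristic, the equal weighting of the two criteria, and the synthetic data are assumptions for illustration rather than the exact criteria of the paper.

```python
# Sketch of SVM active learning that prefers candidates that are uncertain in the
# spectral domain and spatially far from existing training samples. The synthetic
# data, margin heuristic, and equal weighting are illustrative assumptions.
import numpy as np
from sklearn.svm import SVC
from scipy.spatial.distance import cdist

rng = np.random.default_rng(0)
X_spec = rng.random((500, 10))        # spectral features of candidate pixels
xy = rng.random((500, 2)) * 1000      # pixel coordinates (spatial domain)
y = rng.integers(0, 2, 500)           # hypothetical labels provided by an oracle

# Initial training set with both classes represented
train = list(np.where(y == 0)[0][:10]) + list(np.where(y == 1)[0][:10])

for _ in range(10):                   # a few active-learning iterations
    clf = SVC(kernel="rbf", gamma="scale").fit(X_spec[train], y[train])
    pool = np.setdiff1d(np.arange(500), train)
    # Spectral criterion: closeness to the SVM decision boundary (margin sampling)
    uncertainty = -np.abs(clf.decision_function(X_spec[pool]))
    # Spatial criterion: Euclidean distance to the nearest current training sample
    spatial = cdist(xy[pool], xy[train]).min(axis=1)
    # Combine the two normalized criteria and query the highest-scoring sample
    score = uncertainty / (np.abs(uncertainty).max() + 1e-9) + spatial / spatial.max()
    train.append(int(pool[np.argmax(score)]))
```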


IEEE Geoscience and Remote Sensing Letters | 2010

Automatic Change Detection in Very High Resolution Images With Pulse-Coupled Neural Networks

Fabio Pacifici; F. Del Frate

A novel approach based on pulse-coupled neural networks (PCNNs) for image change detection is presented. PCNNs are based on the implementation of the mechanisms underlying the visual cortex of small mammals and offer interesting advantages with respect to more traditional neural network architectures, such as the multilayer perceptron. In particular, they are unsupervised and context sensitive. The latter property may be particularly useful when very high resolution images are considered since, in this case, an object-based analysis might be more suitable than a pixel-based one. Both qualitative and quantitative results are reported. The performance of the algorithm has been evaluated on a pair of QuickBird images taken over the test area of Tor Vergata University, Rome.
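To give a flavor of the pulse-coupled mechanism, the sketch below iterates a standard simplified PCNN (feeding and linking inputs, internal activity, dynamic threshold) over an image and compares the firing counts of two dates. All constants, the kernel, and the iteration count are illustrative assumptions, not the parameters used in the paper.

```python
# Simplified pulse-coupled neural network (PCNN) iterated over a grayscale image.
# All constants, the linking kernel, and the iteration count are illustrative assumptions.
import numpy as np
from scipy.ndimage import convolve

def pcnn_fire_map(S, iters=20, beta=0.2, aF=0.1, aL=1.0, aT=0.5, VF=0.5, VL=0.2, VT=20.0):
    """Return how many times each pixel fired; S is a normalized grayscale image."""
    kernel = np.array([[0.5, 1.0, 0.5], [1.0, 0.0, 1.0], [0.5, 1.0, 0.5]])
    F = np.zeros_like(S); L = np.zeros_like(S); Y = np.zeros_like(S)
    T = np.ones_like(S)                       # dynamic threshold
    fires = np.zeros_like(S)
    for _ in range(iters):
        link = convolve(Y, kernel, mode="constant")
        F = np.exp(-aF) * F + VF * link + S   # feeding input driven by the stimulus
        L = np.exp(-aL) * L + VL * link       # linking input from neighboring pulses
        U = F * (1.0 + beta * L)              # internal activity
        Y = (U > T).astype(float)             # pulse output
        T = np.exp(-aT) * T + VT * Y          # threshold decays, then jumps after firing
        fires += Y
    return fires

# Change detection idea: compare the firing signatures of two co-registered images
img1, img2 = np.random.rand(64, 64), np.random.rand(64, 64)
change = np.abs(pcnn_fire_map(img1) - pcnn_fire_map(img2))
```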


IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing | 2013

Multi-Modal and Multi-Temporal Data Fusion: Outcome of the 2012 GRSS Data Fusion Contest

Christian Berger; Michael Voltersen; Robert Eckardt; Jonas Eberle; Thomas Heyer; Nesrin Salepci; Sören Hese; Christiane Schmullius; Junyi Tao; Stefan Auer; Richard Bamler; Ken Ewald; Michael G. Gartley; John Jacobson; Alan T. Buswell; Qian Du; Fabio Pacifici

The 2012 Data Fusion Contest organized by the Data Fusion Technical Committee (DFTC) of the IEEE Geoscience and Remote Sensing Society (GRSS) aimed at investigating the potential use of very high spatial resolution (VHR) multi-modal/multi-temporal image fusion. Three different types of data sets, including spaceborne multi-spectral, spaceborne synthetic aperture radar (SAR), and airborne light detection and ranging (LiDAR) data collected over the downtown San Francisco area were distributed during the Contest. This paper highlights the three awarded research contributions which investigate (i) a new metric to assess urban density (UD) from multi-spectral and LiDAR data, (ii) simulation-based techniques to jointly use SAR and LiDAR data for image interpretation and change detection, and (iii) radiosity methods to improve surface reflectance retrievals of optical data in complex illumination environments. In particular, they demonstrate the usefulness of LiDAR data when fused with optical or SAR data. We believe these interesting investigations will stimulate further research in the related areas.


IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing | 2008

Monitoring Urban Land Cover in Rome, Italy, and Its Changes by Single-Polarization Multitemporal SAR Images

F. Del Frate; Fabio Pacifici; D. Solimini

This study contributes an assessment of the potential of single-polarization decametric synthetic aperture radar (SAR) images in classifying land cover within and around large urban areas and in monitoring their changes. The decision task is performed on a pixel basis and is carried out by supervised neural network algorithms fed by radar image features including backscattering intensity, coherence, and textural parameters. Two configurations are considered: a short-term classification and change detection scheme intended to provide information in near-real time, and a long-term scheme aimed at observing urban changes at year time scales. We use a pair of interferometric images for the short-term case, while the long-term exercise utilizes two interferometric pairs and a fifth single acquisition. The images were acquired by the ERS SAR in late winter, spring, and early summer over 836 square kilometers including Rome, Italy, and its surroundings. The accuracy of the short-term algorithm in discriminating seven types of surface is higher than 86%, while the accuracy of the long-term algorithm exceeds 88%. The many changes undergone by Rome from 1994 to 1999 have been identified by the postclassification comparison change detection procedure. The pixel-by-pixel analysis of the results has been carried out for a 160-square-kilometer test area, obtaining a correct detection rate above 82% (less than 18% missed alarms and 0.3% false alarms).
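A toy illustration of post-classification comparison, the change detection procedure referred to above, is sketched below; the two label maps are synthetic assumptions standing in for the 1994 and 1999 classification results.

```python
# Toy post-classification comparison: change is flagged wherever the class label
# differs between the two dates. The label maps below are synthetic assumptions.
import numpy as np

rng = np.random.default_rng(1)
classes_t1 = rng.integers(0, 7, (200, 200))   # hypothetical seven-class map, date 1
classes_t2 = classes_t1.copy()
classes_t2[50:80, 50:80] = 3                  # simulate a changed patch at date 2

change_mask = classes_t1 != classes_t2
transitions = list(zip(classes_t1[change_mask], classes_t2[change_mask]))  # from-to pairs
print("Changed pixels:", int(change_mask.sum()))
```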


Proceedings of the IEEE | 2015

Challenges and Opportunities of Multimodality and Data Fusion in Remote Sensing

M. Dalla Mura; Saurabh Prasad; Fabio Pacifici; Paolo Gamba; Jocelyn Chanussot; Jon Atli Benediktsson

Remote sensing is one of the most common ways to extract relevant information about Earth and our environment. Remote sensing acquisitions can be done by both active (synthetic aperture radar, LiDAR) and passive (optical and thermal range, multispectral and hyperspectral) devices. According to the sensor, a variety of information about the Earth's surface can be obtained. The data acquired by these sensors can provide information about the structure (optical, synthetic aperture radar), elevation (LiDAR), and material content (multispectral and hyperspectral) of the objects in the image. Once considered together, their complementarity can be helpful for characterizing land use (urban analysis, precision agriculture), detecting damage (e.g., in natural disasters such as floods, hurricanes, earthquakes, or oil spills at sea), and gaining insight into the potential exploitation of resources (oil fields, minerals). In addition, repeated acquisitions of a scene at different times allow one to monitor natural resources and environmental variables (vegetation phenology, snow cover), anthropogenic effects (urban sprawl, deforestation), and climate changes (desertification, coastal erosion), among others. In this paper, we sketch the current opportunities and challenges related to the exploitation of multimodal data for Earth observation. This is done by leveraging the outcomes of the Data Fusion Contests organized by the IEEE Geoscience and Remote Sensing Society since 2006. We report on the outcomes of these contests, presenting the multimodal sets of data made available to the community each year, the targeted applications, and an analysis of the submitted methods and results: How was multimodality considered and integrated in the processing chain? What were the improvements and new opportunities offered by the fusion? What were the objectives to be addressed and the reported solutions? And from this, what will be the next challenges?


IEEE Transactions on Geoscience and Remote Sensing | 2014

The Importance of Physical Quantities for the Analysis of Multitemporal and Multiangular Optical Very High Spatial Resolution Images

Fabio Pacifici; Nathan Longbotham; William J. Emery

The analysis of multitemporal very high spatial resolution imagery is too often limited to the sole use of pixel digital numbers, which do not accurately describe the observed targets between the various collections due to the effects of changing illumination, viewing geometries, and atmospheric conditions. This paper demonstrates, both qualitatively and quantitatively, that not only are physically based quantities necessary to consistently and efficiently analyze these data sets, but also that the angular information of the acquisitions should not be neglected, as it can provide unique features of the scenes being analyzed. The data set used is composed of 21 images acquired between 2002 and 2009 by QuickBird over the city of Denver, Colorado. The images were collected near the downtown area and include single-family houses, skyscrapers, apartment complexes, industrial buildings, roads/highways, urban parks, and bodies of water. Experiments show that atmospheric and geometric properties of the acquisitions substantially affect the pixel values and, more specifically, that the raw counts are significantly correlated with atmospheric visibility. Results of a 22-class urban land cover experiment show that an improvement of 0.374 in terms of Kappa coefficient can be achieved over the base case of raw pixels when surface reflectance values are combined with the angular decomposition of the time series.
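Since the reported improvement is expressed as a difference in Kappa coefficient, a short reminder of how Kappa is computed from reference and predicted labels is sketched below; the labels are synthetic and the use of scikit-learn is an illustrative choice, not part of the paper.

```python
# Computing the Kappa coefficient from reference and predicted class labels.
# The labels below are synthetic; in the paper Kappa compares 22-class land cover maps.
import numpy as np
from sklearn.metrics import cohen_kappa_score, confusion_matrix

rng = np.random.default_rng(2)
reference = rng.integers(0, 22, 10_000)
predicted = np.where(rng.random(10_000) < 0.7, reference, rng.integers(0, 22, 10_000))

kappa = cohen_kappa_score(reference, predicted)

# Equivalent computation from the confusion matrix
cm = confusion_matrix(reference, predicted)
po = np.trace(cm) / cm.sum()                                    # observed agreement
pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / cm.sum() ** 2    # chance agreement
kappa_manual = (po - pe) / (1 - pe)
```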


International Geoscience and Remote Sensing Symposium | 2011

Improving active learning methods using spatial information

Edoardo Pasolli; Farid Melgani; Devis Tuia; Fabio Pacifici; William J. Emery

The active learning process represents an interesting solution to the problem of training sample collection for the classification of remote sensing images. In this work, we propose a criterion based on spatial information that can be used in combination with a spectral criterion in order to improve the selection of training samples. Experimental results obtained on a very high resolution image show the effectiveness of regularization in the spatial domain and open challenging perspectives for planning ground campaigns.

Collaboration


Dive into Fabio Pacifici's collaborations.

Top Co-Authors

Nathan Longbotham, University of Colorado Boulder
William J. Emery, University of Colorado Boulder
Devis Tuia, École Polytechnique Fédérale de Lausanne
Jocelyn Chanussot, Centre national de la recherche scientifique
Qian Du, Mississippi State University
F. Del Frate, University of Rome Tor Vergata