Luisa Verdoliva
University of Naples Federico II
Publications
Featured research published by Luisa Verdoliva.
IEEE Transactions on Geoscience and Remote Sensing | 2012
Sara Parrilli; Mariana Poderico; Cesario Vincenzo Angelino; Luisa Verdoliva
We propose a novel despeckling algorithm for synthetic aperture radar (SAR) images based on the concepts of nonlocal filtering and wavelet-domain shrinkage. It follows the structure of the block-matching 3-D algorithm, recently proposed for additive white Gaussian noise denoising, but modifies its major processing steps in order to take into account the peculiarities of SAR images. A probabilistic similarity measure is used for the block-matching step, while the wavelet shrinkage is developed using an additive signal-dependent noise model and looking for the optimum local linear minimum-mean-square-error estimator in the wavelet domain. The proposed technique compares favorably with several state-of-the-art reference techniques, with better results both in terms of signal-to-noise ratio (on simulated speckled images) and perceived image quality.
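As a rough illustration of local linear MMSE estimation under a signal-dependent noise model, here is a minimal spatial-domain sketch (a Kuan-style filter, not the paper's wavelet-domain, block-matched implementation); the window size and the speckle variance `sigma_u2` are illustrative assumptions:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def llmmse_despeckle(y, win=7, sigma_u2=0.1):
    """Local linear MMSE despeckling for multiplicative speckle
    y = x * u, with E[u] = 1 and Var[u] = sigma_u2.
    A simplified spatial-domain stand-in for the wavelet-domain
    LLMMSE shrinkage used in SAR-BM3D."""
    pad = win // 2
    yp = np.pad(y, pad, mode='reflect')
    patches = sliding_window_view(yp, (win, win))
    m = patches.mean(axis=(-2, -1))          # local mean
    v = patches.var(axis=(-2, -1))           # local variance
    # signal variance implied by the multiplicative noise model
    vx = np.maximum((v - (m ** 2) * sigma_u2) / (1.0 + sigma_u2), 0.0)
    # shrinkage factor: 0 on flat areas (full smoothing), ~1 on edges
    k = vx / np.maximum(vx + (m ** 2) * sigma_u2, 1e-12)
    return m + k * (y - m)
```

On homogeneous areas the factor `k` collapses to the local mean, while high-activity areas are left nearly untouched, which is the behavior the LLMMSE criterion buys over plain averaging.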
IEEE Transactions on Geoscience and Remote Sensing | 2014
Gerardo Di Martino; Mariana Poderico; Giovanni Poggi; Daniele Riccio; Luisa Verdoliva
Objective performance assessment is a key enabling factor for the development of ever better image processing algorithms. In synthetic aperture radar (SAR) despeckling, however, the lack of speckle-free images precludes the use of reliable full-reference measures, leaving the comparison among competing techniques on shaky ground. In this paper, we propose a new framework for the objective (quantitative) assessment of SAR despeckling techniques, based on the simulation of SAR images of canonical scenes. Each image is generated using a complete SAR simulator that includes proper physical models for the sensed surface, the scattering, and the radar operational mode. Therefore, within the limits of the simulation models, the employed simulation procedure generates reliable and meaningful SAR images with controllable parameters. By simulating multiple SAR images as different instances of the same scene, we can therefore obtain a true multilook full-resolution SAR image with an arbitrary number of looks, thus generating (by definition) the closest object to a clean reference image. Based on this concept, we build a full performance assessment framework by choosing a suitable set of canonical scenes and corresponding objective measures on the SAR images that account for both speckle suppression and feature preservation. We test our framework by studying the performance of a representative set of actual despeckling algorithms; we verify that the quantitative indications given by the numerical measures are always fully consistent with the rationale specific to each despeckling technique, agree strongly with qualitative (expert) visual inspection, and provide insight into SAR despeckling approaches.
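The multilook-reference construction at the heart of the framework can be sketched as follows, assuming unit-mean exponential single-look intensity speckle; the function names and the flat test scene are illustrative, not the paper's simulator:

```python
import numpy as np

def multilook_reference(clean, n_looks, rng):
    """Average n_looks independent single-look speckled instances of the
    same scene -- the paper's device for building a quasi-clean reference.
    Single-look intensity speckle is modeled as unit-mean exponential."""
    acc = np.zeros_like(clean, dtype=float)
    for _ in range(n_looks):
        acc += clean * rng.exponential(1.0, size=clean.shape)
    return acc / n_looks

def enl(img):
    """Equivalent number of looks over a homogeneous area: mean^2 / var."""
    return img.mean() ** 2 / img.var()
```

Averaging independent instances makes the speckle variance fall as 1/N, so the equivalent number of looks grows linearly and the averaged image approaches a clean reference.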
IEEE Signal Processing Magazine | 2014
Charles-Alban Deledalle; Loïc Denis; Giovanni Poggi; Florence Tupin; Luisa Verdoliva
Most current synthetic aperture radar (SAR) systems offer high-resolution images featuring polarimetric, interferometric, multifrequency, multiangle, or multidate information. SAR images, however, suffer from strong fluctuations due to the speckle phenomenon inherent to coherent imagery. Hence, all derived parameters display strong signal-dependent variance, preventing the full exploitation of such a wealth of information. Even with the abundance of despeckling techniques proposed over the last three decades, there is still a pressing need for new methods that can handle this variety of SAR products and efficiently eliminate speckle without sacrificing the spatial resolution. Recently, patch-based filtering has emerged as a highly successful concept in image processing. By exploiting the redundancy between similar patches, it succeeds in suppressing most of the noise with good preservation of texture and thin structures. Extensions of patch-based methods to speckle reduction and joint exploitation of multichannel SAR images (interferometric, polarimetric, or PolInSAR data) have led to the best denoising performance in radar imaging to date. We give a comprehensive survey of patch-based nonlocal filtering of SAR images, focusing on the two main ingredients of the methods: measuring patch similarity and estimating the parameters of interest from a collection of similar patches.
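The first ingredient the survey identifies, a patch-similarity measure suited to speckle, can be illustrated by contrasting the Gaussian sum of squared differences with a generalized-likelihood-ratio dissimilarity of the kind the authors discuss; this is a simplified single-look intensity version, not the survey's exact formulation:

```python
import numpy as np

def ssd(p, q):
    """Patch dissimilarity for additive Gaussian noise: sum of squared
    differences. Scale-dependent, so unsuited to multiplicative speckle."""
    return float(np.sum((p - q) ** 2))

def glr_speckle(p, q):
    """Generalized-likelihood-ratio dissimilarity for single-look intensity
    speckle: zero iff p == q, and invariant to a common scaling of both
    patches, so bright and dark areas are compared on equal footing."""
    return float(np.sum(np.log((p + q) ** 2 / (4.0 * p * q))))
```

The scale invariance is the key property: two speckled draws of the same bright reflectivity should be judged as similar as two draws of the same dark one, which the plain SSD gets wrong.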
IEEE Transactions on Information Forensics and Security | 2014
Giovanni Chierchia; Giovanni Poggi; Carlo Sansone; Luisa Verdoliva
Graphics editing programs of the last generation provide ever more powerful tools, which allow the retouching of digital images while leaving little or no trace of tampering. The reliable detection of image forgeries requires, therefore, a battery of complementary tools that exploit different image properties. Techniques based on the photo-response non-uniformity (PRNU) noise are among the most valuable such tools, since they do not detect the inserted object but rather the absence of the camera PRNU, a sort of camera fingerprint, dealing successfully with forgeries that elude most other detection strategies. In this paper, we propose a new approach to detect image forgeries using sensor pattern noise. Casting the problem in terms of Bayesian estimation, we use a suitable Markov random field prior to model the strong spatial dependences of the source, and take decisions jointly on the whole image rather than individually for each pixel. Modern convex optimization techniques are then adopted to achieve a globally optimal solution, and the PRNU estimation is improved by resorting to nonlocal denoising. Large-scale experiments on simulated and real forgeries show that the proposed technique largely improves upon the current state of the art, and that it can be applied with success to a wide range of practical situations.
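A toy sketch of the underlying PRNU evidence (without the paper's Markov random field prior and convex optimization, which operate on top of such evidence): correlate the image's noise residual with the camera fingerprint blockwise, and treat low-correlation blocks as forgery candidates. The block size and the synthetic residual model are illustrative assumptions:

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two equally sized arrays."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

def prnu_block_map(residual, fingerprint, block=16):
    """Blockwise correlation of an image's noise residual with the camera
    fingerprint; blocks where the PRNU is absent score near zero. A toy
    stand-in for the paper's joint, MRF-regularized decision."""
    h, w = residual.shape
    out = np.zeros((h // block, w // block))
    for i in range(h // block):
        for j in range(w // block):
            sl = np.s_[i * block:(i + 1) * block, j * block:(j + 1) * block]
            out[i, j] = ncc(residual[sl], fingerprint[sl])
    return out
```

Deciding each block independently on such a noisy map is exactly what the paper improves upon by modeling the spatial dependence of the tampered region.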
IEEE Transactions on Information Forensics and Security | 2015
Davide Cozzolino; Giovanni Poggi; Luisa Verdoliva
We propose a new algorithm for the accurate detection and localization of copy-move forgeries, based on rotation-invariant features computed densely on the image. Dense-field techniques proposed in the literature guarantee a superior performance with respect to their keypoint-based counterparts, at the price of a much higher processing time, mostly due to the feature matching phase. To overcome this limitation, we resort here to a fast approximate nearest-neighbor search algorithm, PatchMatch, especially suited for the computation of dense fields over images. We adapt the matching algorithm to deal efficiently with invariant features, so as to achieve higher robustness with respect to rotations and scale changes. Moreover, leveraging the smoothness of the output field, we implement a simplified and reliable postprocessing procedure. The experimental analysis, conducted on databases available online, proves the proposed technique to be at least as accurate, generally more robust, and typically much faster than the state-of-the-art dense-field references.
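A minimal, translation-only PatchMatch in the spirit of the matching engine the paper builds on (random initialization, scan-order propagation, exponentially shrinking random search); the paper's actual version operates on rotation/scale-invariant features, which this sketch omits:

```python
import numpy as np

def patchmatch(src, dst, p=3, iters=4, seed=0):
    """Minimal PatchMatch: approximate nearest-neighbor field from src
    patches to dst patches. Grayscale, translation-only sketch."""
    rng = np.random.default_rng(seed)
    h, w = src.shape[0] - p + 1, src.shape[1] - p + 1

    def cost(y, x, ny, nx):
        d = src[y:y + p, x:x + p] - dst[ny:ny + p, nx:nx + p]
        return float((d * d).sum())

    # random initialization of the nearest-neighbor field
    nnf = np.stack([rng.integers(0, h, (h, w)),
                    rng.integers(0, w, (h, w))], axis=-1)
    best = np.array([[cost(y, x, *nnf[y, x]) for x in range(w)]
                     for y in range(h)])
    for it in range(iters):
        ys = range(h) if it % 2 == 0 else range(h - 1, -1, -1)
        xs = range(w) if it % 2 == 0 else range(w - 1, -1, -1)
        step = 1 if it % 2 == 0 else -1
        for y in ys:
            for x in xs:
                # propagation: reuse neighbors' offsets shifted by one pixel
                for dy, dx in ((step, 0), (0, step)):
                    py, px = y - dy, x - dx
                    if 0 <= py < h and 0 <= px < w:
                        ny = min(max(nnf[py, px, 0] + dy, 0), h - 1)
                        nx = min(max(nnf[py, px, 1] + dx, 0), w - 1)
                        c = cost(y, x, ny, nx)
                        if c < best[y, x]:
                            nnf[y, x], best[y, x] = (ny, nx), c
                # random search around the current best, halving the radius
                r = max(h, w)
                while r >= 1:
                    ny = int(np.clip(nnf[y, x, 0] + rng.integers(-r, r + 1), 0, h - 1))
                    nx = int(np.clip(nnf[y, x, 1] + rng.integers(-r, r + 1), 0, w - 1))
                    c = cost(y, x, ny, nx)
                    if c < best[y, x]:
                        nnf[y, x], best[y, x] = (ny, nx), c
                    r //= 2
    return nnf, best
```

Propagation is what makes the search fast on images: once one pixel locks onto a good offset, its neighbors inherit it in a single pass, which is exactly the coherence the copy-move setting exhibits.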
IEEE Transactions on Information Forensics and Security | 2015
Diego Gragnaniello; Giovanni Poggi; Carlo Sansone; Luisa Verdoliva
Biometric authentication systems are quite vulnerable to sophisticated spoofing attacks. To keep a good level of security, reliable spoofing detection tools are necessary, preferably implemented as software modules. The research in this field is very active, with local descriptors, based on the analysis of microtextural features, gaining more and more popularity because of their excellent performance and flexibility. This paper aims at assessing the potential of these descriptors for the liveness detection task in authentication systems based on various biometric traits: fingerprint, iris, and face. Besides compact descriptors based on the independent quantization of features, already considered for some liveness detection tasks, we study promising descriptors based on the joint quantization of rich local features. The experimental analysis, conducted on publicly available datasets in a fully reproducible manner, confirms the potential of these tools for biometric applications and points out possible lines of development toward further improvements.
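One classic member of this family of microtextural descriptors is the local binary pattern (LBP); the sketch below is a generic 8-neighbor LBP histogram, an illustration of the descriptor class rather than the specific variants evaluated in the paper:

```python
import numpy as np

def lbp_histogram(img):
    """8-neighbor Local Binary Pattern histogram. Each interior pixel is
    encoded by thresholding its 8 neighbors against it (one bit each);
    the normalized 256-bin histogram of codes is the texture descriptor."""
    c = img[1:-1, 1:-1]
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(shifts):
        n = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        code |= (n >= c).astype(np.uint8) << bit
    h = np.bincount(code.ravel(), minlength=256).astype(float)
    return h / h.sum()
```

A live trait and a printed or replayed spoof of it tend to produce measurably different code histograms, which is what a classifier trained on such descriptors exploits.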
Remote Sensing | 2016
Giuseppe Masi; Davide Cozzolino; Luisa Verdoliva; Giuseppe Scarpa
A new pansharpening method is proposed, based on convolutional neural networks. We adapt a simple and effective three-layer architecture, recently proposed for super-resolution, to the pansharpening problem. Moreover, to improve performance without increasing complexity, we augment the input by including several maps of nonlinear radiometric indices typical of remote sensing. Experiments on three representative datasets show the proposed method to provide very promising results, largely competitive with the current state of the art in terms of both full-reference and no-reference metrics, as well as upon visual inspection.
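The three-layer architecture can be sketched as a plain forward pass (channels-first, 'same' convolutions): the input stacks the upsampled multispectral bands with the panchromatic band (and, in the paper, radiometric-index maps), and the output has one channel per multispectral band. Layer widths, the 9-1-5 kernel pattern of the super-resolution network it adapts, and the random weights are assumptions here, not the trained model:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def conv2d(x, w, b):
    """'Same' 2-D convolution, channels-first: x (C,H,W), w (K,C,kh,kw)."""
    K, C, kh, kw = w.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((0, 0), (ph, ph), (pw, pw)))
    win = sliding_window_view(xp, (kh, kw), axis=(1, 2))  # (C,H,W,kh,kw)
    return np.einsum('chwij,kcij->khw', win, w) + b[:, None, None]

def pnn_forward(x, params):
    """Forward pass of a three-layer SRCNN-style network adapted to
    pansharpening. params: three (weights, bias) pairs."""
    (w1, b1), (w2, b2), (w3, b3) = params
    h = np.maximum(conv2d(x, w1, b1), 0.0)   # feature extraction + ReLU
    h = np.maximum(conv2d(h, w2, b2), 0.0)   # nonlinear mapping + ReLU
    return conv2d(h, w3, b3)                 # band reconstruction
```

Because every layer is convolutional, the same network applies to images of any size, which is what makes the adaptation from super-resolution to full remote-sensing scenes straightforward.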
IEEE Geoscience and Remote Sensing Letters | 2014
Davide Cozzolino; Sara Parrilli; Giuseppe Scarpa; Giovanni Poggi; Luisa Verdoliva
Despeckling techniques based on the nonlocal approach provide an excellent performance, but also exhibit a remarkable complexity, unsuited to time-critical applications. In this letter, we propose a fast nonlocal despeckling filter. Starting from the recent SAR-BM3D algorithm, we propose to use a variable-size search area driven by the activity level of each patch, and a probabilistic early-termination approach that exploits speckle statistics in order to speed up block matching. Finally, the use of look-up tables helps to further reduce the processing costs. The proposed technique combines excellent performance and low complexity, as demonstrated on both simulated and real-world SAR images and on a dedicated SAR despeckling benchmark.
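The early-termination idea can be sketched deterministically: accumulate the block distance row by row and abort as soon as the partial sum exceeds a threshold, since the candidate can no longer enter the best-match list. The paper's actual test is probabilistic and grounded in speckle statistics; this absolute-threshold variant only conveys the flavor:

```python
import numpy as np

def match_cost_early(p, q, tau):
    """Block-matching distance with early termination: accumulate the
    per-row squared distance and stop once the partial sum exceeds tau.
    Returns (accumulated cost, whether the full block was scanned)."""
    acc = 0.0
    for rp, rq in zip(p, q):
        acc += float(((rp - rq) ** 2).sum())
        if acc > tau:
            return acc, False  # rejected before the full scan
    return acc, True
```

Most candidate blocks in a search area are poor matches, so they are rejected after a few rows and the average per-candidate cost drops well below a full block comparison.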
international conference on image processing | 2004
Marco Cagnazzo; Giovanni Poggi; Luisa Verdoliva; Andrea Zinicola
We present a new technique for the compression of remote-sensing hyperspectral images based on the wavelet transform and zerotree coding of coefficients. In order to improve encoding efficiency, the image is first segmented into a small number of regions with homogeneous texture. Then, a shape-adaptive wavelet transform is carried out on each region and the resulting coefficients are finally encoded by a shape-adaptive version of SPIHT. Thanks to the segmentation map (sent as side information), region boundaries are faithfully preserved and selective encoding strategies can be easily implemented. In addition, since each region's texture is now homogeneous, it can be encoded more efficiently.
IEEE Transactions on Image Processing | 2007
Marco Cagnazzo; Giovanni Poggi; Luisa Verdoliva
We propose a new efficient region-based scheme for the compression of multispectral remote-sensing images. The region-based description of an image comprises a segmentation map, which singles out the relevant regions and provides their main features, followed by the detailed (possibly lossless) description of each region. The map conveys information on the image structure and could even be the only item of interest for the user; moreover, it enables the user to perform a selective download of the regions of interest, or can be used for high-level data mining and retrieval applications. This approach, with the multiple pieces of information required, may seem inherently inefficient. The goal of this research is to show that, by carefully selecting the appropriate segmentation and coding tools, region-based compression of multispectral images can also be effective in a rate-distortion sense, thus providing an image description that is both insightful and efficient. To this end, we define a generic coding scheme, based on Bayesian image segmentation and on transform coding, where several key design choices, however, are left open for optimization, from the type of transform to the rate allocation procedure, and so on. Then, through an extensive experimental phase on real-world multispectral images, we gain insight into such key choices, and finally single out an efficient and robust coding scheme, with Bayesian segmentation, class-adaptive Karhunen-Loeve spectral transform, and shape-adaptive wavelet spatial transform, which outperforms state-of-the-art and carefully tuned conventional techniques, such as JPEG-2000 multicomponent or SPIHT-based coders.
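The Karhunen-Loeve spectral transform step reduces, per class, to a PCA across the band dimension; below is a minimal sketch applied to a whole multispectral cube rather than per segmented class, as the paper's class-adaptive scheme does:

```python
import numpy as np

def klt_spectral(bands):
    """Karhunen-Loeve (PCA) spectral transform across the band dimension.
    bands: (B, H, W) multispectral cube. Returns the decorrelated
    components plus the transform and mean needed to invert."""
    B = bands.shape[0]
    X = bands.reshape(B, -1)
    mu = X.mean(axis=1, keepdims=True)
    C = np.cov(X)                       # B x B spectral covariance
    vals, vecs = np.linalg.eigh(C)
    order = np.argsort(vals)[::-1]      # sort by decreasing variance
    T = vecs[:, order].T
    Y = T @ (X - mu)
    return Y.reshape(bands.shape), T, mu
```

After the transform the spectral components are uncorrelated, with energy packed into the first few, so a spatial coder applied component by component spends its bit budget far more efficiently than on the raw, highly correlated bands.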