Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Daniele Cerra is active.

Publication


Featured research published by Daniele Cerra.


Journal of Visual Communication and Image Representation | 2012

A fast compression-based similarity measure with applications to content-based image retrieval

Daniele Cerra; Mihai Datcu

Compression-based similarity measures are effectively employed in applications on diverse data types with an essentially parameter-free approach. Nevertheless, there are problems in applying these techniques to medium-to-large datasets which have seldom been addressed. This paper proposes a similarity measure based on compression with dictionaries, the Fast Compression Distance (FCD), which reduces the complexity of these methods without degradation in performance. On this basis, a content-based color image retrieval system is defined which can be compared to state-of-the-art methods based on invariant color features. Through the FCD, a better understanding of compression-based techniques is achieved by performing experiments on datasets larger than the ones analyzed so far in the literature.
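
A minimal sketch of a dictionary-intersection distance in the spirit of the FCD, assuming simple LZW-style pattern dictionaries; the paper's exact dictionary construction and search strategy may differ.

```python
def lzw_patterns(data: bytes) -> set:
    """Collect the multi-byte patterns an LZW-style pass adds to its dictionary."""
    dictionary = {bytes([b]) for b in range(256)}
    patterns = set()
    w = b""
    for byte in data:
        wc = w + bytes([byte])
        if wc in dictionary:
            w = wc
        else:
            dictionary.add(wc)
            patterns.add(wc)
            w = bytes([byte])
    return patterns

def fcd(x: bytes, y: bytes) -> float:
    """Dictionary-intersection distance: 0 for identical dictionaries, 1 for disjoint ones."""
    dx, dy = lzw_patterns(x), lzw_patterns(y)
    return (len(dx) - len(dx & dy)) / max(len(dx), 1)
```

In a retrieval setting, the dictionary of each database image would be extracted once offline, so answering a query only requires the intersection step.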


IEEE Geoscience and Remote Sensing Letters | 2014

Noise Reduction in Hyperspectral Images Through Spectral Unmixing

Daniele Cerra; Rupert Müller; Peter Reinartz

Spectral unmixing and denoising of hyperspectral images have traditionally been regarded as separate problems. By considering the physical properties of a mixed spectrum, this letter introduces unmixing-based denoising, a supervised methodology representing any pixel as a linear combination of reference spectra in a hyperspectral scene. Such spectra are related to some classes of interest and exhibit negligible noise influences, as they are averaged over areas for which ground truth is available. After the unmixing process, the residual vector is mostly composed of the contributions of uninteresting materials, unwanted atmospheric influences, and sensor-induced noise, and is thus ignored in the reconstruction of each spectrum. The proposed method, in spite of its simplicity, is able to remove noise effectively in spectral bands with both low and high signal-to-noise ratio. Experiments show that this method could be used to retrieve spectral information from corrupted bands, such as the ones placed at the edge between ultraviolet and visible light frequencies, which are usually discarded in practical applications. The proposed method achieves better results in terms of visual quality than its competitors if the mean squared error is kept constant. This leads to questioning the validity of the mean squared error as a predictor of image quality in remote sensing applications.
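
A minimal sketch of the reconstruction step described above, assuming a matrix of reference spectra averaged over ground-truth areas; plain least squares stands in here for brevity, where constrained unmixing variants would follow the same pattern.

```python
import numpy as np

def unmixing_denoise(cube: np.ndarray, endmembers: np.ndarray) -> np.ndarray:
    """Reconstruct each pixel as a linear combination of reference spectra.

    cube:        (rows, cols, n_bands) noisy hyperspectral image
    endmembers:  (n_bands, n_classes) reference spectra, assumed nearly noise-free
    """
    pixels = cube.reshape(-1, cube.shape[-1]).T            # (n_bands, n_pixels)
    # Least-squares abundances; the residual (sensor noise, atmospheric
    # effects, minor materials) is discarded in the reconstruction.
    abundances, *_ = np.linalg.lstsq(endmembers, pixels, rcond=None)
    return (endmembers @ abundances).T.reshape(cube.shape)
```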


IEEE Geoscience and Remote Sensing Letters | 2010

Algorithmic Information Theory-Based Analysis of Earth Observation Images: An Assessment

Daniele Cerra; Alexandre Mallet; Lionel Gueguen; Mihai Datcu

Earth observation image-understanding methodologies may be hindered by the assumed data models and estimated parameters on which they are often heavily dependent. First, the definition of the parameters may negatively affect the quality of the analysis: the parameters may not capture all relevant aspects of the data, and those that turn out to be superfluous or inaccurately tuned may introduce nuisance into the analysis. Furthermore, the diversity of the data in terms of sensor type and spatial, spectral, and radiometric resolution, together with the variety and regularity of the observed scenes, makes it difficult to establish valid and robust statistical models to describe them. This letter proposes algorithmic information theory-based analysis as a valid solution to overcome these limitations. We present different applications on satellite images, i.e., clustering, classification, artifact detection, and image time series mining, showing the generalization power of these parameter-free, data-driven methods based on computational complexity analysis.


Data Compression Conference | 2008

A Model Conditioned Data Compression Based Similarity Measure

Daniele Cerra; Mihai Datcu

Many methodologies and similarity measures based on data compression have recently been introduced to compute similarities between general kinds of data. Two important similarity indices are the normalized information distance (NID), with its approximation the normalized compression distance (NCD), and the pattern recognition based on data compression (PRDC). At first sight NCD and PRDC are quite different: the former is a direct metric, while the latter is a methodology which computes a compression distance with an intermediate step of encoding files into texts. In spite of this, it is possible to demonstrate that both are based on estimates of Kolmogorov complexity, a property which was known for the former but not for the latter. This results in the definition of a new measure, the model conditioned data compression based similarity measure (McDCSM), a modified version of PRDC, which is the topic of this paper.
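
For context, the NCD mentioned above can be approximated with any off-the-shelf compressor; a minimal sketch using zlib (the McDCSM itself, being a modified PRDC, is not reproduced here).

```python
import zlib

def c(data: bytes) -> int:
    """Compressed size in bytes, used as a crude estimate of Kolmogorov complexity."""
    return len(zlib.compress(data, 9))

def ncd(x: bytes, y: bytes) -> float:
    """Normalized Compression Distance, approximating the normalized information distance."""
    cx, cy = c(x), c(y)
    return (c(x + y) - min(cx, cy)) / max(cx, cy)
```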


International Geoscience and Remote Sensing Symposium | 2014

Unmixing-based Denoising for Destriping and Inpainting of Hyperspectral Images

Daniele Cerra; Rupert Müller; Peter Reinartz

Unmixing-based denoising exploits spectral unmixing results to selectively recover bands affected by a low signal-to-noise ratio in hyperspectral images. This paper proposes to apply this algorithm, which operates pixelwise, to the inpainting of corrupted pixels and the removal of drop-out artifacts in hyperspectral scenes. The reported experiments show a low reconstruction error for the restored spectra and a high visual quality of the processed images, outperforming state-of-the-art methods in terms of reconstruction error.
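
A minimal sketch of the pixelwise inpainting idea, assuming the abundances can be estimated from the reliable bands alone; as above, plain least squares stands in for the unmixing step.

```python
import numpy as np

def inpaint_pixel(pixel: np.ndarray, endmembers: np.ndarray,
                  corrupted: np.ndarray) -> np.ndarray:
    """Recover corrupted bands of one pixel spectrum from the clean bands.

    pixel:       (n_bands,) spectrum with drop-outs or stripes
    endmembers:  (n_bands, n_classes) reference spectra
    corrupted:   (n_bands,) boolean mask of the unreliable bands
    """
    ok = ~corrupted
    # Abundances are estimated on the reliable bands only ...
    abundances, *_ = np.linalg.lstsq(endmembers[ok], pixel[ok], rcond=None)
    # ... and the corrupted bands are synthesized from the reference spectra.
    restored = pixel.copy()
    restored[corrupted] = endmembers[corrupted] @ abundances
    return restored
```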


Pattern Recognition Letters | 2014

Authorship analysis based on data compression

Daniele Cerra; Mihai Datcu; Peter Reinartz

This paper proposes to perform authorship analysis using the Fast Compression Distance (FCD), a similarity measure based on compression with dictionaries directly extracted from the written texts. The FCD computes a similarity between two documents through an effective binary search on the intersection set between the two related dictionaries. In the reported experiments the proposed method is applied to documents which are heterogeneous in style, written in five different languages and coming from different historical periods. Results are comparable to the state of the art and outperform traditional compression-based methods.
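
A small usage sketch of how such a distance could drive attribution, reusing the hypothetical fcd function from the earlier sketch and an assumed mapping from candidate authors to sample texts.

```python
def attribute(unknown: bytes, corpora: dict[str, bytes]) -> str:
    """Assign the unknown text to the author whose corpus is closest under the FCD."""
    return min(corpora, key=lambda author: fcd(unknown, corpora[author]))
```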


Entropy | 2011

Algorithmic Relative Complexity

Daniele Cerra; Mihai Datcu

Information content and compression are tightly related concepts that can be addressed through both classical and algorithmic information theories, on the basis of Shannon entropy and Kolmogorov complexity, respectively. The definition of several entities in Kolmogorov’s framework relies upon ideas from classical information theory, and these two approaches share many common traits. In this work, we expand the relations between these two frameworks by introducing algorithmic cross-complexity and relative complexity, counterparts of the cross-entropy and relative entropy (or Kullback-Leibler divergence) found in Shannon’s framework. We define the cross-complexity of an object x with respect to another object y as the amount of computational resources needed to specify x in terms of y, and the complexity of x related to y as the compression power which is lost when adopting such a description for x, compared to the shortest representation of x. Properties of analogous quantities in classical information theory hold for these new concepts. As these notions are incomputable, a suitable approximation based upon data compression is derived to enable the application to real data, yielding a divergence measure applicable to any pair of strings. Example applications are outlined, involving authorship attribution and satellite image classification, as well as a comparison to similar established techniques.
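
A possible compression-based approximation in the spirit of these definitions, assuming zlib's preset-dictionary mechanism as the way of "describing x in terms of y"; the estimator actually derived in the paper may differ.

```python
import zlib

def compressed_size(data: bytes, zdict: bytes = b"") -> int:
    """Size of data after zlib compression, optionally primed with a preset dictionary."""
    if zdict:
        co = zlib.compressobj(level=9, zdict=zdict[-32768:])  # zlib caps preset dictionaries at 32 KiB
    else:
        co = zlib.compressobj(level=9)
    return len(co.compress(data) + co.flush())

def cross_complexity(x: bytes, y: bytes) -> int:
    """Approximate cost of specifying x in terms of y."""
    return compressed_size(x, zdict=y)

def relative_complexity(x: bytes, y: bytes) -> int:
    """Compression power lost when describing x through y rather than on its own."""
    return cross_complexity(x, y) - compressed_size(x)
```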


Entropy | 2013

Expanding the Algorithmic Information Theory Frame for Applications to Earth Observation

Daniele Cerra; Mihai Datcu

Recent years have witnessed an increased interest in compression-based methods and their applications to remote sensing, as these follow a data-driven, parameter-free approach and can thus be successfully employed in several applications, especially in image information mining. This paper expands the algorithmic information theory frame on which these methods are based. On the one hand, algorithms originally defined in the pattern matching domain are reformulated, allowing a better understanding of the available compression-based tools for remote sensing applications. On the other hand, the use of existing compression algorithms is proposed to store satellite images with added semantic value.


Remote Sensing | 2016

Cultural Heritage Sites in Danger—Towards Automatic Damage Detection from Space

Daniele Cerra; Simon Plank; Vasiliki Lysandrou; Jiaojiao Tian

The intentional damage to local Cultural Heritage sites carried out in recent months by the Islamic State has received wide coverage from the media worldwide. Earth Observation data provide important information to assess this damage in such non-accessible areas, and automated image processing techniques are needed to speed up the analysis if a fast response is desired. This paper shows the first results of applying fast and robust change detection techniques to sensitive areas, based on the extraction of textural information and robust differences of brightness values computed between pre- and post-disaster satellite images. A map highlighting potentially damaged buildings is derived, which could help experts assess the damage to the Cultural Heritage sites of interest in a timely manner. Encouraging results are obtained for two archaeological sites in Syria and Iraq.
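
An illustrative sketch of a texture-plus-brightness change map on co-registered pre- and post-event images; the window size, median normalization, and thresholds are assumptions for the example, not the exact pipeline of the paper.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def damage_map(pre: np.ndarray, post: np.ndarray, win: int = 7,
               tex_thr: float = 0.2, bri_thr: float = 0.2) -> np.ndarray:
    """Flag pixels whose local texture or brightness changed strongly between two images."""
    def local_std(img):
        mean = uniform_filter(img, win)
        mean_sq = uniform_filter(img * img, win)
        return np.sqrt(np.maximum(mean_sq - mean * mean, 0.0))

    # Robust radiometric normalization, so that global illumination
    # differences between the two acquisitions do not dominate.
    pre_n = pre.astype(float) / np.median(pre)
    post_n = post.astype(float) / np.median(post)

    texture_change = np.abs(local_std(post_n) - local_std(pre_n))
    brightness_change = np.abs(post_n - pre_n)
    return (texture_change > tex_thr) | (brightness_change > bri_thr)
```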


Data Compression Conference | 2009

Algorithmic Cross-Complexity and Relative Complexity

Daniele Cerra; Mihai Datcu

Information content and compression are tightly related concepts that can be addressed by classical and algorithmic information theory. Several entities in the latter have been defined relying upon notions of the former, such as entropy and mutual information, since the basic concepts of these two approaches share many common traits. In this work we further expand this parallelism by defining the algorithmic versions of cross-entropy and relative entropy (or Kullback-Leibler divergence), two well-known concepts in classical information theory. We define the cross-complexity of an object x with respect to another object y as the amount of computational resources needed to specify x in terms of y, and the complexity of x related to y as the compression power which is lost when using such a description for x, with respect to its shortest representation. Since the main drawback of these concepts is their incomputability, a suitable approximation based on data compression is derived for both and applied to real data. This allows us to improve the results obtained by similar previous methods which were intuitively defined.

Collaboration


Dive into Daniele Cerra's collaborations.

Top Co-Authors

Mihai Datcu

German Aerospace Center

Diofantos G. Hadjimitsis

Cyprus University of Technology

Athos Agapiou

Cyprus University of Technology

Vasiliki Lysandrou

Cyprus University of Technology

Nicola Masini

National Research Council

Rosa Lasaponara

National Research Council
