Valentina Conotter
University of Trento
Publications
Featured research published by Valentina Conotter.
acm sigmm conference on multimedia systems | 2015
Duc-Tien Dang-Nguyen; Cecilia Pasquini; Valentina Conotter; Giulia Boato
Digital forensics is a relatively new research area that aims to authenticate digital media by detecting possible digital forgeries. Indeed, the ever-increasing availability of multimedia data on the web, coupled with the great advances in computer graphics tools, makes modifying an image and creating visually compelling forgeries an easy task for any user. This in turn creates the need for reliable tools to validate the trustworthiness of the represented information. In this context, we present RAISE, a large dataset of 8156 high-resolution raw images depicting various subjects and scenarios, properly annotated and available together with accompanying metadata. Such a wide collection of untouched and diverse data is intended to become a powerful resource for, but not limited to, forensic researchers, providing a common benchmark for the fair comparison, testing and evaluation of existing and next-generation forensic algorithms. In this paper we describe how RAISE has been collected and organized, discuss how digital image forensics and many other multimedia research areas may benefit from this new publicly available benchmark dataset, and test a very recent forensic technique for JPEG compression detection.
IEEE Transactions on Information Forensics and Security | 2012
Valentina Conotter; James F. O'Brien; Hany Farid
We describe a geometric technique to detect physically implausible trajectories of objects in video sequences. This technique explicitly models the three-dimensional ballistic motion of objects in free flight and the two-dimensional projection of the trajectory into the image plane of a static or moving camera. Deviations from this model provide evidence of manipulation. The technique assumes that the object's trajectory is substantially influenced only by gravity and that the image of the object's center of mass can be determined from the images, and requires that any camera motion can be estimated from background elements. The computational requirements of the algorithm are modest, and any detected inconsistencies can be illustrated in an intuitive, geometric fashion. We demonstrate the efficacy of this analysis on videos of our own creation and on videos obtained from video-sharing websites.
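A minimal sketch of the underlying consistency check, reduced to one image dimension: fit a parabola to a tracked trajectory and use the fit residual as evidence of implausible motion. The function names, data, and threshold are illustrative, not the paper's full 3-D model.

```python
def fit_parabola(ts, ys):
    """Least-squares fit of y = a*t^2 + b*t + c via the 3x3 normal equations."""
    n = len(ts)
    s = [sum(t ** k for t in ts) for k in range(5)]                 # s[k] = sum t^k
    r = [sum(y * t ** k for t, y in zip(ts, ys)) for k in range(3)]  # r[k] = sum y*t^k
    A = [[s[4], s[3], s[2]],
         [s[3], s[2], s[1]],
         [s[2], s[1], n]]
    b = [r[2], r[1], r[0]]
    # Gaussian elimination with partial pivoting.
    for i in range(3):
        p = max(range(i, 3), key=lambda k: abs(A[k][i]))
        A[i], A[p] = A[p], A[i]
        b[i], b[p] = b[p], b[i]
        for k in range(i + 1, 3):
            f = A[k][i] / A[i][i]
            for j in range(i, 3):
                A[k][j] -= f * A[i][j]
            b[k] -= f * b[i]
    x = [0.0, 0.0, 0.0]
    for i in range(2, -1, -1):
        x[i] = (b[i] - sum(A[i][j] * x[j] for j in range(i + 1, 3))) / A[i][i]
    return x  # a, b, c

def max_residual(ts, ys):
    """Worst deviation of the observations from the best-fit ballistic arc."""
    a, b, c = fit_parabola(ts, ys)
    return max(abs(y - (a * t * t + b * t + c)) for t, y in zip(ts, ys))
```

An untampered free-flight track yields a near-zero residual; displacing even one tracked point produces a large one.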
IEEE Transactions on Information Forensics and Security | 2009
Giulia Boato; Valentina Conotter; F.G.B. De Natale; Claudio Fontanari
This paper presents a novel and flexible benchmarking tool based on genetic algorithms (GA) and designed to assess the robustness of any digital image watermarking system. The main idea is to evaluate robustness in terms of perceptual quality, measured by the weighted peak signal-to-noise ratio. Through a stochastic approach, we optimize this quality metric by finding the minimal degradation that needs to be introduced into a marked image in order to remove the embedded watermark. Given a set of attacks, chosen according to the considered application scenario, the GA optimizes the parameters assigned to each processing operation in order to obtain an unmarked image with perceptual quality as high as possible. Extensive experimental results demonstrate the effectiveness of the proposed evaluation tool.
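A toy sketch of the benchmarking idea: search for the attack parameter that defeats a (deliberately simple) LSB watermark at the smallest quality loss. A real GA over many attack parameters is reduced here to a one-dimensional evolutionary search; the watermark scheme, detector, and quality metric are all illustrative stand-ins, not the paper's.

```python
import random

def embed_lsb(pixels, bits):
    """Illustrative watermark: store one bit in each pixel's LSB."""
    return [(p & ~1) | b for p, b in zip(pixels, bits)]

def detected(pixels, bits):
    """Illustrative detector: watermark present if most LSBs still match."""
    matches = sum((p & 1) == b for p, b in zip(pixels, bits))
    return matches / len(bits) > 0.75

def mse(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def attack(pixels, strength, rng):
    """Attack operator: additive uniform noise of the given strength, clipped."""
    return [min(255, max(0, p + rng.randint(-strength, strength))) for p in pixels]

def best_attack(pixels, bits, generations=30, pop=8, seed=1):
    """Evolve the noise strength: keep the weakest attack that still removes
    the watermark, i.e. the smallest MSE with the detector defeated."""
    rng = random.Random(seed)
    population = [rng.randint(1, 16) for _ in range(pop)]
    best = None  # (mse, strength)
    for _ in range(generations):
        for s in population:
            out = attack(pixels, s, rng)
            if detected(out, bits):
                continue
            err = mse(pixels, out)
            if best is None or err < best[0]:
                best = (err, s)
        # Mutate around the current best strength.
        centre = best[1] if best else max(population)
        population = [max(1, centre + rng.randint(-2, 2)) for _ in range(pop)]
    return best
```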
international conference on image processing | 2010
Valentina Conotter; Giulia Boato; Hany Farid
The manipulation of text on a sign or billboard is relatively easy to do in a way that is perceptually convincing. When text is on a planar surface and imaged under perspective projection, the text undergoes a specific distortion. When text is manipulated, it is unlikely to precisely satisfy this geometric mapping. We describe a technique for detecting whether text in an image obeys the expected perspective projection; deviations from it are used as evidence of tampering.
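One geometric cue behind this kind of check, in minimal form: a perspective projection maps straight lines to straight lines, so the baseline of untampered planar text should remain collinear in the image. The sketch below fits a total-least-squares line to tracked baseline points and reports the worst deviation; points and thresholds are illustrative, not the paper's method.

```python
import math

def line_fit_residual(points):
    """Max distance of 2-D points from their total-least-squares line."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    sxx = sum((x - mx) ** 2 for x, _ in points)
    syy = sum((y - my) ** 2 for _, y in points)
    sxy = sum((x - mx) * (y - my) for x, y in points)
    # Line direction = principal eigenvector of the 2x2 scatter matrix.
    theta = 0.5 * math.atan2(2 * sxy, sxx - syy)
    nx, ny = -math.sin(theta), math.cos(theta)  # unit normal to the line
    return max(abs(nx * (x - mx) + ny * (y - my)) for x, y in points)
```

A clean baseline gives a residual near zero; a character pasted in at the wrong height stands out immediately.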
Proceedings of SPIE | 2011
A. Cortiana; Valentina Conotter; Giulia Boato; F.G.B. De Natale
Source identification for digital content is one of the main branches of digital image forensics. It relies on the extraction of the photo-response non-uniformity (PRNU) noise as a unique intrinsic fingerprint that efficiently characterizes the digital device which generated the content. Such noise is estimated as the difference between the content and its denoised version, obtained by applying a denoising filter. This paper presents a performance comparison of different denoising filters for source identification purposes. In particular, results achieved with a sophisticated 3D filter are presented and discussed with respect to state-of-the-art denoising filters previously employed in this context.
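A sketch of the PRNU pipeline on toy one-dimensional "images": the fingerprint is the residual image minus its denoised version, and source attribution correlates a test residual against a reference fingerprint. The 3-tap mean filter stands in for the far more sophisticated denoisers compared in the paper; all data below is illustrative.

```python
def denoise(signal):
    """3-tap mean filter (edges replicated) as a stand-in denoiser."""
    padded = [signal[0]] + list(signal) + [signal[-1]]
    return [(padded[i] + padded[i + 1] + padded[i + 2]) / 3
            for i in range(len(signal))]

def residual(signal):
    """PRNU-style noise residual: content minus its denoised version."""
    return [s - d for s, d in zip(signal, denoise(signal))]

def correlation(a, b):
    """Normalized (Pearson) correlation used for fingerprint matching."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = (sum((x - ma) ** 2 for x in a) * sum((y - mb) ** 2 for y in b)) ** 0.5
    return num / den if den else 0.0
```

Two images carrying the same sensor pattern correlate strongly in the residual domain even when their scene content differs, while a fingerprint-free image does not.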
multimedia signal processing | 2013
Duc-Tien Dang-Nguyen; I. D. Gebru; Valentina Conotter; Giulia Boato; F.G.B. De Natale
Median filtering is a well-known nonlinear denoising filter, often used as a harmless post-processing step and sometimes also employed to undermine the reliability of some forensic techniques. In this work, we present a novel counter-forensic method able to conceal the characteristic traces left by median filtering. By exploiting knowledge of the features used in existing median filtering detectors, we remove these characteristic footprints via suitable random pixel modifications while keeping the quality of the counter-attacked image high. Experimental results show that the proposed method is effective, computationally efficient and competitive with other state-of-the-art techniques.
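A toy sketch of the counter-forensic idea in one dimension: median filtering produces runs of identical neighbouring samples ("streaking"), a classic detection feature, and small random perturbations break those runs while changing the signal very little. The feature and noise model here are illustrative, not the detectors or attack from the paper.

```python
import random

def median3(signal):
    """3-tap median filter with untouched borders."""
    out = [signal[0]]
    for i in range(1, len(signal) - 1):
        out.append(sorted(signal[i - 1:i + 2])[1])
    out.append(signal[-1])
    return out

def streak_ratio(signal):
    """Detection feature: fraction of adjacent pairs that are exactly equal."""
    pairs = list(zip(signal, signal[1:]))
    return sum(a == b for a, b in pairs) / len(pairs)

def counter_attack(signal, seed=0):
    """Conceal streaking via small random pixel modifications."""
    rng = random.Random(seed)
    return [s + rng.choice((-1, 0, 1)) for s in signal]
```

Filtering raises the streak ratio well above that of the original signal; the counter-attack pushes it back down at the cost of at most one grey level per sample.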
international conference on image processing | 2014
Valentina Conotter; E. Bodnari; Giulia Boato; Hany Farid
We describe a new forensic technique for distinguishing between computer generated and human faces in video. This technique identifies tiny fluctuations in the appearance of a face that result from changes in blood flow. Because these changes result from the human pulse, they are unlikely to be found in computer generated imagery. We use the absence or presence of this physiological signal to distinguish computer generated from human faces.
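A sketch of the physiological cue: the average skin intensity over time carries a small periodic pulse component, visible as a spectral peak in the heart-rate band, whereas a signal without such a peak behaves like the computer-generated case. The brute-force DFT and synthetic signal below are illustrative, not the paper's detector.

```python
import math

def dominant_frequency(samples, fps):
    """Frequency (Hz) of the strongest non-DC DFT component of a time series."""
    n = len(samples)
    mean = sum(samples) / n
    best_k, best_mag = 1, 0.0
    for k in range(1, n // 2):
        re = sum((s - mean) * math.cos(2 * math.pi * k * i / n)
                 for i, s in enumerate(samples))
        im = sum((s - mean) * math.sin(2 * math.pi * k * i / n)
                 for i, s in enumerate(samples))
        mag = re * re + im * im
        if mag > best_mag:
            best_k, best_mag = k, mag
    return best_k * fps / n
```

For a 5-second clip at 30 fps with a 1.2 Hz (72 bpm) pulse riding on the mean intensity, the spectral peak lands exactly in the heart-rate band.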
Proceedings of SPIE | 2011
Valentina Conotter; Lorenzo Cordin
Nowadays, sophisticated computer graphics editors have led to a significant increase in the photorealism of images. As a result, computer generated (CG) images are convincing and hard to distinguish from real ones at first glance. Here, we propose an image forensics technique able to automatically detect local forgeries, i.e., objects generated via computer graphics software and inserted into natural images, and vice versa. We develop a novel hybrid classifier based on wavelet-based features and sophisticated pattern noise statistics. Experimental results show the effectiveness of the proposed approach.
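A minimal sketch of the wavelet-feature side of such a classifier: a one-level Haar transform splits a signal into approximation and detail parts, and a simple statistic of the detail coefficients (here their mean absolute value) serves as a feature separating noisy natural content from overly smooth CG content. The data and feature choice are purely illustrative.

```python
def haar_level1(signal):
    """One-level 1-D Haar transform: (approximation, detail) coefficients."""
    approx = [(signal[2 * i] + signal[2 * i + 1]) / 2 for i in range(len(signal) // 2)]
    detail = [(signal[2 * i] - signal[2 * i + 1]) / 2 for i in range(len(signal) // 2)]
    return approx, detail

def detail_energy(signal):
    """Feature: mean absolute Haar detail coefficient."""
    _, detail = haar_level1(signal)
    return sum(abs(d) for d in detail) / len(detail)
```

A scan line with sensor-like noise yields higher detail energy than a perfectly smooth rendered gradient, which is the kind of separation a full classifier would exploit across many such features.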
international workshop on digital watermarking | 2008
Giulia Boato; Valentina Conotter; Francesco G. B. De Natale
The present paper proposes a new flexible and effective evaluation tool based on genetic algorithms to test the robustness of digital image watermarking techniques. Given a set of possible attacks, the method finds the best possible un-watermarked image in terms of Weighted Peak Signal-to-Noise Ratio (WPSNR). In fact, it implements a stochastic search of the optimal parameters to be assigned to each processing operation in order to find the combined attack that removes the watermark while producing the smallest possible degradation of the image in terms of human perception. As a result, the proposed method makes it possible to assess the overall performance of a watermarking scheme, and to have immediate feedback on its robustness to different attacks. A set of tests is presented, referring to the application of the tool to two known watermarking approaches.
international conference on image processing | 2013
Valentina Conotter; Pedro Comesaña; Fernando Pérez-González
The characteristic artifacts left in an image by JPEG compression are often exploited to gather information about the processing history of the content. However, linear image filtering, often applied as postprocessing to the entire image (full-frame) for enhancement, may alter these forensically significant features, thus complicating the application of related forensic techniques. In this paper, we study the combination of JPEG compression and full-frame linear filtering, analyzing their impact on the Discrete Cosine Transform (DCT) statistical properties of the image. We derive an accurate mathematical framework that allows us to fully characterize the probability distributions of the DCT coefficients of the quantized and filtered image. We then exploit this knowledge to estimate the applied filter. Experimental results show the effectiveness of the proposed method.
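A sketch of the statistical effect the paper models: after JPEG-style quantization, DCT coefficients sit exactly on multiples of the quantization step, while a subsequent linear filter spreads them off that comb. The 1-D orthonormal DCT-II and 3-tap mean filter below are illustrative stand-ins for the full 2-D pipeline.

```python
import math

def dct2_1d(block):
    """Orthonormal 1-D DCT-II."""
    n = len(block)
    out = []
    for k in range(n):
        c = math.sqrt(1 / n) if k == 0 else math.sqrt(2 / n)
        out.append(c * sum(x * math.cos(math.pi * k * (2 * i + 1) / (2 * n))
                           for i, x in enumerate(block)))
    return out

def idct2_1d(coeffs):
    """Inverse (orthonormal DCT-III) of dct2_1d."""
    n = len(coeffs)
    return [sum((math.sqrt(1 / n) if k == 0 else math.sqrt(2 / n)) * c
                * math.cos(math.pi * k * (2 * i + 1) / (2 * n))
                for k, c in enumerate(coeffs))
            for i in range(n)]

def on_comb_fraction(coeffs, step, tol=1e-6):
    """Fraction of coefficients lying (within tol) on multiples of `step`."""
    return sum(abs(c - step * round(c / step)) < tol for c in coeffs) / len(coeffs)
```

Quantizing a block's coefficients puts every one of them on the comb; decompressing and mean-filtering the pixels then knocks the recomputed coefficients off it, which is exactly the kind of distributional change the paper's framework characterizes.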