Tomáš Filler
Binghamton University
Publications
Featured research published by Tomáš Filler.
Information Hiding | 2011
Patrick Bas; Tomáš Filler; Tomáš Pevný
This paper summarizes the first international challenge on steganalysis, called BOSS (an acronym for Break Our Steganographic System). We explain the motivations behind the organization of the contest, its rules together with the reasons for them, and the steganographic algorithm developed for the contest. Since the image databases created for the contest significantly influenced its development, they are described in great detail. The paper also presents a detailed analysis of the results submitted to the challenge. One of the main difficulties the participants had to deal with was the discrepancy between the training and testing sources of images - the so-called cover-source mismatch - which forced the participants to design steganalyzers robust with respect to a specific source of images. We also point out other practical issues related to designing steganographic systems and give several suggestions for future contests in steganalysis.
IEEE Transactions on Information Forensics and Security | 2011
Tomáš Filler; Jan Judas; Jessica J. Fridrich
This paper proposes a complete practical methodology for minimizing additive distortion in steganography with general (nonbinary) embedding operation. Let every possible value of every stego element be assigned a scalar expressing the distortion of an embedding change done by replacing the cover element by this value. The total distortion is assumed to be a sum of per-element distortions. Both the payload-limited sender (minimizing the total distortion while embedding a fixed payload) and the distortion-limited sender (maximizing the payload while introducing a fixed total distortion) are considered. Without any loss of performance, the nonbinary case is decomposed into several binary cases by replacing individual bits in cover elements. The binary case is approached using a novel syndrome-coding scheme based on dual convolutional codes equipped with the Viterbi algorithm. This fast and very versatile solution achieves state-of-the-art results in steganographic applications while having linear time and space complexity w.r.t. the number of cover elements. We report extensive experimental results for a large set of relative payloads and for different distortion profiles, including the wet paper channel. Practical merit of this approach is validated by constructing and testing adaptive embedding schemes for digital images in raster and transform domains. Most current coding schemes used in steganography (matrix embedding, wet paper codes, etc.) and many new ones can be implemented using this framework.
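The payload-limited sender described above has a well-known closed form: under an additive distortion, the optimal binary embedding flips element i independently with probability pi_i = exp(-lam*rho_i) / (1 + exp(-lam*rho_i)), with lam chosen so the entropy of the change pattern equals the payload. A minimal sketch with hypothetical costs (the `rhos` values and function names are illustrative, not from the paper):

```python
import math

def binary_entropy(p):
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def flip_probabilities(rhos, lam):
    # Optimal change probability for a binary embedding with cost rho_i:
    # pi_i = exp(-lam*rho_i) / (1 + exp(-lam*rho_i))
    return [math.exp(-lam * r) / (1.0 + math.exp(-lam * r)) for r in rhos]

def solve_payload_limited(rhos, payload_bits, iters=60):
    """Binary-search lam so the entropy of the change pattern equals the payload."""
    def total(lam):
        return sum(binary_entropy(p) for p in flip_probabilities(rhos, lam))
    lo, hi = 0.0, 1.0
    while total(hi) > payload_bits:   # larger lam means fewer changes
        hi *= 2.0
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if total(mid) > payload_bits else (lo, mid)
    lam = 0.5 * (lo + hi)
    return lam, flip_probabilities(rhos, lam)

rhos = [1.0] * 4 + [2.0] * 4   # hypothetical per-element embedding costs
lam, pis = solve_payload_limited(rhos, payload_bits=4.0)
print(lam, pis)
```

Cheaper elements receive higher change probabilities, as expected; the syndrome-coding construction in the paper then realizes this distribution with an actual code.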
Proceedings of SPIE | 2009
Miroslav Goljan; Jessica J. Fridrich; Tomáš Filler
This paper presents a large-scale test of camera identification from sensor fingerprints. To overcome the problem of acquiring a large number of cameras and taking the images, we utilized Flickr, an existing on-line image sharing site. In our experiment, we tested over one million images spanning 6896 individual cameras covering 150 models. The gathered data provides practical estimates of false acceptance and false rejection rates, giving us the opportunity to compare the experimental data with theoretical estimates. We also test images against a database of fingerprints, thus simulating the situation in which a forensic analyst wants to determine whether a given image belongs to a database of already known cameras. The experimental results set a lower bound on the performance and reveal several interesting new facts about camera fingerprints and their impact on error analysis in practice. We believe that this study will be a valuable reference for forensic investigators in their effort to use this method in court.
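The fingerprint test can be sketched end to end on synthetic data. This is a toy stand-in, not the paper's pipeline: a box filter replaces the wavelet denoiser, the "images" are simulated flat fields, and all sizes and function names are hypothetical.

```python
import numpy as np

def noise_residual(img):
    # Crude denoiser stand-in (3x3 box filter); real systems use wavelet denoising
    pad = np.pad(img, 1, mode="edge")
    smooth = sum(pad[i:i + img.shape[0], j:j + img.shape[1]]
                 for i in range(3) for j in range(3)) / 9.0
    return img - smooth

def estimate_fingerprint(images):
    # Maximum-likelihood-style estimate: residuals weighted by image intensity
    num = sum(noise_residual(im) * im for im in images)
    den = sum(im * im for im in images) + 1e-9
    return num / den

def correlation(a, b):
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

rng = np.random.default_rng(0)
K = 0.05 * rng.standard_normal((32, 32))           # true camera PRNU
flat = 128.0
shots = [flat * (1 + K) + 2.0 * rng.standard_normal((32, 32)) for _ in range(20)]
fp = estimate_fingerprint(shots)
query_same = noise_residual(flat * (1 + K) + 2.0 * rng.standard_normal((32, 32)))
K2 = 0.05 * rng.standard_normal((32, 32))          # a different camera
query_diff = noise_residual(flat * (1 + K2) + 2.0 * rng.standard_normal((32, 32)))
print(correlation(query_same, fp), correlation(query_diff, fp))
```

The matched camera produces a large correlation while the mismatched one stays near zero; the paper's contribution is measuring where the decision threshold should sit at scale.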
Conference on Security, Steganography, and Watermarking of Multimedia Contents | 2007
Jessica J. Fridrich; Tomáš Filler
In this paper, we propose a general framework and practical coding methods for constructing steganographic schemes that minimize the statistical impact of embedding. By associating a cost of an embedding change with every element of the cover, we first derive bounds on the minimum theoretically achievable embedding impact and then propose a framework to achieve it in practice. The method is based on syndrome codes with low-density generator matrices (LDGM). The problem of optimally encoding a message (e.g., with the smallest embedding impact) requires a binary quantizer that performs near the rate-distortion bound. We implement this quantizer using LDGM codes with a survey propagation message-passing algorithm. Since LDGM codes are guaranteed to achieve the rate-distortion bound, the proposed methods are guaranteed to achieve the minimal embedding impact (maximal embedding efficiency). We provide a detailed technical description of the method for practitioners and demonstrate its performance on matrix embedding.
IEEE Transactions on Information Forensics and Security | 2010
Tomáš Filler; Jessica J. Fridrich
We make a connection between steganography design by minimizing embedding distortion and statistical physics. The unique aspect of this work and one that distinguishes it from prior art is that we allow the distortion function to be arbitrary, which permits us to consider spatially dependent embedding changes. We provide a complete theoretical framework and describe practical tools, such as the thermodynamic integration for computing the rate-distortion bound and the Gibbs sampler for simulating the impact of optimal embedding schemes and constructing practical algorithms. The proposed framework reduces the design of secure steganography in empirical covers to the problem of finding local potentials for the distortion function that correlate with statistical detectability in practice. By working out the proposed methodology in detail for a specific choice of the distortion function, we experimentally validate the approach and discuss various options available to the steganographer in practice.
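The Gibbs sampler mentioned above can be illustrated on a toy non-additive distortion: per-pixel costs plus a pairwise term penalizing adjacent changes. The decomposition into two checkerboard sublattices mirrors the paper's sublattice construction (sites of one parity are conditionally independent given the other); the specific distortion, grid size, and parameters below are hypothetical.

```python
import numpy as np

def gibbs_sweeps(costs, lam, w, sweeps, rng):
    """Sample a binary change pattern z with probability ~ exp(-lam * D(z)),
    D(z) = sum_i costs[i] z[i] + w * sum_{4-neighbors i,j} z[i] z[j]
    (non-additive: adjacent changes cost extra; toroidal boundary)."""
    n, m = costs.shape
    z = np.zeros((n, m), dtype=np.int8)
    ii, jj = np.indices((n, m))
    for _ in range(sweeps):
        for parity in (0, 1):  # the two checkerboard sublattices
            mask = (ii + jj) % 2 == parity
            nb = (np.roll(z, 1, 0) + np.roll(z, -1, 0) +
                  np.roll(z, 1, 1) + np.roll(z, -1, 1))
            delta = costs + w * nb               # energy(z_i=1) - energy(z_i=0)
            p1 = 1.0 / (1.0 + np.exp(lam * delta))
            z[mask] = (rng.random((n, m)) < p1)[mask]
    return z

rng = np.random.default_rng(4)
costs = rng.uniform(0.5, 2.0, (32, 32))
rate_lo = gibbs_sweeps(costs, lam=1.0, w=0.5, sweeps=50, rng=rng).mean()
rate_hi = gibbs_sweeps(costs, lam=4.0, w=0.5, sweeps=50, rng=rng).mean()
print(rate_lo, rate_hi)
```

Raising lam concentrates the distribution on low-distortion patterns, so the empirical change rate drops; in the paper this simulator stands in for optimal embedding when evaluating distortion functions.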
Proceedings of SPIE | 2010
Tomáš Filler; Jan Judas; Jessica J. Fridrich
In this paper, we propose a practical approach to minimizing embedding impact in steganography based on syndrome coding and trellis-coded quantization, and contrast its performance with appropriate rate-distortion bounds. We assume that each cover element can be assigned a positive scalar expressing the impact of making an embedding change at that element (single-letter distortion). The problem is to embed a given payload with minimal possible average embedding impact. This task, which can be viewed as a generalization of matrix embedding or writing on wet paper, has been approached using heuristic and suboptimal tools in the past. Here, we propose a fast and very versatile solution to this problem that can theoretically achieve performance arbitrarily close to the bound. It is based on syndrome coding using linear convolutional codes with the optimal binary quantizer implemented using the Viterbi algorithm run in the dual domain. The complexity and memory requirements of the embedding algorithm are linear w.r.t. the number of cover elements. For practitioners, we include detailed algorithms for finding good codes and their implementation. Finally, we report extensive experimental results for a large set of relative payloads and for different distortion profiles, including the wet paper channel.
Proceedings of SPIE | 2011
Tomáš Filler; Jessica J. Fridrich
Most steganographic schemes for real digital media embed messages by minimizing a suitably defined distortion function. In practice, this is often realized by syndrome codes which offer near-optimal rate-distortion performance. However, the distortion functions are designed heuristically and the resulting steganographic algorithms are thus suboptimal. In this paper, we present a practical framework for optimizing the parameters of additive distortion functions to minimize statistical detectability. We apply the framework to digital images in both spatial and DCT domain by first defining a rich parametric model which assigns a cost of making a change at every cover element based on its neighborhood. Then, we present a practical method for optimizing the parameters with respect to a chosen detection metric and feature space. We show that the size of the margin between support vectors in soft-margin SVMs leads to a fast detection metric and that methods minimizing the margin tend to be more secure w.r.t. blind steganalysis. The parameters obtained by the Nelder-Mead simplex-reflection algorithm for spatial and DCT-domain images are presented and the new embedding methods are tested by blind steganalyzers utilizing various feature sets. Experimental results show that as few as 80 images are sufficient for obtaining good candidates for parameters of the cost model, which allows us to speed up the parameter search.
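The outer optimization loop can be sketched with SciPy's Nelder-Mead simplex method (assumed available). In the paper the objective is an expensive empirical detectability estimate (the SVM margin computed over steganalysis features of sample images); the `detectability` function below is a cheap hypothetical surrogate with a known optimum, used only to show the optimizer wiring.

```python
from scipy.optimize import minimize

# Hypothetical stand-in for the SVM-margin detectability metric: a smooth
# function of two cost-model parameters with its minimum at theta = (1.0, 0.25).
def detectability(theta):
    t1, t2 = theta
    return (t1 - 1.0) ** 2 + 4.0 * (t2 - 0.25) ** 2 + 0.1

res = minimize(detectability, x0=[0.0, 0.0], method="Nelder-Mead",
               options={"xatol": 1e-8, "fatol": 1e-10})
print(res.x)
```

Nelder-Mead needs no gradients, which matters when each objective evaluation involves training a classifier; the paper's observation that ~80 images suffice per evaluation is what makes this loop affordable.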
International Conference on Image Processing | 2008
Tomáš Filler; Jessica J. Fridrich; Miroslav Goljan
Sensor photo-response non-uniformity (PRNU) was introduced by Lukas et al. [1] to solve the problem of digital camera sensor identification. The PRNU is the main component of a camera fingerprint that can reliably identify a specific camera. This fingerprint can be estimated from multiple images taken by the camera. In this paper, we demonstrate that the same fingerprint can be used for identification of camera brand and model. This is possible due to the fact that fingerprints estimated from images in the TIFF/JPEG format contain local structure due to various in-camera processing that can be detected by extracting a set of numerical features from the fingerprints and classifying them using pattern classification methods. We estimate and classify fingerprints for more than 4500 digital cameras spanning 8 different brands and 17 models. The average probability of correctly classified camera brand was 90.8%.
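The brand-classification idea reduces to this: in-camera processing leaves a brand-specific local structure in the fingerprint, which simple statistical features can separate. The sketch below fabricates that structure synthetically (two hypothetical "brands" differing in filtering direction) and classifies with a nearest-centroid rule; the features, brands, and sizes are all illustrative, not the paper's feature set.

```python
import numpy as np

rng = np.random.default_rng(2)

def make_fingerprint(brand):
    # Hypothetical brand-specific in-camera processing: horizontal vs vertical
    # smoothing leaves a distinct local correlation structure in the fingerprint.
    f = rng.standard_normal((64, 64))
    if brand == "A":
        return (f + np.roll(f, 1, axis=1)) / 2   # horizontal filtering
    return (f + np.roll(f, 1, axis=0)) / 2       # vertical filtering

def features(f):
    f = f - f.mean()
    h = (f * np.roll(f, 1, axis=1)).mean() / f.var()  # lag-1 correlation, rows
    v = (f * np.roll(f, 1, axis=0)).mean() / f.var()  # lag-1 correlation, cols
    return np.array([h, v])

# nearest-centroid classifier over the two-dimensional feature space
train = {b: np.mean([features(make_fingerprint(b)) for _ in range(10)], axis=0)
         for b in "AB"}
test_set = [(b, make_fingerprint(b)) for b in "AB" for _ in range(10)]
pred = [min("AB", key=lambda c: np.linalg.norm(features(f) - train[c]))
        for b, f in test_set]
acc = float(np.mean([p == b for p, (b, _) in zip(pred, test_set)]))
print(acc)
```

With such a strong synthetic signal the toy classifier is perfect; on real fingerprints the paper reports 90.8% brand accuracy using a richer feature set and classifier.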
Information Hiding | 2009
Tomáš Filler; Jessica J. Fridrich
Most practical stegosystems for digital media work by applying a mutually independent embedding operation to each element of the cover. For such stegosystems, the Fisher information w.r.t. the change rate is a perfect security descriptor equivalent to KL divergence between cover and stego images. Under the assumption of Markov covers, we derive a closed-form expression for the Fisher information and show how it can be used for comparing stegosystems and optimizing their performance. In particular, using an analytic cover model fit to experimental data obtained from a large number of natural images, we prove that the ±1 embedding operation is asymptotically optimal among all mutually independent embedding operations that modify cover elements by at most 1.
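The mechanics of the Fisher-information computation can be shown on a toy cover pmf. For a mutually independent embedding with change matrix C and change rate beta, the stego pmf is p_beta = p @ ((1-beta) I + beta C), so I(0) = sum_x (p @ (C - I))_x^2 / p_x. The four-value pmf below is a hypothetical stand-in, not the paper's fitted analytic model, so the numbers here do not reproduce the paper's optimality comparison; they only illustrate how I(0) is evaluated.

```python
import numpy as np

# Toy cover pmf over pixel values {0, 1, 2, 3}
p = np.array([0.4, 0.3, 0.2, 0.1])

def fisher_info(C):
    """Fisher information at change rate beta = 0 for a mutually independent
    embedding with change matrix C (row = cover value, column = stego value)."""
    d = p @ (C - np.eye(len(p)))   # derivative of p_beta at beta = 0
    return float(np.sum(d ** 2 / p))

# LSB flipping (0<->1, 2<->3) vs +-1 embedding (uniform neighbor, reflected)
lsb = np.array([[0, 1, 0, 0], [1, 0, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]], float)
pm1 = np.array([[0, 1, 0, 0], [.5, 0, .5, 0], [0, .5, 0, .5], [0, 0, 1, 0]])
print(fisher_info(lsb), fisher_info(pm1))
```

Smaller I(0) means slower growth of the KL divergence with the change rate; the paper carries out this comparison under a cover model fitted to a large corpus of natural images.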
International Workshop on Information Forensics and Security | 2009
Tomáš Filler; Jessica J. Fridrich
Wet paper codes are an essential tool for communication with non-shared selection channels. Inspired by the recent ZZW construction for matrix embedding, we propose a novel wet paper coding scheme with high embedding efficiency. The performance is analyzed under the assumption that wet cover elements form an i.i.d. Bernoulli sequence. Attention is paid to implementation details to minimize capacity loss in practice.
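The wet paper setting can be sketched with the simplest (non-efficient) solver: find stego y with H y = msg (mod 2) while changing the cover only at "dry" positions, via GF(2) Gaussian elimination restricted to the dry columns. This shows the communication model only, not the ZZW-style high-efficiency construction of the paper; the matrix, wet mask, and sizes are hypothetical.

```python
import numpy as np

def wet_paper_embed(x, dry, H, msg):
    """Solve H y = msg (mod 2), modifying x only at the dry positions."""
    y = x.copy()
    s = (msg - H.dot(x)) % 2          # syndrome the changes must produce
    A = H[:, dry] % 2                 # only dry columns are usable
    m, k = A.shape
    M = np.concatenate([A, s.reshape(-1, 1)], axis=1).astype(np.uint8)
    pivots, row = [], 0
    for col in range(k):              # reduced row echelon form over GF(2)
        pr = next((r for r in range(row, m) if M[r, col]), None)
        if pr is None:
            continue
        M[[row, pr]] = M[[pr, row]]
        for r in range(m):
            if r != row and M[r, col]:
                M[r] ^= M[row]
        pivots.append(col)
        row += 1
    if any(M[r, -1] for r in range(row, m)):
        raise ValueError("message not embeddable with these wet constraints")
    v = np.zeros(k, dtype=np.uint8)   # free variables set to zero
    for r, col in enumerate(pivots):
        v[col] = M[r, -1]
    y[dry] ^= v
    return y

rng = np.random.default_rng(3)
n, m = 16, 5
H = rng.integers(0, 2, (m, n))
H[:, :m] = np.eye(m, dtype=int)          # ensure the dry set spans all syndromes
x = rng.integers(0, 2, n).astype(np.uint8)
dry = np.array([0, 1, 2, 3, 4, 9, 12])   # the sender may modify only these
msg = rng.integers(0, 2, m).astype(np.uint8)
y = wet_paper_embed(x, dry, H, msg)
print(y)
```

The recipient computes H y without knowing which positions were wet, which is the defining property of a non-shared selection channel; the paper's contribution is achieving this with far fewer embedding changes per message bit.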