Andrew D. Ker
University of Oxford
Publication
Featured research published by Andrew D. Ker.
IEEE Signal Processing Letters | 2005
Andrew D. Ker
We consider the problem of detecting spatial domain least significant bit (LSB) matching steganography in grayscale images, a problem which has proved much harder than its counterpart, LSB replacement. We use the histogram characteristic function (HCF), introduced by Harmsen for the detection of steganography in color images but ineffective on grayscale images. Two novel ways of applying the HCF are introduced: calibrating the output using a downsampled image, and computing the adjacency histogram instead of the usual histogram. Extensive experimental results show that the new detectors are reliable, vastly more so than those previously known.
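To make the calibration idea concrete, here is a minimal sketch of an HCF centre-of-mass statistic calibrated against a 2x2-downsampled version of the image, assuming an 8-bit grayscale image held as a NumPy array. The function names, binning, and decision statistic are illustrative rather than taken from the paper, and the adjacency-histogram variant is not shown.

```python
import numpy as np

def hcf_com(values, bins=256):
    """Centre of mass of the Histogram Characteristic Function (the DFT of the histogram)."""
    h, _ = np.histogram(values, bins=bins, range=(0, bins))
    H = np.abs(np.fft.fft(h))[: bins // 2]      # one-sided HCF magnitude
    k = np.arange(bins // 2)
    return (k * H).sum() / H.sum()

def calibrated_hcf_feature(img):
    """Ratio of the full-image HCF-COM to that of a 2x2-averaged (downsampled) image.

    LSB matching smooths the histogram, which lowers the COM of the full image more
    than that of the downsampled reference, so the ratio tends to drop for stego images.
    """
    img = img[: img.shape[0] // 2 * 2, : img.shape[1] // 2 * 2].astype(np.float64)
    down = (img[0::2, 0::2] + img[0::2, 1::2] + img[1::2, 0::2] + img[1::2, 1::2]) / 4.0
    return hcf_com(img.ravel()) / hcf_com(down.ravel())
```

A detector would compare this ratio against a threshold chosen for a target false-positive rate on cover images.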
Electronic Imaging | 2008
Andrew D. Ker; Rainer Böhme
This paper revisits the steganalysis method involving a Weighted Stego-Image (WS) for estimating LSB replacement payload sizes in digital images. It suggests new WS estimators, upgrading the method's three components: cover pixel prediction, least-squares weighting, and bias correction. Wide-ranging experimental results (over two million total attacks) based on images from multiple sources and pre-processing histories show that the new methods produce greatly improved accuracy, to the extent that they outperform even the best of the structural detectors, while avoiding their high complexity. Furthermore, specialised WS estimators can be derived for the detection of sequentially-placed payload: they offer levels of accuracy orders of magnitude better than their competitors.
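The basic weighted estimator can be sketched as follows, assuming an 8-bit grayscale NumPy array, a four-neighbour mean as the cover predictor, and weights inversely proportional to local variance; the paper's improved estimators use more careful predictors, tuned weighting constants, and a bias-correction term that this sketch omits.

```python
import numpy as np
from scipy.ndimage import convolve, uniform_filter

def ws_payload_estimate(stego):
    """Simplified Weighted Stego-image (WS) estimate of the proportionate LSB-replacement payload."""
    s = stego.astype(np.float64)
    flipped = s + 1 - 2 * (s % 2)                    # each pixel with its LSB flipped

    # Cover prediction: mean of the four horizontal/vertical neighbours.
    kernel = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=np.float64) / 4.0
    pred = convolve(s, kernel, mode='reflect')

    # Local variance, used to down-weight noisy regions where prediction is poor.
    mean = uniform_filter(s, size=3, mode='reflect')
    var = uniform_filter(s**2, size=3, mode='reflect') - mean**2
    w = 1.0 / (1.0 + var)
    w /= w.sum()

    return 2.0 * np.sum(w * (s - pred) * (s - flipped))
```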
Information Hiding | 2005
Andrew D. Ker
There are many detectors for simple Least Significant Bit (LSB) steganography in digital images, the most sensitive of which make use of structural or combinatorial properties of the LSB embedding method. We give a general framework for detection and length estimation of these hidden messages, which potentially makes use of all the combinatorial structure. The framework subsumes some previously known structural detectors and suggests novel, more powerful detection algorithms. After presenting the general framework we give a detailed study of one particular novel detector, with experimental evidence that it is more powerful than those previously known, in most cases substantially so. However, there are some outstanding issues to be solved for the wider application of the general framework.
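The combinatorial structure in question comes from the fact that LSB replacement only ever exchanges the two values of a pair {2k, 2k+1}: even samples can only move up by one and odd samples only down. The snippet below is a self-contained illustration of that asymmetry (not of the detection framework itself); all names are mine.

```python
import numpy as np

rng = np.random.default_rng(0)
cover = rng.integers(0, 256, size=100_000)

# LSB replacement: overwrite each least significant bit with a random message bit.
bits = rng.integers(0, 2, size=cover.size)
stego = (cover & ~1) | bits

# Every change stays inside its pair {2k, 2k+1}: even samples only ever move up,
# odd samples only ever move down.  Structural detectors exploit this asymmetry.
changed = stego != cover
assert np.all(stego[changed] - cover[changed] == np.where(cover[changed] % 2 == 0, 1, -1))
```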
Information Hiding | 2006
Andrew D. Ker
Conventional steganalysis aims to separate cover objects from stego objects, working on each object individually. In this paper we investigate some methods for pooling steganalysis evidence, so as to obtain more reliable detection of steganography in large sets of objects, and we also consider the dual problem of hiding information securely by spreading it across a batch of covers. The results are rather surprising: in many situations, a steganographer should not spread the embedding across all covers, and the secure capacity increases only as the square root of the number of objects. We validate the theoretical results, which are rather general, by testing a particular type of image steganography. The experiments involve tens of millions of repeated steganalytic attacks and show that pooled steganalysis can give very reliable detection of even tiny proportionate payloads.
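The square-root behaviour can be seen in a toy model: if each cover yields a noisy local estimate of its payload and the steganalyst simply averages those estimates, then spreading a total payload that grows like the square root of the batch size keeps the detector's power roughly constant. The simulation below is only an illustration of that intuition under an assumed Gaussian noise model, not the paper's analysis.

```python
import numpy as np

rng = np.random.default_rng(1)

def pooled_detector_power(total_payload, n_covers, noise_sd=1.0, trials=20_000):
    """Monte-Carlo power of a pooled (averaging) detector in a toy Gaussian model.

    Each cover yields an estimate = its payload + Gaussian noise.  Spreading the
    total payload evenly gives a per-cover signal of total_payload / n_covers,
    while averaging shrinks the noise by sqrt(n_covers), so detectability is
    governed by total_payload / sqrt(n_covers).
    """
    per_cover = total_payload / n_covers
    pooled = rng.normal(per_cover, noise_sd, size=(trials, n_covers)).mean(axis=1)
    threshold = 2.0 * noise_sd / np.sqrt(n_covers)   # fixes the false-positive rate
    return (pooled > threshold).mean()

for n in (1, 4, 16, 64):
    # Total payload grows like sqrt(n): detection power stays roughly constant (~0.84).
    print(n, round(pooled_detector_power(3.0 * np.sqrt(n), n), 3))
```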
IEEE Transactions on Information Forensics and Security | 2012
Tomáš Pevný; Jessica J. Fridrich; Andrew D. Ker
A quantitative steganalyzer is an estimator of the number of embedding changes introduced by a specific embedding operation. Since for most algorithms the number of embedding changes correlates with the message length, quantitative steganalyzers are important forensic tools. In this paper, a general method for constructing quantitative steganalyzers from features used in blind detectors is proposed. The core of the method is a support vector regression, which is used to learn the mapping between a feature vector extracted from the investigated object and the embedding change rate. To demonstrate the generality of the proposed approach, quantitative steganalyzers are constructed for a variety of steganographic algorithms in both JPEG transform and spatial domains. The estimation accuracy is investigated in detail and compares favorably with state-of-the-art quantitative steganalyzers.
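The regression step can be sketched with scikit-learn, using random placeholder vectors in place of real blind-steganalysis features and invented change rates; only the mapping from feature vector to change rate is the point here.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# Placeholder training data: rows stand in for feature vectors extracted from
# training images; y is the known embedding change rate used to create each one.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 100))
y_train = rng.uniform(0.0, 0.5, size=500)

# Support vector regression learns the mapping feature vector -> change rate.
quantitative_steganalyzer = make_pipeline(
    StandardScaler(),
    SVR(kernel='rbf', C=10.0, epsilon=0.01),
)
quantitative_steganalyzer.fit(X_train, y_train)

# Estimate the change rate of an unseen image from its feature vector.
x_new = rng.normal(size=(1, 100))
estimated_change_rate = quantitative_steganalyzer.predict(x_new)[0]
```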
ACM Workshop on Multimedia and Security | 2008
Andrew D. Ker; Tomáš Pevný; Jan Kodovský; Jessica J. Fridrich
There are a number of recent information theoretic results demonstrating (under certain conditions) a sublinear relationship between the number of cover objects and their total steganographic capacity. In this paper we explain how these results may be adapted to the steganographic capacity of a single cover object, which under the right conditions should be proportional to the square root of the cover size. Then we perform some experiments using three genuine steganography methods in digital images, covering both spatial and DCT domains. Measuring detectability under four different steganalysis methods, for a variety of payload and cover sizes, we observe close accordance with a square root law.
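Informally, and in my own notation rather than the paper's, the square root law rests on the observation that detectability grows roughly quadratically in the number of embedding changes relative to the cover size:

```latex
% Informal statement under the independence assumptions used in the literature:
% with n cover samples and m embedding changes, the KL divergence between the
% cover and stego distributions grows on the order of m^2 / n, so keeping
% detectability bounded forces the payload to grow no faster than sqrt(n).
\[
  D_{\mathrm{KL}}\bigl(P_{\text{cover}} \,\|\, P_{\text{stego}}\bigr) \;\approx\; c\,\frac{m^2}{n}
  \qquad\Longrightarrow\qquad
  m = O\!\left(\sqrt{n}\right) \ \text{for bounded detectability.}
\]
```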
Conference on Security, Steganography, and Watermarking of Multimedia Contents | 2005
Andrew D. Ker
We consider the problem of detecting the presence of hidden data in colour bitmap images. Like straightforward LSB Replacement, LSB Matching (which randomly increments or decrements cover pixels to embed the hidden data in the least significant bits) is attractive because it is extremely simple to implement. It has proved much harder to detect than LSB Replacement because it does not introduce the same asymmetries into the stego image. We expand our recently-developed techniques for the detection of LSB Matching in grayscale images into the full-colour case. Not everything goes through smoothly, but the end result is much improved detection, especially for cover images which have been stored as JPEG files, even if subsequently resampled. Evaluation of the steganalysis statistics is performed using a distributed steganalysis project. Because published evaluations of the reliability of LSB Matching detectors are limited, we begin with a review of the previously-known detectors.
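For reference, the embedding operation described above is very simple to write down; the sketch below (function name, clamping convention, and message handling are mine) shows LSB matching on a single 8-bit channel.

```python
import numpy as np

def lsb_match_embed(cover, message_bits, rng=None):
    """LSB matching ("+-1 embedding") on one 8-bit channel.

    Where a pixel's LSB already equals the message bit, leave it alone; otherwise
    randomly add or subtract 1, forcing the direction at the ends of the range.
    Unlike LSB replacement, this introduces no pair-wise asymmetry.
    """
    if rng is None:
        rng = np.random.default_rng()
    stego = cover.astype(np.int16).copy()
    mismatch = (stego & 1) != message_bits
    stego[mismatch] += rng.choice((-1, 1), size=int(mismatch.sum()))
    stego[stego < 0] = 1          # a 0 stepped down: step up instead, LSB still flips
    stego[stego > 255] = 254      # a 255 stepped up: step down instead
    return stego.astype(np.uint8)
```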
Proceedings of SPIE | 2009
Andrew D. Ker; Ivans Lubenko
WAM steganalysis is a feature-based classifier for detecting LSB matching steganography, presented in 2006 by Goljan et al. and demonstrated to be sensitive even to small payloads. This paper makes three contributions to the development of the WAM method. First, we benchmark some variants of WAM in a number of sets of cover images, and we are able to quantify the significance of differences in results between different machine learning algorithms based on WAM features. It turns out that, like many of its competitors, WAM is not effective in certain types of cover, and furthermore it is hard to predict which types of cover are suitable for WAM steganalysis. Second, we demonstrate that only a few of the features used in WAM steganalysis do almost all of the work, so that a simplified WAM steganalyser can be constructed in exchange for a little less detection power. Finally, we demonstrate how the WAM method can be extended to provide forensic tools to identify the location (and potentially the content) of LSB matching payload, given a number of stego images with payload placed in the same locations. Although easily evaded, this is a plausible situation if the same stego key is mistakenly re-used for embedding in multiple images.
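The feature-reduction finding can be mimicked with standard tools: rank the features, keep only the top few, and compare classifiers by cross-validation. The sketch below uses scikit-learn with random placeholder data standing in for WAM feature vectors (the dimension here is arbitrary); it illustrates the procedure, not the paper's results.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Placeholder data: rows stand in for WAM feature vectors, labels are cover = 0, stego = 1.
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 27))
y = rng.integers(0, 2, size=2000)

full = make_pipeline(LinearDiscriminantAnalysis())
reduced = make_pipeline(SelectKBest(f_classif, k=5), LinearDiscriminantAnalysis())

# Cross-validated accuracy of the full feature set versus the reduced one; the
# paper's observation is that a handful of features retain most of the power.
print(cross_val_score(full, X, y, cv=5).mean())
print(cross_val_score(reduced, X, y, cv=5).mean())
```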
Conference on Security, Steganography, and Watermarking of Multimedia Contents | 2006
Rainer Böhme; Andrew D. Ker
Quantitative steganalysis refers to the exercise not only of detecting the presence of hidden stego messages in carrier objects, but also of estimating the secret message length. This problem is well studied, with many detectors proposed but only a sparse analysis of errors in the estimators. A deep understanding of the error model, however, is a fundamental requirement for the assessment and comparison of different detection methods. This paper presents a rationale for a two-factor model for sources of error in quantitative steganalysis, and shows evidence from a dedicated large-scale nested experimental set-up with a total of more than 200 million attacks. Apart from general findings about the distribution functions found in both classes of errors, their respective weight is determined, and implications for statistical hypothesis tests in benchmarking scenarios or regression analyses are demonstrated. The results are based on a rigorous comparison of five different detection methods under many different external conditions, such as size of the carrier, previous JPEG compression, and colour channel selection. We include analyses demonstrating the effects of local variance and cover saturation on the different sources of error, as well as presenting the case for a relative bias model for between-image error.
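One way to read the two-factor model is as a variance decomposition over repeated attacks on each image: an image-specific (between-image) component versus a message- and placement-dependent (within-image) component. The rough sketch below uses toy data in place of real attack errors and omits the ANOVA-style correction a careful analysis would apply.

```python
import numpy as np

def two_factor_decomposition(errors):
    """Rough split of estimation errors into between-image and within-image variance.

    `errors` has shape (n_images, n_repetitions): each row holds the errors from
    repeatedly embedding different random messages of the same length into one image.
    """
    errors = np.asarray(errors, dtype=np.float64)
    per_image_mean = errors.mean(axis=1)              # image-specific bias
    between_var = per_image_mean.var(ddof=1)
    within_var = errors.var(axis=1, ddof=1).mean()
    return between_var, within_var

# Toy data: heavy-tailed image-specific biases plus message-dependent noise.
rng = np.random.default_rng(0)
bias = 0.02 * rng.standard_t(df=3, size=200)[:, None]
noise = rng.normal(0.0, 0.005, size=(200, 50))
print(two_factor_decomposition(bias + noise))
```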
ACM Workshop on Multimedia and Security | 2008
Andrew D. Ker
The literature now contains a number of highly-sensitive detectors for LSB replacement steganography in digital images. They can also estimate the size of the embedded payload, but cannot locate it. In this short paper we demonstrate that the Weighted Stego-image (WS) steganalysis method can be adapted to locate payload, if a large number of images have the payload embedded in the same locations. Such a situation is plausible if the same embedding key is reused for different images, and the technique presented here may be of use to forensic investigators. As long as a few hundred stego images are available, near-perfect location of the payloads can be achieved.
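A hedged sketch of how such a locator might look, averaging per-pixel WS residuals across stego images that share payload locations; the cover predictor and the threshold are illustrative choices of mine, not the paper's.

```python
import numpy as np
from scipy.ndimage import convolve

def ws_residual(stego):
    """Per-pixel WS residual: roughly 1/2 in expectation where the LSB was overwritten, near 0 elsewhere."""
    s = stego.astype(np.float64)
    flipped = s + 1 - 2 * (s % 2)
    kernel = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=np.float64) / 4.0
    pred = convolve(s, kernel, mode='reflect')
    return (s - pred) * (s - flipped)

def locate_payload(stego_images, threshold=0.25):
    """Average residuals over images sharing the same payload locations, then threshold."""
    mean_residual = np.mean([ws_residual(img) for img in stego_images], axis=0)
    return mean_residual > threshold     # boolean map of suspected payload positions
```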