
Publication


Featured research published by Pierre Moulin.


IEEE Signal Processing Letters | 1999

Low-complexity image denoising based on statistical modeling of wavelet coefficients

M. Kivanc Mihcak; Igor Kozintsev; Kannan Ramchandran; Pierre Moulin

We introduce a simple spatially adaptive statistical model for wavelet image coefficients and apply it to image denoising. Our model is inspired by a recent wavelet image compression algorithm, the estimation-quantization (EQ) coder. We model wavelet image coefficients as zero-mean Gaussian random variables with high local correlation. We assume a marginal prior distribution on wavelet coefficients variances and estimate them using an approximate maximum a posteriori probability rule. Then we apply an approximate minimum mean squared error estimation procedure to restore the noisy wavelet image coefficients. Despite the simplicity of our method, both in its concept and implementation, our denoising results are among the best reported in the literature.
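
The estimator described above can be sketched in a few lines: estimate a local signal variance in a small window around each wavelet coefficient, then apply the Wiener (MMSE) shrinkage factor. This is an illustrative NumPy version, not the authors' code; the window size and the clipped moment-matching variance estimate are simplifying assumptions.

```python
import numpy as np

def local_wiener_shrink(coeffs, noise_var, win=3):
    """Shrink noisy wavelet coefficients using a locally estimated
    signal variance, in the spirit of locally adaptive Wiener filtering.
    coeffs: 2-D array of noisy wavelet coefficients (one subband).
    noise_var: known (or separately estimated) noise variance.
    """
    h = win // 2
    padded = np.pad(coeffs, h, mode="reflect")
    out = np.empty_like(coeffs, dtype=float)
    for i in range(coeffs.shape[0]):
        for j in range(coeffs.shape[1]):
            window = padded[i:i + win, j:j + win]
            # Local second moment minus noise variance, clipped at zero,
            # as a crude stand-in for the approximate MAP variance estimate.
            sig_var = max(np.mean(window ** 2) - noise_var, 0.0)
            # Wiener (MMSE) shrinkage factor for a zero-mean Gaussian prior.
            out[i, j] = sig_var / (sig_var + noise_var) * coeffs[i, j]
    return out
```

Because the shrinkage factor lies in [0, 1), every coefficient is pulled toward zero, with strong shrinkage where the local energy is close to the noise floor.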


IEEE Transactions on Information Theory | 1999

Analysis of multiresolution image denoising schemes using generalized Gaussian and complexity priors

Pierre Moulin; Juan Liu

Research on universal and minimax wavelet shrinkage and thresholding methods has demonstrated near-ideal estimation performance in various asymptotic frameworks. However, image processing practice has shown that universal thresholding methods are outperformed by simple Bayesian estimators assuming independent wavelet coefficients and heavy-tailed priors such as generalized Gaussian distributions (GGDs). In this paper, we investigate various connections between shrinkage methods and maximum a posteriori (MAP) estimation using such priors. In particular, we state a simple condition under which MAP estimates are sparse. We also introduce a new family of complexity priors based upon Rissanen's universal prior on integers. One particular estimator in this class outperforms conventional estimators based on earlier applications of the minimum description length (MDL) principle. We develop analytical expressions for the shrinkage rules implied by GGD and complexity priors. This allows us to show the equivalence between universal hard thresholding, MAP estimation using a very heavy-tailed GGD, and MDL estimation using one of the new complexity priors. Theoretical analysis supported by numerous practical experiments shows the robustness of some of these estimates against mis-specifications of the prior, a basic concern in image processing applications.
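
One concrete instance of the GGD-MAP connection the abstract describes: under Gaussian noise and a Laplacian prior (a GGD with shape parameter 1), the MAP estimate reduces to classical soft thresholding, and the estimate is exactly zero for small observations, illustrating the sparsity condition. A minimal sketch, with `lam` as an assumed prior scale parameter:

```python
import numpy as np

def map_soft_threshold(y, noise_var, lam):
    """MAP estimate of x from y = x + N(0, noise_var) under a Laplacian
    prior p(x) proportional to exp(-lam * |x|): the classical
    soft-thresholding rule with threshold t = lam * noise_var.
    """
    t = lam * noise_var
    # Observations with |y| <= t map to exactly zero (sparse MAP estimate).
    return np.sign(y) * np.maximum(np.abs(y) - t, 0.0)
```

Heavier-tailed GGDs (shape below 1) yield shrinkage rules closer to hard thresholding, which is the regime where the equivalences stated in the abstract arise.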


International Conference on Image Processing | 2000

Robust image hashing

Ramarathnam Venkatesan; S.-M. Koon; Mariusz H. Jakubowski; Pierre Moulin

The proliferation of digital images creates problems for managing large image databases, indexing individual images, and protecting intellectual property. This paper introduces a novel image indexing technique that may be called an image hash function. The algorithm uses randomized signal processing strategies for a non-reversible compression of images into random binary strings, and is shown to be robust against image changes due to compression, geometric distortions, and other attacks. This algorithm brings to images a direct analog of message authentication codes (MACs) from cryptography, in which a main goal is to make hash values on a set of distinct inputs pairwise independent. This minimizes the probability that two hash values collide, even when inputs are generated by an adversary.
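
The flavor of such randomized, robustness-oriented hashing can be conveyed with a toy sketch (this is not the published algorithm): average intensities over key-selected random rectangles, then binarize against their median. Region averages are stable under small distortions, and the secret seed plays the role of the hash key.

```python
import numpy as np

def toy_robust_hash(img, n_bits=32, seed=1234):
    """Illustrative randomized image hash: one bit per pseudorandom
    rectangle, set by comparing its mean intensity to the median of all
    rectangle means. `seed` acts as the secret key."""
    rng = np.random.default_rng(seed)
    h, w = img.shape
    stats = []
    for _ in range(n_bits):
        # Pseudorandom rectangle, at least 4x4 so the average is robust.
        y0 = rng.integers(0, h - 4)
        x0 = rng.integers(0, w - 4)
        y1 = rng.integers(y0 + 4, h + 1)
        x1 = rng.integers(x0 + 4, w + 1)
        stats.append(img[y0:y1, x0:x1].mean())
    stats = np.array(stats)
    return (stats > np.median(stats)).astype(np.uint8)
```

Unlike a cryptographic hash, perceptually similar inputs should produce nearby binary strings (small Hamming distance), while the randomization makes the bits hard for an adversary to predict without the key.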


Proceedings of the IEEE | 2005

Data-Hiding Codes

Pierre Moulin; Ralf Koetter

This tutorial paper reviews the theory and design of codes for hiding or embedding information in signals such as images, video, audio, graphics, and text. Such codes have also been called watermarking codes; they can be used in a variety of applications, including copyright protection for digital media, content authentication, media forensics, data binding, and covert communications. Some of these applications imply the presence of an adversary attempting to disrupt the transmission of information to the receiver; other applications involve a noisy, generally unknown, communication channel. Our focus is on the mathematical models, fundamental principles, and code design techniques that are applicable to data hiding. The approach draws from basic concepts in information theory, coding theory, game theory, and signal processing, and is illustrated with applications to the problem of hiding data in images.
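
One well-known family of data-hiding codes in this literature is quantization index modulation (QIM): each host sample is quantized onto one of two interleaved lattices, and the lattice choice carries the bit. A minimal scalar sketch (illustrative, not a construction from the paper):

```python
import numpy as np

def qim_embed(x, bits, delta):
    """Embed one bit per host sample by quantizing x onto a lattice of
    step `delta`, offset by delta/2 when the bit is 1. The embedding
    distortion per sample is at most delta/2."""
    offset = bits * (delta / 2.0)
    return np.round((x - offset) / delta) * delta + offset

def qim_decode(y, delta):
    """Recover each bit by checking which of the two lattices the
    (possibly noisy) sample is closer to."""
    d0 = np.abs(y - np.round(y / delta) * delta)
    y1 = np.round((y - delta / 2) / delta) * delta + delta / 2
    d1 = np.abs(y - y1)
    return (d1 < d0).astype(np.uint8)
```

Decoding remains exact as long as the attack noise stays below delta/4 per sample, which is the basic robustness/distortion trade-off that the step size controls.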


IEEE Transactions on Image Processing | 2001

Information-theoretic analysis of interscale and intrascale dependencies between image wavelet coefficients

Juan Liu; Pierre Moulin

This paper presents an information-theoretic analysis of statistical dependencies between image wavelet coefficients. The dependencies are measured using mutual information, which has a fundamental relationship to data compression, estimation, and classification performance. Mutual information is computed analytically for several statistical image models, and depends strongly on the choice of wavelet filters. In the absence of an explicit statistical model, a method is studied for reliably estimating mutual information from image data. The validity of the model-based and data-driven approaches is assessed on representative real-world photographic images. Our results are consistent with empirical observations that coding schemes exploiting either interscale or intrascale dependencies alone perform very well, whereas taking both into account does not significantly improve coding performance. A similar observation applies to other image processing applications.
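
The simplest data-driven route to such measurements is the plug-in estimator: form a joint histogram of two coefficient arrays (e.g., a subband and its parent) and evaluate mutual information from the empirical distributions. A sketch, with the bin count as an assumed tuning parameter (the paper studies more careful estimation than this):

```python
import numpy as np

def mutual_information(x, y, bins=16):
    """Plug-in estimate of I(X;Y) in bits from a joint histogram of two
    coefficient arrays. Positively biased for finite samples."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of X
    py = pxy.sum(axis=0, keepdims=True)   # marginal of Y
    nz = pxy > 0                          # avoid log(0) on empty bins
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))
```

Independent inputs give a value near zero (up to the finite-sample bias), while dependent inputs give a clearly positive value, which is how inter- versus intrascale dependency strength can be compared.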


International Conference on Computer Vision | 2011

RGBD-HuDaAct: A color-depth video database for human daily activity recognition

Bingbing Ni; Gang Wang; Pierre Moulin

In this paper, we present a home-monitoring oriented human activity recognition benchmark database, based on the combination of a color video camera and a depth sensor. Our contributions are two-fold: 1) We have created a publicly releasable human activity video database (named RGBD-HuDaAct), which contains synchronized color-depth video streams, for the task of human daily activity recognition. This database aims at encouraging more research efforts on human activity recognition based on multi-modality sensor combination (e.g., color plus depth). 2) Two multi-modality fusion schemes, which naturally combine color and depth information, have been developed from two state-of-the-art feature representation methods for action recognition, i.e., spatio-temporal interest points (STIPs) and motion history images (MHIs). These depth-extended feature representation methods are evaluated comprehensively, and superior recognition performance over their uni-modality (e.g., color only) counterparts is demonstrated.
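
Of the two baseline representations named above, the motion history image is the simpler to illustrate: pixels that just moved are set to a maximum value tau, and everything else decays by one per frame, so brighter pixels mean more recent motion. A minimal NumPy sketch (the threshold and tau are illustrative assumptions):

```python
import numpy as np

def motion_history_image(frames, tau=5, motion_thresh=0.1):
    """Minimal motion history image (MHI) over a sequence of 2-D
    grayscale frames: recency of motion encoded as pixel intensity."""
    mhi = np.zeros_like(frames[0], dtype=float)
    for prev, cur in zip(frames[:-1], frames[1:]):
        # Pixels whose intensity changed more than the threshold "moved".
        motion = np.abs(cur.astype(float) - prev.astype(float)) > motion_thresh
        # Moving pixels reset to tau; static pixels decay toward zero.
        mhi = np.where(motion, float(tau), np.maximum(mhi - 1.0, 0.0))
    return mhi
```

The depth-extended variant in the paper applies the same recipe to depth frames, so the fused descriptor sees motion in both appearance and geometry.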


Computer Vision and Pattern Recognition | 2015

Deep hashing for compact binary codes learning

Venice Erin Liong; Jiwen Lu; Gang Wang; Pierre Moulin; Jie Zhou

In this paper, we propose a new deep hashing (DH) approach to learn compact binary codes for large scale visual search. Unlike most existing binary codes learning methods which seek a single linear projection to map each sample into a binary vector, we develop a deep neural network to seek multiple hierarchical non-linear transformations to learn these binary codes, so that the nonlinear relationship of samples can be well exploited. Our model is learned under three constraints at the top layer of the deep network: 1) the loss between the original real-valued feature descriptor and the learned binary vector is minimized, 2) the binary codes distribute evenly on each bit, and 3) different bits are as independent as possible. To further improve the discriminative power of the learned binary codes, we extend DH into supervised DH (SDH) by including one discriminative term into the objective function of DH which simultaneously maximizes the inter-class variations and minimizes the intra-class variations of the learned binary codes. Experimental results show the superiority of the proposed approach over the state-of-the-arts.
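
The three top-layer constraints enumerated above can be written down directly as penalties on a batch of real-valued network outputs. A hedged NumPy sketch (loss weights and the exact formulation in the paper may differ):

```python
import numpy as np

def dh_losses(h):
    """The three deep-hashing penalties on a batch of real-valued
    outputs h (shape: n_samples x n_bits), assuming codes in {-1, +1}:
    quantization loss, bit-balance loss, bit-independence loss."""
    b = np.sign(h)                                  # binarized codes
    quant = np.mean((h - b) ** 2)                   # 1) close to binary
    balance = np.mean(np.mean(h, axis=0) ** 2)      # 2) each bit ~half +1/-1
    c = (h.T @ h) / h.shape[0]                      # bit correlation matrix
    indep = np.sum((c - np.eye(h.shape[1])) ** 2)   # 3) decorrelated bits
    return quant, balance, indep
```

Codes that are exactly binary, balanced, and mutually uncorrelated drive all three terms to zero; the supervised SDH variant adds a discriminative term on top of these.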


IEEE Transactions on Image Processing | 2002

A framework for evaluating the data-hiding capacity of image sources

Pierre Moulin; Mehmet Kivanc Mihcak

An information-theoretic model for image watermarking and data hiding is presented in this paper. Previous theoretical results are used to characterize the fundamental capacity limits of image watermarking and data-hiding systems. Capacity is determined by the statistical model used for the host image, by the distortion constraints on the data hider and the attacker, and by the information available to the data hider, to the attacker, and to the decoder. We consider autoregressive, block-DCT, and wavelet statistical models for images and compute data-hiding capacity for compressed and uncompressed host-image sources. Closed-form expressions are obtained under sparse-model approximations. Models for geometric attacks and distortion measures that are invariant to such attacks are considered.
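
As a purely illustrative toy of what such capacity expressions look like: if each coefficient group behaved like an independent AWGN channel with embedding power d1_i and attack-noise power d2_i, the total rate would take a sum-of-log-SNR form. The paper's actual expressions account for the host statistics and the hider/attacker game; this sketch only shows the shape of the formula.

```python
import numpy as np

def toy_hiding_capacity(d1_alloc, d2_alloc):
    """Toy parallel-channel rate: sum_i 0.5*log2(1 + d1_i/d2_i) bits,
    for per-group embedding distortions d1 and attack distortions d2.
    Illustrative only; not the paper's capacity formula."""
    d1 = np.asarray(d1_alloc, dtype=float)
    d2 = np.asarray(d2_alloc, dtype=float)
    return float(np.sum(0.5 * np.log2(1.0 + d1 / d2)))
```

The interesting question the paper answers is how the host model (autoregressive, block-DCT, wavelet) and the information available to each party change the allocation across such groups.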


IEEE Transactions on Signal Processing | 1994

Wavelet thresholding techniques for power spectrum estimation

Pierre Moulin

Estimation of the power spectrum S(f) of a stationary random process can be viewed as a nonparametric statistical estimation problem. We introduce a nonparametric approach based on a wavelet representation for the logarithm of the unknown S(f). This approach offers the ability to capture statistically significant components of ln S(f) at different resolution levels and guarantees nonnegativity of the spectrum estimator. The spectrum estimation problem is set up as a problem of inference on the wavelet coefficients of a signal corrupted by additive non-Gaussian noise. We propose a wavelet thresholding technique to solve this problem under specified noise/resolution tradeoffs and show that the wavelet coefficients of the additive noise may be treated as independent random variables. The thresholds are computed using a saddle-point approximation to the distribution of the noise coefficients.
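
The pipeline described above (log, wavelet-threshold, exponentiate) can be sketched with a hand-rolled Haar transform; the Haar basis, hard thresholding, and a fixed threshold are simplifying assumptions here, standing in for the paper's saddle-point thresholds.

```python
import numpy as np

def haar_spectrum_estimate(periodogram, thresh):
    """Sketch on a length-2^k periodogram: take the log (whose
    estimation noise is approximately additive), hard-threshold its Haar
    wavelet detail coefficients, invert, and exponentiate. The exp()
    step makes the spectrum estimate nonnegative by construction."""
    a = np.log(np.asarray(periodogram, dtype=float))
    # Forward Haar transform, collecting detail bands fine-to-coarse.
    details = []
    while len(a) > 1:
        avg = (a[0::2] + a[1::2]) / np.sqrt(2.0)
        det = (a[0::2] - a[1::2]) / np.sqrt(2.0)
        details.append(det)
        a = avg
    # Keep-or-kill (hard) thresholding of the detail coefficients.
    details = [np.where(np.abs(d) > thresh, d, 0.0) for d in details]
    # Inverse Haar transform.
    for det in reversed(details):
        up = np.empty(2 * len(a))
        up[0::2] = (a + det) / np.sqrt(2.0)
        up[1::2] = (a - det) / np.sqrt(2.0)
        a = up
    return np.exp(a)
```

With a zero threshold the transform round-trips exactly; with a very large threshold every detail is killed and the estimate collapses to a flat spectrum at the geometric-mean level.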


IEEE Transactions on Information Forensics and Security | 2007

Optimized Feature Extraction for Learning-Based Image Steganalysis

Ying Wang; Pierre Moulin

The purpose of image steganalysis is to detect the presence of hidden messages in cover photographic images. Supervised learning is an effective and universal approach to cope with the twin difficulties of unknown image statistics and unknown steganographic codes. A crucial part of the learning process is the selection of low-dimensional informative features. We investigate this problem from three angles and propose a three-level optimization of the classifier. First, we select a subband image representation that provides better discrimination ability than a conventional wavelet transform. Second, we analyze two types of features, empirical moments of probability density functions (PDFs) and empirical moments of characteristic functions of the PDFs, and compare their merits. Third, we address the problem of feature dimensionality reduction, which strongly impacts classification accuracy. Experiments show that our method outperforms previous steganalysis methods. For instance, when the probability of false alarm is fixed at 1%, the stegoimage detection probability of our algorithm exceeds that of its closest competitor by at least 15% and up to 50%.
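
The two feature families compared in the second step can both be computed from a normalized histogram of subband coefficients: moments of the empirical PDF directly, and moments of the magnitude of its characteristic function (here approximated by the DFT of the histogram). A sketch with assumed bin count and moment order:

```python
import numpy as np

def pdf_and_cf_moments(samples, order=3, bins=64):
    """Compute the two steganalysis feature families from one set of
    coefficient samples: empirical PDF moments, and moments of the
    magnitude of the characteristic function (DFT of the histogram)."""
    hist, edges = np.histogram(samples, bins=bins)
    p = hist / hist.sum()                  # empirical PDF over bins
    centers = (edges[:-1] + edges[1:]) / 2.0
    pdf_moments = [float(np.sum(p * centers ** k)) for k in range(1, order + 1)]
    cf = np.abs(np.fft.rfft(p))            # |characteristic function| samples
    freqs = np.arange(len(cf))
    w = cf / cf.sum()                      # normalized CF magnitude
    cf_moments = [float(np.sum(w * freqs ** k)) for k in range(1, order + 1)]
    return pdf_moments, cf_moments
```

Embedding tends to smooth coefficient histograms slightly, which shifts these moments; the paper's contribution is analyzing which family, at which order, carries the most discriminative information.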

Collaboration


Dive into Pierre Moulin's collaborations.

Top Co-Authors

Bingbing Ni
Shanghai Jiao Tong University

Gang Wang
Nanyang Technological University

Shuicheng Yan
National University of Singapore

John W. Woods
Rensselaer Polytechnic Institute