Publication


Featured research published by Frédéric Deguillaume.


International Conference on Multimedia Computing and Systems | 1999

Template based recovery of Fourier-based watermarks using log-polar and log-log maps

Shelby Pereira; Joseph Ó Ruanaidh; Frédéric Deguillaume; Gabriela Otilia Csurka; Thierry Pun

Digital watermarks have been proposed as a method for discouraging illicit copying and distribution of copyrighted material. The paper describes a method for the secure and robust copyright protection of digital images. We present an approach for embedding a digital watermark into an image using the fast Fourier transform. A template is added to this watermark in the Fourier transform domain to render the method robust against rotation, scaling, and aspect-ratio changes. We detail an algorithm based on log-polar or log-log maps for the accurate and efficient recovery of the template in a rotated and scaled image. We also present results demonstrating the robustness of the method against common image processing operations such as compression, rotation, scaling and aspect-ratio changes.
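The property the log-polar recovery relies on can be checked numerically: scaling the image plane by s and rotating it by phi becomes a pure translation by (log s, phi) in log-polar coordinates. A minimal toy verification (illustrative values, not the authors' implementation):

```python
import numpy as np

# A point at radius r, angle theta, scaled by s and rotated by phi, moves
# to radius s*r, angle theta + phi -- i.e. a translation by (log s, phi)
# in (log r, theta) coordinates.
def to_log_polar(x, y):
    r = np.hypot(x, y)
    return np.log(r), np.arctan2(y, x)

s, phi = 2.0, np.pi / 6                     # scale and rotation (toy values)
x, y = 3.0, 4.0
xr = s * (x * np.cos(phi) - y * np.sin(phi))
yr = s * (x * np.sin(phi) + y * np.cos(phi))

lr0, th0 = to_log_polar(x, y)
lr1, th1 = to_log_polar(xr, yr)
print(np.isclose(lr1 - lr0, np.log(s)))     # True: radial shift is log s
print(np.isclose(th1 - th0, phi))           # True: angular shift is phi
```

This is why a rotated and scaled image can be registered by a simple shift search in the log-polar domain of the Fourier magnitude.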


Electronic Imaging | 1999

Robust 3D DFT video watermarking

Frédéric Deguillaume; Gabriela Otilia Csurka; Joseph John O'Ruanaidh; Thierry Pun

This paper proposes a new approach for digital watermarking and secure copyright protection of videos, the principal aim being to discourage illicit copying and distribution of copyrighted material. The method is based on the discrete Fourier transform (DFT) of three-dimensional chunks of a video scene, in contrast with previous work on video watermarking where each video frame was marked separately, or where only intra-frames or motion-compensation parameters were marked in MPEG-compressed videos. Two kinds of information are hidden in the video: a watermark and a template. Both are encoded using an owner key to ensure the security of the system, and are embedded in the 3D DFT magnitude of video chunks. The watermark is copyright information encoded in the form of a spread-spectrum signal. The template is a key-based grid used to detect and invert the effect of frame-rate changes, aspect-ratio modifications and rescaling of frames. The template search and matching are performed in the log-log-log map of the 3D DFT magnitude. The performance of the presented technique is evaluated experimentally and compared with a frame-by-frame 2D DFT watermarking approach.
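A rough numpy sketch of the embedding principle (the chunk size, coefficient band, embedding strength, and the non-blind detection below are all illustrative assumptions, not the paper's actual scheme; the template grid is omitted entirely):

```python
import numpy as np

rng = np.random.default_rng(42)             # the "owner key" as an RNG seed
chunk = rng.random((8, 32, 32))             # toy chunk: 8 frames of 32x32

F = np.fft.fftn(chunk)
mag, phase = np.abs(F), np.angle(F)

# Spread-spectrum watermark: a key-generated +/-1 sequence added to a band
# of mid-frequency 3D DFT magnitude coefficients.
idx = (slice(2, 4), slice(4, 8), slice(4, 8))
wm = rng.choice([-1.0, 1.0], size=mag[idx].shape)
alpha = 0.1 * mag[idx].mean()               # embedding strength (assumed)
mag[idx] += alpha * wm

marked = np.real(np.fft.ifftn(mag * np.exp(1j * phase)))

# Detection: correlate the key sequence with the magnitude change at the
# same coefficients (non-blind here, purely to show the principle).
diff = np.abs(np.fft.fftn(marked))[idx] - np.abs(np.fft.fftn(chunk))[idx]
score = np.corrcoef(wm.ravel(), diff.ravel())[0, 1]
print(score > 0.9)                          # True
```

Because the chunk is real-valued, taking the real part after modifying one half of the spectrum halves the effective embedding strength; a practical embedder would modify Hermitian-symmetric coefficient pairs.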


Signal Processing | 2003

Secure hybrid robust watermarking resistant against tampering and copy attack

Frédéric Deguillaume; Sviatoslav Voloshynovskiy; Thierry Pun

Digital watermarking appears today as an efficient means of securing multimedia documents. Several application scenarios for digital watermarking have been identified, each with different security requirements. The three main scenarios are: copyright protection, i.e. protecting ownership and usage rights; tamper proofing, aiming at detecting malicious modifications; and authentication, the purpose of which is to check the authenticity of the originator of a document. While robust watermarks, which survive changes and alterations of the protected documents, are typically used for copyright protection, tamper proofing and authentication generally require fragile or semi-fragile watermarks in order to detect modified or faked documents. Further, most robust watermarking schemes are vulnerable to the so-called copy attack, in which a watermark can be copied from one document to another by an unauthorized person, making these schemes ineffective in authentication applications. In this paper, we propose a hybrid watermarking method joining a robust and a fragile or semi-fragile watermark, thus combining copyright protection and tamper proofing; as a result, the approach is also resistant against the copy attack. In addition, the fragile information is inserted in a way which preserves the robustness and reliability of the robust part. The numerous tests and the results obtained with the Stirmark benchmark demonstrate the superior performance of the proposed approach.


International Conference on Image Processing | 2001

Multibit digital watermarking robust against local nonlinear geometrical distortions

Sviatoslav Voloshynovskiy; Frédéric Deguillaume; Thierry Pun

This paper presents an efficient method for the estimation of, and recovery from, nonlinear or local geometrical distortions, such as the random bending attack and restricted projective transforms. The distortions are modeled as a set of local affine transforms, the watermark being repeatedly embedded in small blocks in order to ensure locality. The estimation of the affine-transform parameters is formulated as a robust penalized maximum likelihood (ML) problem, which is suitable for local as well as global distortions. Results with the Stirmark benchmark confirm the high robustness of the proposed method and show its state-of-the-art performance.


Information Hiding | 1999

A Bayesian Approach to Affine Transformation Resistant Image and Video Watermarking

Gabriella Csurka; Frédéric Deguillaume; Joseph Ó Ruanaidh; Thierry Pun

This paper proposes a new approach for assessing the presence of a digital watermark in images and videos. It relies on a Bayesian formulation that makes it possible to compute the probability that a watermark was generated using a given key. The watermarking itself relies on the discrete Fourier transform (DFT) of the image, of video frames, or of three-dimensional chunks of a video scene.
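The flavour of such a probabilistic assessment can be sketched with a simple correlation detector: under the "wrong key" hypothesis the normalized correlation between the key sequence and the extracted signal is approximately N(0, 1/n), so a tail probability quantifies chance agreement. The sequence length, embedding strength and noise level below are illustrative assumptions, not the paper's model:

```python
import numpy as np
from math import erfc, sqrt

rng = np.random.default_rng(3)
n = 1024
w = rng.choice([-1.0, 1.0], n)              # watermark generated from the key
x = 0.3 * w + rng.normal(0, 1, n)           # noisy extracted signal (assumed)

# Under the "wrong key" hypothesis the normalized correlation is roughly
# N(0, 1/n); a one-sided tail probability quantifies chance agreement.
rho = (w @ x) / (np.linalg.norm(w) * np.linalg.norm(x))
p_chance = 0.5 * erfc(rho * sqrt(n / 2))
print(p_chance < 1e-6)                      # True: a chance match is negligible
```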


Electronic Imaging | 2002

Method for the estimation and recovering from general affine transforms in digital watermarking applications

Frédéric Deguillaume; Sviatoslav Voloshynovskiy; Thierry Pun

An important problem constraining the practical exploitation of robust watermarking technologies is the low robustness of existing algorithms against geometrical distortions such as rotation, scaling, cropping, translation, change of aspect ratio and shearing. All of these attacks can be uniquely described by general affine transforms. In this work, we propose a robust estimation method using the a priori known regularity of a set of points. These points are typically local maxima, or peaks, resulting either from the autocorrelation function (ACF) or from the magnitude spectrum (MS) generated by periodic patterns, which produce regularly aligned and equally spaced points. This structure is preserved under any affine transform. The estimation of the affine-transform parameters is formulated as a robust penalized Maximum Likelihood (ML) problem. We propose an efficient approximation of this problem based on the Hough transform (HT) or Radon transform (RT), which are known to be very robust at detecting alignments, even in the presence of noise from misaligned, missing, or extra points. The high efficiency of the method is demonstrated even under severe degradations, including JPEG compression with a quality factor of 50%, where other known algorithms fail. Results with the Stirmark benchmark confirm the high robustness of the proposed method.
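As a much simpler stand-in for the Hough/Radon approximation of the penalized-ML problem, the idea of exploiting the regular point structure can be illustrated with a least-squares affine fit plus one residual-trimming pass, recovering the transform from noisy grid peaks despite a gross outlier (toy data, hypothetical parameters):

```python
import numpy as np

rng = np.random.default_rng(0)
A_true = np.array([[1.1, 0.2], [-0.1, 0.9]])        # unknown affine distortion
grid = np.array([(i, j) for i in range(-3, 4) for j in range(-3, 4)], float)
peaks = grid @ A_true.T + rng.normal(0, 0.01, grid.shape)
peaks[0] += 5.0                                     # one gross outlier peak

def fit_affine(g, p):
    # Least squares for the 2x2 matrix A in p ~= g @ A.T.
    return np.linalg.lstsq(g, p, rcond=None)[0].T

A = fit_affine(grid, peaks)                         # initial fit (biased)
resid = np.linalg.norm(grid @ A.T - peaks, axis=1)
keep = resid < 3 * np.median(resid)                 # trim gross outliers
A = fit_affine(grid[keep], peaks[keep])             # robust refit
print(np.allclose(A, A_true, atol=0.05))            # True
```

The Hough/Radon formulation in the paper is considerably more robust than this trimming heuristic when many peaks are missing or spurious.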


Multimedia Storage and Archiving Systems II | 1997

Video segmentation and camera motion characterization using compressed data

Ruggero Milanese; Frédéric Deguillaume; Alain Jacot-Descombes

We address the problem of automatically extracting visual indexes from videos, in order to provide sophisticated access methods to the contents of a video server. We focus on two tasks, namely the decomposition of a video clip into uniform segments, and the characterization of each shot by camera motion parameters. For the first task we use a Bayesian classification approach to detecting scene cuts by analyzing motion vectors. For the second task, a least-squares fitting procedure determines the pan/tilt/zoom camera parameters. In order to guarantee the highest processing speed, all techniques process and analyze MPEG-1 motion vectors directly, without the need for video decompression. Experimental results are reported for a database of news video clips.
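A toy sketch of a Bayesian cut decision (the use of a motion-vector mismatch fraction as the feature and the class-conditional statistics are illustrative assumptions, not the paper's trained classifier):

```python
from math import log, pi

def log_gauss(x, mu, sigma):
    # Log-density of a 1D Gaussian.
    return -0.5 * log(2 * pi * sigma ** 2) - (x - mu) ** 2 / (2 * sigma ** 2)

def classify(x, prior_cut=0.05):
    # Assumed class-conditional statistics for the per-frame fraction x of
    # mismatched motion vectors: high at a scene cut, low otherwise.
    post_cut = log_gauss(x, mu=0.8, sigma=0.15) + log(prior_cut)
    post_no = log_gauss(x, mu=0.1, sigma=0.1) + log(1 - prior_cut)
    return "cut" if post_cut > post_no else "no cut"

print(classify(0.9))    # cut
print(classify(0.05))   # no cut
```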


Real-Time Imaging | 1999

Efficient Segmentation and Camera Motion Indexing of Compressed Video

Ruggero Milanese; Frédéric Deguillaume; Alain Jacot-Descombes

In order to provide sophisticated access methods to the contents of video servers, it is necessary to automatically process and represent each video through a number of visual indexes. We focus on two tasks, namely the hierarchical representation of a video as a sequence of uniform segments (shots), and the characterization of each shot by a vector describing the camera motion parameters. For the first task we use a Bayesian classification approach to detecting scene cuts by analysing motion vectors. Adaptability to different compression qualities is achieved by learning different classification masks. For the second task, the optical flow is processed in order to distinguish between stationary and moving shots. A least-squares fitting procedure determines the pan/tilt/zoom camera parameters within shots that present regular motion. Each shot is then indexed by a vector representing the dominant motion components and the type of motion. In order to maximize processing speed, all techniques process and analyse MPEG-1 motion vectors directly, without the need for video decompression. An overall processing rate of 59 frames/s is achieved in software. The classification performance, evaluated on various news video clips totalling 61,023 frames, attains 97.7% for shot segmentation, 88.4% for the stationary vs. moving shot classification, and 94.7% for the detailed camera motion characterization.
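The pan/tilt/zoom fit can be illustrated with a simple linear motion model in which each motion vector at position (x, y), relative to the frame centre, is (pan + zoom·x, tilt + zoom·y); the model form and all parameter values below are assumptions for illustration, not the paper's exact estimator:

```python
import numpy as np

rng = np.random.default_rng(1)
pan, tilt, zoom = 2.0, -1.0, 0.05           # synthetic camera parameters

# Block positions on a 16-pixel grid, relative to the frame centre.
xs, ys = np.meshgrid(np.arange(-8, 9) * 16.0, np.arange(-5, 6) * 16.0)
u = pan + zoom * xs + rng.normal(0, 0.1, xs.shape)   # horizontal vectors
v = tilt + zoom * ys + rng.normal(0, 0.1, ys.shape)  # vertical vectors

# Least-squares fit: design matrix [1, x] for u and [1, y] for v.
Au = np.column_stack([np.ones(xs.size), xs.ravel()])
pu, zu = np.linalg.lstsq(Au, u.ravel(), rcond=None)[0]
Av = np.column_stack([np.ones(ys.size), ys.ravel()])
pv, zv = np.linalg.lstsq(Av, v.ravel(), rcond=None)[0]

est = (pu, pv, (zu + zv) / 2)               # average the two zoom estimates
print(np.allclose(est, (pan, tilt, zoom), atol=0.05))   # True
```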


Electronic Imaging | 2003

Data hiding capacity analysis for real images based on stochastic nonstationary geometrical models

Sviatoslav Voloshynovskiy; Oleksiy J. Koval; Frédéric Deguillaume; Thierry Pun

In this paper we consider the problem of capacity analysis in the framework of an information-theoretic model of data hiding. Capacity is determined by the stochastic model of the host image, by the distortion constraints, and by the side information about the watermarking channel state available at the encoder and at the decoder. We emphasize the importance of proper modeling of image statistics and outline the possible decrease in the expected fundamental capacity limits if there is a mismatch between the stochastic image model used in the hider/attacker optimization game and the actual model used by the attacker. To obtain a realistic estimate of possible embedding rates we propose a novel stochastic non-stationary image model based on geometrical priors. This model outperforms the previously analyzed EQ and spike models in reference applications such as denoising. Finally, we demonstrate how the proposed model influences the estimation of capacity for real images. We extend our model to different transform domains, including orthogonal, biorthogonal and overcomplete data representations.
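One way to see why the choice of stochastic model shifts the capacity estimate (an illustrative Jensen-inequality argument with assumed powers, not the paper's model): with a Costa-type per-coefficient bound C_i = 0.5·log2(1 + P_i/N) and an embedding power P_i that tracks a locally varying host variance, concavity of the logarithm implies that a flat (stationary) model using the average power predicts more capacity than the non-stationary sum:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 10000
N = 0.05                                    # attack-noise power (assumed)
P = 0.05 * rng.gamma(1.0, 1.0, n)           # locally varying embedding power

C_nonstat = 0.5 * np.log2(1 + P / N).sum()          # non-stationary sum
C_flat = n * 0.5 * np.log2(1 + P.mean() / N)        # flat-model prediction
print(C_flat > C_nonstat)                   # True, by Jensen's inequality
```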


Proceedings of SPIE | 2001

Optimal adaptive diversity watermarking with channel state estimation

Sviatoslav Voloshynovskiy; Frédéric Deguillaume; Shelby Pereira; Thierry Pun

This work advocates the formulation of digital watermarking as a communication problem. We consider watermarking as communication with side information available to both the encoder and the decoder. A generalized watermarking channel is considered that includes geometrical attacks, fading and additive non-Gaussian noise. The optimal encoding/decoding scenario is discussed for this generalized watermarking channel.
