Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Julien Reichel is active.

Publication


Featured research published by Julien Reichel.


IEEE Transactions on Image Processing | 2001

Integer wavelet transform for embedded lossy to lossless image compression

Julien Reichel; Gloria Menegaz; Marcus J. Nadenau; Murat Kunt

The use of the discrete wavelet transform (DWT) for embedded lossy image compression is now well established. One of the possible implementations of the DWT is the lifting scheme (LS). Because perfect reconstruction is granted by the structure of the LS, nonlinear transforms can be used, allowing efficient lossless compression as well. The integer wavelet transform (IWT) is one of them. It is an interesting alternative to the DWT because its rate-distortion performance is similar and the differences can be predicted. This topic is investigated in a theoretical framework. A model of the degradations caused by the use of the IWT instead of the DWT for lossy compression is presented. The rounding operations are modeled as additive noise. The noise is then propagated through the LS structure to measure its impact on the reconstructed pixels. This methodology is verified using simulations with random noise as input. It accurately predicts the results obtained using images compressed by the well-known EZW algorithm. Experiments are also performed to measure the difference in terms of bit rate and visual quality. This allows a better understanding of the impact of the IWT when applied to lossy image compression.
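The lifting structure described in the abstract can be sketched with the reversible 5/3 (LeGall) integer wavelet transform used for lossless JPEG 2000 coding. This is an illustration only, not necessarily the exact transform analyzed in the paper; periodic boundary handling and the rounding convention are assumptions. The floor divisions are the rounding operations that the paper models as additive noise, yet the lifting structure guarantees perfect reconstruction regardless.

```python
# One-level integer 5/3 (LeGall) wavelet transform via lifting.
# Assumes even-length input and periodic boundary extension.

def iwt53_forward(x):
    even, odd = x[0::2], x[1::2]
    # Predict step: high-pass (detail) coefficients, with rounding.
    d = [odd[i] - (even[i] + even[(i + 1) % len(even)]) // 2
         for i in range(len(odd))]
    # Update step: low-pass (approximation) coefficients, with rounding.
    s = [even[i] + (d[i - 1] + d[i] + 2) // 4 for i in range(len(even))]
    return s, d

def iwt53_inverse(s, d):
    # Undo the update, then the predict step, in reverse order.
    even = [s[i] - (d[i - 1] + d[i] + 2) // 4 for i in range(len(s))]
    odd = [d[i] + (even[i] + even[(i + 1) % len(even)]) // 2
           for i in range(len(d))]
    x = [0] * (2 * len(s))
    x[0::2], x[1::2] = even, odd
    return x
```

Because each lifting step is inverted exactly by subtracting what was added, the rounding never breaks reconstruction; it only perturbs the transform coefficients relative to the real-valued DWT, which is precisely the noise the paper propagates through the LS.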


Signal Processing-image Communication | 2002

Performance comparison of masking models based on a new psychovisual test method with natural scenery stimuli

Marcus J. Nadenau; Julien Reichel; Murat Kunt

Various image processing applications exploit a model of the human visual system (HVS). One element of HVS-models describes the masking-effect, which is typically parameterized by psycho-visual experiments that employ superimposed sinusoidal stimuli. Those stimuli are oversimplified with respect to real images and can capture only very elementary masking-effects. To overcome these limitations a new psycho-visual test method is proposed. It is based on natural scenery stimuli and operates in the wavelet domain. The collected psycho-visual data is finally used to evaluate the performance of various masking models under conditions as found in real image processing applications like compression.


international conference on image processing | 2001

On the arithmetic and bandwidth complexity of the lifting scheme

Julien Reichel

The lifting scheme (LS) is a very efficient implementation of the discrete wavelet transform (DWT). We compute the arithmetic gain realized when the LS is used instead of conventional filter banks. It is shown that, contrary to what was presented in the original work from W. Sweldens (see Appl. Comput. Harmon. Anal., vol.3, no.2, p.186-200, 1996), a gain of four is possible. However, the LS should be used with care as it can increase the memory bandwidth. Some implementations are presented together with their impact on the bandwidth. By using a common buffer for all filters, the bandwidth can be reduced to that of the polyphase implementation. Using the method presented in this paper allows a memory-bandwidth-efficient implementation of the LS.
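The arithmetic gain the abstract refers to can be illustrated with a rough operation count for the short 5/3 wavelet (an illustration only; the paper's analysis covers general filter banks, and the gain of four applies to longer filters). Boundary handling is ignored and power-of-two shifts are tallied alongside multiplies.

```python
# Back-of-the-envelope operation counts for one level of the 5/3 wavelet,
# per pair of output samples (one low-pass plus one high-pass coefficient).

def direct_filterbank_ops(lp_taps=5, hp_taps=3):
    # Naive convolution: one multiply per tap, (taps - 1) additions per output.
    mults = lp_taps + hp_taps
    adds = (lp_taps - 1) + (hp_taps - 1)
    return mults, adds

def lifting_ops():
    # Predict step: 2 adds + 1 shift; update step: 2 adds + 1 shift.
    # Shifts are counted in the multiply column for comparison.
    return 2, 4
```

Even for this very short filter pair the lifting factorization needs 6 operations per output pair against 14 for direct convolution, consistent with the paper's point that larger gains appear as the filters grow.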


IEEE Transactions on Image Processing | 2002

Visually improved image compression by combining a conventional wavelet-codec with texture modeling

Marcus J. Nadenau; Julien Reichel; Murat Kunt

Human observers are very sensitive to a loss of image texture in photo-realistic images. For example, a portrait image without the fine skin texture appears unnatural. Once the image is decomposed by a wavelet transformation, this texture is represented by many wavelet coefficients of low and medium amplitude. The conventional encoding of all these coefficients is very expensive in bitrate. Instead, such an unstructured or stochastic texture can be modeled by a noise process and characterized with very few parameters. Thus, a hybrid scheme can be designed that encodes the structural image information with a conventional wavelet codec and the stochastic texture in a model-based manner. Such a scheme, called WITCH (Wavelet-based Image/Texture Coding Hybrid), is proposed. It implements such a hybrid coding approach while nevertheless preserving the features of progressive and lossless coding. Its low computational complexity and parameter coding costs of only 0.01 bpp make it a valuable extension of conventional codecs. A comparison with the JPEG2000 image compression standard showed that the WITCH scheme achieves the same subjective quality while increasing the compression ratio by more than a factor of two.
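The model-based idea can be sketched as follows. This is a minimal illustration of noise-based texture modeling in a wavelet detail band, not the actual WITCH scheme; the threshold split and the single-variance Gaussian model are assumptions made for the sketch.

```python
import numpy as np

# Split a wavelet detail band into "structural" coefficients (coded
# conventionally) and a stochastic texture residual that is summarized
# by a single variance parameter instead of being coded coefficient by
# coefficient.

def encode_texture_band(band, threshold):
    structural = np.where(np.abs(band) >= threshold, band, 0.0)
    residual = band - structural          # low-amplitude texture coefficients
    sigma = residual.std()                # the only texture parameter sent
    return structural, sigma

def decode_texture_band(structural, sigma, rng=None):
    # Re-synthesize the texture by sampling the noise model where no
    # structural coefficient was transmitted.
    rng = rng or np.random.default_rng(0)
    noise = rng.normal(0.0, sigma, structural.shape)
    return np.where(structural != 0, structural, noise)
```

The decoded texture is statistically similar rather than identical to the original, which is exactly why this only works for stochastic texture that human observers judge by its statistics, not its exact sample values.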


international conference on image processing | 2005

Opening the Laplacian pyramid for video coding

Diego Santa-Cruz; Julien Reichel; Francesco Ziliani

Laplacian pyramids are used in the framework of scalable video coding to efficiently provide spatial scalability. In the literature, closed-loop schemes are adopted to implement such codecs. The closed loop introduces complexity and optimization drawbacks. In this paper we investigate mechanisms to provide an open-loop version of the Laplacian pyramid. We call our approach the Laplacian pyramid with update (LPU). In this scheme, different filter banks can be used: we propose an intuitive framework to design filters that provide efficient video coding. The interesting result of this work is that the LPU enables coding performance at least equivalent to that reported for closed-loop Laplacian pyramids, with advantages in terms of complexity, rate-distortion optimization and independence of the qualities of the different resolutions.
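The basic open-loop Laplacian pyramid structure can be sketched in a few lines. This uses simple 2x2-average downsampling and nearest-neighbour upsampling purely for illustration; the paper's contribution is precisely the design of better filters (the LPU), which this sketch does not implement.

```python
import numpy as np

# Minimal open-loop Laplacian pyramid: each level stores the detail
# (residual) between the image and an upsampled coarse version of it.
# Image sides are assumed divisible by 2**levels.

def build_pyramid(img, levels):
    details = []
    for _ in range(levels):
        h, w = img.shape
        coarse = img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
        up = coarse.repeat(2, axis=0).repeat(2, axis=1)
        details.append(img - up)          # Laplacian detail band
        img = coarse
    return img, details

def reconstruct(coarse, details):
    # Synthesis walks the pyramid back up, adding each detail band.
    for detail in reversed(details):
        coarse = coarse.repeat(2, axis=0).repeat(2, axis=1) + detail
    return coarse
```

Reconstruction is exact by construction (each detail is defined as the residual of the upsampled coarse band), which is why the pyramid can be operated open loop at all; the coding question is how efficiently the detail bands compress.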


international conference on image processing | 1996

Image quality prediction for bitrate allocation

Pascal Fleury; Julien Reichel; Touradj Ebrahimi

In image coding, the choice of a good image coding algorithm is very dependent on the image content. Based on this fact, dynamic coding algorithms have been designed. They try to find an optimal coding scheme for each image segment. They rely on an exhaustive search of the best coding algorithm. Evaluation of all algorithms is computationally very intensive and strongly limits the number of considered algorithms for a given application. Therefore, current standards rely on a single coding algorithm. This paper investigates a way to predict the coding quality from the image content. This prediction is based on a neural network. The coding quality is computed from image region features. Those features are easy and fast to compute, and are common to the whole set of considered coding algorithms. Therefore, the choice of the best algorithm can be based on those predicted coding qualities, and does not require the computation of all coding algorithms. The system is also fast enough to be used for dynamic bitrate allocation, and a simple algorithm to do this is proposed.
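The prediction idea can be sketched with plain least squares standing in for the paper's neural network, and with mean and variance as hypothetical stand-ins for the region features; both substitutions are assumptions made for this sketch.

```python
import numpy as np

# Learn an affine map from cheap per-region features to an observed
# coding-quality score, then use it to rank algorithms without actually
# running every encoder on every region.

def fit_quality_predictor(features, quality):
    # features: (n_regions, n_features); quality: (n_regions,)
    A = np.hstack([features, np.ones((features.shape[0], 1))])
    w, *_ = np.linalg.lstsq(A, quality, rcond=None)
    return w

def predict_quality(w, features):
    A = np.hstack([features, np.ones((features.shape[0], 1))])
    return A @ w
```

One predictor would be fitted per candidate coding algorithm; at allocation time only the features are computed once per region, and the predicted qualities decide which encoder to run.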


international conference on image processing | 2002

Effective integration of object tracking in a video coding scheme for multisensor surveillance systems

Francesco Ziliani; Julien Reichel

This paper deals with the interaction and integration of two important elements of most video surveillance systems: object tracking and video coding. Both elements are open research topics and are in general studied separately. In the domain of video surveillance, the convergence of the two subjects can be extremely important to build viable applications in a number of emerging security scenarios. In this context we propose a spatio-temporal filter based on object tracking that improves coding performance and post-coding analysis. The filter is an alternative to complex object-based video codecs in many video surveillance scenarios. It increases the spatio-temporal redundancy of the input signal, discarding noise and unwanted moving objects. We demonstrate that the compression performance of subsequent codecs is increased. Moreover, the decoded stream implicitly represents the regions of interest detected in the original signal.
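The core of such a tracking-driven pre-filter can be sketched in one step. This is a hypothetical minimal form, not the paper's filter: the mask is assumed to come from an external tracker, and the background estimate from any background-modeling stage.

```python
import numpy as np

# Outside the tracked regions of interest, replace pixels by a noise-free
# background estimate. This raises the spatio-temporal redundancy of the
# signal fed to a standard codec, and the decoded stream shows the ROIs
# on a static background.

def roi_prefilter(frame, background, mask):
    # mask: boolean array, True inside tracked objects.
    return np.where(mask, frame, background)
```

Because everything outside the mask is identical from frame to frame, a conventional motion-compensated codec spends almost no bits there, which is the redundancy-increasing effect the abstract describes.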


electronic imaging | 1999

Compression of color images with wavelets considering the HVS

Marcus J. Nadenau; Julien Reichel

In this paper we present a new wavelet-based coding scheme for the compression of color images at compression ratios up to 100:1. It is originally based on the LZC algorithm of Taubman. The main point of discussion in this paper is the color space used and the combination of a coding scheme with a model of human color vision. We describe two approaches: one is based on the pattern-color separable opponent space described by Poirson-Wandell; the other is based on the YCbCr space that is often used for compression. In this article we show the results of some psychovisual experiments we performed to refine the model of the opponent space concerning its color contrast sensitivity function. These are necessary to use it for image compression. They consist of color-matching experiments performed on a calibrated computer display. We discuss this particular opponent space concerning its fidelity of prediction for human perception and its characteristics in terms of compressibility. Finally, we compare the quality of the coded images of our approach to standard JPEG, DCTune 2.0 and the SPIHT coding scheme. We demonstrate that our coder outperforms these three coders in terms of visual quality.
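For reference, the YCbCr space mentioned in the abstract has a standard BT.601 form (the paper's exact transform and the opponent space may differ): one luma and two chroma channels, concentrating perceptually important detail in Y.

```python
# Standard BT.601 RGB -> YCbCr conversion for values in [0, 1].
# Cb and Cr are zero for any grey input, so chroma channels carry only
# color information and can be compressed more aggressively.

def rgb_to_ycbcr(r, g, b):
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 0.5 * (b - y) / (1.0 - 0.114)
    cr = 0.5 * (r - y) / (1.0 - 0.299)
    return y, cb, cr
```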


visual communications and image processing | 2003

Comparison of texture coding algorithm in a unified motion prediction/compensation video compression algorithm

Julien Reichel; Francesco Ziliani

Efficient video compression is based on three principles: reduction of the temporal, spatial and statistical redundancy present in the video signal. Most video compression algorithms (MPEG, H.26x, ...) use the same principle to reduce the spatial redundancy, i.e. an 8x8 DCT transform. However, other transforms are capable of similar results, for instance the integer 4x4 DCT and the wavelet transforms. This article compares several transforms within the same global compression scheme, i.e. the same motion estimation and compensation strategy and the same entropy coding. Moreover, the tests are conducted on sequences of different natures, such as sport, video surveillance and movies. This allows a global performance comparison of those transforms in many different scenarios.
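The integer 4x4 DCT mentioned above is shown here in its H.264-style form, as an illustration; the exact variant compared in the paper may differ. The matrix has small integer entries, so the forward transform is exact in integer arithmetic, with scaling folded into quantization in a real codec.

```python
import numpy as np

# H.264-style integer 4x4 transform matrix (a scaled DCT approximation).
C = np.array([[1,  1,  1,  1],
              [2,  1, -1, -2],
              [1, -1, -1,  1],
              [1, -2,  2, -1]])

def forward_4x4(block):
    # Separable transform: rows then columns, all in integer arithmetic.
    return C @ block @ C.T

def inverse_4x4(coeffs):
    # Exact mathematical inverse (a codec would instead use the standard's
    # integer inverse transform plus scaling tables).
    Cinv = np.linalg.inv(C.astype(float))
    return Cinv @ coeffs @ Cinv.T
```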


international conference on multimedia and expo | 2000

How to measure arithmetic complexity of compression algorithms: a simple solution

Julien Reichel; Marcus J. Nadenau

Image compression techniques appear to have matured during the past few years. Differences between the compression performance of different algorithms are very small. The key differences are now features such as embedded coding, regions of interest coding, bitstream manipulation or error resilience. However, there is one major difference present but only rarely discussed: algorithmic complexity. It can correspond to the number of arithmetic operations, memory demands and bandwidth, or simply the difficulty of implementation. The performance of image compression algorithms is generally presented in terms of PSNR relative to the possible bitrates. It is interesting to consider a similar relationship in terms of complexity. Unfortunately the term complexity itself is not well defined. In this paper a methodology to measure arithmetic complexity (and eventually other types of complexity) of a complete compression algorithm is presented. The model is then applied to the ISO standard JPEG encoder.
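One simple way to measure arithmetic complexity in the spirit of the abstract is to instrument the operands themselves; this is a sketch of that idea, not the paper's methodology.

```python
# Wrap operands in a counting type so that every addition and
# multiplication executed by an algorithm is tallied automatically,
# giving an operation count without modifying the algorithm's code.

class Counted:
    adds = 0
    mults = 0

    def __init__(self, v):
        self.v = v

    def __add__(self, other):
        Counted.adds += 1
        return Counted(self.v + other.v)

    def __mul__(self, other):
        Counted.mults += 1
        return Counted(self.v * other.v)

def dot(xs, ys):
    # Any numeric kernel (here a dot product) runs unchanged on Counted
    # values and leaves its operation counts in the class counters.
    acc = Counted(0)
    for x, y in zip(xs, ys):
        acc = acc + x * y
    return acc
```

Running an encoder's inner loops on such wrapped values yields the operations-per-pixel figures that can then be plotted against bitrate, analogously to a PSNR-rate curve.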

Collaboration


Dive into Julien Reichel's collaboration.

Top Co-Authors

Marcus J. Nadenau — École Polytechnique Fédérale de Lausanne
Murat Kunt — École Polytechnique Fédérale de Lausanne
S. Valaeys — École Polytechnique Fédérale de Lausanne
Diego Santa-Cruz — École Polytechnique Fédérale de Lausanne
Michel Bierlaire — École Polytechnique Fédérale de Lausanne
Oscar Divorra Escoda — École Polytechnique Fédérale de Lausanne
Pascal Fleury — École Polytechnique Fédérale de Lausanne
Pierre Vandergheynst — École Polytechnique Fédérale de Lausanne