
Publication


Featured research published by Rene Klein Gunnewiek.


international conference on image processing | 2005

The role of the virtual channel in distributed source coding of video

Ronald P. Westerlaken; Rene Klein Gunnewiek; Reginald L. Lagendijk

In distributed video source coding, side information at the decoder is generated as a temporal prediction based on previous frames. This creates a virtual dependency channel between the source video at the encoder and the side information at the decoder. In recent years, distributed source coders were introduced with sophisticated error correction codes, like turbo codes and LDPC codes. Although these codes perform well on noisy network communication channels, it is far from obvious that they can handle the non-stationary noise in the dependency channel as encountered in distributed video coders. In this paper we study the consequences of inaccurate modeling of the dependency channel on turbo and LDPC coding and show that the performance depends greatly on the choice of the probabilistic model for the dependency channel. The results show that LDPC codes are less sensitive to inaccuracies in the dependency channel models.
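The model sensitivity described above can be illustrated with a minimal sketch (not the authors' implementation): a Laplacian model of the dependency-channel residual is turned into a bit-flip probability and then into the log-likelihood ratios an LDPC or turbo decoder consumes. The rate parameter `alpha`, the quantizer `step`, and the helper names are illustrative assumptions.

```python
import math

def laplacian_crossover(alpha, step):
    # Tail mass of a Laplacian residual (rate alpha) beyond half a
    # quantizer step: a crude proxy for the bit-flip probability of
    # the virtual dependency channel.
    return math.exp(-alpha * step / 2.0)

def llr(p_flip):
    # Soft decoder input: log P(bit unchanged) / P(bit flipped).
    p = min(max(p_flip, 1e-12), 1.0 - 1e-12)
    return math.log((1.0 - p) / p)

# A mismatched model (alpha_model != alpha_true) produces over- or
# under-confident LLRs, which is what hurts turbo/LDPC decoding.
p_true = laplacian_crossover(alpha=0.8, step=4.0)
p_model = laplacian_crossover(alpha=2.0, step=4.0)  # model assumes less noise
```

Feeding `llr(p_model)` instead of `llr(p_true)` to the decoder overstates the reliability of the side information; because the dependency-channel noise is non-stationary, any single global `alpha` is mismatched somewhere in the frame.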


international conference on image processing | 2007

Analyzing Symbol and Bit Plane-Based LDPC in Distributed Video Coding

Ronald P. Westerlaken; Stefan Borchert; Rene Klein Gunnewiek; Reginald L. Lagendijk

Many distributed video coders are implemented using sophisticated error correction codes that use soft information (conditional probabilities) as a priori knowledge. This a priori information models the dependency behavior between the input data and side-information. In this paper we analyze both a symbol and bit plane-based approach using LDPC codes. We show both theoretically and experimentally that a bit plane-based encoder has the same performance as a symbol-based coder, if an appropriate dependency model is chosen. We argue that due to a significant complexity reduction a bit plane-based coder is preferable over a symbol-based approach.
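A small sketch of the bit-plane decomposition a plane-based coder operates on (helper names are illustrative, not from the paper): each plane would be protected by its own LDPC code, with the dependency model for a plane conditioned on already-decoded, more significant planes.

```python
def to_bit_planes(symbols, n_bits):
    # Split non-negative quantization symbols into bit planes,
    # most significant plane first.
    planes = []
    for b in range(n_bits - 1, -1, -1):
        planes.append([(s >> b) & 1 for s in symbols])
    return planes

def from_bit_planes(planes):
    # Reassemble symbols by shifting in one plane at a time.
    symbols = [0] * len(planes[0])
    for plane in planes:
        symbols = [(s << 1) | bit for s, bit in zip(symbols, plane)]
    return symbols

planes = to_bit_planes([5, 0, 7, 2], 3)   # MSB plane: [1, 0, 1, 0]
```

The complexity argument in the abstract follows from this structure: binary codes over short planes replace one code over a larger symbol alphabet.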


international conference on image processing | 2007

Enabling Introduction of Stereoscopic (3D) Video: Formats and Compression Standards

Wilhelmus Hendrikus Alfonsus Bruls; Chris Varekamp; Rene Klein Gunnewiek; Bart Gerard Bernard Barenbrug; Arnaud Bourge

After the introduction of HDTV, the next expected milestone is stereoscopic (3D) TV. This paper gives a summary of the new MPEG-C part 3 standard, capable of compressing the 2D+Z format, and shows how it can be used to serve the first generation of 3DTVs. Furthermore, it gives directions on how this standard could be extended to also serve the generations beyond.
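The 2D+Z format described above can be sketched with a toy depth-image-based renderer for one scanline (an assumption-laden simplification, not the standard's rendering algorithm): nearer pixels shift further toward the virtual viewpoint, and revealed background becomes a hole that a real renderer would inpaint or fill from an occlusion layer.

```python
def render_view(row, depth, max_disp):
    # Warp one scanline of a 2D+Z frame to a virtual viewpoint.
    # depth values are 0 (far) .. 255 (near); larger depth shifts
    # further. Conflicts resolve nearest-wins; uncovered positions
    # stay None (disocclusion holes).
    out = [None] * len(row)
    zbuf = [-1] * len(row)
    for x, (v, z) in enumerate(zip(row, depth)):
        disp = round(max_disp * z / 255.0)
        tx = x + disp
        if 0 <= tx < len(row) and z > zbuf[tx]:
            out[tx] = v
            zbuf[tx] = z
    return out

# Foreground (pixels 30, 40) shifts by one; a hole opens at index 2.
warped = render_view([10, 20, 30, 40], [0, 0, 255, 255], max_disp=1)
```

The flexibility argument in the abstract corresponds to the `max_disp` knob: the same 2D+Z stream can be rendered with a depth effect matched to the display size and viewing distance.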


International Journal of Cardiac Imaging | 1995

Data compression of x-ray cardio-angiographic image series.

Marcel Breeuwer; Richard Heusdens; Rene Klein Gunnewiek; Paul Zwart; Hein P. A. Haas

Medical x-ray images are increasingly stored and transmitted in a digital format. To reduce the required storage space and transmission bandwidth, data compression can be applied. In this paper we describe a new method for data compression of cardio-angiographic x-ray image series. The method is based on so-called overlapped-transform coding. A comparison with the well-known block-based transform-coding methods JPEG and MPEG is presented. We found that overlapped-transform coding does not introduce any blocking artefacts, in contrast to block-based transform coding, which introduces clearly visible blocking artefacts at compression ratios above 8. Clinical evaluations of the new method have shown that the image quality obtained at a compression ratio of 12 is adequate for diagnostic applications.
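A minimal sketch of why overlapped transforms avoid blocking artefacts (the per-frame transform itself is omitted, and this is the generic overlap-add principle, not the paper's specific coder): frames overlap by 50% under a sine window satisfying the Princen-Bradley condition, so neighbouring frames cross-fade smoothly instead of meeting at a hard block boundary, yet analysis followed by synthesis is still perfectly reconstructing.

```python
import math

def analysis_frames(signal, n):
    # 50%-overlapping frames of length 2n with a sine window;
    # the window satisfies w[i]**2 + w[i+n]**2 == 1.
    w = [math.sin(math.pi * (i + 0.5) / (2 * n)) for i in range(2 * n)]
    frames = []
    for start in range(-n, len(signal), n):
        frame = []
        for i in range(2 * n):
            t = start + i
            frame.append((signal[t] if 0 <= t < len(signal) else 0.0) * w[i])
        frames.append((start, frame))
    return frames, w

def synthesis(frames, w, length):
    # Window again and overlap-add: each sample receives
    # w[i]**2 + w[i+n]**2 = 1 times the input, i.e. perfect
    # reconstruction without block boundaries.
    out = [0.0] * length
    for start, frame in frames:
        for i, v in enumerate(frame):
            t = start + i
            if 0 <= t < length:
                out[t] += v * w[i]
    return out

sig = [float(i % 7) for i in range(32)]
frames, w = analysis_frames(sig, 4)
rec = synthesis(frames, w, 32)  # matches sig up to rounding
```

In an actual coder a transform and quantizer would act on each windowed frame; quantization errors are then spread across the overlap rather than concentrated at block edges.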


international conference on image processing | 2006

Dependency Channel Modeling for a LDPC-Based Wyner-Ziv Video Compression Scheme

Ronald P. Westerlaken; Stefan Borchert; Rene Klein Gunnewiek; Reginald L. Lagendijk

Research in distributed video coding for low complexity encoding has shown that without knowledge of the correlation between source and side information (i.e. the behavior of the dependency channel), the performance is substantially below that of well-known state-of-the-art video coders. In a practical system the decoder needs to estimate the statistics of this dependency channel. In this paper we investigate the relation between the compression ratio and the sensitivity of the estimated channel model parameter at the decoder side. We observe that this is a hard task, but not an unrealistic one. We show that the tolerable parameter range depends strongly on the compression ratio and the (actual) statistics of the dependency channel.


Proceedings of SPIE | 2010

Improving depth maps with limited user input

Patrick Vandewalle; Rene Klein Gunnewiek; Chris Varekamp

A vastly growing number of productions from the entertainment industry are aiming at 3D movie theaters. These productions use a two-view format, primarily intended for eye-wear assisted viewing in a well-defined environment. To get this 3D content into the home environment, where a large variety of 3D viewing conditions exists (e.g. different display sizes, display types, viewing distances), we need a flexible 3D format that can adjust the depth effect. This can be provided by the image plus depth format, in which a video frame is enriched with depth information for all pixels in the video frame. This format can be extended with additional layers, such as an occlusion layer or a transparency layer. The occlusion layer contains information on the data that is behind objects, and is also referred to as occluded video. The transparency layer, on the other hand, contains information on the opacity of the foreground layer. This allows rendering of semi-transparencies such as haze, smoke, windows, etc., as well as transitions from foreground to background. These additional layers are only beneficial if the quality of the depth information is high. High quality depth information can currently only be achieved with user assistance. In this paper, we discuss an interactive method for depth map enhancement that allows adjustments during the propagation over time. Furthermore, we elaborate on the automatic generation of the transparency layer, using the depth maps generated with an interactive depth map generation tool.
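The role of the transparency layer can be sketched as per-pixel alpha blending (a generic compositing sketch, not the paper's renderer): the foreground layer is mixed with data behind it, which in the layered format would come from the occlusion layer.

```python
def composite(foreground, transparency, background):
    # Blend foreground over background per pixel using the
    # transparency (opacity) layer, values in 0..1. This is what
    # lets a renderer show haze, smoke, or windows, and soft
    # foreground-to-background transitions at object edges.
    return [a * f + (1.0 - a) * b
            for f, a, b in zip(foreground, transparency, background)]

# Fully opaque pixel keeps the foreground; a 25%-opaque pixel lets
# most of the (occluded) background show through.
blended = composite([1.0, 1.0], [1.0, 0.25], [0.0, 0.0])
```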


advanced concepts for intelligent vision systems | 2008

Scene Reconstruction Using MRF Optimization with Image Content Adaptive Energy Functions

Ping Li; Rene Klein Gunnewiek

Multi-view scene reconstruction from multiple uncalibrated images can be solved by two stages of processing: first, a sparse reconstruction using Structure From Motion (SFM), and second, a surface reconstruction using optimization of a Markov random field (MRF). This paper focuses on the second step, assuming that a set of sparse feature points has been reconstructed and the cameras have been calibrated by SFM. The multi-view surface reconstruction is formulated as an image-based multi-labeling problem solved using MRF optimization via graph cut. First, we construct a 2D triangular mesh on the reference image, based on the image segmentation results provided by an existing segmentation process. By doing this, we expect that each triangle in the mesh is well aligned with the object boundaries, and a minimum number of triangles is generated to represent the 3D surface. Second, various objective and heuristic depth cues, such as the slanting cue, are combined to define the local penalty and interaction energies. Third, these local energies are adapted to the local image content, based on the results from some simple content analysis techniques. The experimental results show that the proposed method preserves depth discontinuities well because of the image content adaptive local energies.
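The energy that graph cut would minimize can be written down in a few lines (a generic MRF sketch under assumed names, not the paper's exact energy terms): a per-site data penalty plus a pairwise interaction whose weight is adapted to image content, so that a label change across a segment boundary is cheap and depth discontinuities are preserved.

```python
def mrf_energy(labels, data_cost, edges, edge_weight):
    # Total MRF energy for a depth labeling:
    #   - data_cost[i][l]: penalty for assigning label l to site i
    #     (in the paper's setting, derived from depth cues)
    #   - Potts interaction: pay edge_weight[e] when the two sites
    #     of edge e take different labels; content adaptation means
    #     this weight is lowered across image-segment boundaries.
    e = sum(data_cost[i][labels[i]] for i in range(len(labels)))
    for (i, j), w in zip(edges, edge_weight):
        if labels[i] != labels[j]:
            e += w
    return e

# Two sites, one edge: agreeing labels cost nothing here, a
# disagreement adds the (content-adaptive) interaction weight.
e_same = mrf_energy([0, 0], [[0, 5], [0, 5]], [(0, 1)], [2.0])
e_diff = mrf_energy([0, 1], [[0, 5], [0, 5]], [(0, 1)], [2.0])
```

Graph cut then searches over `labels` for the minimum of exactly this kind of sum; the quality of the result hinges on how well the interaction weights track true object boundaries.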


international conference on acoustics, speech, and signal processing | 2007

Contrast-Invariant Feature Point Correspondence

Ping Li; Dirk Farin; Rene Klein Gunnewiek

Most existing feature-matching methods utilize texture correlation for feature matching, which is usually sensitive to contrast changes. This paper proposes a new feature-point matching algorithm that does not rely on the image texture. Instead, only the smoothness assumption, which states that the displacement field in a neighborhood is coherent (smooth), is used. In the proposed method, the collected correspondences of a group of feature points within a neighborhood are efficiently determined such that the coherence measure of the displacement field in the neighborhood is maximized. The experimental results show that the proposed method is invariant to contrast changes and significantly outperforms the conventional block-matching technique.
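The coherence measure can be sketched as follows (a minimal illustration of the smoothness idea, not the paper's exact formulation): score a neighbourhood's displacement field by how little its vectors deviate from their mean, and pick, for a feature, the candidate match that keeps the field most coherent. No pixel values enter the score, which is what makes it contrast-invariant.

```python
def coherence(displacements):
    # Smoothness of a neighbourhood displacement field: negative
    # mean squared deviation from the mean motion vector
    # (higher = more coherent). Uses only geometry, no texture.
    n = len(displacements)
    mx = sum(d[0] for d in displacements) / n
    my = sum(d[1] for d in displacements) / n
    return -sum((dx - mx) ** 2 + (dy - my) ** 2
                for dx, dy in displacements) / n

def best_match(candidates, neighbour_motions):
    # Choose the candidate displacement that maximizes the
    # coherence of the local field it would join.
    return max(candidates, key=lambda d: coherence(neighbour_motions + [d]))

# Neighbours all move by (1, 0); the outlier candidate (5, 5) loses.
chosen = best_match([(5, 5), (1, 0)], [(1, 0), (1, 0), (1, 0)])
```

The actual algorithm determines the correspondences of a whole group of neighbouring features jointly rather than one at a time, but the objective being maximized is this kind of field smoothness.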


international conference on multimedia and expo | 2006

Reuse of Motion Processing for Camera Stabilization and Video Coding

Bao Lei; Rene Klein Gunnewiek

The low bit rate of existing video encoders relies heavily on the accuracy of estimating the actual motion in the input video sequence. In this paper, we propose a video stabilization and encoding (ViSE) system that achieves a higher coding efficiency through a motion processing stage preceding the compression, in which the stabilization part compensates for vibrating camera motion. The improved motion prediction is obtained by differentiating between the temporally coherent motion and a noisier motion component orthogonal to it. The system compensates for the latter, undesirable motion, so that it is eliminated prior to video encoding. To reduce the computational complexity of integrating a digital stabilization algorithm with video encoding, we propose a system that reuses the motion vectors already evaluated in the stabilization stage for the compression. Compared to H.264, our system shows a 14% reduction in bit rate while obtaining an increase of about 0.5 dB in SNR.
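The split between coherent motion and vibration can be sketched with a simple low-pass filter on the cumulative global-motion path (an illustrative decomposition under assumed names, not the ViSE system's actual filter): the smooth part is the intentional camera motion the encoder's motion vectors can reuse, and the residual is the jitter to compensate before encoding.

```python
def smooth_path(path, radius):
    # Moving-average low-pass of the cumulative global-motion path.
    # The smooth output approximates intentional (coherent) camera
    # motion; path[i] - out[i] is the vibration to cancel.
    out = []
    for i in range(len(path)):
        lo, hi = max(0, i - radius), min(len(path), i + radius + 1)
        out.append(sum(path[lo:hi]) / (hi - lo))
    return out

path = [0, 1, 3, 2, 4, 5, 7, 6, 8]         # a jittery pan
coherent = smooth_path(path, radius=2)      # steadily increasing pan
jitter = [p - c for p, c in zip(path, coherent)]
```

Stabilization warps each frame by `-jitter[i]`; since the global motion was already estimated here, the encoder can seed its motion search with the coherent component instead of re-estimating from scratch.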


asian conference on computer vision | 2007

Texture-independent feature-point matching (TIFM) from motion coherence

Ping Li; Dirk Farin; Rene Klein Gunnewiek

This paper proposes a novel and efficient feature-point matching algorithm for finding point correspondences between two uncalibrated images. Its striking feature is that it is based only on the motion coherence/smoothness constraint, which states that neighboring features in an image tend to move coherently. In the algorithm, the correspondences of feature points in a neighborhood are collectively determined such that the smoothness of the local motion field is maximized. The smoothness constraint does not rely on any image feature and is self-contained in the motion field. It is robust to the camera motion, scene structure, illumination, etc. This makes the proposed algorithm texture-independent and robust. Experimental results show that the proposed method outperforms existing methods for feature-point tracking in image sequences.

Collaboration


Dive into Rene Klein Gunnewiek's collaboration.

Top Co-Authors

Ping Li, Eindhoven University of Technology
Dirk Farin, Eindhoven University of Technology
Marcel Breeuwer, Eindhoven University of Technology
Reginald L. Lagendijk, Delft University of Technology
Ronald P. Westerlaken, Delft University of Technology
Stefan Borchert, Delft University of Technology