Chris Varekamp
Philips
Publications
Featured research published by Chris Varekamp.
International Conference on Image Processing | 2007
Wilhelmus Hendrikus Alfonsus Bruls; Chris Varekamp; Rene Klein Gunnewiek; Bart Gerard Bernard Barenbrug; Arnaud Bourge
After the introduction of HDTV, the next expected milestone is stereoscopic (3D) TV. This paper gives a summary of the new MPEG-C part 3 standard, capable of compressing the 2D+Z format, and shows how it can be used to serve the first generation of 3DTVs. Furthermore, it gives directions on how this standard could be extended to also serve the generations beyond.
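A minimal sketch of the 2D+Z idea, assuming an inverse-depth quantization: the video frame is accompanied by a single 8-bit depth plane per pixel. The near/far clipping range and the mapping below are illustrative assumptions, not values prescribed by the MPEG-C part 3 standard.

```python
import numpy as np

def depth_to_z8(depth_m, z_near=0.5, z_far=10.0):
    """Quantize metric depth (meters) to the 8-bit 'Z' plane of a
    2D-plus-depth frame. Assumed convention: larger values = nearer.
    The z_near/z_far range and the inverse-depth mapping are
    illustrative assumptions, not taken from MPEG-C part 3."""
    d = np.clip(depth_m, z_near, z_far)
    # Map inverse depth linearly to 0..255 so nearby surfaces,
    # where depth errors are most visible, keep the most precision.
    inv = (1.0 / d - 1.0 / z_far) / (1.0 / z_near - 1.0 / z_far)
    return np.round(inv * 255.0).astype(np.uint8)
```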
IEEE/OSA Journal of Display Technology | 2012
Marc Lambooij; Karel Hinnen; Chris Varekamp
Optimizing the performance of autostereoscopic lenticular displays can be achieved by altering specific interdependent design parameters, e.g., width and number of views, screen disparity, and lenticular slant, resulting in different crosstalk distributions and amounts of banding and, consequently, different percepts. To allow evaluation of an autostereoscopic lenticular display before a costly physical sample is produced, an emulator was built. This emulator consisted of a goggle-based striped polarized display, a camera-based head tracker, and software for generating L/R stereo pairs in real time as a function of head location. This paper addresses the development of the emulator, its validation with respect to an existing physical prototype, and the perceptual evaluation of three emulated fundamental design extremes: 1) a 9-view low-crosstalk system; 2) a 9-view intermediate-crosstalk system; and 3) a 17-view high-crosstalk system.
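A minimal sketch of the emulator's core loop, assuming the display is modeled as an N-view cone: the tracked head position selects which rendered views each eye receives, and a fixed fraction of the neighbouring view is mixed in to mimic crosstalk. The view pitch, eye separation, and linear leak model are illustrative assumptions, not the emulator's actual parameters.

```python
import numpy as np

def views_for_eyes(views, head_x_mm, eye_sep_mm=65.0,
                   view_pitch_mm=32.5, crosstalk=0.05):
    """Return the images presented to the left and right eye of a
    tracked viewer. `views` is a list of N rendered views (NumPy
    arrays) spanning one viewing cone; all numeric parameters are
    illustrative assumptions."""
    n = len(views)

    def view_at(x_mm):
        idx = int(round(x_mm / view_pitch_mm)) % n   # wrap around the cone
        nbr = (idx + 1) % n
        # Leak a fixed fraction of the neighbouring view to mimic crosstalk.
        return (1.0 - crosstalk) * views[idx] + crosstalk * views[nbr]

    left = view_at(head_x_mm - 0.5 * eye_sep_mm)
    right = view_at(head_x_mm + 0.5 * eye_sep_mm)
    return left, right
```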
Visual Communications and Image Processing | 2007
André Redert; Robert-Paul Berretty; Chris Varekamp; Bart van Geest; Jan Bruijns; Ralph Braspenning; Qingqing Wei
Philips provides autostereoscopic three-dimensional display systems that will bring the next leap in visual experience, adding true depth to video systems. We identified three challenges specific to 3D image processing: 1) bandwidth and complexity of 3D images, 2) conversion of 2D to 3D content, and 3) object-based image/depth processing. We discuss these challenges and our solutions via several examples. In conclusion, the solutions have enabled the market introduction of several professional 3D products, and rapid progress is being made towards consumer 3DTV.
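To make the rendering side of these challenges concrete, here is a sketch of the depth-image-based rendering step such displays rely on: pixels are forward-warped horizontally in proportion to depth to synthesize a neighbouring view. The disparity scale, the larger-is-nearer depth convention, and the simple z-buffered warping are assumptions for illustration; disocclusion holes are left unfilled.

```python
import numpy as np

def render_view(image, depth, baseline_px=8.0):
    """Synthesize a horizontally shifted view from image + depth.
    `depth` is an 8-bit map where larger values are assumed nearer;
    `baseline_px` (maximum disparity in pixels) is an illustrative
    assumption. Holes from disocclusions remain zero."""
    h, w = depth.shape
    out = np.zeros_like(image)
    zbuf = np.full((h, w), -np.inf)
    disparity = baseline_px * (depth.astype(np.float32) / 255.0)
    for y in range(h):
        for x in range(w):
            xs = int(round(x + disparity[y, x]))
            if 0 <= xs < w and depth[y, x] > zbuf[y, xs]:
                zbuf[y, xs] = depth[y, x]   # nearer pixels overwrite farther ones
                out[y, xs] = image[y, x]
    return out
```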
Proceedings of SPIE | 2013
L.P.J. Vosters; Chris Varekamp; G. de Haan
High-quality 3D content generation requires high-quality depth maps. In practice, depth maps generated by stereo matching, depth-sensing cameras, or decoders have a low resolution and suffer from unreliable estimates and noise. Therefore, depth post-processing is necessary. In this paper we benchmark state-of-the-art filter-based depth upsampling methods on depth accuracy and interpolation quality by conducting a parameter space search to find the optimum set of parameters for various upscale factors and noise levels. Additionally, we analyze each method's computational complexity with big O notation, and we measure the runtime of the GPU implementation that we built for each method.
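For orientation, a sketch in the spirit of the benchmarked filter-based family (joint-bilateral-style weighting): each high-resolution depth value is a weighted average of low-resolution depth samples, with weights combining spatial closeness in the low-resolution grid and photometric similarity in the high-resolution guide image. The window radius and sigmas are illustrative, not the optima from the paper's parameter search.

```python
import numpy as np

def joint_bilateral_upsample(depth_lo, guide, factor, sigma_s=2.0, sigma_r=10.0):
    """Upsample `depth_lo` by `factor` using the high-resolution
    luminance image `guide` (2D array) as edge guidance. Parameter
    values are illustrative assumptions."""
    h, w = guide.shape
    out = np.zeros((h, w), np.float32)
    r = 2  # window radius in the low-resolution grid
    for y in range(h):
        for x in range(w):
            yl, xl = y / factor, x / factor     # position in the low-res grid
            acc = wsum = 0.0
            for j in range(int(yl) - r, int(yl) + r + 1):
                for i in range(int(xl) - r, int(xl) + r + 1):
                    if 0 <= j < depth_lo.shape[0] and 0 <= i < depth_lo.shape[1]:
                        # Spatial weight, measured in low-res units.
                        ws = np.exp(-((j - yl) ** 2 + (i - xl) ** 2) / (2 * sigma_s ** 2))
                        # Range weight from the high-res guide image.
                        gy = min(int(j * factor), h - 1)
                        gx = min(int(i * factor), w - 1)
                        dg = float(guide[y, x]) - float(guide[gy, gx])
                        wr = np.exp(-dg ** 2 / (2 * sigma_r ** 2))
                        acc += ws * wr * depth_lo[j, i]
                        wsum += ws * wr
            out[y, x] = acc / max(wsum, 1e-8)
    return out
```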
Proceedings of SPIE | 2010
Patrick Vandewalle; Rene Klein Gunnewiek; Chris Varekamp
A vastly growing number of productions from the entertainment industry are aiming at 3D movie theaters. These productions use a two-view format, primarily intended for eyewear-assisted viewing in a well-defined environment. To get this 3D content into the home environment, where a large variety of 3D viewing conditions exists (e.g., different display sizes, display types, and viewing distances), we need a flexible 3D format that can adjust the depth effect. This can be provided by the image-plus-depth format, in which a video frame is enriched with depth information for all pixels in the video frame. This format can be extended with additional layers, such as an occlusion layer or a transparency layer. The occlusion layer contains information on the data that is behind objects, and is also referred to as occluded video. The transparency layer, on the other hand, contains information on the opacity of the foreground layer. This allows rendering of semi-transparencies such as haze, smoke, and windows, as well as transitions from foreground to background. These additional layers are only beneficial if the quality of the depth information is high. High-quality depth information can currently only be achieved with user assistance. In this paper, we discuss an interactive method for depth map enhancement that allows adjustments during the propagation over time. Furthermore, we elaborate on the automatic generation of the transparency layer, using the depth maps generated with an interactive depth map generation tool.
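A minimal sketch of how the layers combine at render time, assuming per-pixel opacity in [0, 1]: the transparency layer blends the foreground over the occlusion (background) layer, which is what makes haze, smoke, and window panes renderable. The array conventions below are assumptions for illustration.

```python
import numpy as np

def composite_layers(foreground, occlusion, alpha):
    """Blend the foreground layer over the occlusion layer using the
    transparency layer. `foreground` and `occlusion` are HxWx3 float
    arrays, `alpha` an HxW opacity map in [0, 1] (assumed conventions)."""
    a = alpha[..., None]                      # broadcast over colour channels
    return a * foreground + (1.0 - a) * occlusion
```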
Journal of Real-Time Image Processing | 2015
Luc Vosters; Chris Varekamp; Gerard de Haan
High-quality 3D content generation requires high-quality depth maps. In practice, depth maps generated by stereo matching, depth-sensing cameras, or decoders have low resolution and suffer from unreliable estimates and noise. Therefore, depth enhancement is necessary. Depth enhancement comprises two stages: depth upsampling and temporal post-processing. In this paper, we extend our previous work on depth upsampling in two ways. First, we propose PWAS-MCM, a new depth upsampling method, and we show that it achieves on average the highest depth accuracy compared to other efficient state-of-the-art depth upsampling methods. Next, we benchmark all relevant state-of-the-art filter-based temporal post-processing methods on depth accuracy by conducting a parameter space search to find the optimum set of parameters for various upscale factors and noise levels, and we analyze the temporal post-processing methods qualitatively. Finally, we analyze the computational complexity of each depth upsampling and temporal post-processing method by measuring the throughput and hardware utilization of the GPU implementation that we built for each method.
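For intuition, here is a sketch of one simple style of filter-based temporal post-processing (a generic recursive filter, not PWAS-MCM or any specific benchmarked method): depth is recursively averaged over time where the luminance is static and reset to the current estimate where it changes, so depth is stabilized without smearing across moving objects. The blend factor and change threshold are illustrative assumptions.

```python
import numpy as np

def temporal_depth_filter(depth_t, depth_prev, luma_t, luma_prev,
                          alpha=0.7, change_thresh=12.0):
    """Recursive temporal smoothing of a depth map, gated by frame
    difference. `luma_t`/`luma_prev` are 2D luminance frames; all
    numeric parameters are illustrative assumptions."""
    diff = np.abs(luma_t.astype(np.float32) - luma_prev.astype(np.float32))
    static = diff < change_thresh            # pixels with little image change
    out = depth_t.astype(np.float32).copy()
    # Where the scene is static, lean on the previous (already filtered) depth.
    out[static] = alpha * depth_prev[static] + (1.0 - alpha) * depth_t[static]
    return out
```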
Proceedings of the 7th International Conference on Methods and Techniques in Behavioral Research | 2010
Chris Varekamp; Patrick Vandewalle
We introduce a setup for detailed and unobtrusive mapping of feet in a room. Two cameras arranged in a stereo pair detect when an object touches the ground by analyzing occlusions of a fluorescent tape attached to the baseboard of the room. The disparity between the two cameras allows localization of a person's feet and the calculation of step size and walking speed. People are separated from furniture such as chairs and tables by studying the occlusion duration. We present and discuss the data-association and filtering algorithms, as well as the algorithms needed for presence detection, step characterization, and gaming.
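The localization step reduces to standard stereo triangulation: an occlusion of the tape observed at different horizontal image positions in the two cameras yields a disparity d, from which depth follows as Z = f·B/d. A minimal sketch, with an assumed focal length and camera baseline:

```python
def foot_position_from_disparity(x_left_px, disparity_px,
                                 focal_px=800.0, baseline_m=0.30):
    """Triangulate a foot contact seen as a tape occlusion.
    `focal_px` (focal length in pixels) and `baseline_m` (camera
    separation) are illustrative assumptions."""
    depth_m = focal_px * baseline_m / disparity_px    # Z = f * B / d
    lateral_m = x_left_px * depth_m / focal_px        # X from the pinhole model
    return lateral_m, depth_m
```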
3DTV-Conference: The True Vision - Capture, Transmission and Display of 3D Video | 2009
Chris Varekamp; Patrick Vandewalle; Marc de Putter
We propose an interface for creating a depth map for a 2D picture. The image and depth map can be used for 3D display on an autostereoscopic photo frame. Our new interface does not require the user to draw on the picture or point at an object in the picture. Instead, semantic questions are asked about an indicated position in the picture. This semantic information is then automatically translated into a depth map.
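A hedged sketch of the idea: each semantic answer about the indicated position selects a simple depth model that is expanded into a dense map. The category names and depth profiles below are invented for illustration; they are not the paper's actual question set.

```python
import numpy as np

def depth_from_answer(height, width, y_click, answer):
    """Turn a semantic answer about the clicked row `y_click` into a
    dense 8-bit depth map (larger = nearer). The three hypothetical
    categories and their depth models are illustrative assumptions."""
    rows = np.arange(height, dtype=np.float32)[:, None]
    if answer == "ground":
        d = rows / (height - 1) * 255.0          # ground plane: nearer at the bottom
    elif answer == "sky":
        d = np.zeros((height, 1), np.float32)    # far away everywhere
    else:  # "upright object": constant depth taken at its footprint row
        d = np.full((height, 1), 255.0 * y_click / (height - 1), np.float32)
    return np.repeat(d.astype(np.uint8), width, axis=1)
```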
Visual Communications and Image Processing | 2003
Matthijs Carolus Piek; Ralph Braspenning; Chris Varekamp
For various applications, such as data compression, structure from motion, medical imaging, and video enhancement, there is a need for an algorithm that divides video sequences into independently moving objects. Because our focus is on video enhancement and structure from motion for consumer electronics, we strive for a low-complexity solution.

For still images, several colour-based approaches exist, but these fall short in both speed and segmentation quality. For instance, colour-based watershed algorithms produce a so-called oversegmentation, with many segments covering each single physical object. Other colour segmentation approaches limit the number of segments to reduce this oversegmentation problem, but this often results in inaccurate edges or even missed objects. Most likely, colour is an inherently insufficient cue for real-world object segmentation, because real-world objects can display complex combinations of colours. For video sequences, however, an additional cue is available, namely the motion of objects. When different objects in a scene have different motion, the motion cue alone is often enough to reliably distinguish objects from one another and from the background. However, because efficient motion estimators such as the 3DRS block matcher lack sufficient resolution, the resulting segmentation is not at pixel resolution but at block resolution. Compared to block-based approaches, existing pixel-resolution motion estimators are more sensitive to noise, suffer more from aperture problems, correspond less well to the true motion of objects, or are too computationally expensive.

From its tendency to oversegmentation, it is apparent that colour segmentation is particularly effective near the edges of homogeneously coloured areas. Block-based true motion estimation, on the other hand, is particularly effective in heterogeneous areas, because heterogeneity improves the chance that a block is unique and thus decreases the chance of a wrong position producing a good match. Consequently, a number of methods exist which combine motion and colour segmentation. These methods use colour segmentation as a base for the motion segmentation and estimation, or perform an independent colour segmentation in parallel which is in some way combined with the motion segmentation. The presented method uses both techniques to complement each other, by first segmenting on motion cues and then refining the segmentation with colour. To our knowledge, few methods adopt this approach. One example is [meshrefine]. This method uses an irregular mesh, which hinders its efficient implementation in consumer electronics devices. Furthermore, the method produces a foreground/background segmentation, while our applications call for the segmentation of multiple objects.

NEW METHOD
As mentioned above, we start with motion segmentation and afterwards refine the edges of this segmentation with a pixel-resolution colour segmentation method. There are several reasons for this approach:
+ Motion segmentation does not produce the oversegmentation that colour segmentation methods normally produce, because objects are more likely to have colour discontinuities than motion discontinuities. The colour segmentation then only has to be done at the edges of segments, confining it to a smaller part of the image, in which the colour of an object is more likely to be homogeneous.
+ This approach restricts the computationally expensive pixel-resolution colour segmentation to a subset of the image. Together with the very efficient 3DRS motion estimation algorithm, this helps to reduce the computational complexity.
+ The motion cue alone is often enough to reliably distinguish objects from one another and from the background.
To obtain the motion vector fields, a variant of the 3DRS block-based motion estimator that analyses three input frames was used. The 3DRS motion estimator is known for its ability to estimate motion vectors that closely resemble the true motion.

BLOCK-BASED MOTION SEGMENTATION
As mentioned above, we start with a block-resolution segmentation based on motion vectors. The presented method is inspired by the well-known
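Since the excerpt breaks off before naming the clustering method, here is a generic stand-in for the block-resolution stage: k-means clustering of per-block motion vectors, so that blocks moving consistently share a segment label that the colour stage can later refine to pixel resolution. The cluster count and iteration budget are illustrative assumptions.

```python
import numpy as np

def segment_blocks(motion_field, k=3, iters=10, seed=0):
    """Cluster per-block motion vectors with k-means. `motion_field`
    is an HxWx2 array of (dx, dy) vectors at block resolution; `k`
    and `iters` are illustrative assumptions. Returns an HxW label map."""
    h, w, _ = motion_field.shape
    vecs = motion_field.reshape(-1, 2).astype(np.float32)
    rng = np.random.default_rng(seed)
    centers = vecs[rng.choice(len(vecs), size=k, replace=False)]
    for _ in range(iters):
        # Assign each block to its nearest motion-vector centroid.
        labels = np.argmin(((vecs[:, None] - centers) ** 2).sum(-1), axis=1)
        for c in range(k):
            if np.any(labels == c):
                centers[c] = vecs[labels == c].mean(axis=0)
    return labels.reshape(h, w)
```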
International Conference on 3D Imaging | 2014
Patrick Vandewalle; Chris Varekamp