Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Maarten Dumont is active.

Publication


Featured research published by Maarten Dumont.


international conference on e-business | 2008

Optimized Two-Party Video Chat with Restored Eye Contact Using Graphics Hardware

Maarten Dumont; Sammy Rogmans; Steven Maesen; Philippe Bekaert

We present a practical system prototype to convincingly restore eye contact between two video chat participants, with a minimal amount of constraints. The proposed six-fold camera setup is easily integrated into the monitor frame, and is used to interpolate an image as if its virtual camera captured the image through a transparent screen. The peer user has a large freedom of movement, resulting in system specifications that enable genuine practical usage. Our software framework thereby harnesses the powerful computational resources inside graphics hardware, and maximizes arithmetic intensity to achieve beyond real-time performance of up to 42 frames per second for 800×600 resolution images. Furthermore, an optimal set of fine-tuned parameters is presented that optimizes the end-to-end performance of the application to achieve high subjective visual quality, while still allowing further algorithmic advancement without losing real-time capability.
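The abstract does not spell out the view interpolation itself; as a rough illustration of the underlying idea (synthesizing the image of a virtual camera placed behind the screen from a real camera view plus a per-pixel depth estimate), the NumPy sketch below forward-warps one source view into a virtual view. The pinhole model, function name, and parameters are illustrative assumptions, not the authors' GPU implementation, which additionally blends several of the six cameras and resolves occlusions.

```python
import numpy as np

def reproject_to_virtual_view(image, depth, K_src, K_virt, R, t):
    """Forward-warp a source view into a virtual camera using a depth map.

    image  : (H, W, 3) source colour image
    depth  : (H, W) per-pixel depth in the source camera frame
    K_src  : (3, 3) source camera intrinsics
    K_virt : (3, 3) virtual camera intrinsics
    R, t   : rotation (3, 3) and translation (3,) from source to virtual frame
    """
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T  # 3 x N

    # Back-project source pixels to 3D points in the source camera frame.
    rays = np.linalg.inv(K_src) @ pix
    points = rays * depth.reshape(1, -1)

    # Transform into the virtual camera frame and project.
    points_virt = R @ points + t.reshape(3, 1)
    proj = K_virt @ points_virt
    z = proj[2]
    uv = (proj[:2] / np.maximum(z, 1e-6)).round().astype(int)

    # Splat colours into the virtual view (nearest pixel, no z-buffering here).
    out = np.zeros_like(image)
    valid = (uv[0] >= 0) & (uv[0] < W) & (uv[1] >= 0) & (uv[1] < H) & (z > 0)
    out[uv[1, valid], uv[0, valid]] = image.reshape(-1, 3)[valid]
    return out
```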


international conference on 3d imaging | 2012

An end-to-end system for free viewpoint video for smooth camera transitions

Patrik Goorts; Maarten Dumont; Sammy Rogmans; Philippe Bekaert

In this paper, we present an end-to-end system for free viewpoint video for smooth camera transitions in sport scenes. Our system consists of a network of static computer vision cameras, a storage infrastructure and an interpolation rendering module, connected with a 10 Gigabit Ethernet network. The user of the system requests a viewpath for the virtual camera and the rendering module then generates the images using a depth-aware plane sweep approach. First, the foreground and background are separated and rendered independently. The foreground is rendered using a plane-sweep approach and the obtained depth map is split up into groups of players. Each group is assigned a global depth, which is used in a second plane sweep to restrict the depth range. This reduces artifacts such as extra limbs and ghost players. The algorithm is demonstrated on actual soccer recordings. The system is fully automatic and can work in near real-time, thus quickly providing virtual images of high quality.
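To make the plane sweep concrete: hypothesize a series of fronto-parallel planes in the reference camera, warp the other views onto each plane, and keep, per pixel, the depth whose warped colours agree best. The sketch below shows that cost computation for two colour cameras in plain NumPy/OpenCV; the function name and parameters are assumptions, and the paper's foreground/background separation and second, depth-restricted sweep per player group are not shown.

```python
import cv2
import numpy as np

def plane_sweep_depth(ref_img, other_img, K_ref, K_other, R, t, depths):
    """Minimal two-view plane sweep: hypothesise fronto-parallel planes in the
    reference camera, warp the second view onto each plane and keep the depth
    with the best colour agreement. (R, t) maps reference-camera coordinates
    into the second camera's frame."""
    H_px, W_px = ref_img.shape[:2]
    costs = np.empty((len(depths), H_px, W_px), dtype=np.float32)

    for i, d in enumerate(depths):
        # Homography induced by the plane z = d in the reference frame.
        n = np.array([[0.0, 0.0, 1.0]])
        H_plane = K_other @ (R + (t.reshape(3, 1) @ n) / d) @ np.linalg.inv(K_ref)
        warped = cv2.warpPerspective(other_img, H_plane, (W_px, H_px),
                                     flags=cv2.INTER_LINEAR | cv2.WARP_INVERSE_MAP)
        # Photo-consistency cost: mean absolute colour difference.
        costs[i] = np.abs(ref_img.astype(np.float32)
                          - warped.astype(np.float32)).mean(axis=2)

    best = costs.argmin(axis=0)          # index of the winning plane per pixel
    return np.asarray(depths)[best]      # per-pixel depth estimate
```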


3dtv-conference: the true vision - capture, transmission and display of 3d video | 2009

Migrating real-time depth image-based rendering from traditional to next-gen GPGPU

Sammy Rogmans; Maarten Dumont; Gauthier Lafruit; Philippe Bekaert

This paper focuses on the current revolution in using the GPU for general-purpose computations (GPGPU), and how to maximally exploit its powerful resources. Recently, the advent of next-generation GPGPU replaced the traditional way of exploiting the graphics hardware. We have migrated real-time depth image-based rendering, for use in contemporary 3DTV technology, and noticed, however, that using both GPGPU paradigms together leads to higher performance than non-hybrid implementations. With this paper, we want to encourage other researchers to reconsider before migrating their implementations completely, and to use our practical migration rules to achieve maximum performance with minimal effort.


canadian conference on computer and robot vision | 2007

Extrinsic Recalibration in Camera Networks

Chris Hermans; Maarten Dumont; Philippe Bekaert

This work addresses the practical problem of keeping a camera network calibrated during a recording session. When dealing with real-time applications, a robust calibration of the camera network needs to be assured, without the burden of a full system recalibration at every (un)intended camera displacement. In this paper we present an efficient algorithm to detect when the extrinsic parameters of a camera are no longer valid, and reintegrate the displaced camera into the previously calibrated camera network. When the intrinsic parameters of the cameras are known, the algorithm can also be used to build ad-hoc distributed camera networks, starting from three calibrated cameras. Recalibration is done using pairs of essential matrices, based on image point correspondences. Unlike other approaches, we do not explicitly compute any 3D structure for our calibration purposes.
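As a minimal sketch of the pairwise building block, the snippet below estimates the relative pose of a displaced camera with respect to a still-calibrated one from image point correspondences via the essential matrix, using OpenCV. The translation is recovered only up to scale; the paper resolves this by combining pairs of essential matrices, which is not reproduced here.

```python
import cv2
import numpy as np

def relative_pose_from_matches(pts_a, pts_b, K):
    """Relative pose between a displaced camera (A) and a still-valid camera (B)
    from matched pixel coordinates, via the essential matrix.

    pts_a, pts_b : (N, 2) arrays of matched pixel coordinates
    K            : (3, 3) shared intrinsic matrix (assumed known, as in the paper)
    Returns R, t (t only up to scale) and the RANSAC inlier mask.
    """
    E, inliers = cv2.findEssentialMat(pts_a, pts_b, K,
                                      method=cv2.RANSAC,
                                      prob=0.999, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts_a, pts_b, K)
    return R, t, inliers.ravel().astype(bool)
```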


international conference on signal processing and multimedia applications | 2014

Real-time local stereo matching using edge sensitive adaptive windows

Maarten Dumont; Patrik Goorts; Steven Maesen; Philippe Bekaert; Gauthier Lafruit

This paper presents a novel aggregation window method for stereo matching that combines the disparity hypothesis costs of multiple pixels in a local region more efficiently for increased hypothesis confidence. We propose two adaptive windows per pixel region, one following the horizontal edges in the image, the other the vertical edges. Their combination defines the final aggregation window shape that rigorously follows all object edges, yielding better disparity estimations with at least 0.5 dB gain over similar methods in the literature, especially around occluded areas. Qualitatively, the resulting disparity maps are smooth while respecting sharp object edges. Finally, these shape-adaptive aggregation windows are represented by a single quadruple per pixel, thus supporting an efficient GPU implementation with negligible overhead.
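A hedged sketch of how such edge-sensitive windows can be built and used: per pixel, four arm lengths (the quadruple) are grown while the intensity stays close to the anchor pixel, and a cost slice is then aggregated in two passes, horizontally over the arms and then vertically. The threshold values and the exact growing rule are assumptions for illustration, not the paper's tuned GPU kernel.

```python
import numpy as np

def arm_lengths(gray, tau=15, max_arm=17):
    """Per pixel, grow left/right/up/down arms while the intensity stays within
    tau of the anchor pixel; the four lengths form the per-pixel quadruple."""
    H, W = gray.shape
    arms = np.zeros((H, W, 4), dtype=np.int32)  # left, right, up, down
    for y in range(H):
        for x in range(W):
            c = int(gray[y, x])
            l = 0
            while x - l - 1 >= 0 and abs(int(gray[y, x - l - 1]) - c) <= tau and l < max_arm:
                l += 1
            r = 0
            while x + r + 1 < W and abs(int(gray[y, x + r + 1]) - c) <= tau and r < max_arm:
                r += 1
            u = 0
            while y - u - 1 >= 0 and abs(int(gray[y - u - 1, x]) - c) <= tau and u < max_arm:
                u += 1
            d = 0
            while y + d + 1 < H and abs(int(gray[y + d + 1, x]) - c) <= tau and d < max_arm:
                d += 1
            arms[y, x] = (l, r, u, d)
    return arms

def aggregate(cost, arms):
    """Two-pass aggregation of one disparity-hypothesis cost slice: sum along
    each pixel's horizontal arms, then sum those row sums along the vertical arms."""
    H, W = cost.shape
    horiz = np.zeros_like(cost)
    for y in range(H):
        for x in range(W):
            l, r, _, _ = arms[y, x]
            horiz[y, x] = cost[y, x - l:x + r + 1].sum()
    out = np.zeros_like(cost)
    for y in range(H):
        for x in range(W):
            _, _, u, d = arms[y, x]
            out[y, x] = horiz[y - u:y + d + 1, x].sum()
    return out
```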


international conference on signal processing and multimedia applications | 2014

Self-calibration of large scale camera networks

Patrik Goorts; Steven Maesen; Yunjun Liu; Maarten Dumont; Philippe Bekaert; Gauthier Lafruit

In this paper, we present a method to calibrate large scale camera networks for multi-camera computer vision applications in sport scenes. The calibration process determines precise camera parameters, both within each camera (focal length, principal point, etc.) and between the cameras (their relative position and orientation). To this end, we first extract candidate image correspondences over adjacent cameras, without using any calibration object, relying solely on existing feature matching computer vision algorithms applied to the input video streams. We then propagate these feature matches pairwise over all adjacent cameras using a chained, confidence-based voting mechanism and a selection relying on the general displacement across the images. Experiments show that this removes a large number of outliers before applying existing calibration toolboxes dedicated to small scale camera networks, which would otherwise fail to find the correct camera parameters over large scale camera networks. We successfully validate our method on real soccer scenes.
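A small illustration of the matching front end, assuming SIFT features and Lowe's ratio test (the abstract only says "existing feature matching computer vision algorithms"): matches are extracted pairwise between adjacent cameras and then chained across the network; in the paper, longer chains accumulate confidence votes.

```python
import cv2

def pairwise_matches(img_a, img_b, ratio=0.75):
    """SIFT matches between two adjacent cameras, filtered with Lowe's ratio
    test; returns the keypoints and index pairs into the two keypoint lists."""
    sift = cv2.SIFT_create()
    kp_a, des_a = sift.detectAndCompute(img_a, None)
    kp_b, des_b = sift.detectAndCompute(img_b, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    raw = matcher.knnMatch(des_a, des_b, k=2)
    good = [(m.queryIdx, m.trainIdx) for m, n in raw
            if m.distance < ratio * n.distance]
    return kp_a, kp_b, good

def chain_matches(matches_ab, matches_bc):
    """Chain correspondences A->B and B->C into A->C tracks; in the paper,
    matches surviving longer chains across adjacent cameras gain confidence."""
    b_to_c = dict(matches_bc)
    return [(a, b_to_c[b]) for a, b in matches_ab if b in b_to_c]
```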


3dtv-conference: the true vision - capture, transmission and display of 3d video | 2010

Biological-aware stereoscopic rendering in free viewpoint technology using GPU computing

Sammy Rogmans; Maarten Dumont; Gauthier Lafruit; Philippe Bekaert

In this paper we present a biological-aware stereoscopic renderer that is used in a video communication system, to convincingly provide the participants with synthetic 3D perception. As opposed to conventional 3D systems, where pre-recorded content is presented to the viewer without taking his or her viewing location into account, we adaptively exploit both monocular and binocular cues of the human vision system, based on the viewing location. By using a GPU-based control loop, we are able to provide real-time synthetic 3D perception that is experienced as rich and natural, without losing any visual comfort.


international conference on e-business | 2014

Automatic Calibration of Soccer Scenes Using Feature Detection

Patrik Goorts; Steven Maesen; Yunjun Liu; Maarten Dumont; Philippe Bekaert; Gauthier Lafruit

In this paper, we present a method to calibrate large scale camera networks for multi-camera computer vision applications in soccer scenes. The calibration process determines camera parameters, both within each camera (focal length, principal point, etc.) and between the cameras (their relative position and orientation). We first extract candidate image correspondences over adjacent cameras, without using any calibration object, relying on existing feature matching methods. We then combine these pairwise camera feature matches over all adjacent cameras using a confidence-based voting mechanism and a selection relying on the general displacement across the images. Experiments show that this removes a large number of outliers before applying existing calibration toolboxes dedicated to small scale camera networks, which would otherwise fail to find the correct camera parameters over large scale camera networks. We successfully validate our method on real soccer scenes.
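One plausible reading of the "selection relying on the general displacement across the images" is to keep only matches whose displacement vector stays close to the median displacement between the two views; the short sketch below implements that reading, with the threshold as an assumed parameter.

```python
import numpy as np

def select_by_displacement(pts_a, pts_b, max_deviation=20.0):
    """Reject matches whose displacement between the two camera images deviates
    strongly from the median displacement (assumed interpretation)."""
    disp = pts_b - pts_a                       # (N, 2) displacement vectors
    median = np.median(disp, axis=0)
    keep = np.linalg.norm(disp - median, axis=1) < max_deviation
    return keep
```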


international conference on e-business | 2014

Real-Time Edge-Sensitive Local Stereo Matching with Iterative Disparity Refinement

Maarten Dumont; Patrik Goorts; Steven Maesen; Gauthier Lafruit; Philippe Bekaert

First, we present a novel cost aggregation method for stereo matching that uses two edge-sensitive shape-adaptive support windows per pixel region: one following the horizontal edges in the image, the other the vertical edges. Their combination defines the final aggregation window shape that closely follows all object edges and thereby achieves increased hypothesis confidence. Second, we present a novel iterative disparity refinement process and apply it to the initially estimated disparity map. The process consists of four rigorously defined and lightweight modules that can be iterated multiple times: a disparity cross check, bitwise fast voting, invalid disparity handling, and median filtering. We demonstrate that our iterative refinement has a large effect on the overall quality, resulting in smooth disparity maps with sharp object edges, especially around occluded areas. It can be applied to any stereo matching algorithm and tends to converge to a final solution. Finally, we perform a quantitative evaluation on various Middlebury datasets, showing quality gains of several dB in PSNR measured against their ground truths. Our whole disparity estimation algorithm supports an efficient GPU implementation to facilitate scalability and real-time performance.
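A minimal sketch of such a refinement loop, under stated simplifications: the cross check invalidates inconsistent disparities, invalid pixels are filled (here with a simple left-neighbour fill standing in for the paper's bitwise fast voting), and a median filter smooths the result; the loop is then iterated. Parameter values and the fill rule are assumptions, not the authors' exact modules.

```python
import cv2
import numpy as np

def cross_check(disp_l, disp_r, max_diff=1):
    """Left-right consistency check: a left disparity is kept only if the right
    map, sampled at the matched position, roughly agrees."""
    H, W = disp_l.shape
    xs = np.arange(W)[None, :].repeat(H, axis=0)
    ys = np.arange(H)[:, None].repeat(W, axis=1)
    x_r = np.clip(xs - disp_l.astype(np.int32), 0, W - 1)
    return np.abs(disp_l - disp_r[ys, x_r]) <= max_diff

def refine(disp_l, disp_r, iterations=3):
    """Simplified four-step refinement loop: cross check, invalidate, fill
    invalid pixels per row with the last valid value, median filter; iterate."""
    d = disp_l.astype(np.float32)
    for _ in range(iterations):
        valid = cross_check(d, disp_r)
        for y in range(d.shape[0]):
            last = 0.0
            for x in range(d.shape[1]):
                if valid[y, x]:
                    last = d[y, x]
                else:
                    d[y, x] = last      # invalid-disparity handling (simplified)
        d = cv2.medianBlur(d, 3)        # median filtering step
    return d
```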


international conference on 3d imaging | 2014

Iterative refinement for real-time local stereo matching

Maarten Dumont; Patrik Goorts; Steven Maesen; Donald Degraen; Philippe Bekaert; Gauthier Lafruit

We present a novel iterative refinement process that can be applied to any stereo matching algorithm. The quality of its disparity map output is increased using four rigorously defined refinement modules, which can be iterated multiple times: a disparity cross check, bitwise fast voting, invalid disparity handling, and median filtering. We apply our refinement process to our recently developed aggregation window method for stereo matching that combines two adaptive windows per pixel region [2]: one following the horizontal edges in the image, the other the vertical edges. Their combination defines the final aggregation window shape that closely follows all object edges and thereby achieves increased hypothesis confidence. We demonstrate that the iterative disparity refinement has a large effect on the overall quality, especially around occluded areas, and tends to converge to a final solution. We perform a quantitative evaluation on various Middlebury datasets. Our whole disparity estimation process supports an efficient GPU implementation to facilitate scalability and real-time performance.
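The bitwise fast voting step can be illustrated on its own: the most frequent disparity among a pixel's valid neighbours is approximated by taking a majority vote independently on every bit plane, which is how the technique is usually described in the stereo literature. The sketch below is that approximation, not the authors' exact GPU code.

```python
import numpy as np

def bitwise_vote(disparities, num_bits=8):
    """Approximate the most frequent value in a set of disparities by a
    majority vote on each bit plane.

    Example: bitwise_vote([12, 12, 13, 12, 40]) == 12
    """
    disparities = np.asarray(disparities, dtype=np.uint32)
    n = len(disparities)
    result = 0
    for b in range(num_bits):
        ones = int(((disparities >> b) & 1).sum())
        if ones * 2 > n:          # majority of neighbours have this bit set
            result |= 1 << b
    return result
```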

Collaboration


Dive into Maarten Dumont's collaboration.

Top Co-Authors

Gauthier Lafruit

Université libre de Bruxelles
