Network


Latest external collaboration at the country level. Dive into details by clicking on the dots.

Hotspot


Dive into the research topics where Philippe Bekaert is active.

Publication


Featured research published by Philippe Bekaert.


Signal Processing: Image Communication | 2009

Real-time stereo-based view synthesis algorithms: A unified framework and evaluation on commodity GPUs

Sammy Rogmans; Jiangbo Lu; Philippe Bekaert; Gauthier Lafruit

Novel view synthesis based on dense stereo correspondence is an active research problem. Although many algorithms have been proposed recently, this flourishing, cross-area research field still remains less structured than its front-end constituent part, stereo correspondence. Moreover, little work has been done so far to assess different stereo-based view synthesis algorithms, particularly when real-time execution is enforced as a hard application constraint. In this paper, we first propose a unified framework that seamlessly connects stereo correspondence and view synthesis. The proposed framework dissects typical algorithms into a common set of individual functional modules, allowing the comparison of various design decisions. Aligned with this algorithmic framework, we have developed a flexible GPU-accelerated software model, which contains optimized implementations of several recent real-time algorithms, focusing specifically on the local cost aggregation and image warping modules. Based on this common software model running on graphics hardware, we evaluate the relative performance of various design combinations in terms of both view synthesis quality and real-time processing speed. This comparative evaluation leads to a number of observations, and hence offers useful guidance for the future design of real-time stereo-based view synthesis algorithms.
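
The cost aggregation and winner-take-all modules that such a framework dissects can be sketched in miniature. The following is an illustrative CPU reference, not the paper's optimized GPU kernels; the SAD cost, window shape, and function names are assumptions for exposition:

```python
# Minimal sketch of two stereo modules: box-window SAD cost aggregation
# and winner-take-all disparity selection. Real-time systems run these
# loops as GPU kernels; this is a readable reference version.

def sad_cost(left, right, x, y, d, radius):
    """Sum of absolute differences over a (2r+1)^2 window at disparity d."""
    h, w = len(left), len(left[0])
    cost = 0
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            yy = min(max(y + dy, 0), h - 1)          # clamp to image border
            xl = min(max(x + dx, 0), w - 1)
            xr = min(max(x + dx - d, 0), w - 1)      # left[x] matches right[x-d]
            cost += abs(left[yy][xl] - right[yy][xr])
    return cost

def winner_take_all(left, right, max_disp, radius=1):
    """Pick, per pixel, the disparity with the lowest aggregated cost."""
    h, w = len(left), len(left[0])
    disp = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            costs = [sad_cost(left, right, x, y, d, radius)
                     for d in range(max_disp + 1)]
            disp[y][x] = costs.index(min(costs))
    return disp
```

Because the modules are separable, a framework like the one described can swap the aggregation window or the selection rule independently and compare the resulting quality/speed trade-offs.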


3DTV-Conference: The True Vision - Capture, Transmission and Display of 3D Video | 2015

Multi-camera epipolar plane image feature detection for robust view synthesis

Lode Jorissen; Patrik Goorts; Sammy Rogmans; Gauthier Lafruit; Philippe Bekaert

In this paper, we propose a novel, fully automatic method to obtain accurate view synthesis for soccer games. Existing methods often make assumptions about the scene. This usually requires manual input and introduces artifacts in situations not handled by those assumptions. Our method does not make assumptions about the scene; it solely relies on feature detection and utilizes the structures visible in a 3D light field to limit the search range of traditional view synthesis methods. A visual comparison between a standard plane sweep, a depth-aware plane sweep and our method is provided, showing that our method provides more accurate results in most cases.
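
The 3D light field structure the method exploits can be illustrated with basic epipolar plane image (EPI) geometry: for a linear camera array, a scene point traces a line across the EPI whose slope is inversely proportional to its depth. A minimal sketch, assuming the standard pinhole disparity relation d = f·b/z (the names are illustrative, not code from the paper):

```python
# For a linear camera array with baseline b between adjacent cameras and
# focal length f, a point at depth z shifts by d = f*b/z pixels per camera.
# Stacking one image row per camera forms an EPI, where the point is a line.

def epi_positions(x0, depth, focal, baseline, n_cams):
    """x-coordinate of one scene point in each camera (one EPI line)."""
    disparity = focal * baseline / depth
    return [x0 - i * disparity for i in range(n_cams)]

def depth_from_epi_slope(dx_per_camera, focal, baseline):
    """Invert the relation: per-camera shift of an EPI line -> depth."""
    return focal * baseline / dx_per_camera
```

Detecting such lines in the EPI, rather than assuming a scene model, is what lets the search range of a traditional view synthesis method be narrowed.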


3DTV-Conference: The True Vision - Capture, Transmission and Display of 3D Video | 2009

Migrating real-time depth image-based rendering from traditional to next-gen GPGPU

Sammy Rogmans; Maarten Dumont; Gauthier Lafruit; Philippe Bekaert

This paper focuses on the current revolution in using the GPU for general-purpose computations (GPGPU), and how to maximally exploit its powerful resources. Recently, the advent of next-generation GPGPU has replaced the traditional way of exploiting the graphics hardware. We have migrated real-time depth image-based rendering - for use in contemporary 3DTV technology - and observed that using both GPGPU paradigms together leads to higher performance than non-hybrid implementations. With this paper, we want to encourage other researchers to reconsider before migrating their implementations completely, and to use our practical migration rules to achieve maximum performance with minimal effort.


3DTV-Conference: The True Vision - Capture, Transmission and Display of 3D Video | 2014

A qualitative comparison of MPEG view synthesis and light field rendering

Lode Jorissen; Patrik Goorts; Bram Bex; Nick Michiels; Sammy Rogmans; Philippe Bekaert; Gauthier Lafruit

Free Viewpoint Television (FTV) is a new modality in next generation television, which provides the viewer with free navigation through the scene, using image-based view synthesis from a couple of camera view inputs. The recently developed MPEG reference software technology is, however, restricted to narrow baselines and linear camera arrangements. Its reference software currently implements stereo matching and interpolation techniques, designed mainly to support three camera inputs (middle-left and middle-right stereo). Especially in view of future use case scenarios in multi-scopic 3D displays, where hundreds of output views are generated from a limited number (tens) of wide baseline input views, it becomes mandatory to fully exploit all input camera information to its maximal potential. We therefore revisit existing view interpolation techniques to support dozens of camera inputs for better view synthesis performance. In particular, we show that Light Fields yield average PSNR gains of approximately 5 dB over MPEG's existing depth-based multiview video technology, even in the presence of large baselines.
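
The dB gains reported in comparisons like this one are PSNR values, computed between a synthesized view and the ground-truth camera image. A minimal sketch of the metric (assuming 8-bit images, so a peak value of 255):

```python
import math

def psnr(reference, test, peak=255.0):
    """Peak signal-to-noise ratio in dB between two equal-size images."""
    mse = 0.0
    n = 0
    for row_r, row_t in zip(reference, test):
        for r, t in zip(row_r, row_t):
            mse += (r - t) ** 2
            n += 1
    mse /= n
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(peak * peak / mse)
```

A 5 dB PSNR gain corresponds to roughly a 3x reduction in mean squared error of the synthesized view.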


3DTV-Conference: The True Vision - Capture, Transmission and Display of 3D Video | 2016

Multi-view wide baseline depth estimation robust to sparse input sampling

Lode Jorissen; Patrik Goorts; Gauthier Lafruit; Philippe Bekaert

In this paper, we propose a depth map estimation algorithm, based on Epipolar Plane Image (EPI) line extraction, that is able to correctly handle partially occluded objects in wide baseline camera setups. Furthermore, we introduce a descriptor matching technique to reduce the negative influence of inaccurate color correction and similarly textured objects on the depth maps. A visual comparison between an existing EPI-line extraction algorithm and our method is provided, showing that our method provides more accurate and consistent depth maps in most cases.


International Conference on Signal Processing and Multimedia Applications | 2014

Real-time local stereo matching using edge sensitive adaptive windows

Maarten Dumont; Patrik Goorts; Steven Maesen; Philippe Bekaert; Gauthier Lafruit

This paper presents a novel aggregation window method for stereo matching, combining the disparity hypothesis costs of multiple pixels in a local region more efficiently for increased hypothesis confidence. We propose two adaptive windows per pixel region, one following the horizontal edges in the image, the other the vertical edges. Their combination defines the final aggregation window shape that rigorously follows all object edges, yielding better disparity estimations with at least 0.5 dB gain over similar methods in the literature, especially around occluded areas. A qualitative improvement is also observed: smooth disparity maps that respect sharp object edges. Finally, these shape-adaptive aggregation windows are represented by a single quadruple per pixel, thus supporting an efficient GPU implementation with negligible overhead.
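
The "single quadruple per pixel" idea can be sketched with a hypothetical simplification in the spirit of cross-based aggregation: grow four arms (left, right, up, down) from each pixel until the intensity difference to the anchor exceeds a threshold, so the window stops at object edges. The threshold, arm cap, and names below are illustrative assumptions, not the paper's exact rule:

```python
# Hypothetical edge-sensitive arm growth: each pixel stores a quadruple
# (L, R, U, D) of arm lengths; an arm stops at an image border or when the
# intensity jump to the anchor pixel exceeds tau (an object edge).

def grow_arms(img, x, y, tau=20, max_arm=5):
    h, w = len(img), len(img[0])
    anchor = img[y][x]

    def arm(dx, dy):
        n = 0
        while n < max_arm:
            xx, yy = x + (n + 1) * dx, y + (n + 1) * dy
            if not (0 <= xx < w and 0 <= yy < h):
                break
            if abs(img[yy][xx] - anchor) > tau:
                break
            n += 1
        return n

    return (arm(-1, 0), arm(1, 0), arm(0, -1), arm(0, 1))  # (L, R, U, D)
```

Storing just four small integers per pixel is what makes such windows cheap on the GPU: the aggregation window is reconstructed on the fly from the quadruple instead of being stored explicitly.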


International Conference on Signal Processing and Multimedia Applications | 2014

Self-calibration of large scale camera networks

Patrik Goorts; Steven Maesen; Yunjun Liu; Maarten Dumont; Philippe Bekaert; Gauthier Lafruit

In this paper, we present a method to calibrate large scale camera networks for multi-camera computer vision applications in sport scenes. The calibration process determines precise camera parameters, both within each camera (focal length, principal point, etc.) and between the cameras (their relative position and orientation). To this end, we first extract candidate image correspondences over adjacent cameras, without using any calibration object, relying solely on existing feature matching computer vision algorithms applied to the input video streams. We then pairwise propagate these camera feature matches over all adjacent cameras using a chained, confidence-based voting mechanism and a selection relying on the general displacement across the images. Experiments show that this removes a large number of outliers before using existing calibration toolboxes dedicated to small scale camera networks, which would otherwise fail to find the correct camera parameters over large scale camera networks. We successfully validate our method on real soccer scenes.
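
Two of the ingredients, chaining pairwise matches over adjacent cameras and rejecting outliers by their displacement, can be sketched as follows. This is a crude stand-in for the paper's confidence-based voting (the median-shift tolerance test and all names are assumptions for illustration):

```python
def chain_matches(ab, bc):
    """Propagate pairwise feature matches over adjacent cameras: keep only
    features tracked from camera A through camera B to camera C."""
    return {a: bc[b] for a, b in ab.items() if b in bc}

def filter_by_displacement(matches, points_a, points_b, tol=2.0):
    """Reject matches whose displacement deviates from the median shift
    between the two views (a crude outlier filter)."""
    dxs = sorted(points_b[b][0] - points_a[a][0] for a, b in matches.items())
    dys = sorted(points_b[b][1] - points_a[a][1] for a, b in matches.items())
    mdx, mdy = dxs[len(dxs) // 2], dys[len(dys) // 2]  # median displacement
    return {a: b for a, b in matches.items()
            if abs(points_b[b][0] - points_a[a][0] - mdx) <= tol
            and abs(points_b[b][1] - points_a[a][1] - mdy) <= tol}
```

The surviving correspondences are what a downstream bundle-adjustment-style calibration toolbox would then consume.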


Practical Holography XXXII: Displays, Materials, and Applications: SPIE OPTO | 2018

View synthesis from sparse camera array for pop-out rendering on hologram displays

Lode Jorissen; Boaz Jessie Jackin; Koki Wakunami; Kenji Yamamoto; Gauthier Lafruit; Philippe Bekaert

A hologram of a scene can be digitally created from a large set of images of that scene. Since capturing such a large set is infeasible, one may use view synthesis approaches to reduce the number of cameras and generate the missing views. We propose a view interpolation algorithm that creates views inside the scene, based on a sparse set of camera images. This allows the objects to pop out of the holographic display. We show that our approach outperforms existing view synthesis approaches and demonstrate its applicability to holographic stereograms.


International Conference on Computer Graphics and Interactive Techniques | 2016

Nonuniform depth distribution selection with discrete Fourier transform

Lode Jorissen; Patrik Goorts; Gauthier Lafruit; Philippe Bekaert

In recent years there has been growing interest in the generation of virtual views from a limited set of input cameras. This is especially useful for applications such as Free Viewpoint Navigation and light field displays [Tanimoto 2015]. The latter often require tens to hundreds of input views, while it is often not feasible to record with that many cameras. View interpolation algorithms often traverse a set of depths to find correspondences between the input images [Stankiewicz et al. 2013; Goorts et al. 2013]. Most algorithms choose a uniform set of depths to traverse (as shown in Figure 2(a)), but this often leads to an excessive number of unnecessary calculations in regions where no objects are located. It also results in an increased number of mismatches, and thus inaccuracies in the generated views. These problems also occur when too large a depth range is selected. Hence, a depth range that tightly encloses the scene is typically selected manually to mitigate these errors. A depth distribution that organizes the depth layers around the objects in the scene, as shown in Figure 2(b), would reduce these errors and decrease the number of computations by reducing the number of depths to search through. [Goorts et al. 2013] determine a nonuniform global depth distribution by reusing the depth information generated at the previous time stamp. This makes the algorithm dependent on previous results.
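
The general idea of concentrating depth layers where scene content sits, rather than spacing them uniformly, can be sketched with inverse-CDF sampling of a depth histogram. This is an illustrative stand-in, not the paper's DFT-based selection:

```python
# Given a histogram of where scene content lies in depth, place the sweep's
# depth layers by sampling the inverse CDF: bins with more mass receive
# more layers, empty depth ranges receive none.

def nonuniform_depths(hist_edges, hist_counts, n_layers):
    total = sum(hist_counts)
    cdf, acc = [], 0.0
    for c in hist_counts:
        acc += c / total
        cdf.append(acc)
    depths = []
    for i in range(n_layers):
        target = (i + 0.5) / n_layers          # evenly spaced CDF targets
        for bin_idx, c in enumerate(cdf):
            if c >= target:                    # first bin covering the target
                depths.append(0.5 * (hist_edges[bin_idx]
                                     + hist_edges[bin_idx + 1]))
                break
    return depths
```

With the same layer budget, this both reduces wasted plane-sweep evaluations in empty depth ranges and tightens the sampling around actual objects.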


International Conference on e-Business | 2014

Real-Time Edge-Sensitive Local Stereo Matching with Iterative Disparity Refinement

Maarten Dumont; Patrik Goorts; Steven Maesen; Gauthier Lafruit; Philippe Bekaert

First, we present a novel cost aggregation method for stereo matching that uses two edge-sensitive shape-adaptive support windows per pixel region; one following the horizontal edges in the image, the other the vertical edges. Their combination defines the final aggregation window shape that closely follows all object edges and thereby achieves increased hypothesis confidence. Second, we present a novel iterative disparity refinement process and apply it to the initially estimated disparity map. The process consists of four rigorously defined and lightweight modules that can be iterated multiple times: a disparity cross check, bitwise fast voting, invalid disparity handling, and median filtering. We demonstrate that our iterative refinement has a large effect on the overall quality, resulting in smooth disparity maps with sharp object edges, especially around occluded areas. It can be applied to any stereo matching algorithm and tends to converge to a final solution. Finally, we perform a quantitative evaluation on various Middlebury datasets, showing quality improvements of several dB in PSNR measured against their ground truth. Our whole disparity estimation algorithm supports efficient GPU implementation to facilitate scalability and real-time performance.
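
Two of the four refinement modules, the disparity cross check and invalid disparity handling, can be sketched as follows. The nearest-valid-left fill below is a simple stand-in for the paper's bitwise fast voting, and all names are illustrative:

```python
def cross_check(disp_left, disp_right, tol=1):
    """Left-right consistency check: mark a disparity invalid (-1) when the
    right view's disparity at the matched position disagrees beyond tol.
    Occlusions typically fail this test."""
    h, w = len(disp_left), len(disp_left[0])
    out = [row[:] for row in disp_left]
    for y in range(h):
        for x in range(w):
            d = disp_left[y][x]
            xr = x - d  # position of the match in the right view
            if xr < 0 or xr >= w or abs(disp_right[y][xr] - d) > tol:
                out[y][x] = -1
    return out

def fill_invalid(disp):
    """Invalid disparity handling: replace invalid pixels with the nearest
    valid disparity to the left (simplified stand-in for fast voting)."""
    out = [row[:] for row in disp]
    for row in out:
        last = 0
        for x in range(len(row)):
            if row[x] == -1:
                row[x] = last
            else:
                last = row[x]
    return out
```

Iterating such modules (check, vote, fill, median filter) is what drives the map toward the smooth-but-edge-sharp solutions the paper reports.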

Collaboration


Dive into Philippe Bekaert's collaboration.

Top Co-Authors


Gauthier Lafruit

Université libre de Bruxelles


Bram Bex

University of Hasselt


Jiangbo Lu

Katholieke Universiteit Leuven
