Maarten Aerts
Bell Labs
Publications
Featured research published by Maarten Aerts.
International Conference on Distributed Smart Cameras | 2011
Jean-François Macq; Nico Verzijp; Maarten Aerts; Frederik Vandeputte; Erwin Six
This paper describes a setup for navigating omnidirectional video on a tablet PC, whose backside camera is used to sense the device orientation. This allows the system to automatically control the video navigation based on the device rotation, enabling an end user to interact with the content in a very natural manner.
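As an illustration of this interaction model, the minimal sketch below maps a sensed device orientation (yaw and pitch, however they are estimated from the backside camera) to the viewport center in an equirectangular omnidirectional frame. All function and parameter names are hypothetical; the paper's actual orientation-sensing pipeline is not reproduced here.

```python
import numpy as np

def viewport_center(yaw_rad, pitch_rad, frame_w, frame_h):
    """Map device orientation (yaw, pitch) to the pixel coordinates of
    the viewport center in an equirectangular omnidirectional frame."""
    # Yaw wraps around the full 360-degree horizontal axis.
    u = ((yaw_rad / (2 * np.pi)) + 0.5) * frame_w % frame_w
    # Pitch in [-pi/2, pi/2] maps onto the vertical axis (up = smaller v).
    v = np.clip((0.5 - pitch_rad / np.pi) * frame_h, 0, frame_h - 1)
    return u, v

# Example: device rotated 30 degrees right and 10 degrees up,
# navigating a 4096x2048 equirectangular video frame.
print(viewport_center(np.radians(30), np.radians(10), 4096, 2048))
```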
ACM Multimedia | 2017
Patrice Rondao Alface; Maarten Aerts; Donny Tytgat; Sammy Lievens; Christoph Stevens; Nico Verzijp; Jean-François Macq
We present an end-to-end system for streaming Cinematic Virtual Reality (VR) content (also called 360° or omnidirectional content). Content is captured and ingested at a resolution of 16K at 25 Hz and streamed to untethered mobile VR devices. Besides the usual navigation interactions, such as panning and tilting, offered by common VR systems, we also provide zooming interactivity. This allows the VR client to fetch high-quality pixels captured at a spatial resolution of 16K, which greatly increases perceived quality compared to a 4K VR streaming solution. Since current client devices are not capable of receiving and decoding 16K video, several optimizations are provided to stream only the pixels required for the user's current viewport, while meeting the strict latency and bandwidth requirements of a high-quality immersive VR experience.
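To illustrate the viewport-dependent streaming idea, here is a minimal sketch that selects which tiles of a tiled equirectangular panorama intersect the user's current viewport, so that only those tiles need to be fetched. The tiling scheme and all names are assumptions for illustration, not the paper's actual tile layout or streaming logic.

```python
def tiles_for_viewport(yaw_deg, pitch_deg, hfov_deg, vfov_deg,
                       tiles_x=16, tiles_y=8):
    """Return (column, row) indices of equirectangular tiles that the
    current viewport overlaps, with wraparound at the 360-degree seam."""
    # Horizontal: leftmost covered column, then enough columns to span
    # the horizontal field of view (plus one for partial overlap).
    col_lo = int(((yaw_deg - hfov_deg / 2) % 360) / 360 * tiles_x)
    n_cols = min(tiles_x, round(hfov_deg / 360 * tiles_x) + 1)
    cols = [(col_lo + i) % tiles_x for i in range(n_cols)]
    # Vertical: clamp at the poles, no wraparound.
    top = max(0.0, 90 - (pitch_deg + vfov_deg / 2))
    bottom = min(180.0, 90 - (pitch_deg - vfov_deg / 2))
    rows = range(int(top / 180 * tiles_y),
                 min(tiles_y, int(bottom / 180 * tiles_y) + 1))
    return [(c, r) for c in cols for r in rows]

# Example: a 90x60 degree viewport looking 20 degrees left and 15 degrees
# up touches only ~15 of the 128 tiles instead of the whole 16K frame.
print(tiles_for_viewport(-20, 15, 90, 60))
```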
Bell Labs Technical Journal | 2012
Maarten Aerts; Erwin Six
This paper presents a method for online tracking of a camera's orientation within a man-made scene. The technique applies to novel mobile applications where live video content from hand-held cameras requires image processing such as temporal stitching, stabilization, augmented reality, or similar operations. The proposed method fuses relative frame-to-frame measurements from a point feature detector with absolute frame-to-scene measurements extracted from vanishing lines in the background of a man-made scene. To achieve this, we propose a Kalman framework that exploits the complementarity of both visual cues in a robust way. The method assumes minimal pose change between consecutive video frames and assumes that the scene yields sufficient straight lines in at least one of three orthogonal directions. The key insight is that point features alone may be insufficient when a foreground object passes by or when there are not enough accurate features to register. Moreover, point features provide only a relative frame-to-frame metric, which results in accumulated error. On the other hand, vanishing lines alone are insufficient as well, because they provide inaccurate information when the camera is oriented along one of the three main directions. The strength and novelty of the method lie in fusing both observations to overcome their respective shortcomings.
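The fusion idea can be illustrated with a toy one-dimensional (yaw-only) Kalman filter: relative deltas from point features drive the prediction and accumulate uncertainty, while occasional absolute measurements from vanishing lines correct the drift. This is a simplified sketch under assumed noise parameters, not the authors' full 3-D orientation filter.

```python
class YawFuser:
    """Toy 1-D Kalman filter fusing a relative cue (point features)
    with an absolute cue (vanishing lines). All parameters are
    hypothetical; the paper tracks full 3-D orientation."""

    def __init__(self, q_rel=1e-4, r_abs=1e-2):
        self.yaw, self.P = 0.0, 1.0   # state estimate and its variance
        self.q_rel = q_rel            # noise of one relative delta
        self.r_abs = r_abs            # noise of one absolute measurement

    def predict(self, delta_yaw):
        # Integrate the relative frame-to-frame measurement; variance
        # grows every frame, which is exactly the drift problem above.
        self.yaw += delta_yaw
        self.P += self.q_rel

    def correct(self, abs_yaw):
        # Standard Kalman update with a drift-free absolute cue.
        K = self.P / (self.P + self.r_abs)
        self.yaw += K * (abs_yaw - self.yaw)
        self.P *= (1.0 - K)

f = YawFuser()
for k in range(100):
    f.predict(0.01)                # point features: ~0.01 rad per frame
    if k % 25 == 24:
        f.correct(0.01 * (k + 1))  # vanishing lines: occasional fix
print(f.yaw, f.P)
```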
Proceedings of SPIE | 2016
Donny Tytgat; Maarten Aerts; Jeroen De Busser; Sammy Lievens; Patrice Rondao Alface; Jean-François Macq
The new generation of HMDs coming to the market is expected to enable many new applications that allow free-viewpoint experiences with captured video objects. Current applications usually rely on 3D content that is manually created or captured offline. In contrast, this paper focuses on augmented reality applications that use live-captured 3D objects while maintaining free-viewpoint interaction. We present a system that allows live dynamic 3D objects (e.g., a person who is talking) to be captured in real time. Real-time performance is achieved by traversing a number of representation formats and exploiting their specific benefits. For instance, depth images are maintained for fast neighborhood retrieval and occlusion determination, while implicit surfaces are used to facilitate multi-source aggregation for both geometry and texture. The result is a 3D reconstruction system that outputs multi-textured triangle meshes at real-time rates. An end-to-end system is presented that captures and reconstructs live 3D data and allows this data to be used on a networked (AR) device. For allocating the different functional blocks onto the available physical devices, a number of alternatives are proposed, considering the available computational power and bandwidth for each of the components. As we will show, the representation format can play an important role in this functional allocation and allows for a flexible system that can support a highly heterogeneous infrastructure.
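One of the representation hops such a pipeline performs is back-projecting a depth image into a 3-D point set. The sketch below shows this step under a standard pinhole camera model; the intrinsics and the function name are assumptions for illustration, and the paper's actual reconstruction code is not shown.

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth image into a 3-D point cloud using the
    pinhole model (focal lengths fx, fy; principal point cx, cy)."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth.astype(np.float64)
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    # Keep only pixels with valid (non-zero) depth.
    mask = z > 0
    return np.stack([x[mask], y[mask], z[mask]], axis=-1)

# Example: a synthetic 4x4 depth map with all points at 1 m.
pts = depth_to_points(np.ones((4, 4)), fx=525, fy=525, cx=2, cy=2)
print(pts.shape)  # (16, 3)
```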
International Conference on Distributed Smart Cameras | 2011
Maarten Aerts; Erwin Six
This paper proposes a method for extrinsically calibrating cameras in a rectangular, room-like environment. It searches for line segments that indicate planar surfaces in the scene, mainly the floor, walls, and ceiling, and uses the homographic relation between matching features on those surfaces to solve for the calibration parameters of the camera and the planes. The environment is assumed to follow the Manhattan assumption, i.e., to have three orthogonal main directions, but we also propose solutions for the general case. We argue that this approach, together with its ability to help describe affine-invariant features and thereby find a larger number of correct matches, makes the method more robust than methods based on epipolar constraints. Due to its cascade-like nature, the method yields a less accurate estimate, but it serves well as a starting point for bundle adjustment.
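The plane-induced homographies the method builds on can be estimated from point correspondences with the standard direct linear transform (DLT). The sketch below is a minimal illustration of that sub-step; decomposing the homography into camera extrinsics, as the paper does, is a further step not shown, and all names here are hypothetical.

```python
import numpy as np

def homography_dlt(src, dst):
    """Estimate the 3x3 homography mapping src -> dst (Nx2 arrays,
    N >= 4 point correspondences on one plane) via the DLT."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The homography is the null vector of A, i.e. the right singular
    # vector for the smallest singular value.
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

# Example: four corners of a unit square mapped to a skewed quad,
# as might be observed between two views of the same wall.
src = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], float)
dst = np.array([[0, 0], [2, 0.1], [2.1, 1.9], [0.2, 2.0]], float)
print(homography_dlt(src, dst))
```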
Archive | 2011
Maarten Aerts; Donny Tytgat; Jean-François Macq; Sammy Lievens
Archive | 2010
Donny Tytgat; Jean-François Macq; Sammy Lievens; Maarten Aerts
Archive | 2012
Donny Tytgat; Sammy Lievens; Maarten Aerts
Archive | 2010
Maarten Aerts; Donny Tytgat; Sammy Lievens
Archive | 2017
Sammy Lievens; Donny Tytgat; Maarten Aerts; Vinay Namboodiri; Erwin Six