
Publication


Featured research published by Steven Maesen.


International Conference on e-Business | 2008

Optimized Two-Party Video Chat with Restored Eye Contact Using Graphics Hardware

Maarten Dumont; Sammy Rogmans; Steven Maesen; Philippe Bekaert

We present a practical system prototype that convincingly restores eye contact between two video chat participants with a minimal amount of constraints. The proposed six-fold camera setup is easily integrated into the monitor frame and is used to interpolate an image as if a virtual camera had captured it through a transparent screen. The peer user has a large freedom of movement, resulting in system specifications that enable genuine practical usage. Our software framework harnesses the powerful computational resources inside graphics hardware and maximizes arithmetic intensity to achieve better than real-time performance, up to 42 frames per second for 800×600 images. Furthermore, an optimal set of fine-tuned parameters is presented that optimizes the end-to-end performance of the application to achieve high subjective visual quality, while still allowing further algorithmic advancement without losing real-time capability.
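The view interpolation step can be illustrated in miniature: a virtual camera placed between two physical cameras blends their images with distance-based weights. This is a hedged toy sketch, assuming the two input images are already rectified and disparity-warped into alignment; the function name is illustrative and this is not the paper's GPU pipeline.

```python
import numpy as np

def interpolate_view(img_left, img_right, t):
    """Blend two aligned camera images into a virtual view.

    t in [0, 1] is the virtual camera's position between the
    left (t=0) and right (t=1) physical cameras. Real systems
    warp by per-pixel disparity first; this sketch blends
    already-aligned images only.
    """
    return (1.0 - t) * img_left + t * img_right

left = np.zeros((2, 2, 3))           # black test image
right = np.ones((2, 2, 3)) * 255.0   # white test image
mid = interpolate_view(left, right, 0.5)   # midpoint view: uniform gray
```

A real virtual camera between more than two views would extend the same idea to a weighted sum over all contributing cameras.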


Virtual Reality Software and Technology | 2013

Scalable optical tracking for navigating large virtual environments using spatially encoded markers

Steven Maesen; Patrik Goorts; Philippe Bekaert

In this paper we present a novel approach for tracking the movement of a user in a large indoor environment. Many studies show that natural walking in virtual environments increases the users' feeling of immersion. However, most tracking systems suffer from a limited working area or are expensive to scale up to a reasonable size for navigation. Our system is designed to be easily scalable in both working area and number of simultaneous users, using inexpensive off-the-shelf components. To accomplish this, the system determines the 6-DOF pose using passive LED strips mounted on the ceiling, which are spatially encoded using De Bruijn codes. A camera mounted on the user's head records these patterns and determines its own pose independently, so there is no restriction on the number of tracked objects. The system is accurate to a few millimeters in location and less than a degree in orientation. The accuracy of the tracker is furthermore independent of the size of the working area, which makes it scalable to enormous installations. To provide a realistic feeling of immersion, the system runs in real time and is limited only by the frame rate of the camera, currently 60 Hz.
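The spatial encoding idea can be sketched concretely: in a De Bruijn sequence B(k, n), every length-n window occurs exactly once, so reading n consecutive LED states along a strip yields an absolute position directly. A minimal Python sketch using the standard De Bruijn construction; `locate` is an illustrative helper, not the paper's actual decoder.

```python
def de_bruijn(k, n):
    """k-ary De Bruijn sequence B(k, n): every length-n word over
    k symbols appears exactly once when read cyclically."""
    a = [0] * k * n
    seq = []
    def db(t, p):
        if t > n:
            if n % p == 0:
                seq.extend(a[1:p + 1])
        else:
            a[t] = a[t - p]
            db(t + 1, p)
            for j in range(a[t - p] + 1, k):
                a[t] = j
                db(t + 1, t)
    db(1, 1)
    return seq

def locate(window, seq, n):
    """Absolute position of an observed length-n window on the strip."""
    ext = seq + seq[:n - 1]          # wrap around for cyclic windows
    for i in range(len(seq)):
        if ext[i:i + n] == window:
            return i
    return -1

strip = de_bruijn(2, 3)              # 8 LED states; every 3-window is unique
pos = locate(strip[2:5], strip, 3)   # observing LEDs 2..4 recovers position 2
```

Because every window is unique, the camera only ever needs to see n consecutive markers to know exactly where it is along the strip.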


IEEE Computer Graphics and Applications | 2010

3DUI 2010 Contest Grand Prize Winners

Pablo Figueroa; Yoshifumi Kitamura; Sébastien Kuntz; Lode Vanacken; Steven Maesen; Tom De Weyer; Sofie Notelaers; Johanna Renny Octavia; Anastasiia Beznosyk; Karin Coninx; Felipe Bacim; Regis Kopper; Anamary Leal; Tao Ni; Doug A. Bowman

The 2010 IEEE Symposium on 3D User Interfaces ran the symposium's first 3DUI Grand Prize, a contest for innovative, practical solutions to classic 3DUI problems. The authors describe the rationale for the first contest and give an analysis of all submissions. Each category's winners also discuss their solutions.


International Conference on Augmented and Virtual Reality | 2014

Robust Global Tracking Using a Seamless Structured Pattern of Dots

Lode Jorissen; Steven Maesen; Ashish Doshi; Philippe Bekaert

In this paper, we present a novel optical tracking approach to accurately estimate the pose of a camera in large-scene augmented reality (AR). Traditionally, larger scenes are provided with multiple markers, each with its own identifier and coordinate system. However, when any part of a single marker is occluded, the marker cannot be identified. Our system uses a seamless structure of dots where the world position of each dot is represented by its spatial relation to neighboring dots. By using only the dots as features, our marker can be robustly identified. We use projective invariants to estimate the global position of the features and exploit temporal coherence using optical flow. With this design, our system is more robust against occlusions. It also gives the user more freedom of movement, allowing them to explore objects up close and from a distance.
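The projective invariants mentioned above can be illustrated with the classic cross-ratio of four collinear points, which is preserved by any projective transformation: the ratios a camera measures in the image match the ratios of the dot pattern in the world. A minimal sketch; the homography `h` is an arbitrary example map, not taken from the paper.

```python
def cross_ratio(a, b, c, d):
    """Cross-ratio of four collinear points, given as scalar
    coordinates along their line; a projective invariant."""
    return ((a - c) * (b - d)) / ((b - c) * (a - d))

def h(x):
    """An arbitrary 1D projective map (homography) for illustration."""
    return (2 * x + 1) / (x + 3)

before = cross_ratio(0.0, 1.0, 2.0, 3.0)                 # world-side ratio
after = cross_ratio(h(0.0), h(1.0), h(2.0), h(3.0))      # image-side ratio
# before == after, so a dot configuration can be recognized
# from any camera viewpoint by its cross-ratios.
```

This is why collinear dot groups can serve as identifiers even though the camera sees them under an unknown perspective distortion.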


Symposium on 3D User Interfaces | 2012

HeatMeUp: A 3DUI serious game to explore collaborative wayfinding

Sofie Notelaers; Tom De Weyer; Patrik Goorts; Steven Maesen; Lode Vanacken; Karin Coninx; Philippe Bekaert

Wayfinding inside a virtual environment is a cognitive process during navigation. Normally, users in a virtual environment have to rely on themselves and on cues such as waypoints to improve their knowledge of their surroundings. In this paper we present our solution for the 3DUI Contest 2012: HeatMeUp, a 3DUI serious game that explores a collaborative alternative, in which a partner is responsible for providing wayfinding cues. The game is set in a multi-storey building where several fires and gas leaks occur and a firefighter has to overcome several challenges, guided by a fire chief.


International Conference on Signal Processing and Multimedia Applications | 2014

Real-time local stereo matching using edge sensitive adaptive windows

Maarten Dumont; Patrik Goorts; Steven Maesen; Philippe Bekaert; Gauthier Lafruit

This paper presents a novel aggregation window method for stereo matching that combines the disparity hypothesis costs of multiple pixels in a local region more efficiently for increased hypothesis confidence. We propose two adaptive windows per pixel region, one following the horizontal edges in the image, the other the vertical edges. Their combination defines the final aggregation window shape, which rigorously follows all object edges, yielding better disparity estimations with at least 0.5 dB gain over similar methods in the literature, especially around occluded areas. A qualitative improvement is also observed: smooth disparity maps that respect sharp object edges. Finally, these shape-adaptive aggregation windows are represented by a single quadruple per pixel, supporting an efficient GPU implementation with negligible overhead.
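The per-pixel quadruple idea can be sketched with a simpler cross-based variant: each pixel grows four arms (left, right, up, down) until it hits an intensity edge, and the resulting four lengths define its aggregation support. A hedged Python sketch, assuming a grayscale image and a fixed threshold `tau`; the paper's actual growth rule and window combination differ.

```python
import numpy as np

def adaptive_arms(gray, tau=10, max_arm=5):
    """Per-pixel (left, right, up, down) arm lengths: each arm grows
    until it meets an intensity edge (|I(p) - I(q)| > tau), the image
    border, or max_arm. The quadruple defines the edge-aware support
    used for cost aggregation."""
    h, w = gray.shape
    arms = np.zeros((h, w, 4), dtype=np.int32)
    for y in range(h):
        for x in range(w):
            for k, (dy, dx) in enumerate([(0, -1), (0, 1), (-1, 0), (1, 0)]):
                ny, nx, n = y + dy, x + dx, 0
                while (0 <= ny < h and 0 <= nx < w and n < max_arm
                       and abs(int(gray[ny, nx]) - int(gray[y, x])) <= tau):
                    n += 1
                    ny += dy
                    nx += dx
                arms[y, x, k] = n   # (left, right, up, down) per pixel
    return arms

gray = np.zeros((5, 10), dtype=np.uint8)
gray[:, 5:] = 200                  # vertical intensity edge at x = 5
arms = adaptive_arms(gray)
# at (2, 4) the left arm runs to the border while the right arm
# stops at the edge: (4, 0, 2, 2)
```

Storing only this quadruple per pixel is what makes a GPU implementation cheap: the window shape never has to be materialized.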


International Conference on Signal Processing and Multimedia Applications | 2014

Self-calibration of large scale camera networks

Patrik Goorts; Steven Maesen; Yunjun Liu; Maarten Dumont; Philippe Bekaert; Gauthier Lafruit

In this paper, we present a method to calibrate large scale camera networks for multi-camera computer vision applications in sport scenes. The calibration process determines precise camera parameters, both within each camera (focal length, principal point, etc.) and between the cameras (their relative position and orientation). To this end, we first extract candidate image correspondences over adjacent cameras, without using any calibration object, relying solely on existing feature matching computer vision algorithms applied to the input video streams. We then pairwise propagate these camera feature matches over all adjacent cameras using a chained, confidence-based voting mechanism and a selection relying on the general displacement across the images. Experiments show that this removes a large number of outliers before using existing calibration toolboxes dedicated to small scale camera networks, which would otherwise fail to find the correct camera parameters over large scale camera networks. We successfully validate our method on real soccer scenes.
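The displacement-based selection can be illustrated with a much simpler stand-in: keep only the matches whose displacement vector agrees with the consensus (median) displacement of the image pair. This sketch compresses the paper's chained, confidence-based voting into a single median test; the function name and tolerance are illustrative assumptions.

```python
import numpy as np

def filter_matches(pts_a, pts_b, tol=2.0):
    """Keep candidate matches whose displacement agrees with the
    median displacement across the image pair; outlier matches
    with deviant motion are rejected."""
    disp = np.asarray(pts_b, float) - np.asarray(pts_a, float)
    med = np.median(disp, axis=0)                       # consensus motion
    keep = np.linalg.norm(disp - med, axis=1) <= tol
    return keep

pts_a = [(0, 0), (10, 5), (20, 8), (3, 30), (7, 7), (40, 2)]
pts_b = [(10, 0), (20, 5), (30, 8), (13, 30), (17, 7), (90, 32)]
keep = filter_matches(pts_a, pts_b)
# the last pair moves by (50, 30) instead of (10, 0) and is rejected
```

Pre-filtering like this is what lets a standard small-network calibration toolbox converge: the remaining correspondences are consistent enough for bundle adjustment.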


Articulated Motion and Deformable Objects | 2016

Interactive Acquisition of Apparel for Garment Modeling

Fabian Di Fiore; Steven Maesen; Frank Van Reeth

In this paper we set out to find a new technical and commercial solution to easily acquire garment models. The idea is to allow the creation of new stylized versions of garments just by applying a new print design. To this end we introduce a technique for model acquisition of new apparel collections that makes use of a sparse set of guidelines, in combination with an intuitive graphical user interface that allows the user to obtain and refine a 2D mesh representation of the garment. To achieve a 3D-like look of the virtual garment we employ structured light scanning to automatically obtain a shadow map. We believe our system allows online clothing shops to bring new visual art into bespoke clothing, making apparel products more valuable compared to other garments on the market. Furthermore, it helps artists and designers in virtual prototyping and in visualizing garments with new print designs.


3DTV-Conference: The True Vision - Capture, Transmission and Display of 3D Video | 2016

Omnidirectional free viewpoint video using panoramic light fields

Steven Maesen; Patrik Goorts; Philippe Bekaert

In this paper, we describe a system that creates an omnidirectional free viewpoint experience using only a small number of input cameras. The input cameras are placed on a circle, and we create a large number of novel virtual viewpoints on that circle. Next, we choose a position within the circle and compute the omnidirectional image visible from that position by treating the collection of virtual images as a light field. The corresponding pixels in the virtual images are selected by tracing rays from the desired viewing position. Changing your position inside the circle results in an adapted, view-dependent rendering. This creates a free viewpoint 3D VR experience. We demonstrate our method using the Unity game engine combined with the Oculus Rift.
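The ray-tracing step can be sketched geometrically: for a viewing position inside the camera circle and a ray direction, intersect the ray with the circle and pick the virtual camera at the exit point. A hedged sketch assuming nearest-camera selection with no blending; the real renderer samples the full light field.

```python
import math

def camera_for_ray(px, py, theta, radius=1.0):
    """For a viewing position (px, py) inside a camera circle of the
    given radius and a ray direction theta, return the angle on the
    circle where the ray exits, i.e. which virtual camera's image
    supplies this ray's pixel."""
    dx, dy = math.cos(theta), math.sin(theta)
    # Solve |p + t*d|^2 = r^2 for the positive root t (ray exits circle).
    b = px * dx + py * dy
    c = px * px + py * py - radius * radius
    t = -b + math.sqrt(b * b - c)    # c < 0 inside the circle => real root
    hx, hy = px + t * dx, py + t * dy
    return math.atan2(hy, hx)

angle = camera_for_ray(0.5, 0.0, math.pi)   # looking left from x = 0.5
# the ray exits the unit circle at (-1, 0): the camera at angle pi
```

Repeating this for every viewing direction assembles the omnidirectional image, and moving (px, py) naturally produces the view-dependent parallax described above.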


International Conference on e-Business | 2014

Automatic Calibration of Soccer Scenes Using Feature Detection

Patrik Goorts; Steven Maesen; Yunjun Liu; Maarten Dumont; Philippe Bekaert; Gauthier Lafruit

In this paper, we present a method to calibrate large scale camera networks for multi-camera computer vision applications in soccer scenes. The calibration process determines camera parameters, both within each camera (focal length, principal point, etc.) and between the cameras (their relative position and orientation). We first extract candidate image correspondences over adjacent cameras, without using any calibration object, relying on existing feature matching methods. We then combine these pairwise camera feature matches over all adjacent cameras using a confidence-based voting mechanism and a selection relying on the general displacement across the images. Experiments show that this removes a large number of outliers before using existing calibration toolboxes dedicated to small scale camera networks, which would otherwise fail to find the correct camera parameters over large scale camera networks. We successfully validate our method on real soccer scenes.

Collaboration


An overview of Steven Maesen's collaborations.

Top Co-Authors

Gauthier Lafruit

Université libre de Bruxelles


Karin Coninx

Transnational University Limburg
