Lode Jorissen
University of Hasselt
Publication
Featured research published by Lode Jorissen.
Electronic Imaging | 2016
Gauthier Lafruit; Marek Domanski; Krzysztof Wegner; Tomasz Grajek; Takanori Senoh; Joël Jung; Péter Tamás Kovács; Patrik Goorts; Lode Jorissen; Adrian Munteanu; Beerend Ceulemans; Pablo Carballeira; Sergio García; Masayuki Tanimoto
ISO/IEC MPEG and ITU-T VCEG have recently jointly issued a new multiview video compression standard, called 3D-HEVC, which reaches unprecedented compression performance for linear, dense camera arrangements. In view of supporting future high-quality, auto-stereoscopic 3D displays and Free Navigation virtual/augmented reality applications with sparse, arbitrarily arranged camera setups, innovative depth estimation and virtual view synthesis techniques with global optimizations over all camera views should be developed. Preliminary studies in response to the MPEG-FTV (Free viewpoint TV) Call for Evidence suggest these targets are within reach, with at least 6% bitrate gains over 3D-HEVC technology.
3DTV-Conference: The True Vision - Capture, Transmission and Display of 3D Video | 2015
Lode Jorissen; Patrik Goorts; Sammy Rogmans; Gauthier Lafruit; Philippe Bekaert
In this paper, we propose a novel, fully automatic method to obtain accurate view synthesis for soccer games. Existing methods often make assumptions about the scene. This usually requires manual input and introduces artifacts in situations not handled by those assumptions. Our method does not make assumptions about the scene; it solely relies on feature detection and utilizes the structures visible in a 3D light field to limit the search range of traditional view synthesis methods. A visual comparison between a standard plane sweep, a depth-aware plane sweep and our method is provided, showing that our method provides more accurate results in most cases.
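For orientation, a minimal plane-sweep sketch in Python on a rectified stereo pair. The disparity bounds stand in for the light-field-derived search-range limits described above; all names and values are illustrative, not the paper's implementation.

    # Minimal plane-sweep view synthesis for a rectified stereo pair.
    # The restricted disparity range (d_min..d_max) plays the role of a
    # limited search range; wrap-around at the image borders is ignored.
    import numpy as np

    def plane_sweep_middle_view(left, right, d_min, d_max):
        """left, right : (H, W, 3) float images with values in [0, 1]
           d_min, d_max: inclusive disparity range to sweep
           returns     : synthesized middle view, (H, W, 3)"""
        best_cost = np.full(left.shape[:2], np.inf)
        best_view = np.zeros_like(left)
        for d in range(d_min, d_max + 1):
            # Shift each input halfway toward the virtual (middle) viewpoint.
            l_shift = np.roll(left, -(d // 2), axis=1)
            r_shift = np.roll(right, d - d // 2, axis=1)
            # Color-consistency cost: per-pixel sum of absolute differences.
            cost = np.abs(l_shift - r_shift).sum(axis=2)
            mask = cost < best_cost
            best_cost[mask] = cost[mask]
            # Blend the aligned inputs where this disparity hypothesis wins.
            best_view[mask] = 0.5 * (l_shift[mask] + r_shift[mask])
        return best_view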
International Conference on Augmented and Virtual Reality | 2014
Lode Jorissen; Steven Maesen; Ashish Doshi; Philippe Bekaert
In this paper, we present a novel optical tracking approach to accurately estimate the pose of a camera in large scene augmented reality (AR). Traditionally, larger scenes are provided with multiple markers with their own identifier and coordinate system. However, when any part of a single marker is occluded, the marker cannot be identified. Our system uses a seamless structure of dots where the world position of each dot is represented by its spatial relation to neighboring dots. By using only the dots as features, our marker can be robustly identified. We use projective invariants to estimate the global position of the features and exploit temporal coherence using optical flow. With this design, our system is more robust against occlusions. It can also give the user more freedom of movement allowing them to explore objects up close and from a distance.
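A small illustration of the kind of projective invariant involved: the cross-ratio of four collinear points is preserved by any homography, so it can identify a dot configuration independently of the camera pose. This is a generic sketch, not the paper's actual feature descriptor.

    # Cross-ratio of four collinear image points: a projective invariant.
    import numpy as np

    def cross_ratio(p1, p2, p3, p4):
        """Cross-ratio (p1, p2; p3, p4) of four collinear 2D points (u, v)."""
        d = lambda a, b: np.linalg.norm(np.asarray(b) - np.asarray(a))
        # (|p1 p3| * |p2 p4|) / (|p2 p3| * |p1 p4|) is unchanged under projection,
        # so the same four scene dots give (nearly) the same value from any viewpoint.
        return (d(p1, p3) * d(p2, p4)) / (d(p2, p3) * d(p1, p4))

The resulting value can then index a table of known dot configurations to recover which part of the marker structure is being observed.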
3DTV-Conference: The True Vision - Capture, Transmission and Display of 3D Video | 2014
Lode Jorissen; Patrik Goorts; Bram Bex; Nick Michiels; Sammy Rogmans; Philippe Bekaert; Gauthier Lafruit
Free Viewpoint Television (FTV) is a new modality in next generation television, which provides the viewer free navigation through the scene, using image-based view synthesis from a couple of camera view inputs. The recently developed MPEG reference software technology is, however, restricted to narrow baselines and linear camera arrangements. Its reference software currently implements stereo matching and interpolation techniques, designed mainly to support three camera inputs (middle-left and middle-right stereo). Especially in view of future use case scenarios in multi-scopic 3D displays, where hundreds of output views are generated from a limited number (tens) of wide baseline input views, it becomes mandatory to fully exploit all input camera information to its maximal potential. We therefore revisit existing view interpolation techniques to support dozens of camera inputs for better view synthesis performance. In particular, we show that Light Fields yield average PSNR gains of approximately 5 dB over MPEG's existing depth-based multiview video technology, even in the presence of large baselines.
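For reference, the approximately 5 dB figure refers to the standard peak signal-to-noise ratio; a minimal computation, assuming 8-bit images, is sketched below.

    # PSNR between a reference view and a synthesized view (8-bit images assumed).
    import numpy as np

    def psnr(reference, synthesized, max_val=255.0):
        mse = np.mean((reference.astype(np.float64) - synthesized.astype(np.float64)) ** 2)
        return 10.0 * np.log10(max_val ** 2 / mse)

A 5 dB gain corresponds to roughly a threefold reduction in the mean squared error of the synthesized views.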
3DTV-Conference: The True Vision - Capture, Transmission and Display of 3D Video | 2016
Lode Jorissen; Patrik Goorts; Gauthier Lafruit; Philippe Bekaert
In this paper, we propose a depth map estimation algorithm, based on Epipolar Plane Image (EPI) line extraction, that is able to correctly handle partially occluded objects in wide baseline camera setups. Furthermore, we introduce a descriptor matching technique to reduce the negative influence of inaccurate color correction and similarly textured objects on the depth maps. A visual comparison between an existing EPI-line extraction algorithm and our method is provided, showing that our method provides more accurate and consistent depth maps in most cases.
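As background, for a linear, equidistant camera array the slope of a feature's trace in the EPI encodes its depth. A minimal sketch of that relation follows, with hypothetical parameter names; the paper's EPI-line extraction and descriptor matching are not reproduced here.

    # Depth from the slope of an EPI line (linear, equidistant camera array).
    def depth_from_epi_slope(delta_u, focal_px, baseline):
        """delta_u  : horizontal shift in pixels of a feature between adjacent cameras
                      (the inverse slope of its line in the EPI)
           focal_px : focal length in pixels
           baseline : spacing between adjacent cameras in metres
           returns  : depth Z = f * B / delta_u in metres"""
        return focal_px * baseline / delta_u

    # Example: f = 1000 px, B = 0.1 m, a 4 px shift per camera step gives Z = 25 m.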
International Conference on Augmented and Virtual Reality | 2014
Nick Michiels; Lode Jorissen; Jeroen Put; Philippe Bekaert
This paper presents the augmentation of immersive omnidirectional video with realistically lit objects. Recent years have seen a proliferation of real-time capturing and rendering methods for omnidirectional video. Together with these technologies, rendering devices such as the Oculus Rift have increased the immersive experience of users. We demonstrate the use of structure from motion on omnidirectional video to reconstruct the trajectory of the camera. The position of the car is then linked to an appropriate 360° environment map. State-of-the-art augmented reality applications have often lacked realistic appearance and lighting. Our system is capable of evaluating the rendering equation in real time, by using the captured omnidirectional video as a lighting environment. We demonstrate an application in which a computer-generated vehicle can be controlled through an urban environment.
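To illustrate what evaluating the rendering equation against captured omnidirectional video involves, here is a small CPU sketch of the diffuse term against a lat-long environment map. The actual system evaluates this in real time; the sketch below is a simplified, assumption-laden illustration.

    # Diffuse shading from an equirectangular (lat-long) environment map.
    import numpy as np

    def diffuse_from_envmap(env, normal, albedo):
        """env    : (H, W, 3) linear-radiance lat-long environment map
           normal : unit surface normal, shape (3,)
           albedo : (3,) diffuse reflectance
           returns: outgoing radiance (3,) for a Lambertian surface"""
        H, W, _ = env.shape
        theta = (np.arange(H) + 0.5) / H * np.pi            # polar angle of each row
        phi = (np.arange(W) + 0.5) / W * 2.0 * np.pi        # azimuth of each column
        sin_t = np.sin(theta)[:, None]
        dirs = np.stack([sin_t * np.cos(phi)[None, :],       # texel center directions
                         np.cos(theta)[:, None] * np.ones((1, W)),
                         sin_t * np.sin(phi)[None, :]], axis=-1)
        d_omega = (np.pi / H) * (2.0 * np.pi / W) * sin_t    # per-texel solid angle
        cos_term = np.clip(dirs @ np.asarray(normal), 0.0, None)
        # Sum L_i * cos(theta_i) * d_omega over the sphere, times albedo / pi.
        irradiance = (env * (cos_term * d_omega)[..., None]).sum(axis=(0, 1))
        return np.asarray(albedo) / np.pi * irradiance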
Ultra-High-Definition Imaging Systems | 2018
Boaz Jessie Jackin; Koki Wakunami; Lode Jorissen; Yasuyuki Ichihashi; Makoto Okui; Ryutaro Oi; Kenji Yamamoto
In this paper, we introduce hologram printing technology. This technology encompasses computer-generated holograms, hologram printers, duplication, and application-dependent techniques. When applied to static holograms, the medium can present static 3D objects more clearly than traditional 3D technologies such as lenticular lenses and integral photography (IP), because it is based on holography. When applied to holographic optical elements (HOEs), the HOEs are useful for many purposes, especially as large optical elements. For example, when used as a screen, a visual system consisting of the screen and a projector can present dynamic 2D or 3D objects. Since this technology digitally designs the hologram/HOE and manufactures it with a wavefront printer, it is well suited to small-lot production. As a result, it is effective for the research stage of both 2D and 3D displays, and it is also effective for the commercial stage thanks to its simple duplication method.
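As a rough illustration of the computer-generated hologram step mentioned above, the following is a minimal point-source (Fresnel) summation over a small object point cloud, interfered with an on-axis plane reference wave. Real hologram/HOE printing pipelines are considerably more involved; all parameter names here are illustrative.

    # Minimal computer-generated hologram: point-source (Fresnel) summation.
    import numpy as np

    def point_cloud_cgh(points, amplitudes, pitch, res, wavelength):
        """points     : (N, 3) object points (x, y, z) in metres, z > 0 in front of the plate
           amplitudes : (N,) point amplitudes
           pitch      : hologram pixel pitch in metres
           res        : (H, W) hologram resolution
           wavelength : in metres
           returns    : (H, W) real-valued interference (intensity) pattern"""
        H, W = res
        k = 2.0 * np.pi / wavelength
        ys = (np.arange(H) - H / 2) * pitch
        xs = (np.arange(W) - W / 2) * pitch
        X, Y = np.meshgrid(xs, ys)
        field = np.zeros((H, W), dtype=np.complex128)
        for (px, py, pz), a in zip(points, amplitudes):
            r = np.sqrt((X - px) ** 2 + (Y - py) ** 2 + pz ** 2)
            field += a * np.exp(1j * k * r) / r      # spherical wave from each object point
        reference = 1.0                               # on-axis plane reference wave
        return np.abs(field + reference) ** 2         # recordable intensity pattern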
Practical Holography XXXII: Displays, Materials, and Applications: SPIE OPTO | 2018
Lode Jorissen; Boaz Jessie Jackin; Koki Wakunami; Kenji Yamamoto; Gauthier Lafruit; Philippe Bekaert
A hologram of a scene can be digitally created from a large set of images of that scene. Since capturing such a large set of images is often infeasible, view synthesis approaches can be used to reduce the number of cameras and generate the missing views. We propose a view interpolation algorithm that creates views inside the scene, based on a sparse set of camera images. This allows objects to pop out of the holographic display. We show that our approach outperforms existing view synthesis approaches and demonstrate its applicability to holographic stereograms.
International Conference on Computer Graphics and Interactive Techniques | 2016
Lode Jorissen; Patrik Goorts; Gauthier Lafruit; Philippe Bekaert
In recent years there has been growing interest in the generation of virtual views from a limited set of input cameras. This is especially useful for applications such as Free Viewpoint Navigation and light field displays [Tanimoto 2015]. The latter often require tens to hundreds of input views, while it is usually not feasible to record with that many cameras. View interpolation algorithms often traverse a set of depths to find correspondences between the input images [Stankiewicz et al. 2013; Goorts et al. 2013]. Most algorithms choose a uniform set of depths to traverse (as shown in Figure 2(a)), but this often leads to an excessive amount of unnecessary calculations in regions where no objects are located. It also results in an increased number of mismatches, and thus inaccuracies in the generated views. These problems also occur when too large a depth range is selected. Hence, typically a depth range that tightly encloses the scene is selected manually to mitigate these errors. A depth distribution that organizes the depth layers around the objects in the scene, as shown in Figure 2(b), would reduce these errors and decrease the number of computations by reducing the number of depths to search through. [Goorts et al. 2013] determine a non-uniform global depth distribution by reusing the depth information generated at the previous time stamp, which makes the algorithm dependent on previous results.
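A minimal sketch of the contrast between a uniform depth sweep and a content-adaptive one is given below. The scene depth samples feeding the adaptive version are hypothetical (for example a sparse reconstruction or a previous depth map); this is not the cited method.

    # Uniform versus content-adaptive placement of depth layers.
    import numpy as np

    def uniform_depth_layers(z_near, z_far, n):
        return np.linspace(z_near, z_far, n)

    def content_adaptive_depth_layers(scene_depths, n):
        """Place one layer at each of n evenly spaced quantiles of the observed depths,
        so layers cluster where objects actually are and thin out in empty space."""
        q = (np.arange(n) + 0.5) / n
        return np.quantile(np.asarray(scene_depths), q)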
International Conference on 3D Imaging | 2015
Rajesh Chenchu; Lode Jorissen; Sammy Rogmans; Philippe Bekaert
In this paper, we present a system to interactively enhance the viewing experience of synchronized video streams in 3D space. Our system automatically computes the relative positions of all cameras and enables smooth transitions between videos in 3D space. We propose a new approach to transitioning between videos. Other viewpoint navigation methods demand substantial computation to estimate per-pixel depth and color information and are not suited to wide-baseline camera arrangements in real time. Our method relies on mesh-based interpolation: it computes robust 3D points for all matching feature points in subsequent camera frames and enables mesh-based view transitions from one camera to another, similar to Microsoft Photosynth, while operating on sparse feature matches. With a collection of synchronous video streams as input, our system computes the pose of each camera and the sparse geometry of the underlying scene in the subsequent camera frames during viewpoint transition.
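For context, robust 3D points of the kind mentioned above can be obtained by standard linear (DLT) triangulation of matched features from two calibrated views. This is a generic sketch, not the system's implementation.

    # Linear (DLT) triangulation of one matched feature point from two views.
    import numpy as np

    def triangulate_point(P1, P2, x1, x2):
        """P1, P2 : 3x4 camera projection matrices
           x1, x2 : matching 2D points (u, v) in the two images
           returns: 3D point (X, Y, Z) in the common world frame"""
        A = np.vstack([
            x1[0] * P1[2] - P1[0],
            x1[1] * P1[2] - P1[1],
            x2[0] * P2[2] - P2[0],
            x2[1] * P2[2] - P2[1],
        ])
        # Homogeneous solution: right singular vector with the smallest singular value.
        _, _, Vt = np.linalg.svd(A)
        X = Vt[-1]
        return X[:3] / X[3]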
Collaboration
National Institute of Information and Communications Technology