Céline Loscos
University of Reims Champagne-Ardenne
Publication
Featured research published by Céline Loscos.
Proceedings of SPIE | 2012
Jennifer Bonnard; Céline Loscos; Gilles Valette; Jean-Michel Nourrit; Laurent Lucas
We propose a new methodology to acquire HDR video content for autostereoscopic displays by adapting and augmenting an eight-view video camera with standard sensors. To augment the intensity capacity of the sensors, we combine images taken at different exposures. Since the exposure has to be the same for all objectives of our camera, we fix the exposure variation by applying neutral density filters on each objective. Such an approach has two advantages: several exposures are known for each video frame, and we do not need to worry about synchronization. For each pixel of each view, an HDR value is computed by a weighted average function applied to the values of matching pixels from all views. The building of the pixel match list is simplified by the property of our camera, which has eight aligned, equally distributed objectives. At each frame, this results in an individual HDR image for each view while only one exposure per view was taken. The final eight HDR images are tone-mapped and interleaved for autostereoscopic display.
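The weighted-average merge of matched pixels from ND-filtered views can be sketched as follows. This is a minimal illustration of the standard multi-exposure HDR recipe, not the authors' implementation; the function name `hdr_merge` and the triangle weighting are assumptions.

```python
import numpy as np

def hdr_merge(values, exposures):
    """Merge matched pixel values (normalised to 0..1), one per view,
    captured at known relative exposures (the ND filter factors),
    into a single radiance estimate via a weighted average."""
    values = np.asarray(values, dtype=float)
    exposures = np.asarray(exposures, dtype=float)
    # triangle weight: trust mid-range pixels, distrust clipped ones
    w = 1.0 - np.abs(2.0 * values - 1.0)
    radiance = values / exposures          # linearise each view
    wsum = w.sum()
    if wsum == 0.0:
        return float(radiance.mean())      # all pixels clipped: fall back
    return float((w * radiance).sum() / wsum)
```

For example, matched pixels 0.8, 0.4, and 0.2 seen through filters of relative exposure 1, 0.5, and 0.25 all encode the same radiance, and the merge returns that common value.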
Proceedings of SPIE | 2012
R. Ramirez Orozco; I. Martin; Céline Loscos; P.-P. Vasquez
The limited dynamic range of digital images can be extended by composing photographs of the same scene taken with the same camera, at the same viewpoint, at different exposure times. This is a standard procedure for static scenes but a challenging task for dynamic ones. Several methods have been presented, but few recover high dynamic range within moving areas. We present a method to recover full high dynamic range (HDR) images from dynamic scenes, even in moving regions. Our method has three steps. First, areas affected by motion are detected to generate a ghost mask. Second, we register dynamic objects over a reference image (the best-exposed image in the input sequence). Third, we combine the registered input photographs to recover HDR values over the whole image using a weighted average function. Once matching is found, the assembling step guarantees that all aligned pixels contribute to the final result, including dynamic content. Tests were made on more than 20 sequences, with moving cars or pedestrians and different backgrounds. Our results show that the Image Mapping Function approach detects motion regions best, while Normalized Cross Correlation offers the best speed-accuracy trade-off for image registration. Our method gives better results when moving objects are roughly rigid and their movement is mostly rigid. The final composition is an HDR image with no ghosting and all dynamic content present in HDR values.
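The Normalized Cross Correlation registration the abstract evaluates can be sketched as a brute-force shift search; this is a generic 1-D illustration under assumed names (`ncc`, `best_shift`), kept one-dimensional for brevity, not the paper's 2-D implementation.

```python
import numpy as np

def ncc(a, b):
    """Normalised cross-correlation of two equally sized patches.
    Returns a score in [-1, 1]; values near 1 mean a good match."""
    a = np.asarray(a, dtype=float).ravel() ; a = a - a.mean()
    b = np.asarray(b, dtype=float).ravel() ; b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    if denom == 0.0:
        return 0.0                         # flat patch: undefined, score 0
    return float((a * b).sum() / denom)

def best_shift(ref, img, max_shift=2):
    """Find the integer shift s maximising NCC between ref and img,
    i.e. the hypothesis img[i] == ref[i + s] over the overlap."""
    n = len(ref)
    best_s, best_score = 0, -1.0
    for s in range(-max_shift, max_shift + 1):
        if s >= 0:
            a, b = ref[s:], img[:n - s]
        else:
            a, b = ref[:n + s], img[-s:]
        score = ncc(a, b)
        if score > best_score:
            best_s, best_score = s, score
    return best_s
```

A signal delayed by one sample is recovered as a shift of -1, since the delayed copy matches the reference one index earlier.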
eurographics | 2014
Ludovic Blache; Céline Loscos; Olivier Nocent; Laurent Lucas
4D multi-view reconstruction of moving actors has many applications in the entertainment industry and, although studios providing such services are becoming more accessible, efforts must be made to improve the underlying technology to produce high-quality 4D content. In this paper, we enable surface matching for an animated mesh sequence in order to introduce coherence in the data. The context is provided by an indoor multi-camera system which performs synchronized video captures from multiple viewpoints in a chroma-key studio. Our input is given by a volumetric silhouette-based reconstruction algorithm that generates a visual hull at each frame of the video sequence. These 3D volumetric models differ from one frame to another, in terms of structure and topology, which makes them very difficult to use in post-production and 3D animation software solutions. Our goal is to transform this input sequence of independent 3D volumes into a single dynamic volumetric structure, directly usable in post-production. These volumes are then transformed into an animated mesh. Our approach is based on a motion estimation procedure. An unsigned distance function on the volumes is used as the main shape descriptor and a 3D surface matching algorithm minimizes the interference between unrelated surface regions. Experimental results, tested on our multi-view datasets, show that our method outperforms approaches based on optical flow when considering robustness over several frames.
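The unsigned distance function used as the shape descriptor can be illustrated with a brute-force version on a tiny binary volume; the name `unsigned_distance` and the exhaustive search are assumptions for illustration, since real pipelines would use a fast distance transform.

```python
import numpy as np

def unsigned_distance(volume):
    """Unsigned distance field of a binary volume: each voxel gets its
    Euclidean distance to the nearest occupied voxel. Brute force,
    fine only for tiny grids."""
    vol = np.asarray(volume, dtype=bool)
    occupied = np.argwhere(vol)               # coordinates of filled voxels
    if occupied.size == 0:
        return np.full(vol.shape, np.inf)     # empty volume: no surface
    coords = np.argwhere(np.ones_like(vol))   # every voxel coordinate
    diff = coords[:, None, :] - occupied[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=2)).min(axis=1)
    return dist.reshape(vol.shape)
```

On a 3x3x3 grid with only the centre voxel filled, the field is 0 at the centre, 1 at face neighbours, and sqrt(3) at the corners.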
4th International Conference on 3D Body Scanning Technologies, Long Beach CA, USA, 19-20 November 2013 | 2013
Laurent Lucas; Philippe Souchet; Muhannad Ismael; Olivier Nocent; Cédric Niquin; Céline Loscos; Ludovic Blache; Stéphanie Prévost; Yannick Remion
4D multi-view reconstruction of moving actors has many applications in the entertainment industry and, although studios providing such services are becoming more accessible, efforts must be made to improve the underlying technology and produce high-quality 3D content. The RECOVER3D project aims to build an integrated virtual video system for the broadcast and motion picture markets. In particular, we present a hybrid acquisition system coupling mono and multiscopic video cameras, in which the actor's performance is captured as a 4D data set: a sequence of 3D volumes over time. The visual improvement of the software solutions being implemented relies on "silhouette-based" techniques and (multi-)stereovision, following several hybridization scenarios integrating GPU-based processing. Afterwards, we transform this sequence of independent 3D volumes into a single dynamic mesh. Our approach is based on a motion estimation procedure. An adaptive signed volume distance function is used as the principal shape descriptor, and an optical flow algorithm is adapted to the surface setting with a modification that minimizes the interference between unrelated surface regions.
The Visual Computer | 2016
Ludovic Blache; Céline Loscos; Laurent Lucas
4D multi-view reconstruction of moving actors has many applications in the entertainment industry and, although studios providing such services are becoming more accessible, efforts must be made to improve the underlying technology to produce high-quality 4D content. In this paper, we present a method to derive a time-evolving surface representation from a sequence of binary volumetric data representing an arbitrary motion in order to introduce coherence in the data. The context is provided by an indoor multi-camera system which performs synchronized video captures from multiple viewpoints in a chroma-key studio. Our input is given by a volumetric silhouette-based reconstruction algorithm that generates a visual hull at each frame of the video sequence. These 3D volumetric models lack temporal coherence, in terms of structure and topology, as each frame is generated independently. This prevents easy post-production editing with 3D animation tools. Our goal is to transform this input sequence of independent 3D volumes into a single dynamic structure, directly usable in post-production. Our approach is based on a motion estimation procedure. An unsigned distance function on the volumes is used as the main shape descriptor and a 3D surface matching algorithm minimizes the interference between unrelated surface regions. Experimental results, tested on our multi-view datasets, show that our method outperforms other approaches based on optical flow when considering robustness over several frames.
High Dynamic Range Video: From Acquisition to Display and Applications | 2016
R.R. Orozco; Céline Loscos; I. Martin; A. Artusi
The convergence between high dynamic range (HDR) images/video and stereoscopic/3D images/video is an active research field, with limitations coming from various aspects: acquisition, processing, and display. This work analyzes the latest advances in HDR video and stereo HDR video acquisition from multiple exposures in order to highlight the current progress towards a common target: 3D HDR video. The most relevant existing techniques are presented and discussed.
international symposium on visual computing | 2015
Ludovic Blache; Mathieu Desbrun; Céline Loscos; Laurent Lucas
We propose a fully automatic time-varying surface reconstruction of an actor’s performance captured from a production stage through omnidirectional video. The resulting mesh and its texture can then directly be edited in post-production. Our method makes no assumption on the costumes or accessories present in the recording. We take as input a raw sequence of volumetric static poses reconstructed from video sequences acquired in a multi-viewpoint chroma-key studio. The first frame is chosen as the reference mesh. An iterative approach is applied throughout the sequence in order to induce a deformation of the reference mesh for all input frames. At first, a pseudo-rigid transformation adjusts the pose to match the input visual hull as closely as possible. Then, local deformation is added to reconstruct fine details. We provide examples of actors’ performance inserted into virtual scenes, including dynamic interaction with the environment.
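The pseudo-rigid pose-adjustment step can be illustrated with a least-squares rigid fit between corresponding point sets. The paper does not specify its solver; the Kabsch algorithm shown here is a standard choice and the name `rigid_fit` is an assumption.

```python
import numpy as np

def rigid_fit(src, dst):
    """Least-squares rigid transform (R, t) mapping point set `src`
    onto `dst` (Kabsch algorithm): dst ~ src @ R.T + t."""
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)            # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cd - R @ cs
    return R, t
```

Applying a known rotation and translation to a non-degenerate point set and fitting it back recovers the same transform up to numerical precision.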
international conference on image processing | 2014
Muhannad Ismael; Stéphanie Prévost; Céline Loscos; Yannick Remion
This paper proposes a novel framework for multi-baseline stereovision exploiting information redundancy to deal with known problems related to occluded regions. Inputs are multiple images shot or rectified in a simplified geometry, which induces a convenient sampling scheme of the scene space: the disparity space. Instead of relying solely on image-space information like most multi-view stereovision methods, we work in this sampled scene space. We use fuzzy visibility reasoning and pixel-neighborhood similarity measures in order to optimize fuzzy 3D discrete maps of materiality, yielding precise reconstruction even in low-texture and semi-occluded regions. Our main contribution is to build on the disparity space to propose a new materiality map which locates the object surfaces within the actual scene.
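A materiality-style score over the disparity space can be sketched for a 1-D rig of aligned, evenly spaced views: a scene point at reference position x with disparity d appears at x + k*d in view k, and cross-view agreement is turned into a fuzzy score. This toy `materiality` function and its Gaussian scoring are assumptions standing in for the paper's formulation.

```python
import numpy as np

def materiality(views, d_max, sigma=0.1):
    """Fuzzy materiality score over a (disparity, position) grid for
    aligned 1-D views: view k samples position x + k*d. A score near 1
    means the views agree, i.e. a surface likely lies at that sample."""
    views = np.asarray(views, dtype=float)   # shape (n_views, width)
    n, w = views.shape
    scores = np.zeros((d_max + 1, w))
    for d in range(d_max + 1):
        for x in range(w):
            samples = [views[k, x + k * d] for k in range(n)
                       if 0 <= x + k * d < w]
            if len(samples) < 2:
                continue                     # not enough views see it
            spread = np.std(samples)         # cross-view disagreement
            scores[d, x] = np.exp(-(spread / sigma) ** 2)
    return scores
```

A single bright point rendered with disparity 2 scores exactly 1 at the correct (d, x) sample and near 0 at wrong disparities.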
WSCG 2018 - Short papers proceedings | 2018
Muhannad Ismael; Raissel Ramirez Orozco; Céline Loscos; Stephane Prevost; Yannick Remion
This paper proposes a novel framework to produce 3D, high-precision models of humans from multi-view capture. The method's inputs are a visual hull and several sets of multi-baseline views. For each such view set, a surface is reconstructed with a multi-baseline stereovision method, then used to carve the visual hull. Carved visual hulls from different view sets are then fused pairwise to deliver the intended 3D model. The contributions of this paper are threefold: (i) the addition of visual hull guidance to a multi-baseline stereovision method, (ii) a carving solution to a visual hull from an interpolated and smooth stereovision surface, and (iii) a fusion solution to merge volumes carved differently in several areas. The paper shows that the proposed approach helps recover a high-quality carved volume, a 3D representation of the human to be modelled that is precise even for small details and in concave areas subject to occlusion.
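The visual hull that seeds this pipeline can be illustrated with a toy orthographic carving: keep the voxels whose projection falls inside every silhouette. Real rigs use calibrated perspective cameras; the axis-aligned orthographic views and the name `visual_hull` here are simplifying assumptions.

```python
import numpy as np

def visual_hull(masks, n):
    """Orthographic visual hull on an n^3 voxel grid. `masks` maps an
    axis index (0, 1, or 2) to an (n, n) boolean silhouette seen along
    that axis; a voxel survives only if all silhouettes contain it."""
    hull = np.ones((n, n, n), dtype=bool)
    I, J, K = np.indices((n, n, n))
    proj = {0: (J, K), 1: (I, K), 2: (I, J)}  # drop the viewing axis
    for axis, mask in masks.items():
        u, v = proj[axis]
        hull &= np.asarray(mask, dtype=bool)[u, v]
    return hull
```

Three identical 2x2 square silhouettes carve a 4^3 grid down to the 2x2x2 cube consistent with all of them.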
conference on visual media production | 2015
Muhannad Ismael; Stéphanie Prévost; Yannick Remion; Céline Loscos; Cédric Niquin; Raissel Ramirez Orozco; Philippe Souchet
This paper proposes a novel stereovision framework for multi-view 3D reconstruction relying on two kinds of inputs: several sets of multi-baseline views and a visual hull (VH) [3]. The pipeline is illustrated in figure 1. The contributions of this paper are threefold: (i) improvement of our multi-baseline stereovision method [2] by VH guidance, (ii) carving the VH from a stereovision surface, and (iii) merging differently carved volumes. Our approach builds on a previously proposed framework [2] for multi-baseline stereovision which provides, on top of the Disparity Space (DS) introduced by [5], a materiality map expressing the probability for 3D sample points to lie on a visible surface. Our acquisition system [4] is composed of cameras scattered around the observed scene in order to build the VH, with several groups laid out as multi-scopic units dedicated to multi-baseline stereovision. Multi-scopic units are composed of aligned and evenly distributed cameras.