Edouard Lamboray
ETH Zurich
Publications
Featured research published by Edouard Lamboray.
International Conference on Computer Graphics and Interactive Techniques | 2003
Markus H. Gross; Stephan Würmlin; Martin Naef; Edouard Lamboray; Christian P. Spagno; Andreas Kunz; Esther Koller-Meier; Tomáš Svoboda; Luc Van Gool; Silke Lang; Kai Strehlke; Andrew Vande Moere; Oliver G. Staadt
In this paper, we report ongoing work in a new project, blue-c. The goal of this project is to build a collaborative, immersive virtual environment which will eventually integrate real humans, captured by a set of video cameras. Two blue-c portals will be interconnected via a high-speed network, allowing for bidirectional collaboration and interaction between two persons sharing virtual spaces. The video streams are used for both texture and geometry extraction. We will generate a 3D light field inlay enriched with the reconstructed geometry, which will be integrated into the virtual environment. The design and construction of the blue-c environment, including both hardware and software, is an interdisciplinary effort with participants from the departments of computer science, architecture, product development, and electrical engineering. In parallel with the development of the core system, we are designing new applications in the areas of computer-aided architectural design, product reviewing, and medicine, which will highlight the versatility of the blue-c.
Computers & Graphics | 2004
Stephan Würmlin; Edouard Lamboray; Markus H. Gross
We present 3D video fragments, a dynamic point sample framework for real-time free-viewpoint video. By generalizing 2D video pixels towards 3D irregular point samples, we combine the simplicity of conventional 2D video processing with the power of more complex polygonal representations for free-viewpoint video. We propose a differential update scheme exploiting the spatio-temporal coherence of the video streams of multiple cameras. Updates are issued by operators such as inserts and deletes, accounting for changes in the input video images. The operators from multiple cameras are processed, merged into a 3D video stream, and transmitted to a remote site. We also introduce a novel concept for camera control which dynamically selects the set of cameras relevant for reconstruction and, moreover, adapts to the processing load and rendering platform. Our framework is generic in the sense that it works with any real-time 3D reconstruction method which extracts depth from images. The video renderer displays free-viewpoint videos using an efficient point-based splatting scheme and makes use of state-of-the-art vertex and pixel processing hardware for real-time visual processing.
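To make the differential update scheme concrete, the following minimal sketch computes insert/delete operators per camera and merges them into one stream. The operator names follow the abstract; everything else (data layout, function names, attribute set) is an illustrative assumption, not the paper's actual interface.

```python
# Hypothetical sketch of a differential point-sample update stream.
# INSERT/DELETE follow the abstract; a changed sample can be expressed
# as a DELETE followed by an INSERT. All layouts are illustrative.
from dataclasses import dataclass
from enum import Enum, auto

class Op(Enum):
    INSERT = auto()
    DELETE = auto()

@dataclass
class Update:
    op: Op
    sample_id: int          # stable id of the point sample
    position: tuple = None  # (x, y, z), only for INSERT
    color: tuple = None     # (r, g, b), only for INSERT

def diff_frame(prev: dict, curr: dict) -> list:
    """Compare two frames (id -> (position, color)) from one camera and
    emit only the operators needed to turn prev into curr."""
    ops = []
    for sid in prev.keys() - curr.keys():
        ops.append(Update(Op.DELETE, sid))
    for sid in curr.keys() - prev.keys():
        pos, col = curr[sid]
        ops.append(Update(Op.INSERT, sid, pos, col))
    return ops

def merge_streams(per_camera_ops: list) -> list:
    """Merge the operator lists of multiple cameras into one 3D video stream."""
    return [op for cam_ops in per_camera_ops for op in cam_ops]
```

A remote decoder applies the same operators to its replica of the point set, so only the samples that actually changed need to cross the network.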
Pacific Conference on Computer Graphics and Applications | 2002
Stephan Würmlin; Edouard Lamboray; Oliver G. Staadt; Markus H. Gross
We present the 3D video recorder, a system capable of recording, processing, and playing three-dimensional video from multiple points of view. We first record 2D video streams from several synchronized digital video cameras and store pre-processed images to disk. An off-line processing stage converts these images into a time-varying three-dimensional hierarchical point-based data structure and stores this 3D video to disk. We show how we can trade off 3D video quality against processing performance and devise efficient compression and coding schemes for our novel 3D video representation. A typical sequence is encoded at less than 7 megabits per second at a frame rate of 8.5 frames per second. The 3D video player decodes and renders 3D videos from hard disk in real time, providing interaction features known from common video cassette recorders, such as variable-speed forward and reverse, and slow motion. 3D video playback can be enhanced with novel 3D video effects such as freeze-and-rotate and arbitrary scaling. The player builds upon point-based rendering techniques and is thus capable of rendering high-quality images in real time. Finally, we demonstrate the 3D video recorder on multiple real-life video sequences.
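The quoted figures imply a per-frame bit budget that is easy to verify; the short check below uses only the numbers from the abstract.

```python
# Back-of-the-envelope check of the bitrate quoted in the abstract.
bitrate_bps = 7_000_000   # "less than 7 megabits per second"
frame_rate = 8.5          # frames per second

bits_per_frame = bitrate_bps / frame_rate
print(f"{bits_per_frame / 8 / 1024:.0f} KiB per encoded 3D video frame")
# -> about 100 KiB available per frame for the point-based representation
```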
IEEE Virtual Reality Conference | 2003
Martin Naef; Edouard Lamboray; Oliver G. Staadt; Markus H. Gross
We present a distributed scene graph architecture for use in the blue-c, a novel collaborative immersive virtual environment. We extend the widely used OpenGL Performer toolkit to provide a distributed scene graph maintaining full synchronization down to vertex and texel level. We propose a synchronization scheme including customizable, relaxed locking mechanisms. We demonstrate the functionality of our toolkit with two prototype applications in our high-performance virtual reality and visual simulation environment.
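As a rough illustration of what a relaxed locking scheme for a replicated scene graph might look like, consider the Python sketch below. It is a language-neutral toy, not the blue-c / OpenGL Performer implementation, and the broadcast channel (`network`) is a hypothetical stub: edits apply locally and propagate immediately, conflicts resolve last-writer-wins unless a host holds an explicit lock.

```python
# Illustrative sketch of relaxed locking on a replicated scene-graph node.
# None of the names or behaviors here mirror the actual blue-c toolkit.
import threading

class ReplicatedNode:
    def __init__(self, node_id, network):
        self.node_id = node_id
        self.network = network     # hypothetical broadcast channel stub
        self.attributes = {}       # e.g. vertex and texel data
        self.lock_owner = None     # None means relaxed (optimistic) mode
        self._mutex = threading.Lock()

    def set_attribute(self, key, value, host):
        """Apply an edit locally and broadcast it. Under relaxed locking,
        concurrent writers are allowed and conflicts resolve last-writer-wins;
        holding the explicit lock makes edits exclusive."""
        with self._mutex:
            if self.lock_owner not in (None, host):
                raise PermissionError(f"{self.node_id} locked by {self.lock_owner}")
            self.attributes[key] = value
        self.network.broadcast(("set", self.node_id, key, value, host))

    def on_remote_edit(self, key, value, host):
        """Apply a remote edit; in relaxed mode, arrival order decides."""
        with self._mutex:
            if self.lock_owner in (None, host):
                self.attributes[key] = value
```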
Eurographics | 2004
Michael Waschbüsch; Markus H. Gross; Felix Eberhard; Edouard Lamboray; Stephan Würmlin
[…] decomposition of the point set and thus easily allows for progressive decoding. Our method is generic in the sense that it can handle arbitrary point attributes using attribute-specific coding operations. Furthermore, no resampling of the model is needed, and thus we do not introduce additional smoothing artifacts. We provide coding operators for the point position, normal, and color. In particular, by transforming the point positions into a local reference frame, we exploit the fact that all point samples lie on a surface. Our framework enables compressing both the geometry and the appearance of the model in a unified manner. We show the performance of our framework on a variety of point-based models.
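The local-reference-frame idea can be illustrated in a few lines of NumPy: derive an orthonormal frame from the point normal and express the prediction residual in it, so that for samples lying on a smooth surface the off-surface (normal) component stays near zero and quantizes cheaply. The helper below is a sketch under those assumptions, not the paper's encoder.

```python
# Sketch: express a point-position residual in a local frame derived from
# the surface normal. Illustrative only; not the paper's coding operator.
import numpy as np

def local_frame(normal: np.ndarray) -> np.ndarray:
    """Return a 3x3 orthonormal basis whose last row is the normal."""
    n = normal / np.linalg.norm(normal)
    helper = np.array([1.0, 0.0, 0.0]) if abs(n[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    t = np.cross(helper, n)
    t /= np.linalg.norm(t)
    b = np.cross(n, t)
    return np.stack([t, b, n])   # rows: tangent, bitangent, normal

def encode_position(p, predicted, normal):
    """Prediction residual in the local frame; for samples on a smooth
    surface the third (normal) component is near zero."""
    return local_frame(normal) @ (p - predicted)
```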
Computer Graphics Forum | 2003
Stephan Würmlin; Edouard Lamboray; Oliver G. Staadt; Markus H. Gross
We present the 3D Video Recorder, a system capable of recording, processing, and playing three-dimensional video from multiple points of view. We first record 2D video streams from several synchronized digital video cameras and store pre-processed images to disk. An off-line processing stage converts these images into a time-varying 3D hierarchical point-based data structure and stores this 3D video to disk. We show how we can trade off 3D video quality against processing performance and devise efficient compression and coding schemes for our novel 3D video representation. A typical sequence is encoded at less than 7 Mbps at a frame rate of 8.5 frames per second. The 3D video player decodes and renders 3D videos from hard disk in real time, providing interaction features known from common video cassette recorders, such as variable-speed forward and reverse, and slow motion. 3D video playback can be enhanced with novel 3D video effects such as freeze-and-rotate and arbitrary scaling. The player builds upon point-based rendering techniques and is thus capable of rendering high-quality images in real time. Finally, we demonstrate the 3D Video Recorder on multiple real-life video sequences.
IEEE Transactions on Visualization and Computer Graphics | 2005
Edouard Lamboray; Stephan Würmlin; Markus H. Gross
In this paper, we discuss data transmission in telepresence environments for collaborative virtual reality applications. We analyze data streams in the context of networked virtual environments and classify them according to their traffic characteristics. Special emphasis is put on geometry-enhanced (3D) video. We review architectures for real-time 3D video pipelines and derive theoretical bounds on the minimal system latency as a function of the transmission and processing delays. Furthermore, we discuss bandwidth issues of differential update coding for 3D video. In our telepresence system, the blue-c, we use a point-based 3D video technology which allows for differentially encoded 3D representations of human users. While we discuss the considerations which led to the design of our three-stage 3D video pipeline, we also elucidate some critical implementation details regarding the decoupling of acquisition, processing, and rendering frame rates, as well as audio/video synchronization. Finally, we demonstrate the communication and networking features of the blue-c system in its full deployment. We show how the system can be adapted to cope with processing or networking bottlenecks by tuning its multiple components, such as audio, application data, and 3D video.
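The abstract does not reproduce the derived bounds, but the general shape of such a latency analysis is simple to state: in a fully pipelined system, per-frame latency is bounded from below by the sum of the stage delays, while throughput is bounded by the slowest stage. The formulation and numbers below are hypothetical, not taken from the paper.

```python
# Illustrative latency model for a staged 3D video pipeline; the exact
# bounds derived in the paper may differ from this simple assumption.
def min_latency(stage_delays_ms):
    """End-to-end latency of one frame is at least the sum of all stage
    delays (acquisition, reconstruction, encoding, transmission, decoding,
    rendering), no matter how deeply the stages are pipelined."""
    return sum(stage_delays_ms)

def max_frame_rate(stage_delays_ms):
    """Throughput is limited by the slowest stage, not by total latency."""
    return 1000.0 / max(stage_delays_ms)

stages = [40, 30, 10, 25, 10, 15]   # hypothetical per-stage delays in ms
print(min_latency(stages), "ms minimal latency,",
      round(max_frame_rate(stages), 1), "fps ceiling")
```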
IEEE Virtual Reality Conference | 2004
Edouard Lamboray; Stephan Würmlin; Markus H. Gross
Free-viewpoint video is a promising technology for next-generation virtual and augmented reality applications. Our goal is to enhance collaborative VR applications with 3D video-conferencing features. In this paper, we propose a 3D video streaming technique which can be deployed in telepresence environments. The streaming characteristics of real-time 3D video sequences are investigated under various system and networking conditions. We introduce several encoding techniques and analyze their behavior with respect to resolution, bandwidth, and inter-frame jitter. Our 3D video pipeline uses point samples as basic primitives and is fully integrated with a communication framework handling acknowledgment information for reliable network transmissions as well as application control data. The 3D video reconstruction process dynamically adapts to processing and networking bottlenecks. Our results show that reliable transmission of our pixel-based differential prediction encoding leads to the best performance in terms of bandwidth, but is also quite sensitive to packet losses. A redundantly encoded stream achieves better results in the presence of burst losses and seamlessly adapts to varying network throughput.
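The reported trade-off suggests a simple adaptation policy of the kind such a pipeline could apply; the thresholds below are invented for illustration and do not come from the paper.

```python
# Hypothetical policy mirroring the trade-off reported in the abstract:
# differential prediction gives the best bandwidth on a reliable link but
# is sensitive to packet loss; redundant encoding tolerates burst losses.
def pick_encoding(loss_rate: float, bursty: bool) -> str:
    if loss_rate < 0.001 and not bursty:
        return "differential"   # cheapest, assumes (near-)reliable transport
    return "redundant"          # degrades gracefully under (burst) loss
```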
International Conference on Image Processing | 2004
Edouard Lamboray; Stephan Würmlin; Michael Waschbüsch; Markus H. Gross; Hanspeter Pfister
In this paper, we present a coding framework addressing image-space compression for free-viewpoint video. Our framework is based on time-varying 3D point samples which represent real-world objects. The 3D point samples are obtained through geometric reconstruction from multiple pre-recorded video sequences and thus allow for arbitrary viewpoints during playback. The encoding of the data is performed as an off-line process and is not time-critical. Decoding, however, must support real-time rendering of the dynamic 3D data. We introduce a compression framework which encodes multiple point attributes, such as depth and color, into progressive streams. The reference data structure is aligned with the original camera input images and thus enables easy view-dependent decoding. A novel differential coding approach permits random access in constant time throughout the entire data set and thus enables arbitrary viewpoint trajectories in both time and space.
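Constant-time random access of this kind is typically realized with a per-frame byte-offset index; the sketch below shows one possible layout. The sidecar index file and its little-endian uint64 format are assumptions for illustration, not the paper's actual encoding.

```python
# Sketch: O(1) random access into an encoded 3D video file via an offset
# table written at encode time. File layout is an illustrative assumption.
import struct

def load_offsets(index_path: str) -> list:
    """Read the byte offset of every frame from a sidecar index file."""
    with open(index_path, "rb") as idx:
        data = idx.read()
    return list(struct.unpack(f"<{len(data) // 8}Q", data))

def read_frame(stream, offsets, frame_idx: int) -> bytes:
    """Seek directly to any frame: cost is independent of position and
    playback direction, enabling arbitrary trajectories in time."""
    start = offsets[frame_idx]
    stream.seek(start)
    if frame_idx + 1 < len(offsets):
        return stream.read(offsets[frame_idx + 1] - start)
    return stream.read()  # last frame: read to end of file
```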
Computers & Graphics | 2003
Edouard Lamboray; Aaron Zollinger; Oliver G. Staadt; Markus H. Gross
Distributed multimedia applications typically handle two different types of communication: request/reply interaction for control information as well as real-time streaming data. The CORBA Audio/Video Streaming Service provides a promising framework for the efficient development of such applications. In this paper, we discuss the CORBA-based design and implementation of Campus TV, a distributed television studio architecture. We analyze the performance of our test application with respect to different configurations. We especially investigate interaction delays, i.e., the latencies that occur between issuing a CORBA request and receiving the first video frame corresponding to the new mode. Our analysis confirms that the interaction delay can be reasonably bounded for UDP and RTP. In order to provide results which are independent of coding schemes, we do not take into account any media-specific compression issues. Hence, our results help to make essential design decisions when developing interactive multimedia applications in general, involving, e.g., distributed synthetic image data or augmented and virtual reality.
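The measured quantity, the interaction delay, can be pictured with a generic timing harness like the one below. The `control` and `stream` objects are hypothetical stubs standing in for the request/reply and streaming endpoints; this is not the CORBA Audio/Video Streaming Service API.

```python
# Generic harness for the interaction delay studied in the paper: the time
# between issuing a control request and receiving the first video frame of
# the new mode. `control` and `stream` are hypothetical stubs, not CORBA.
import time

def interaction_delay(control, stream, new_mode) -> float:
    t0 = time.monotonic()
    control.request(new_mode)               # request/reply control call
    while stream.next_frame().mode != new_mode:
        pass                                # skip frames from the old mode
    return time.monotonic() - t0            # seconds of interaction delay
```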