Zhong Zhou
Beihang University
Publication
Featured research published by Zhong Zhou.
ieee virtual reality conference | 2015
Zhong Zhou; Tao Yu; Xiaofeng Qiu; Ruigang Yang; Qinping Zhao
We propose a novel approach to generate a 4D light field in the physical world for lighting reproduction. The light field is generated by projecting lighting images onto a lens array. The lens array turns the projected images into a controlled anisotropic point light source array that can simulate the light field of a real scene. In terms of acquisition, we capture an array of light probe images from a real scene, based on which an incident light field is generated. The lens array and the projectors are geometrically and photometrically calibrated, and an efficient resampling algorithm is developed to turn the incident light field into the images projected onto the lens array. The reproduced illumination, which allows per-ray lighting control, can produce realistic lighting results on real objects, avoiding the complex process of geometric and material modeling. We demonstrate the effectiveness of our approach with a prototype setup.
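The resampling step described above, which turns the captured incident light field into per-lens projector sub-images, could be sketched roughly as follows. This is a minimal illustration under stated assumptions, not the authors' calibrated algorithm: the paraxial direction model (`-offset / focal`), the `incident_lf` callable standing in for the captured light-probe data, and all parameter names are hypothetical.

```python
import numpy as np

def resample_to_lens_array(incident_lf, lens_centers, pixel_offsets, focal):
    """Nearest-direction resampling of an incident light field into the
    image projected onto a lens array (a simplified sketch).

    incident_lf:   callable (position (2,), direction (2,)) -> RGB;
                   stands in for the captured light-probe data.
    lens_centers:  (L, 2) lens centers on the array plane.
    pixel_offsets: (P, 2) projector-pixel offsets behind each lens.
    focal:         lens focal length; a pixel at offset d emits a ray
                   of direction roughly -d / focal (paraxial model).
    Returns an (L, P, 3) projected image, one sub-image per lens.
    """
    out = np.empty((len(lens_centers), len(pixel_offsets), 3))
    for li, c in enumerate(lens_centers):
        for pi, d in enumerate(pixel_offsets):
            direction = -d / focal          # paraxial approximation
            out[li, pi] = incident_lf(c, direction)
    return out
```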
international conference on virtual reality and visualization | 2012
Jian Hu; Tao Yu; Lin Wang; Zhong Zhou; Wei Wu
This paper presents a method to represent complicated real-world illumination using HDR light probe sequences. The illumination representations proposed in this paper employ a non-uniform structure instead of a uniform light field to simulate lighting with spatial and angular variation, which proves to be more efficient and accurate. The captured illumination is divided into direct and indirect parts that are modeled separately, and both integrate easily with global illumination algorithms: the direct part is organized as a set of clusters on a virtual plane, which solves the lighting occlusion problem, while the indirect part is represented as a bounding mesh with an HDR texture. This paper demonstrates the technique by capturing real illumination for virtual scenes, and also shows a comparison with renderings using traditional image-based lighting.
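Grouping direct-light samples into clusters on a virtual plane might look, in spirit, like an intensity-weighted k-means over bright probe samples. The sketch below is a hypothetical illustration only: the function name, the plain k-means formulation, and the weighting scheme are assumptions, and the paper's actual clustering may differ.

```python
import numpy as np

def cluster_direct_light(positions, intensities, k=4, iters=20, seed=0):
    """Group bright light-probe samples on a virtual plane into k
    area-light clusters with an intensity-weighted k-means (a sketch).

    positions:   (N, 2) sample positions on the virtual plane.
    intensities: (N,)  scalar radiance of each sample.
    Returns (k, 2) cluster centers and (k,) summed cluster intensities.
    """
    rng = np.random.default_rng(seed)
    centers = positions[rng.choice(len(positions), k, replace=False)]
    for _ in range(iters):
        # assign each sample to its nearest center
        d = np.linalg.norm(positions[:, None] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            sel = labels == j
            if sel.any():
                w = intensities[sel]
                # intensity-weighted centroid update
                centers[j] = (w[:, None] * positions[sel]).sum(0) / w.sum()
    power = np.array([intensities[labels == j].sum() for j in range(k)])
    return centers, power
```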
pacific rim conference on multimedia | 2018
Ming Meng; Yi Zhou; Chong Tan; Zhong Zhou
Augmented Virtual Environment (AVE) fuses real-time video streaming with virtual scenes to provide a new capability for run-time perception of the real world. Although this technique has been developed for many years, it still suffers from problems of fusion correctness, complexity, and image distortion during fly-through. Image distortion is commonly found in an AVE system and is determined by the viewpoint of the environment. Existing work lacks an evaluation of viewpoint quality and therefore fails to optimize the fly path for AVE. In this paper, we propose a novel method of viewpoint quality evaluation (VQE) that takes texture distortion as the evaluation metric, with texture stretch and object fragmentation as the main factors of distortion. We visually compare our method with viewpoint entropy on a campus scene, demonstrating that our method is superior in reflecting the degree of distortion. Furthermore, we conduct a user study revealing that our method is well suited to demonstrating good-quality viewpoint control for AVE.
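One common way to quantify the texture stretch that the abstract names as a distortion factor is the ratio of singular values of the texture-to-screen mapping; the sketch below illustrates that general idea per triangle. It is an illustration of the generic metric, under assumed inputs, and not necessarily the authors' exact formulation.

```python
import numpy as np

def texture_stretch(tex_tri, scr_tri):
    """Per-triangle texture stretch: the larger the ratio between the
    singular values of the texture-to-screen mapping, the more
    anisotropically (and visibly) the projected texture is distorted.

    tex_tri, scr_tri: (3, 2) triangle vertices in texture / screen space.
    Returns max_sv / min_sv; 1.0 means no anisotropic stretch.
    """
    # edge matrices: columns are the two edge vectors from vertex 0
    T = np.column_stack([tex_tri[1] - tex_tri[0], tex_tri[2] - tex_tri[0]])
    S = np.column_stack([scr_tri[1] - scr_tri[0], scr_tri[2] - scr_tri[0]])
    J = S @ np.linalg.inv(T)          # 2x2 Jacobian of the mapping
    sv = np.linalg.svd(J, compute_uv=False)
    return sv[0] / sv[-1]
```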
pacific rim conference on multimedia | 2017
Chang Xing; Sichen Bai; Yi Zhou; Zhong Zhou; Wei Wu
In multiple-camera networks, the correlation among cameras provides richer information than any single camera. To make full use of this association information, we propose a novel unsupervised approach to estimate the camera topology of a multi-camera surveillance network, refined gradually from coarse to fine. First, an improved cross-correlation function is used to obtain a preliminary result; then a time-constrained feature matching model is used to reduce the error caused by the external environment and noise, which increases the accuracy of the results. Finally, we test the proposed method on several different datasets, and the results indicate that our approach performs well in recovering the camera topology and can improve the accuracy of cross-camera tracking.
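A plain (unimproved) cross-correlation step for recovering the transition delay between two cameras could look like the sketch below: bin departure events at camera A and arrival events at camera B, then take the lag that maximizes their correlation. The binning scheme and names are assumptions; the paper's improved cross-correlation function and time-constrained matching model are not reproduced here.

```python
import numpy as np

def estimate_transition_delay(departures_a, arrivals_b, horizon, bin_size=1.0):
    """Estimate the inter-camera transition delay by cross-correlating
    binned departure events (camera A) with arrival events (camera B).

    departures_a, arrivals_b: event timestamps in seconds.
    horizon: total observation time in seconds.
    Returns the lag (seconds) that maximizes the correlation;
    a positive lag means B observes targets after A.
    """
    n_bins = int(np.ceil(horizon / bin_size))
    a = np.histogram(departures_a, bins=n_bins, range=(0, horizon))[0].astype(float)
    b = np.histogram(arrivals_b, bins=n_bins, range=(0, horizon))[0].astype(float)
    # mean-subtracted full cross-correlation of arrivals vs. departures
    corr = np.correlate(b - b.mean(), a - a.mean(), mode="full")
    lags = np.arange(-n_bins + 1, n_bins)
    return lags[np.argmax(corr)] * bin_size
```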
ieee virtual reality conference | 2017
Zhong Zhou; Zhiyi Bian; Zheng Zhuo
A novel prototype of an MR (Mixed Reality) Sand Table is presented in this paper, which fuses multiple real-time video streams into a physically unified view. The main processes include geometric calibration and alignment, image blending, and the final projection. First, we propose a two-step MR alignment scheme that estimates the transform matrix between the input video streams and the sand table for coarse alignment, and deforms the input frames using moving least squares for accurate alignment. To overcome the problem of visible video borders, we perform border-adaptive image stitching with brightness diffusion to merge the overlapping areas. With the projection, the video can be mixed into the sand table in real time to provide a live physical mixed-reality model. We build a prototype to demonstrate the effectiveness of the proposed method. The design can also easily be extended to large sizes with the help of multiple projectors. The system proposed in this paper supports multi-user interaction in a broad range of applications such as surveillance, demonstration, action preview, and discussion assistance.
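The moving least squares step used for accurate alignment can be sketched in its classical affine form: each point is mapped by an affine fit to the control points, weighted by proximity. This is a generic MLS deformation sketch, assuming inverse-squared-distance weights and a simple control-point interface, not the authors' exact implementation.

```python
import numpy as np

def mls_affine_deform(points, src_ctrl, dst_ctrl, eps=1e-8):
    """Affine moving-least-squares deformation.

    points:   (N, 2) points to deform (e.g., a frame's sample grid).
    src_ctrl: (K, 2) control points in the source frame (K >= 3,
              not all collinear).
    dst_ctrl: (K, 2) positions the control points should move to.
    Returns the (N, 2) deformed points.
    """
    out = np.empty_like(points, dtype=float)
    for i, v in enumerate(points):
        d2 = np.sum((src_ctrl - v) ** 2, axis=1)
        w = 1.0 / (d2 + eps)                 # inverse-distance weights
        p_star = (w[:, None] * src_ctrl).sum(0) / w.sum()
        q_star = (w[:, None] * dst_ctrl).sum(0) / w.sum()
        p_hat = src_ctrl - p_star
        q_hat = dst_ctrl - q_star
        # weighted 2x2 normal and cross matrices of the affine fit
        M1 = (w[:, None, None] * p_hat[:, :, None] * p_hat[:, None, :]).sum(0)
        M2 = (w[:, None, None] * p_hat[:, :, None] * q_hat[:, None, :]).sum(0)
        A = np.linalg.solve(M1, M2)          # best affine map at v
        out[i] = (v - p_star) @ A + q_star
    return out
```

With identical source and target control points the map is the identity; pure translations of the control points translate every point rigidly, which is the sanity check below.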
international conference on virtual reality and visualization | 2015
Yanan Feng; Tao Yu; Wei Wu; Zhong Zhou
Compositing is one of the most critical techniques in areas such as movie production and computer graphics. When images have complex textures, existing color-based methods require exhaustive training samples to achieve plausible compositing results. In this paper, we propose a scene-adaptive color transfer model with an application to image compositing. We extract foregrounds from the source and target images. Then, we divide the source and target images into unequal bands according to luminance. After that, color transfer is conducted by dynamically adjusting the weights of luminance and chrominance. To achieve a realistic composite, we introduce an adaptive method for selecting the source compositing region and address the boundary transition with a discrete Poisson solver. The experimental results illustrate that our method achieves faithful color transfer, and our composite results appear highly realistic.
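The discrete Poisson solver used for the boundary transition can be illustrated by a simple Jacobi iteration: inside the composited region, solve for values whose discrete Laplacian matches the source's, with the target image as the boundary condition. This is a generic gradient-domain compositing sketch, not the authors' implementation; the single-channel interface and iteration count are assumptions.

```python
import numpy as np

def poisson_blend(src, dst, mask, iters=500):
    """Gradient-domain compositing via a discrete Poisson solver (Jacobi).

    src, dst: (H, W) float images.
    mask:     (H, W) bool, True inside the pasted region; the region
              must not touch the image border.
    Solves  laplacian(f) = laplacian(src)  inside the mask, with dst
    fixed outside as the boundary condition, and returns f.
    """
    f = dst.astype(float).copy()
    # discrete 4-neighbor Laplacian of the source (the guidance field)
    lap = (np.roll(src, 1, 0) + np.roll(src, -1, 0) +
           np.roll(src, 1, 1) + np.roll(src, -1, 1) - 4 * src)
    ys, xs = np.nonzero(mask)
    for _ in range(iters):
        nb = (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
              np.roll(f, 1, 1) + np.roll(f, -1, 1))
        # Jacobi update: only masked pixels change
        f[ys, xs] = (nb[ys, xs] - lap[ys, xs]) / 4.0
    return f
```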
international conference on virtual reality and visualization | 2014
Yi Zhou; Peifu Liu; Jingdi You; Zhong Zhou
Location-based panorama systems such as Google Street View let users explore places around the world through panoramic bubbles or strips. Panorama images are easy to deploy, but they provide only static views from the time of capture and cannot show ongoing change. In this paper, we present an augmented virtual environment system that combines multiple location-based panorama videos with the structural context of scenes. The raw panorama images come from several independent video cameras. A frame synchronization method for the video streams is proposed to provide temporal consistency in the panorama stitching. Our novel method augments the virtual environment by mixing it with the panorama videos. To the best of our knowledge, this is the first paper to fuse panorama videos with virtual environments. The system is demonstrated in a campus-wide area, and it enhances users' walk-through experience in the experimental environment.
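A frame synchronization step of the kind described, aligning independently captured streams on a common clock, might be sketched as nearest-timestamp frame selection over the streams' overlapping time span. The tick-based interface below is an assumption for illustration, not the paper's method.

```python
import bisect

def synchronize_streams(streams, fps=25.0):
    """Align independently captured streams on a common clock by picking,
    for each output tick, the frame with the nearest timestamp per stream.

    streams: list of sorted lists of frame timestamps (seconds).
    Returns a list of per-tick index tuples (one frame index per
    stream), covering the time span where all streams overlap.
    """
    start = max(ts[0] for ts in streams)
    end = min(ts[-1] for ts in streams)
    step = 1.0 / fps
    aligned = []
    t = start
    while t <= end:
        picks = []
        for ts in streams:
            i = bisect.bisect_left(ts, t)
            # choose the closer of the two bracketing frames
            if i > 0 and (i == len(ts) or t - ts[i - 1] <= ts[i] - t):
                i -= 1
            picks.append(i)
        aligned.append(tuple(picks))
        t += step
    return aligned
```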
Optical Engineering | 2013
Tao Yu; Zhong Zhou; Jian Hu; Qinping Zhao
Existing light field-based lighting follows representations on a plane, such as the commonly used light slab. A novel light field representation on the scene surface is proposed. The illumination is captured by a panorama camera on a translation stage with a non-uniform sampling and calibration strategy. Sampled light rays are reprojected onto the reconstructed scene to generate scene-surface light field representations for both direct light sources and indirect light. This representation is more consistent with the actual emission and reflection of light transport than existing ones. An accurate light source estimation and a practical indirect light resampling approach are presented based on the representation. The rendering results illustrate the realistic lighting effect, especially under spatially varying illumination.
Archive | 2011
Qinping Zhao; Lin Wang; Chaohui Wu; Zhong Zhou; Wei Wu
Archive | 2012
Wei Wu; Chaohui Wu; Zhong Zhou; Qinping Zhao