Wenhua Dou
National University of Defense Technology
Publication
Featured research published by Wenhua Dou.
Optics Letters | 2009
Xinzhu Sang; Frank C. Fan; C. C. Jiang; Sam Choi; Wenhua Dou; Chongxiu Yu; Daxiong Xu
A large-size and full-color three-dimensional (3D) display system without the need for special eyeglasses is demonstrated. With a specially fabricated holographic functional screen with a size of 1.8 × 1.3 m², the system including optimally designed camera-projector arrays and a video server can display the fully continuous, natural 3D scene with more than 1 m image depth in real time. We explain the operating principle and present experimental results.
Optics Express | 2014
Xin Gao; Xinzhu Sang; Xunbo Yu; Peng Wang; Xuemei Cao; Lei Sun; Binbin Yan; Jinhui Yuan; Kuiru Wang; Chongxiu Yu; Wenhua Dou
Crosstalk severely affects the viewing experience of auto-stereoscopic 3D displays based on a frontal-projection lenticular sheet; unclear stereo vision and ghost images are observed in the marginal viewing zones (MVZs). To suppress these artifacts, the aberration of the lenticular sheet combined with the frontal projectors is analyzed and the optics are redesigned. Theoretical and experimental results show that increasing the radius of curvature (ROC) or decreasing the aperture of the lenticular sheet suppresses the aberration and reduces the crosstalk. A projector array with 20 micro-projectors frontally projects 20 parallax images onto a lenticular sheet with an ROC of 10 mm and a size of 1.9 m × 1.2 m. High-quality 3D images are experimentally demonstrated in both the mid-viewing zone and the MVZs of the optimal viewing plane, and a clear 3D depth of 1.2 m is perceived. To provide an excellent 3D image and enlarge the field of view at the same time, a novel lenticular-sheet structure is presented that further reduces aberration and suppresses crosstalk.
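For intuition on why a larger radius of curvature (ROC) helps, a minimal paraxial sketch is given below: it treats each lenticule as a thin plano-convex lens with focal length f = R/(n - 1) and estimates the deflection angle of an off-axis sub-image point in the focal plane. The refractive index and the 1 mm offset are illustrative assumptions, not values or models from the paper.

import math

# Minimal paraxial sketch (not the paper's aberration model): focal length of a
# thin plano-convex lenticule, f = R / (n - 1), and the deflection angle of an
# off-axis sub-image point sitting in the focal plane.

def lenticule_focal_length(roc_mm, n=1.49):
    """Thin plano-convex lens: f = R / (n - 1); n = 1.49 is an assumed acrylic index."""
    return roc_mm / (n - 1.0)

def deflection_angle_deg(offset_mm, focal_mm):
    """Paraxial deflection of a ray from a point offset from the lens axis."""
    return math.degrees(math.atan2(offset_mm, focal_mm))

f = lenticule_focal_length(roc_mm=10.0)          # ROC of 10 mm as in the paper
print(f"focal length ~ {f:.1f} mm")
print(f"deflection of a 1 mm off-axis point ~ {deflection_angle_deg(1.0, f):.2f} deg")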
Optical Engineering | 2011
Xinzhu Sang; Frank C. Fan; Sam Choi; Chaochuang Jiang; Chongxiu Yu; Binbin Yan; Wenhua Dou
A three-dimensional (3D) display based on a holographic functional screen is theoretically and experimentally presented. By properly designing the subholograms on the holographic functional screen, the whole 3D light-field distribution can be recovered from the angular spectra recorded by a pinhole CCD camera array. The holographic functional screen is fabricated by the laser-speckle method. Experimental results show that a fully continuous, natural 3D display is achieved without the limitations of conventional holography. Further improvements could lead to applications in industry, medical operations, military affairs, architectural design, mapping, and entertainment.
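A minimal sketch of the underlying record-and-replay idea, under simplified assumptions (a flatland two-plane light field, nearest-neighbour resampling, and a user-supplied scene function): angular spectra recorded by a camera array are resampled onto screen-to-viewer rays. It is not the authors' screen design.

import numpy as np

# Toy light-field record/replay: capture L[camera, angle] with pinhole cameras
# at positions cam_x, then resample the recorded rays onto screen points scr_x
# seen from viewer positions view_x. All names and values are illustrative.

def capture(cam_x, angles, scene):
    """Record an angular spectrum L[camera, angle] from a scene(x, theta) callable."""
    return np.array([[scene(x, th) for th in angles] for x in cam_x])

def replay(L, cam_x, angles, scr_x, view_x, view_dist):
    """Nearest-neighbour resampling of recorded rays onto screen/viewer ray pairs."""
    out = np.zeros((len(scr_x), len(view_x)))
    for i, sx in enumerate(scr_x):
        for j, vx in enumerate(view_x):
            theta = np.arctan2(vx - sx, view_dist)   # ray direction: screen point -> viewer
            ci = np.abs(cam_x - sx).argmin()         # closest recorded camera position
            ai = np.abs(angles - theta).argmin()     # closest recorded ray angle
            out[i, j] = L[ci, ai]
    return out

cam_x  = np.linspace(-0.5, 0.5, 11)                  # pinhole camera positions (m)
angles = np.linspace(-0.2, 0.2, 21)                  # recorded ray angles (rad)
L = capture(cam_x, angles, scene=lambda x, th: np.sin(10 * x + 5 * th))
img = replay(L, cam_x, angles,
             scr_x=np.linspace(-0.5, 0.5, 32),
             view_x=np.linspace(-0.3, 0.3, 5), view_dist=2.0)
print(img.shape)                                     # (32, 5): screen samples x viewer positions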
Holography, Diffractive Optics, and Applications VII | 2016
Qiao Meng; Xinzhu Sang; Duo Chen; Nan Guo; Binbin Yan; Chongxiu Yu; Wenhua Dou; Liquan Xiao
Virtual viewpoint generation is one of the key technologies for three-dimensional (3D) display: it renders a new perspective of the scene from the existing viewpoints, so that the 3D scene information can be effectively recovered at different viewing angles and users can switch between views. However, when N free viewpoints are received, the traditional approach must match the viewpoints pairwise, namely C(N, 2) = N(N-1)/2 times, and matching across different baselines can introduce errors. To address the high complexity of traditional virtual viewpoint generation, a novel and rapid algorithm is presented in this paper that uses the actual light-field information rather than geometric information. Moreover, to keep the data physically meaningful, nonnegative tensor factorization (NTF) is employed. A tensor representation is introduced for virtual multilayer displays: the light field emitted by an N-layer, M-frame display is represented by a sparse set of non-zero elements restricted to a plane within an Nth-order, rank-M tensor. This representation allows the light field to be optimally decomposed into time-multiplexed, light-attenuating layers using NTF. Finally, the synthesized compressive light field of the multilayer display is used to obtain virtual viewpoints by multiple multiplications. Experimental results show that the approach not only restores the original light field with high image quality (PSNR of 25.6 dB) but also overcomes the deficiency of traditional matching, since any viewpoint can be obtained from the N free viewpoints.
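A flatland sketch of the layered decomposition follows, assuming only two attenuating layers so that the time-averaged light field reduces to a rank-M nonnegative matrix factorization L ≈ F Gᵀ (the paper's N-layer case generalizes this to an Nth-order tensor and NTF). Lee-Seung multiplicative updates are used here for brevity; the toy light field, sizes, and iteration counts are illustrative.

import numpy as np

# Two-layer, M-frame attenuation display in flatland: the perceived light field
# is the time average of products of layer patterns, i.e. L ~ F @ G.T with
# nonnegative F (front layer) and G (rear layer).

def nmf(L, M, iters=200, eps=1e-9):
    """Rank-M nonnegative factorization via Lee-Seung multiplicative updates."""
    rng = np.random.default_rng(0)
    F = rng.random((L.shape[0], M))
    G = rng.random((L.shape[1], M))
    for _ in range(iters):
        F *= (L @ G) / (F @ (G.T @ G) + eps)       # update keeps F >= 0
        G *= (L.T @ F) / (G @ (F.T @ F) + eps)     # update keeps G >= 0
    return F, G

def psnr(L, L_hat):
    mse = np.mean((L - L_hat) ** 2)
    return 10 * np.log10(L.max() ** 2 / mse)

# toy light field: 64 spatial x 16 angular samples in [0, 1]
L = np.clip(np.outer(np.linspace(0, 1, 64), np.ones(16)) +
            0.2 * np.random.default_rng(1).random((64, 16)), 0, 1)
F, G = nmf(L, M=4)
print(f"reconstruction PSNR ~ {psnr(L, F @ G.T):.1f} dB")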
Holography, Diffractive Optics, and Applications VII | 2016
Peining Hou; Xinzhu Sang; Nan Guo; Duo Chen; Binbin Yan; Kuiru Wang; Wenhua Dou; Liquan Xiao
Three-dimensional (3D) displays provide valuable tools for many fields, such as scientific experiments, education, information transmission, medical imaging, and physical simulation. A ground-based 360° 3D display with a dynamic, controllable scene finds special applications such as building design and construction, aeronautics, and military sand tables, and it can be used to evaluate and visualize dynamic battlefield scenes, surgical operations, and 3D artworks. To achieve the ground-based 3D display, the common focus plane should be parallel to the cameras' imaging planes, and the optical axes should be offset toward the center of the common focus plane in both the vertical and horizontal directions. Virtual cameras are used to render the dynamic 3D scene with the Unity 3D engine. The parameters of the virtual cameras for capturing the scene are designed and analyzed, and the camera locations are determined by the observer's eye positions in the viewing space. An interactive dynamic 3D scene for the ground-based 360° 3D display is demonstrated, which provides high-immersion 3D visualization.
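One common way to realize the described geometry (image planes parallel to the common focus plane, with offset optical axes) is an off-axis, asymmetric-frustum projection; a sketch follows. The function and its parameter values are illustrative assumptions, not taken from the paper.

# Off-axis frustum for a virtual camera: the common focus plane is a rectangle
# of size (width, height) centred at the origin at z = 0, the eye sits at
# (ex, ey, ez) with ez > 0, and `near` is the near-clip distance.

def off_axis_frustum(ex, ey, ez, width, height, near):
    """Return (left, right, bottom, top) on the near plane for an off-axis camera."""
    scale = near / ez                      # similar triangles: near plane vs focus plane
    left   = (-width / 2.0 - ex) * scale
    right  = ( width / 2.0 - ex) * scale
    bottom = (-height / 2.0 - ey) * scale
    top    = ( height / 2.0 - ey) * scale
    return left, right, bottom, top

# Example: viewer 2 m in front of a 1.2 m x 0.8 m focus plane, 0.3 m to the right.
print(off_axis_frustum(ex=0.3, ey=0.0, ez=2.0, width=1.2, height=0.8, near=0.1))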
Holography, Diffractive Optics, and Applications VII | 2016
Jiwei Ning; Xinzhu Sang; Shujun Xing; Huilong Cui; Binbin Yan; Chongxiu Yu; Wenhua Dou; Liquan Xiao
Combat training is very important for the army, and simulation of the real battlefield environment is of great significance; two-dimensional information can no longer meet the demand. With the development of virtual-reality technology, three-dimensional (3D) simulation of the battlefield environment has become possible. In such a simulation, in addition to the terrain, combat personnel, and combat equipment, the simulation of explosions, fire, smoke, and other effects is also very important, since these effects enhance the sense of realism and immersion of the 3D scene. However, these special effects are irregular objects that are difficult to model with standard geometry, so the simulation of irregular objects remains a challenging research topic in computer graphics. Here, a particle-system algorithm is used to simulate the irregular objects. Explosion, fire, and smoke effects are designed with the particle system and applied to the 3D battlefield scene. In addition, the battlefield scene is presented on a glasses-free 3D display with a GPU-based 4K super-multiview real-time 3D video transformation algorithm. Together with human-computer interaction, a more realistic and immersive simulated 3D battlefield environment is ultimately realized on the glasses-free 3D display.
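A minimal CPU particle-system sketch of the emit/update/cull cycle that such effects are built on (the paper's effects run inside the 3D engine); the emission rates, lifetimes, and velocity ranges are illustrative assumptions.

import random

class Particle:
    def __init__(self, pos, vel, life):
        self.pos, self.vel, self.life = list(pos), list(vel), life

class Emitter:
    GRAVITY = (0.0, -9.8, 0.0)

    def __init__(self, origin, rate=50):
        self.origin, self.rate, self.particles = origin, rate, []

    def emit(self):
        """Spawn `rate` particles with randomized velocity and lifetime."""
        for _ in range(self.rate):
            vel = [random.uniform(-1, 1), random.uniform(2, 5), random.uniform(-1, 1)]
            self.particles.append(Particle(self.origin, vel, life=random.uniform(0.5, 2.0)))

    def update(self, dt):
        """Integrate velocity and position, then cull expired particles."""
        for p in self.particles:
            p.life -= dt
            for i in range(3):
                p.vel[i] += self.GRAVITY[i] * dt
                p.pos[i] += p.vel[i] * dt
        self.particles = [p for p in self.particles if p.life > 0]

e = Emitter(origin=(0.0, 0.0, 0.0))
for _ in range(60):                                # simulate one second at 60 steps/s
    e.emit()
    e.update(1.0 / 60.0)
print(f"{len(e.particles)} particles alive after 1 s")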
Holography, Diffractive Optics, and Applications VII | 2016
Xiaoming Hu; Xinzhu Sang; Shujun Xing; Binbin Yan; Kuiru Wang; Wenhua Dou; Liquan Xiao
Ray tracing is one of the research hotspots in photorealistic graphics and an important light-and-shadow technology in many industries that work with three-dimensional (3D) structures, such as aerospace, gaming, and video. Unlike traditional per-pixel shading based on ray tracing, a novel ray-tracing algorithm is presented that colors and renders the vertices of the 3D model directly, so the rendering quality depends on the degree of subdivision of the model. A good light-and-shade effect is achieved by using a quad-tree data structure to adaptively subdivide a triangle according to the brightness difference of its vertices. A uniform-grid acceleration structure is adopted to improve rendering efficiency, and the rendering time is independent of the screen resolution. In theory, as long as the subdivision of a model is fine enough, effects comparable to per-pixel shading are obtained. In practical applications, a compromise between efficiency and quality can be chosen.
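A sketch of the brightness-driven 1-to-4 triangle refinement described above, with shade(p) standing in for a per-vertex ray-traced shading call; the split rule, threshold, and recursion limit are illustrative assumptions.

def midpoint(a, b):
    return tuple((x + y) / 2.0 for x, y in zip(a, b))

def subdivide(tri, shade, threshold, depth, max_depth=6):
    """Refine a triangle while its vertex brightness contrast exceeds `threshold`."""
    a, b, c = tri
    la, lb, lc = shade(a), shade(b), shade(c)
    if depth >= max_depth or max(la, lb, lc) - min(la, lb, lc) < threshold:
        return [tri]                               # flat enough: keep as a leaf
    ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
    children = [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]
    out = []
    for child in children:                         # recurse on the four sub-triangles
        out.extend(subdivide(child, shade, threshold, depth + 1, max_depth))
    return out

# toy example: brightness falls off with distance from the origin
tris = subdivide(((0, 0), (1, 0), (0, 1)),
                 shade=lambda p: 1.0 / (1.0 + p[0] ** 2 + p[1] ** 2),
                 threshold=0.05, depth=0)
print(f"{len(tris)} leaf triangles")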
Holography, Diffractive Optics, and Applications VII | 2016
Can Cui; Xinzhu Sang; Peng Wang; Duo Chen; Nan Guo; Binbin Yan; Kuiru Wang; Wenhua Dou; Liquan Xiao
Generally, each kind of auto-stereoscopic display has a depth-of-field (DOF) constraint owing to its limited angular resolution, which restricts the displayable depth; device-specific blurring occurs if the depth of an object exceeds the DOF boundary. A novel depth-perception-preserving three-dimensional (3D) content remapping method is presented to meet the DOF constraint of a target 3D display, using a nonlinear global operation followed by local depth-contrast recovery, since apparent depth is dominated by the distribution of depth contrast rather than by absolute depth values. The framework has two steps. First, a nonlinear operation remaps the reference depth map of the image to fit within the DOF limitation. Second, the depth contrast is recovered by decomposing the reference and remapped depth maps into multiple frequency bands, computing the difference in each band, and adding the scaled differences of the top-level frequency bands back to the remapped depth map. A warping-based view-synthesis method is then adopted to retarget the light field according to the modified depth map. Experimental results show that the modified light field is sharp while the original perception of depth is maximally preserved.
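A two-band sketch of the global-remap-plus-contrast-recovery idea, assuming depth values in [0, 1], a tanh compression curve, and a box-filter band split; these operators and gains are illustrative choices, not the authors' exact ones.

import numpy as np

def box_blur(d, k=5):
    """Separable box blur used here as a cheap low-pass band split."""
    kernel = np.ones(k) / k
    d = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, d)
    return np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="same"), 0, d)

def remap_depth(depth, dof=0.4, detail_gain=1.0):
    """Compress the base depth band into the DOF budget, then add back detail."""
    base = box_blur(depth)
    detail = depth - base                                      # high-frequency depth contrast
    compressed = dof * np.tanh(2.0 * (base - 0.5)) / np.tanh(1.0) + 0.5
    return np.clip(compressed + detail_gain * detail, 0.0, 1.0)

depth = np.random.default_rng(0).random((64, 64))
remapped = remap_depth(depth)
print(remapped.min(), remapped.max())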
Holography, Diffractive Optics, and Applications VII | 2016
Bin Yang; Xinzhu Sang; Shujun Xing; Huilong Cui; Binbin Yan; Chongxiu Yu; Wenhua Dou; Liquan Xiao
The A-Star (A*) algorithm is a heuristic directed-search algorithm that evaluates the cost of moving along a particular path in the search space and finds the shortest path. Here, path planning between any two points on the map is carried out. The STAGE tool is used to manually add waypoints on the map and determine their spatial locations; adjacent waypoints, each with a waypoint ID, are connected by line segments to form the navigation graph. The A* algorithm searches the navigation graph to find the shortest path from a starting point to the destination. The search can also be restarted from a given point, so a complex path can be computed across several frames. Since the navigation graph covers only the traversable space, the obstacles formed by static objects in the scene are already taken into account, and collision detection between the character and static objects is not needed. A*-based path planning is experimentally demonstrated on glasses-free three-dimensional display equipment, so that the 3D effect of path finding can be perceived.
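A compact A* sketch over a waypoint navigation graph of the kind described above, using straight-line distance as the heuristic; the graph, positions, and IDs are toy data.

import heapq, math

def a_star(graph, pos, start, goal):
    """Shortest path on a waypoint graph; `graph` maps ID -> neighbour IDs, `pos` maps ID -> (x, y)."""
    def h(n):
        return math.dist(pos[n], pos[goal])        # admissible Euclidean heuristic
    open_set = [(h(start), start)]
    g = {start: 0.0}
    came_from = {}
    while open_set:
        _, node = heapq.heappop(open_set)
        if node == goal:                           # reconstruct the shortest path
            path = [node]
            while node in came_from:
                node = came_from[node]
                path.append(node)
            return path[::-1]
        for nb in graph[node]:
            tentative = g[node] + math.dist(pos[node], pos[nb])
            if tentative < g.get(nb, float("inf")):
                g[nb], came_from[nb] = tentative, node
                heapq.heappush(open_set, (tentative + h(nb), nb))
    return None

graph = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}
pos = {0: (0, 0), 1: (1, 0), 2: (0, 3), 3: (1, 2)}
print(a_star(graph, pos, 0, 3))                    # expected: [0, 1, 3]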
Holography, Diffractive Optics, and Applications VII | 2016
Huilong Cui; Xinzhu Sang; Shujun Xing; Jiwei Ning; Binbin Yan; Wenhua Dou; Liquan Xiao
High-speed synchronized rendering of multi-view video for an 8K×4K multi-LCD-spliced three-dimensional (3D) display system based on CUDA is demonstrated. Because conventional image-processing methods are no longer adequate for this 3D display system, CUDA is used for the 3D image processing to address the problem of low efficiency. The 8K×4K screen is composed of four LCD panels, so the scene is accurately segmented to ensure that the 3D content is displayed correctly, and the controlling and host software are implemented so that all of the connected processors render the 3D video simultaneously. Based on a master-slave synchronized communication mode and a DIBR-CUDA accelerated algorithm, the system achieves high-resolution, high-frame-rate, large-size, and wide-viewing-angle video rendering for real-time 3D display. Experimental results show a stable frame rate of 30 frames per second and a friendly interactive interface.
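A minimal CPU sketch of the forward-warping step inside DIBR (depth-image-based rendering); the real system runs an equivalent per-view kernel on the GPU with CUDA. The disparity model, a horizontal shift proportional to normalized depth, and the z-buffer occlusion rule are simplified assumptions for illustration.

import numpy as np

def dibr_warp(color, depth, baseline_px, hole_value=0):
    """Shift each pixel horizontally by baseline_px * depth to synthesise a new view."""
    h, w = depth.shape
    out = np.full_like(color, hole_value)
    z_buffer = np.full((h, w), -np.inf)
    for y in range(h):
        for x in range(w):
            xn = int(round(x + baseline_px * depth[y, x]))     # target column in the virtual view
            if 0 <= xn < w and depth[y, x] > z_buffer[y, xn]:  # larger depth (disparity-like) wins
                z_buffer[y, xn] = depth[y, x]
                out[y, xn] = color[y, x]
    return out                                                 # disoccluded holes keep hole_value

rng = np.random.default_rng(0)
color = rng.integers(0, 255, (4, 8), dtype=np.uint8)           # toy grayscale image
depth = rng.random((4, 8))                                     # normalized depth in [0, 1]
print(dibr_warp(color, depth, baseline_px=2))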