Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Duo Chen is active.

Publication


Featured research published by Duo Chen.


Optics Express | 2015

Large viewing angle three-dimensional display with smooth motion parallax and accurate depth cues

Xunbo Yu; Xinzhu Sang; Xin Gao; Zhidong Chen; Duo Chen; Wei Duan; Binbin Yan; Chongxiu Yu; Daxiong Xu

A three-dimensional (3D) display with smooth motion parallax and a large viewing angle is demonstrated, based on a microlens array and a coded two-dimensional (2D) image on a 50-inch liquid crystal display (LCD) panel with a resolution of 3840 × 2160. Combined with accurate expression of depth cues, the image flipping of traditional integral imaging (II) is eliminated and smooth motion parallax is achieved. The image on the LCD panel is coded as an elemental image tiled repeatedly, and the depth cue is determined by the repetition period of the elemental image. To construct a 3D image with a complex depth structure, a spatially varying elemental-image period is required. The detailed principle and coding method are presented: the shape and the texture of a target 3D image are designed by a structure image and an elemental image, respectively. In the experiment, two groups of structure images and their corresponding elemental images are used to construct a 3D scene with a football in a green net. The constructed 3D image exhibits clearly enhanced 3D perception and smooth motion parallax. The viewing angle is 60°, much larger than that of traditional II.
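The dependence of reconstructed depth on the repetition period can be illustrated with a simple geometric sketch. The relation below is an assumption derived from ray geometry for a generic lens array (pitch p, panel-to-lens gap g, repetition period T), not the paper's own coding formula: identical pixels under adjacent lenses project rays that intersect at depth z = g·p/(T − p) in front of the array.

```python
# Illustrative geometric model (an assumption, not the paper's method):
# rays from identical pixels under adjacent lenses of pitch `pitch`,
# launched from a panel at distance `gap` behind the lens array, intersect
# at depth z = gap * pitch / (period - pitch) when the elemental image is
# repeated with period `period` > `pitch`.

def depth_from_period(pitch: float, period: float, gap: float) -> float:
    """Depth of the reconstructed image plane in front of the lens array."""
    if period <= pitch:
        raise ValueError("period must exceed the lens pitch")
    return gap * pitch / (period - pitch)

# Example with made-up values: 1 mm pitch, 1.1 mm period, 3 mm gap
# gives a plane roughly 30 mm in front of the array.
print(depth_from_period(1.0, 1.1, 3.0))
```

Varying the period across the coded image, as the abstract describes, would then place different parts of the scene at different depths.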


Applied Optics | 2015

Automatic parameter estimation based on the degree of texture overlapping in accurate cost-aggregation stereo matching for three-dimensional video display

Nan Guo; Xinzhu Sang; Duo Chen; Peng Wang; Songlin Xie; Xunbo Yu; Binbin Yan; Chongxiu Yu

Stereo matching plays a significant role in three-dimensional (3D) display applications. Estimating the regularization parameter, which strikes a balance between spatial distance and color difference, is essential for successfully solving the ill-posed image-matching problem. Based on a cost-filtering algorithm, a degree of texture overlapping is designed to simultaneously estimate the optimal regularization parameter and achieve accurate matching results. Experimental results demonstrate that the proposed model estimates the smoothing parameter well, with accuracy comparable to that of methods relying on manual adjustment. The application of the presented stereo-matching method to a 32-view 3D display is demonstrated.
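The balance between spatial distance and color difference can be seen in a bilateral-style support weight, as used in adaptive support-weight cost aggregation (Yoon & Kweon, 2006). This is a generic sketch, not the paper's exact model; the parameter `gamma_c` below plays the role of the smoothing parameter that the paper estimates automatically.

```python
import math

# Bilateral-style support weight for cost aggregation (illustrative only).
# `gamma_s` controls how fast weight decays with spatial distance,
# `gamma_c` with color difference; their ratio is the balance the
# regularization parameter must strike. Values are hypothetical.

def support_weight(spatial_dist, color_diff, gamma_s=17.0, gamma_c=10.0):
    """Weight of a support pixel relative to the window center."""
    return math.exp(-(spatial_dist / gamma_s + color_diff / gamma_c))

# A co-located, identically colored pixel gets full weight 1.0; the weight
# decays as either the spatial distance or the color difference grows.
print(support_weight(0.0, 0.0))
print(support_weight(5.0, 30.0) < support_weight(5.0, 3.0))
```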


International Symposium on Photoelectronic Detection and Imaging 2013: Optical Storage and Display Technology | 2013

Reconstruction of 3D scenes from sequences of images

Bei Niu; Xinzhu Sang; Duo Chen; Yuanfa Cai

Reconstruction of three-dimensional (3D) scenes is an active research topic in computer vision and 3D display, and modeling 3D objects rapidly and effectively remains a challenge. A 3D model can be extracted from multiple images: the system only requires a sequence of images taken by a camera with unknown parameters, which provides a high degree of flexibility. We focus on quickly merging point clouds of the object from depth map sequences. The system combines algorithms from different areas of computer vision, such as camera calibration, stereo correspondence, point cloud registration and surface reconstruction, and the 3D reconstruction procedure is decomposed into successive steps. First, image sequences are captured by a camera moving freely around the object. Second, pairwise image matching is realized with the Scale Invariant Feature Transform (SIFT) algorithm: an initial matching is made for the first two images of the sequence, and for each subsequent image, processed against the previous one, the points of interest corresponding to those in previous images are refined or corrected, and the vertical parallax between the images is eliminated. The next step is camera calibration, in which the intrinsic and extrinsic parameters are calculated, yielding the relative position and orientation of the camera. A sequence of depth maps is then acquired with a non-local cost-aggregation method for stereo matching, and a point cloud sequence is obtained from the scene depths and merged into a point cloud model using the extrinsic camera parameters. The point cloud model is then approximated by a triangular wire-frame mesh to reduce geometric complexity and to tailor the model to the requirements of computer graphics visualization systems. Finally, the texture is mapped onto the wire-frame model, which can also be used for 3D display. According to the experimental results, we can reconstruct a 3D point cloud model more quickly and efficiently than other methods.
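One elementary step in the pipeline above can be made concrete: once stereo matching yields a disparity map for a rectified, calibrated pair, each disparity converts to metric depth by triangulation, Z = f·B/d. The camera values below are hypothetical; the paper does not report its parameters.

```python
# Standard disparity-to-depth conversion for a rectified stereo pair
# (illustrative values, not the paper's calibration results).

def disparity_to_depth(disparity_px, focal_px, baseline_m):
    """Depth Z = f * B / d: focal length in pixels, baseline in metres."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# 700 px focal length, 10 cm baseline, 35 px disparity -> 2.0 m depth.
print(disparity_to_depth(35.0, 700.0, 0.10))
```

Applying this per pixel to a disparity map produces the depth map from which the point cloud sequence is built.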


International Conference on Multimedia and Expo | 2017

A three-to-dense view conversion system based on adaptive joint view optimization

Nan Guo; Xinzhu Sang; Songlin Xie; Peng Wang; Duo Chen; Chongxiu Yu

Dense views are urgently needed for autostereoscopic three-dimensional (3D) displays. A dense view synthesis system with three input views captured by a three-camera rig is presented. In the system, an image preprocessing step is applied before rendering dense views, in which both color and depth images are optimized by the proposed adaptive joint view optimization algorithm, so that visual artifacts in the generated views are fundamentally avoided. With preprocessing instead of post-processing on each virtual image, the computing time is noticeably reduced, especially when the number of output views is large. Experimental results show that the generated dense views are well presented on the autostereoscopic display, and both the quality of the virtual images and the continuity among viewpoints are improved.
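The rendering stage such a system relies on is depth-image-based rendering: pixels of a reference view are shifted horizontally in proportion to their disparity to form a virtual view, and unfilled positions become holes. The 1-D forward warp below is a generic illustration of that step (and of the artifacts the paper's preprocessing aims to avoid), not the paper's optimization itself.

```python
# Minimal 1-D forward warp for view synthesis (illustrative only).
# `alpha` is the fractional position of the virtual view: 0 = reference
# view, 1 = full disparity shift. Unassigned positions remain None (holes).

def forward_warp_row(colors, disparities, alpha):
    out = [None] * len(colors)
    for x, (c, d) in enumerate(zip(colors, disparities)):
        xv = x + round(alpha * d)   # shift by the scaled disparity
        if 0 <= xv < len(out):
            out[xv] = c
    return out

row = ['a', 'b', 'c', 'd']
disp = [0, 0, 2, 2]          # the right half is closer, so it shifts more
print(forward_warp_row(row, disp, 0.5))
```

Running the example leaves a hole where the foreground moved away, which is exactly the kind of artifact that view optimization before rendering is meant to suppress.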


Optics Express | 2016

Performance improvement of compressive light field display with the viewing-position-dependent weight distribution

Duo Chen; Xinzhu Sang; Xunbo Yu; Xia Zeng; Songlin Xie; Nan Guo

Compressive light field displays with multilayer and multiframe decompositions can provide three-dimensional (3D) scenes with high spatial-angular resolution and without periodically repeating view zones. However, there are still limitations on display performance, such as poor image quality and a limited field of view (FOV). A compressive light field display with a viewing-position-dependent weight distribution is presented: when the relevant views are given high weights in the optimization, the display performance at the viewing position is noticeably improved. Simulation and experimental results demonstrate the effectiveness of the proposed method. The peak signal-to-noise ratio (PSNR) is improved by 7 dB for the compressive light field display with a narrow FOV. The angle for a wide FOV can be extended to 70° × 60°, and multiple viewers are supported.
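The effect of a viewing-position-dependent weight distribution can be seen in a toy weighted least-squares fit. This is a deliberate simplification of the paper's multilayer factorization: when several views constrain a shared unknown inconsistently, raising the weight of the views near the tracked position pulls the solution toward reproducing those views well.

```python
# Toy weighted least-squares over a single scalar unknown x (illustrative
# only; the real display optimization factorizes full layer images).
# Minimizes sum_i w[i] * (a[i]*x - b[i])**2, which has the closed form
# x = sum(w*a*b) / sum(w*a*a).

def weighted_fit(a, b, w):
    num = sum(wi * ai * bi for wi, ai, bi in zip(w, a, b))
    den = sum(wi * ai * ai for wi, ai in zip(w, a))
    return num / den

a = [1.0, 1.0]          # two "views" observing the same unknown
b = [10.0, 20.0]        # inconsistent targets
print(weighted_fit(a, b, [1.0, 1.0]))   # equal weights: compromise at 15
print(weighted_fit(a, b, [9.0, 1.0]))   # heavily favor view 1: pulled to 11
```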


International Symposium on Photoelectronic Detection and Imaging 2013: Optical Storage and Display Technology | 2013

A real-time autostereoscopic display method based on partial sub-pixel by general GPU processing

Duo Chen; Xinzhu Sang; Yuanfa Cai

With the progress of 3D technology, huge computing capacity is required for real-time autostereoscopic display. Because sub-pixel allocation is complicated, masks providing arranged sub-pixels are precomputed to reduce real-time computation. However, a binary mask has inherent drawbacks, so weighted masks based on partial sub-pixels are used for display instead. The corresponding computation then grows tremendously and becomes unbearable for the CPU. To improve the calculation speed, Graphics Processing Unit (GPU) processing with parallel computing ability is adopted. The principle of the partial sub-pixel is presented, and the texture array of Direct3D 10 is used to increase the number of computable textures. When dealing with an HD display and multiple viewpoints, a low-end GPU still permits fluent real-time display, while the performance of even a high-end CPU is not acceptable. Meanwhile, with the texture array, the performance under D3D10 can be double, and sometimes triple, that under D3D9. The proposed method has several distinguishing features, such as good portability, low overhead and good stability. The GPU display system could also be used for future Ultra HD autostereoscopic displays.
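The difference between a binary mask and a weighted (partial sub-pixel) mask reduces to a per-sub-pixel blend. The sketch below is an illustration with made-up weights, not the paper's mask design; on the GPU, this blend is what a pixel shader evaluates in parallel for every sub-pixel.

```python
# Partial sub-pixel blending (illustrative): instead of assigning each
# sub-pixel to exactly one view (binary mask), a weighted mask mixes
# several views per sub-pixel. Weights are hypothetical and sum to 1.

def blend_subpixel(view_values, weights):
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(v * w for v, w in zip(view_values, weights))

# A sub-pixel lying 30%/70% between two viewpoint columns blends to ~170
# from view intensities 100 and 200.
print(blend_subpixel([100.0, 200.0], [0.3, 0.7]))
```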


Optics Express | 2017

Image quality improvement of multi-projection 3D display through tone mapping based optimization

Peng Wang; Xinzhu Sang; Yanhong Zhu; Songlin Xie; Duo Chen; Nan Guo; Chongxiu Yu

An optical 3D screen usually exhibits a certain diffuse reflectivity or diffuse transmission, and a multi-projection 3D display suffers from reduced local contrast due to crosstalk among the projected contents. A tone-mapping-based optimization method is proposed to suppress the crosstalk and improve the display contrast by minimizing the visible contrast distortions between the displayed light field and a target one with enhanced contrast. The contrast distortions are weighted according to the visibility predicted by a model of the human visual system, and the distortions are minimized for the given multi-projection 3D display model, which enforces constraints on the solution. The proposed method can adjust parallax images or parallax video content for optimum 3D display image quality, taking into account the display characteristics and the ambient illumination. The validity of the method is evaluated and proved in experiments.
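Why crosstalk lowers local contrast, and why inverting the display model restores it, can be seen in a two-channel toy model. This is a simplification I am introducing for illustration, not the paper's full multi-projection model or its tone-mapping optimization: each observed channel leaks a fraction c of the other channel's content, and compensation inverts the 2×2 mixing.

```python
# Two-channel crosstalk toy model (illustrative assumption, not the
# paper's display model). A fraction `c` of each channel leaks into the
# other, compressing the contrast between them.

def add_crosstalk(i1, i2, c):
    """Observed intensities after symmetric leakage."""
    return i1 + c * i2, i2 + c * i1

def compensate(o1, o2, c):
    """Invert the mixing matrix M = [[1, c], [c, 1]] to recover the inputs."""
    det = 1.0 - c * c
    return (o1 - c * o2) / det, (o2 - c * o1) / det

i1, i2, c = 80.0, 20.0, 0.1
o1, o2 = add_crosstalk(i1, i2, c)
print(o1, o2)                  # leaked observation: 82, 28 (less contrast)
print(compensate(o1, o2, c))   # recovers the original 80, 20
```

The tone-mapping formulation in the paper is more general, since it also respects the valid intensity range and the visibility of distortions, which a plain matrix inversion ignores.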


AOPC 2017: Optical Storage and Display Technology | 2017

Improved depth estimation with the light field camera

Huachun Wang; Xinzhu Sang; Duo Chen; Xunbo Yu; Nan Guo; Peng Wang; Binbin Yan; Kuiru Wang; Chongxiu Yu

Light field cameras are used in consumer and industrial applications. An array of micro-lenses captures enough information that one can refocus images after acquisition, as well as shift one's viewpoint within the sub-apertures of the main lens, effectively obtaining multiple views. Thus, depth estimation from both defocus and correspondence is available in a single capture. Lytro, Inc. also provides depth estimation from a single-shot capture with its light field cameras, such as the Lytro Illum; this Lytro depth map contains much correct depth information and can be used for higher-quality estimation. In this paper, we present a novel, simple and principled algorithm that computes dense depth estimation by combining the defocus, correspondence and Lytro depth estimates. We analyze the 2D epipolar image (EPI) to obtain defocus and correspondence depth maps: defocus depth is obtained by computing the spatial gradient after angular integration, and correspondence depth by computing the angular variance of the EPIs. The Lytro depth can be extracted from the Lytro Illum with software. We then show how to combine the three cues into a high-quality depth map. Our method for depth estimation is suitable for computer vision applications such as matting, full control of depth of field, and surface reconstruction, as well as light field display.
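The correspondence cue can be demonstrated on a toy EPI. On an epipolar image, a scene point traces a line whose slope encodes its depth; shearing the EPI by a candidate slope and measuring the variance across the angular axis gives a cost that is minimal at the true slope. The synthetic data and integer slope candidates below are my own simplification; the paper's estimator is far more refined.

```python
# Toy correspondence-cue depth estimation on a synthetic 2-D EPI
# (angular axis u, spatial axis x). Illustrative simplification only.

def angular_variance_cost(epi, slope, u0, xs):
    """Sum over sample columns of the variance across the angular axis
    after shearing the EPI by `slope` about the central view `u0`."""
    cost = 0.0
    for x in xs:
        samples = [epi[u][x + slope * (u - u0)] for u in range(len(epi))]
        mean = sum(samples) / len(samples)
        cost += sum((s - mean) ** 2 for s in samples)
    return cost

# Synthetic EPI: a step edge sliding with true slope 1 across 5 views.
U, X, u0, true_slope = 5, 11, 2, 1
epi = [[1.0 if x >= 5 + (u - u0) * true_slope else 0.0 for x in range(X)]
       for u in range(U)]

# Evaluate candidate slopes near the edge; the true slope zeroes the cost.
costs = {s: angular_variance_cost(epi, s, u0, xs=range(4, 7))
         for s in (-2, -1, 0, 1, 2)}
best = min(costs, key=costs.get)
print(best)
```

At the correct shear the refocused columns are constant across views, so the angular variance vanishes; any other slope mixes foreground and background values and the cost rises.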


AOPC 2017: Optical Storage and Display Technology | 2017

Augmented reality glass-free three-dimensional display with the stereo camera

Bo Pang; Xinzhu Sang; Shujun Xing; Kuiru Wang; Chongxiu Yu; Duo Chen; Xunbo Yu; Binbin Yan

An improved method for augmented reality (AR) glass-free three-dimensional (3D) display based on a stereo camera is proposed, which presents parallax content from different angles through a lenticular lens array. Compared with previous implementations of AR techniques based on a two-dimensional (2D) panel display with only one viewpoint, the proposed method realizes glass-free 3D display of virtual objects and the real scene with 32 virtual viewpoints. Accordingly, viewers obtain abundant 3D stereo information from different viewing angles based on binocular parallax. Experimental results show that the improved method based on the stereo camera realizes AR glass-free 3D display, and both the virtual objects and the real scene exhibit realistic and obvious stereo performance.


Optical Design and Testing VII | 2016

An interactive VR system based on full-body tracking and gesture recognition

Xia Zeng; Xinzhu Sang; Duo Chen; Peng Wang; Nan Guo; Binbin Yan; Kuiru Wang

Most current virtual reality (VR) interactions rely on hand-held input devices, which leads to a low degree of presence. Other solutions use sensors such as Leap Motion to recognize users' gestures for more natural interaction, but navigation in these systems remains a problem, because they fail to map actual walking to virtual walking, with only part of the user's body represented in the synthetic environment. Therefore, we propose a system in which users can walk around in the virtual environment as a humanoid model, selecting menu items and manipulating virtual objects with natural hand gestures. With a Kinect depth camera, the system tracks the joints of the user and maps them to a full virtual body that follows the movement of the tracked user. The movements of the feet are detected to determine whether the user is in a walking state, so that the walking of the model in the virtual world can be activated and stopped by means of animation control in the Unity engine. This method frees the hands of users compared with the traditional navigation approach using a hand-held device. We use point cloud data obtained from the Kinect depth camera to recognize user gestures such as swiping, pressing and manipulating virtual objects. Combining full-body tracking and gesture recognition with Kinect, we achieve an interactive VR system in the Unity engine with a high degree of presence.
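The walking-state detection described above can be sketched as a threshold on recent foot-joint motion. The positions and threshold below are made-up illustrative values, and `is_walking` is a hypothetical helper, not code from the paper's Kinect/Unity system.

```python
# Illustrative walking-state detector (hypothetical helper): flag
# "walking" when the mean absolute frame-to-frame displacement of a
# tracked foot joint exceeds a threshold. Values are made up.

def is_walking(foot_positions, threshold=0.02):
    """foot_positions: per-frame foot heights in metres."""
    if len(foot_positions) < 2:
        return False
    steps = [abs(b - a) for a, b in zip(foot_positions, foot_positions[1:])]
    return sum(steps) / len(steps) > threshold

print(is_walking([0.05, 0.05, 0.05, 0.05]))   # standing still
print(is_walking([0.05, 0.12, 0.04, 0.13]))   # alternating steps
```

In an engine such as Unity, the boolean would drive an animation-state transition so the avatar's walk cycle starts and stops with the user.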

Collaboration


Dive into Duo Chen's collaborations.

Top Co-Authors

Xinzhu Sang
Beijing University of Posts and Telecommunications

Binbin Yan
Beijing University of Posts and Telecommunications

Chongxiu Yu
Beijing University of Posts and Telecommunications

Xunbo Yu
Beijing University of Posts and Telecommunications

Nan Guo
Beijing University of Posts and Telecommunications

Kuiru Wang
Beijing University of Posts and Telecommunications

Peng Wang
Beijing University of Posts and Telecommunications

Wenhua Dou
National University of Defense Technology

Xin Gao
Beijing University of Posts and Telecommunications

Liquan Xiao
National University of Defense Technology