Publications

Featured research published by Chen Duo.


Optics Express | 2017

High-efficient computer-generated integral imaging based on the backward ray-tracing technique and optical reconstruction

Shujun Xing; Xinzhu Sang; Xunbo Yu; Chen Duo; Bo Pang; Xin Gao; Shenwu Yang; Yanxin Guan; Binbin Yan; Jinhui Yuan; Kuiru Wang

A highly efficient computer-generated integral imaging (CGII) method based on the backward ray-tracing technique is presented. In traditional CGII methods, the total rendering time is long because a large number of virtual cameras must be set up in the virtual world. With the backward ray-tracing technique, the ray origin and ray direction for every pixel in the elemental image array are calculated directly, and the total rendering time is noticeably reduced. The method is suitable for creating high-quality integral images without the pseudoscopic problem. Real-time and non-real-time CGII rendering images and optical reconstruction are demonstrated, and the effectiveness is verified with different types of 3D object models. Real-time optical reconstruction with 90 × 90 viewpoints and a frame rate above 40 fps is realized for the CGII 3D display without the pseudoscopic problem.
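The key idea of the abstract, replacing one virtual camera per lenslet with a single backward ray per panel pixel, can be sketched as follows. This is a minimal illustration under a pinhole-lenslet assumption; the lens pitch, gap, and pixel size are invented values, not parameters from the paper.

```python
import math

# Hypothetical lens-array parameters (not from the paper): lens pitch,
# panel-to-lens-array gap, and pixel size on the display panel.
LENS_PITCH = 1.0      # mm, width of one lenslet (= one elemental image)
GAP = 3.0             # mm, distance from panel to lens array
PIXEL_SIZE = 0.05     # mm, so each elemental image spans 20 x 20 pixels

def pixel_ray(px, py):
    """Backward ray for one panel pixel: instead of rendering one camera
    per lenslet, derive the ray origin/direction directly, so only one
    ray per pixel of the elemental image array needs to be traced."""
    # Physical position of the pixel centre on the panel.
    x = (px + 0.5) * PIXEL_SIZE
    y = (py + 0.5) * PIXEL_SIZE
    # Centre of the lenslet that covers this pixel.
    cx = (math.floor(x / LENS_PITCH) + 0.5) * LENS_PITCH
    cy = (math.floor(y / LENS_PITCH) + 0.5) * LENS_PITCH
    # The ray starts at the lenslet centre and continues along the line
    # from the pixel through that centre into the scene (pinhole model).
    dx, dy, dz = cx - x, cy - y, GAP
    norm = math.sqrt(dx * dx + dy * dy + dz * dz)
    origin = (cx, cy, 0.0)
    direction = (dx / norm, dy / norm, dz / norm)
    return origin, direction
```

Tracing these rays through the scene yields the elemental image array in one pass, which is where the claimed rendering-time reduction comes from.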


IEEE/OSA Journal of Display Technology | 2016

Comparative Visual Tolerance to Vertical Disparity on 3D Projector Versus Lenticular Autostereoscopic TV

Zhang Di; Sang Xinzhu; Wang Peng; Chen Duo; Jean Louis de Bougrenet de la Tocnaye

Vertical fusion amplitude (VFA), a reference for assessing visual tolerance to vertical disparity, has been measured by various methods. However, it has not been comprehensively investigated on current 3D displays, although certain technical display features, such as crosstalk and viewing angle, lead to different visual impacts. A psychophysical measurement of the VFA on representative 3D stereoscopic and autostereoscopic displays is presented here, taking into account several factors that affect visual performance. For the 3D stereoscopic display, a 3D projector, VFA was measured under different viewing distances, stimulus sizes, background luminances, room lighting conditions, target complexities, and disparity velocities. Corresponding tests were carried out on a lenticular autostereoscopic TV, with viewing angle as an additional test parameter due to the presence of the lenticular sheet. Results show that vertical disparity tolerance is generally better on the 3D projector than on the autostereoscopic TV. VFA on the 3D projector is significantly affected by stimulus size, target complexity, and disparity velocity, whereas the crucial factors for the autostereoscopic TV are stimulus size, target complexity, disparity velocity, background luminance, and viewing angle. These differences in visual performance indicate that technical display features should be considered when VFA is measured on different 3D devices.


International Symposium on Photoelectronic Detection and Imaging 2013: Optical Storage and Display Technology | 2013

Real-time arbitrary view synthesis method for ultra-HD auto-stereoscopic display

Yuanfa Cai; Xinzhu Sang; Chen Duo; Tianqi Zhao; Xin Fan; Nan Guo; Xunbo Yu; Binbin Yan

An arbitrary view synthesis method from 2D-plus-depth images for real-time auto-stereoscopic display is presented. Traditional methods use depth-image-based rendering (DIBR), a process of synthesizing "virtual" views of a scene from still or moving images and associated per-pixel depth information. All the virtual view images are generated, and then the final stereo image is synthesized. DIBR greatly decreases the number of reference images required and is flexible and efficient because depth images are used. However, it causes problems such as holes in the rendered image and depth discontinuities on object surfaces at the virtual image plane. Here, reversed disparity shift pixel rendering is used to generate the stereo image directly, so the target image is free of holes. To avoid duplicated calculation and to match any specific three-dimensional display, a selecting table is designed to pick the appropriate virtual viewpoints for the auto-stereoscopic display. According to the selecting table, only the sub-pixels of the appropriate virtual viewpoints are calculated, so the amount of calculation is independent of the number of virtual viewpoints. In addition, 3D image warping is used to translate depth information into parallax between virtual viewpoints, and the viewer can conveniently adjust the zero-parallax-setting (ZPS) plane and change the parallax to suit his or her personal preferences. The proposed method is implemented with OpenGL and demonstrated on a laptop computer with a 2.3 GHz Intel Core i5 CPU and an NVIDIA GeForce GT540M GPU. A frame rate of 30 frames per second was achieved with 4096×2340 video. High synthesis efficiency and good stereoscopic quality are obtained. The presented method meets the requirements of real-time ultra-HD super multi-view auto-stereoscopic display.
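The selecting-table idea, computing only the sub-pixels the display actually needs and sampling the source image backwards so no holes appear, can be sketched on a toy 1-D image. All names and the depth-to-disparity mapping below are invented for illustration, not the paper's implementation.

```python
# Minimal sketch (assumed, not the paper's code): the selecting table maps
# each output sub-pixel to one virtual viewpoint, and reversed disparity
# shift samples the 2D-plus-depth input directly, so no intermediate full
# view images (and no hole filling) are needed.

def disparity(depth, view, n_views, max_disp=4.0, zps=0.5):
    # Depth in [0, 1] is converted to signed parallax around the
    # zero-parallax-setting plane `zps`; views spread symmetrically.
    view_offset = view / (n_views - 1) - 0.5
    return max_disp * (depth - zps) * view_offset

def render_row(image, depth_map, selecting_table, n_views):
    """Render one row of the interleaved output image."""
    out = []
    for x, view in enumerate(selecting_table):
        # Reversed shift: read the source pixel that would have been
        # warped to x, instead of forward-warping (which creates holes).
        src = x + disparity(depth_map[x], view, n_views)
        src = min(max(int(round(src)), 0), len(image) - 1)
        out.append(image[src])
    return out
```

Because the loop runs once per output sub-pixel, the cost depends only on the output resolution, matching the abstract's claim that the calculation amount is independent of the number of virtual viewpoints.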


Archive | 2016

Naked eye 3D augmented reality method and device based on transparent liquid crystals

Sang Xinzhu; Guo Nan; Yan Fenfen; Yuan Jinhui; Wang Peng; Chen Duo; Yu Xunbo; Wang Kuiru


Archive | 2015

Real-time drawing and comparing method for three-dimensional model

Sang Xinzhu; Xing Shujun; Yu Xunbo; Yan Fenfen; Chen Duo; Wang Peng; Li Chenyu; Yuan Jinhui; Wang Kuiru; Yu Zhongxiu


Archive | 2015

Head-mounted 3D display device

Sang Xinzhu; Chen Zhidong; Yu Xunbo; Yan Fenfen; Chen Duo; Wang Peng; Gao Xin; Yuan Jinhui; Wang Kuiru; Yu Zhongxiu


Archive | 2016

Method and device for converting 4K multi-viewpoint 3D video in real time based on GPU (Graphics Processing Unit)

Sang Xinzhu; Guo Nan; Yan Fenfen; Yuan Jinhui; Liu Zheng; Chen Duo; Xie Songlin; Yu Xunbo


Archive | 2016

Printing method and printing system based on pixel gridding dot multiplexing

Sang Xinzhu; Chen Duo; Yan Fenfen; Yu Xunbo; Chen Zhidong; Gao Xin; Wang Peng; Xie Songlin; Guo Nan; Cui Can; Yuan Jinhui; Wang Kuiru


Archive | 2016

Three-dimensional light field displaying system

Sang Xinzhu; Gao Xin; Yu Xunbo; Yan Fenfen; Chen Duo; Wang Peng; Chen Zhidong; Yuan Jinhui; Wang Kuiru; Yu Zhongxiu


Archive | 2014

Three-dimensional image compositing method

Sang Xinzhu; Cai Yuanfa; Chen Duo; Yu Xunbo; Xing Shujun

Collaboration

Dive into Chen Duo's collaborations.

Top Co-Authors

Sang Xinzhu, Beijing University of Posts and Telecommunications
Wang Peng, Beijing University of Posts and Telecommunications
Wang Kuiru, Beijing University of Posts and Telecommunications
Binbin Yan, Beijing University of Posts and Telecommunications
Xinzhu Sang, Beijing University of Posts and Telecommunications
Xunbo Yu, Beijing University of Posts and Telecommunications
Bo Pang, Beijing University of Posts and Telecommunications
Jinhui Yuan, Beijing University of Posts and Telecommunications
Kuiru Wang, Beijing University of Posts and Telecommunications