Publication


Featured research published by Xunbo Yu.


Optics Express | 2017

High-efficient computer-generated integral imaging based on the backward ray-tracing technique and optical reconstruction

Shujun Xing; Xinzhu Sang; Xunbo Yu; Duo Chen; Bo Pang; Xin Gao; Shenwu Yang; Yanxin Guan; Binbin Yan; Jinhui Yuan; Kuiru Wang

A highly efficient computer-generated integral imaging (CGII) method based on the backward ray-tracing technique is presented. In traditional CGII methods, the total rendering time is long because a large number of virtual cameras must be set up in the virtual world. With the backward ray-tracing technique, the ray origin and ray direction are calculated for every pixel in the elemental image array, and the total rendering time is noticeably reduced. The method is suitable for creating high-quality integral images without the pseudoscopic problem. Real-time and non-real-time CGII rendering and optical reconstruction are demonstrated, and the effectiveness is verified with different types of 3D object models. Real-time optical reconstruction with 90 × 90 viewpoints and a frame rate above 40 fps is realized for the CGII 3D display without the pseudoscopic problem.
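
As a rough illustration of the per-pixel ray setup the abstract describes, the sketch below builds one backward ray per elemental-image pixel through the optical centre of its own lenslet. The function name, the square lens array, and the pitch/gap values are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

def eia_rays(num_lenses, pixels_per_lens, pitch_mm, gap_mm):
    """Hypothetical sketch: one (origin, direction) ray per elemental-image
    pixel, cast backwards through the optical centre of its lenslet."""
    n = num_lenses * pixels_per_lens          # EIA resolution per axis
    pixel_mm = pitch_mm / pixels_per_lens     # pixel size on the EIA plane
    # pixel centres on the EIA plane (z = 0)
    xs = (np.arange(n) + 0.5) * pixel_mm
    px, py = np.meshgrid(xs, xs, indexing="xy")
    origins = np.stack([px, py, np.zeros_like(px)], axis=-1)
    # optical centre of the lenslet covering each pixel (z = gap)
    lens_idx = np.arange(n) // pixels_per_lens
    cx = (lens_idx + 0.5) * pitch_mm
    lx, ly = np.meshgrid(cx, cx, indexing="xy")
    centres = np.stack([lx, ly, np.full_like(lx, gap_mm)], axis=-1)
    # normalised directions: pixel -> lenslet centre -> virtual scene
    d = centres - origins
    return origins, d / np.linalg.norm(d, axis=-1, keepdims=True)

# e.g. a 90 x 90 lenslet array, one ray batch for the whole EIA
o, d = eia_rays(num_lenses=90, pixels_per_lens=4, pitch_mm=1.0, gap_mm=3.0)
```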


Applied Optics | 2015

Automatic parameter estimation based on the degree of texture overlapping in accurate cost-aggregation stereo matching for three-dimensional video display

Nan Guo; Xinzhu Sang; Duo Chen; Peng Wang; Songlin Xie; Xunbo Yu; Binbin Yan; Chongxiu Yu

Stereo matching plays a significant role in three-dimensional (3D) display applications. Estimating the regularization parameter, which strikes a balance between spatial distance and color difference, is essential for successfully solving ill-posed image-matching problems. Based on the cost-filtering algorithm, a degree of texture overlapping is designed to estimate the optimal regularization parameter and achieve accurate matching results simultaneously. The experimental results demonstrate that the proposed model estimates the smoothing parameter well, with accuracy comparable to that of other methods requiring manual adjustment. The application of the presented stereo-matching method to a 32-view 3D display is demonstrated.
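
To make the role of the regularization parameter concrete, here is a minimal adaptive-support-weight sketch in which a single parameter `lam` trades colour difference against spatial distance in a cost-aggregation window. This is a generic Yoon-Kweon-style kernel; the paper's actual cost filter and its texture-overlap estimator of the parameter are not reproduced.

```python
import numpy as np

def support_weights(patch, lam):
    """Weight each neighbour of the window centre by colour similarity and
    spatial proximity; `lam` balances the two terms (small lam: edge-aware,
    large lam: spatially smooth)."""
    h, w, _ = patch.shape
    cy, cx = h // 2, w // 2
    yy, xx = np.mgrid[0:h, 0:w]
    d_spatial = np.hypot(yy - cy, xx - cx)                    # pixel distance
    d_color = np.linalg.norm(patch - patch[cy, cx], axis=-1)  # RGB distance
    return np.exp(-(d_color + lam * d_spatial))

# aggregate a matching-cost slice with the weights (the cost-filtering
# pipeline plays this role for every disparity hypothesis)
patch = np.random.rand(9, 9, 3).astype(np.float32)
cost = np.random.rand(9, 9).astype(np.float32)
w = support_weights(patch, lam=0.05)
aggregated = (w * cost).sum() / w.sum()
```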


AOPC 2017: Optical Storage and Display Technology | 2017

Super multi-view three-dimensional display with small light intensity ripple and high spatial resolution

Li Liu; Xinzhu Sang; Xunbo Yu; Boyang Liu; Binbin Yan; Kuiru Wang; Chongxiu Yu

A lenticular-type super multi-view (SMV) display method with a narrow structure pitch and a small light intensity ripple is presented. Normally, increasing the number of viewing zones reduces the amplitude of the light intensity ripple, and the number of viewing zones is proportional to the structure pitch of the lenticular lens array. However, a wide structure pitch decreases the spatial display resolution. Here, a lenticular lens array with one pitch covering 2.2 sub-pixels and a novel arrangement of left and right sub-pixel groups are designed. The proposed display method provides twenty-two viewing zones. With the introduction of a tracking device, both binocular parallax and motion parallax are experienced. Measured with a photometer, the light intensity ripple is 0.7%, far smaller than that of a traditional SMV display with 8 viewing zones. As the fluctuation of light intensity is reduced, the 3D perception is improved.
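
The ripple-versus-zone-count relationship can be demonstrated with a toy model: sum overlapping Gaussian viewing lobes over a fixed angular span and measure the flatness of the result. The lobe shape, the span, and the peak-to-peak ripple definition are assumptions for illustration; whether the paper's 0.7% figure uses this exact metric is not stated.

```python
import numpy as np

theta = np.linspace(-10.0, 10.0, 2001)         # viewing angle in degrees

def summed_profile(n_zones, span=8.0, fwhm=1.2):
    """Total intensity of n_zones overlapping Gaussian viewing lobes, a toy
    stand-in for the display's measured angular profile."""
    centres = np.linspace(-span / 2, span / 2, n_zones)
    sigma = fwhm / 2.355
    return sum(np.exp(-(theta - c) ** 2 / (2 * sigma ** 2)) for c in centres)

def ripple(profile):
    """Peak-to-peak variation over the mean (one plausible ripple metric)."""
    return (profile.max() - profile.min()) / profile.mean()

core = (theta > -2.0) & (theta < 2.0)           # central region, away from edges
for n in (8, 22):                               # 8 zones vs the proposed 22
    print(n, ripple(summed_profile(n)[core]))   # denser zones -> smaller ripple
```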


Optics Express | 2016

Performance improvement of compressive light field display with the viewing-position-dependent weight distribution

Duo Chen; Xinzhu Sang; Xunbo Yu; Xia Zeng; Songlin Xie; Nan Guo

A compressive light field display with multilayer and multiframe decompositions is able to provide three-dimensional (3D) scenes with high spatial-angular resolution and without periodically repeating view zones. However, there are still limitations on the display performance, such as poor image quality and a limited field of view (FOV). A compressive light field display with a viewing-position-dependent weight distribution is presented. When the relevant views are given high weights in the optimization, the display performance at that viewing position is noticeably improved. Simulation and experimental results demonstrate the effectiveness of the proposed method. The peak signal-to-noise ratio (PSNR) is improved by 7 dB for the compressive light field display with a narrow FOV. For a wide FOV, the angle can be expanded to 70° × 60°, and multiple viewers are supported.
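
A minimal sketch of the weighting idea, under stated assumptions: a random matrix `P` stands in for the ray geometry mapping display-layer pixels to light-field view pixels, and views near a tracked viewer position get larger weights in a weighted least-squares fit. All shapes, names, and the Gaussian weight profile are illustrative, not the paper's actual decomposition.

```python
import numpy as np

rng = np.random.default_rng(0)
n_views, px_per_view, n_layer_px = 9, 16, 64
P = rng.random((n_views * px_per_view, n_layer_px))   # display -> view pixels
target = rng.random(n_views * px_per_view)            # target light field

viewer_view = 4                                       # tracked viewing position
view_of_row = np.repeat(np.arange(n_views), px_per_view)
w = np.exp(-0.5 * (view_of_row - viewer_view) ** 2)   # favour nearby views

# minimise sum_i w_i * (P_i x - t_i)^2 via the weighted normal equations;
# highly weighted views dominate, so quality there improves at the cost of
# rarely seen directions
x = np.linalg.solve(P.T @ (w[:, None] * P), P.T @ (w * target))
layers = np.clip(x, 0.0, 1.0)                         # physical pixel range
```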


AOPC 2017: Optical Storage and Display Technology | 2017

Improved depth estimation with the light field camera

Huachun Wang; Xinzhu Sang; Duo Chen; Xunbo Yu; Nan Guo; Peng Wang; Binbin Yan; Kuiru Wang; Chongxiu Yu

Light-field cameras are used in consumer and industrial applications. An array of micro-lenses captures enough information that one can refocus images after acquisition, as well as shift one's viewpoint within the sub-apertures of the main lens, effectively obtaining multiple views. Thus, depth estimation from both defocus and correspondence is available in a single capture. Lytro, Inc. also provides a depth estimate from a single-shot capture with its light field cameras, such as the Lytro Illum. This Lytro depth estimate contains much correct depth information and can be used for higher-quality estimation. In this paper, we present a simple and principled algorithm that computes dense depth estimates by combining the defocus, correspondence, and Lytro depth estimates. We analyze 2D epipolar images (EPIs) to obtain defocus and correspondence depth maps: defocus depth is obtained by computing the spatial gradient after angular integration, and correspondence depth by computing the angular variance from the EPIs. The Lytro depth can be extracted from the Lytro Illum with its software. We then show how to combine the three cues into a high-quality depth map. Our method for depth estimation is suitable for computer vision applications such as matting, full control of depth of field, and surface reconstruction, as well as light field displays.
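
The two EPI cues named in the abstract can be computed in a few lines. The sketch below shears a 2D EPI slice over candidate disparities and evaluates both cues at each shear: the defocus response (spatial gradient after angular integration) peaks, and the correspondence response (angular variance) dips, at the shear matching the true disparity. The integer-shift shear and the toy sinusoidal EPI are simplifying assumptions.

```python
import numpy as np

def epi_cues(epi, shears):
    """Return the best shear according to the defocus cue and the
    correspondence cue for one (n_angles, n_x) EPI slice."""
    n_u, _ = epi.shape
    c = n_u // 2
    defocus, corresp = [], []
    for s in shears:
        sheared = np.stack([np.roll(epi[u], int(round(s * (u - c))))
                            for u in range(n_u)])
        mean_x = sheared.mean(axis=0)                        # angular integration
        defocus.append(np.abs(np.gradient(mean_x)).mean())   # spatial gradient
        corresp.append(sheared.var(axis=0).mean())           # angular variance
    return shears[int(np.argmax(defocus))], shears[int(np.argmin(corresp))]

# toy EPI: sinusoidal texture with a uniform disparity of 2 px per view
u, x = np.mgrid[0:9, 0:64]
epi = np.sin(0.3 * (x - 2.0 * (u - 4)))
print(epi_cues(epi, shears=np.arange(-4, 5)))   # both cues agree on the shear
```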


AOPC 2017: Optical Storage and Display Technology | 2017

Augmented reality glass-free three-dimensional display with the stereo camera

Bo Pang; Xinzhu Sang; Shujun Xing; Kuiru Wang; Chongxiu Yu; Duo Chen; Xunbo Yu; Binbin Yan

An improved method for augmented reality (AR) glass-free three-dimensional (3D) display based on a stereo camera, which presents parallax content from different angles through a lenticular lens array, is proposed. Compared with previous AR implementations based on a two-dimensional (2D) panel display with only one viewpoint, the proposed method realizes a glass-free 3D display of virtual objects and the real scene with 32 virtual viewpoints. Accordingly, viewers obtain abundant 3D stereo information from different viewing angles based on binocular parallax. Experimental results show that the improved method based on the stereo camera realizes AR glass-free 3D display, and both the virtual objects and the real scene exhibit realistic and pronounced stereo performance.
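
One plausible way to obtain 32 viewpoints from a two-camera rig is depth-image-based rendering: forward-warp one rectified view by a scaled disparity map for each virtual camera offset. The sketch below shows only that warping step; hole filling, occlusion handling, and the final lenticular interleaving of a real pipeline are omitted, and all names and values are assumptions.

```python
import numpy as np

def synthesize_views(left, disparity, n_views=32, spread=1.0):
    """DIBR-style sketch: warp `left` into n_views virtual viewpoints by
    shifting pixels proportionally to disparity."""
    h, w = disparity.shape
    xs = np.arange(w)
    views = []
    for k in np.linspace(-spread, spread, n_views):   # virtual camera offsets
        out = np.zeros_like(left)
        for y in range(h):                            # forward warp per row
            tx = np.clip(np.round(xs + k * disparity[y]).astype(int), 0, w - 1)
            out[y, tx] = left[y]
        views.append(out)
    return views

left = np.random.rand(4, 16, 3)                        # tiny toy image
disp = np.full((4, 16), 2.0)                           # constant disparity
views = synthesize_views(left, disp)                   # feed the interleaver
```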


AOPC 2017: Optical Storage and Display Technology | 2017

Demonstration of arbitrary views based on autostereoscopic three-dimensional display system

Boyang Liu; Xinzhu Sang; Xunbo Yu; Li Liu; Binbin Yan; Kuiru Wang; Chongxiu Yu; Le Yang

A method to realize an arbitrary number of views for a lenticular-lens-array-based autostereoscopic three-dimensional display system is demonstrated. Normally, the number of views is proportional to the pitch of the lenticular lens array, and increasing the number of views reduces resolution and enhances the granular sensation. 32 dense views can be achieved with one lenticular lens pitch covering 5.333 sub-pixels, which significantly increases the number of views without affecting the resolution; however, the pitch structure and the number of views are then fixed. Here, a 3D display method in which the number of views can be changed freely for most lenticular lens structures is presented. Compared with the previous 32-view display method, the smoothness of the motion parallax and the display depth of field are significantly improved.
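
The interleaving behind such displays reduces to a sub-pixel-to-view map. Below is a van Berkel-style sketch: each sub-pixel column's phase under its lenslet selects which rendered view feeds it. The 5.333 sub-pixel pitch and 32 views follow the abstract; the zero slant and phase convention are illustrative assumptions. Changing `n_views` with the optical pitch fixed is, in this sketch, what realizing an "arbitrary" number of views amounts to.

```python
import numpy as np

def view_map(width_px, height_px, pitch_subpx=5.333, n_views=32, slant=0.0):
    """Assign each sub-pixel (column s, row y) to one of n_views rendered
    views based on its phase under the lenticular sheet."""
    s = np.arange(width_px * 3)                 # sub-pixel columns (R, G, B)
    y = np.arange(height_px)[:, None]
    phase = (s + slant * y) % pitch_subpx       # position under a lenslet
    return np.floor(phase / pitch_subpx * n_views).astype(int)

m32 = view_map(1920, 1080, n_views=32)          # m32[y, s]: source view index
m64 = view_map(1920, 1080, n_views=64)          # same optics, more views
```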


Journal of Electronic Imaging | 2014

Visible region of interest extraction method for three-dimensional video coding

Mengqing Zheng; Xinzhu Sang; Songlin Xie; Xunbo Yu; Cong Zhou

A visible region of interest (ROI) extraction method based on region expansion is proposed. The method is combined with a new reverse realization of homography-based calibration. It automatically extracts the ROI and simultaneously corrects the distortion caused by the stereoscopic camera system. The method is practical and easy to apply because it does not require any extra processing before display. It can be used to discard the confusing margin of the image synthesized from several parallax images and to reduce the amount of data to be processed and encoded. Real-time capture, transmission, and three-dimensional display systems are demonstrated to verify its effectiveness and feasibility.
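
As a sketch of the "reverse" use of homography calibration described above: rather than rectifying whole frames before display, map the display rectangle's corners back through the inverse homography to locate the visible ROI in the distorted source image, then crop. The matrix `H` below is a made-up calibration result, purely for illustration.

```python
import numpy as np

def warp_points(H, pts):
    """Apply a 3x3 homography to Nx2 points with the homogeneous divide."""
    p = np.hstack([pts, np.ones((len(pts), 1))]) @ H.T
    return p[:, :2] / p[:, 2:3]

H = np.array([[1.02, 0.01, -5.0],
              [0.00, 0.99,  3.0],
              [1e-5, 0.00,  1.0]])               # illustrative calibration
corners = np.array([[0, 0], [1919, 0], [1919, 1079], [0, 1079]], dtype=float)
roi_polygon = warp_points(np.linalg.inv(H), corners)   # ROI in the source frame
```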


International Symposium on Optoelectronic Technology and Application 2014: Image Processing and Pattern Recognition | 2014

An adaptive point tracking method based on depth map for 2D-3D video conversion

Yangdong Liu; Xinzhu Sang; Tianqi Zhao; Xunbo Yu; Guozhong Shi; Jing Liu

Three-dimensional (3D) display technology has made great progress in the last several decades, providing a dramatic improvement in visual experience. The availability of 3D content is a critical factor limiting wide application of 3D technology. An adaptive point tracking method based on the depth map is demonstrated, which is used to generate depth maps automatically and precisely. The point tracking method used in previous investigations is template matching, which cannot track points precisely. Instead, an adaptive point tracking method with an adaptive window and weights based on the discontinuous edge information and texture complexity of the depth map is used. In the experiment, a method to automatically generate the depth maps using points tracked between adjacent images is realized. Theoretical analysis and experimental results show that the presented method tracks feature points precisely and generates the depth maps of non-key images well.
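
A toy sketch of the adaptive idea: shrink the matching window near strong depth edges (so the template does not straddle an object boundary) and score candidates with plain SSD. The window radii, the edge test, and the uniform weights here are illustrative assumptions, not the paper's tuned scheme.

```python
import numpy as np

def track_point(prev, curr, pt, depth, search=8):
    """Track (y, x) from `prev` to `curr` with a depth-adaptive window."""
    y, x = pt
    gy, gx = np.gradient(depth.astype(float))
    edge = np.abs(gy) + np.abs(gx)
    half = 3 if edge[y, x] > edge.mean() else 7        # adaptive window radius
    tpl = prev[y - half:y + half + 1, x - half:x + half + 1]
    best, best_pt = np.inf, pt
    for dy in range(-search, search + 1):              # exhaustive local search
        for dx in range(-search, search + 1):
            cy, cx = y + dy, x + dx
            cand = curr[cy - half:cy + half + 1, cx - half:cx + half + 1]
            if cand.shape != tpl.shape:                # skip out-of-bounds
                continue
            ssd = float(np.sum((cand - tpl) ** 2))
            if ssd < best:
                best, best_pt = ssd, (cy, cx)
    return best_pt

prev = np.random.rand(64, 64)
curr = np.roll(prev, 2, axis=1)                        # shift scene right by 2
depth = np.random.rand(64, 64)
print(track_point(prev, curr, (32, 30), depth))        # expect x shifted by 2
```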


International Symposium on Optoelectronic Technology and Application 2014: Image Processing and Pattern Recognition | 2014

Visual fatigue modeling for stereoscopic video shot based on camera motion

Guozhong Shi; Xinzhu Sang; Xunbo Yu; Yangdong Liu; Jing Liu

As three-dimensional television (3DTV) and 3D movies become popular, visual discomfort limits further application of 3D display technology. Visual discomfort from stereoscopic video is caused by conflicts between accommodation and convergence, excessive binocular parallax, fast motion of objects, and so on. Here, a novel method for evaluating visual fatigue is demonstrated. Influence factors including spatial structure, motion scale, and the comfortable zone are analyzed. According to the human visual system (HVS), people only need to converge their eyes on specific objects when the camera and background are static, while relative motion should be considered under other camera conditions, which determine different factor coefficients and weights. Compared with the traditional visual fatigue prediction model, a novel prediction model is presented: the degree of visual fatigue is predicted using a multiple linear regression method combined with subjective evaluation. Consequently, each factor reflects the characteristics of the scene, and the total visual fatigue score is indicated by the proposed algorithm. Compared with conventional algorithms that ignore the camera status, our approach exhibits reliable performance in terms of correlation with subjective test results.
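
A minimal sketch of the regression step only: fit subjective fatigue scores against per-shot factor measurements by ordinary least squares. The three factors (parallax magnitude, object motion, camera motion), their coefficients, and the data are all made up for illustration; the paper's camera-motion-dependent coefficients and weights are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)
n_shots = 40
X = rng.random((n_shots, 3))        # columns: parallax, object motion, camera motion
beta_true = np.array([2.0, 1.5, 0.8])
y = X @ beta_true + 0.5 + 0.05 * rng.standard_normal(n_shots)  # subjective scores

A = np.hstack([X, np.ones((n_shots, 1))])        # add an intercept column
coef, *_ = np.linalg.lstsq(A, y, rcond=None)     # multiple linear regression

def predict(factors):
    """Predicted fatigue score for a new shot's three factor values."""
    return float(np.append(factors, 1.0) @ coef)

print(predict([0.3, 0.2, 0.1]))
```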

Collaboration


Dive into Xunbo Yu's collaborations.

Top Co-Authors

Xinzhu Sang, Beijing University of Posts and Telecommunications
Binbin Yan, Beijing University of Posts and Telecommunications
Chongxiu Yu, Beijing University of Posts and Telecommunications
Xin Gao, Beijing University of Posts and Telecommunications
Duo Chen, Beijing University of Posts and Telecommunications
Kuiru Wang, Beijing University of Posts and Telecommunications
Peng Wang, Beijing University of Posts and Telecommunications
Boyang Liu, Beijing University of Posts and Telecommunications
Shenwu Yang, Beijing University of Posts and Telecommunications
Shujun Xing, Beijing University of Posts and Telecommunications