
Publication


Featured research published by Xinchen Ye.


IEEE Transactions on Image Processing | 2014

Color-guided depth recovery from RGB-D data using an adaptive autoregressive model

Jingyu Yang; Xinchen Ye; Kun Li; Chunping Hou; Yao Wang

This paper proposes an adaptive color-guided autoregressive (AR) model for high quality depth recovery from low quality measurements captured by depth cameras. We observe and verify that the AR model tightly fits depth maps of generic scenes. The depth recovery task is formulated as a minimization of AR prediction errors subject to measurement consistency. The AR predictor for each pixel is constructed according to both the local correlation in the initial depth map and the nonlocal similarity in the accompanying high quality color image. We analyze the stability of our method from a linear-system point of view, and design a parameter adaptation scheme to achieve stable and accurate depth recovery. Quantitative and qualitative evaluations against ten state-of-the-art schemes show the effectiveness and superiority of our method. Being able to handle various types of depth degradation, the proposed method is versatile for mainstream depth sensors such as time-of-flight cameras and Kinect, as demonstrated by experiments on real systems.
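The color-guided part of the AR predictor can be illustrated with a minimal numpy sketch (hypothetical helper names `ar_weights` and `ar_predict`; the full model additionally uses local depth correlation and a shape-adaptive support, omitted here): neighbours whose color is close to the centre pixel receive larger AR coefficients, and the prediction is their weighted average.

```python
import numpy as np

def ar_weights(colors, center, sigma_c=10.0):
    # Bilateral-style weights: neighbours with colors similar to the
    # centre pixel get larger AR coefficients.
    diff = colors.astype(float) - colors[center].astype(float)
    w = np.exp(-np.sum(diff ** 2, axis=-1) / (2 * sigma_c ** 2))
    w[center] = 0.0  # the centre pixel does not predict itself
    return w / w.sum()

def ar_predict(depths, weights):
    # AR prediction: convex combination of neighbouring depth values.
    return float(np.sum(weights * depths))
```

On a flat patch (identical colors, constant depth) the weights are uniform over the neighbours and the prediction reproduces the depth exactly, which is the "AR model tightly fits generic depth maps" intuition.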


European Conference on Computer Vision | 2012

Depth recovery using an adaptive color-guided auto-regressive model

Jingyu Yang; Xinchen Ye; Kun Li; Chunping Hou

This paper proposes an adaptive color-guided auto-regressive (AR) model for high quality depth recovery from low quality measurements captured by depth cameras. We formulate the depth recovery task as a minimization of AR prediction errors subject to measurement consistency. The AR predictor for each pixel is constructed according to both the local correlation in the initial depth map and the nonlocal similarity in the accompanying high quality color image. Experimental results show that our method outperforms existing state-of-the-art schemes, and is versatile for both mainstream depth sensors: ToF cameras and Kinect.


IEEE Transactions on Circuits and Systems for Video Technology | 2015

Foreground–Background Separation From Video Clips via Motion-Assisted Matrix Restoration

Xinchen Ye; Jingyu Yang; Xin Sun; Kun Li; Chunping Hou; Yao Wang

Separation of video clips into foreground and background components is a useful and important technique, making recognition, classification, and scene analysis more efficient. In this paper, we propose a motion-assisted matrix restoration (MAMR) model for foreground-background separation in video clips. In the proposed MAMR model, the backgrounds across frames are modeled by a low-rank matrix, while the foreground objects are modeled by a sparse matrix. To facilitate efficient foreground-background separation, a dense motion field is estimated for each frame and mapped into a weighting matrix that indicates the likelihood that each pixel belongs to the background. Anchor frames are selected in the dense motion estimation to overcome the difficulty of detecting slowly moving and camouflaged objects. In addition, we extend our model to a robust MAMR model against noise for practical applications. Evaluations on challenging datasets demonstrate that our method outperforms many other state-of-the-art methods and is versatile for a wide range of surveillance videos.
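Stripped of the motion weighting, the core of MAMR is the classic low-rank + sparse split, with each frame stacked as one matrix column. A minimal inexact-ALM sketch of that decomposition (hypothetical function names; the actual model weights the data-fidelity term by the motion-derived matrix):

```python
import numpy as np

def svt(X, tau):
    # Singular-value thresholding: proximal operator of the nuclear norm.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def shrink(X, tau):
    # Soft thresholding: proximal operator of the l1 norm.
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def lowrank_sparse_split(D, lam=None, mu=1.0, iters=500):
    # Inexact-ALM iteration for D = L (low-rank background)
    # + S (sparse foreground).
    m, n = D.shape
    if lam is None:
        lam = 1.0 / np.sqrt(max(m, n))
    L = np.zeros_like(D)
    S = np.zeros_like(D)
    Y = np.zeros_like(D)  # scaled dual variable
    for _ in range(iters):
        L = svt(D - S + Y / mu, 1.0 / mu)
        S = shrink(D - L + Y / mu, lam / mu)
        Y += mu * (D - L - S)
    return L, S
```

A rank-one "background" plus a few large outliers is cleanly separated: the nuclear-norm term absorbs the smooth structure while the l1 term absorbs the spikes.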


IEEE Transactions on Broadcasting | 2014

Computational multi-view imaging with Kinect

Xinchen Ye; Jingyu Yang; Hao Huang; Chunping Hou; Yao Wang

The lack of 3-D content has become a bottleneck for the advancement of three-dimensional television (3-DTV), but conventional multicamera arrays for multiview imaging are expensive to set up and cumbersome to use. This paper proposes a lightweight multiview imaging approach with Kinect, a handheld integrated depth-color camera, under the depth-image-based rendering framework. The proposed method consists of two components: depth restoration from noisy and incomplete depth measurements, and view synthesis from depth-color pairs. In depth restoration, we propose a moving 2-D polynomial approximation via least squares to suppress quantization errors in the acquired depth values, and a progressive edge-guided trilateral filter to fill missing areas of the depth map. Edges extracted from the color image are used to predict the locations of depth discontinuities in missing areas and to keep the proposed trilateral filter from filtering across discontinuities. In view synthesis, we propose a low-rank matrix restoration model to inpaint disocclusion regions, fully exploiting the nonlocal correlations in images, and devise an efficient algorithm under the augmented Lagrange multiplier (ALM) framework. Disocclusion areas are inpainted progressively from the disocclusion boundaries with an estimated priority consisting of four terms: a warping term, a reliability term, a texture term, and a depth term. Experimental results show that our method restores high quality depth maps even for large missing areas, and synthesizes natural multiview images from the restored depth maps. Strong 3-D visual experiences are observed when the synthesized multiview images are shown on two types of stereoscopic displays.
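The moving 2-D polynomial approximation can be sketched as a per-window least-squares fit (a simplified, hypothetical stand-in for the paper's scheme): fit a quadratic surface to each local depth window and replace the centre value with the fitted one, which smooths quantization steps while following gentle surface curvature.

```python
import numpy as np

def poly2_center(window):
    # Fit z = a + b*x + c*y + d*x^2 + e*x*y + f*y^2 over the window by
    # least squares and return the fitted value at the window centre.
    h, w = window.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    x = xs.ravel() - (w - 1) / 2.0
    y = ys.ravel() - (h - 1) / 2.0
    A = np.stack([np.ones_like(x), x, y, x * x, x * y, y * y], axis=1)
    coef, *_ = np.linalg.lstsq(A, window.ravel().astype(float), rcond=None)
    return coef[0]  # polynomial value at (0, 0), i.e. the centre
```

Because the model contains all quadratic surfaces, a window that is exactly quadratic is reproduced without error, while step-like quantization noise is averaged out.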


International Conference on Image Processing | 2014

Background extraction from video sequences via motion-assisted matrix completion

Jingyu Yang; Xin Sun; Xinchen Ye; Kun Li

Background extraction from video sequences is a useful and important technique in video surveillance. This paper proposes a motion-assisted matrix completion model for background extraction from video sequences. A binary motion map is first calculated for each frame by optical flow. By excluding areas associated with moving objects with the binary motion maps, the background extraction is formulated into a motion-assisted matrix completion (MAMC) problem. Experimental results show that our method not only extracts promising backgrounds but also outperforms many state-of-the-art methods in distinguishing moving objects on challenging datasets.
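With the moving-object pixels masked out by the binary motion map, background extraction reduces to low-rank matrix completion on the remaining entries. A minimal soft-impute-style sketch (hypothetical `masked_completion`; the paper's MAMC formulation and solver differ in detail):

```python
import numpy as np

def masked_completion(D, M, tau=0.05, iters=300):
    # Keep entries the motion map M marks as background (M == 1) and
    # fill the masked-out entries from a low-rank estimate, iterating
    # singular-value soft thresholding to a fixed point.
    X = np.zeros_like(D)
    for _ in range(iters):
        filled = M * D + (1.0 - M) * X
        U, s, Vt = np.linalg.svd(filled, full_matrices=False)
        X = U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt
    return X
```

On a rank-one frame stack with a few entries masked out, the completed matrix recovers the hidden background values from the row/column structure of the observed ones.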


Visual Communications and Image Processing | 2016

Depth refinement for binocular Kinect RGB-D cameras

Jinghui Bai; Jingyu Yang; Xinchen Ye; Chunping Hou

This paper presents a novel depth refinement framework for binocular Kinect RGB-D cameras for obtaining high quality depth maps. First, we build a binocular depth sensing system using two Kinect v2 cameras, and analyze the systematic errors of the system from two aspects, i.e., camera interaction and intrinsic characteristics. Then, the captured depth maps from different views are fused to fully exploit the inter-view correlations, and an error compensation method is proposed to remove the systematic errors from the fused depth map. Finally, an edge-guided depth propagation scheme is used to refine the fused binocular depth map. Experimental results show that the proposed framework substantially improves the quality of the depth image.


International Conference on Multimedia and Expo | 2017

Intrinsic decomposition from a single RGB-D image with sparse and non-local priors

Yujie Wang; Kun Li; Jingyu Yang; Xinchen Ye

This paper proposes a new intrinsic image decomposition method that decomposes a single RGB-D image into reflectance and shading components. We observe and verify that a shading image mainly contains smooth regions separated by curves, and that its gradient distribution is sparse. We therefore use the ℓ1-norm to model the direct irradiance component, the main sub-component extracted from the shading component. Moreover, a non-local prior weighted by a bilateral kernel on a larger neighborhood is designed to fully exploit the structural correlation in the reflectance component to improve the decomposition performance. The model is solved by the alternating direction method under the augmented Lagrangian multiplier (ADM-ALM) framework. Experimental results on both synthetic and real datasets demonstrate that the proposed method yields better results and enjoys lower complexity compared with two state-of-the-art methods.
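Inside an ADM-ALM solver, an ℓ1 term such as the sparse-gradient prior is handled by its proximal operator, entrywise soft thresholding; a minimal sketch of that single step (the full solver alternates it with quadratic subproblems and dual updates):

```python
import numpy as np

def soft_threshold(v, tau):
    # Proximal operator of tau * ||.||_1: shrink each entry toward zero,
    # zeroing anything smaller than tau in magnitude.
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)
```

Small gradients are set exactly to zero while large ones survive (slightly shrunk), which is what makes the recovered shading piecewise smooth.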


Pacific Rim Conference on Multimedia | 2018

Underwater Image Enhancement Using Stacked Generative Adversarial Networks

Xinchen Ye; Hongcan Xu; Xiang Ji; Rui Xu

This paper addresses the problem of joint haze detection and color correction from a single underwater image. We present a framework based on stacked conditional generative adversarial networks (GANs) to learn the mapping between underwater images and in-air images in an end-to-end fashion. The proposed architecture can be divided into two components, i.e., a haze detection sub-network and a color correction sub-network, each with a generator and a discriminator. Specifically, an underwater image is fed into the first generator to produce a haze detection mask. Then, the underwater image along with the predicted mask is fed into the second generator to correct the color of the underwater image. Experimental results show the advantages of our proposed method over several state-of-the-art methods on publicly available synthetic and real underwater datasets.


Pacific Rim Conference on Multimedia | 2018

Retinal Vessel Segmentation via Multiscaled Deep-Guidance

Rui Xu; Guiliang Jiang; Xinchen Ye; Yen-Wei Chen

Retinal vessel segmentation is a fundamental and crucial step in developing a computer-aided diagnosis (CAD) system for retinal images. Retinal vessels appear as multiscaled tubular structures that vary in size, length, and intensity. Due to these vascular properties, it is difficult for prior works to extract tiny vessels, especially when ophthalmic diseases exist. In this paper, we propose a multiscaled deeply-guided neural network, which fully exploits the underlying multiscaled property of retinal vessels to address this problem. Our network is based on an encoder-decoder architecture that performs deep supervision to guide the training of features in layers of different scales, while fusing feature maps in consecutive scale layers via skip-connections. In addition, a residual-based boundary refinement module is adopted to refine vessel boundaries. We evaluate our method on two public databases for retinal vessel segmentation. Experimental results show that our method achieves better performance than five other methods, including three state-of-the-art deep-learning-based methods.


Pacific Rim Conference on Multimedia | 2018

Image Denoising Based on Non-parametric ADMM Algorithm

Xinchen Ye; Mingliang Zhang; Qianyu Yan; Xin Fan; Zhongxuan Luo

Image denoising is one of the most important tasks in image processing. In this paper, we propose a new method called the non-parametric alternating direction method of multipliers (ADMM) algorithm (NPM-ADMM). We utilize the standard ADMM algorithm to solve the noisy image model and update the parameters via back propagation by minimizing the loss function. In contrast to previous methods, which require careful manual parameter tuning to achieve good results, the proposed method automatically learns the related parameters. Furthermore, the filter coefficients and the nonlinear function in the regularization term are also learned together with the parameters, rather than fixed. Experiments on image denoising demonstrate superior results with fast convergence and high restoration quality.
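NPM-ADMM learns the parameters, filters, and nonlinearity; a fixed-parameter 1-D total-variation instance (an illustrative stand-in, not the paper's model) shows the three standard ADMM updates that such a scheme unrolls: the quadratic x-solve, the shrinkage z-step, and the dual ascent.

```python
import numpy as np

def admm_tv_denoise(b, lam=1.0, rho=1.0, iters=200):
    # ADMM for min_x 0.5*||x - b||^2 + lam*||D x||_1, where D is the 1-D
    # finite-difference operator and z = D x is the split variable.
    n = len(b)
    D = np.diff(np.eye(n), axis=0)          # (n-1) x n difference matrix
    A = np.eye(n) + rho * D.T @ D           # x-update system matrix
    x = b.copy()
    z = np.zeros(n - 1)
    u = np.zeros(n - 1)                     # scaled dual variable
    for _ in range(iters):
        x = np.linalg.solve(A, b + rho * D.T @ (z - u))   # quadratic solve
        Dx = D @ x
        z = np.sign(Dx + u) * np.maximum(np.abs(Dx + u) - lam / rho, 0.0)
        u += Dx - z                                       # dual ascent
    return x
```

On a noisy piecewise-constant signal the output is closer to the clean signal than the input; in NPM-ADMM, quantities like lam, rho, and the difference filter would be learned by back propagation instead of fixed.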

Collaboration


Dive into Xinchen Ye's collaborations.

Top Co-Authors

Haojie Li
Dalian University of Technology

Hongcan Xu
Dalian University of Technology

Rui Xu
Ritsumeikan University

Xiang Ji
Dalian University of Technology

Xiangyue Duan
Dalian University of Technology