Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Yaochen Li is active.

Publication


Featured research published by Yaochen Li.


International Conference on Multimedia and Expo | 2011

3D facial mesh detection using geometric saliency of surface

Yaochen Li; Yuehu Liu; Yuanchun Wang; Zhengwang Wu; Yang Yang

This paper proposes a 3D facial mesh detection algorithm based on the geometric saliency of surfaces. Specifically, the geometric saliency of each vertex on a 3D triangle mesh is measured by combining Gaussian-weighted curvature with spin-image correlation. Salient vertices with similar properties are clustered into regions on the saliency map and represented as nodes in a graph model. To detect a 3D facial mesh, initialization and registration steps match each triangle in the graph model to a reference graph corresponding to a 3D reference facial mesh. The match error between the graph model of the test 3D mesh and the reference facial mesh is then computed to classify face and non-face meshes. Experimental results demonstrate that the proposed algorithm is effective in detecting 3D facial meshes and robust to facial expressions and geometric noise.
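As a rough illustration of the idea, per-vertex saliency can be sketched as a Gaussian-weighted average of vertex curvatures fused with a spin-image correlation score. The Gaussian weighting over Euclidean distance and the linear fusion below are illustrative assumptions, not the paper's exact formulation:

```python
import math

def gaussian_weighted_curvature(positions, curvatures, sigma=1.0):
    # For each vertex, average the curvatures of all vertices weighted by a
    # Gaussian of Euclidean distance (a common mesh-saliency construction).
    out = []
    for p in positions:
        num = den = 0.0
        for q, c in zip(positions, curvatures):
            d2 = sum((a - b) ** 2 for a, b in zip(p, q))
            w = math.exp(-d2 / (2 * sigma ** 2))
            num += w * c
            den += w
        out.append(num / den)
    return out

def vertex_saliency(gwc, spin_corr, alpha=0.5):
    # Hypothetical linear fusion of the two cues; the paper's rule may differ.
    return [alpha * g + (1 - alpha) * s for g, s in zip(gwc, spin_corr)]
```

Vertices whose combined score exceeds a threshold would then be clustered into the salient regions that form the graph nodes.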


IEEE Transactions on Intelligent Transportation Systems | 2016

Three-Dimensional Traffic Scenes Simulation From Road Image Sequences

Yaochen Li; Yuehu Liu; Yuanqi Su; Gang Hua; Nanning Zheng

In this paper, we present a novel framework that allows users to tour simulated traffic scenes from the first-person view. Constructing 3-D scenes from road image sequences is in general difficult, due to the intrinsic complexity of dynamic road scenes, which combine a rapidly moving background with numerous surrounding vehicles. With the traffic scene models defined, we first introduce the construction process for simple traffic scenes. After detecting road boundaries with a semantic fast two-cycle (FTC) level set method, we generate control points on the road sides to construct the “floor-wall” background scene, which is subsequently propagated to each frame. Furthermore, we approach cluttered traffic scenes through a three-component processing pipeline: 1) traffic element segmentation; 2) background image inpainting; and 3) traffic scene construction. The traffic elements in the cluttered images are first segmented by the semantic FTC level set method. A Gaussian mixture model is then employed to inpaint the occluded background using optical flows. The cluttered traffic scenes can be constructed after the segmentation and inpainting components. The foreground polygons, such as vehicles and traffic signs, are then modeled, and users can change their viewpoints according to their own interpretations. We present evaluations of each technical component, followed by findings from comprehensive user studies, which demonstrate the effectiveness of the proposed framework in delivering a good touring experience to users.
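The inpainting component can be loosely sketched as filling each occluded pixel from unoccluded observations of the same location in other frames. The temporal median below is a simple stand-in for the paper's GMM-with-optical-flow model, and all names are illustrative:

```python
import statistics

def inpaint_backgrounds(frames, masks):
    # frames: list of grayscale images (row-major nested lists).
    # masks: same shape; True where a traffic element occludes the background.
    h, w = len(frames[0]), len(frames[0][0])
    out = []
    for f, m in zip(frames, masks):
        img = [row[:] for row in f]
        for y in range(h):
            for x in range(w):
                if m[y][x]:
                    # Collect this pixel from frames where it is visible.
                    obs = [g[y][x] for g, mm in zip(frames, masks)
                           if not mm[y][x]]
                    if obs:
                        img[y][x] = statistics.median(obs)
        out.append(img)
    return out
```

The real pipeline would warp observations along optical flow before aggregating them, rather than assuming a static camera as this sketch does.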


Multimedia Tools and Applications | 2018

Weighted motion averaging for the registration of multi-view range scans

Rui Guo; Jihua Zhu; Yaochen Li; Dapeng Chen; Zhongyu Li; Yongqin Zhang

Multi-view registration is a fundamental but challenging task in 3D reconstruction and robot vision. Although the original motion averaging algorithm is an effective means of solving the multi-view registration problem, it does not consider the reliability and accuracy of each relative motion. Accordingly, this paper proposes a novel motion averaging algorithm for multi-view registration. First, it applies a pair-wise registration algorithm to estimate the relative motion and overlapping percentage of each scan pair with a certain degree of overlap. It then treats the overlapping percentage as the weight of the corresponding scan pair, yielding a weighted motion averaging algorithm that pays more attention to reliable and accurate relative motions. By treating each relative motion distinctively, applying weighted motion averaging to multi-view range scans achieves more accurate registration. Experimental results demonstrate the superiority of the proposed approach over state-of-the-art methods in terms of accuracy, robustness, and efficiency.
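In a translation-only simplification, weighted motion averaging reduces to a weighted least-squares problem over the global motions, solvable iteratively. The sketch below anchors scan 0 at the origin and uses Gauss-Seidel updates; it illustrates the overlap-based weighting idea, not the paper's full rigid-motion algorithm:

```python
def weighted_motion_average(rel, n, iters=200):
    # rel maps a scan pair (i, j) to (t_ij, w_ij): the estimated relative
    # translation from scan i to scan j and its overlap-percentage weight.
    # Minimizes sum of w_ij * (t[j] - t[i] - t_ij)^2 with t[0] fixed at 0.
    t = [0.0] * n
    for _ in range(iters):
        for k in range(1, n):
            num = den = 0.0
            for (i, j), (tij, w) in rel.items():
                if j == k:
                    num += w * (t[i] + tij)
                    den += w
                elif i == k:
                    num += w * (t[j] - tij)
                    den += w
            if den:
                t[k] = num / den
    return t
```

Pairs with larger overlap receive larger weights, so their relative motions dominate the averaged solution while unreliable low-overlap pairs contribute less.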


Signal Processing: Image Communication | 2016

Fast two-cycle curve evolution with narrow perception of background for object tracking and contour refinement

Yaochen Li; Yuanqi Su; Yuehu Liu

The problem of object contour tracking in image sequences remains challenging, especially with cluttered backgrounds. In this paper, the fast two-cycle level set method with narrow perception of background (FTCNB) is proposed to extract foreground objects, e.g., vehicles, from road image sequences. The curve evolution of the level set method is implemented by computing the signs of region competition terms on two linked lists of contour pixels rather than by solving partial differential equations (PDEs). The curve evolution process consists of two cycles: one for contour pixel evolution and a second for contour pixel smoothness. Based on this curve evolution process, we introduce two tracking stages for the FTCNB method. For the coarse tracking stage, the speed function is defined by a region competition term combining color and texture features. For the contour refinement stage, which requires higher tracking accuracy, the likelihood models of the maximum a posteriori (MAP) expressions are incorporated into the speed function. Both stages use the fast two-cycle curve evolution process with narrow perception of the background regions. With these definitions, we conduct extensive experiments and comparisons; the comparisons with other baseline methods demonstrate the effectiveness of our work.

Highlights:
- A novel level set method that avoids solving partial differential equations.
- Two tracking levels: object tracking and contour refinement.
- Speed functions based on the feature difference between the foreground and the nearby background.
- A fast two-cycle curve evolution process.
- Extensive evaluations and comparisons with baseline methods.
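The core of the list-based evolution is that each boundary pixel moves according to the sign of a region competition term rather than a PDE solution. A minimal intensity-only version of that decision rule, with hypothetical names, might look like:

```python
def region_competition_sign(pixel, mean_fg, mean_bg):
    # Positive: the pixel fits the foreground model better (switch in);
    # negative: it fits the background better (switch out).
    return (pixel - mean_bg) ** 2 - (pixel - mean_fg) ** 2

def evolve_boundary(boundary_pixels, mean_fg, mean_bg):
    # One sweep over the list of candidate boundary pixels, partitioning
    # them by the sign of the competition term.
    switch_in, switch_out = [], []
    for idx, value in boundary_pixels:
        s = region_competition_sign(value, mean_fg, mean_bg)
        if s > 0:
            switch_in.append(idx)
        elif s < 0:
            switch_out.append(idx)
    return switch_in, switch_out
```

The actual method maintains two linked lists of contour pixels and restricts the background statistics to a narrow band near the contour, which this sketch omits.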


Workshop on Applications of Computer Vision | 2015

Autonomous Driving Simulation for Unmanned Vehicles

Danchen Zhao; Yuehu Liu; Chi Zhang; Yaochen Li

Humans can judge a driver's ability by observing the vehicle's motion in different traffic scenes. Likewise, driving behavior can be the main basis for evaluating the performance of an unmanned vehicle in both field tests and simulation tests. Although simulation testing avoids the disadvantages of field testing, existing simulation systems lack traffic scene data at the perception granularity of visual sensors. To realize vehicle-in-the-loop simulation, the simulation of driving behaviors must be able to exhibit the actual motion of unmanned vehicles. In this paper, we propose an automatic approach to simulating the autonomous driving behaviors of vehicles in traffic scenes represented by image sequences. Unlike general simulation systems, we use actual traffic environment data to build the traffic scene and simulate the driving behaviors. After the proposed method was embedded in a scene browser, a typical traffic scene including intersections was chosen for a virtual vehicle to execute the driving tasks of lane change, overtaking, slowing down and stopping, right turn, and U-turn. The experimental results show that different driving behaviors of vehicles in a typical traffic scene can be exhibited smoothly and realistically. Our method can also be used to generate simulation data for traffic scenes that are difficult to collect.


International Conference on Multimedia and Expo | 2015

Fast Two-Cycle level set tracking with narrow perception of background

Yaochen Li; Yuanqi Su; Yuehu Liu

The problem of tracking foreground objects in a video sequence with a moving background remains challenging. In this paper, we propose the Fast Two-Cycle level set method with Narrow band Background (FTCNB) to automatically extract foreground objects in such video sequences. The level set curve evolution process consists of two successive cycles: one for the data-dependent term and a second for smoothness regularization. The curve evolution is implemented by computing the signs of region competition terms on two linked lists of contour pixels rather than solving any partial differential equations (PDEs). Maximum a posteriori (MAP) optimization is applied in the FTCNB method for curve refinement with the assistance of optical flows. The comparison with other level set methods demonstrates the tracking accuracy of our method, which also outperforms traditional level set methods in tracking speed.


International Service Availability Symposium | 2011

Vertex saliency computation from 3D facial meshes

Yaochen Li; Yuehu Liu; Xiao Huang; Ming Hou; Yang Yang

This paper presents a novel method to compute vertex saliency from 3D facial meshes. Among the many vertex descriptors, spin-image correlation and curvature are used, combined through competition and cooperation mechanisms. The proposed method has been tested on the IAIR-3Dface database to evaluate the generated salient regions on 3D facial meshes. Experimental results demonstrate that both the abundance ratio and the accuracy of the salient regions are improved by the cooperation mechanisms compared with the competition mechanisms.


International Conference on Image Processing | 2011

Fractal image coding using SSIM

Jianji Wang; Yuehu Liu; Ping Wei; Zhiqiang Tian; Yaochen Li; Nanning Zheng


Electronics Letters | 2011

Improved fast LBG training algorithm in Hadamard domain

Z.B. Pan; G.H. Yu; Yaochen Li


International Conference on Intelligent Transportation Systems | 2014

The “Floor-Wall” Traffic Scenes Construction for Unmanned Vehicle Simulation Evaluation

Yaochen Li; Yuehu Liu; Chi Zhang; Danchen Zhao; Nanning Zheng

Collaboration


Dive into Yaochen Li's collaborations.

Top Co-Authors

Yuehu Liu, Xi'an Jiaotong University
Jihua Zhu, Xi'an Jiaotong University
Shanmin Pang, Xi'an Jiaotong University
Yuanqi Su, Xi'an Jiaotong University
Zhongyu Li, University of North Carolina at Charlotte
Nanning Zheng, Xi'an Jiaotong University
Huimin Lu, Kyushu Institute of Technology
Chi Zhang, Xi'an Jiaotong University
Congcong Jin, Xi'an Jiaotong University