Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Yen-Lun Chen is active.

Publication


Featured research published by Yen-Lun Chen.


Neurocomputing | 2012

An energy model approach to people counting for abnormal crowd behavior detection

Guogang Xiong; Jun Cheng; Xinyu Wu; Yen-Lun Chen; Yongsheng Ou; Yangsheng Xu

Abnormal crowd behavior detection plays an important role in surveillance applications. We propose a camera-parameter-independent and perspective-distortion-invariant approach to detect two types of abnormal crowd behavior: people gathering and people running. Since people counting is necessary for detecting abnormal crowd behavior, we present a potential-energy-based model to estimate the number of people in public scenes. Building histograms on the X- and Y-axes, respectively, we obtain the probability distribution of the foreground objects and then define a crowd entropy. Combining the people-counting results with the crowd entropy, we define the Crowd Distribution Index, which represents the spatial distribution of the crowd, and set a threshold on it to detect people gathering. To detect people running, a kinetic energy is determined from the optical flow and the Crowd Distribution Index and thresholded in the same way. To evaluate the algorithm, videos of different scenes and crowd densities are used in the experiments. Without camera calibration or training data, our method robustly detects abnormal behaviors with a low computational load.
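The histogram-and-entropy step can be sketched as follows; `crowd_entropy` and the toy masks are hypothetical names for illustration, not the authors' code:

```python
from math import log2

def crowd_entropy(mask):
    """Sum of Shannon entropies of the foreground projections.

    `mask` is a 2D list of 0/1 foreground values. Summing it along the
    rows and columns gives histograms on the Y- and X-axes; normalizing
    each histogram yields a probability distribution whose entropy
    measures how evenly the crowd is spread along that axis.
    """
    rows = [sum(r) for r in mask]            # projection onto Y
    cols = [sum(c) for c in zip(*mask)]      # projection onto X
    total = 0.0
    for hist in (rows, cols):
        s = float(sum(hist))
        p = [h / s for h in hist if h > 0]   # drop empty bins
        total -= sum(pi * log2(pi) for pi in p)
    return total

# A tight 2x2 blob is less spread out than a diagonal of single
# pixels, so its entropy is lower.
blob = [[1 if 3 <= r <= 4 and 3 <= c <= 4 else 0 for c in range(8)]
        for r in range(8)]
diag = [[1 if r == c else 0 for c in range(8)] for r in range(8)]
```

A gathering crowd concentrates the histograms into a few bins, lowering the entropy, which is what the threshold on the Crowd Distribution Index picks up.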


Journal of Intelligent and Robotic Systems | 2015

Online Dynamic Gesture Recognition for Human Robot Interaction

Dan Xu; Xinyu Wu; Yen-Lun Chen; Yangsheng Xu

This paper presents an online dynamic hand gesture recognition system based on an RGB-D camera, which automatically recognizes hand gestures against complicated backgrounds. For background subtraction, we use a model-based method to perform human detection and segmentation in the depth map. Since robust hand tracking is crucial for recognition performance, our system uses both color and depth information in the tracking process. To extract spatio-temporal hand gesture sequences from the trajectory, a reliable gesture spotting scheme based on detecting changes of static postures is proposed. Discrete HMMs with a Left-Right Banded (LRB) topology are then utilized to model and classify gestures based on a multi-feature representation and quantization of the gesture sequences. Experimental evaluations on two self-built databases of dynamic hand gestures show the effectiveness of the proposed system. Furthermore, we develop a human-robot interactive system whose performance is demonstrated through interactive experiments in a dynamic environment.
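The Left-Right Banded topology constrains the HMM transition matrix to a narrow upper-triangular band; a minimal sketch, where the function name and the uniform initialization are illustrative assumptions:

```python
def lrb_transition_matrix(n_states, delta=1):
    """Initial transition matrix for a Left-Right Banded (LRB) HMM:
    from state i the model may only stay in i or advance to one of the
    next `delta` states, so the matrix is upper-triangular with a
    narrow band. Probabilities start uniform and would be re-estimated
    by Baum-Welch during training."""
    A = [[0.0] * n_states for _ in range(n_states)]
    for i in range(n_states):
        allowed = range(i, min(i + delta + 1, n_states))
        for j in allowed:
            A[i][j] = 1.0 / len(allowed)
    return A
```

The band forces state sequences to move monotonically through the gesture, which suits trajectories that never revisit an earlier phase.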


Robotics and Biomimetics | 2012

Real-time dynamic gesture recognition system based on depth perception for robot navigation

Dan Xu; Yen-Lun Chen; Chuan Lin; Xin Kong; Xinyu Wu

Natural human-robot interaction based on dynamic hand gestures has become a popular research topic in recent years. Traditional dynamic gesture recognition methods are usually hampered by illumination conditions, color variation, and cluttered backgrounds. Recognition performance can be improved with wearable devices, but these prevent natural, barrier-free interaction. To overcome these shortcomings, a depth perception algorithm based on the Kinect depth sensor is introduced to carry out 3D hand tracking. We propose a novel start/end point detection method for segmenting the 3D hand gesture from the hand motion trajectory. Hidden Markov Models (HMMs) are then implemented to model and classify the hand gesture sequences, and the recognized gestures are converted into control commands for interaction with the robot. Seven different hand gestures performed by two hands suffice to navigate the robot. Experiments show that the proposed dynamic hand gesture interaction system works effectively in complex environments and in real time, with an average recognition rate of 98.4%. Further experiments on robot navigation also verify the robustness of our system.
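The start/end point detection idea, segmenting gestures where the hand rests in a static posture, might look roughly like this; the thresholds and names are illustrative, not taken from the paper:

```python
def spot_gestures(trajectory, pause_thresh=0.05, min_pause=3):
    """Segment a hand trajectory into gestures by detecting pauses.

    A gesture is assumed to start when the hand leaves a static
    posture (inter-frame displacement above `pause_thresh`) and to end
    when it comes to rest again for at least `min_pause` frames.
    `trajectory` is a list of (x, y, z) hand positions per frame.
    """
    def speed(a, b):
        return sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5

    segments, start, still = [], None, 0
    for i in range(1, len(trajectory)):
        moving = speed(trajectory[i - 1], trajectory[i]) > pause_thresh
        if moving:
            if start is None:
                start = i - 1                # gesture begins
            still = 0
        elif start is not None:
            still += 1
            if still >= min_pause:           # sustained rest: gesture ends
                segments.append((start, i - still))
                start, still = None, 0
    if start is not None:                    # trajectory ended mid-gesture
        segments.append((start, len(trajectory) - 1))
    return segments
```

Each returned (start, end) index pair would then be fed to the HMM classifier as one candidate gesture.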


International Conference on Information and Automation | 2011

Abnormal crowd behavior detection based on the energy model

Guogang Xiong; Xinyu Wu; Yen-Lun Chen; Yongsheng Ou

In this paper, we present a novel method to detect two typical abnormal activities: pedestrian gathering and pedestrian running. The method is based on potential and kinetic energy. Reliable estimation of crowd density and crowd distribution is first introduced into the detection of anomalies. The crowd density estimate is obtained from an image potential energy model. By building the foreground histograms on the X- and Y-axes, respectively, the probability distribution of each histogram can be obtained, from which we define the Crowd Distribution Index (CDI) to represent the dispersion of the crowd. The CDI is used to detect pedestrian gathering. The kinetic energy is determined from the optical flow together with the CDI, and is then used to detect people running. Detection of both abnormal activities is based on threshold analysis. Without training data, the model robustly detects abnormal behaviors at low and medium crowd densities with a low computational load.
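The kinetic-energy side of the model can be sketched as below, assuming the flow vectors come from a separate optical-flow step and that the CDI enters as a multiplicative weight (an assumption made for illustration):

```python
def kinetic_energy(flow, cdi):
    """Mean squared magnitude of the optical-flow vectors, weighted by
    the Crowd Distribution Index (CDI). `flow` is a list of (u, v)
    motion vectors on foreground pixels."""
    if not flow:
        return 0.0
    return cdi * sum(u * u + v * v for u, v in flow) / len(flow)

def is_running(flow, cdi, thresh=2.0):
    # Threshold analysis: kinetic energy above `thresh` flags running.
    return kinetic_energy(flow, cdi) > thresh
```

A calm crowd yields small flow magnitudes and a low energy, while running pushes the energy past the threshold.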


IEEE Communications Letters | 2006

A simple coefficient test for cubic permutation polynomials over integer rings

Yen-Lun Chen; Jonghoon Ryu; Oscar Y. Takeshita

Permutation polynomials have been extensively studied but simple coefficient tests for permutation polynomials over integer rings are only known for limited cases. In this letter, a simple necessary and sufficient coefficient test is proven for cubic permutation polynomials over integer rings. A possible application is in the design of interleavers for turbo codes.
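The letter's closed-form coefficient test is not reproduced here, but the property it characterizes can be checked directly from the definition for small moduli, e.g. when screening candidate turbo-code interleavers:

```python
def is_permutation_polynomial(a3, a2, a1, n):
    """Check by definition whether f(x) = a3*x^3 + a2*x^2 + a1*x
    permutes the integer ring Z_n (a constant term only shifts the
    output, so it can be dropped). Exhaustive in n, so only practical
    for small moduli."""
    values = {(a3 * x ** 3 + a2 * x ** 2 + a1 * x) % n for x in range(n)}
    return len(values) == n                  # bijective iff n distinct values
```

The value of a coefficient test is precisely that it replaces this O(n) sweep with a constant-time condition on (a1, a2, a3) and the factorization of n.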


International Conference on Information Science and Technology | 2014

Dynamic gesture recognition using 3D trajectory

Qianqian Wang; Yuanrong Xu; Xiao Bai; Dan Xu; Yen-Lun Chen; Xinyu Wu

In this paper, we propose an effective method that recognizes dynamic hand gestures by analyzing motion trajectories captured by a Leap Motion controller in three-dimensional space. A simple gesture spotting scheme is applied, and after pre-processing the data, the orientation characteristics are quantized and coded as features. An improved discrete HMM algorithm is then utilized to model and classify gestures. Experimental results on a self-built database of dynamic hand gestures (numbers 0-9) demonstrate the effectiveness of the proposed method.
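The orientation quantization step can be sketched as follows; eight sectors of 45 degrees is an illustrative choice, and the paper's exact coding may differ:

```python
from math import atan2, pi

def orientation_codes(points, n_bins=8):
    """Quantize the direction of each step of a 2D trajectory into one
    of `n_bins` codewords, producing the discrete observation sequence
    that a discrete HMM consumes."""
    codes = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        angle = atan2(y1 - y0, x1 - x0) % (2 * pi)   # step direction in [0, 2*pi)
        codes.append(int(angle / (2 * pi / n_bins)) % n_bins)
    return codes
```

Coding directions rather than raw coordinates makes the sequence invariant to where in the sensor's field of view the gesture is performed.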


International Conference on Image Processing | 2013

Hierarchical activity discovery within spatio-temporal context for video anomaly detection

Dan Xu; Xinyu Wu; Dezhen Song; Nannan Li; Yen-Lun Chen

In this paper, we present a novel approach for video anomaly detection in crowded and complicated scenes. The proposed approach detects anomalies based on a hierarchical activity pattern discovery framework that comprehensively considers both global and local spatio-temporal contexts. The discovery is a coarse-to-fine learning process that automatically constructs normal activity patterns at different levels in an unsupervised way. A unified anomaly energy function is designed based on these discovered activity patterns to quantify the abnormality of an input motion pattern. We demonstrate the efficiency of the proposed method on the UCSD anomaly detection datasets (Ped1 and Ped2) and compare its performance with existing work.


Neurocomputing | 2013

Classification-based learning by particle swarm optimization for wall-following robot navigation

Yen-Lun Chen; Jun Cheng; Chuan Lin; Xinyu Wu; Yongsheng Ou; Yangsheng Xu

In this paper, we study parameter setting for a set of intelligent multi-category classifiers in wall-following robot navigation. Based on swarm optimization theory, a particle selection approach is proposed to search for the optimal parameters, a key property of this set of multi-category classifiers. The particle swarm search obtains higher classification accuracy with significant savings in training time compared to a conventional grid search. For wall-following robot navigation, the best accuracy (98.8%) is achieved by the particle swarm search using only a quarter of the grid search's training time. By communicating the social information available in particle swarms during training, classification-based learning achieves higher classification accuracy without premature convergence. One such classifier has been implemented on the SIAT mobile robot, and experimental results validate the proposed search scheme for optimal parameter settings.
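A minimal particle swarm search illustrating the shared-global-best ("social information") update; in the paper the fitness would be cross-validated classifier accuracy over its parameters, while here any function over a box works, and the names and update constants are illustrative:

```python
import random

def pso_search(fitness, bounds, n_particles=20, n_iter=50,
               w=0.7, c1=1.5, c2=1.5):
    """Maximize `fitness` over the box `bounds` = [(lo, hi), ...].
    Each particle blends inertia (w), attraction to its personal best
    (c1), and attraction to the swarm's global best (c2)."""
    random.seed(0)                           # deterministic demo run
    dim = len(bounds)
    pos = [[random.uniform(lo, hi) for lo, hi in bounds]
           for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_f = [fitness(p) for p in pos]
    g = max(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(n_iter):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                lo, hi = bounds[d]
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
            f = fitness(pos[i])
            if f > pbest_f[i]:               # personal best improved
                pbest[i], pbest_f[i] = pos[i][:], f
                if f > gbest_f:              # global best improved
                    gbest, gbest_f = pos[i][:], f
    return gbest, gbest_f
```

Because every particle sees `gbest`, the swarm concentrates evaluations near promising parameter regions instead of sweeping a full grid, which is where the reported training-time savings come from.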


Robotics and Biomimetics | 2011

Integrated approach of skin-color detection and depth information for hand and face localization

Dan Xu; Yen-Lun Chen; Xinyu Wu; Yongsheng Ou; Yangsheng Xu

Real-time hand and face localization is a challenging problem in many application scenarios, such as face and gesture recognition in computer vision and robotics. This paper proposes a robust method that locates multiple faces and hands simultaneously, in real time, under changing illumination and complex backgrounds, by combining skin-color detection and K-means clustering with stereoscopic depth information. Skin-color detection in the color image uses an elliptical boundary model to identify candidate regions of human skin. The depth map captured by the Kinect depth sensor is used to segment the human body, removing skin-colored noise in the background and separating the hands from the face based on their positions. The K-means algorithm then clusters the detected skin-color pixels into three blobs to obtain the center points of the hands and the face. Experimental results demonstrate that the proposed method is robust over various body postures and complex environments in video sequences.
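The clustering step, grouping skin pixels into three blobs for the two hands and the face, is plain k-means; this sketch uses farthest-point seeding to stay deterministic, which is an implementation choice, not the paper's:

```python
def kmeans(points, k=3, n_iter=20):
    """Lloyd's k-means over detected skin-color pixel coordinates,
    returning the k blob centers (k = 3: two hands and the face)."""
    def d2(p, q):
        return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2

    centers = [points[0]]
    while len(centers) < k:                  # seed with farthest points
        centers.append(max(points,
                           key=lambda p: min(d2(p, c) for c in centers)))
    for _ in range(n_iter):
        clusters = [[] for _ in range(k)]
        for p in points:                     # assign to nearest center
            clusters[min(range(k),
                         key=lambda j: d2(p, centers[j]))].append(p)
        for j, cl in enumerate(clusters):
            if cl:                           # keep old center if empty
                centers[j] = (sum(x for x, _ in cl) / len(cl),
                              sum(y for _, y in cl) / len(cl))
    return centers
```

The returned centers are the candidate hand and face positions; depth-based body segmentation beforehand keeps background skin-colored pixels from pulling the centers away.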


Robotics and Biomimetics | 2011

Binocular vision positioning for robot grasping

Hao Li; Yen-Lun Chen; Tianhai Chang; Xinyu Wu; Yongsheng Ou; Yangsheng Xu

Through position estimation with binocular vision, a robot can accurately obtain the three-dimensional position of an object. The camera calibration and stereo matching algorithms are described in detail, and two camera calibration methods are compared and analyzed. Experiments show that the method is effective for robot grasping. Research directions for future improvements are also presented.
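For a calibrated, rectified stereo pair, the position estimation reduces to triangulation from disparity; this sketch assumes image coordinates already measured from the principal point and ignores lens distortion:

```python
def triangulate(xl, xr, y, f, baseline):
    """Recover a 3D point from a rectified binocular pair.

    For a point at column xl in the left image and xr in the right,
    the disparity d = xl - xr gives the depth Z = f * B / d; X and Y
    then follow by similar triangles. `f` is the focal length in
    pixels, `baseline` the camera separation B in meters.
    """
    d = xl - xr
    if d <= 0:
        raise ValueError("non-positive disparity")
    Z = f * baseline / d
    X = xl * Z / f
    Y = y * Z / f
    return X, Y, Z
```

This is why calibration accuracy matters for grasping: errors in f or B scale the recovered depth directly.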

Collaboration


Dive into Yen-Lun Chen's collaborations.

Top Co-Authors

Xinyu Wu (Chinese Academy of Sciences)
Yongsheng Ou (Chinese Academy of Sciences)
Yangsheng Xu (The Chinese University of Hong Kong)
Nannan Li (Chinese Academy of Sciences)
Dan Xu (University of Trento)
Qianqian Wang (Chinese Academy of Sciences)
Yuanrong Xu (Chinese Academy of Sciences)
Chuan Lin (Chinese Academy of Sciences)
Guogang Xiong (Chinese Academy of Sciences)