Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Yixiao Yun is active.

Publication


Featured research published by Yixiao Yun.


Computer Vision and Image Understanding | 2016

Human fall detection in videos via boosting and fusing statistical features of appearance, shape and motion dynamics on Riemannian manifolds with applications to assisted living

Yixiao Yun; Irene Yu-Hua Gu

Highlights: Dynamic features of a falling person are represented as points moving on manifolds. Human falls are characterized by velocity statistics based on geodesics. Statistical features are fused by sub-ensemble learning under a boosting framework. Comparable results are obtained to those of multi-camera and multi-modal methods.

This paper addresses issues in fall detection from videos. It is commonly observed that a falling person undergoes large appearance change, shape deformation and physical displacement, so the focus here is on the analysis of these dynamic features, which vary drastically in camera views while a person falls onto the ground. A novel approach is proposed that performs such analysis on Riemannian manifolds, detecting falls from a single camera with arbitrary view angles. The main novelties of this paper include: (a) representing the dynamic appearance, shape and motion of a target person, each as points moving on a different Riemannian manifold; (b) characterizing the dynamics of different features by computing velocity statistics of their corresponding manifold points, based on geodesic distances; (c) employing a feature weighting approach, where each statistical feature is weighted according to its mutual information; (d) fusing statistical features learned from different manifolds with a two-stage ensemble learning strategy under a boosting framework. Experiments have been conducted on two video datasets for fall detection. Tests, evaluations and comparisons with 6 state-of-the-art methods support the effectiveness of the proposed method.
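Novelty (b), velocity statistics from geodesic distances, can be made concrete with a small sketch. This is a minimal illustration, not the authors' code: per-frame descriptors are L2-normalized onto a unit n-sphere, and the trajectory is summarized by statistics of geodesic step lengths; the descriptor dimension and the choice of statistics are assumptions.

```python
# Minimal sketch (not the authors' code): per-frame feature vectors are
# mapped onto a unit n-sphere, and the dynamics of the trajectory are
# summarized by statistics of geodesic step lengths between frames.
import numpy as np

def to_sphere(features):
    """L2-normalize per-frame descriptors (T x n) onto the unit n-sphere."""
    norms = np.linalg.norm(features, axis=1, keepdims=True)
    return features / np.maximum(norms, 1e-12)

def geodesic_velocity_stats(points, dt=1.0):
    """Velocity statistics of a trajectory on the unit sphere, where the
    geodesic distance between unit vectors x and y is arccos(<x, y>)."""
    dots = np.clip(np.sum(points[:-1] * points[1:], axis=1), -1.0, 1.0)
    velocities = np.arccos(dots) / dt          # geodesic step length per frame
    return np.array([velocities.mean(), velocities.std(), velocities.max()])

# A fall should show up as large geodesic steps (abrupt appearance change);
# the 128-dimensional descriptors below are purely hypothetical.
descriptors = np.random.rand(50, 128)
stats = geodesic_velocity_stats(to_sphere(descriptors))
```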


Neurocomputing | 2016

Human fall detection in videos by fusing statistical features of shape and motion dynamics on Riemannian manifolds

Yixiao Yun; Irene Yu-Hua Gu

This paper addresses issues in fall detection in videos. We propose a novel method to detect human falls from arbitrary view angles by analyzing the dynamic shape and motion of image regions of human bodies on Riemannian manifolds. The proposed method exploits time-dependent dynamic features on smooth manifolds, based on the observation that human falls often involve drastic shape changes and abrupt motions compared with other activities. The main novelties of this paper include: (a) representing videos of human activities by dynamic shape points and motion points moving on two separate unit n-spheres, i.e., two simple Riemannian manifolds; (b) characterizing the dynamic shape and motion of each video activity by computing velocity statistics on the two manifolds, based on geodesic distances; (c) combining the statistical features of dynamic shape and motion learned from their corresponding manifolds via mutual information. Experiments were conducted on three video datasets, containing 400 videos of 5 activities, 100 videos of 4 activities, and 768 videos of 3 activities, respectively, where videos were captured from cameras at different view angles. Our tests show a high detection rate (99.38% on average) and a low false-alarm rate (1.84% on average). Comparisons with eight state-of-the-art methods provide further support to the proposed method.
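Novelty (c), combining shape and motion statistics via mutual information, might look roughly like the sketch below; the MI estimator (scikit-learn's mutual_info_classif) and the normalization into a convex combination are illustrative assumptions, not details taken from the paper.

```python
# Hedged sketch: weight each statistical feature by its estimated mutual
# information with the class labels, then reweight the features before
# feeding them to a classifier.
import numpy as np
from sklearn.feature_selection import mutual_info_classif

def mi_weighted_features(shape_stats, motion_stats, labels):
    """shape_stats, motion_stats: (samples, features) arrays of velocity
    statistics from the two manifolds; labels: class labels per sample."""
    X = np.hstack([shape_stats, motion_stats])
    mi = mutual_info_classif(X, labels)        # one MI estimate per feature
    weights = mi / (mi.sum() + 1e-12)          # normalize to sum to one
    return X * weights                         # MI-weighted feature vectors
```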


International Conference on Image Processing | 2015

Human fall detection via shape analysis on Riemannian manifolds with applications to elderly care

Yixiao Yun; Irene Yu-Hua Gu

This paper addresses issues in fall detection from videos. The focus is on the analysis of human shapes, which deform drastically in camera views while a person falls onto the ground. A novel approach is proposed that performs fall detection from an arbitrary view angle, via shape analysis on a unified Riemannian manifold for different camera views. The main novelties of this paper include: (a) representing dynamic shapes as points moving on a unit n-sphere, one of the simplest Riemannian manifolds; (b) characterizing the deformation of shapes by computing velocity statistics of their corresponding manifold points, based on geodesic distances on the manifold. Experiments have been conducted on two publicly available video datasets for fall detection. Tests, evaluations and comparisons with 6 existing methods show the effectiveness of the proposed method.


International Conference on Pattern Recognition | 2014

Graph Construction for Salient Object Detection in Videos

Keren Fu; Irene Yu-Hua Gu; Yixiao Yun; Chen Gong; Jie Yang

Many graph-based salient region/object detection methods have been developed recently. They are rather effective for still images, but little attention has been paid to salient region detection in videos, which is what this paper addresses. A unified approach towards graph construction for salient object detection in videos is proposed. The proposed method combines static appearance and motion cues to construct the graph, enabling a direct extension of original graph-based salient region detection to video processing. To maintain coherence both within and across frames, a spatio-temporal smoothing operation is proposed on a structured graph derived from consecutive frames. The effectiveness of the proposed method is tested and validated on seven videos from two video datasets.
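One plausible reading of the combined graph construction is sketched below, under stated assumptions: nodes are image regions, edge affinities mix an appearance (mean color) cue with a motion (mean optical flow) cue, and inter-frame links between consecutive frames provide the spatio-temporal smoothing structure. The parameters lam and sigma are hypothetical.

```python
# Hedged sketch (an assumption, not the published formulation): nodes are
# image regions; edge affinity mixes a mean-color cue with a mean-flow cue;
# `adjacency` may contain both intra-frame and inter-frame links.
import numpy as np

def edge_affinity(color_i, color_j, flow_i, flow_j, lam=1.0, sigma=0.1):
    """Affinity combining appearance and motion distances (lam, sigma are
    hypothetical parameters balancing the two cues)."""
    d = np.sum((color_i - color_j) ** 2) + lam * np.sum((flow_i - flow_j) ** 2)
    return np.exp(-d / (2 * sigma ** 2))

def build_affinity_matrix(colors, flows, adjacency):
    """Symmetric affinity matrix W over all regions of consecutive frames."""
    n = len(colors)
    W = np.zeros((n, n))
    for i, j in adjacency:
        W[i, j] = W[j, i] = edge_affinity(colors[i], colors[j],
                                          flows[i], flows[j])
    return W
```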


International Conference on Signal Processing | 2014

Head pose classification by multi-class AdaBoost with fusion of RGB and depth images

Yixiao Yun; Mohamed Hashim Changrampadi; Irene Yu-Hua Gu

This paper addresses issues in multi-class visual object classification, where sequential learning and sensor fusion are exploited in a unified framework. We propose a novel method for head pose classification using RGB and depth images. The main contribution of this paper is a multi-class AdaBoost classification framework in which information obtained from the RGB and depth modalities interactively complements each other. This is achieved by learning weak hypotheses for the RGB and depth modalities independently, with the same sampling weights in the boosting structure, and then fusing them by learning a sub-ensemble. Experiments are conducted on a Kinect RGB-D face image dataset containing 4098 face images in 5 different poses. Results show a high classification rate (99.76%) with low false alarms on the dataset.
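The shared-weight boosting structure could be sketched roughly as below: a SAMME-style loop in which one weak hypothesis per modality is learned per round with the same sample weights, and the round's sub-ensemble fuses them by vote averaging. The decision stumps and the vote-averaging fusion rule are assumptions standing in for unspecified details.

```python
# Hedged sketch of a SAMME-style boosting loop with shared sample weights.
# y must be integer labels in 0..K-1; K is the number of pose classes.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def boosted_fusion(X_rgb, X_depth, y, K, rounds=50):
    n = len(y)
    w = np.full(n, 1.0 / n)                    # shared sampling weights
    ensemble = []
    for _ in range(rounds):
        h_rgb = DecisionTreeClassifier(max_depth=1).fit(X_rgb, y, sample_weight=w)
        h_dep = DecisionTreeClassifier(max_depth=1).fit(X_depth, y, sample_weight=w)
        # Fuse the two weak hypotheses into this round's sub-ensemble.
        votes = np.eye(K)[h_rgb.predict(X_rgb)] + np.eye(K)[h_dep.predict(X_depth)]
        pred = votes.argmax(axis=1)
        err = np.sum(w * (pred != y)) / np.sum(w)
        if err >= 1.0 - 1.0 / K:               # no better than random guessing
            break
        alpha = np.log((1 - err) / max(err, 1e-12)) + np.log(K - 1)
        w *= np.exp(alpha * (pred != y))       # re-emphasize misclassified samples
        w /= w.sum()
        ensemble.append((alpha, h_rgb, h_dep))
    return ensemble
```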


British Machine Vision Conference | 2014

Adaptive Multi-Level Region Merging for Salient Object Detection

Keren Fu; Chen Gong; Yixiao Yun; Yijun Li; Irene Yu-Hua Gu; Jie Yang; Jingyi Yu

Most existing salient object detection algorithms face the problem of either under- or over-segmenting an image. More recent methods address the problem via multi-level segmentation; however, the number of segmentation levels is manually predetermined and works well only on a specific class of images. In this paper, a new salient object detection scheme is presented based on adaptive multi-level region merging. A graph-based merging scheme is developed to reassemble regions based on their shared contour strength. This merging process adapts to complete the contours of salient objects, which can then be used for global perceptual analysis, e.g., foreground/background separation. Such contour completion is enhanced by graph-based spectral decomposition. We show that even though simple saliency measurements are adopted for each region, encouraging performance can be obtained after cross-level integration. Experiments comparing with 13 existing methods on three benchmark datasets (MSRA-1000, SOD and SED) show that the proposed method yields uniform object enhancement and achieves state-of-the-art performance.
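A minimal sketch of the greedy merge step follows: adjacent region pairs with the weakest shared contour are merged first. The union-find structure and the fixed threshold are assumptions; the paper's adaptive multi-level thresholds and the recomputation of boundary strength after each merge are omitted for brevity.

```python
# Rough sketch: repeatedly merge the adjacent region pair whose shared
# contour is weakest, until all remaining boundaries exceed the threshold.
import heapq

def merge_regions(boundary_strength, threshold):
    """boundary_strength: dict mapping a region pair (i, j) to the mean
    contour strength along their shared boundary."""
    parent = {}
    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]      # path compression
            x = parent[x]
        return x
    heap = [(s, i, j) for (i, j), s in boundary_strength.items()]
    heapq.heapify(heap)
    while heap:
        s, i, j = heapq.heappop(heap)
        if s > threshold:
            break                              # remaining boundaries are strong
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj                    # merge across the weak boundary
    return {r: find(r) for pair in boundary_strength for r in pair}
```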


International Conference on Acoustics, Speech, and Signal Processing | 2012

Multi-view face pose classification by boosting with weak hypothesis fusion using visual and infrared images

Yixiao Yun; Irene Yu-Hua Gu

This paper proposes a novel method for multi-view face pose classification through sequential learning and sensor fusion. The basic idea is to use face images observed in the visual and thermal infrared (IR) bands, with the same sampling weights, in a multi-class boosting structure. The main contribution of this paper is a multi-class AdaBoost classification framework in which information obtained from the visual and infrared bands interactively complements each other. This is achieved by learning weak hypotheses for the visual and IR bands independently and then fusing the optimized hypothesis sub-ensembles. In addition, an effective feature descriptor is introduced for thermal IR images. Experiments are conducted on a visual and thermal IR image dataset containing 4844 face images in 5 different poses. Results show a significant increase in classification rate compared with an existing multi-class AdaBoost algorithm, SAMME, trained on visual or infrared images alone, as well as with a simple baseline classification-fusion algorithm.


International Conference on Image Processing | 2013

Riemannian manifold-based support vector machine for human activity classification in images

Yixiao Yun; Irene Yu-Hua Gu; Hamid K. Aghajan

This paper addresses the issue of classifying human activities in still images. We propose a novel method in which part-based features focusing on human-object interaction are utilized for activity representation, and classification is designed on manifolds by exploiting the underlying Riemannian geometry. The main contributions of the paper include: (a) representing human activities by appearance features from image patches containing hands, and by structural features formed from the distances between the torso and the patch centers; (b) formulating an SVM kernel function based on geodesics on Riemannian manifolds under the log-Euclidean metric; (c) applying a multi-class SVM classifier on the manifold under the one-against-all strategy. Experiments were conducted on a dataset containing 2750 images of 7 classes of activities from 10 subjects. Results show good performance (average classification rate of 95.83%, false positive rate of 0.71%). Comparisons with three other related classifiers provide further support to the proposed method.
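Contribution (b) suggests a kernel of the form K(X, Y) = exp(-gamma * ||log X - log Y||_F^2) over symmetric positive definite descriptors. The sketch below builds such a precomputed Gram matrix and feeds it to a one-against-all SVM; gamma and the scikit-learn wrapper are illustrative choices, not details from the paper.

```python
# Hedged sketch: a log-Euclidean geodesic kernel between SPD (e.g.,
# covariance) descriptors, used as a precomputed Gram matrix for an SVM.
import numpy as np
from scipy.linalg import logm
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import SVC

def log_euclidean_gram(covs_a, covs_b, gamma=1.0):
    """K[i, j] = exp(-gamma * ||logm(A_i) - logm(B_j)||_F^2)."""
    logs_a = [np.real(logm(C)) for C in covs_a]   # real-valued for SPD inputs
    logs_b = [np.real(logm(C)) for C in covs_b]
    K = np.zeros((len(logs_a), len(logs_b)))
    for i, La in enumerate(logs_a):
        for j, Lb in enumerate(logs_b):
            K[i, j] = np.exp(-gamma * np.linalg.norm(La - Lb, 'fro') ** 2)
    return K

# Usage on hypothetical data: one-against-all multi-class SVM on the manifold.
# clf = OneVsRestClassifier(SVC(kernel='precomputed'))
# clf.fit(log_euclidean_gram(train_covs, train_covs), train_labels)
# preds = clf.predict(log_euclidean_gram(test_covs, train_covs))
```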


IEEE Journal on Emerging and Selected Topics in Circuits and Systems | 2013

Multi-View ML Object Tracking With Online Learning on Riemannian Manifolds by Combining Geometric Constraints

Yixiao Yun; Irene Yu-Hua Gu; Hamid K. Aghajan

This paper addresses issues in object tracking under occlusion, where multiple uncalibrated cameras with overlapping fields of view are exploited. We propose a novel method in which tracking is first done independently in each individual view, and the tracking results are then mapped between views to improve the tracking jointly. The proposed tracker assumes that objects are visible in at least one view and move upright on a common planar ground, which induces a homography relation between views. A method for online learning of object appearances on Riemannian manifolds is also introduced. The main novelties of the paper include: 1) defining a similarity measure, based on geodesics, between a candidate object and a set of references mapped from multiple views onto a Riemannian manifold; 2) proposing multi-view maximum-likelihood estimation of object bounding-box parameters, based on Gaussian-distributed geodesics on the manifold; 3) introducing online learning of object appearances on the manifold that takes possible occlusions into account; 4) utilizing projective transformations of objects between views, whose parameters are estimated from the warped vertical axis by combining planar homography, epipolar geometry and the vertical vanishing point; 5) embedding single-view trackers in a three-layer multi-view tracking scheme. Experiments have been conducted on videos from multiple uncalibrated cameras, where objects undergo long-term partial or full occlusions and frequent intersections. Comparisons have been made with three existing methods, with performance evaluated both qualitatively and quantitatively. Results show the effectiveness of the proposed method in terms of robustness against tracking drift caused by occlusions.
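Novelty 4) relies on mapping object locations between views. A minimal ingredient of this is sketched below, assuming a ground-plane homography H estimated offline from correspondences; the function name and interface are hypothetical.

```python
# Minimal sketch: map an object's ground-plane point (e.g., the foot of its
# bounding box) from one view to another via a planar homography H.
import numpy as np

def map_ground_point(H, point_xy):
    """Apply a 3x3 homography to an image point in homogeneous coordinates."""
    p = np.array([point_xy[0], point_xy[1], 1.0])
    q = H @ p
    return q[:2] / q[2]                        # back to pixel coordinates
```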


International Conference on Acoustics, Speech, and Signal Processing | 2016

Fall detection in RGB-D videos by combining shape and motion features

Durga Priya Kumar; Yixiao Yun; Irene Yu-Hua Gu

This paper addresses issues in fall detection from RGB-D videos. The study focuses on measuring the dynamics of the shape and motion of the target person, based on the observation that a fall usually causes drastic shape deformation and large physical movement. The main novelties include: (a) forming contours of target persons in depth images based on the morphological skeleton; (b) extracting local dynamic shape and motion features from target contours; (c) encoding global shape and motion in HOG and HOGOF features from RGB images; (d) combining various shape and motion features for enhanced fall detection. Experiments have been conducted on an RGB-D video dataset for fall detection. Results show the effectiveness of the proposed method.
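Novelty (c) can be sketched as below: skimage's HOG over the grayscale frame for global shape, and a HOG computed over the optical-flow magnitude as one plausible reading of "HOGOF" for global motion. Farneback flow and all parameter values are assumptions.

```python
# Hedged sketch of (c): HOG on the frame (shape) plus HOG on the
# optical-flow magnitude (motion). Inputs are 8-bit grayscale frames.
import cv2
import numpy as np
from skimage.feature import hog

def global_shape_motion_features(prev_gray, curr_gray):
    shape_feat = hog(curr_gray, orientations=9,
                     pixels_per_cell=(8, 8), cells_per_block=(2, 2))
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag, _ = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    motion_feat = hog(mag, orientations=9,
                      pixels_per_cell=(8, 8), cells_per_block=(2, 2))
    return np.concatenate([shape_feat, motion_feat])
```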

Collaboration


Dive into Yixiao Yun's collaborations.

Top Co-Authors:

Irene Yu-Hua Gu, Chalmers University of Technology
Jie Yang, Shanghai Jiao Tong University
Keren Fu, Shanghai Jiao Tong University
Durga Priya Kumar, Chalmers University of Technology
Mohamed Hashim Changrampadi, Chalmers University of Technology
Chen Gong, Shanghai Jiao Tong University
Anders Flisberg, Sahlgrenska University Hospital
Christopher Innocenti, Chalmers University of Technology
Grzegorz Sowulewski, Chalmers University of Technology