Network


Latest external collaborations at the country level. Dive into details by clicking on the dots.

Hotspot


Dive into the research topics where Yiran Shen is active.

Publication


Featured research published by Yiran Shen.


International Conference on Embedded Networked Sensor Systems | 2012

Efficient background subtraction for real-time tracking in embedded camera networks

Yiran Shen; Wen Hu; Junbin Liu; Mingrui Yang; Bo Wei; Chun Tung Chou

Background subtraction is often the first step of many computer vision applications. For a background subtraction method to be useful in embedded camera networks, it must be both accurate and computationally efficient because of the resource constraints on embedded platforms. This makes many traditional background subtraction algorithms unsuitable for embedded platforms because they use complex statistical models to handle subtle illumination changes. These models make them accurate, but the computational requirement of such complex models is often too high for embedded platforms. In this paper, we propose a new background subtraction method which is both accurate and computationally efficient. The key idea is to use compressive sensing to reduce the dimensionality of the data while retaining most of the information. Using multiple datasets, we show that the accuracy of our proposed background subtraction method is comparable to that of traditional background subtraction methods. Moreover, a real implementation on an embedded camera platform shows that our proposed method is at least 5 times faster, and consumes significantly less energy and memory, than the conventional approaches. Finally, we demonstrate the feasibility of the proposed method through the implementation and evaluation of an end-to-end real-time embedded camera network target tracking application.
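The core idea above, maintaining the background model in a compressed domain obtained by random projection, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the block size, learning rate, threshold, and the Gaussian measurement matrix are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 1024   # pixels per vectorised image block (e.g. a 32x32 patch); assumed
m = 64     # compressed dimension, m << n; assumed

# Random Gaussian measurement matrix: a standard compressive-sensing choice
# that approximately preserves distances between blocks.
Phi = rng.standard_normal((m, n)) / np.sqrt(m)

def compress(block):
    return Phi @ block

# Background model kept entirely in the m-dimensional compressed domain,
# so per-frame work scales with m rather than n.
b0 = rng.random(n)                 # stand-in initial background block
background = compress(b0)
alpha, threshold = 0.05, 2.0       # illustrative learning rate / threshold

def is_foreground(block):
    global background
    y = compress(block)
    moving = bool(np.linalg.norm(y - background) > threshold)
    if not moving:                 # running-average update on background frames
        background = (1 - alpha) * background + alpha * y
    return moving
```

Because both the difference test and the model update run on length-m vectors instead of length-n blocks, the per-frame cost drops by roughly n/m, which is the source of the speed-up on embedded platforms.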


IEEE Sensors Journal | 2013

Nonuniform Compressive Sensing for Heterogeneous Wireless Sensor Networks

Yiran Shen; Wen Hu; Rajib Rana; Chun Tung Chou

In this paper, we consider the problem of using wireless sensor networks (WSNs) to measure the temporal-spatial profile of some physical phenomena. We base our work on two observations. First, most physical phenomena are compressible in some transform domain basis. Second, most WSNs have some form of heterogeneity. Given these two observations, we propose a nonuniform compressive sensing method to improve the performance of WSNs by exploiting both compressibility and heterogeneity. We apply our proposed method to real WSN data sets. We find that our method can provide a more accurate temporal-spatial profile for a given energy budget compared with other sampling methods.
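The heterogeneity-aware sampling step described above can be sketched as weighting each node's chance of sampling by its remaining energy. The node count, energy values, and number of measurements per round below are illustrative assumptions, not figures from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical heterogeneous WSN: two mains-powered nodes with large energy
# budgets and six battery-powered nodes with small ones (values illustrative).
energy = np.array([100.0, 100.0, 5.0, 5.0, 5.0, 5.0, 5.0, 5.0])

def pick_sampling_nodes(energy, k):
    """Nonuniform sampling: choose k distinct nodes to sample this round,
    with probability proportional to remaining energy."""
    p = energy / energy.sum()
    return rng.choice(len(energy), size=k, replace=False, p=p)

# Over many rounds the energy-rich nodes shoulder most of the sensing load,
# so the battery-powered nodes are drained far more slowly.
counts = np.zeros(len(energy))
for _ in range(2000):
    counts[pick_sampling_nodes(energy, 3)] += 1
```

The reconstruction side (recovering the temporal-spatial profile from these nonuniform measurements via a sparsifying transform) is omitted here; the sketch only shows how heterogeneity biases which nodes take measurements.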


Information Processing in Sensor Networks | 2014

Face recognition on smartphones via optimised sparse representation classification

Yiran Shen; Wen Hu; Mingrui Yang; Bo Wei; Simon Lucey; Chun Tung Chou

Face recognition is an element of many smartphone apps, e.g. face unlocking, people tagging and games. Sparse Representation Classification (SRC) is a state-of-the-art face recognition algorithm, which has been shown to outperform many classical face recognition algorithms in OpenCV. The success of SRC is due to its use of ℓ1 optimisation, which makes SRC robust to noise and occlusions. Since ℓ1 optimisation is computationally intensive, SRC uses random projection matrices to reduce the dimension of the ℓ1 problem. However, random projection matrices do not give consistent classification accuracy. In this paper, we propose a method to optimise the projection matrix for ℓ1-based classification. Our evaluations, based on publicly available databases and real experiments, show that face recognition based on the optimised projection matrix can be 5-17% more accurate than its random counterpart and the OpenCV algorithms. Furthermore, the optimised projection matrix does not have to be re-calculated even if new faces are added to the training set. We implement SRC with the optimised projection matrix on Android smartphones and find that the computation of residuals in SRC is a severe bottleneck, taking up 85-90% of the computation time. To address this problem, we propose a method to compute the residuals approximately, which is 50 times faster without sacrificing recognition accuracy. Lastly, we demonstrate the feasibility of the new algorithm through the implementation and evaluation of a face unlocking app, and show its robustness to variations in pose, facial expression, lighting and occlusion.
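The SRC pipeline that the paper builds on (sparse-code a test face over a dictionary of training faces, then classify by per-class reconstruction residual) can be sketched as below. The paper's optimised projection matrix and approximate residual computation are not reproduced; this sketch uses a plain dictionary, a generic ISTA solver for the ℓ1 step, and toy random data in place of face images.

```python
import numpy as np

rng = np.random.default_rng(2)

def ista(A, y, lam=0.01, iters=500):
    """Solve min_x 0.5*||Ax - y||^2 + lam*||x||_1 by iterative
    shrinkage-thresholding, a simple generic l1 solver."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = x - (A.T @ (A @ x - y)) / L    # gradient step on the quadratic term
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft-threshold
    return x

def src_classify(D, labels, y):
    """SRC: represent y sparsely over the training dictionary D, then pick
    the class whose atoms give the smallest reconstruction residual."""
    x = ista(D, y)
    residuals = {}
    for c in set(labels):
        keep = np.array([l == c for l in labels])
        residuals[c] = np.linalg.norm(y - D @ np.where(keep, x, 0.0))
    return min(residuals, key=residuals.get)

# Toy data: two "subjects", three training samples each; the test sample is a
# noisy combination of subject-0 atoms, so SRC should return class 0.
D = rng.standard_normal((20, 6))
D /= np.linalg.norm(D, axis=0)             # unit-norm columns, as SRC assumes
labels = [0, 0, 0, 1, 1, 1]
y = 0.6 * D[:, 0] + 0.4 * D[:, 1] + 0.01 * rng.standard_normal(20)
```

The per-class residual loop is the bottleneck the paper identifies on smartphones, since it repeats a full matrix-vector product for every class; their approximate residual computation targets exactly this step.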


International Conference on Embedded Networked Sensor Systems | 2013

Real-time classification via sparse representation in acoustic sensor networks

Bo Wei; Mingrui Yang; Yiran Shen; Rajib Rana; Chun Tung Chou; Wen Hu

Acoustic Sensor Networks (ASNs) have a wide range of applications in natural and urban environment monitoring, as well as indoor activity monitoring. In-network classification is critically important in ASNs because wireless transmission costs several orders of magnitude more energy than computation. The main challenges of in-network classification in ASNs include effective feature selection, intensive computation requirements and high noise levels. To address these challenges, we propose a sparse representation based framework for in-network classification in ASNs that requires no explicit feature extraction, has low computational cost, and is resilient to noise. The key component of Sparse Approximation based Classification (SAC), ℓ1 minimization, is a convex optimization problem and is known to be computationally expensive. Furthermore, SAC assumes that a test sample is a linear combination of a few samples in the training set. For acoustic applications, this results in a very large training dictionary, making the computation infeasible on resource-constrained ASN platforms. Therefore, we propose several techniques to reduce the size of the problem, so as to fit SAC for in-network classification in ASNs. Our extensive evaluation using two real-life datasets (consisting of calls from 14 frog species and 20 cricket species, respectively) shows that the proposed SAC framework outperforms conventional approaches such as Support Vector Machines (SVMs) and k-Nearest Neighbours (kNN) in terms of classification accuracy and robustness. Moreover, our SAC approach can deal with multi-label classification, which is common in ASNs. Finally, we explore the system design space and demonstrate the real-time feasibility of the proposed framework through the implementation and evaluation of an acoustic classification application on an embedded ASN testbed.


International Conference on Intelligent Sensors, Sensor Networks and Information Processing | 2011

Non-uniform compressive sensing in wireless sensor networks: Feasibility and application

Yiran Shen; Wen Hu; Rajib Rana; Chun Tung Chou

In this paper, we consider the problem of using wireless sensor networks (WSNs) to measure the temporal-spatial profile of some physical phenomena. We base our work on two observations. Firstly, most physical phenomena are compressible in some transform domain basis. Secondly, most WSNs have some form of heterogeneity. Given these two observations, we propose a non-uniform compressive sensing method to improve the performance of WSNs by exploiting both compressibility and heterogeneity. We apply our proposed method to a real WSN data set. We find that our method can provide a more accurate temporal-spatial profile for a given energy budget compared with other sampling methods.


IEEE Transactions on Mobile Computing | 2016

Real-Time and Robust Compressive Background Subtraction for Embedded Camera Networks

Yiran Shen; Wen Hu; Mingrui Yang; Junbin Liu; Bo Wei; Simon Lucey; Chun Tung Chou

Real-time target tracking is an important service provided by embedded camera networks. The first step in target tracking is to extract the moving targets from the video frames, which can be realised by background subtraction. For a background subtraction method to be useful in embedded camera networks, it must be both accurate and computationally efficient because of the resource constraints on embedded platforms. This makes many traditional background subtraction algorithms unsuitable for embedded platforms because they use complex statistical models to handle subtle illumination changes. These models make them accurate, but the computational requirement of such complex models is often too high for embedded platforms. In this paper, we propose a new background subtraction method which is both accurate and computationally efficient. We propose a baseline version which uses luminance only and then extend it to use colour information. The key idea is to use random projection matrices to reduce the dimensionality of the data while retaining most of the information. Using multiple datasets, we show that the accuracy of our proposed background subtraction method is comparable to that of traditional background subtraction methods. Moreover, to show that the computational efficiency of our method is not platform specific, we implement it on various platforms. The real implementations show that our proposed method is consistently better, is up to six times faster, and consumes significantly fewer resources than the conventional approaches. Finally, we demonstrate the feasibility of the proposed method through the implementation and evaluation of an end-to-end real-time embedded camera network target tracking application.


The Internet of Things | 2017

Gait-Watch: A Context-aware Authentication System for Smart Watch Based on Gait Recognition

Weitao Xu; Yiran Shen; Yongtuo Zhang; Neil W. Bergmann; Wen Hu

With recent advances in mobile computing and sensing technology, smart wearable devices have pervaded our everyday lives. The security of these wearable devices is becoming a hot research topic because they store various kinds of private information. Existing approaches either rely only on a secret PIN or require an explicit user authentication process. In this paper, we present Gait-Watch, a context-aware authentication system for smart watches based on gait recognition. We address the problem of recognising the user under various walking activities (e.g., walking normally, walking while making a phone call), and propose a sparse fusion method to improve recognition accuracy. Extensive evaluations show that Gait-Watch improves recognition accuracy by up to 20% by leveraging activity information, and that the proposed sparse fusion method is 10% better than several state-of-the-art gait recognition methods. We also report a user study demonstrating that Gait-Watch can accurately authenticate the user in real-world scenarios with low system cost.


Information Processing in Sensor Networks | 2016

Sensor-assisted face recognition system on smart glass via multi-view sparse representation classification

Weitao Xu; Yiran Shen; Neil W. Bergmann; Wen Hu

Face recognition is one of the most popular research problems on various platforms. New research issues arise when it comes to resource-constrained devices, such as smart glasses, because of the overwhelming computation and energy requirements of accurate face recognition methods. In this paper, we propose a robust and efficient sensor-assisted face recognition system on smart glasses that explores the power of multimodal sensors, including the camera and Inertial Measurement Unit (IMU) sensors. The system is based on a novel face recognition algorithm, namely Multi-view Sparse Representation Classification (MVSRC), which exploits the rich information among multi-view face images. To improve the efficiency of MVSRC on smart glasses, we propose a novel sampling optimisation strategy using the less expensive inertial sensors. Our evaluations on public and private datasets show that the proposed method is up to 10% more accurate than state-of-the-art multi-view face recognition methods, while its computation cost is of the same order as an efficient benchmark method (e.g., Eigenfaces). Finally, extensive real-world experiments show that our proposed system improves recognition accuracy by up to 15% while achieving the same level of system overhead as the existing face recognition system (OpenCV algorithms) on smart glasses.


Information Processing in Sensor Networks | 2012

Efficient background subtraction for tracking in embedded camera networks

Yiran Shen; Wen Hu; Mingrui Yang; Junbin Liu; Chun Tung Chou

Background subtraction is often the first step in many computer vision applications such as object localisation and tracking. It aims to segment out the moving parts of a scene that represent objects of interest. In the field of computer vision, researchers have dedicated their efforts to improving the robustness and accuracy of such segmentations, but most of their methods are computationally intensive, making them nonviable options for our targeted embedded camera platform, whose energy and processing power are significantly more constrained. To address this problem while maintaining an acceptable level of performance, we introduce Compressive Sensing (CS) into the widely used Mixture of Gaussians to create a new background subtraction method. The results show that our method not only decreases the computation significantly (by a factor of 7 in a DSP setting) but also remains comparably accurate.


IEEE Transactions on Mobile Computing | 2018

Sensor-Assisted Multi-View Face Recognition System on Smart Glass

Weitao Xu; Yiran Shen; Neil W. Bergmann; Wen Hu

Face recognition is a hot research topic with a variety of application possibilities, including video surveillance and mobile payment. It has been well researched in the traditional computer vision community. However, new research issues arise when it comes to resource-constrained devices, such as smart glasses, because of the overwhelming computation and energy requirements of accurate face recognition methods. In this paper, we propose a robust and efficient sensor-assisted face recognition system on smart glasses that explores the power of multimodal sensors, including the camera and Inertial Measurement Unit (IMU) sensors. The system is based on a novel face recognition algorithm, namely Multi-view Sparse Representation Classification (MVSRC), which exploits the rich information among multi-view face images. To improve the efficiency of MVSRC on smart glasses, we propose two novel sampling optimization strategies using the less expensive inertial sensors. Our evaluations on public and private datasets show that the proposed method is up to 10 percent more accurate than state-of-the-art multi-view face recognition methods, while its computation cost is of the same order as an efficient benchmark method (e.g., Eigenfaces). Finally, extensive real-world experiments show that our proposed system improves recognition accuracy by up to 15 percent while achieving the same level of system overhead as the existing face recognition system (OpenCV algorithms) on smart glasses.

Collaboration


Dive into Yiran Shen's collaborations.

Top Co-Authors

Wen Hu
University of New South Wales

Chun Tung Chou
University of New South Wales

Bo Wei
University of New South Wales

Mingrui Yang
Commonwealth Scientific and Industrial Research Organisation

Weitao Xu
University of Queensland

Junbin Liu
Queensland University of Technology

Rajib Rana
University of Southern Queensland