
Publication


Featured research published by Lisha Hu.


Pervasive and Mobile Computing | 2014

b-COELM

Lisha Hu; Yiqiang Chen; Shuangquan Wang; Zhenyu Chen

Various mini-wearable devices have emerged in the past few years to recognize users' activities of daily living. Wearable devices are normally designed to be miniature and portable, so models running on them must meet several requirements at once: low computational complexity, a lightweight footprint, and high accuracy. To meet these requirements, a novel activity recognition model named b-COELM is proposed in this paper. b-COELM retains the advantages of the Proximal Support Vector Machine (low computational complexity and a small memory footprint) and extends the strong generalization ability of the Extreme Learning Machine to multi-class classification problems. Experimental results show the efficiency and effectiveness of b-COELM for recognizing activities of daily living.
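
For orientation, here is a minimal sketch of a regularized Extreme Learning Machine classifier, the kind of building block b-COELM combines with the Proximal SVM. This is not the b-COELM algorithm itself; the hidden-layer size and regularization strength are placeholder values.

```python
# Minimal regularized ELM sketch (illustrative only, not b-COELM).
import numpy as np

class SimpleELM:
    def __init__(self, n_hidden=100, reg=1.0, random_state=0):
        self.n_hidden = n_hidden          # number of random hidden neurons
        self.reg = reg                    # ridge regularization strength
        self.rng = np.random.RandomState(random_state)

    def fit(self, X, y):
        self.classes_ = np.unique(y)
        # Random input weights and biases are drawn once and never trained.
        self.W = self.rng.uniform(-1, 1, size=(X.shape[1], self.n_hidden))
        self.b = self.rng.uniform(-1, 1, size=self.n_hidden)
        H = self._hidden(X)
        # One-hot targets for multi-class classification.
        T = (y[:, None] == self.classes_[None, :]).astype(float)
        # Closed-form ridge solution for the output weights.
        A = H.T @ H + self.reg * np.eye(self.n_hidden)
        self.beta = np.linalg.solve(A, H.T @ T)
        return self

    def _hidden(self, X):
        return 1.0 / (1.0 + np.exp(-(X @ self.W + self.b)))  # sigmoid activation

    def predict(self, X):
        scores = self._hidden(X) @ self.beta
        return self.classes_[np.argmax(scores, axis=1)]
```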


Pattern Recognition Letters | 2018

Deep learning for sensor-based activity recognition: A Survey

Jindong Wang; Yiqiang Chen; Shuji Hao; Xiaohui Peng; Lisha Hu

Sensor-based activity recognition seeks high-level knowledge about human activities from multitudes of low-level sensor readings. Conventional pattern recognition approaches have made tremendous progress in recent years. However, those methods often rely heavily on heuristic, hand-crafted feature extraction, which can hinder their generalization performance, and they struggle with unsupervised and incremental learning tasks. Recently, advances in deep learning have made it possible to perform automatic high-level feature extraction, achieving promising performance in many areas. Deep learning based methods have since been widely adopted for sensor-based activity recognition. This paper surveys recent advances in deep learning for sensor-based activity recognition. We summarize the existing literature from three aspects: sensor modality, deep model, and application. We also present detailed insights on existing work and propose grand challenges for future research.
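
As a rough illustration of the kind of deep model the survey covers, the sketch below defines a small 1D convolutional network that classifies a fixed-length window of tri-axial accelerometer readings. It assumes PyTorch is available; the channel counts, window length (128 samples) and number of activity classes (6) are arbitrary placeholder choices, not settings from any surveyed work.

```python
# Small 1D CNN over sensor windows (illustrative sketch only).
import torch
import torch.nn as nn

class SensorCNN(nn.Module):
    def __init__(self, n_channels=3, window=128, n_classes=6):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=5, padding=2),  # local motion patterns
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.classifier = nn.Linear(64 * (window // 4), n_classes)

    def forward(self, x):             # x: (batch, channels, window)
        h = self.features(x)
        return self.classifier(h.flatten(1))

# Example forward pass on a random batch of 8 windows.
logits = SensorCNN()(torch.randn(8, 3, 128))
print(logits.shape)  # torch.Size([8, 6])
```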


International Conference on Data Mining | 2015

Unobtrusive Sensing Incremental Social Contexts Using Class Incremental Learning

Zhenyu Chen; Yiqiang Chen; Xingyu Gao; Shuangquan Wang; Lisha Hu; Chenggang Clarence Yan; Nicholas D. Lane; Chunyan Miao

By capturing characteristics of the surrounding environment through the widely available Bluetooth sensor, user-centric social contexts can be effectively sensed and discovered from dynamic Bluetooth information. State-of-the-art approaches for building classifiers can only recognize the limited set of classes seen in the training phase; however, due to the complex diversity of social contextual behavior, such classifiers seldom handle newly appearing contexts, which greatly degrades recognition performance. To address this problem, we propose an OSELM (online sequential extreme learning machine) based class incremental learning method for continuously and unobtrusively sensing new classes of social contexts from dynamic Bluetooth data alone. We integrate a fuzzy clustering technique with OSELM to discover and recognize social contextual behaviors from real-world Bluetooth sensor data. Experimental results show that our method can automatically cope with incremental classes of social contexts that appear unpredictably in the real world, and that it recognizes both the original known classes and newly appearing unknown classes effectively.
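
The sketch below shows only the standard OSELM chunk-by-chunk update that this method builds on, not the authors' full pipeline (the fuzzy-clustering step that discovers new classes is omitted). The hidden-layer size and initial regularization are placeholder choices.

```python
# OSELM-style sequential update (illustrative sketch only).
import numpy as np

rng = np.random.RandomState(0)
n_features, n_hidden, n_classes = 10, 50, 4
W = rng.uniform(-1, 1, (n_features, n_hidden))   # fixed random input weights
b = rng.uniform(-1, 1, n_hidden)

def hidden(X):
    return 1.0 / (1.0 + np.exp(-(X @ W + b)))    # sigmoid hidden layer

def init_phase(X0, T0, reg=1e-3):
    H0 = hidden(X0)
    P = np.linalg.inv(H0.T @ H0 + reg * np.eye(n_hidden))
    beta = P @ H0.T @ T0
    return P, beta

def sequential_update(P, beta, X_chunk, T_chunk):
    """Recursive least-squares update of the output weights for one data chunk."""
    H = hidden(X_chunk)
    K = np.linalg.inv(np.eye(len(X_chunk)) + H @ P @ H.T)
    P = P - P @ H.T @ K @ H @ P
    beta = beta + P @ H.T @ (T_chunk - H @ beta)
    return P, beta

# Toy usage: an initial batch followed by one streamed chunk (one-hot targets).
X0 = rng.randn(100, n_features)
T0 = np.eye(n_classes)[rng.randint(0, n_classes, size=100)]
P, beta = init_phase(X0, T0)
X1 = rng.randn(20, n_features)
T1 = np.eye(n_classes)[rng.randint(0, n_classes, size=20)]
P, beta = sequential_update(P, beta, X1, T1)
pred = np.argmax(hidden(X1) @ beta, axis=1)      # predicted class indices
```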


Ubiquitous Intelligence and Computing | 2016

Less Annotation on Personalized Activity Recognition Using Context Data

Lisha Hu; Yiqiang Chen; Shuangquan Wang; Jindong Wang; Jianfei Shen; Xinlong Jiang; Zhiqi Shen

Miscellaneous mini-wearable devices (e.g. wristbands, smartwatches, armbands) have emerged in daily life, capable of recognizing activities of daily living, monitoring health information, and so on. Conventional activity recognition (AR) models deployed inside these devices are generic classifiers learned offline from abundant data. Adapting a generic model into a user-oriented model requires time-consuming human annotation effort. To solve this problem, we propose SS-ARTMAP-AR, a self-supervised incremental learning AR model that is updated from surrounding context information such as Bluetooth, Wi-Fi, GPS and GSM data without any annotation effort from users. Experimental results show that SS-ARTMAP-AR gradually adapts to individual users and becomes increasingly intelligent over time.


Archive | 2015

Leveraging Two-Stage Weighted ELM for Multimodal Wearables Based Fall Detection

Zhenyu Chen; Yiqiang Chen; Lisha Hu; Shuangquan Wang; Xinlong Jiang

For elderly people, timely detection of a fall is critical for receiving first aid. To achieve high detection accuracy and a low false-alarm rate at the same time, we propose a fall detection and monitoring method based on multimodal wearables that leverages a two-stage weighted extreme learning machine. Experimental results show that our method can be implemented effectively on miniaturized wearable devices and, compared with a state-of-the-art ELM classifier, achieves higher detection accuracy and a lower false-alarm rate simultaneously. This enables various kinds of mHealth applications for large populations, especially fall detection for elderly people's healthcare.
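
Falls are rare compared with everyday movements, which is why class weighting matters here. The sketch below shows a class-weighted ELM output-weight solution of the general kind such a detector builds on; it is not the authors' two-stage pipeline, and the inverse-frequency weights and toy data are assumptions for illustration.

```python
# Class-weighted ELM output-weight solution (illustrative sketch only).
import numpy as np

def weighted_elm_train(H, y, reg=1.0):
    """H: hidden-layer outputs (n_samples, n_hidden); y: labels in {0, 1}."""
    classes, counts = np.unique(y, return_counts=True)
    # Weight each sample inversely to its class frequency so the scarce
    # "fall" class is not dominated by everyday activities.
    class_weight = {c: 1.0 / n for c, n in zip(classes, counts)}
    w = np.array([class_weight[c] for c in y])
    T = (y[:, None] == classes[None, :]).astype(float)       # one-hot targets
    HtW = H.T * w                                             # H^T W with diagonal W
    beta = np.linalg.solve(HtW @ H + reg * np.eye(H.shape[1]), HtW @ T)
    return beta, classes

# Toy usage with random "hidden layer" features: 950 non-falls vs. 50 falls.
rng = np.random.RandomState(0)
H = rng.randn(1000, 40)
y = np.array([0] * 950 + [1] * 50)
beta, classes = weighted_elm_train(H, y)
pred = classes[np.argmax(H @ beta, axis=1)]
```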


Ubiquitous Computing | 2016

AIR: recognizing activity through IR-based distance sensing on feet

Xinlong Jiang; Yiqiang Chen; Junfa Liu; Gillian R. Hayes; Lisha Hu; Jianfei Shen

In this paper, we describe the results of a controlled experiment measuring everyday movement activity through a novel recognition prototype named AIR. AIR measures distance from the feet using infrared (IR) sensors. We tested this approach for recognizing six prevalent activities: standing stationary, walking, running, walking in place, going upstairs, and going downstairs, and compared the results with other commonly used approaches. Our results show that AIR obtains much higher accuracy in recognizing activity than approaches that rely primarily on accelerometers. Moreover, AIR generalizes well when the recognition model is applied to new users.
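
To make the sensing idea concrete, the sketch below segments a stream of foot-mounted IR distance readings into fixed windows, extracts simple statistics, and feeds them to a generic classifier. The window length, feature set, and classifier are assumptions for illustration, not the AIR prototype's actual pipeline.

```python
# Windowing IR distance readings for a generic classifier (illustrative sketch).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def window_features(distance, window=50, step=25):
    """Slide a window over one IR distance channel and compute basic statistics."""
    feats = []
    for start in range(0, len(distance) - window + 1, step):
        seg = distance[start:start + window]
        feats.append([seg.mean(), seg.std(), seg.min(), seg.max(),
                      np.abs(np.diff(seg)).mean()])   # mean absolute change
    return np.array(feats)

# Toy usage: synthetic "standing" (flat) vs. "walking" (oscillating) distances.
rng = np.random.RandomState(0)
t = np.arange(1000)
standing = 20 + rng.randn(1000) * 0.5
walking = 20 + 10 * np.sin(t / 5.0) + rng.randn(1000)
X = np.vstack([window_features(standing), window_features(walking)])
y = np.array([0] * (len(X) // 2) + [1] * (len(X) // 2))
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
```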


International Journal of Machine Learning and Cybernetics | 2018

OKRELM: online kernelized and regularized extreme learning machine for wearable-based activity recognition

Lisha Hu; Yiqiang Chen; Jindong Wang; Chunyu Hu; Xinlong Jiang

Miscellaneous mini-wearable devices (Jawbone Up, Apple Watch, Google Glass, etc.) have emerged in recent years to recognize the user's activities of daily living (ADLs) such as walking, running, climbing and bicycling. To better suit a target user, the generic activity recognition (AR) model inside a wearable device needs to adapt itself to the user's characteristics, such as wearing style. In this paper, an online kernelized and regularized extreme learning machine (OKRELM) is proposed for wearable-based activity recognition. A small but important subset of every incoming data chunk is chosen to go through the update stage during online sequential learning. OKRELM is therefore a lightweight incremental learning model with low time consumption during the update and prediction phases, and a robust and effective classifier compared with the batch learning scheme. The performance of OKRELM is evaluated and compared with several related approaches on a publicly available UCI AR dataset, and the experimental results show the efficiency and effectiveness of OKRELM.
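
For reference, the sketch below implements a plain (batch) kernelized, regularized ELM classifier of the kind OKRELM makes online. The RBF bandwidth and regularization constant are placeholder values, and the per-chunk subset selection described in the paper is intentionally omitted.

```python
# Batch kernelized, regularized ELM (illustrative sketch only).
import numpy as np

def rbf_kernel(A, B, gamma=0.1):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

class KernelRELM:
    def fit(self, X, y, C=10.0):
        self.X = X
        self.classes_ = np.unique(y)
        T = (y[:, None] == self.classes_[None, :]).astype(float)   # one-hot targets
        K = rbf_kernel(X, X)
        # Regularized least-squares solution in the kernel-induced feature space.
        self.alpha = np.linalg.solve(K + np.eye(len(X)) / C, T)
        return self

    def predict(self, Xq):
        scores = rbf_kernel(Xq, self.X) @ self.alpha
        return self.classes_[np.argmax(scores, axis=1)]

# Toy usage on random data with three "activity" classes.
rng = np.random.RandomState(0)
X, y = rng.randn(150, 6), rng.randint(0, 3, size=150)
model = KernelRELM().fit(X, y)
print((model.predict(X) == y).mean())   # training accuracy on the toy data
```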


International Symposium on Wearable Computers | 2015

CARE: chewing activity recognition using noninvasive single axis accelerometer

Shuangquan Wang; Gang Zhou; Lisha Hu; Zhenyu Chen; Yiqiang Chen

In this paper, we focus on users' Chewing Activity REcognition and present a novel recognition system named CARE. Based on the observation that during chewing the mastication muscle contracts and bulges to some degree, CARE uses a single-axis accelerometer attached to the temporalis to detect the muscle bulge and recognize the user's chewing activity. CARE can also calculate the chewing frequency by recognizing the periodic pattern in the acceleration data. Experiments show that CARE obtains high accuracy on both chewing activity classification and chewing frequency detection.
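
As a worked illustration of extracting a chewing rate from a periodic acceleration signal, the sketch below picks the dominant frequency of a single-axis acceleration window via the FFT. The sampling rate, window length, and frequency band are assumptions, not values taken from CARE.

```python
# Dominant-frequency estimate of a chewing rate (illustrative sketch only).
import numpy as np

def chewing_rate_hz(acc, fs=50.0, band=(0.5, 3.0)):
    """Return the dominant frequency (Hz) of `acc` within a plausible chewing band."""
    acc = acc - acc.mean()                       # remove the gravity/DC component
    spectrum = np.abs(np.fft.rfft(acc))
    freqs = np.fft.rfftfreq(len(acc), d=1.0 / fs)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return freqs[mask][np.argmax(spectrum[mask])]

# Toy usage: a synthetic 1.5 Hz "chewing" oscillation plus noise, sampled at 50 Hz.
rng = np.random.RandomState(0)
t = np.arange(0, 10, 1 / 50.0)
acc = 0.2 * np.sin(2 * np.pi * 1.5 * t) + 0.05 * rng.randn(len(t))
print(round(chewing_rate_hz(acc), 2))            # ~1.5
```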


Pattern Recognition | 2018

A novel random forests based class incremental learning method for activity recognition

Chunyu Hu; Yiqiang Chen; Lisha Hu; Xiaohui Peng

Automatic activity recognition is an active research topic which aims to identify human activities automatically. A significant challenge is to recognize new activities effectively. In this paper, we propose an effective class incremental learning method, named Class Incremental Random Forests (CIRF), to enable existing activity recognition models to identify new activities. We design a separating-axis-theorem-based splitting strategy to insert internal nodes, and adopt the Gini index or information gain to split leaves of the decision trees in the random forest (RF). With these two strategies, both inserting new nodes and splitting leaves are allowed in the incremental learning phase. We evaluate our method on three UCI public activity datasets and compare it with other state-of-the-art methods. Experimental results show that the proposed incremental learning method converges to the performance of batch learning methods (RF and extremely randomized trees), and that it recognizes new class data continuously with better performance than other state-of-the-art methods.
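
The toy sketch below conveys only the core idea of splitting a leaf in place when samples of a previously unseen class arrive, using a one-feature tree; CIRF's separating-axis-theorem test and its forest-level machinery are not reproduced here, and all names and values are illustrative.

```python
# Leaf splitting when a new class appears (toy illustrative sketch).
import numpy as np

def make_leaf(label):
    return {"leaf": True, "label": label}

def split_leaf(leaf, old_x, new_x, new_label):
    """Replace a leaf by an internal node whose threshold separates the old
    samples (kept under the original label) from the newly appeared class."""
    thr = (old_x.max() + new_x.min()) / 2.0 if old_x.max() < new_x.min() \
          else (new_x.max() + old_x.min()) / 2.0
    left_label = leaf["label"] if old_x.mean() < new_x.mean() else new_label
    right_label = new_label if left_label == leaf["label"] else leaf["label"]
    return {"leaf": False, "thr": thr,
            "left": make_leaf(left_label), "right": make_leaf(right_label)}

def predict(node, x):
    while not node["leaf"]:
        node = node["left"] if x <= node["thr"] else node["right"]
    return node["label"]

# Toy usage: a leaf trained on "walking" data around x~1.0 receives "running"
# data around x~3.0 and is split in place instead of retraining the whole tree.
walking = np.array([0.8, 1.0, 1.2])
running = np.array([2.8, 3.0, 3.3])
node = split_leaf(make_leaf("walking"), walking, running, "running")
print(predict(node, 1.1), predict(node, 3.1))    # walking running
```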


IEEE Symposium on Fusion Engineering | 2007

Preliminary Probabilistic Safety Assessment of Chinese Dual Functional Lithium Lead Test Blanket Module System for ITER

Lisha Hu; Wang J; Shuopei Wang

A dual functional lithium lead (DFLL) test blanket module (TBM) concept for testing in the International Thermonuclear Experimental Reactor (ITER) has been proposed. The safety assessment of the DFLL-TBM has been carried out using the Probabilistic Safety Assessment (PSA) approach. Accident sequences have been modeled and quantified with the event tree technique, which identifies all possible combinations of success or failure of the safety systems in responding to a selection of initiating events. The identification of potential initiating events is provided by the Failure Mode and Effects Analysis (FMEA) procedure. The outcome of the analysis shows that the DFLL-TBM is quite safe and presents no significant hazard to the environment. In addition, a sensitivity analysis of the safety systems has been performed.
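
To illustrate how an event tree quantifies accident sequences, the sketch below multiplies an initiating-event frequency by the success/failure probabilities of the responding safety systems and enumerates every branch. All system names and numbers are hypothetical placeholders, not values from the DFLL-TBM assessment.

```python
# Event-tree sequence quantification (illustrative sketch with placeholder data).
from itertools import product

initiating_event_freq = 1e-2                         # events per year (hypothetical)
systems = {"cooling": 1e-3, "confinement": 1e-4}     # failure probability on demand

for outcome in product([False, True], repeat=len(systems)):   # False = success
    p = initiating_event_freq
    for (name, pf), failed in zip(systems.items(), outcome):
        p *= pf if failed else (1.0 - pf)            # multiply along the branch
    label = ", ".join(f"{name} {'fails' if failed else 'ok'}"
                      for (name, _), failed in zip(systems.items(), outcome))
    print(f"{label}: {p:.3e} / year")
```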

Collaboration


Dive into Lisha Hu's collaborations.

Top Co-Authors

Yiqiang Chen, Chinese Academy of Sciences
Shuangquan Wang, Chinese Academy of Sciences
Xinlong Jiang, Chinese Academy of Sciences
Zhenyu Chen, Chinese Academy of Sciences
Jindong Wang, Chinese Academy of Sciences
Jianfei Shen, Chinese Academy of Sciences
Chunyu Hu, Chinese Academy of Sciences
Xiaohui Peng, Chinese Academy of Sciences
Junfa Liu, Chinese Academy of Sciences
Faizan Ahmad, Chinese Academy of Sciences