Zhaojie Ju
University of Portsmouth
Publication
Featured research published by Zhaojie Ju.
IEEE-ASME Transactions on Mechatronics | 2014
Zhaojie Ju; Honghai Liu
In order to study and analyze human hand motions that contain multimodal information, a generalized framework integrating multiple sensors is proposed, consisting of modules for sensor integration, signal preprocessing, correlation study of sensory information, and motion identification. Three types of sensors are integrated to simultaneously capture the finger angle trajectories, the hand contact forces, and the forearm electromyography (EMG) signals. To facilitate the rapid acquisition of human hand tasks, methods to automatically synchronize and segment manipulation primitives are developed in the signal preprocessing module. Correlations of the sensory information are studied using the Empirical Copula, demonstrating that significant relationships exist between muscle signals and finger trajectories and between muscle signals and contact forces. In addition, recognizing different hand grasps and manipulations from the EMG signals is investigated using Fuzzy Gaussian Mixture Models (FGMMs); comparative experiments show that FGMMs outperform Gaussian Mixture Models and support vector machines with a higher recognition rate. The proposed framework, integrating state-of-the-art sensor technology with the developed algorithms, provides researchers with a versatile and adaptable platform for human hand motion analysis and has potential applications, especially in robotic or prosthetic hand control and human-computer interaction.
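The FGMM classifier itself is not reproduced here, but the Gaussian-mixture baseline it is compared against can be sketched: fit one GMM per motion class on EMG feature vectors and assign a test sample to the class with the highest log-likelihood. The data below is synthetic and scikit-learn is assumed; this is an illustration of the baseline, not the paper's method.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Synthetic 2-D "EMG feature" clouds for two hand motions
X_grasp = rng.normal(loc=[0.0, 0.0], scale=0.3, size=(200, 2))
X_pinch = rng.normal(loc=[2.0, 2.0], scale=0.3, size=(200, 2))

# One GMM per motion class, as in the GMM baseline the paper compares against
gmms = {}
for label, X in {"grasp": X_grasp, "pinch": X_pinch}.items():
    gmms[label] = GaussianMixture(n_components=2, random_state=0).fit(X)

def classify(x):
    # Assign to the class whose GMM gives the highest log-likelihood
    x = np.atleast_2d(x)
    return max(gmms, key=lambda k: gmms[k].score(x))

print(classify([0.1, -0.1]))  # grasp
print(classify([1.9, 2.1]))   # pinch
```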
IEEE Sensors Journal | 2013
Zhaojie Ju; Gaoxiang Ouyang; Marzena Wilamowska-Korsak; Honghai Liu
This paper proposes and evaluates methods of nonlinear feature extraction and nonlinear classification to identify different hand manipulations based on surface electromyography (sEMG) signals. The nonlinear measures are derived from the recurrence plot to represent the dynamical characteristics of sEMG during hand movements. Fuzzy Gaussian Mixture Models (FGMMs) are proposed and employed as a nonlinear classifier to recognise different hand grasps and in-hand manipulations captured from different subjects. Various experiments are conducted to evaluate their performance by comparing 14 individual features, 19 multifeatures and 4 different classifiers. The experimental results demonstrate that the proposed nonlinear measures provide important supplemental information and are essential to the strong performance of the multifeatures. It is also shown that FGMMs outperform commonly used approaches, including Linear Discriminant Analysis, Gaussian Mixture Models and Support Vector Machines, in terms of recognition rate. The best performance, a recognition rate of 96.7%, is achieved by FGMMs with the multifeature combining Willison Amplitude and Determinism.
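Willison Amplitude, one half of the best-performing multifeature above, is a standard time-domain sEMG measure: the number of consecutive-sample differences that exceed a threshold. A minimal sketch (the threshold value is illustrative, not the one used in the paper):

```python
import numpy as np

def willison_amplitude(x, threshold=0.01):
    """Count consecutive-sample differences exceeding a threshold.

    WAMP is a standard time-domain sEMG feature; larger values indicate
    more high-frequency muscle activity in the window.
    """
    x = np.asarray(x, dtype=float)
    return int(np.sum(np.abs(np.diff(x)) > threshold))

# Toy "sEMG" segment: a flat stretch followed by sharp activity
signal = [0.0, 0.001, 0.002, 0.05, -0.05, 0.06]
print(willison_amplitude(signal, threshold=0.01))  # 3
```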
IEEE Journal of Biomedical and Health Informatics | 2014
Gaoxiang Ouyang; Xiangyang Zhu; Zhaojie Ju; Honghai Liu
Recognizing human hand grasp movements through the surface electromyogram (sEMG) is a challenging task. In this paper, we investigated nonlinear measures based on the recurrence plot as a tool to evaluate the hidden dynamical characteristics of sEMG during four different hand movements. A series of experimental tests in this study show that the dynamical characteristics of sEMG data obtained with recurrence quantification analysis (RQA) can distinguish different hand grasp movements. Meanwhile, an adaptive neuro-fuzzy inference system (ANFIS) is applied to evaluate the performance of the aforementioned measures in identifying the grasp movements. The experimental results show that the recognition rate based on the combination of linear and nonlinear measures (99.1%) is much higher than with only linear measures (93.4%) or only nonlinear measures (88.1%). These results suggest that the RQA measures may be a useful tool to reveal the hidden sEMG characteristics of hand grasp movements and an effective supplement to traditional linear grasp recognition methods.
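The recurrence plot underlying RQA can be sketched in a few lines: a binary matrix marking which pairs of samples are closer than a radius, from which measures such as the recurrence rate follow. Real RQA embeds the signal in a delay-coordinate space first; this minimal sketch skips embedding for brevity.

```python
import numpy as np

def recurrence_matrix(x, eps):
    """Binary recurrence plot of a 1-D series: R[i, j] = 1 iff |x_i - x_j| < eps."""
    x = np.asarray(x, dtype=float)
    d = np.abs(x[:, None] - x[None, :])
    return (d < eps).astype(int)

def recurrence_rate(R):
    # Fraction of recurrent points, excluding the trivial main diagonal
    n = R.shape[0]
    return (R.sum() - n) / (n * n - n)

x = np.sin(np.linspace(0, 4 * np.pi, 50))   # toy periodic "signal"
R = recurrence_matrix(x, eps=0.1)
print(round(recurrence_rate(R), 3))
```

Determinism, the other RQA measure named in these abstracts, additionally counts the fraction of recurrent points lying on diagonal line segments of the matrix.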
International Conference on Intelligent Robotics and Applications | 2008
Zhaojie Ju; Honghai Liu; Xiangyang Zhu; Youlun Xiong
The human hand is capable of fulfilling various everyday tasks using a combination of biological mechanisms, sensors and controls. How to autonomously learn and control multifingered robots is a challenge that holds the key to related multidisciplinary research and a wide spectrum of applications in intelligent robotics. In this paper, we demonstrate the state of the art in recognizing continuous grasping gestures of human hands. We propose a novel time clustering (TC) method and modified methods based individually on Gaussian Mixture Models (GMMs) and Hidden Markov Models (HMMs). TC outperforms the GMM and HMM methods in terms of recognition rate and potentially in computational cost. Future work will focus on real-time recognition and qualitative grasp description.
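A loose illustration of the time-clustering idea: cluster samples of a finger-angle trajectory by their time stamps, then summarise each temporal cluster by its mean angle. The trajectory, cluster count, and use of k-means are all illustrative assumptions; the paper's TC method differs in its details.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
# Synthetic finger-angle trajectory: open (~10 deg) then closed (~60 deg)
t = np.linspace(0.0, 1.0, 100)
angle = np.where(t < 0.5, 10.0, 60.0) + rng.normal(0, 1.0, t.size)

# Cluster the samples along the time axis into two temporal phases
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(t.reshape(-1, 1))
means = [angle[km.labels_ == k].mean() for k in range(2)]
print(sorted(round(m, 1) for m in means))  # roughly [10, 60]
```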
Sensors | 2017
Disi Chen; Gongfa Li; Ying Sun; Jianyi Kong; Guozhang Jiang; Heng Tang; Zhaojie Ju; Hui Yu; Honghai Liu
In order to improve the recognition rate of hand gestures, a new interactive image segmentation method for hand gesture recognition is presented, and popular methods, e.g., Graph cut, Random walker and interactive image segmentation using geodesic star convexity, are studied in this article. The Gaussian Mixture Model is employed for image modelling, and iterations of the Expectation Maximization algorithm learn its parameters. We apply a Gibbs random field to the image segmentation and minimize the Gibbs energy using the min-cut theorem to find the optimal segmentation. The segmentation results of our method are tested on an image dataset and compared with other methods by estimating the region accuracy and boundary accuracy. Finally, five kinds of hand gestures in different backgrounds are tested on our experimental platform, and the sparse representation algorithm is used, proving that the segmentation of hand gesture images helps to improve the recognition accuracy.
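The Gibbs-energy-by-min-cut step can be illustrated on a toy scale: four "pixels" in a chain, unary terms from an assumed foreground/background model, and pairwise terms encouraging neighbours to share a label. This sketch uses networkx for the s-t min-cut; the costs are invented, not the paper's.

```python
import networkx as nx

# Unary costs of labelling each pixel foreground/background (illustrative)
fg_cost = {"p0": 0.1, "p1": 0.1, "p2": 2.0, "p3": 2.0}
bg_cost = {"p0": 2.0, "p1": 2.0, "p2": 0.1, "p3": 0.1}
smooth = 0.5  # pairwise smoothness weight between neighbours

G = nx.DiGraph()
for p in fg_cost:
    G.add_edge("S", p, capacity=bg_cost[p])  # cutting S->p labels p background
    G.add_edge(p, "T", capacity=fg_cost[p])  # cutting p->T labels p foreground
for a, b in [("p0", "p1"), ("p1", "p2"), ("p2", "p3")]:
    G.add_edge(a, b, capacity=smooth)
    G.add_edge(b, a, capacity=smooth)

# The min cut minimises the total Gibbs energy of the binary labelling
cut_value, (source_side, sink_side) = nx.minimum_cut(G, "S", "T")
foreground = sorted(source_side - {"S"})
print(foreground, round(cut_value, 2))  # ['p0', 'p1'] 0.9
```

In the paper the unary terms come from the learned GMM likelihoods rather than hard-coded costs, and the graph spans every pixel of the image.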
Paladyn: Journal of Behavioral Robotics | 2017
Pablo Gómez Esteban; Paul Baxter; Tony Belpaeme; Erik Billing; Haibin Cai; Hoang-Long Cao; Mark Coeckelbergh; Cristina Costescu; Daniel David; Albert De Beir; Yinfeng Fang; Zhaojie Ju; James Kennedy; Honghai Liu; Alexandre Mazel; Amit Kumar Pandey; Kathleen Richardson; Emmanuel Senft; Serge Thill; Greet Van de Perre; Bram Vanderborght; David Vernon; Hui Yu; Tom Ziemke
Abstract Robot-Assisted Therapy (RAT) has successfully been used to improve social skills in children with autism spectrum disorders (ASD) through remote control of the robot in so-called Wizard of Oz (WoZ) paradigms. However, there is a need to increase the autonomy of the robot both to lighten the burden on human therapists (who have to remain in control and, importantly, supervise the robot) and to provide a consistent therapeutic experience. This paper seeks to provide insight into increasing the autonomy level of social robots in therapy to move beyond WoZ. With the final aim of improved human-human social interaction for the children, this multidisciplinary research seeks to facilitate the use of social robots as tools in clinical situations by addressing the challenge of increasing robot autonomy. We introduce the clinical framework in which the developments are tested, alongside initial data obtained from patients in a first phase of the project using a WoZ set-up mimicking the targeted supervised-autonomy behaviour. We further describe the implemented system architecture capable of providing the robot with supervised autonomy.
IEEE Systems Journal | 2017
Zhaojie Ju; Xiaofei Ji; Jing Li; Honghai Liu
This paper proposes a novel framework to segment hand gestures in RGB-depth (RGB-D) images captured by Kinect using humanlike approaches for human–robot interaction. The goal is to reduce the error of Kinect sensing and, consequently, to improve the precision of hand gesture segmentation for the NAO robot. The proposed framework consists of two main novel approaches. First, the depth map and RGB image are aligned by using a genetic algorithm to estimate key points, and the alignment is robust to uncertainties in the number of extracted points. Then, a novel approach is proposed to refine the edges of the tracked hand gestures in RGB images by applying a modified expectation–maximization (EM) algorithm based on Bayesian networks. The experimental results demonstrate that the proposed alignment method is capable of precisely matching the depth maps with RGB images, and the EM algorithm further effectively adjusts the RGB edges of the segmented hand gestures. The proposed framework has been integrated and validated in a human–robot interaction system to improve the NAO robot's performance in understanding and interpretation.
Multimedia Tools and Applications | 2016
Zhaojie Ju; Dongxu Gao; Jiangtao Cao; Honghai Liu
This paper proposes a novel approach to extract human hand gesture features in real-time from RGB-D images based on the earth mover’s distance and Lasso algorithms. Firstly, hand gestures with hand edge contour are segmented using a de-noising method based on contour length information. A modified finger earth mover’s distance algorithm is then applied to locate the palm image and extract fingertip features. Lastly and more importantly, a Lasso algorithm is proposed to effectively and efficiently extract the fingertip feature from a hand contour curve. Experimental results are discussed to demonstrate the effectiveness of the proposed approach.
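The earth mover's distance step can be illustrated on 1-D contour signatures (distance from palm centre as a function of contour position): an open hand's signature has finger bumps that a fist's lacks. The signatures below are synthetic, and SciPy's 1-D Wasserstein distance stands in for the paper's modified finger-EMD, which additionally handles finger segmentation.

```python
import numpy as np
from scipy.stats import wasserstein_distance

# Synthetic contour signatures: radial distance around the hand outline
theta = np.linspace(0, 2 * np.pi, 200)
open_hand = 1.0 + 0.4 * np.clip(np.sin(5 * theta), 0, None)  # five "finger" bumps
fist = np.ones_like(theta)                                   # no fingers extended

d_same = wasserstein_distance(open_hand, open_hand)
d_diff = wasserstein_distance(open_hand, fist)
print(d_same, round(d_diff, 3))  # identical signatures give 0; distinct gestures > 0
```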
International Journal of Humanoid Robotics | 2011
Zhaojie Ju; Xiangyang Zhu; Honghai Liu
The current trend in electromyography (EMG)-based prosthetic hands is to enable the user to perform complex grasps or manipulations with natural muscle movements. In this paper, empirical copula-based templates, including the unified motion template and the state-based motion template, are introduced to identify naturally contracted surface EMG (sEMG) patterns for hand motion recognition. The unified motion template utilizes a dependence structure as a motion template, which includes one-to-one correlations of the sEMG feature channels over all the sampling points, while the state-based motion template divides the sampling points into different states and takes the union of the dependence structures of the different states. Comparison results have demonstrated that the proposed Empirical Copula-based methods can successfully classify different hand motions from different subjects with better recognition rates than Gaussian Mixture Models (GMMs). In addition, the state-based motion template performs better than the unified motion template, especially for complex hand motions.
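The empirical copula underlying these templates is the joint distribution of the normalised ranks of paired samples; it captures the dependence between two channels independently of their marginals. A minimal sketch with synthetic channels (the template construction itself is not reproduced here):

```python
import numpy as np
from scipy.stats import rankdata

def empirical_copula(u, v, x, y):
    """Empirical copula C_n(u, v): fraction of paired samples whose
    normalised ranks fall at or below (u, v)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = x.size
    ru = rankdata(x) / n
    rv = rankdata(y) / n
    return float(np.mean((ru <= u) & (rv <= v)))

rng = np.random.default_rng(3)
a = rng.normal(size=1000)
b = a + 0.1 * rng.normal(size=1000)   # strongly dependent channel
c = rng.normal(size=1000)             # independent channel

# For independent pairs C(0.5, 0.5) is near 0.25; for comonotone pairs near 0.5
print(round(empirical_copula(0.5, 0.5, a, b), 2))
print(round(empirical_copula(0.5, 0.5, a, c), 2))
```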
Sensors | 2017
Yajie Liao; Ying Sun; Gongfa Li; Jianyi Kong; Guozhang Jiang; Du Jiang; Haibin Cai; Zhaojie Ju; Hui Yu; Honghai Liu
Camera calibration is a crucial problem in many applications, such as 3D reconstruction, structure from motion, object tracking and face alignment. Numerous methods with good performance have been proposed to solve this problem over the last few decades. However, few methods target the joint calibration of multiple sensors (more than four devices), which is a practical issue in real-time systems. In this paper, we propose a novel method and a corresponding workflow framework to simultaneously calibrate the relative poses of a Kinect and three external cameras. By optimizing the final cost function and adding corresponding weights to the external cameras in different locations, an effective joint calibration of multiple devices is constructed. Furthermore, the method is tested on a practical platform, and experimental results show that the proposed joint calibration method achieves satisfactory performance in a practical real-time system, with higher accuracy than the manufacturer's calibration.
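The weighted cost function being minimised can be sketched as a sum, over cameras, of weighted squared reprojection errors: project known 3-D points through each camera's pose and intrinsics and compare with the observed pixels. The intrinsics, poses, and weights below are invented for illustration; the paper's actual cost and optimizer are not reproduced.

```python
import numpy as np

# Illustrative pinhole intrinsics shared by all cameras in this sketch
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])

def project(points_3d, R, t):
    """Pinhole projection of Nx3 points with camera pose (R, t)."""
    cam = points_3d @ R.T + t
    uv = (K @ cam.T).T
    return uv[:, :2] / uv[:, 2:3]

def weighted_cost(points_3d, observed_uv, poses, weights):
    # Sum over cameras of weighted squared reprojection error
    total = 0.0
    for (R, t), obs, w in zip(poses, observed_uv, weights):
        residual = project(points_3d, R, t) - obs
        total += w * float(np.sum(residual ** 2))
    return total

pts = np.array([[0.0, 0.0, 2.0], [0.5, 0.0, 2.0], [0.0, 0.5, 2.5]])
pose = (np.eye(3), np.zeros(3))
perfect = project(pts, *pose)
print(weighted_cost(pts, [perfect], [pose], [1.0]))  # 0.0 for exact observations
```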