
Publication


Featured research published by Xianta Jiang.


International Journal of Human-Computer Studies / International Journal of Man-Machine Studies | 2015

Pupil responses to continuous aiming movements

Xianta Jiang; Bin Zheng; Roman Bednarik; M. Stella Atkins

Pupillary response is associated with perceptual and cognitive load in visual and cognitive tasks, but no quantitative link between pupil response and task workload in visual-motor tasks has been confirmed. The objective of this study is to investigate how changes in the requirements of a visual-motor task are reflected in changes in pupil size. In the present study, a simple continuous aiming task is performed, and the task requirement is manipulated and measured by Fitts' Index of Difficulty (ID), calculated for different combinations of target size and movement distance. Pupil response is recorded using a remote eye-tracker. The results show that event-triggered pupil dilations in continuous aiming movements follow Fitts' law, such that higher task difficulty evokes higher peak pupil dilation and longer peak duration. These findings suggest that pupil diameter can be employed as a physiological indicator of the task workload evoked by task requirements in visual-motor tasks.
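For context, the Index of Difficulty mentioned above combines movement distance and target width. The abstract does not state which formulation the authors used, so the classic Fitts (1954) form is shown here as a sketch:

```latex
% Fitts' (1954) Index of Difficulty for an aiming movement:
% D = movement distance (amplitude), W = target width.
ID = \log_2\!\left(\frac{2D}{W}\right)\ \text{bits}
% Worked example (hypothetical values): D = 200 mm, W = 25 mm
% gives ID = \log_2(400/25) = \log_2(16) = 4 bits.
```

Doubling the distance or halving the target width raises the ID by one bit, which is why combinations of target size and movement distance span a range of difficulty levels.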


Eye Tracking Research & Applications | 2012

Saccadic delays on targets while watching videos

M. Stella Atkins; Xianta Jiang; Geoffrey Tien; Bin Zheng

To observe whether there is a difference in eye gaze between doing a task and watching a video of the task, we recorded the gaze of 17 subjects performing a simple surgical eye-hand coordination task. We also recorded the eye gaze of the same subjects later while they watched videos of their performance. We divided the task into 9 or more sub-tasks, each of which involved a large hand movement to a new target location. We analyzed the videos manually and located the video frame for each sub-task where the operator's saccadic movement began, and the frame where the watcher's eye movement began. We found a consistent delay of about 600 ms between the initial eye movement when doing the task and the initial eye movement when watching the task, observed in 96.3% of the sub-tasks. For the first time, we have quantified the differences between doing and watching a manual task. This will help develop gaze-based training strategies for manual tasks.


Surgical Innovation | 2015

Detection of Changes in Surgical Difficulty: Evidence From Pupil Responses

Bin Zheng; Xianta Jiang; M. Stella Atkins

Background. Assessing the workload of surgeons requires technology that can continuously monitor surgeons' behaviors without interfering with their performance. We investigated the feasibility of using eye-tracking to reveal surgeons' responses to increasing task difficulty. Methods. A controlled study was conducted in a simulated operating room, where 14 subjects were required to perform a laparoscopic procedure that included 9 subtasks. The subtasks could be divided into 3 types with different levels of task difficulty, calculated by the index of task difficulty (ID) proposed by Fitts in 1954. Pupillary responses of subjects performing the procedure were recorded using Tobii eye-tracking equipment. Peak pupil dilation and movement time were compared between subtasks with different IDs, as well as between fast moving and slow aiming phases within each subtask. Results. When the task difficulty increased, task completion time increased. Meanwhile, the subjects' peak pupil size also increased. As the entire procedure was performed continuously, we found that pupil responses were affected not only by the ID of the current subtask but also by the subtasks before and after. Discussion. Decomposing a surgical procedure into meaningful subtasks and examining the surgeon's pupil response to each subtask enables us to identify the challenging steps within a continuous surgical procedure. Psychomotor evidence of surgeons' performance may lead to innovation in designing task-specific training curricula.


IEEE International Conference on Biomedical Robotics and Biomechatronics | 2016

Wearable band for hand gesture recognition based on strain sensors

Andrea Ferrone; Francesco Maita; Luca Maiolo; M. Arquilla; A. Castiello; A. Pecora; Xianta Jiang; Carlo Menon; Lorenzo Colace

A novel, fully wearable system based on a smart wristband equipped with stretchable strain-gauge sensors and readout electronics has been assembled and tested to detect a set of hand movements crucial to rehabilitation procedures. Thanks to the high sensitivity of the active devices embedded in the wristband, direct contact with the skin is not needed, maximizing comfort on the tester's arm. The gestures performed with the device were auto-labeled by comparing the signals detected in real time by the sensors against a commercial infrared device (Leap Motion). Finally, the system was evaluated with two machine-learning algorithms, Linear Discriminant Analysis (LDA) and Support Vector Machine (SVM), reaching a reproducibility of 98% and 94%, respectively.
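As a rough illustration of the final evaluation step, the sketch below compares LDA and SVM classifiers with scikit-learn. The feature matrix, sensor count, and labels are placeholders rather than the paper's data; only the LDA/SVM comparison reflects the abstract.

```python
# Minimal sketch of the LDA vs. SVM comparison described above, assuming
# the strain-gauge signals have already been windowed into a feature
# matrix X (one row per window) with Leap-Motion-derived gesture labels y.
# The random data below is a placeholder for the real recordings.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))      # placeholder strain-gauge features
y = rng.integers(0, 5, size=500)   # placeholder labels for 5 hand gestures

for name, clf in [("LDA", LinearDiscriminantAnalysis()),
                  ("SVM", SVC(kernel="rbf"))]:
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name}: {scores.mean():.1%} mean accuracy over 5 folds")
```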


IEEE Transactions on Human-Machine Systems | 2018

Force Exertion Affects Grasp Classification Using Force Myography

Xianta Jiang; Lukas-Karim Merhi; Carlo Menon

This paper describes a study that explores the effect of force exertion on the classification of grasps using force myography (FMG) technology. Nine participants were recruited for the study; each performed a set of 16 different grasps from a grasp taxonomy at eight different levels of force. Their wrist muscle pressure was recorded using an array of 16 force-sensing resistors. A linear discriminant analysis model was trained on grasps at a single force level, the natural grasping force, to classify grasps generated at the eight different force levels. The results show that grasping force significantly affects the accuracy of grasp classification, such that a grasping force closer to the natural force achieves higher accuracy. Acceptable classification performance can still be achieved at approximately half of the natural grasping force. The findings of this study further the understanding of how force exertion can affect grasp recognition using FMG. Knowledge gained from this study will provide guidance for implementing gesture-control interfaces in the presence of grasping-force variations.
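The evaluation protocol, training at the natural force level and testing at each of the eight levels, might look like the following sketch. The load_fmg helper and its synthetic data are hypothetical stand-ins for the 16-channel FSR recordings.

```python
# Sketch of the cross-force-level protocol described above: fit LDA on
# grasps at one (natural) force level, then score it on every level.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def load_fmg(level, rng):
    """Hypothetical loader: (windows, 16 FSR channels) and grasp labels 0..15."""
    X = rng.normal(loc=level, size=(160, 16))
    y = np.repeat(np.arange(16), 10)
    return X, y

rng = np.random.default_rng(1)
natural = 4                                # assumed index of the natural force level
X_train, y_train = load_fmg(natural, rng)
clf = LinearDiscriminantAnalysis().fit(X_train, y_train)

for level in range(8):                     # eight force levels, as in the study
    X_test, y_test = load_fmg(level, rng)
    print(f"force level {level}: accuracy {clf.score(X_test, y_test):.1%}")
```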


Medical Engineering & Physics | 2017

Exploration of Force Myography and surface Electromyography in hand gesture classification

Xianta Jiang; Lukas-Karim Merhi; Zhen Gang Xiao; Carlo Menon

While pressure sensors have increasingly received attention as a non-invasive interface for hand gesture recognition, their performance has not been comprehensively evaluated. This work examined the performance of hand gesture classification using Force Myography (FMG) and surface Electromyography (sEMG) technologies: subjects performed 3 sets of 48 hand gestures while wearing a prototype FMG band and an array of commercial sEMG sensors on the wrist and forearm simultaneously. The results show that the FMG band achieved classification accuracies as good as the high-quality, commercially available sEMG system at both the wrist and forearm positions. Specifically, using only 8 Force Sensitive Resistors (FSRs), the FMG band achieved accuracies of 91.2% and 83.5% in classifying the 48 hand gestures in cross-validation and cross-trial evaluations, higher than those of sEMG (84.6% and 79.1%). Using all 16 FSRs on the band, our device achieved high accuracies of 96.7% and 89.4% in cross-validation and cross-trial evaluations.
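The difference between the two evaluation schemes named above can be sketched as follows: ordinary k-fold cross-validation may mix windows from the same trial across train and test splits, while cross-trial evaluation holds out whole trials. The data shapes and the use of GroupKFold here are illustrative assumptions, not the paper's exact pipeline.

```python
# Sketch contrasting cross-validation with cross-trial evaluation for a
# 16-channel FMG band and 48 gesture classes; data are placeholders.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import GroupKFold, cross_val_score

rng = np.random.default_rng(2)
X = rng.normal(size=(1440, 16))               # 3 trials x 480 windows, 16 FSRs
y = np.tile(np.repeat(np.arange(48), 10), 3)  # 48 gesture labels per trial
trial = np.repeat([0, 1, 2], 480)             # trial id for each window

clf = LinearDiscriminantAnalysis()
cv = cross_val_score(clf, X, y, cv=5).mean()
xtrial = cross_val_score(clf, X, y, cv=GroupKFold(n_splits=3), groups=trial).mean()
print(f"cross-validation: {cv:.1%}  cross-trial: {xtrial:.1%}")
```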


Surgical Innovation | 2015

Video Processing to Locate the Tooltip Position in Surgical Eye–Hand Coordination Tasks

Xianta Jiang; Bin Zheng; M. Stella Atkins

Introduction. Trajectories of surgical instruments in laparoscopic surgery contain rich information about surgeons' performance. In a simulation environment, instrument trajectories can be captured by motion sensors attached to the instruments. This method is not accepted by surgeons working in the operating room due to safety concerns. In this study, a novel approach to acquiring instrument trajectories from surgical videos is reported. Methods. A total of 12 surgical videos were obtained for this study. The videos were captured during simulated laparoscopic procedures in which subjects were required to pick up and transport an object over 3 different targets using a laparoscopic grasper. An algorithm was developed to allow the computer to identify the tip of the grasper in each frame of video and then compute the trajectories of grasper movement. Results. The newly developed algorithm successfully identified tool trajectories in all 12 surgical videos. To validate the accuracy of this algorithm, the location of the tooltip in these videos was also manually labeled. The two methods matched on 98.4% of all video frames. Discussion. Identifying tool movement from surgical videos creates an effective way to track instrument trajectories. This lays the foundation for assessing the psychomotor performance of surgeons in the operating room without jeopardizing patient safety.
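The abstract does not describe the detection algorithm itself, so the sketch below shows one generic way such per-frame tooltip localization is often done: segment the tip by color in HSV space and take the centroid of the largest blob. The video path and HSV bounds are hypothetical.

```python
# Generic per-frame tooltip localization sketch (not the paper's method):
# colour-threshold each frame, then track the centroid of the largest blob.
import cv2
import numpy as np

cap = cv2.VideoCapture("task_video.avi")  # hypothetical input video
lower = np.array([40, 80, 80])            # assumed HSV bounds for the tip colour
upper = np.array([80, 255, 255])
trajectory = []                           # (x, y) pixel positions, one per frame

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = cv2.inRange(cv2.cvtColor(frame, cv2.COLOR_BGR2HSV), lower, upper)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        m = cv2.moments(max(contours, key=cv2.contourArea))
        if m["m00"] > 0:
            trajectory.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
cap.release()
```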


Journal of Rehabilitation and Assistive Technologies Engineering | 2017

Wearable step counting using a force myography-based ankle strap

Kelvin H. T. Chu; Xianta Jiang; Carlo Menon

Introduction. Step counting can be used to estimate the activity level of people in daily life; however, commercially available accelerometer-based step counters have shown inaccuracies in detecting low-speed walking steps (<2.2 km/h), and thus are not suitable for older adults, who usually walk at low speeds. This proof-of-concept study explores the feasibility of using force myography recorded at the ankle to detect low-speed steps. Methods. Eight young, healthy participants walked on a treadmill at three speeds (1, 1.5, and 2.0 km/h) while their force myography signals were recorded at the ankle using a customized strap embedded with an array of eight force-sensing resistors. A K-nearest neighbour model was trained and tested with the recorded data. Three additional mainstream machine-learning algorithms were also employed to evaluate the performance of the force myography band as a pedometer. Results. Results showed a low step-detection error rate (<1.5%) at all three walking speeds. Conclusions. This study demonstrates not only the feasibility of the proposed approach but also the potential of the investigated technology to reliably count steps at low walking speeds.
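A minimal sketch of the core classifier, K-nearest neighbours over windowed ankle-FMG features, is shown below. The window features, labels, and neighbour count are assumptions; the abstract specifies only the eight-FSR strap and the KNN model.

```python
# Sketch of KNN step detection over windowed ankle-FMG features;
# the random data stands in for the eight-FSR recordings.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
X = rng.normal(size=(1000, 8))     # placeholder: one window per row, 8 FSR channels
y = rng.integers(0, 2, size=1000)  # 1 = step event in the window, 0 = no step

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
knn = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr)
print(f"step-window accuracy: {knn.score(X_te, y_te):.1%}")
```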


Frontiers in Bioengineering and Biotechnology | 2017

Force Myography for Monitoring Grasping in Individuals with Stroke with Mild to Moderate Upper-Extremity Impairments: A Preliminary Investigation in a Controlled Environment

Gautam Sadarangani; Xianta Jiang; Lisa A. Simpson; Janice J. Eng; Carlo Menon

There is increasing research interest in technologies that can detect grasping, to encourage functional use of the hand as part of daily living and thus promote upper-extremity motor recovery in individuals with stroke. Force myography (FMG) has been shown to be effective for providing biofeedback to improve fine motor function in structured rehabilitation settings involving isolated repetitions of a single grasp type, elicited at a predictable time, without upper-extremity movements. The use of FMG with machine-learning techniques to detect and distinguish between grasping and no grasping continues to be an active area of research in healthy individuals. The feasibility of classifying FMG for grasp detection in populations with upper-extremity impairments, in the presence of upper-extremity movements as would be expected in daily living, has yet to be established. We explore the feasibility of FMG for this application by establishing and comparing (1) FMG-based grasp detection accuracy and (2) the amount of training data necessary for accurate grasp classification, in individuals with stroke and in healthy individuals. FMG data were collected using a flexible forearm band embedded with six force-sensitive resistors (FSRs). Eight participants with stroke, with mild to moderate upper-extremity impairments, and eight healthy participants performed 20 repetitions of three tasks that involved reaching, grasping, and moving an object in different planes of movement. A validation sensor was placed on the object to label data as corresponding to a grasp or no grasp. Grasp detection performance was evaluated using linear and non-linear classifiers. The effect of training set size on classification accuracy was also determined. FMG-based grasp detection demonstrated high accuracy: 92.2% (σ = 3.5%) for participants with stroke and 96.0% (σ = 1.6%) for healthy volunteers using a support vector machine (SVM). A training set 50% the size of the testing set resulted in 91.7% (σ = 3.9%) accuracy for participants with stroke and 95.6% (σ = 1.6%) for healthy participants. These promising results indicate that FMG may be feasible for monitoring grasping, in the presence of upper-extremity movements, in individuals with stroke with mild to moderate upper-extremity impairments.
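The training-set-size comparison reported above can be sketched as follows, using an SVM over six-channel band data. The pooled data, split sizes, and fractions are placeholders; only the grasp/no-grasp SVM and the half-size training set come from the abstract.

```python
# Sketch of evaluating grasp/no-grasp SVM accuracy as the training set
# shrinks relative to a fixed test set; data are placeholders for the
# six-FSR forearm-band windows with validation-sensor labels.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(4)
X = rng.normal(size=(1200, 6))     # placeholder: windows x 6 FSR channels
y = rng.integers(0, 2, size=1200)  # 1 = grasp, 0 = no grasp (validation sensor)

X_pool, y_pool = X[:800], y[:800]  # candidate training windows
X_test, y_test = X[800:], y[800:]  # fixed 400-window test set
for frac in (1.0, 0.5, 0.25):      # training set as a fraction of the test size
    n = int(len(X_test) * frac)
    clf = SVC(kernel="rbf").fit(X_pool[:n], y_pool[:n])
    print(f"train = {frac:.0%} of test size: accuracy {clf.score(X_test, y_test):.1%}")
```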


IEEE Healthcare Innovation Point-of-Care Technologies Conference (HI-POCT) | 2016

Ankle positions classification using force myography: An exploratory investigation

Xianta Jiang; Hon T. Chu; Zhen G Xiao; Lukas-Karim Merhi; Carlo Menon

Monitoring movements of the ankle may be highly relevant for applications such as sports injury prevention, rehabilitation, and gait analysis. This paper explores the feasibility of employing force myography (FMG) at the distal end of the lower leg to detect ankle position. FMG signals corresponding to 7 different ankle positions were recorded from three healthy volunteers. Using a linear discriminant analysis (LDA) classifier, the system achieved average prediction accuracies of 94% and 85% in cross-validation and cross-trial evaluation, respectively. The results of this proof-of-concept study demonstrate the feasibility of using FMG to detect ankle position and its consequent potential for acquiring information relevant to leg movement and gait.

Collaboration


Dive into Xianta Jiang's collaborations.

Top Co-Authors

Carlo Menon, Simon Fraser University
Bin Zheng, University of Alberta
Roman Bednarik, University of Eastern Finland
A. Pecora, National Research Council
Luca Maiolo, National Research Council