
Publication


Featured research published by Yu-Ting Li.


Pattern Recognition | 2014

HEGM: A hierarchical elastic graph matching for hand gesture recognition

Yu-Ting Li; Juan P. Wachs

A hierarchical scheme for elastic graph matching applied to hand gesture recognition is proposed. The algorithm exploits the relative discriminatory capabilities of visual features scattered over the images, assigning a corresponding weight to each feature. A boosting algorithm is used to determine the structure of the hierarchy of a given graph. The graph is expressed by annotating the nodes of interest over the target object to form a bunch graph. Three annotation techniques (manual, semi-automatic, and automatic) are used to determine the positions of the nodes. The scheme and the annotation approaches are applied to explore hand gesture recognition performance, and a number of filter banks are applied to hand gesture images to investigate the effect of different feature representations. Experimental results show that the hierarchical elastic graph matching (HEGM) approach classified hand postures with a recognition accuracy of 99.85% when visual features were extracted using the Histogram of Oriented Gradients (HOG) representation. The results also provide performance measures covering recognition accuracy, matching benefit, node-position correlation, and consistency for the three annotation approaches, showing that the semi-automatic annotation method is more efficient and accurate than the other two.
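
As a rough illustration of the weighted elastic-graph-matching idea in this abstract, the sketch below scores a probe image against stored node features using an HOG-like orientation histogram at each node. It is a reconstruction under assumptions (the patch size, descriptor, and weighting scheme are placeholders), not the authors' HEGM implementation.

```python
"""Minimal sketch of weighted graph matching with HOG-like node features.
Illustrative only; descriptor, patch size, and weighting are assumptions."""

import numpy as np


def local_hog(image, x, y, patch=16, bins=9):
    """Orientation histogram of gradients in a square patch around (x, y)."""
    h, w = image.shape
    x0, x1 = max(x - patch // 2, 0), min(x + patch // 2, w)
    y0, y1 = max(y - patch // 2, 0), min(y + patch // 2, h)
    roi = image[y0:y1, x0:x1].astype(float)
    gy, gx = np.gradient(roi)
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)          # unsigned orientation
    hist, _ = np.histogram(ang, bins=bins, range=(0, np.pi), weights=mag)
    return hist / (np.linalg.norm(hist) + 1e-8)      # L2-normalised descriptor


def weighted_match_cost(image, nodes, model_feats, weights):
    """Weighted sum of per-node feature distances (lower = better match).

    nodes       : list of (x, y) node positions on the probe image
    model_feats : per-node descriptors stored in the bunch graph
    weights     : per-node weights (e.g. learned by boosting)
    """
    cost = 0.0
    for (x, y), f_model, w in zip(nodes, model_feats, weights):
        f_probe = local_hog(image, x, y)
        cost += w * np.linalg.norm(f_probe - f_model)
    return cost
```

A gesture class would then be assigned by evaluating this cost against each class's bunch graph and choosing the minimum.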


Human-Robot Interaction | 2012

Gestonurse: a multimodal robotic scrub nurse

Mithun George Jacob; Yu-Ting Li; Juan P. Wachs

A novel multimodal robotic scrub nurse (RSN) system for the operating room (OR) is presented. The RSN assists the main surgeon by passing surgical instruments. Experiments were conducted to test the system with speech and gesture modalities, and average instrument acquisition times were compared. Experimental results showed that 97% of the gestures were recognized correctly under changes in scale and rotation, and that the multimodal system responded faster than the unimodal systems. A relationship similar in form to Fitts's law for instrument picking accuracy is also presented.
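
The abstract mentions a relationship similar in form to Fitts's law; for reference, the snippet below evaluates the standard Shannon formulation of Fitts's law with placeholder coefficients, which are not values reported in the paper.

```python
"""Reference form of Fitts's law (Shannon formulation). Coefficients a and b
are placeholders; in practice they would be fit to measured acquisition times."""

import math


def fitts_time(distance, width, a=0.2, b=0.15):
    """Predicted movement time: MT = a + b * log2(D / W + 1)."""
    index_of_difficulty = math.log2(distance / width + 1)
    return a + b * index_of_difficulty


print(fitts_time(distance=300.0, width=40.0))  # e.g. target 300 mm away, 40 mm wide
```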


Systems, Man and Cybernetics | 2011

A gesture driven robotic scrub nurse

Mithun George Jacob; Yu-Ting Li; Juan P. Wachs

A gesture driven robotic scrub nurse (GRSN) for the operating room (OR) is presented. The GRSN passes surgical instruments to the surgeon during surgery, reducing the workload of a human scrub nurse. The system offers several advantages, such as freeing human nurses to perform concurrent tasks and reducing errors in the OR due to miscommunication or the absence of surgical staff. Hand gestures are recognized from a video stream, converted to instructions, and sent to a robotic arm, which passes the required surgical instruments to the surgeon. Experimental results show that 95% of the gestures were recognized correctly. The gesture recognition algorithm presented is robust to changes in scale and rotation of the hand gestures. The system was compared to human task performance and was found to be only 0.83 seconds slower on average.
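
The pipeline described here (gesture recognized from video, converted to an instruction, dispatched to the arm) can be pictured with the minimal loop below. All object and function names, and the gesture-to-instrument mapping, are hypothetical stand-ins rather than the GRSN's actual components.

```python
"""Illustrative control loop for a gesture-driven instrument-delivery pipeline.
Names and the gesture-to-instrument map are hypothetical, not the GRSN API."""

import time

# Hypothetical mapping from recognised gesture labels to surgical instruments.
GESTURE_TO_INSTRUMENT = {
    "open_palm": "scalpel",
    "two_fingers": "scissors",
    "fist": "forceps",
}


def run_pipeline(camera, recognizer, arm, poll_hz=10):
    """Poll frames, classify gestures, and dispatch delivery commands."""
    while True:
        frame = camera.read()                      # grab the next video frame
        gesture = recognizer.classify(frame)       # e.g. returns "open_palm" or None
        instrument = GESTURE_TO_INSTRUMENT.get(gesture)
        if instrument is not None:
            arm.deliver(instrument)                # move the arm to hand it over
        time.sleep(1.0 / poll_hz)
```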


Communications of the ACM | 2013

Collaboration with a robotic scrub nurse

Mithun George Jacob; Yu-Ting Li; George A. Akingba; Juan P. Wachs

Surgeons use hand gestures and/or voice commands without interrupting the natural flow of a procedure.


Surgical Innovation | 2013

A Cyber-Physical Management System for Delivering and Monitoring Surgical Instruments in the OR

Yu-Ting Li; Mithun George Jacob; George A. Akingba; Juan P. Wachs

Background. The standard practice in the operating room (OR) is having a surgical technician deliver surgical instruments to the surgeon quickly and inexpensively, as required. This human “in the loop” system may result in mistakes (eg, missing information, ambiguity of instructions, and delays). Objective. Errors can be reduced or eliminated by integrating information technology (IT) and cybernetics into the OR. Gesture and voice automatic acquisition, processing, and interpretation allow interaction with these new systems without disturbing the normal flow of surgery. Methods. This article describes the development of a cyber-physical management system (CPS), including a robotic scrub nurse, to support surgeons by passing surgical instruments during surgery as required and recording counts of surgical instruments into a personal health record (PHR). The robot used responds to hand signals and voice messages detected through sophisticated computer vision and data mining techniques. Results. The CPS was tested during a mock surgery in the OR. The in situ experiment showed that the robot recognized hand gestures reliably (with an accuracy of 97%), it can retrieve instruments as close as 25 mm, and the total delivery time was less than 3 s on average. Conclusions. This online health tool allows the exchange of clinical and surgical information to electronic medical record–based and PHR-based applications among different hospitals, regardless of the style viewer. The CPS has the potential to be adopted in the OR to handle surgical instruments and track them in a safe and accurate manner, releasing the human scrub tech from these tasks.


Proceedings of SPIE | 2012

Does a robotic scrub nurse improve economy of movements?

Juan P. Wachs; Mithun George Jacob; Yu-Ting Li; George A. Akingba

Objective: Robotic assistance during surgery has been shown to be a useful resource both for augmenting the surgical skills of the surgeon through tele-operation and for assisting the surgeon by handing over surgical instruments, similar to a surgical tech. We evaluated the performance and effect of a gesture-driven robotic surgical nurse on economy of movements during an abdominal incision and closure exercise on a simulator. Methods: A longitudinal midline incision (100 mm) was performed on the simulated abdominal wall to enter the peritoneal cavity without damaging the internal organs. The wound was then closed using a blunt needle, ensuring that no tissue was caught up by the suture material. All the instruments required to complete this task were delivered by a robotic surgical manipulator directly to the surgeon. The instruments were requested through voice and gesture recognition. The robotic system used a low-end range sensor camera to extract hand poses and recognize the gestures. The instruments were delivered to the vicinity of the patient, at chest height and within reachable distance of the surgeon. Task performance measures for each of three abdominal incision and closure exercises were recorded and compared to instrument delivery by a human scrub nurse. Picking-position variance, completion time, and hand trajectory were recorded for further analysis. Results: The variance of the robot tip position when delivering a surgical instrument was compared to the corresponding position when a human delivered the instrument, and was found to be 88.86% smaller than in the human delivery group. The mean task completion time was 162.7 ± 10.1 s with the human assistant and 191.6 ± 3.3 s (P < .01) with the robotic standard display group. Conclusion: The multimodal robotic scrub nurse assistant improves the surgical procedure by reducing the number of movements (lower variance in the picking position). The variance of the picking point is closely related to the concept of economy of movements in the operating room. Improving the effectiveness of the operating room can potentially enhance the safety of surgical interventions without affecting performance time.
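
The picking-position variance comparison reported above could be computed along the lines of the sketch below; the delivery coordinates here are synthetic placeholders, not data from the study.

```python
"""Sketch of a picking-position variance comparison. Sample coordinates are
synthetic placeholders, not data from the study."""

import numpy as np


def total_position_variance(points):
    """Sum of per-axis variances of 3D delivery positions (mm^2)."""
    pts = np.asarray(points, dtype=float)
    return float(pts.var(axis=0).sum())


# Synthetic example: robot deliveries cluster more tightly than human ones.
robot_deliveries = np.random.normal(loc=[400, 250, 1100], scale=5.0, size=(30, 3))
human_deliveries = np.random.normal(loc=[400, 250, 1100], scale=15.0, size=(30, 3))

var_robot = total_position_variance(robot_deliveries)
var_human = total_position_variance(human_deliveries)
print(f"variance reduction: {100 * (1 - var_robot / var_human):.1f}%")
```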


International Conference on Robotics and Automation | 2013

Surgical instrument handling and retrieval in the operating room with a multimodal robotic assistant

Mithun George Jacob; Yu-Ting Li; Juan P. Wachs

A robotic scrub nurse (RSN) designed for safe human-robot collaboration in the operating room (OR) is presented. The RSN assists the surgical staff in the OR by delivering instruments to the surgeon and operates through a multimodal interface allowing instruments to be requested through verbal commands or touchless gestures. A machine vision algorithm was designed to recognize the hand gestures performed by the user. To ensure safe human-robot collaboration, tool-tip trajectories are planned and executed to avoid collisions with the user. Experiments were conducted to test the system when speech and gesture modalities were used to interact with the robot, separately and together. The average system times were compared while performing a mock surgical task for each modality of interaction. The effects of modality training on task completion time were also studied. It was found that training results in a significant drop of 12.92% in task completion time. Experimental results show that 95.96% of the gestures used to interact with the robot were recognized correctly, and collisions with the user were completely avoided when using a new active obstacle avoidance algorithm.
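
For the obstacle-avoidance aspect, the toy check below illustrates one way tool-tip waypoints might be screened against a keep-out zone around the user; the spherical-clearance geometry is an assumption for illustration and not the paper's algorithm.

```python
"""Toy collision check for tool-tip waypoints against a keep-out zone around
the user's hand. The spherical-clearance model is an assumption."""

import numpy as np


def trajectory_is_safe(waypoints, hand_center, clearance=150.0):
    """Return True if every tool-tip waypoint keeps `clearance` mm from the hand."""
    pts = np.asarray(waypoints, dtype=float)
    dists = np.linalg.norm(pts - np.asarray(hand_center, dtype=float), axis=1)
    return bool(np.all(dists > clearance))


waypoints = [(0, 0, 500), (100, 50, 450), (200, 120, 400)]
print(trajectory_is_safe(waypoints, hand_center=(180, 100, 380)))  # False: too close
```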


KSII Transactions on Internet and Information Systems | 2017

The Effect of Embodied Interaction in Visual-Spatial Navigation

Ting Zhang; Yu-Ting Li; Juan P. Wachs

This article aims to assess the effect of embodied interaction on attention during the process of solving spatio-visual navigation problems. It presents a method that links operators' physical interaction, feedback, and attention. Attention is inferred through networks called Bayesian Attentional Networks (BANs). BANs are structures that describe the cause-effect relationships between attention and physical action. A utility function is then used to determine the best combination of interaction modalities and feedback. Experiments involving five physical interaction modalities (vision-based gesture interaction, glove-based gesture interaction, speech, feet, and body stance) and two feedback modalities (visual and sound) are described. The main findings are: (i) physical expressions have an effect on the quality of solutions to spatial navigation problems; (ii) the combination of feet gestures with visual feedback provides the best task performance.
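
The utility-based selection of an interaction/feedback pair could look like the sketch below; the scores and weights are invented placeholders, and the BAN-based attention inference itself is not reproduced here.

```python
"""Minimal sketch of selecting the best interaction/feedback pair by a utility
score. Scores and weights are placeholders, not values from the paper."""

from itertools import product

interaction_modalities = ["vision_gesture", "glove_gesture", "speech", "feet", "body_stance"]
feedback_modalities = ["visual", "sound"]

# Hypothetical scores in [0, 1]; unlisted pairs fall back to 0.5 below.
performance = {("feet", "visual"): 0.92, ("speech", "sound"): 0.74}
attention = {("feet", "visual"): 0.88, ("speech", "sound"): 0.70}


def utility(pair, w_perf=0.6, w_att=0.4):
    """Weighted combination of task performance and inferred attention."""
    return w_perf * performance.get(pair, 0.5) + w_att * attention.get(pair, 0.5)


best = max(product(interaction_modalities, feedback_modalities), key=utility)
print(best)  # with these placeholder scores: ('feet', 'visual')
```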


Iberoamerican Congress on Pattern Recognition | 2012

Hierarchical Elastic Graph Matching for Hand Gesture Recognition

Yu-Ting Li; Juan P. Wachs

This paper proposes a hierarchical scheme for elastic graph matching applied to hand posture recognition. The hierarchy is expressed in terms of weights assigned to visual features scattered over an elastic graph. The weights in the graph's nodes are adapted according to their relative ability to enhance recognition, and are determined using adaptive boosting. A dictionary representing the variability of each gesture class is proposed in the form of a collection of graphs (a bunch graph). Positions of nodes in the bunch graph are created manually, semi-automatically, and automatically. The recognition results show that hierarchical weighting of features has significantly greater discriminative power than the classic method (uniform weighting). Experimental results also show that the semi-automatic annotation method provides efficient and accurate performance in terms of two performance measures: cost function and accuracy.
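
The adaptive-boosting weighting of graph nodes could be sketched as below, treating each node as a weak classifier over training gestures; this illustrates the idea of weighting nodes by discriminative power, not the authors' exact procedure.

```python
"""Sketch of AdaBoost-style node weighting, where each graph node acts as a
weak classifier over training samples. Illustrative reconstruction only."""

import numpy as np


def adaboost_node_weights(node_predictions, labels, n_rounds=None):
    """node_predictions: (n_nodes, n_samples) array of +/-1 weak decisions.
    labels: (n_samples,) array of +/-1 ground truth. Returns per-node alphas."""
    n_nodes, n_samples = node_predictions.shape
    sample_w = np.full(n_samples, 1.0 / n_samples)
    alphas = np.zeros(n_nodes)
    for _ in range(n_rounds or n_nodes):
        # pick the node with the lowest weighted error this round
        errs = np.array([np.sum(sample_w[node_predictions[i] != labels])
                         for i in range(n_nodes)])
        best = int(np.argmin(errs))
        err = min(max(errs[best], 1e-10), 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)
        alphas[best] += alpha
        # re-weight samples so the next round focuses on current mistakes
        sample_w *= np.exp(-alpha * labels * node_predictions[best])
        sample_w /= sample_w.sum()
    return alphas
```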


Journal of Robotic Surgery | 2012

Gestonurse: a robotic surgical nurse for handling surgical instruments in the operating room.

Mithun George Jacob; Yu-Ting Li; George A. Akingba; Juan P. Wachs
