Publication


Featured research published by Mithun George Jacob.


Pattern Recognition Letters | 2014

Context-based hand gesture recognition for the operating room

Mithun George Jacob; Juan P. Wachs

A sterile, intuitive, context-integrated system for navigating MRIs through freehand gestures during a neurobiopsy procedure is presented. Contextual cues are used to determine the intent of the user, improving continuous gesture recognition and the discovery and exploration of MRIs. One of the challenges of gesture interaction in the operating room is discriminating between intentional and non-intentional gestures, a problem also referred to as spotting. In this paper, a novel method for training gesture spotting networks is presented. The continuous gesture recognition system was shown to successfully detect gestures 92.26% of the time with a reliability of 89.97%. Experimental results show that context integration yields significant improvements in task completion time.
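
As a rough illustration of gesture spotting with context integration (not the authors' implementation), the sketch below shows a sliding-window spotter that only classifies a candidate gesture when a contextual cue indicates intent; the window size, threshold, and callback names are hypothetical:

# Illustrative sketch only: a sliding-window gesture spotter gated by a
# contextual "intent" cue.  Window size, threshold, and the two callbacks
# are hypothetical placeholders, not the paper's trained spotting network.
from collections import deque

WINDOW = 30             # frames per candidate window (assumed)
SCORE_THRESHOLD = 0.8   # minimum classifier confidence to accept a gesture

def spot_gestures(frames, classify_window, intent_from_context):
    """Yield (frame_index, gesture_label) for accepted gestures.

    classify_window(window) -> (label, score): any gesture classifier.
    intent_from_context(frame) -> bool: contextual cue, e.g. the user is
    facing the display, so detected motion is likely intentional.
    """
    buffer = deque(maxlen=WINDOW)
    for i, frame in enumerate(frames):
        buffer.append(frame)
        if len(buffer) < WINDOW:
            continue
        # Context gate: skip classification for motion made without intent,
        # the main source of false positives in a continuous stream.
        if not intent_from_context(frame):
            continue
        label, score = classify_window(list(buffer))
        if label != "non_gesture" and score >= SCORE_THRESHOLD:
            yield i, label
            buffer.clear()  # avoid re-triggering on the same motion

Gating on context before classification is what suppresses the non-intentional motion that otherwise inflates false positives in a continuous video stream.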


Journal of the American Medical Informatics Association | 2013

Hand-gesture-based sterile interface for the operating room using contextual cues for the navigation of radiological images

Mithun George Jacob; Juan P. Wachs; Rebecca A. Packer

This paper presents a method to improve the navigation and manipulation of radiological images through a sterile hand gesture recognition interface based on attentional contextual cues. Computer vision algorithms were developed to extract intention and attention cues from the surgeon's behavior and combine them with sensory data from a commodity depth camera. The interface was tested in a usability experiment to assess its effectiveness. An image navigation and manipulation task was performed, and gesture recognition accuracy, false positives, and task completion times were computed to evaluate system performance. Experimental results show that gesture interaction and surgeon behavior analysis can be used to accurately navigate, manipulate, and access MRI images, and that this modality could therefore replace keyboard- and mouse-based interfaces.
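
One concrete way to read "attentional contextual cues" is a geometric check on whether the surgeon is facing the display; the snippet below is a minimal sketch under the assumption that head position and facing direction are already available from the depth camera's tracking (the function name and angle threshold are invented):

# Hypothetical attention cue: the surgeon counts as "attending to the display"
# when their facing direction points at the screen within a tolerance.
# Head pose and screen position are assumed to come from depth-camera tracking.
import math

def attending_to_display(head_pos, facing_dir, screen_pos, max_angle_deg=30.0):
    """Return True if facing_dir points toward screen_pos within max_angle_deg."""
    to_screen = [s - h for s, h in zip(screen_pos, head_pos)]
    dot = sum(f * t for f, t in zip(facing_dir, to_screen))
    norms = math.hypot(*facing_dir) * math.hypot(*to_screen)
    if norms == 0.0:
        return False
    angle = math.degrees(math.acos(max(-1.0, min(1.0, dot / norms))))
    return angle <= max_angle_deg

A cue like this can gate the gesture recognizer so that hand motion performed while the surgeon is turned toward the patient is not interpreted as an image-navigation command.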


Human-Robot Interaction | 2012

Gestonurse: a multimodal robotic scrub nurse

Mithun George Jacob; Yu-Ting Li; Juan P. Wachs

A novel multimodal robotic scrub nurse (RSN) system for the operating room (OR) is presented. The RSN assists the main surgeon by passing surgical instruments. Experiments were conducted to test the system with speech and gesture modalities, and average instrument acquisition times were compared. Experimental results showed that 97% of the gestures were recognized correctly under changes in scale and rotation, and that the multimodal system responded faster than the unimodal systems. A relationship similar in form to Fitts's law for instrument picking accuracy is also presented.
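
For reference, the classical (Shannon) form of Fitts's law relates movement time MT to target distance D and target width W through an index of difficulty; the paper reports a relationship of similar form for instrument picking accuracy, with its own empirically fitted constants a and b:

    MT = a + b \log_2\!\left(\frac{D}{W} + 1\right)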


Systems, Man and Cybernetics | 2011

A gesture driven robotic scrub nurse

Mithun George Jacob; Yu-Ting Li; Juan P. Wachs

A gesture driven robotic scrub nurse (GRSN) for the operating room (OR) is presented. The GRSN passes surgical instruments to the surgeon during surgery which reduces the workload of a human scrub nurse. This system offers several advantages such as freeing human nurses to perform concurrent tasks, and reducing errors in the OR due to miscommunication or absence of surgical staff. Hand gestures are recognized from a video stream, converted to instructions, and sent to a robotic arm which passes the required surgical instruments to the surgeon. Experimental results show that 95% of the gestures were recognized correctly. The gesture recognition algorithm presented is robust to changes in scale and rotation of the hand gestures. The system was compared to human task performance and was found to be only 0.83 seconds slower on average.
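
The flow the abstract describes (video stream, recognized gesture, instruction, robotic delivery) can be pictured with a minimal sketch; the gesture labels, instrument mapping, and robot interface below are hypothetical stand-ins, not the authors' implementation:

# Minimal sketch of the described flow: recognized gesture -> instruction
# -> instrument delivery.  Labels, mapping, and the robot API are invented.
GESTURE_TO_INSTRUMENT = {
    "open_palm": "scalpel",
    "two_fingers": "scissors",
    "fist": "retractor",
}

class MockRobotArm:
    def deliver(self, instrument):
        print(f"delivering {instrument} to the surgeon")

def handle_gesture(label, robot):
    instrument = GESTURE_TO_INSTRUMENT.get(label)
    if instrument is not None:      # ignore non-command gestures
        robot.deliver(instrument)

handle_gesture("open_palm", MockRobotArm())  # -> delivering scalpel to the surgeon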


Communications of the ACM | 2013

Collaboration with a robotic scrub nurse

Mithun George Jacob; Yu-Ting Li; George A. Akingba; Juan P. Wachs

Surgeons use hand gestures and/or voice commands without interrupting the natural flow of a procedure.


Iberoamerican Congress on Pattern Recognition | 2012

Intention, Context and Gesture Recognition for Sterile MRI Navigation in the Operating Room

Mithun George Jacob; Christopher Cange; Rebecca A. Packer; Juan P. Wachs

Human-Computer Interaction (HCI) devices such as the keyboard and the mouse are among the most contaminated regions in an operating room (OR). This paper proposes a sterile, intuitive HCI to navigate MRI images using freehand gestures. The system incorporates contextual cues and intent of the user to strengthen the gesture recognition process. Experimental results showed that while performing an image navigation task, mean intent recognition accuracy was 98.7% and that the false positive rate of gesture recognition dropped from 20.76% to 2.33% with context integration at similar recognition rates.


Surgical Innovation | 2013

A Cyber-Physical Management System for Delivering and Monitoring Surgical Instruments in the OR

Yu-Ting Li; Mithun George Jacob; George A. Akingba; Juan P. Wachs

Background. The standard practice in the operating room (OR) is to have a surgical technician deliver surgical instruments to the surgeon quickly and inexpensively, as required. This human “in the loop” system may result in mistakes (eg, missing information, ambiguity of instructions, and delays). Objective. Errors can be reduced or eliminated by integrating information technology (IT) and cybernetics into the OR. Automatic acquisition, processing, and interpretation of gestures and voice allow interaction with these new systems without disturbing the normal flow of surgery. Methods. This article describes the development of a cyber-physical management system (CPS), including a robotic scrub nurse, to support surgeons by passing surgical instruments during surgery as required and recording counts of surgical instruments into a personal health record (PHR). The robot responds to hand signals and voice messages detected through sophisticated computer vision and data mining techniques. Results. The CPS was tested during a mock surgery in the OR. The in situ experiment showed that the robot recognized hand gestures reliably (with an accuracy of 97%), that it can retrieve instruments as close as 25 mm, and that the total delivery time was less than 3 s on average. Conclusions. This online health tool allows the exchange of clinical and surgical information with electronic medical record–based and PHR-based applications across different hospitals, regardless of the style of viewer. The CPS has the potential to be adopted in the OR to handle surgical instruments and track them in a safe and accurate manner, relieving the human scrub tech of these tasks.
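
As a hedged sketch of the instrument-tracking side (recording deliveries and counts for a PHR), one could log each delivery event and reduce the log to net counts; the field names below are invented, not the CPS's actual schema:

# Hypothetical delivery log and count reduction, in the spirit of the
# described PHR integration.  Field names and structure are invented.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class InstrumentEvent:
    instrument: str
    action: str     # "delivered" or "returned"
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

def instrument_counts(events):
    """Net count per instrument; a nonzero total at closing flags a retained item."""
    counts = {}
    for e in events:
        delta = 1 if e.action == "delivered" else -1
        counts[e.instrument] = counts.get(e.instrument, 0) + delta
    return counts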


Proceedings of SPIE | 2012

Does a robotic scrub nurse improve economy of movements?

Juan P. Wachs; Mithun George Jacob; Yu-Ting Li; George A. Akingba

Objective: Robotic assistance during surgery has been shown to be a useful resource both to augment the surgical skills of the surgeon through tele-operation and to assist the surgeon by handing surgical instruments to them, much like a surgical tech. We evaluated the performance and effect of a gesture-driven surgical robotic nurse in terms of economy of movements, during an abdominal incision and closure exercise with a simulator. Methods: A longitudinal midline incision (100 mm) was performed on the simulated abdominal wall to enter the peritoneal cavity without damaging the internal organs. The wound was then closed using a blunt needle, ensuring that no tissue was caught up by the suture material. All the instruments required to complete this task were delivered by a robotic surgical manipulator directly to the surgeon. The instruments were requested through voice and gesture recognition. The robotic system used a low-end range sensor camera to extract hand poses and recognize the gestures. The instruments were delivered to the vicinity of the patient, at chest height and within reach of the surgeon. Task performance measures for each of three abdominal incision and closure exercises were recorded and compared to instrument delivery by a human scrub nurse. Instrument picking position variance, completion time, and hand trajectory were recorded for further analysis. Results: The variance of the position of the robotic tip when delivering a surgical instrument was compared to the same position when a human delivers the instrument, and was found to be 88.86% smaller than in the human delivery group. The mean task completion time was 162.7 ± 10.1 s with the human assistant and 191.6 ± 3.3 s (P < .01) with the robotic standard-display group. Conclusion: The multimodal robotic scrub nurse assistant improves the surgical procedure by reducing the number of movements (lower variance in the picking position). The variance of the picking point is closely related to the concept of economy of movements in the operating room. Improving the effectiveness of the operating room can potentially enhance the safety of surgical interventions without affecting performance time.
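
The 88.86% figure is a comparison of positional variance at the pick-up point between robotic and human delivery; a small illustration of that computation (with made-up coordinates, not the study's data) is:

# Illustration of the "economy of movements" comparison: variance of the
# 3-D pick-up position, robot vs. human.  Coordinates are made-up sample
# data, not the study's measurements.
from statistics import pvariance

def position_variance(points_mm):
    """Sum of per-axis variances of 3-D pick-up positions, in mm^2."""
    return sum(pvariance(axis) for axis in zip(*points_mm))

human_picks = [(402, 118, 250), (395, 131, 242), (410, 109, 261), (388, 125, 247)]
robot_picks = [(400, 120, 250), (401, 119, 251), (399, 121, 249), (400, 120, 250)]

reduction = 1 - position_variance(robot_picks) / position_variance(human_picks)
print(f"variance reduction: {reduction:.1%}")  # robot delivery is far more repeatable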


International Conference on Robotics and Automation | 2013

Surgical instrument handling and retrieval in the operating room with a multimodal robotic assistant

Mithun George Jacob; Yu-Ting Li; Juan P. Wachs

A robotic scrub nurse (RSN) designed for safe human-robot collaboration in the operating room (OR) is presented. The RSN assists the surgical staff in the OR by delivering instruments to the surgeon and operates through a multimodal interface allowing instruments to be requested through verbal commands or touchless gestures. A machine vision algorithm was designed to recognize the hand gestures performed by the user. To ensure safe human-robot collaboration, tool-tip trajectories are planned and executed to avoid collisions with the user. Experiments were conducted to test the system when speech and gesture modalities were used to interact with the robot, separately and together. The average system times were compared while performing a mock surgical task for each modality of interaction. The effects of modality training on task completion time were also studied. It was found that training results in a significant drop of 12.92% in task completion time. Experimental results show that 95.96% of the gestures used to interact with the robot were recognized correctly, and collisions with the user were completely avoided when using a new active obstacle avoidance algorithm.
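
A very simplified picture of "plan tool-tip trajectories that avoid collisions with the user" is a keep-out check along planned waypoints; the geometry below is a toy stand-in, not the paper's active obstacle avoidance algorithm:

# Toy keep-out check for planned tool-tip waypoints around the user's tracked
# hand.  A gross simplification of active obstacle avoidance, for illustration.
import math

def violates_keepout(waypoints, hand_pos, radius_m=0.15):
    """True if any planned waypoint comes within radius_m of the hand."""
    return any(math.dist(p, hand_pos) < radius_m for p in waypoints)

def lift_over(waypoints, hand_pos, radius_m=0.15, clearance_m=0.25):
    """Naive replan: raise offending waypoints above the keep-out sphere."""
    return [(x, y, max(z, hand_pos[2] + clearance_m))
            if math.dist((x, y, z), hand_pos) < radius_m else (x, y, z)
            for (x, y, z) in waypoints]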


Systems, Man and Cybernetics | 2014

Optimal modality selection for multimodal human-machine systems using RIMAG

Mithun George Jacob

Interpersonal communication in human teams is multimodal by nature and hybrid robot-human teams should be capable of utilizing diverse verbal and non-verbal communication channels (e.g. gestures, speech, and gaze). Additionally, this interaction must fulfill requirements such as speed, accuracy and resilience. While multimodal communication has been researched and human-robot mixed team communication frameworks have been developed, the computation of an effective combination of communication modalities (multimodal lexicon) to maximize effectiveness is an untapped area of research. The proposed framework objectively determines the set of optimal lexicons through multiobjective optimization of performance metrics over all feasible lexicons. The methodology is applied to the surgical setting, where a robotic nurse can collaborate with a surgical team by delivering surgical instruments as required. In this time-sensitive, high-risk context, performance metrics are obtained through a mixture of real-world experiments and simulation. Experimental results validate the predictability of the method since predicted optimal lexicons significantly (p < 0.01) outperform predicted suboptimal lexicons in time, error rate and false positive rates.
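
The selection step described here, finding the set of optimal lexicons by multiobjective optimization over performance metrics, amounts to keeping the Pareto-optimal candidates; a generic sketch (with invented candidate lexicons and metric values, all treated as lower-is-better) is:

# Generic Pareto-front filter over candidate multimodal lexicons, as a sketch
# of the multiobjective selection the abstract describes.  Candidate names and
# (time, error_rate, fp_rate) values are invented for illustration.
def dominates(a, b):
    """a dominates b if it is no worse on every metric and better on at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_optimal(lexicons):
    """lexicons: dict mapping lexicon name -> (time, error_rate, fp_rate)."""
    return {
        name for name, metrics in lexicons.items()
        if not any(dominates(other, metrics)
                   for other_name, other in lexicons.items() if other_name != name)
    }

candidates = {
    "gesture_only":        (5.2, 0.08, 0.12),
    "speech_only":         (4.1, 0.05, 0.20),
    "gesture_plus_speech": (3.6, 0.04, 0.06),
}
print(pareto_optimal(candidates))   # -> {'gesture_plus_speech'}

A lexicon survives the filter only if no other candidate is at least as good on every metric and strictly better on at least one.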
