Publication


Featured research published by Maria Eugenia Cabrera.


Surgery | 2016

Medical telementoring using an augmented reality transparent display

Daniel Andersen; Voicu Popescu; Maria Eugenia Cabrera; Aditya Shanghavi; Gerardo Gomez; Sherri Marley; Brian Mullis; Juan P. Wachs

BACKGROUND: The goal of this study was to design and implement a novel surgical telementoring system called the System for Telementoring with Augmented Reality (STAR), which uses a virtual transparent display to convey precise locations in the operating field to a trainee surgeon. This system was compared with a conventional system based on a telestrator for surgical instruction. METHODS: A telementoring system was developed and evaluated in a study that used a 1 × 2 between-subjects design with telementoring system (STAR or conventional) as the independent variable. The participants in the study were 20 premedical or medical students who had no prior experience with telementoring. Each participant completed a task of port placement and a task of abdominal incision under telementoring using either the STAR or the conventional system. The metrics used to assess performance were placement error, number of focus shifts, and time to task completion. RESULTS: Compared with the conventional system, participants using STAR completed the 2 tasks with less placement error (45% and 68%) and with fewer focus shifts (86% and 44%), but more slowly (19% for each task). CONCLUSIONS: Using STAR resulted in decreased annotation placement error and fewer focus shifts, but greater times to task completion. STAR placed virtual annotations directly onto the trainee surgeon's view of the operating field, conveying location with great accuracy; this technology helped to avoid shifts in focus and decreased depth perception, and enabled execution of the task to be fine-tuned to match the telementored instruction, but led to greater times to task completion.


Human-Robot Interaction | 2016

A comparative study for telerobotic surgery using free hand gestures

Tian Zhou; Maria Eugenia Cabrera; Juan P. Wachs; Thomas Low; Chandru P. Sundaram

This research presents an exploratory study of touch-based and touchless interfaces selected to teleoperate a highly dexterous surgical robot. Incorporating touchless interfaces into the surgical arena may give surgeons the ability to engage in telerobotic surgery much as if they were operating with their bare hands; on the other hand, precision and sensibility may be lost. To explore the advantages and drawbacks of these modalities, five interfaces were selected to send navigational commands to the Taurus robot: Omega, Hydra, and a keyboard represented the touch-based group, while Leap Motion and Kinect were selected as touchless interfaces. Three experimental designs, based on standardized surgical tasks, were selected to test the system, with clinically relevant performance metrics measured to evaluate the users' performance, learning rates, control stability, and interaction naturalness. The current work provides a benchmark and validation framework for the comparison of these two groups of interfaces and discusses their potential for current and future adoption in the surgical setting.
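
The study hinges on mapping each device's raw input onto a common set of navigational commands for the robot. The paper does not describe its software architecture, so the Python sketch below is only a rough illustration of such an adapter layer; the class names, the key bindings, and the tracker methods (hand_position, is_pinching) are hypothetical.

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass


@dataclass
class NavCommand:
    """A normalized navigational command sent to the teleoperated robot."""
    dx: float   # desired translation along x (mm)
    dy: float   # desired translation along y (mm)
    dz: float   # desired translation along z (mm)
    grip: bool  # close (True) or open (False) the end effector


class InputInterface(ABC):
    """Common adapter so touch-based and touchless devices look identical upstream."""

    @abstractmethod
    def read(self) -> NavCommand:
        """Poll the device and return the next normalized command."""


class KeyboardInterface(InputInterface):
    """Discrete, step-wise control (one of the touch-based interfaces)."""

    STEP_MM = 1.0  # fixed increment per key press (assumed)

    def __init__(self, key_source):
        self.key_source = key_source  # hypothetical callable returning the last key pressed

    def read(self) -> NavCommand:
        key = self.key_source()
        mapping = {
            "w": NavCommand(0, self.STEP_MM, 0, False),
            "s": NavCommand(0, -self.STEP_MM, 0, False),
            "a": NavCommand(-self.STEP_MM, 0, 0, False),
            "d": NavCommand(self.STEP_MM, 0, 0, False),
            "g": NavCommand(0, 0, 0, True),
        }
        return mapping.get(key, NavCommand(0, 0, 0, False))


class HandTrackerInterface(InputInterface):
    """Continuous control driven by tracked hand position (touchless devices)."""

    def __init__(self, tracker, gain=0.5):
        self.tracker = tracker  # hypothetical object exposing hand_position() / is_pinching()
        self.gain = gain        # scales hand motion to robot motion

    def read(self) -> NavCommand:
        x, y, z = self.tracker.hand_position()
        return NavCommand(self.gain * x, self.gain * y, self.gain * z,
                          self.tracker.is_pinching())
```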


IEEE International Conference on Automatic Face and Gesture Recognition | 2017

What Makes a Gesture a Gesture? Neural Signatures Involved in Gesture Recognition

Maria Eugenia Cabrera; Keisha D. Novak; Daniel Foti; Richard M. Voyles; Juan P. Wachs

Previous work in the area of gesture production has made the assumption that machines can replicate human-like gestures by connecting a bounded set of salient points in the motion trajectory. Those inflection points were hypothesized to also display cognitive saliency. The purpose of this paper is to validate that claim using electroencephalography (EEG). That is, this paper attempts to find neural signatures of gestures (also referred to as placeholders) in human cognition, which facilitate the understanding, learning, and repetition of gestures. Further, it is discussed whether there is a direct mapping between the placeholders and kinematically salient points in the gesture trajectories. These are expressed as relationships between inflection points in the gesture trajectories and oscillatory mu rhythms (8-12 Hz) in the EEG. This is achieved by correlating fluctuations in mu power during gesture observation with salient motion points found for each gesture. Peaks in the EEG signal at central electrodes (motor cortex; C3/Cz/C4) and occipital electrodes (visual cortex; O3/Oz/O4) were used to isolate the salient events within each gesture. We found that a linear model predicting mu peaks from motion inflections fits the data well. Increases in EEG power were detected 380 and 500 ms after inflection points at occipital and central electrodes, respectively. These results suggest that coordinated activity in visual and motor cortices is sensitive to motion trajectories during gesture observation, and is consistent with the proposal that inflection points operate as placeholders in gesture recognition.
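
As a rough sketch of the kind of analysis described (bandpass the EEG into the mu band, extract a power envelope, locate power peaks, and relate their latencies to motion inflection times with a linear model), the Python fragment below uses NumPy/SciPy. The sampling rate, the random placeholder data, and the peak-matching rule are assumptions for illustration, not the authors' pipeline.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert, find_peaks

FS = 256  # assumed EEG sampling rate in Hz

def mu_power_envelope(eeg_channel, fs=FS, band=(8.0, 12.0)):
    """Bandpass one EEG channel to the mu band and return its power envelope."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    mu = filtfilt(b, a, eeg_channel)
    return np.abs(hilbert(mu)) ** 2  # instantaneous power via the analytic signal

def peak_latencies(power, fs=FS):
    """Return times (s) of local maxima in the mu-power envelope."""
    peaks, _ = find_peaks(power, distance=int(0.2 * fs))  # at least 200 ms apart
    return peaks / fs

# Placeholder data: one central-electrode channel and the inflection times
# (in seconds) found in the observed gesture's motion trajectory.
eeg_c3 = np.random.randn(10 * FS)
inflection_times = np.array([1.2, 2.9, 4.4, 6.1])

power = mu_power_envelope(eeg_c3)
mu_peaks = peak_latencies(power)

# For each motion inflection, take the first mu-power peak that follows it,
# then fit a linear model: peak_time ~ slope * inflection_time + offset.
following = np.array([mu_peaks[mu_peaks > t][0] for t in inflection_times
                      if np.any(mu_peaks > t)])
kept = inflection_times[:len(following)]
slope, offset = np.polyfit(kept, following, deg=1)
print(f"lag model: peak ~ {slope:.2f} * inflection + {offset:.3f} s")
```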


The Visual Computer | 2016

Virtual annotations of the surgical field through an augmented reality transparent display

Daniel Andersen; Voicu Popescu; Maria Eugenia Cabrera; Aditya Shanghavi; Gerardo Gomez; Sherri Marley; Brian Mullis; Juan P. Wachs

Existing telestrator-based surgical telementoring systems require a trainee surgeon to shift focus frequently between the operating field and a nearby monitor to acquire and apply instructions from a remote mentor. We present a novel approach to surgical telementoring where annotations are superimposed directly onto the surgical field using an augmented reality (AR) simulated transparent display. We present our first steps towards realizing this vision, using two networked conventional tablets to allow a mentor to remotely annotate the operating field as seen by a trainee. Annotations are anchored to the surgical field as the trainee tablet moves and as the surgical field deforms or becomes occluded. The system is built exclusively from compact commodity-level components—all imaging and processing are performed on the two tablets.
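
The abstract does not spell out the anchoring algorithm. A common way to keep 2D annotations attached to a moving view of a roughly planar scene is to match features between a reference frame and the current frame and re-map the annotation through a homography; the OpenCV sketch below illustrates that general idea only and should not be read as the authors' implementation.

```python
import cv2
import numpy as np

orb = cv2.ORB_create(nfeatures=1000)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def remap_annotation(ref_gray, cur_gray, annotation_pts):
    """Re-anchor annotation points (Nx2, defined in the reference frame)
    into the current frame via a feature-based homography."""
    kp1, des1 = orb.detectAndCompute(ref_gray, None)
    kp2, des2 = orb.detectAndCompute(cur_gray, None)
    if des1 is None or des2 is None:
        return None  # not enough texture to track

    matches = matcher.match(des1, des2)
    if len(matches) < 4:
        return None  # a homography needs at least 4 correspondences

    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    if H is None:
        return None

    pts = np.float32(annotation_pts).reshape(-1, 1, 2)
    return cv2.perspectiveTransform(pts, H).reshape(-1, 2)
```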


Frontiers in Robotics and AI | 2017

A Human-Centered Approach to One-Shot Gesture Learning

Maria Eugenia Cabrera; Juan P. Wachs

This paper discusses the problem of one-shot gesture recognition using a human-centered approach and its potential application to fields such as human-robot interaction, where the user’s intentions are indicated through spontaneous gesturing (one-shot). Casual users have limited time to learn the gesture interface, which makes one-shot recognition an attractive alternative to interface customization. With the aim of natural interaction with machines, a framework must be developed to include the ability of humans to understand gestures from a single observation. Previous approaches to one-shot gesture recognition have relied heavily on statistical and data-mining-based solutions, and have ignored the mechanisms that are used by humans to perceive and execute gestures and that can provide valuable context information. This omission has led to suboptimal solutions. The focus of this work is on the process that leads to the realization of a gesture, rather than on the gesture itself. In this case, context involves the way in which humans produce gestures—the kinematic and anthropometric characteristics. In the method presented here, the strategy is to generate a data set of realistic samples based on features extracted from a single gesture sample. These features, called the “gist of a gesture,” are considered to represent what humans remember when seeing a gesture and, later, the cognitive process involved when trying to replicate it. By adding meaningful variability to these features, a large training data set is created while preserving the fundamental structure of the original gesture. The availability of a large data set of realistic samples allows classifiers to be trained for future recognition. The performance of the method is evaluated using different lexicons, and its efficiency is compared with that of traditional N-shot learning approaches. The strength of the approach is further illustrated through human and machine recognition of gestures performed by a dual-arm robotic platform.
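
At a high level, the method extracts a compact description of the single example and perturbs it to synthesize a training set. The Python sketch below mimics that idea for a 2D trajectory, treating the "gist" simply as an ordered set of salient points and adding bounded noise before spline resampling; the feature choice, noise model, and parameter values are illustrative assumptions, not the paper's. The resulting array could then be fed to any off-the-shelf classifier.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def synthesize_samples(keypoints, n_samples=200, jitter=0.05, n_points=64, seed=0):
    """Generate trajectory variants from one gesture's keypoints.

    keypoints : (K, 2) array of salient points of the single observed gesture,
                ordered along the motion (standing in for the 'gist of a gesture').
    Returns an (n_samples, n_points, 2) array of resampled trajectories.
    """
    rng = np.random.default_rng(seed)
    keypoints = np.asarray(keypoints, dtype=float)
    scale = np.ptp(keypoints, axis=0)          # gesture extent per axis
    t_key = np.linspace(0.0, 1.0, len(keypoints))
    t_out = np.linspace(0.0, 1.0, n_points)

    samples = np.empty((n_samples, n_points, 2))
    for i in range(n_samples):
        # Perturb each keypoint by a small fraction of the gesture's extent,
        # preserving its overall structure while adding human-like variability.
        noisy = keypoints + rng.normal(0.0, jitter, keypoints.shape) * scale
        spline = CubicSpline(t_key, noisy, axis=0)
        samples[i] = spline(t_out)
    return samples

# Hypothetical single observation: keypoints of a diamond-shaped gesture.
one_shot = np.array([[0, 0], [1, 1], [2, 0], [1, -1], [0, 0]], dtype=float)
training_set = synthesize_samples(one_shot)
print(training_set.shape)  # (200, 64, 2)
```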


IEEE International Conference on Automatic Face and Gesture Recognition | 2017

One-Shot Gesture Recognition: One Step Towards Adaptive Learning

Maria Eugenia Cabrera; Natalia Sanchez-Tamayo; Richard M. Voyles; Juan P. Wachs

Users' intentions may be expressed through spontaneous gestures, which may have been seen only a few times or never before. Recognizing such gestures involves one-shot gesture learning. While most research has focused on the recognition of the gestures themselves, new approaches have recently been proposed to deal with gesture perception and production as part of the recognition problem. The framework presented in this work focuses on learning the process that leads to gesture generation, rather than treating the gestures as the outcomes of a stochastic process only. This is achieved by leveraging kinematic and cognitive aspects of human interaction. These factors enable the artificial production of realistic gesture samples originating from a single observation, which in turn are used as training sets for state-of-the-art classifiers. Classification performance is evaluated in terms of recognition accuracy and coherency, the latter being a novel metric that determines the level of agreement between humans and machines; here, the machines are robots that perform the artificially generated examples. Coherency in recognition was determined to be 93.8%, with recognition accuracies of 89.2% for the classifiers and 92.5% for the human participants. A proof of concept toward expanding the proposed one-shot learning approach to adaptive learning was performed, and its results and implications are discussed.


Medicine Meets Virtual Reality | 2016

Avoiding Focus Shifts in Surgical Telementoring Using an Augmented Reality Transparent Display.

Daniel Andersen; Voicu Popescu; Maria Eugenia Cabrera; Aditya Shanghavi; Gerardo Gomez; Sherri Marley; Brian Mullis; Juan P. Wachs

Conventional surgical telementoring systems require the trainee to shift focus away from the operating field to a nearby monitor to receive mentor guidance. This paper presents the next generation of telementoring systems. Our system, STAR (System for Telementoring with Augmented Reality), avoids focus shifts by placing mentor annotations directly into the trainee's field of view using augmented reality transparent display technology. This prototype was tested with pre-medical and medical students. Experiments were conducted in which participants were asked to identify precise operating field locations communicated to them using either STAR or a conventional telementoring system. STAR was shown to improve accuracy and to reduce focus shifts. The initial STAR prototype only provides an approximate transparent display effect, without visual continuity between the display and the surrounding area. The current version of our transparent display provides visual continuity by showing the geometry and color of the operating field from the trainee's viewpoint.


International Symposium on Mixed and Augmented Reality | 2016

A Hand-Held, Self-Contained Simulated Transparent Display

Daniel Andersen; Voicu Popescu; Chengyuan Lin; Maria Eugenia Cabrera; Aditya Shanghavi; Juan P. Wachs

Hand-held transparent displays are important infrastructure for augmented reality applications. Truly transparent displays are not yet feasible in hand-held form, and a promising alternative is to simulate transparency by displaying the image the user would see if the display were not there. Previous simulated transparent displays have important limitations, such as being tethered to auxiliary workstations, requiring the user to wear obtrusive head-tracking devices, or lacking the depth acquisition support that is needed for an accurate transparency effect for close-range scenes. We describe a general simulated transparent display and three prototype implementations (P1, P2, and P3), which take advantage of emerging mobile devices and accessories. P1 uses an off-the-shelf smartphone with built-in head-tracking support; P1 is compact and suitable for outdoor scenes, providing an accurate transparency effect for scene distances greater than 6 m. P2 uses a tablet with a built-in depth camera; P2 is compact and suitable for short-distance indoor scenes, but the user has to hold the display in a fixed position. P3 uses a conventional tablet enhanced with on-board depth acquisition and head-tracking accessories; P3 compensates for user head motion and provides accurate transparency even for close-range scenes. The prototypes are hand-held and self-contained, without the need for auxiliary workstations for computation.
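
The core of the transparency effect is reprojection: scene points captured by the device's depth camera are rendered from the user's tracked eye position rather than from the camera itself. The toy NumPy sketch below illustrates that reprojection with hypothetical intrinsics, a placeholder depth map, and an assumed eye-to-camera transform; it stands in for the idea, not for the prototypes' actual rendering pipeline.

```python
import numpy as np

def backproject(depth, K):
    """Lift a depth map (H, W) into 3D camera-space points (H*W, 3)."""
    h, w = depth.shape
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth.ravel()
    x = (u.ravel() - cx) * z / fx
    y = (v.ravel() - cy) * z / fy
    return np.stack([x, y, z], axis=1)

def reproject(points_cam, T_eye_from_cam, K_eye):
    """Project camera-space points into a virtual camera placed at the user's eye."""
    pts_h = np.hstack([points_cam, np.ones((len(points_cam), 1))])
    pts_eye = (T_eye_from_cam @ pts_h.T).T[:, :3]
    valid = pts_eye[:, 2] > 0            # keep only points in front of the eye
    uv = (K_eye @ pts_eye[valid].T).T
    return uv[:, :2] / uv[:, 2:3]        # pixel coordinates on the display

# Hypothetical intrinsics for the depth camera and the virtual eye camera,
# plus a rigid transform from head tracking (here: eye 15 cm behind the camera).
K_cam = np.array([[525.0, 0, 320.0], [0, 525.0, 240.0], [0, 0, 1.0]])
K_eye = K_cam.copy()
T_eye_from_cam = np.eye(4)
T_eye_from_cam[2, 3] = 0.15

depth = np.full((480, 640), 1.0)         # placeholder: flat scene 1 m away
pixels = reproject(backproject(depth, K_cam), T_eye_from_cam, K_eye)
print(pixels.shape)                      # (307200, 2)
```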


Human-Robot Interaction | 2018

Coherence in One-Shot Gesture Recognition for Human-Robot Interaction

Maria Eugenia Cabrera; Richard M. Voyles; Juan P. Wachs

An experiment was conducted in which a robotic platform performed artificially generated gestures and both trained classifiers and human participants attempted to recognize them. Classification accuracy was evaluated through a new metric of coherence in gesture recognition between humans and robots. Experimental results showed an average recognition performance of 89.2% for the trained classifiers and 92.5% for the participants. Coherence in one-shot gesture recognition was determined to be gamma = 93.8%. This new metric provides a quantifier for validating how realistic the robot-generated gestures are.
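
The abstract does not give a formula for gamma. Read simply as the rate at which the trained classifier and the human participants assign the same label to each robot-performed gesture, regardless of whether that shared label is correct, it could be computed as in the short sketch below; this interpretation and the toy labels are assumptions.

```python
import numpy as np

def coherence(human_labels, machine_labels):
    """Fraction of robot-performed gesture samples on which humans and the
    classifier agree, whether or not the agreed label matches ground truth.
    (One plausible reading of the coherence metric, not its published definition.)"""
    human_labels = np.asarray(human_labels)
    machine_labels = np.asarray(machine_labels)
    return np.mean(human_labels == machine_labels)

# Toy example with 8 artificially generated gesture executions.
truth   = ["wave", "circle", "point", "wave", "stop", "circle", "point", "stop"]
human   = ["wave", "circle", "point", "wave", "stop", "circle", "wave",  "stop"]
machine = ["wave", "circle", "point", "wave", "stop", "point",  "wave",  "stop"]

print("human accuracy:  ", np.mean(np.array(human) == np.array(truth)))    # 0.875
print("machine accuracy:", np.mean(np.array(machine) == np.array(truth)))  # 0.75
print("coherence:       ", coherence(human, machine))                      # 0.875
# Note: coherence can exceed either accuracy, since shared mistakes count as agreement.
```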


Archive | 2016

A Comparative Study for Touchless Telerobotic Surgery

Tian Zhou; Maria Eugenia Cabrera; Juan P. Wachs

This chapter presents a comparative study among different interfaces used to teleoperate a robot to complete surgical tasks. The objective of this study is to assess the feasibility of touchless surgery and its drawbacks compared to its counterpart, touch-based surgery. The five interfaces evaluated include both touch-based and touchless gaming technologies: Kinect, Hydra, Leap Motion, Omega 7, and a standard keyboard. The main motivation for selecting touchless controlling devices is the direct use of the hands to perform surgical tasks without compromising the sterility required in operating rooms (ORs); the trade-off when working with touchless interfaces is the loss of direct force feedback. However, based on the paradigm of sensory substitution, feedback is provided in the form of sound and visual cues. The experiments conducted to evaluate the different interaction modalities involve two surgical tasks, namely incision and peg transfer. Both tasks were conducted using a teleoperated, highly dexterous robot. Experimental results revealed that in the incision task, touchless interfaces provide a higher sense of control than their touch-based counterparts, with statistical significance (p < 0.01). While maintaining a fixed depth during incision, Kinect and the keyboard showed the least variance, due to the discrete control protocol used. In the peg transfer experiment, the Omega controller led to shorter task completion times, while the fastest learning rate was found when using the Leap Motion sensor.
