
Publication


Featured research published by Juan P. Wachs.


Communications of the ACM | 2011

Vision-based hand-gesture applications

Juan P. Wachs; Mathias Kölsch; Helman Stern; Yael Edan

Body posture and finger pointing are a natural modality for human-machine interaction, but first the system must know what it's seeing.


IEEE Transactions on Systems, Man, and Cybernetics | 2005

Cluster labeling and parameter estimation for the automated setup of a hand-gesture recognition system

Juan P. Wachs; Helman Stern; Yael Edan

In this work, we address the issue of reconfigurability of a hand-gesture recognition system. The calibration or setup of the operational parameters of such a system is a time-consuming effort, usually performed by trial and error, and often causing system performance to suffer because of designer impatience. We suggest a methodology using a neighborhood-search algorithm for tuning system parameters. Thus, the design of hand-gesture recognition systems is transformed into an optimization problem. To test the methodology, we address the difficult problem of simultaneous calibration of the parameters of the image processing/fuzzy C-means (FCM) components of a hand-gesture recognition system. In addition, we proffer a method for supervising the FCM algorithm using linear programming and heuristic labeling. Resulting solutions exhibited fast convergence (on the order of ten iterations) to reach recognition accuracies within several percent of the optimum. Comparative performance testing using three gesture databases (BGU, American Sign Language, and Gripsee) and a real-time implementation (Tele-Gest) is reported.
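The neighborhood-search calibration described above lends itself to a compact illustration. The sketch below is not the paper's implementation; it is a minimal hill-climbing search over a hypothetical three-parameter recognizer, with evaluate_accuracy standing in for a real validation-set accuracy measure.

```python
# Illustrative sketch only: a simple neighborhood search over recognizer
# parameters, treating calibration as an optimization problem. The objective
# below is a dummy stand-in for a real hand-gesture recognizer's accuracy.

def evaluate_accuracy(params):
    """Hypothetical stand-in: score a parameter vector (e.g. an image-processing
    threshold and FCM fuzziness/cluster count) on a validation set."""
    t, m, k = params
    # Dummy smooth objective with a known optimum at (0.5, 2.0, 8).
    return 1.0 - (abs(t - 0.5) + abs(m - 2.0) / 4 + abs(k - 8) / 16)

def neighbors(params, steps=(0.05, 0.1, 1)):
    """Generate all single-parameter perturbations of the current point."""
    for i, step in enumerate(steps):
        for delta in (-step, step):
            cand = list(params)
            cand[i] += delta
            yield tuple(cand)

def neighborhood_search(start, iterations=50):
    best, best_acc = start, evaluate_accuracy(start)
    for _ in range(iterations):
        cand = max(neighbors(best), key=evaluate_accuracy)
        cand_acc = evaluate_accuracy(cand)
        if cand_acc <= best_acc:  # no improving neighbor: local optimum reached
            break
        best, best_acc = cand, cand_acc
    return best, best_acc

print(neighborhood_search(start=(0.3, 1.5, 4)))
```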


Pattern Recognition Letters | 2014

Context-based hand gesture recognition for the operating room

Mithun George Jacob; Juan P. Wachs

A sterile, intuitive, context-integrated system for navigating MRIs through freehand gestures during a neurobiopsy procedure is presented. Contextual cues are used to determine the intent of the user, improving continuous gesture recognition and the discovery and exploration of MRIs. One of the challenges in gesture interaction in the operating room is to discriminate between intentional and non-intentional gestures; this problem is also referred to as spotting. In this paper, a novel method for training gesture spotting networks is presented. The continuous gesture recognition system was shown to successfully detect gestures 92.26% of the time with a reliability of 89.97%. Experimental results show that significant improvements in task completion time were obtained through the effect of context integration.
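As an illustration of the spotting idea described above, the following sketch (not the paper's trained spotting network) gates a hypothetical gesture classifier's output with a single contextual cue, accepting a gesture only when both classifier confidence and context support it.

```python
# Illustrative sketch only: rejecting unintentional gestures ("spotting") by
# combining classifier confidence with a hypothetical contextual cue such as
# "the user is currently facing the display".

def spot_gesture(class_probs, facing_display, accept_threshold=0.8):
    """Return the recognized gesture label, or None if the motion should be
    treated as unintentional background movement."""
    label, confidence = max(class_probs.items(), key=lambda kv: kv[1])
    # Context integration: require both high confidence and a supporting cue.
    if facing_display and confidence >= accept_threshold:
        return label
    return None

probs = {"rotate": 0.91, "zoom": 0.05, "none": 0.04}
print(spot_gesture(probs, facing_display=True))   # accepted: 'rotate'
print(spot_gesture(probs, facing_display=False))  # rejected: None
```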


Precision Agriculture | 2010

Low and high-level visual feature-based apple detection from multi-modal images

Juan P. Wachs; Helman Stern; Thomas F. Burks; V. Alchanatis

Automated harvesting requires accurate detection and recognition of the fruit within a tree canopy in real-time in uncontrolled environments. However, occlusion, variable illumination, variable appearance and texture make this task a complex challenge. Our research discusses the development of a machine vision system capable of recognizing occluded green apples within a tree canopy. This involves the detection of “green” apples within scenes of “green leaves”, shadow patterns, branches and other objects found in natural tree canopies. The system uses both thermal infra-red and color image modalities in order to achieve improved performance. Maximization of mutual information is used to find the optimal registration parameters between images from the two modalities. We use two approaches for apple detection based on low- and high-level visual features. High-level features are global attributes captured by image processing operations, while low-level features are strong responses to primitive parts-based filters (such as Haar wavelets). These features are then applied separately to color and thermal infra-red images to detect apples from the background. The two approaches are compared, and it is shown that the low-level feature-based approach is superior (74% recognition accuracy) to the high-level visual feature approach (53.16% recognition accuracy). Finally, a voting scheme is used to improve the detection results, which reduces the false alarms with little effect on the recognition rate. The resulting classifiers acting independently can partially recognize the on-tree apples; however, when combined, the recognition accuracy is increased.
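The registration step above hinges on maximizing mutual information between the two modalities. The sketch below shows only how that quantity can be computed from a joint histogram with NumPy; the function name and bin count are illustrative choices, and a full registration would search over transform parameters for the maximizing alignment.

```python
# Illustrative sketch only: mutual information between two image modalities,
# the quantity the paper maximizes to register thermal and color images.
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    """Mutual information (in nats) between two equally sized 2-D arrays."""
    joint_hist, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    p_ab = joint_hist / joint_hist.sum()          # joint distribution
    p_a = p_ab.sum(axis=1, keepdims=True)         # marginal of img_a
    p_b = p_ab.sum(axis=0, keepdims=True)         # marginal of img_b
    nonzero = p_ab > 0
    return float(np.sum(p_ab[nonzero] * np.log(p_ab[nonzero] / (p_a @ p_b)[nonzero])))

# Identical images share maximal information; independent noise shares little.
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(120, 160)).astype(float)
print(mutual_information(img, img))
print(mutual_information(img, rng.integers(0, 256, size=(120, 160)).astype(float)))
```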


Archive | 2006

A Real-Time Hand Gesture Interface for Medical Visualization Applications

Juan P. Wachs; Helman Stern; Yael Edan; Michael Gillam; Craig Feied; Mark Smith; Jon Handler

In this paper, we consider a vision-based system that can interpret a user’s gestures in real time to manipulate objects within a medical data visualization environment. Dynamic navigation gestures are translated to commands based on their relative positions on the screen. Static gesture poses are identified to execute non-directional commands. This is accomplished by using Haar-like features to represent the shape of the hand. These features are then input to a fuzzy c-means (FCM) clustering algorithm for pose classification. A probabilistic neighborhood search algorithm is employed to automatically select a small number of Haar features and to tune the FCM classifier. The gesture recognition system was implemented in a sterile medical data-browser environment. Test results on four interface tasks showed that the use of a few Haar features with the supervised FCM yielded successful performance rates of 95 to 100%. In addition, a small exploratory test of the AdaBoost Haar system was made to detect a single hand gesture and assess its suitability for hand gesture recognition.
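For readers unfamiliar with fuzzy c-means, the following minimal NumPy sketch shows the clustering step in isolation; it is not the paper's tuned classifier, and random 2-D points stand in for Haar-like feature vectors.

```python
# Illustrative sketch only: a minimal fuzzy c-means (FCM) clustering routine
# of the kind used to classify hand-pose feature vectors.
import numpy as np

def fuzzy_c_means(X, c=3, m=2.0, iters=100, seed=0):
    """Return (centroids, membership) for data X of shape (n_samples, n_dims)."""
    rng = np.random.default_rng(seed)
    u = rng.random((len(X), c))
    u /= u.sum(axis=1, keepdims=True)              # memberships sum to 1 per sample
    for _ in range(iters):
        w = u ** m
        centroids = (w.T @ X) / w.sum(axis=0)[:, None]
        dist = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2) + 1e-12
        u = 1.0 / (dist ** (2 / (m - 1)))          # inverse-distance memberships
        u /= u.sum(axis=1, keepdims=True)
    return centroids, u

# Three well-separated synthetic clusters stand in for pose feature vectors.
X = np.vstack([np.random.default_rng(1).normal(mu, 0.3, size=(50, 2)) for mu in (0, 3, 6)])
centroids, u = fuzzy_c_means(X)
print(np.round(centroids, 2))                      # one centroid near each cluster mean
```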


Journal of the American Medical Informatics Association | 2013

Hand-gesture-based sterile interface for the operating room using contextual cues for the navigation of radiological images

Mithun George Jacob; Juan P. Wachs; Rebecca A. Packer

This paper presents a method to improve the navigation and manipulation of radiological images through a sterile hand gesture recognition interface based on attentional contextual cues. Computer vision algorithms were developed to extract intention and attention cues from the surgeon's behavior and combine them with sensory data from a commodity depth camera. The developed interface was tested in a usability experiment to assess its effectiveness. An image navigation and manipulation task was performed, and the gesture recognition accuracy, false positives and task completion times were computed to evaluate system performance. Experimental results show that gesture interaction and surgeon behavior analysis can be used to accurately navigate, manipulate and access MRI images, and therefore this modality could replace keyboard- and mouse-based interfaces.


International Journal of Semantic Computing | 2008

Designing hand gesture vocabularies for natural interaction by combining psycho-physiological and recognition factors

Helman Stern; Juan P. Wachs; Yael Edan

A need exists for intuitive hand gesture machine interaction in which the machine not only recognizes gestures, but the human also feels comfortable and natural in their execution. The gesture vocabulary design problem is rigorously formulated as a multi-objective optimization problem. Psycho-physiological measures (intuitiveness, comfort) and gesture recognition accuracy are taken as the multi-objective factors. The hand gestures are static and recognized by a vision-based fuzzy c-means classifier. A meta-heuristic approach decomposes the problem into two sub-problems: finding the subsets of gestures that meet a minimal accuracy requirement, and matching gestures to commands to maximize the human factors objective. The result is a set of Pareto optimal solutions in which no objective may be increased without a concomitant decrease in another. Several solutions from the Pareto set are selected by the user using prioritized objectives. Software programs are developed to automate the collection of intuitiveness and stress indices. The method is tested on a simulated car-maze navigation task. Validation tests were conducted to substantiate the claim that solutions that maximize intuitiveness, comfort, and recognition accuracy can be used as proxies for the minimization of task time. Learning and memorability were also tested.
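The Pareto-set idea above can be made concrete with a short sketch. The code below (an illustrative filter, not the paper's meta-heuristic) extracts the non-dominated candidates when each hypothetical vocabulary is scored on intuitiveness, comfort, and recognition accuracy, all to be maximized.

```python
# Illustrative sketch only: extracting the Pareto-optimal (non-dominated)
# candidate gesture vocabularies under multiple objectives to be maximized.

def pareto_front(candidates):
    """candidates: dict of name -> tuple of objective scores (higher is better).
    Returns the names of the non-dominated candidates."""
    def dominates(a, b):
        # a dominates b if it is at least as good everywhere and better somewhere.
        return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))
    return [name for name, score in candidates.items()
            if not any(dominates(other, score)
                       for other_name, other in candidates.items() if other_name != name)]

# Hypothetical (intuitiveness, comfort, recognition accuracy) scores.
vocabs = {"V1": (0.9, 0.6, 0.95), "V2": (0.7, 0.9, 0.92), "V3": (0.6, 0.5, 0.90)}
print(pareto_front(vocabs))   # ['V1', 'V2']: V3 is dominated by both
```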


IEEE International Conference on Systems, Man, and Cybernetics | 2006

Human Factors for Design of Hand Gesture Human-Machine Interaction

Helman Stern; Juan P. Wachs; Yael Edan

A global approach to hand gesture vocabulary design is proposed which includes human as well as technical design factors. The selection of gestures for preconceived command vocabularies has not been addressed in a systematic manner; present methods are ad hoc. In an analytical approach, technological factors such as gesture recognition accuracy are easily obtained and well studied. Conversely, it is difficult to obtain measures of human-centered desires (intuitiveness, comfort). These factors, being subjective, are costly and time-consuming to obtain, and hence we have developed automated methods for acquiring these data through specially designed applications. Results of the intuitiveness experiments showed that when commands are presented as stimuli, the gestural responses vary widely over a population of subjects. This result refutes the hypothesis that there exist universal common gestures to express user intentions or commands.


Human Factors | 2015

A User-Developed 3-D Hand Gesture Set for Human–Computer Interaction

Anna Pereira; Juan P. Wachs; Kunwoo Park; David Rempel

Objective: The purpose of this study was to develop a lexicon for 3-D hand gestures for common human–computer interaction (HCI) tasks by considering usability and effort ratings. Background: Recent technologies create an opportunity for developing a free-form 3-D hand gesture lexicon for HCI. Method: Subjects (N = 30) with prior experience using 2-D gestures on touch screens performed 3-D gestures of their choice for 34 common HCI tasks and rated their gestures on preference, match, ease, and effort. Videos of the 1,300 generated gestures were analyzed for gesture popularity, order, and response times. Gesture hand postures were rated by the authors on biomechanical risk and fatigue. Results: A final task gesture set is proposed based primarily on subjective ratings and hand posture risk. The different dimensions used for evaluating task gestures were not highly correlated and, therefore, measured different properties of the task–gesture match. Application: A method is proposed for generating a user-developed 3-D gesture lexicon for common HCIs that involves subjective ratings and a posture risk rating for minimizing arm and hand fatigue.


IEEE International Conference on Rehabilitation Robotics | 2013

Integrated vision-based robotic arm interface for operators with upper limb mobility impairments

Hairong Jiang; Juan P. Wachs; Bradley S. Duerstock

An integrated, computer vision-based system was developed to operate a commercial wheelchair-mounted robotic manipulator (WMRM). In this paper, a gesture recognition interface system developed specifically for individuals with upper-level spinal cord injuries (SCIs) was combined with object tracking and face recognition systems to form an efficient, hands-free WMRM controller. In this test system, two Kinect cameras were used synergistically to perform a variety of simple object retrieval tasks. One camera was used to interpret hand gestures as commands to control the WMRM and to locate the operator's face for object positioning. The other sensor was used to automatically recognize different daily living objects for test subjects to select. The gesture recognition interface incorporated hand detection, tracking and recognition algorithms to obtain a high recognition accuracy of 97.5% for an eight-gesture lexicon. An object recognition module employing the Speeded Up Robust Features (SURF) algorithm was used, and recognition results were sent as commands for “coarse positioning” of the robotic arm near the selected daily living object. Automatic face detection was also provided as a shortcut for the subjects to position objects near the face using the WMRM. Task completion time experiments were conducted to compare manual (gestures only) and semi-manual (gestures, automatic face detection and object recognition) WMRM control modes. The use of automatic face and object detection significantly increased the completion times for retrieving a variety of daily living objects.
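The object-recognition module above is feature-based. The sketch below follows the same template-matching pipeline but swaps SURF (patented and shipped only in opencv-contrib) for ORB, which is available in stock OpenCV; the synthetic images and parameters are illustrative only.

```python
# Illustrative sketch only: feature-based recognition of a known object in a
# scene, in the spirit of the paper's SURF module, using ORB instead of SURF.
import cv2
import numpy as np

# Synthetic "object" template and a scene containing a shifted copy of it.
template = np.zeros((120, 120), np.uint8)
cv2.rectangle(template, (20, 20), (100, 100), 255, 3)
cv2.putText(template, "A", (40, 80), cv2.FONT_HERSHEY_SIMPLEX, 1.5, 255, 2)
scene = np.zeros((240, 320), np.uint8)
scene[60:180, 100:220] = template

orb = cv2.ORB_create(nfeatures=200)
kp_t, des_t = orb.detectAndCompute(template, None)
kp_s, des_s = orb.detectAndCompute(scene, None)
if des_t is None or des_s is None:
    raise RuntimeError("no keypoints detected on the synthetic images")

# Brute-force Hamming matching with cross-check keeps only mutual best matches.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des_t, des_s), key=lambda m: m.distance)
print(f"{len(matches)} cross-checked matches between template and scene")
```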

Collaboration


Dive into Juan P. Wachs's collaborations.

Top Co-Authors

Helman Stern, Ben-Gurion University of the Negev
Yael Edan, Ben-Gurion University of the Negev
Mathias Kölsch, Naval Postgraduate School