Publications


Featured research published by Richard A. Foulds.


International Conference on Automatic Face and Gesture Recognition | 1996

Toward robust skin identification in video images

David M. Saxe; Richard A. Foulds

There are many applications where it is desirable to segment a video image into regions defined by color. Among these are the recognition of gesture from the image (as opposed to instrumented gloves), facial expression and orientation, and video teleconferencing. In these examples, the important elements of the image are the human hands and face, which share the subject's skin coloration. This paper describes an approach to the identification of skin-colored regions of the image that is robust in terms of variations in skin pigmentation in a single subject, differences in skin pigmentation across a population of potential users, and subject clothing and image background. The paper also discusses the potential for being robust over a wide range of illuminating conditions.
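As a rough illustration of the color-segmentation idea (not the authors' algorithm, which is specifically designed for robustness across pigmentation, clothing, background, and illumination), a minimal skin mask can be computed by thresholding in HSV space. The threshold values below are assumptions for demonstration only:

```python
# Minimal HSV-threshold skin mask; thresholds are illustrative assumptions,
# not the paper's (which adapts to subject pigmentation and illumination).
import cv2
import numpy as np

def skin_mask(frame_bgr: np.ndarray) -> np.ndarray:
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    lower = np.array([0, 40, 60], dtype=np.uint8)     # hue/sat/value lower bound
    upper = np.array([25, 255, 255], dtype=np.uint8)  # rough "skin" hue band
    mask = cv2.inRange(hsv, lower, upper)
    # Morphological opening/closing removes speckle and fills small holes.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    return cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)

# Demo on a synthetic skin-toned patch (BGR).
frame = np.full((64, 64, 3), (30, 60, 150), dtype=np.uint8)
print(skin_mask(frame).mean())  # 255.0 -> the whole patch is flagged as skin
```

A fixed global threshold like this is exactly what the paper argues against; a robust system adapts the color model per subject and per illuminant.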


IEEE Transactions on Neural Systems and Rehabilitation Engineering | 2004

Biomechanical and perceptual constraints on the bandwidth requirements of sign language

Richard A. Foulds

Access to telecommunication systems by deaf users of sign language can be greatly enhanced with the incorporation of video conferencing in addition to text-based adaptations. However, the communication channel bandwidth is often challenged by the spatial requirements to represent the image in each frame and the temporal demands to preserve the movement trajectory with a sufficiently high frame rate. Effective systems must balance the portion of a limited channel bandwidth devoted to the quality of the individual frames against the frame rate in order to meet their intended needs. Conventional video conferencing technology generally addresses the limitations of channel capacity by drastically reducing the frame rate while preserving image quality. This produces a jerky image that disturbs the trajectories of the hands and arms, which are essential in sign language. In contrast, a sign language communication system must provide a frame rate that is capable of representing the kinematic bandwidth of human movement. Prototype sign language communication systems often attempt to maintain a high frame rate by reducing the quality of the image with lossy spatial compression. Unfortunately, this still requires a combined spatial and temporal data rate that exceeds the limited channel of residential and wireless telephony. While spatial compression techniques have been effective in reducing the data, there has been no comparable compression of sign language in the temporal domain. Even modest reductions in the frame rate introduce perceptually disturbing flicker that decreases intelligibility. This paper introduces a method through which temporal compression on the order of 5:1 can be achieved. This is accomplished by decoupling the biomechanical or kinematic bandwidth necessary to represent continuous movements in sign language from the perceptually determined critical flicker frequency.
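The temporal-compression argument can be sketched numerically: because hand and arm trajectories are smooth and band-limited, they can be sampled well below the display frame rate and reconstructed by interpolation. A minimal sketch of this principle, with an illustrative signal and rates that are not from the paper:

```python
# Sketch: transmit trajectory keyframes at 1/5 of the capture rate and
# reconstruct intermediate frames by spline interpolation (5:1 temporal
# compression). Signal, rates, and spline choice are illustrative only.
import numpy as np
from scipy.interpolate import CubicSpline

fps = 60.0                                   # assumed capture/display rate
t_full = np.arange(0, 2, 1 / fps)            # 2 s of motion
x_full = np.sin(2 * np.pi * 1.5 * t_full)    # stand-in hand trajectory (~1.5 Hz)

t_tx, x_tx = t_full[::5], x_full[::5]        # "transmit" every 5th sample
x_rec = CubicSpline(t_tx, x_tx)(t_full)      # receiver re-renders at full rate

print("max reconstruction error:", float(np.abs(x_rec - x_full).max()))
```

The reconstruction error stays small because the movement's kinematic bandwidth is far below the display rate needed to avoid flicker, which is the decoupling the paper exploits.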


Northeast Bioengineering Conference | 2003

American Sign Language finger spelling recognition system

J.M. Allen; P.K. Asselin; Richard A. Foulds

An American Sign Language (ASL) finger spelling recognition system was designed and constructed in order to translate the ASL alphabet into the corresponding printed and spoken English letters. This endeavor required the use of abduction and bend-sensing technology to accurately transform hand and finger positions into real-time digital joint-angle data. Computer algorithms were developed in MATLAB and LabVIEW to recognize the finger spelled letters and translate them into English letters.
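As a toy analogue of the recognition step (the paper's algorithms were implemented in MATLAB and LabVIEW; the per-letter joint-angle templates below are hypothetical), a letter can be assigned by nearest-neighbor matching of the glove's joint-angle vector:

```python
# Toy nearest-neighbor letter classifier over joint-angle vectors.
# Templates are made-up stand-ins for calibrated per-letter hand shapes.
import numpy as np

templates = {
    "A": np.array([85.0, 90.0, 88.0, 87.0, 10.0]),  # hypothetical angles (deg)
    "B": np.array([5.0, 4.0, 6.0, 5.0, 60.0]),
    "C": np.array([45.0, 50.0, 48.0, 47.0, 30.0]),
}

def classify(sample: np.ndarray) -> str:
    # Choose the letter whose template is nearest in Euclidean distance.
    return min(templates, key=lambda k: float(np.linalg.norm(sample - templates[k])))

print(classify(np.array([80.0, 92.0, 85.0, 89.0, 12.0])))  # -> A
```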


International Conference of the IEEE Engineering in Medicine and Biology Society | 2002

Robust region of interest coding for improved sign language telecommunication

David M. Saxe; Richard A. Foulds

More than 500,000 deaf people in North America use American Sign Language or a similar signed system as a first language. Long-distance communication of this visually based medium is hampered by its incompatibility with audio and text telecommunication systems. Movements associated with signed languages require a more consistent and higher frame rate than is available with residential video telephony. New video compression standards (JPEG 2000 and MPEG-4) allow optional region of interest coding, in which areas within a frame can be assigned different levels of compression. This paper presents a novel skin color segmentation approach that identifies the hands and face in each video frame. This method is robust to variations in skin pigmentation in a single subject, to differences in skin pigmentation across a population of potential users, and to subject clothing and image background. Specifying these critical regions of interest to the compression algorithm maintains high visual quality in the regions of the hands and face, while allowing very lossy, high compression of the remainder of the video frame. This reduces the coded representation of each frame and offers a potential increase in the frame rate for telecommunication.
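A minimal sketch of the bit-allocation idea, assuming OpenCV: encode the hand/face region at high JPEG quality and the rest of the frame very lossily, then composite. Real region-of-interest coding happens inside the JPEG 2000/MPEG-4 codec; this round-trip composite only illustrates the effect:

```python
# Illustration of ROI bit allocation: JPEG-encode the frame twice, at high
# and very low quality, and keep the high-quality pixels only inside the
# hand/face mask. A real codec does this internally; this is a demo only.
import cv2
import numpy as np

def roi_composite(frame: np.ndarray, roi_mask: np.ndarray) -> np.ndarray:
    def jpeg_roundtrip(img, quality):
        ok, buf = cv2.imencode(".jpg", img, [cv2.IMWRITE_JPEG_QUALITY, quality])
        return cv2.imdecode(buf, cv2.IMREAD_COLOR)

    hi = jpeg_roundtrip(frame, 90)            # high quality: hands and face
    lo = jpeg_roundtrip(frame, 10)            # very lossy: everything else
    in_roi = cv2.merge([roi_mask] * 3) > 0    # single-channel mask -> 3 channels
    return np.where(in_roi, hi, lo)
```

In a full system the mask would come from the skin segmentation described above, and the two quality levels would be tuned against the available channel rate.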


International Conference on Pattern Recognition | 1996

Recognition approach to gesture language understanding

Roman Erenshteyn; Pavel Laskov; Richard A. Foulds; Lynn Messing; Garland Stern

We explore the recognition implications of understanding gesture communication, having chosen American Sign Language as an example of a gesture language. An instrumented glove and specially developed software have been used for data collection and labeling. We address the problem of recognizing dynamic signing, i.e. signing performed at natural speed. Two neural network architectures have been used for recognition of different types of finger-spelled sentences. Experimental results are presented suggesting that two features of signing affect recognition accuracy: signing frequency, which can to a large extent be accounted for by training a network on samples of the respective frequency; and the coarticulation effect, which a network fails to identify. As a possible solution to the coarticulation problem, two post-processing algorithms for temporal segmentation are proposed and experimentally evaluated.
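One common post-processing heuristic for temporal segmentation (shown here as an assumption; the abstract does not specify the paper's two algorithms) is to place letter boundaries at local minima of overall joint-angle speed, where the hand briefly pauses between letters:

```python
# One common segmentation heuristic (an assumption, not necessarily the
# paper's algorithms): fingerspelled letters are separated by brief pauses,
# so candidate boundaries sit at local minima of overall joint-angle speed.
import numpy as np
from scipy.signal import argrelextrema

def letter_boundaries(angles: np.ndarray) -> np.ndarray:
    """angles: (T, n_joints) glove time series; returns candidate frame indices."""
    speed = np.linalg.norm(np.diff(angles, axis=0), axis=1)   # frame-to-frame motion
    smooth = np.convolve(speed, np.ones(5) / 5, mode="same")  # light moving average
    return argrelextrema(smooth, np.less, order=3)[0]         # local speed minima
```

Coarticulation blurs exactly these pauses, which is why segmentation of natural-speed signing is harder than segmentation of isolated letters.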


Human Factors in Computing Systems | 1994

Gestural human-machine interaction for people with severe speech and motor impairment due to cerebral palsy

David M. Roy; Marilyn Panayi; Roman Erenshteyn; Richard A. Foulds; Robert Fawcus

The objective of the research is to develop a new method of human-machine interaction that reflects and harnesses the abilities of people with severe speech and motor impairment due to cerebral palsy (SSMICP). Human-human interaction within the framework of drama and mime was used to elicit 120 gestures from twelve students with SSMICP. 27 dynamic arm gestures were monitored using biomechanical and bioelectric sensors. Neural networks are being used to analyze the data and to realize the gestural human-machine interface. Preliminary results show that two visually similar gestures can be differentiated by neural networks.

INTRODUCTION: People with severe speech and motor impairment due to cerebral palsy have great difficulty interacting with their environment. Computers have much to offer people with disability, but the standard human-machine interface (HMI) (e.g. keyboard and mouse) is inaccessible to this population [3][6]. Existing adaptive technology attempts to overcome imprecision of movement in time and space through the use of switches that have a large target area. However, this solution still requires the user to target, which is a particularly difficult motoric act for this population.

APPROACH: The research examined alternative forms of intentional behavior during human-human interaction that could be harnessed for human-machine interaction. However, observations alone gave little indication of the range and number of intentional behaviors that could be performed by the students. To explore the students' potential, drama and mime were used to create an environment where intentional behaviors could be easily elicited.

[Figure 1: Student performing dynamic arm gestures.]

RESULTS: Observations: Fourteen students with cerebral palsy aged 5-17 were observed during their regular school schedules. The observations supported the work of others [2][7] in that there were many communicative acts that did not involve the use of their assistive device (e.g. a switch-operated speech synthesizer), and that these acts were highly multimodal, involving a combination of facial expression, eye gaze, vocalization, dysarthric speech, and upper-extremity gestures including the head, arm, hand, and upper torso.

Drama and Mime Framework: Each student participated in a gesture elicitation session in which the student was asked to produce a mime in response to words or phrases from a pool of 120. Examples were ice cream, ironing, violin, spank, shake hands, stroke the cat, helicopter, heavy weight, light feather, play the drums, mosquito bite, and rainbow [7]. Minimal prompting in the form of clues was necessary from either the therapist or the investigator. The ease of elicitation and consistency of concept over time suggested that existing kinesthetic abilities were being harnessed, involving a low cognitive load. Mimes were spontaneously enacted, often with a sophisticated and creative appreciation of movement in time and space. The students were able to convey concepts for weight, emotion, character formation, and object visualization.


IEEE Transactions on Neural Systems and Rehabilitation Engineering | 2004

Neuromuscular modeling of spasticity in cerebral palsy

James W. Fee; Richard A. Foulds

Data from the pendulum knee test have been used to develop two active models that use external torques to closely match the experimental knee trajectories of subjects with spasticity due to cerebral palsy. These data were collected from three subjects who are identical triplets, two of whom have clinically measurable spasticity. A passive model that accurately describes the knee trajectory of the nonspastic subject serves as the passive plant for two active models. One of these models allows direct application of external torques, and the second provides additional torque as the result of velocity feedback. Both active models and the passive model use separate parameters of stiffness and damping for the agonist and antagonist muscles.
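A minimal sketch of a pendulum-knee model of this kind, with assumed parameter values: a damped pendulum represents the passive lower leg, and an optional velocity-feedback torque stands in for the spastic response. Unlike the paper's models, this sketch lumps agonist and antagonist stiffness/damping into single values:

```python
# Minimal pendulum-knee sketch with assumed parameters: a damped pendulum
# for the passive lower leg, plus an optional velocity-feedback torque as a
# stand-in for the spastic response (k_v = 0 recovers the passive model).
import numpy as np
from scipy.integrate import solve_ivp

I, m, L, g = 0.35, 3.5, 0.25, 9.81   # inertia, mass, COM distance (assumed)
k, b = 3.0, 0.25                     # joint stiffness and damping (assumed)
k_v = 0.8                            # velocity-feedback gain

def knee(t, y):
    theta, omega = y
    reflex = -k_v * omega            # extra resistive torque from velocity feedback
    alpha = (-m * g * L * np.sin(theta) - k * theta - b * omega + reflex) / I
    return [omega, alpha]

# Release from 90 degrees of knee flexion, as in a pendulum drop test.
sol = solve_ivp(knee, (0.0, 5.0), [np.pi / 2, 0.0], max_step=0.01)
print("final angle (rad):", sol.y[0, -1])
```

Increasing the feedback gain damps the swing more quickly, which is the qualitative signature of spasticity that such models are fitted to reproduce.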


Pediatric Physical Therapy | 2013

Effects of passive versus dynamic loading interventions on bone health in children who are nonambulatory.

Megan Damcott; Sheila Blochlinger; Richard A. Foulds

Purpose: To investigate the effectiveness of a novel dynamic standing intervention compared with a conventional passive standing intervention on bone health in children with cerebral palsy who are nonambulatory. Methods: Four children in passive standers and 5 in dynamic standers were followed for 15 months (standing 30 min/d, 5 d/wk). Dual-energy x-ray absorptiometry scans of the distal femur were obtained at 3-month intervals to measure changes in bone mineral density (BMD), bone mineral content, and area. Results: Increases in BMD were observed during dynamic standing (P < .001), whereas passive standing appeared to maintain the baseline BMD. Increases in bone mineral content were observed in each standing intervention (P < .001), with dynamic standing inducing greater increases. Increases in area were comparable between interventions (P = .315). Conclusions: Dynamic standing demonstrated the potential of moderate-magnitude, low-frequency loading to increase cortical BMD. Further investigations could provide insight into the mechanisms of bone health induced through loading interventions.


SoutheastCon | 1996

Gesture-speech based HMI for a rehabilitation robot

Shoupu Chen; Zunaid Kazi; Matthew Beitler; Marcos Salganicoff; Daniel L. Chester; Richard A. Foulds

One of the most challenging problems in rehabilitation robotics is the design of an efficient human-machine interface (HMI) allowing the user with a disability considerable freedom and flexibility. A multimodal user direction approach combining command and control methods is a very promising way to achieve this goal. This multimodal design is motivated by the idea of minimizing the user's burden of operating a robot manipulator while utilizing the user's intelligence and available mobilities. With this design, the user with a physical disability simply uses gesture (pointing with a laser pointer) to indicate a location or a desired object and uses speech to activate the system. Recognition of the spoken input is also used to supplant the need for general-purpose object recognition between different objects and to perform the critical function of disambiguation. The robot system is designed to operate in an unstructured environment containing objects that are reasonably predictable. A novel reactive planning mechanism, of which the user is an active integral component, in conjunction with a stereo-vision system and an object-oriented knowledge base, provides the robot system with the 3D information of the surrounding world as well as the motion strategies.
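The division of labor this abstract describes (speech carries the command, pointing carries the object and its 3D location) can be sketched as a simple fusion step; all names and structures below are hypothetical, not from the authors' system:

```python
# Hypothetical fusion of the two modalities: speech supplies the verb,
# the laser-pointer gesture supplies the object identity and 3D location.
# Names and structure are illustrative, not the authors' system.
from dataclasses import dataclass

@dataclass
class PointedTarget:
    label: str                        # object identity from the knowledge base
    xyz: tuple[float, float, float]   # location from the stereo-vision system

def fuse(speech_command: str, target: PointedTarget) -> str:
    # Speech disambiguates intent; pointing disambiguates the referent.
    return f"{speech_command} '{target.label}' at {target.xyz}"

print(fuse("pick up", PointedTarget("cup", (0.42, -0.10, 0.05))))
```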


Lecture Notes in Computer Science | 1998

Speech and Gesture Mediated Intelligent Teleoperation

Zunaid Kazi; Shoupu Chen; Matthew Beitler; Daniel L. Chester; Richard A. Foulds


Collaboration


Dive into Richard A. Foulds's collaborations.

Top Co-Authors

Zunaid Kazi
University of Delaware

Ghaith J. Androwis
New Jersey Institute of Technology

Shoupu Chen
University of Delaware

Kai Chen
New Jersey Institute of Technology

Peter A. Michael
New Jersey Institute of Technology

Sergei V. Adamovich
New Jersey Institute of Technology