Publication


Featured research published by Amin Atrash.


International Conference on Pervasive Computing | 2004

Recognizing Workshop Activity Using Body Worn Microphones and Accelerometers

Paul Lukowicz; Jamie A. Ward; Holger Junker; Mathias Stäger; Gerhard Tröster; Amin Atrash; Thad Starner

The paper presents a technique to automatically track the progress of maintenance or assembly tasks using body-worn sensors. The technique is based on a novel way of combining data from accelerometers with simple frequency-matching sound classification, including intensity analysis of signals from microphones at different body locations to correlate environmental sounds with user activity. To evaluate our method, we apply it to activities in a wood shop. On a simulated assembly task, our system can successfully segment and identify most shop activities in a continuous data stream with zero false positives and 84.4% accuracy.
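
A rough illustration of the two ingredients described above; this is not the authors' code, and the windowing, templates, and threshold are all invented. It matches a frame's magnitude spectrum against per-activity templates (frequency matching) and uses the wrist-to-chest microphone intensity ratio to decide whether a sound came from the user's own activity:

```python
import numpy as np

def spectrum(frame: np.ndarray) -> np.ndarray:
    """Normalized magnitude spectrum of one windowed audio frame."""
    mag = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    return mag / (np.linalg.norm(mag) + 1e-12)

def classify_frame(frame, templates):
    """Pick the activity whose template spectrum matches best (cosine)."""
    s = spectrum(frame)
    return max(templates, key=lambda label: float(s @ templates[label]))

def user_generated(wrist_frame, chest_frame, ratio_threshold=2.0):
    """Intensity analysis: a sound much louder at the wrist microphone
    than at the chest is attributed to the user's own hand activity."""
    wrist_rms = np.sqrt(np.mean(wrist_frame ** 2))
    chest_rms = np.sqrt(np.mean(chest_frame ** 2)) + 1e-12
    return wrist_rms / chest_rms > ratio_threshold
```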


International Conference on Multimodal Interfaces | 2003

Georgia Tech Gesture Toolkit: supporting experiments in gesture recognition

Tracy L. Westeyn; Helene Brashear; Amin Atrash; Thad Starner

Gesture recognition is becoming a more common interaction tool in the fields of ubiquitous and wearable computing. Designing a system to perform gesture recognition, however, can be a cumbersome task. Hidden Markov models (HMMs), a pattern recognition technique commonly used in speech recognition, can be used for recognizing certain classes of gestures. Existing HMM toolkits for speech recognition can be adapted to perform gesture recognition, but doing so requires significant knowledge of the speech recognition literature and its relation to gesture recognition. This paper introduces the Georgia Tech Gesture Toolkit (GT2k), which leverages Cambridge University's speech recognition toolkit, HTK, to provide tools that support gesture recognition research. GT2k provides capabilities for training models and allows for both real-time and off-line recognition. This paper presents four ongoing projects that utilize the toolkit in a variety of domains.
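
GT2k itself wraps HTK, so the sketch below is only an analogous illustration of the same one-HMM-per-gesture workflow, written against the open-source hmmlearn library; the state count and training parameters are arbitrary choices, not GT2k defaults:

```python
import numpy as np
from hmmlearn import hmm

def train_gesture_models(examples_by_gesture, n_states=5):
    """Train one Gaussian HMM per gesture class.

    examples_by_gesture: {label: [array of shape (T_i, n_features)]}
    """
    models = {}
    for label, seqs in examples_by_gesture.items():
        X = np.vstack(seqs)                  # concatenated observations
        lengths = [len(s) for s in seqs]     # per-sequence boundaries
        model = hmm.GaussianHMM(n_components=n_states,
                                covariance_type="diag", n_iter=20)
        model.fit(X, lengths)
        models[label] = model
    return models

def recognize(models, seq):
    """Classify a new feature sequence by maximum log-likelihood."""
    return max(models, key=lambda label: models[label].score(seq))
```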


AI Magazine | 2003

GRACE: an autonomous robot for the AAAI Robot challenge

Reid G. Simmons; Dani Goldberg; Adam Goode; Michael Montemerlo; Nicholas Roy; Brennan Sellner; Chris Urmson; Alan C. Schultz; Myriam Abramson; William Adams; Amin Atrash; Magdalena D. Bugajska; Michael J. Coblenz; Matt MacMahon; Dennis Perzanowski; Ian Horswill; Robert Zubek; David Kortenkamp; Bryn Wolfe; Tod Milam; Bruce Allen Maxwell

In an attempt to solve as much of the AAAI Robot Challenge as possible, five research institutions representing academia, industry, and government integrated their research into a single robot named GRACE. This article describes this first-year effort by the GRACE team, including not only the various techniques each participant brought to GRACE but also the difficult integration effort itself.


International Journal of Social Robotics | 2013

Automated Proxemic Feature Extraction and Behavior Recognition: Applications in Human-Robot Interaction

Ross Mead; Amin Atrash; Maja J. Matarić

In this work, we discuss a set of feature representations for analyzing human spatial behavior (proxemics) motivated by metrics used in the social sciences. Specifically, we consider individual, physical, and psychophysical factors that contribute to social spacing. We demonstrate the feasibility of autonomous real-time annotation of these proxemic features during a social interaction between two people and a humanoid robot in the presence of a visual obstruction (a physical barrier). We then use two different feature representations—physical and psychophysical—to train Hidden Markov Models (HMMs) to recognize spatiotemporal behaviors that signify transitions into (initiation) and out of (termination) a social interaction. We demonstrate that the HMMs trained on psychophysical features, which encode the sensory experience of each interacting agent, outperform those trained on physical features, which only encode spatial relationships. These results suggest a more powerful representation of proxemic behavior with particular implications in autonomous socially interactive and socially assistive robotics.
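
To make the two representations concrete, here is a minimal sketch (illustrative only, not the paper's feature extractor) of two physical features, interagent distance and relative bearing, computed from 2-D poses; psychophysical features would additionally encode each agent's sensory experience, for example whether the other agent is occluded by the barrier:

```python
import numpy as np

def physical_features(pose_a, pose_b):
    """Distance and bearing of agent B as seen from agent A.
    Poses are (x, y, heading) with heading in radians."""
    ax, ay, a_heading = pose_a
    bx, by, _ = pose_b
    dx, dy = bx - ax, by - ay
    distance = float(np.hypot(dx, dy))
    # Bearing of B relative to A's heading, wrapped to [-pi, pi].
    bearing = np.arctan2(dy, dx) - a_heading
    bearing = float((bearing + np.pi) % (2 * np.pi) - np.pi)
    return distance, bearing
```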


Robot and Human Interactive Communication | 2014

Graded cueing feedback in robot-mediated imitation practice for children with autism spectrum disorders

Jillian Greczek; Edward Kaszubski; Amin Atrash; Maja J. Matarić

We performed a study that examined the effects of a humanoid robot giving the minimum required feedback, known as graded cueing, during a one-on-one imitation game played with children with autism spectrum disorders (ASD). Twelve high-functioning participants with ASD, ages 7 to 10, each played “Copy-Cat” with a Nao robot 5 times over the span of 2.5 weeks. While the graded cueing model was not exercised to its fullest, using graded cueing-style feedback resulted in a nondecreasing trend in imitative accuracy when compared to a non-adaptive condition, in which participants always received the same, most descriptive feedback whenever they made a mistake. These trends show promise for future work with robots encouraging autonomy in special needs populations.
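
A minimal sketch of the graded-cueing idea as the abstract describes it: feedback starts with the least specific prompt, escalates in specificity only after failed attempts, and resets on success. The prompt wording here is invented for illustration:

```python
PROMPTS = [
    "Try again!",                        # least specific cue
    "Look at my arm.",
    "Move your left arm like this.",     # most specific cue
]

class GradedCueing:
    def __init__(self, prompts=PROMPTS):
        self.prompts = prompts
        self.level = 0

    def feedback(self, attempt_correct: bool) -> str:
        if attempt_correct:
            self.level = 0               # success: return to minimal cueing
            return "Great job!"
        cue = self.prompts[self.level]
        # Escalate specificity for the next failure, capped at the top.
        self.level = min(self.level + 1, len(self.prompts) - 1)
        return cue
```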


Journal of Neuroengineering and Rehabilitation | 2013

Design and validation of an intelligent wheelchair towards a clinically-functional outcome

Patrice Boucher; Amin Atrash; Wormser Honoré; Hai Nguyen; Julien Villemure; François Routhier; Paul Cohen; Louise Demers; Robert Forget; Joelle Pineau

Background: Many people with mobility impairments who require the use of powered wheelchairs have difficulty completing basic maneuvering tasks during their activities of daily living (ADL). In order to provide assistance to this population, robotic and intelligent system technologies have been used to design an intelligent powered wheelchair (IPW). This paper provides a comprehensive overview of the design and validation of the IPW.

Methods: The main contributions of this work are three-fold. First, we present a software architecture for robot navigation and control in constrained spaces. Second, we describe a decision-theoretic approach for achieving robust speech-based control of the intelligent wheelchair. Third, we present an evaluation protocol motivated by a meaningful clinical outcome, in the form of the Robotic Wheelchair Skills Test (RWST). This allows us to perform a thorough characterization of the performance and safety of the system, involving 17 test subjects (8 non-PW users, 9 regular PW users), 32 complete RWST sessions, 25 total hours of testing, and 9 kilometers of total running distance.

Results: User tests with the RWST show that the navigation architecture reduced collisions by more than 60% compared to other recent intelligent wheelchair platforms. On the tasks of the RWST, we measured an average decrease of 4% in performance score and 3% in safety score (not statistically significant) compared to the scores obtained in the conventional driving mode. This analysis was performed with regular users who had over six years of wheelchair driving experience, compared to approximately half an hour of training with the autonomous mode.

Conclusions: The platform tested in these experiments is among the most experimentally validated robotic wheelchairs in realistic contexts. The results establish that proficient powered wheelchair users can achieve the same level of performance with the intelligent command mode as with the conventional command mode.
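
The decision-theoretic speech control described in the Methods can be conveyed by a toy Bayes filter over the intended command; this is a minimal sketch rather than the IPW code, and the command set, confusion matrix, and confidence threshold are all invented:

```python
import numpy as np

COMMANDS = ["forward", "back", "left", "right", "stop"]
# P(recognizer hears column | user meant row); each row sums to 1.
CONFUSION = np.full((5, 5), 0.05) + np.eye(5) * 0.75

def update_belief(belief, heard_index):
    """Bayes update of the belief over intended commands."""
    posterior = belief * CONFUSION[:, heard_index]
    return posterior / posterior.sum()

def act(belief, threshold=0.85):
    """Execute only when confident; otherwise ask for clarification."""
    best = int(np.argmax(belief))
    if belief[best] >= threshold:
        return f"execute:{COMMANDS[best]}"
    return "ask:Please repeat the command."

belief = np.full(5, 0.2)                      # uniform prior over intents
belief = update_belief(belief, COMMANDS.index("left"))
print(act(belief))                            # still below threshold: asks
```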


International Journal of Social Robotics | 2009

Development and Validation of a Robust Speech Interface for Improved Human-Robot Interaction

Amin Atrash; Robert Kaplow; Julien Villemure; Robert West; Hiba Yamani; Joelle Pineau

Robotics technology has made progress on a number of important issues in the last decade. However, many challenges remain when it comes to the development of systems for human-robot interaction. This paper presents a case study featuring a robust dialogue interface for human-robot communication onboard an intelligent wheelchair. Underlying this interface is a sophisticated software architecture which allows the chair to perform real-time, robust tracking of the dialogue state, as well as select appropriate responses using rich probabilistic representations. The paper also examines the question of rigorous validation of complex human-robot interfaces by evaluating the proposed interface in the context of a standardized rehabilitation task domain.
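
One way to picture the "select appropriate responses" step is a one-step expected-utility rule over the tracked belief; the sketch below is an assumption-laden simplification (the intents, actions, and reward table are invented, and the paper's representation is richer):

```python
import numpy as np

ACTIONS = ["drive_forward", "stop", "confirm_command"]
# R[intent, action]: correct execution is rewarded, wrong execution is
# costly, and asking for confirmation carries a small fixed cost.
R = np.array([
    [ 10.0, -2.0, -1.0],   # user meant "forward"
    [-20.0, 10.0, -1.0],   # user meant "stop"
])

def select_response(belief):
    """Pick the response maximizing expected immediate reward."""
    return ACTIONS[int(np.argmax(belief @ R))]

print(select_response(np.array([0.95, 0.05])))  # drive_forward: confident
print(select_response(np.array([0.60, 0.40])))  # stop: the risk-averse choice
```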


human-robot interaction | 2011

Recognition of spatial dynamics for predicting social interaction

Ross Mead; Amin Atrash; Maja J. Matarić

We present a user study and dataset designed and collected to analyze how humans use space in face-to-face interactions. In a proof-of-concept investigation into human spatial dynamics, a Hidden Markov Model (HMM) was trained over a subset of features to recognize each of three interaction cues (initiation, acceptance, and termination) in both dyadic and triadic scenarios; these cues are useful in predicting transitions into, during, and out of multi-party social encounters. It is shown that the HMM approach performed twice as well as a weighted random classifier, supporting the feasibility of recognizing and predicting social behavior based on spatial features.
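
For context on the baseline, one common reading of a weighted random classifier is that it guesses each cue with probability equal to its frequency, so its expected accuracy is the sum of squared class frequencies; a minimal sketch with invented label counts:

```python
from collections import Counter

def weighted_random_accuracy(labels):
    """Expected accuracy of guessing each class with its own frequency."""
    counts = Counter(labels)
    total = len(labels)
    return sum((c / total) ** 2 for c in counts.values())

labels = ["initiation"] * 40 + ["acceptance"] * 35 + ["termination"] * 25
print(weighted_random_accuracy(labels))  # 0.4^2 + 0.35^2 + 0.25^2 = 0.345
```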


International Conference on Social Robotics | 2011

Proxemic feature recognition for interactive robots: automating metrics from the social sciences

Ross Mead; Amin Atrash; Maja J. Matarić

In this work, we discuss a set of metrics for analyzing human spatial behavior (proxemics) motivated by work in the social sciences. Specifically, we investigate individual, attentional, interpersonal, and physiological factors that contribute to social spacing. We demonstrate the feasibility of autonomous real-time annotation of these spatial features during multi-person social encounters. We utilize sensor suites that are non-invasive to participants, are readily deployable in a variety of environments (ranging from an instrumented workspace to a mobile robot platform), and do not interfere with the social interaction itself. Finally, we provide a discussion of the impact of these metrics and their utility in autonomous socially interactive systems.


International Conference on Robotics and Automation | 2010

Variable resolution decomposition for robotic navigation under a POMDP framework

Robert Kaplow; Amin Atrash; Joelle Pineau

Partially Observable Markov Decision Processes (POMDPs) offer a powerful mathematical framework for making optimal action choices in noisy and/or uncertain environments, in particular allowing us to merge localization and decision-making for mobile robots. While advancements in POMDP techniques have allowed the use of much larger models, POMDPs for robot navigation are still limited by large state space requirements for even small maps. In this work, we propose a method to automatically generate a POMDP representation of an environment. By using variable resolution decomposition techniques, we can take advantage of characteristics of the environment to minimize the number of states required, while maintaining the level of detail needed to find a robust and efficient policy. This is accomplished by automatically adjusting the level of detail required for planning in a given region, with few states representing large open areas and many smaller states near objects. We validate this algorithm in POMDP simulations, in a robot simulator, and on an autonomous robot.
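
A quadtree-style decomposition conveys the core idea; this is a minimal sketch under my own assumptions, not the paper's algorithm. Uniform regions collapse into one large state, while mixed regions split recursively, so state density grows only near obstacles:

```python
import numpy as np

def decompose(grid, x=0, y=0, size=None, min_size=1):
    """Return (x, y, size) cells covering grid[y:y+size, x:x+size]."""
    if size is None:
        size = grid.shape[0]             # assume a square power-of-two grid
    block = grid[y:y + size, x:x + size]
    # A homogeneous block (all free or all occupied) is a single state.
    if size <= min_size or block.min() == block.max():
        return [(x, y, size)]
    half = size // 2
    cells = []
    for dx, dy in ((0, 0), (half, 0), (0, half), (half, half)):
        cells += decompose(grid, x + dx, y + dy, half, min_size)
    return cells

grid = np.zeros((8, 8), dtype=int)
grid[3, 4] = 1                           # a single obstacle cell
print(len(decompose(grid)))              # 10 states instead of 64 cells
```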

Collaboration


Dive into Amin Atrash's collaborations.

Top Co-Authors

Maja J. Matarić
University of Southern California

Ross Mead
University of Southern California

Alan C. Schultz
United States Naval Research Laboratory

Dennis Perzanowski
United States Naval Research Laboratory

Jillian Greczek
University of Southern California

Matt MacMahon
University of Texas at Austin

Myriam Abramson
United States Naval Research Laboratory