Stephen G. Lambacher
Aoyama Gakuin University
Publication
Featured research published by Stephen G. Lambacher.
intelligent robots and systems | 2009
Ravindra S. De Silva; Katsunori Tadano; Masatake Higashi; Azusa Saito; Stephen G. Lambacher
In this paper, we propose a therapeutic assistive robot for children with autism that aims to improve their joint attention skills. The robot conducts a goal-directed interaction to establish engagement between the child and robot in order to create a beneficial learning environment for autistic children. An unsupervised Gaussian mixture-based clustering method is proposed to detect the child's intention in real time so that the goal-directed task proceeds smoothly. The novelty of this approach is that it does not require any training data or a trained model to detect the child's intention. Our autonomous robotic system was tested with several autistic children at a School for the Disabled in Nagoya, Japan. The results of the initial interaction showed that the children enjoyed the interaction with and feedback from the robot, which confirmed that the robot can be used as a mediator or an object of joint attention. The unsupervised approach was able to detect the children's intention in every time segment and process the goal-directed task with a high accuracy rate. The results of the goal-directed task showed that the proposed interaction was highly effective in enhancing joint attention: most of the children attempted to imitate the robot's gestural behaviors and used a variety of learning patterns to attend to the object the robot pointed at in order to obtain joint attention with the robot.
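The unsupervised Gaussian mixture clustering described above can be sketched as a small expectation-maximization loop. The following is a minimal illustration only, not the paper's implementation: the 1-D "intention" feature (e.g. a gaze-fixation duration) and all names are assumptions.

```python
import numpy as np

def fit_gmm_1d(x, k=2, iters=100):
    """Fit a k-component 1-D Gaussian mixture with EM (no labels, no training set)."""
    mu = np.linspace(x.min(), x.max(), k)      # spread initial means over the data range
    var = np.full(k, x.var() + 1e-6)
    pi = np.full(k, 1.0 / k)
    for _ in range(iters):
        d = x[:, None] - mu[None, :]
        dens = pi * np.exp(-0.5 * d**2 / var) / np.sqrt(2 * np.pi * var)
        r = dens / dens.sum(axis=1, keepdims=True)          # E-step: responsibilities
        n = r.sum(axis=0)
        pi = n / len(x)                                     # M-step: weights,
        mu = (r * x[:, None]).sum(axis=0) / n               # means,
        var = (r * (x[:, None] - mu)**2).sum(axis=0) / n + 1e-6  # and variances
    return mu, var, pi

# Synthetic data: two hypothetical "intention" clusters (short vs. long fixations)
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0.2, 0.05, 150), rng.normal(1.0, 0.1, 150)])
mu, var, pi = fit_gmm_1d(x)
```

Each sample can then be assigned to its most responsible component (e.g. via the component with the nearest mean), which is the per-segment intention label the abstract refers to.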
Archive | 2010
P. Ravindra S. De Silva; Tohru Matsumoto; Stephen G. Lambacher; Ajith P. Madurapperuma; Susantha Herath; Masatake Higashi
At present, the inclination of robotic researchers is to develop social robots for a variety of application domains. Socially intelligent robots are capable of natural interaction with a human by engaging in complex social functions. The challenging issue is to transfer these social functions into a robot. This requires the development of computational modalities with intelligent and autonomous capabilities for reacting to a human partner within different contexts. More importantly, a robot needs to interact with a human partner through trusted social cues that create the interface for natural communication. To achieve the above goals, robotic researchers have proposed a variety of concepts that are biologically inspired or based on theoretical concepts from psychology and cognitive science. Recent robotic research has been able to transfer social behaviors into a robot through imitation-based learning (Ito et al., 2007) (Takano & Nakamura, 2006), and the related learning algorithms have helped in acquiring a variety of natural social cues. The acquired social behaviors have emphasized equipping robots with natural and trusted human interactions, which can be used to develop a wide range of robotic applications (Tapus et al., 2007). The transference of a variety of skills into a robot involves several small but imperative processes: efficient media for capturing human motion precisely, the extraction of the key characteristics of motion, a generic approach to generating robot motion from those key characteristics, and an approach for evaluating the generated robot motions or skills. The choice of media for capturing human motion is therefore a crucial factor in obtaining an agent's motion in the presence of noisy data.
Current imitation research has explored ways of capturing accurate human motions for robot imitation through a motion capture system (Calinon & Billard, 2007(a)) or through image processing techniques (Riley et al., 2003). A motion capture system provides accurate data that is less noisy than that obtained with image processing techniques (Calinon & Billard, 2007(b)).
international conference on automation, robotics and applications | 2000
P. Ravindra S. De Silva; Katsunori Tadano; Stephen G. Lambacher; Susantha Herath; Masatake Higashi
Existing approaches to robot joint attention have combined an object's information (location and visual features) with the caregiver's head pose, using data from many subjects to train the robot's joint attention model. These approaches have used simulated data (of the object) to train the model, and they are incapable of accurately predicting a caregiver's attention when the caregiver has a complex eye gaze pattern. A complex eye gaze pattern can be defined as the caregiver scanning over a number of objects in the environment before finally attending to the object of interest. In this case we obtain a long sequence of eye gaze data consisting of a combination of different gaze patterns. Our approach segments the eye gaze data and applies Gaussian mixture-based unsupervised clustering to detect the caregiver's intention in each time segment. Finally, this attention information is combined with a geometrical model of the objects to detect the caregiver's object of interest by considering all of the eye gaze segments. The novelty of our approach is that it detects the caregiver's object of interest even when the caregiver has a complex eye gaze pattern, without using any training data. The experimental results revealed that when the distance between objects is 20 cm, our proposed approach accurately recognizes an impressive 80% of the caregivers' objects of interest. The time segmentation is used to infer the caregiver's attention plans and behaviors in each time interval, and is directed at detecting the caregiver's object of interest for acquiring the skills of joint attention.
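The segment-then-combine idea in this abstract (label the caregiver's attention per time segment, then resolve the object of interest over all segments against a geometrical model of the objects) can be sketched roughly as follows. This is an illustrative simplification under assumed names: gaze is reduced to 2-D points, each segment votes for its nearest object, and the majority vote wins; the paper's actual clustering and geometrical model are richer.

```python
import numpy as np

def attended_object(gaze_xy, objects_xy, window=10):
    """Split a gaze trace into fixed time segments, vote for the nearest
    object in each segment, and return the object attended in most segments."""
    votes = []
    for start in range(0, len(gaze_xy) - window + 1, window):
        seg = gaze_xy[start:start + window]
        centroid = seg.mean(axis=0)                        # summarize the segment
        dists = np.linalg.norm(objects_xy - centroid, axis=1)
        votes.append(int(np.argmin(dists)))                # nearest object wins the segment
    return max(set(votes), key=votes.count)                # majority over all segments

# Complex gaze pattern: sweep over objects 0 and 1, then dwell on object 2
objects = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
gaze = np.concatenate([np.tile([0.0, 0.0], (10, 1)),
                       np.tile([1.0, 0.0], (10, 1)),
                       np.tile([2.0, 0.0], (40, 1))])
winner = attended_object(gaze, objects)
```

The dwelled-on object dominates the segment votes even though the gaze trace passes over other objects first, which is the behavior the abstract attributes to its segment-wise intention detection.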
international conference on intelligent sensors, sensor networks and information processing | 2008
P. Ravindra S. De Silva; Tohru Matsumoto; Stephen G. Lambacher; Masatake Higashi
A main purpose of humanoid robotics research is to develop a socially interactive robot by providing a certain degree of adaptability and flexibility in order to endow the robot with natural interactions with humans. In this paper, a social learning mechanism is proposed for enabling a humanoid robot to learn social behaviors through imitation. To achieve this goal, a novel imitation algorithm is proposed for transferring human social behaviors to a robot in real time. This approach considers the characteristics of motion in order to extract symbolic postures, which consist of the points at which the direction of motion changes. Reinforcement learning is used to extract the optimal symbolic postures, and divisional cubic spline interpolation is incorporated to generate the robot's social behaviors from the symbolic postures. In our experiment, we attempt to transfer three social cues: a "pointing gesture," a gesture for "explaining something attractively," and a gesture for expressing "I don't know." The experimental results confirmed the accuracy of the robot motion generation through the proposed mechanism for transferring natural social behaviors.
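The spline step described above (interpolating a smooth trajectory through a handful of symbolic key postures) can be sketched with a standard cubic spline. This is a minimal single-joint illustration using SciPy's generic `CubicSpline`; the key-posture times and angles are made up, and the paper's "divisional" variant may differ from this off-the-shelf interpolant.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Hypothetical symbolic postures: times (s) and one joint angle (rad) at the
# points where the direction of motion changes.
key_t = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
key_angle = np.array([0.0, 0.8, 0.3, 1.0, 0.0])

# Cubic spline passes exactly through every key posture and is smooth between them
spline = CubicSpline(key_t, key_angle)

t = np.linspace(0.0, 2.0, 101)      # dense sampling for a robot joint controller
trajectory = spline(t)
```

Because the spline reproduces each key posture exactly while keeping velocity continuous, a few symbolic postures are enough to regenerate a natural-looking gesture, which is the role interpolation plays in the mechanism above.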
CALL Design: Principles and Practice - Proceedings of the 2014 EUROCALL Conference, Groningen, The Netherlands | 2014
Hiroyuki Obari; Stephen G. Lambacher
Critical CALL – Proceedings of the 2015 EUROCALL Conference, Padova, Italy | 2015
Hiroyuki Obari; Stephen G. Lambacher
CALL Design: Principles and Practice - Proceedings of the 2014 EUROCALL Conference, Groningen, The Netherlands | 2014
James W. Pagel; Stephen G. Lambacher
ieee international conference on rehabilitation robotics | 2009
P. Ravindra S. De Silva; Katsunori Tadano; Azusa Saito; Stephen G. Lambacher; Masatake Higashi
Kansei Engineering International | 2006
P. Ravindra De Silva; Ajith P. Madurapperuma; Stephen G. Lambacher; Minetada Osano
Critical CALL – Proceedings of the 2015 EUROCALL Conference, Padova, Italy | 2015
James W. Pagel; Stephen G. Lambacher; David W. Reedy