
Publication


Featured research published by Scott Heath.


Autonomous Robots | 2013

OpenRatSLAM: an open source brain-based SLAM system

David Ball; Scott Heath; Janet Wiles; Gordon Wyeth; Peter Corke; Michael Milford

RatSLAM is a navigation system based on the neural processes underlying navigation in the rodent brain, capable of operating with low resolution monocular image data. Seminal experiments using RatSLAM include mapping an entire suburb with a web camera and a long term robot delivery trial. This paper describes OpenRatSLAM, an open-source version of RatSLAM with bindings to the Robot Operating System framework to leverage advantages such as robot and sensor abstraction, networking, data playback, and visualization. OpenRatSLAM comprises connected ROS nodes to represent RatSLAM’s pose cells, experience map, and local view cells, as well as a fourth node that provides visual odometry estimates. The nodes are described with reference to the RatSLAM model and salient details of the ROS implementation such as topics, messages, parameters, class diagrams, sequence diagrams, and parameter tuning strategies. The performance of the system is demonstrated on three publicly available open-source datasets.
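As a rough illustration of the experience-map idea described above (not the OpenRatSLAM API or its ROS interface), a minimal sketch might maintain a graph of pose "experiences" linked by odometry, with extra links added on loop closure:

```python
# Minimal sketch of an experience map in the spirit of RatSLAM: a graph of
# "experiences" (poses) linked by odometry, with a loop-closure link added
# when a previously visited place is recognized. All names and structure
# here are illustrative assumptions, not the OpenRatSLAM implementation.
class ExperienceMap:
    def __init__(self):
        self.experiences = []   # list of (x, y) poses
        self.links = []         # (from_id, to_id, dx, dy) relative transforms

    def on_odometry(self, dx, dy):
        """Create a new experience offset from the most recent one."""
        if not self.experiences:
            self.experiences.append((0.0, 0.0))
            return 0
        x, y = self.experiences[-1]
        self.experiences.append((x + dx, y + dy))
        i = len(self.experiences) - 1
        self.links.append((i - 1, i, dx, dy))
        return i

    def on_loop_closure(self, current_id, matched_id):
        """Link the current experience back to a previously seen place."""
        cx, cy = self.experiences[current_id]
        mx, my = self.experiences[matched_id]
        self.links.append((current_id, matched_id, mx - cx, my - cy))

emap = ExperienceMap()
emap.on_odometry(0.0, 0.0)      # seed the first experience
a = emap.on_odometry(1.0, 0.0)  # move east
b = emap.on_odometry(0.0, 1.0)  # move north
emap.on_loop_closure(b, 0)      # local view matched the starting place
print(len(emap.experiences), len(emap.links))  # 3 3
```

In the full system this logic lives in its own ROS node, receiving pose estimates from the pose cell and visual odometry nodes via topics rather than direct method calls.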


International Conference on Robotics and Automation | 2013

Communication between Lingodroids with different cognitive capabilities

Scott Heath; David Ball; Ruth Schulz; Janet Wiles

Previous studies have shown how Lingodroids, language learning mobile robots, learn terms for space and time, connecting their personal maps of the world to a publicly shared language. One caveat of previous studies was that the robots shared the same cognitive architecture, identical in all respects from sensors to mapping systems. In this paper we investigate the question of how terms for space can be developed between robots that have fundamentally different sensors and spatial representations. In the real world, communication needs to occur between agents that have different embodiment and cognitive capabilities, including different sensors, different representations of the world, and different species (including humans). The novel aspect of these studies is that one robot uses a forward facing camera to estimate appearance and uses a biologically inspired continuous attractor network to generate a topological map; the other robot uses a laser scanner to estimate range and uses a probabilistic filter approach to generate an occupancy grid. The robots hold conversations in different locations to establish a shared language. Despite their different ways of sensing and mapping the world, the robots are able to create coherent lexicons for the space around them.


Human-Robot Interaction | 2016

Hand in Hand: Tools and techniques for understanding children's touch with a social robot

Kristyn Hensby; Janet Wiles; Marie Boden; Scott Heath; Mark Nielsen; Paul E. I. Pounds; Joshua Riddell; Kristopher Rogers; Nikodem Rybak; Virginia Slaughter; M. F. Smith; Jonathon Taufatofua; Peter Worthy; Jason Weigel

Robots that facilitate touch by children have special requirements in terms of safety and robustness, but little is known about how and when children actually use touch with robots. Tools and techniques are required to sense the variety of children's touch and to interpret the volumes of data generated. This exploratory user study investigated children's patterns of touch during game play with a robot. We examined where the children touch the robot and their patterns of touch over time, using a raster-based visualisation of each child's time series of touches, recording patterns of touch across different games and children. We found that children readily engage with the robot, in particular spontaneously touching the robot's hands more than any other area. This user study and the tools developed may aid future designs of robots to autonomously detect when they have been touched.
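The raster-based visualisation of touch time series can be sketched as a simple binning of touch events by region and time; the region names, bin width, and sample events below are illustrative assumptions, not data or code from the study.

```python
# Toy touch raster: rows are touch regions, columns are fixed-width time
# bins; each cell counts touches of that region within that bin. Plotting
# these rows as a heatmap gives a raster view of one child's session.
def touch_raster(events, regions, duration_s, bin_s=1.0):
    n_bins = int(duration_s / bin_s)
    raster = {r: [0] * n_bins for r in regions}
    for region, t in events:
        raster[region][min(int(t / bin_s), n_bins - 1)] += 1
    return raster

# Hypothetical events: (region touched, time in seconds)
events = [("hand", 0.4), ("hand", 0.7), ("head", 2.1), ("hand", 2.9)]
raster = touch_raster(events, ["hand", "head"], duration_s=4.0)
print(raster["hand"])  # [2, 0, 1, 0]
print(raster["head"])  # [0, 0, 1, 0]
```

Stacking the rasters of many children side by side is one way to compare patterns of touch across games and participants.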


Human-Robot Interaction | 2016

Discovering Patterns of Touch: A Case Study for Visualization-Driven Analysis in Human-Robot Interaction

Kristopher Rogers; Janet Wiles; Scott Heath; Kristyn Hensby; Jonathon Taufatofua

An important challenge in Human-Robot Interaction (HRI) is the analysis and interpretation of large volumes of data to inform the design process within a complex, multi-faceted research space. In this study, we explore how data visualization techniques can contribute to HRI methodology, particularly in terms of linking qualitative and quantitative analysis methods. Specifically, we present a case study demonstrating the visualization of touch data to identify potential patterns of interaction with a social robot intended for deployment in preschool classrooms.


IEEE Transactions on Cognitive and Developmental Systems | 2016

Lingodroids: Cross-Situational Learning for Episodic Elements

Scott Heath; David Ball; Janet Wiles

For robots to effectively bootstrap the acquisition of language, they must handle referential uncertainty: the problem of deciding what meaning to ascribe to a given word. Typically when socially grounding terms for space and time, the underlying sensor or representation was specified within the grammar of a conversation, which constrained language learning to words for innate features. In this paper, we demonstrate that cross-situational learning resolves the issues of referential uncertainty for bootstrapping a language for episodic space and time, thereby removing the need to specify the underlying sensors or representations a priori. The requirements for robots to be able to link words to their designated meanings are presented and analyzed within the Lingodroids (language-learning robots) framework. We present a study that compares predetermined associations given a priori against unconstrained learning using cross-situational learning. This study investigates the long-term coherence, immediate usability, and learning time for each condition. Results demonstrate that for unconstrained learning, the long-term coherence is unaffected, though at the cost of increased learning time and hence decreased immediate usability.
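Cross-situational learning of the kind described here can be sketched as co-occurrence counting: across many situations, the correct meaning of a word is the candidate that keeps recurring while distractors vary. The word, place names, and class interface below are illustrative assumptions, not the Lingodroids implementation.

```python
from collections import defaultdict

# Toy cross-situational learner: each observation pairs a heard word with
# the set of candidate meanings present in that situation. The learner
# counts word-meaning co-occurrences; over many situations the true
# meaning dominates because it alone is present every time.
class CrossSituationalLearner:
    def __init__(self):
        self.counts = defaultdict(lambda: defaultdict(int))

    def observe(self, word, candidate_meanings):
        for meaning in candidate_meanings:
            self.counts[word][meaning] += 1

    def best_meaning(self, word):
        meanings = self.counts[word]
        return max(meanings, key=meanings.get)

learner = CrossSituationalLearner()
# The hypothetical word "kuri" always co-occurs with place A,
# while the distractor places differ between conversations.
learner.observe("kuri", {"place-A", "place-B"})
learner.observe("kuri", {"place-A", "place-C"})
learner.observe("kuri", {"place-A", "place-D"})
print(learner.best_meaning("kuri"))  # place-A
```

No sensor or representation is named in advance: the learner only needs candidate meanings to be enumerable in each situation, which is what removes the need to specify them a priori.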


Australasian Computer-Human Interaction Conference | 2015

Children's Expectations and Strategies in Interacting with a Wizard of Oz Robot

Peter Worthy; Marie Boden; Arafeh Karimi; Jason Weigel; Ben Matthews; Kristyn Hensby; Scott Heath; Paul E. I. Pounds; Jonathon Taufatofua; M. F. Smith; Stephen Viller; Janet Wiles

This paper presents an analysis of children's interactions with an early prototype of a robot that is being designed for deployment in early learning centres. 23 children aged 2-6 interacted with the prototype, consisting of a pair of tablets embedded in a flat and vaguely humanoid form. We used a Wizard of Oz (WoZ) technique to control a synthesized voice that delivered predefined statements and questions, and a tablet mounted as a head that displayed animated eyes. The children's interactions with the robot and with the adult experimenter were video recorded and analysed in order to identify some of the children's expectations of the robot's behaviour and capabilities, and to observe their strategies for interacting with a speaking and minimally animated artificial agent. We found a surprising breadth in children's reactions, expectations, and strategies (as evidenced by their behaviour) and a noteworthy tolerance for the robot's occasionally awkward behaviour.


International Conference on Development and Learning | 2012

Rat meets iRat

Janet Wiles; Scott Heath; David Ball; Laleh K. Quinn; Andrea A. Chiba

Biorobotics has the potential to provide an integrated understanding from neural systems to behavior that is neither ethical nor technically feasible with living systems. Robots that can interact with animals in their natural environment open new possibilities for empirical studies in neuroscience. However, designing a robot that can interact with a rodent requires considerations that span a range of disciplines. For the rat's safety, the robot's body form and movements need to take into account an appropriate size for the rodent arenas and suitable behaviors for interaction. For the robot's safety, its form must be robust in the face of typically inquisitive and potentially aggressive behaviors by the rodent, which can include chewing on exposed parts, including electronics, and deliberate or accidental fouling. We designed a rat-sized robot, the iRat (intelligent rat animat technology), for studies in neuroscience. The iRat is about the same size as a rat and has the ability to navigate autonomously around small environments. In this study we report the first interactions between the iRat and real rodents in a free exploration task. Studies with five rats show that the rats and iRat interact safely for both parties.


Human-Robot Interaction | 2016

Social Cardboard: Pretotyping a Social Ethnodroid in the Wild

Janet Wiles; Peter Worthy; Kristyn Hensby; Marie Boden; Scott Heath; Paul E. I. Pounds; Nikodem Rybak; M. F. Smith; Jonathon Taufatofua; Jason Weigel

Pretotyping is a set of techniques, tools, and metrics for gauging the interest in a product, prior to full-scale development [1]. This late breaking report describes a pretotyping case study of an ethnodroid - a robot that functions as an ethnographer - intended to engage with young children and record their learning progress. The central requirement for the project is that the robot will be able to interact socially with children aged 1-6 years in tablet-based tasks. We developed a simple robot made of MDF (thick cardboard), added tablets for the face and torso, and controlled a scripted interaction using Wizard of Oz (WoZ). Children's engagement with the robot was tested in an early learning centre which provided a relatively structured environment ("in the lab") and at a science fair which provided a relatively unconstrained setting ("in the wild"). The rapid testing revealed distinct effects in the children's attitudes and behaviors in the two user contexts and provided insights into form, sensors, and analyses for the design process.


Frontiers in Robotics and AI | 2017

Spatiotemporal Aspects of Engagement during Dialogic Storytelling Child–Robot Interaction

Scott Heath; Gautier Durantin; Marie Boden; Kristyn Hensby; Jonathon Taufatofua; Ola Olsson; Jason Weigel; Paul E. I. Pounds; Janet Wiles

The success of robotic agents in close proximity of humans depends on their capacity to engage in social interactions and maintain these interactions over periods of time that are suitable for learning. A critical requirement is the ability to modify the behaviour of the robot contingently to the attentional and social cues signalled by the human. A benchmark challenge for an engaging social robot is that of storytelling. In this paper, we present an exploratory study to investigate dialogic storytelling (storytelling with contingent responses) using a child-friendly robot. The aim of the study was to develop an engaging storytelling robot and to develop metrics for evaluating engagement. Ten children listened to an illustrated story told by a social robot during a science fair. The responses of the robot were adapted during the interaction based on the children's engagement and touches of the pictures displayed by the robot on a tablet embedded in its torso. During the interaction the robot responded contingently to the child, but only when the robot invited the child to interact. We describe the robot architecture used to implement dialogic storytelling and evaluate the quality of human-robot interaction based on temporal (patterns of touch, touch duration) and spatial (motions in the space surrounding the robot) metrics. We introduce a novel visualization that emphasizes the temporal dynamics of the interaction, and analyse the motions of the children in the space surrounding the robot. The study demonstrates that the interaction through invited contingent responses succeeded in engaging children, although the robot missed some opportunities for contingent interaction and the children had to adapt to the task.
We conclude that i) the consideration of both temporal and spatial attributes is fundamental for establishing metrics to estimate levels of engagement in real-time, ii) metrics for engagement are sensitive to both the group and individual, and iii) a robot's sequential mode of interaction can facilitate engagement, despite some social events being ignored by the robot.


Frontiers in Robotics and AI | 2017

Social Moments: A Perspective on Interaction for Social Robotics

Gautier Durantin; Scott Heath; Janet Wiles

During a social interaction, events that happen at different timescales can indicate social meanings. In order to socially engage with humans, robots will need to be able to comprehend and manipulate the social meanings that are associated with these events. We define social moments as events that occur within a social interaction and which can signify a pragmatic or semantic meaning. A challenge for social robots is recognizing social moments that occur on short timescales, which can be on the order of 100 ms. In this perspective, we propose that understanding the range and roles of social moments in social interaction and implementing social micro-abilities (the abilities required to engage in a timely manner through social moments) is a key challenge for the field of human-robot interaction (HRI) to enable effective social interactions and social robots. In particular, it is an open question how social moments can acquire their associated meanings. Practically, the implementation of these social micro-abilities presents engineering challenges for the fields of HRI and social robotics, including processing sensor data and driving actuators to meet fast timescales. We present a key challenge of social moments as integration of social stimuli across multiple timescales and modalities. We present the neural basis for human comprehension of social moments, and review current literature related to social moments and social micro-abilities. We discuss the requirements for social micro-abilities, how these abilities can enable more natural social robots, and how to address the engineering challenges associated with social moments.

Collaboration


Dive into Scott Heath's collaborations.

Top Co-Authors

Janet Wiles, University of Queensland
David Ball, Peter MacCallum Cancer Centre
Kristyn Hensby, University of Queensland
Jason Weigel, University of Queensland
Marie Boden, University of Queensland
Ruth Schulz, University of Queensland
Gordon Wyeth, Queensland University of Technology
M. F. Smith, University of Queensland