
Publication


Featured research published by Ruth Schulz.


Robotics and Autonomous Systems | 2007

Learning spatial concepts from RatSLAM representations

Michael Milford; Ruth Schulz; David Prasser; Gordon Wyeth; Janet Wiles

RatSLAM is a biologically inspired visual SLAM and navigation system that has been shown to be effective indoors and outdoors on real robots. The spatial representation at the core of RatSLAM, the experience map, forms in a distributed fashion as the robot learns the environment. The activity in RatSLAM’s experience map possesses some geometric properties, but still does not represent the world in a human-readable form. A new system, dubbed RatChat, has been introduced to enable meaningful communication with the robot. The intention is to use the “language games” paradigm to build spatial concepts that can be used as the basis for communication. This paper describes the first step in the language game experiments, showing the potential for meaningful categorization of the spatial representations in RatSLAM.


Adaptive Behavior | 2011

Lingodroids: socially grounding place names in privately grounded cognitive maps

Ruth Schulz; Gordon Wyeth; Janet Wiles

For mobile robots to communicate meaningfully about their spatial environment, they require personally constructed cognitive maps and social interactions to form languages with shared meanings. Geographic spatial concepts introduce particular problems for grounding—connecting a word to its referent in the world—because such concepts cannot be directly and solely based on sensory perceptions. In this article we investigate the grounding of geographic spatial concepts using mobile robots with cognitive maps, called Lingodroids. Languages were established through structured interactions between pairs of robots called where-are-we conversations. The robots used a novel method, termed the distributed lexicon table, to create flexible concepts. This method enabled words for locations, termed toponyms, to be grounded through experience. Their understanding of the meaning of words was demonstrated using go-to games in which the robots independently navigated to named locations. Studies in real and virtual reality worlds show that the system is effective at learning spatial language: robots learn words easily—in a single trial as children do—and the words and their meaning are sufficiently robust for use in real world tasks.
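The distributed lexicon table described above can be pictured with a toy model. The sketch below is an invented simplification, not the paper's implementation: locations are discretized into grid cells, the words ("kuzo", "pize") are made up, and each where-are-we conversation simply strengthens a word-cell association that a go-to game later reads back.

```python
from collections import defaultdict

class LexiconTable:
    """Toy word-location association table (an invented simplification
    of the paper's distributed lexicon table)."""

    def __init__(self):
        self.assoc = defaultdict(float)  # (word, cell) -> strength
        self.words = set()

    def hear(self, word, cell, weight=1.0):
        """A where-are-we conversation: reinforce `word` at this cell."""
        self.assoc[(word, cell)] += weight
        self.words.add(word)

    def name_for(self, cell):
        """Produce the word most strongly associated with a cell."""
        return max(self.words, key=lambda w: self.assoc[(w, cell)])

    def cell_for(self, word):
        """Go-to game: the cell this word most strongly denotes."""
        cells = {c for (w, c) in self.assoc if w == word}
        return max(cells, key=lambda c: self.assoc[(word, c)])

# Two robots meet and hold where-are-we conversations at shared points.
a, b = LexiconTable(), LexiconTable()
for _ in range(5):
    for cell, word in [((0, 0), "kuzo"), ((3, 2), "pize")]:
        a.hear(word, cell)
        b.hear(word, cell)

# Each robot can now independently navigate to a named location.
assert a.cell_for("kuzo") == b.cell_for("kuzo") == (0, 0)
```

Single-trial learning falls out of this structure: one conversation is enough to create a usable word-cell association, and later conversations only sharpen it.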


International Conference on Robotics and Automation | 2016

Place categorization and semantic mapping on a mobile robot

Niko Sünderhauf; Feras Dayoub; Sean McMahon; Ben Talbot; Ruth Schulz; Peter Corke; Gordon Wyeth; Ben Upcroft; Michael Milford

In this paper we focus on the challenging problem of place categorization and semantic mapping on a robot without environment-specific training. Motivated by the ongoing success of convolutional networks in various visual recognition tasks, we build our system upon a state-of-the-art convolutional network. We overcome its closed-set limitations by complementing the network with a series of one-vs-all classifiers that can learn to recognize new semantic classes online. Prior domain knowledge is incorporated by embedding the classification system into a Bayesian filter framework that also ensures temporal coherence. We evaluate the classification accuracy of the system on a robot that maps a variety of places on our campus in real-time. We show how semantic information can boost robotic object detection performance and how the semantic map can be used to modulate the robot's behaviour during navigation tasks. The system is made available to the community as a ROS module.
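The temporal-coherence idea, fusing noisy per-frame classifier scores under a transition model in which the robot usually stays in the same kind of place from frame to frame, can be sketched as a generic discrete Bayes filter. The class names and scores below are invented; this is not the paper's actual ROS module.

```python
import numpy as np

def bayes_filter(frame_likelihoods, prior=None, stay_prob=0.9):
    """Toy discrete Bayes filter for temporally coherent place labels.

    frame_likelihoods: per-frame class scores (e.g. ConvNet softmax).
    stay_prob: assumed probability of remaining in the same place class
    between consecutive frames (hypothetical value).
    """
    n = len(frame_likelihoods[0])
    # Transition matrix: high self-transition, uniform otherwise.
    trans = np.full((n, n), (1 - stay_prob) / (n - 1))
    np.fill_diagonal(trans, stay_prob)
    belief = np.full(n, 1.0 / n) if prior is None else np.asarray(prior)
    for z in frame_likelihoods:
        belief = trans @ belief          # predict: temporal smoothing
        belief = belief * np.asarray(z)  # update: weight by observation
        belief = belief / belief.sum()   # normalize
    return belief

# Noisy scores for hypothetical classes ["office", "corridor", "kitchen"]:
frames = [[0.6, 0.3, 0.1], [0.2, 0.5, 0.3], [0.7, 0.2, 0.1]]
belief = bayes_filter(frames)
print(belief.argmax())  # filtered estimate favours class 0, "office"
```

The filter damps single-frame misclassifications (the second frame above) rather than letting them flip the semantic label.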


International Conference on Robotics and Automation | 2011

Lingodroids: Studies in spatial cognition and language

Ruth Schulz; Arren Glover; Michael Milford; Gordon Wyeth; Janet Wiles

The Lingodroids are a pair of mobile robots that evolve a language for places and relationships between places (based on distance and direction). Each robot in these studies has its own understanding of the layout of the world, based on its unique experiences and exploration of the environment. Despite having different internal representations of the world, the robots are able to develop a common lexicon for places, and then use simple sentences to explain and understand relationships between places - even places that they could not physically experience, such as areas behind closed doors. By learning the language, the robots are able to develop representations for places that are inaccessible to them, and later, when the doors are opened, use those representations to perform goal-directed behavior.


IEEE Transactions on Autonomous Mental Development | 2011

Are We There Yet? Grounding Temporal Concepts in Shared Journeys

Ruth Schulz; Gordon Wyeth; Janet Wiles

An understanding of time and temporal concepts is critical for interacting with the world and with other agents in the world. What does a robot need to know to refer to the temporal aspects of events? Could a robot gain a grounded understanding of “a long journey,” or “soon”? Cognitive maps constructed by individual agents from their own journey experiences have been used for grounding spatial concepts in robot languages. In this paper, we test whether a similar methodology can be applied to learning temporal concepts and an associated lexicon to answer the question “how long” did it take to complete a journey. Using evolutionary language games for specific and generic journeys, successful communication was established for concepts based on representations of time, distance, and amount of change. The studies demonstrate that a lexicon for journey duration can be grounded using a variety of concepts. Spatial and temporal terms are not identical, but the studies show that both can be learned using similar language evolution methods, and that time, distance, and change can serve as proxies for each other under noisy conditions. Effective concepts and names for duration provide a first step towards a grounded lexicon for temporal interval logic.
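One way to picture grounding duration words is a prototype model: each word keeps a running estimate of the journey durations it has been used for, and every how-long conversation nudges that estimate toward the experienced duration. This is a deliberately minimal sketch with invented words and values, not the paper's evolutionary language-game machinery.

```python
def nearest_word(duration, prototypes):
    """Pick the duration word whose prototype is closest (toy model)."""
    return min(prototypes, key=lambda w: abs(prototypes[w] - duration))

def converse(duration, prototypes, rate=0.2):
    """A how-long conversation: the chosen word's prototype drifts
    toward the experienced journey duration."""
    word = nearest_word(duration, prototypes)
    prototypes[word] += rate * (duration - prototypes[word])
    return word

# Hypothetical duration words with initial prototypes (seconds).
protos = {"swift": 10.0, "middling": 60.0, "long": 300.0}
for d in [12, 55, 320, 8, 290, 70]:
    converse(d, protos)

# After a handful of shared journeys, the words carve up the duration
# axis well enough for the agents to agree on new journeys.
assert nearest_word(9, protos) == "swift"
```

Swapping the duration input for distance travelled or amount of map change gives the same mechanism over the other representations the paper compares, which is why they can act as proxies for one another.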


The 7th International Conference on the Evolution of Language (EVOLANG7) | 2008

The formation, generative power, and evolution of toponyms: Grounding a spatial vocabulary in a cognitive map

Ruth Schulz; David Prasser; Paul Stockwell; Gordon Wyeth; Janet Wiles

We present a series of studies investigating the formation, generative power, and evolution of toponyms (i.e. topographic names). The domain chosen for this project is the spatial concepts related to places in an environment, one of the key sets of concepts to be grounded in autonomous agents. Concepts for places cannot be directly perceived as they require knowledge of relationships between locations in space, with representations inferred from ambiguous sensory data acquired through exploration. A generative toponymic language game has been developed to allow the agents to interact, forming concepts for locations and spatial relations. The studies demonstrate how a grounded generative toponymic language can form and evolve in a population of agents interacting through language games. Initially, terms are grounded in simple spatial concepts directly experienced by the agents. A generative process then enables the agents to learn about and refer to locations beyond their direct experience, enabling concepts and toponyms to co-evolve. The significance of this research is the demonstration of grounding for both experienced and novel concepts, using a generative process, applied to spatial locations.


International Conference on Robotics and Automation | 2013

Communication between Lingodroids with different cognitive capabilities

Scott Heath; David Ball; Ruth Schulz; Janet Wiles

Previous studies have shown how Lingodroids, language learning mobile robots, learn terms for space and time, connecting their personal maps of the world to a publicly shared language. One caveat of previous studies was that the robots shared the same cognitive architecture, identical in all respects from sensors to mapping systems. In this paper we investigate the question of how terms for space can be developed between robots that have fundamentally different sensors and spatial representations. In the real world, communication needs to occur between agents that have different embodiment and cognitive capabilities, including different sensors, different representations of the world, and different species (including humans). The novel aspect of these studies is that one robot uses a forward facing camera to estimate appearance and uses a biologically inspired continuous attractor network to generate a topological map; the other robot uses a laser scanner to estimate range and uses a probabilistic filter approach to generate an occupancy grid. The robots hold conversations in different locations to establish a shared language. Despite their different ways of sensing and mapping the world, the robots are able to create coherent lexicons for the space around them.


Adaptive Behavior | 2012

Beyond here-and-now: extending shared physical experiences to shared conceptual experiences

Ruth Schulz; Gordon Wyeth; Janet Wiles

For robots to use language effectively, they need to refer to combinations of existing concepts, as well as concepts that have been directly experienced. In this paper, we introduce the term generative grounding to refer to the establishment of shared meaning for concepts referred to using relational terms. We investigated a spatial domain, which is both experienced and constructed using mobile robots with cognitive maps. The robots, called Lingodroids, established lexicons for locations, distances, and directions through structured conversations called where-are-we, how-far, what-direction, and where-is-there conversations. Distributed concept construction methods were used to create flexible concepts, based on a data structure called a distributed lexicon table. The lexicon was extended from words for locations, termed toponyms, to words for the relational terms of distances and directions. New toponyms were then learned using these relational operators. Effective grounding was tested by using the new toponyms as targets for go-to games, in which the robots independently navigated to named locations. The studies demonstrate how meanings can be extended from grounding in shared physical experiences to grounding in constructed cognitive experiences, giving the robots a language that refers to their direct experiences, and to constructed worlds that are beyond the here-and-now.
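The generative step described above, naming a place that lies a known distance and direction away from an already-grounded toponym, can be illustrated in a few lines. All words, coordinates, and the where-is-there helper are invented for illustration; the actual system grounds these terms in cognitive maps rather than exact coordinates.

```python
import math

# Hypothetical grounded meanings after where-are-we / how-far /
# what-direction conversations (positions in metres, bearings in radians).
toponyms   = {"kuzo": (0.0, 0.0), "pize": (4.0, 3.0)}
distances  = {"ropi": 5.0}
directions = {"fedo": math.pi / 2}  # roughly "north"

def where_is_there(place, distance_word, direction_word):
    """Ground a new location from relational terms: the referent lies
    `distance_word` away from `place`, towards `direction_word`."""
    x, y = toponyms[place]
    r = distances[distance_word]
    theta = directions[direction_word]
    return (x + r * math.cos(theta), y + r * math.sin(theta))

# A place never physically visited can now be named and later used as
# the target of a go-to game.
toponyms["vula"] = where_is_there("kuzo", "ropi", "fedo")
```

The key property is compositionality: once the relational vocabulary is grounded, new toponyms can be minted for constructed locations beyond direct experience.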


Science & Engineering Faculty | 2012

Language change in socially structured populations

Ruth Schulz; Matthew Whittington; Janet Wiles

Language contact is a significant external social factor that affects how natural languages change over time. In some circumstances this corresponds to language competition, in which individuals in a population choose one language over another based on their social interactions. We investigated the dynamics of language change in two initially separate populations of agents that were then mixed, with levels of influence determined by the social classes of the two populations, across 16 different combinations. As expected, the study found that how the communities interact with each other shapes the communal language that develops. However, it was also found that the acquisition of new words was substantial even with limited interaction between populations and low levels of influence, and that comprehension could be well established across language groups even when production of words from the other language group was low.
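The population dynamics at issue can be caricatured with a naming-game simulation. Everything below (population sizes, the words "blik" and "dax", the adopt-the-speaker's-word rule, the single `influence` knob) is an invented simplification, not the study's 16-condition model.

```python
import random

def simulate(steps=20000, influence=0.3, seed=1):
    """Toy naming game between two populations, A and B, that start
    with different words for the same object. `influence` is the
    probability that a conversation crosses population lines."""
    random.seed(seed)
    # Each agent stores its currently preferred word.
    pops = {"A": ["blik"] * 50, "B": ["dax"] * 50}
    for _ in range(steps):
        sp_pop = random.choice("AB")
        if random.random() < influence:
            hr_pop = "A" if sp_pop == "B" else "B"
        else:
            hr_pop = sp_pop
        speaker = random.randrange(50)
        hearer = random.randrange(50)
        # Hearer adopts the speaker's word (simplest possible dynamics).
        pops[hr_pop][hearer] = pops[sp_pop][speaker]
    return pops

pops = simulate()
# With influence > 0, words leak across the boundary and one form tends
# to dominate both groups; with influence = 0 the vocabularies never mix.
```

Even this stripped-down model shows the qualitative effect the abstract reports: modest cross-population contact is enough for substantial word acquisition across groups.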


Faculty of Built Environment and Engineering; Science & Engineering Faculty | 2011

The RatSLAM project: Robot spatial navigation

Gordon Wyeth; Michael Milford; Ruth Schulz; Janet Wiles

Rats are superior to the most advanced robots when it comes to creating and exploiting spatial representations. A wild rat can have a foraging range of hundreds of meters, possibly kilometers, and yet the rodent can unerringly return to its home after each foraging mission, and return to profitable foraging locations at a later date (Davis, et al., 1948). The rat runs through undergrowth and pipes with few distal landmarks, along paths where the visual, textural, and olfactory appearance constantly changes (Hardy and Taylor, 1980; Recht, 1988). Despite these challenges the rat builds, maintains, and exploits internal representations of large areas of the real world throughout its two- to three-year lifetime. While algorithms exist that allow robots to build maps, the questions of how to maintain those maps and how to handle change in appearance over time remain open.

The robotic approach to map building has been dominated by algorithms that optimize the geometry of the map based on measurements of distances to features. In this approach, measurements of distance to features are taken with range-measuring devices such as laser range finders or ultrasound sensors, and in some cases estimates of depth from visual information. The features are incorporated into the map based on previous readings of other features in view and estimates of self-motion. The algorithms explicitly model the uncertainty in measurements of range and self-motion, and use probability theory to find optimal solutions for the geometric configuration of the map features (Dissanayake, et al., 2001; Thrun and Leonard, 2008). Some of the results from the application of these algorithms have been impressive, ranging from three-dimensional maps of large urban structures (Thrun and Montemerlo, 2006) to natural environments (Montemerlo, et al., 2003).

Collaboration


Dive into Ruth Schulz's collaborations.

Top Co-Authors

Janet Wiles | University of Queensland
Gordon Wyeth | Queensland University of Technology
Michael Milford | Queensland University of Technology
Ben Talbot | Queensland University of Technology
Ben Upcroft | Queensland University of Technology
David Ball | Peter MacCallum Cancer Centre
Feras Dayoub | Queensland University of Technology
Scott Heath | University of Queensland
Obadiah Lam | Queensland University of Technology
Peter Corke | Queensland University of Technology