

Publication


Featured research published by Joseph Modayil.


International Conference on Robotics and Automation | 2004

Local metrical and global topological maps in the hybrid spatial semantic hierarchy

Benjamin Kuipers; Joseph Modayil; Patrick Beeson; Matt MacMahon; Francesco Savelli

Topological and metrical methods for representing spatial knowledge have complementary strengths. We present a hybrid extension to the spatial semantic hierarchy that combines their strengths and avoids their weaknesses. Metrical SLAM methods are used to build local maps of small-scale space within the sensory horizon of the agent, while topological methods are used to represent the structure of large-scale space. We describe how a local perceptual map is analyzed to identify a local topology description and is abstracted to a topological place. The map building method creates a set of topological map hypotheses that are consistent with travel experience. The set of maps is guaranteed under reasonable assumptions to include the correct map. We demonstrate the method on a real environment with multiple nested large-scale loops.
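The hypothesis-set construction can be illustrated with a minimal Python sketch. Everything here (the TopoHypothesis class, its fields, and the uniform branching) is an invented illustration of the idea, not the paper's code; the actual method prunes branches using the local topology description and planarity constraints.

class TopoHypothesis:
    """One topological map hypothesis: a count of distinct places, the
    place the robot currently occupies, and the travel edges seen so far."""
    def __init__(self, n_places=1, current=0, edges=()):
        self.n_places = n_places
        self.current = current
        self.edges = edges          # tuples of (from_place, action, to_place)

    def successors(self, action):
        """All hypotheses consistent with arriving somewhere via `action`:
        the arrival place is either brand new or a revisit of a known place."""
        succ = [TopoHypothesis(self.n_places + 1, self.n_places,
                               self.edges + ((self.current, action, self.n_places),))]
        for p in range(self.n_places):          # possible loop closures
            succ.append(TopoHypothesis(self.n_places, p,
                                       self.edges + ((self.current, action, p),)))
        return succ

hypotheses = [TopoHypothesis()]
for action in ["travel", "turn-left", "travel"]:
    hypotheses = [h2 for h in hypotheses for h2 in h.successors(action)]
print(len(hypotheses), "map hypotheses consistent with this travel experience")

Each travel step forks every hypothesis into "the arrival place is new" plus one revisit hypothesis per known place, which is why pruning against the local topology description matters in practice.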


The International Journal of Robotics Research | 2010

Factoring the Mapping Problem: Mobile Robot Map-building in the Hybrid Spatial Semantic Hierarchy

Patrick Beeson; Joseph Modayil; Benjamin Kuipers

We propose a factored approach to mobile robot map-building that handles qualitatively different types of uncertainty by combining the strengths of topological and metrical approaches. Our framework is based on a computational model of the human cognitive map; thus it allows robust navigation and communication within several different spatial ontologies. This paper focuses exclusively on the issue of map-building using the framework. Our approach factors the mapping problem into natural sub-goals: building a metrical representation for local small-scale spaces; finding a topological map that represents the qualitative structure of large-scale space; and (when necessary) constructing a metrical representation for large-scale space using the skeleton provided by the topological map. We describe how to abstract a symbolic description of the robot’s immediate surround from local metrical models, how to combine these local symbolic models in order to build global symbolic models, and how to create a globally consistent metrical map from a topological skeleton by connecting local frames of reference.
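As a rough illustration of the last sub-goal, the following Python sketch (with invented place names and poses) chains SE(2) transforms along the edges of a topological skeleton to place each local frame of reference in a single global frame.

import math

def compose(a, b):
    """Compose two SE(2) poses given as (x, y, theta)."""
    ax, ay, at = a
    bx, by, bt = b
    return (ax + bx * math.cos(at) - by * math.sin(at),
            ay + bx * math.sin(at) + by * math.cos(at),
            at + bt)

# topological skeleton: each edge carries the relative pose between the
# local frames of reference of the two places it connects
skeleton = {("A", "B"): (2.0, 0.0, 0.0),
            ("B", "C"): (1.5, 0.0, math.pi / 2)}

# anchor one place and propagate poses outward along the skeleton
global_pose = {"A": (0.0, 0.0, 0.0)}
for (u, v), rel in skeleton.items():
    if u in global_pose and v not in global_pose:
        global_pose[v] = compose(global_pose[u], rel)

print(global_pose)   # every local frame now has a pose in one global frame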


Ubiquitous Computing | 2008

Improving the recognition of interleaved activities

Joseph Modayil; Tongxin Bai; Henry A. Kautz

We introduce Interleaved Hidden Markov Models for recognizing multitasked activities. The model captures both inter-activity and intra-activity dynamics. Although the state space is intractably large, we describe an approximation that is both effective and efficient. This method significantly reduces the error rate when compared with previously proposed methods. The algorithm is suitable for mobile platforms where computational resources may be limited.
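The factored state space can be sketched as follows. This toy Python example uses a particle filter as a stand-in for the paper's approximation, which differs in detail; the model sizes and parameters are invented.

import random

random.seed(0)
K, S, O = 3, 4, 5       # activities, states per activity, observation symbols
switch = 0.1            # chance the active activity changes at each step
trans = [[1.0 / S] * S for _ in range(S)]                       # intra-activity
emit = [[[random.random() for _ in range(O)] for _ in range(S)]
        for _ in range(K)]                                      # per activity

def advance(particle, obs):
    """Move one particle: maybe switch activities (inter-activity dynamics),
    then advance only the active activity's chain (intra-activity dynamics);
    weight the particle by the observation likelihood."""
    active, states = particle
    if random.random() < switch:
        active = random.randrange(K)
    states = list(states)
    states[active] = random.choices(range(S), weights=trans[states[active]])[0]
    return (active, tuple(states)), emit[active][states[active]][obs]

particles = [(0, (0,) * K) for _ in range(200)]
for obs in [1, 3, 0, 2]:                          # a toy observation stream
    moved = [advance(p, obs) for p in particles]
    particles = random.choices([p for p, _ in moved],
                               weights=[w for _, w in moved], k=len(moved))

The key structural point is that the hidden state pairs the identity of the active activity with the internal state of every activity, so suspended activities resume where they left off.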


Intelligent Robots and Systems | 2004

Bootstrap learning for object discovery

Joseph Modayil; Benjamin Kuipers

We show how a robot can autonomously learn an ontology of objects to explain aspects of its sensor input from an unknown dynamic world. Unsupervised learning about objects is an important conceptual step in developmental learning, whereby the agent clusters observations across space and time to construct stable perceptual representations of objects. Our proposed unsupervised learning method uses the properties of allocentric occupancy grids to classify individual sensor readings as static or dynamic. Dynamic readings are clustered and the clusters are tracked over time to identify objects, separating them both from the background of the environment and from the noise of unexplainable sensor readings. Once trackable clusters of sensor readings (i.e., objects) have been identified, we build shape models for those objects whose shape is a stable and consistent property; the representation can nonetheless tolerate, represent, and track amorphous objects as well as those with well-defined shape. In the end, the learned ontology makes it possible for the robot to describe a cluttered dynamic world with symbolic object descriptions along with a static environment model, both grounded in sensory experience and learned without external supervision.
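The pipeline the abstract describes (classify readings against an occupancy grid, cluster the dynamic ones, track clusters over time) might look roughly like the minimal Python sketch below; the thresholds and the greedy clustering are illustrative choices, not the paper's.

import numpy as np

def classify_dynamic(points, grid, resolution=0.1, occupied=0.6):
    """A reading is static if it falls in a cell the allocentric occupancy
    grid already believes to be occupied; otherwise it is dynamic."""
    cells = (points / resolution).astype(int)
    return grid[cells[:, 0], cells[:, 1]] < occupied

def cluster(points, radius=0.3):
    """Greedy clustering of dynamic readings by distance to cluster means."""
    clusters = []
    for p in points:
        for c in clusters:
            if np.linalg.norm(p - np.mean(c, axis=0)) < radius:
                c.append(p)
                break
        else:
            clusters.append([p])
    return [np.mean(c, axis=0) for c in clusters]

def associate(prev_centroids, centroids, gate=0.5):
    """Track objects across frames by matching each cluster to the nearest
    previous centroid, if one lies within the gating distance."""
    matches = {}
    for i, c in enumerate(centroids):
        if prev_centroids:
            dists = [np.linalg.norm(c - q) for q in prev_centroids]
            j = int(np.argmin(dists))
            if dists[j] < gate:
                matches[i] = j       # same object observed again
    return matches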


Connection Science | 2006

Bootstrap learning of foundational representations

Benjamin Kuipers; Patrick Beeson; Joseph Modayil; Jefferson Provost

To be autonomous, intelligent robots must learn the foundations of commonsense knowledge from their own sensorimotor experience in the world. We describe four recent research results that contribute to a theory of how a robot learning agent can bootstrap from the ‘blooming buzzing confusion’ of the pixel level to a higher-level ontology including distinctive states, places, objects, and actions. This is not a single learning problem, but a lattice of related learning tasks, each providing prerequisites for tasks to come later. Starting with completely uninterpreted sense and motor vectors, as well as an unknown environment, we show how a learning agent can separate the sense vector into modalities, learn the structure of individual modalities, learn natural primitives for the motor system, identify reliable relations between primitive actions and created sensory features, and define useful control laws for homing and path-following. Building on this framework, we show how an agent can use self-organizing maps to identify useful sensory features in the environment, learn effective hill-climbing control laws that define distinctive states in terms of those features, and learn trajectory-following control laws to move from one distinctive state to another. Moving on to place recognition, we show how an agent can combine unsupervised learning, map-learning, and supervised learning to achieve high-performance recognition of places from rich sensory input. Finally, we take the first steps toward learning an ontology of objects, showing that a bootstrap learning robot can learn to individuate objects through motion, separating them from the static environment and from each other, and can learn properties useful for classification. These are four key steps in a larger research enterprise on the foundations of human and robot commonsense knowledge.


Intelligent Robots and Systems | 2004

Using the topological skeleton for scalable global metrical map-building

Joseph Modayil; Patrick Beeson; Benjamin Kuipers

Most simultaneous localization and mapping (SLAM) approaches focus on purely metrical approaches to map-building. We present a method for computing the global metrical map that builds on the structure provided by a topological map. This allows us to factor the uncertainty in the map into local metrical uncertainty (which is handled well by existing SLAM methods), global topological uncertainty (which is handled well by recently developed topological map-learning methods), and global metrical uncertainty (which can be handled effectively once the other types of uncertainty are factored out). We believe that this method for building the global metrical map is scalable to very large environments.
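Once the topological structure is fixed, the remaining global metrical uncertainty reduces to a sparse estimation problem. The following Python sketch (positions only, orientations omitted, measurements invented) solves a small loop of relative displacements between places by linear least squares.

import numpy as np

nodes = ["A", "B", "C", "D"]
# measured displacement to_node - from_node, with the loop A-B-C-D-A closed
edges = [("A", "B", [2.0, 0.0]), ("B", "C", [0.0, 2.1]),
         ("C", "D", [-2.05, 0.0]), ("D", "A", [0.0, -1.95])]

idx = {n: i for i, n in enumerate(nodes)}
A = np.zeros((2 * len(edges) + 2, 2 * len(nodes)))
b = np.zeros(2 * len(edges) + 2)
for r, (u, v, d) in enumerate(edges):
    for k in range(2):                       # x and y components
        A[2 * r + k, 2 * idx[v] + k] = 1.0
        A[2 * r + k, 2 * idx[u] + k] = -1.0
        b[2 * r + k] = d[k]
A[-2, 0] = A[-1, 1] = 1.0                    # anchor node A at the origin

positions, *_ = np.linalg.lstsq(A, b, rcond=None)
print(positions.reshape(-1, 2))              # globally consistent layout

Anchoring one node removes the gauge freedom, and the small residual around the loop is spread across all four edges rather than accumulating at one place.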


Robotics and Autonomous Systems | 2008

The initial development of object knowledge by a learning robot

Joseph Modayil; Benjamin Kuipers

We describe how a robot can develop knowledge of the objects in its environment directly from unsupervised sensorimotor experience. The object knowledge consists of multiple integrated representations: trackers that form spatio-temporal clusters of sensory experience, percepts that represent properties for the tracked objects, classes that support efficient generalization from past experience, and actions that reliably change object percepts. We evaluate how well this intrinsically acquired object knowledge can be used to solve externally specified tasks including object recognition and achieving goals that require both planning and continuous control.
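The four integrated representations can be pictured as data structures along these lines; the class and field names are invented for illustration and are not the paper's API.

from dataclasses import dataclass, field

@dataclass
class Tracker:                  # spatio-temporal cluster of sensory experience
    track_id: int
    snapshots: list = field(default_factory=list)   # per-frame point clouds

@dataclass
class Percept:                  # properties computed for a tracked object
    track_id: int
    shape: list                 # e.g. an outline of boundary points
    location: tuple

@dataclass
class ObjectClass:              # supports generalization from past experience
    name: str
    exemplars: list = field(default_factory=list)   # percepts seen before

@dataclass
class Action:                   # reliably changes some object percept
    name: str
    precondition: str           # e.g. "object within reach"
    effect: str                 # e.g. "location moves toward the goal"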


Adaptive Behavior | 2014

Multi-timescale nexting in a reinforcement learning robot

Joseph Modayil; Adam White; Richard S. Sutton

The term ‘nexting’ has been used by psychologists to refer to the propensity of people and many other animals to continually predict what will happen next in an immediate, local, and personal sense. The ability to ‘next’ constitutes a basic kind of awareness and knowledge of one’s environment. In this paper we present results with a robot that learns to next in real time, making thousands of predictions about sensory input signals at timescales from 0.1 to 8 seconds. Our predictions are formulated as a generalization of the value functions commonly used in reinforcement learning, where now an arbitrary function of the sensory input signals is used as a pseudo-reward, and the discount rate determines the timescale. We show that six thousand predictions, each computed as a function of six thousand features of the state, can be learned and updated online ten times per second on a laptop computer, using the standard TD(λ) algorithm with linear function approximation. This approach is sufficiently computationally efficient to be used for real-time learning on the robot and sufficiently data efficient to achieve substantial accuracy within 30 minutes. Moreover, a single tile-coded feature representation suffices to accurately predict many different signals over a significant range of timescales. We also extend nexting beyond simple timescales by letting the discount rate be a function of the state, and show that nexting predictions of this more general form can also be learned with substantial accuracy. General nexting provides a simple yet powerful mechanism for a robot to acquire predictive knowledge of the dynamics of its environment.
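A minimal Python sketch of one such prediction, learned online by TD(λ) with linear function approximation, is given below; the sparse binary features, step-size, and signal stream are synthetic stand-ins for the robot's tile-coded features described in the abstract.

import numpy as np

n_features = 100
w = np.zeros(n_features)            # weights for one prediction
z = np.zeros(n_features)            # eligibility trace
alpha, lam = 0.01, 0.9

def td_lambda_step(phi, phi_next, pseudo_reward, gamma):
    """One online TD(lambda) update; gamma sets the timescale, e.g.
    gamma = 1 - 1/80 predicts roughly 8 s ahead at 10 Hz."""
    global w, z
    delta = pseudo_reward + gamma * (w @ phi_next) - w @ phi
    z = gamma * lam * z + phi
    w = w + alpha * delta * z
    return w @ phi_next             # the current nexting prediction

rng = np.random.default_rng(0)
phi = (rng.random(n_features) < 0.1).astype(float)   # sparse binary features
for t in range(1000):
    phi_next = (rng.random(n_features) < 0.1).astype(float)
    signal = rng.random()           # any sensory signal can be a pseudo-reward
    td_lambda_step(phi, phi_next, signal, gamma=0.95)
    phi = phi_next

In the paper's setting, thousands of such weight vectors are updated in parallel from the same feature vector, one per predicted signal and timescale.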


Canadian Conference on Computer and Robot Vision | 2006

Building Local Safety Maps for a Wheelchair Robot using Vision and Lasers

Aniket Murarka; Joseph Modayil; Benjamin Kuipers

To be useful as a mobility assistant for a human driver, an intelligent robotic wheelchair must be able to distinguish between safe and hazardous regions in its immediate environment. We present a hybrid method using laser range-finders and vision for building local 2D metrical maps that incorporate safety information (called local safety maps). Laser range-finders are used for localization and mapping of obstacles in the 2D laser plane, and vision is used for detection of hazards and other obstacles in 3D space. The hazards and obstacles identified by vision are projected into the travel plane of the robot and combined with the laser map to construct the local 2D safety map. The main contributions of this work are (i) the definition of a local 2D safety map, (ii) a hybrid method for building the safety map, and (iii) a method for removing noise from dense stereo data using motion.
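The fusion step can be sketched in a few lines of Python; the cell labels, grid resolution, and override rule are illustrative assumptions rather than the paper's exact procedure.

import numpy as np

FREE, OBSTACLE, HAZARD = 0, 1, 2

def safety_map(laser_map, hazard_points_3d, resolution=0.05):
    """laser_map: 2D grid of FREE/OBSTACLE built from the laser plane.
    hazard_points_3d: Nx3 points (e.g. drop-offs, overhangs) found by vision."""
    out = laser_map.copy()
    cells = (hazard_points_3d[:, :2] / resolution).astype(int)  # project to plane
    for i, j in cells:
        if 0 <= i < out.shape[0] and 0 <= j < out.shape[1]:
            out[i, j] = HAZARD      # vision overrides the laser's FREE label
    return out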


International Conference on Development and Learning | 2010

Discovering sensor space: Constructing spatial embeddings that explain sensor correlations

Joseph Modayil

A fundamental task for a developing agent is to build models that explain its uninterpreted sensorimotor experience. This paper describes an algorithm that constructs a sensor space from sensor correlations: it generates a spatial embedding of the sensors in which strongly correlated sensors are neighbors. The algorithm first infers a sensor correlation distance and then applies the fast maximum variance unfolding algorithm to generate a distance-preserving embedding. Although previous work has shown how sensor embeddings can be constructed, this paper provides a framework for understanding sensor embedding, introduces a sensor correlation distance, and demonstrates embeddings for thousands of sensors on intrinsically curved manifolds.
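A compact Python sketch of the pipeline follows. It uses a standard correlation distance d = sqrt(2(1 - rho)) as an assumed form of the paper's sensor correlation distance, and substitutes classical multidimensional scaling for the fast maximum variance unfolding step, so it approximates the approach rather than reimplementing it.

import numpy as np

def sensor_embedding(readings, dim=2):
    """readings: T x N matrix, one column of observations per sensor."""
    rho = np.corrcoef(readings.T)             # N x N sensor correlations
    d = np.sqrt(2.0 * (1.0 - rho))            # correlation distance
    # classical MDS: double-center squared distances, then eigen-decompose
    n = d.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (d ** 2) @ J
    vals, vecs = np.linalg.eigh(B)
    top = np.argsort(vals)[::-1][:dim]
    return vecs[:, top] * np.sqrt(np.maximum(vals[top], 0.0))

Strongly correlated sensor pairs get small distances and therefore land near one another in the returned embedding coordinates.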

Collaboration


Joseph Modayil's most frequent co-authors.

Top Co-Authors

Patrick Beeson

University of Texas at Austin


Matt MacMahon

University of Texas at Austin
