Daniel Stronger
University of Texas at Austin
Publications
Featured research published by Daniel Stronger.
Connection Science | 2006
Daniel Stronger; Peter Stone
This article presents a novel methodology, called ASAMI (autonomous sensor and actuator model induction), for a robot to autonomously induce models of its actions and sensors. While previous approaches to model learning rely on an independent source of training data, we show how a robot can induce action and sensor models without any well-calibrated feedback. Specifically, the only inputs to the ASAMI learning process are the data the robot would naturally have access to: its raw sensations and knowledge of its own action selections. From the perspective of developmental robotics, our robot's goal is to obtain self-consistent internal models, rather than to perform any externally defined tasks. Furthermore, the target function of each model-learning process comes from within the system, namely the most current version of another internal system model. Concretely realizing this model-learning methodology presents a number of challenges, and we introduce a broad class of settings in which solutions to these challenges are presented. ASAMI is fully implemented and tested, and empirical results validate our approach in a robotic testbed domain using a Sony Aibo ERS-7 robot.
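As a rough sketch of the bootstrapped learning loop described above, consider a hypothetical 1-D version in which each model is repeatedly refit against the output of the other. The linear forms, gains, and noise levels below are illustrative assumptions, not the paper's actual models:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-D world, unknown to the learner:
true_velocity = lambda cmd: 1.8 * cmd           # action: command -> velocity
true_sensor = lambda pos: 40.0 + 12.0 * pos     # sensor: position -> raw reading

# One autonomous run: random commands and the raw readings they produce.
dt = 0.1
cmds = rng.uniform(-1.0, 1.0, 400)
positions = np.cumsum(true_velocity(cmds) * dt)
readings = true_sensor(positions) + rng.normal(0.0, 0.5, 400)

action_gain = 1.0                    # crude initial action model
sensor_fit = np.array([1.0, 0.0])    # crude sensor model: pos = b*reading + a

for _ in range(20):
    # Fit the sensor model against positions predicted by the action model...
    pred_pos = np.cumsum(action_gain * cmds * dt)
    sensor_fit = np.polyfit(readings, pred_pos, 1)
    # ...then fit the action model against velocities implied by the sensor model.
    est_pos = np.polyval(sensor_fit, readings)
    est_vel = np.gradient(est_pos, dt)
    action_gain = np.linalg.lstsq(cmds[:, None], est_vel, rcond=None)[0][0]

# With no external calibration, the two models end up self-consistent:
# they agree with each other, but only up to a shared scale factor.
```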
Robotics and Autonomous Systems | 2006
Peter Stone; Mohan Sridharan; Daniel Stronger; Gregory Kuhlmann; Nate Kohl; Peggy Fidelman; Nicholas K. Jong
Mobile robots must cope with uncertainty from many sources along the path from interpreting raw sensor inputs to behavior selection to execution of the resulting primitive actions. This article identifies several such sources and introduces methods for (i) reducing uncertainty and (ii) making decisions in the face of uncertainty. We present a complete vision-based robotic system that includes several algorithms for learning models that are useful and necessary for planning, and then place particular emphasis on the planning and decision-making capabilities of the robot. Specifically, we present models for autonomous color calibration, autonomous sensor and actuator modeling, and an adaptation of particle filtering for improved localization on legged robots. These contributions enable effective planning under uncertainty for robots engaged in goal-oriented behavior within a dynamic, collaborative and adversarial environment. Each of our algorithms is fully implemented and tested on a commercial off-the-shelf vision-based quadruped robot.
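The localization component mentioned above is an adaptation of particle filtering (Monte Carlo localization). A minimal 1-D sketch of the filter's three steps, with an assumed single-landmark map and made-up noise parameters, looks like this:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 500
particles = rng.uniform(0.0, 10.0, N)   # hypothetical 1-D corridor poses
weights = np.ones(N) / N

landmark = 7.0                           # known landmark position (assumed map)

def motion_update(particles, v, dt, noise=0.05):
    # Sample a noisy motion model; legged odometry is noisy, so the spread matters.
    return particles + v * dt + rng.normal(0.0, noise, particles.size)

def sensor_update(particles, weights, measured_range, sigma=0.3):
    # Weight each particle by the likelihood of the observed range to the landmark.
    expected = np.abs(landmark - particles)
    w = weights * np.exp(-0.5 * ((measured_range - expected) / sigma) ** 2)
    return w / w.sum()

def resample(particles, weights):
    # Low-variance (systematic) resampling keeps particle diversity.
    cum = np.cumsum(weights)
    cum[-1] = 1.0  # guard against floating-point shortfall
    idx = np.searchsorted(cum, (rng.random() + np.arange(N)) / N)
    return particles[idx], np.ones(N) / N

# One filter step: move, weight by the observation, then resample.
particles = motion_update(particles, v=0.2, dt=0.1)
weights = sensor_update(particles, weights, measured_range=2.5)
particles, weights = resample(particles, weights)
```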
International Conference on Robotics and Automation | 2005
Daniel Stronger; Peter Stone
This paper presents a technique for the Simultaneous Calibration of Action and Sensor Models (SCASM) on a mobile robot. While previous approaches to calibration make use of an independent source of feedback, SCASM is unsupervised, in that it does not receive any well-calibrated feedback about its location. Starting with only an inaccurate action model, it learns accurate relative action and sensor models. Furthermore, SCASM is fully autonomous, in that it operates with no human supervision. SCASM is fully implemented and tested on a Sony Aibo ERS-7 robot.
Robot Soccer World Cup | 2006
Daniel Stronger; Peter Stone
Autonomous robots can use a variety of sensors, such as sonar, laser range finders, and bump sensors, to sense their environments. Visual information from an onboard camera can provide particularly rich sensor data. However, processing all the pixels in every image, even with simple operations, can be computationally taxing for robots equipped with cameras of reasonable resolution and frame rate. This paper presents a novel method for a legged robot equipped with a camera to use selective visual attention to efficiently recognize objects in its environment. The resulting attention-based approach is fully implemented and validated on an Aibo ERS-7. It effectively processes incoming images 50 times faster than a baseline approach, with no significant difference in the efficacy of its object detection.
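To give a feel for what selective attention buys here: instead of sweeping every pixel, the robot can confine its object tests to a small window around where the object is predicted to be. A simplified sketch follows; the color thresholds, window size, and detect_orange_blob test are invented for illustration:

```python
import numpy as np

def detect_orange_blob(window):
    # Stand-in color test: fraction of "orange" pixels in an RGB window.
    r, g, b = window[..., 0], window[..., 1], window[..., 2]
    mask = (r > 180) & (g > 80) & (g < 160) & (b < 80)
    return mask.mean() > 0.2, mask

def attended_scan(frame, predicted_xy, half=20):
    # Examine only a small window around the position predicted from the
    # previous frame and the robot's motion, instead of every pixel.
    x, y = predicted_xy
    h, w = frame.shape[:2]
    x0, x1 = max(0, x - half), min(w, x + half)
    y0, y1 = max(0, y - half), min(h, y + half)
    if x0 >= x1 or y0 >= y1:
        return None  # prediction fell outside the image
    found, mask = detect_orange_blob(frame[y0:y1, x0:x1])
    if found:
        ys, xs = np.nonzero(mask)
        return (x0 + int(xs.mean()), y0 + int(ys.mean()))
    return None  # fall back to a coarse full-frame scan when the object is lost
```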
Robot Soccer World Cup | 2005
Daniel Stronger; Peter Stone
Despite efforts to design precise motor controllers, robot joints do not always move exactly as desired. This paper introduces a general model-based method for improving the accuracy of joint control. First, a model that predicts the effects of joint requests is built based on empirical data. Then this model is approximately inverted to determine the control requests that will most closely lead to the desired movements. We implement and validate this approach on a popular, commercially available robot, the Sony Aibo ERS-210A.
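A minimal sketch of this model-then-invert idea, with an invented joint response and a polynomial forward model standing in for the paper's empirical one:

```python
import numpy as np

# Hypothetical joint with gain loss and a soft deadband: achieved = f(requested),
# unknown to us except through logged data.
actual_response = lambda req: 0.85 * req - 2.0 * np.tanh(req / 30.0)

# 1) Fit an empirical forward model from logged (request, achieved) pairs.
requests = np.linspace(-90.0, 90.0, 181)
achieved = actual_response(requests)
forward = np.poly1d(np.polyfit(requests, achieved, 5))

# 2) Approximately invert it: search for the request whose predicted
#    response is closest to the desired joint angle.
def inverse_request(desired, grid=np.linspace(-90.0, 90.0, 2001)):
    return grid[np.argmin(np.abs(forward(grid) - desired))]

print(inverse_request(45.0))  # request that should yield ~45 degrees
```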
International Journal on Artificial Intelligence Tools | 2008
Daniel Stronger; Peter Stone
In order for an autonomous agent to behave robustly in a variety of environments, it must have the ability to learn approximations to many different functions. The function approximator used by such an agent is subject to a number of constraints that may not apply in a traditional supervised learning setting. Many different function approximators exist and are appropriate for different problems. This paper proposes a set of criteria for function approximators for autonomous agents. Additionally, for those problems on which polynomial regression is a candidate technique, the paper presents an enhancement that meets these criteria. In particular, using polynomial regression typically requires a manual choice of the polynomial's degree, trading off between function accuracy and computational and memory efficiency. Polynomial Regression with Automated Degree (PRAD) is a novel function approximation method that uses training data to automatically identify an appropriate degree for the polynomial. PRAD is fully implemented. Empirical tests demonstrate its ability to efficiently and accurately approximate both a wide variety of synthetic functions and real-world data gathered by a mobile robot.
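One plausible way to realize the automated degree choice (a sketch; the paper's actual selection criterion may differ) is to score each candidate degree on held-out data and keep the winner:

```python
import numpy as np

def prad_fit(x, y, max_degree=12, rng=np.random.default_rng(2)):
    """Pick the polynomial degree that minimizes held-out error, then refit."""
    idx = rng.permutation(x.size)
    split = int(0.8 * x.size)
    tr, va = idx[:split], idx[split:]
    best_deg, best_err = 0, np.inf
    for d in range(max_degree + 1):
        coef = np.polyfit(x[tr], y[tr], d)
        err = np.mean((np.polyval(coef, x[va]) - y[va]) ** 2)
        if err < best_err:
            best_deg, best_err = d, err
    return np.polyfit(x, y, best_deg), best_deg

# Example: recover a sensible degree for noisy sinusoidal data.
x = np.linspace(-1.0, 1.0, 200)
y = np.sin(3 * x) + np.random.default_rng(5).normal(0.0, 0.05, x.size)
coef, degree = prad_fit(x, y)
```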
International Conference on Robotics and Automation | 2007
Daniel Stronger; Peter Stone
This paper considers two approaches to the problem of vision and self-localization on a mobile robot. In the first approach, the perceptual processing is primarily bottom-up, with visual object recognition entirely preceding localization. In the second, significant top-down information is incorporated, with vision and localization being intertwined. That is, the processing of vision is highly dependent on the robot's estimate of its location. The two approaches are implemented and tested on a Sony Aibo ERS-7 robot, localizing as it walks through a color-coded test-bed domain. This paper's contributions are an exposition of two different approaches to vision and localization on a mobile robot, an empirical comparison of the two methods, and a discussion of the relative advantages of each method.
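In the top-down direction, the core trick is that the pose estimate tells vision where to look. A toy sketch, where the field-of-view and image-width numbers are assumptions roughly matching the ERS-7's camera:

```python
import numpy as np

def expected_bearing(robot_pose, landmark_xy):
    # Bearing to a mapped landmark, given the current pose estimate.
    x, y, theta = robot_pose
    dx, dy = landmark_xy[0] - x, landmark_xy[1] - y
    return np.arctan2(dy, dx) - theta

def expected_pixel_column(bearing, image_width=208, fov=np.radians(57)):
    # Map a bearing within the camera's field of view to an image column.
    return int((0.5 - bearing / fov) * image_width)

# Top-down step: search for the landmark only near the column where the
# localization estimate says it should appear.
pose = (1.0, 0.5, 0.2)                  # hypothetical (x, y, heading)
col = expected_pixel_column(expected_bearing(pose, (3.0, 2.0)))
```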
Robot Soccer World Cup | 2008
Uli Grasemann; Daniel Stronger; Peter Stone
The joint controllers used in robots like the Sony Aibo are designed for the task of moving the joints of the robot to a given position. However, they are not well suited to the problem of making a robot move through a desired trajectory at speeds close to the physical capabilities of the robot, and in many cases, they cannot be bypassed easily. In this paper, we propose an approach that models both the robot's joints and its built-in controllers as a single system that is in turn controlled by a neural network. The neural network controls the entire trajectory of a robot instead of just its static position. We implement and evaluate our approach on a Sony Aibo ERS-7.
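A stripped-down sketch of the idea, with a first-order lag standing in for the joint-plus-controller system and a tiny two-layer network learning its inverse over a short trajectory window. The architecture, window size, and training details are all invented; the paper's network is surely different:

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated joint + built-in controller: the achieved angle lags the request.
def plant(requests, alpha=0.3):
    out, a = [], 0.0
    for r in requests:
        a += alpha * (r - a)   # first-order lag
        out.append(a)
    return np.array(out)

# Collect data: random request sequences and the trajectories they produce.
# Training pairs invert the plant: a window of achieved angles maps back to
# the request that was active at the window's center.
W = 5
reqs = rng.uniform(-1.0, 1.0, 5000)
traj = plant(reqs)
X = np.stack([traj[i:i + W] for i in range(len(traj) - W)])
y = reqs[W // 2: len(traj) - W + W // 2]

# Tiny MLP trained with plain gradient descent.
W1 = rng.normal(0, 0.5, (W, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, (16, 1)); b2 = np.zeros(1)
lr = 0.05
for _ in range(2000):
    h = np.tanh(X @ W1 + b1)
    pred = (h @ W2 + b2).ravel()
    err = pred - y
    gW2 = h.T @ err[:, None] / len(y); gb2 = err.mean(keepdims=True)
    dh = (err[:, None] * W2.T) * (1 - h ** 2)
    gW1 = X.T @ dh / len(y); gb1 = dh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1

# At run time, feed the *desired* trajectory window in; the network outputs
# the request most likely to make the joint actually follow it.
```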
International Conference on Robotics and Automation | 2008
Daniel Stronger; Peter Stone
In order for a mobile robot to accurately interpret its sensations and predict the effects of its actions, it must have accurate models of its sensors and actuators. These models are typically tuned manually, a brittle and laborious process. Autonomous model learning is a promising alternative to manual calibration, but previous work has assumed the presence of an accurate action or sensor model in order to train the other model. This paper presents an adaptation of the Expectation-Maximization (EM) algorithm to enable a mobile robot to learn both its action and sensor model functions, starting without an accurate version of either. The resulting algorithm is validated experimentally both on a Sony Aibo ERS-7 robot and in simulation.
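To make the EM adaptation concrete, here is a crude 1-D caricature: the E-step estimates the latent position trajectory from both current models, and the M-step refits each model against that estimate. The linear forms and the simple 50/50 blend in the E-step are illustrative simplifications:

```python
import numpy as np

rng = np.random.default_rng(4)
dt = 0.1
true_a, true_c = 1.8, 12.0       # unknown action gain, sensor gain

cmds = rng.uniform(-1.0, 1.0, 300)
pos = np.cumsum(true_a * cmds * dt)
obs = true_c * pos + rng.normal(0.0, 0.3, 300)

a_hat, c_hat = 1.0, 1.0          # start with neither model accurate
for _ in range(30):
    # E-step: estimate latent positions given both current models by
    # blending odometry (action model) with sensor-model predictions.
    odo = np.cumsum(a_hat * cmds * dt)
    sensed = obs / c_hat
    est = 0.5 * (odo + sensed)
    # M-step: refit each model against the latent position estimates.
    vel = np.gradient(est, dt)
    a_hat = np.linalg.lstsq(cmds[:, None], vel, rcond=None)[0][0]
    c_hat = np.linalg.lstsq(est[:, None], obs, rcond=None)[0][0]

# Without external feedback the pair (a_hat, c_hat) is identifiable only up
# to a shared scale: the two learned models agree with each other and with
# the data, matching the self-consistency goal rather than absolute truth.
```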
International Conference on Tools with Artificial Intelligence | 2006
Daniel Stronger; Peter Stone
In order for an autonomous agent to behave robustly in a variety of environments, it must have the ability to learn approximations to many different functions. The function approximator used by such an agent is subject to a number of constraints that may not apply in a traditional supervised learning setting. Many different function approximators exist and are appropriate for different problems. This paper proposes a set of criteria for function approximators for autonomous agents. Additionally, for those problems on which polynomial regression is a candidate technique, the paper presents an enhancement that meets these criteria. In particular, using polynomial regression typically requires a manual choice of the polynomial's degree, trading off between function accuracy and computational and memory efficiency. Polynomial regression with automated degree (PRAD) is a novel function approximation method that uses training data to automatically identify an appropriate degree for the polynomial. PRAD is fully implemented. Empirical tests demonstrate its ability to efficiently and accurately approximate both a wide variety of synthetic functions and real-world data gathered by a mobile robot.