Ian Horswill
Northwestern University
Publications
Featured research published by Ian Horswill.
AI Magazine | 2003
Reid G. Simmons; Dani Goldberg; Adam Goode; Michael Montemerlo; Nicholas Roy; Brennan Sellner; Chris Urmson; Alan C. Schultz; Myriam Abramson; William Adams; Amin Atrash; Magdalena D. Bugajska; Michael J. Coblenz; Matt MacMahon; Dennis Perzanowski; Ian Horswill; Robert Zubek; David Kortenkamp; Bryn Wolfe; Tod Milam; Bruce Allen Maxwell
In an attempt to solve as much of the AAAI Robot Challenge as possible, five research institutions representing academia, industry, and government integrated their research into a single robot named GRACE. This article describes this first-year effort by the GRACE team, including not only the various techniques each participant brought to GRACE but also the difficult integration effort itself.
Artificial Intelligence | 1995
Ian Horswill
Designers often improve the performance of artificial agents by specializing them. We can make a rough but useful distinction between specialization to a task and specialization to an environment. Specialization to an environment can be difficult to understand: it may be unclear on what properties of the environment the agent depends, or in what manner it depends on each individual property. In this paper, I discuss a method for analyzing specialization into a series of conditional optimizations: formal transformations which, given some constraint on the environment, map mechanisms to more efficient mechanisms with equivalent behavior. I apply the technique to the analysis of the vision and control systems of a working robot system in day-to-day use in our laboratory. The method is not intended as a general theory for automated synthesis of arbitrary specialized agents. Nonetheless, it can be used to perform post-hoc analysis of agents so as to make explicit the environment properties required by the agent and the computational value of each property. This post-hoc analysis helps explain performance in normal environments and predict performance in novel environments. In addition, the transformations brought out in the analysis of one system can be reused in the synthesis of future systems.
Intelligent Robots and Systems | 1994
Ian Horswill
Visual collision avoidance involves two difficult subproblems: obstacle recognition and depth measurement. We present a class of algorithms that use particularly simple methods for each subproblem and derive a set of sufficient conditions for their proper functioning based on a set of idealizations. We then compare two different implementations of the approach on mobile robots and discuss their performance. Finally, we experimentally validate the idealizations.
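One particularly simple depth-measurement method in this family assumes a flat floor: an obstacle's lowest image row then determines its distance by triangulation against the known camera height. The sketch below illustrates that idea only; the function name and parameters are invented here, not taken from the paper.

```python
import math

def ground_plane_distance(row, horizon_row, camera_height, focal_length_px):
    """Estimate distance to an obstacle whose lowest pixel is at image `row`,
    assuming a flat floor (an idealization of the kind the paper analyzes).

    A pixel `dy` rows below the horizon views the floor at an angle
    atan(dy / f) below horizontal; with the camera `camera_height` above
    the floor, that ray hits the ground at camera_height / tan(angle).
    """
    dy = row - horizon_row
    if dy <= 0:
        return float("inf")  # at or above the horizon: ray never meets the floor
    angle = math.atan2(dy, focal_length_px)
    return camera_height / math.tan(angle)
```

The sufficient condition for correctness is exactly the idealization in the comment: everything touching the floor is an obstacle, and the floor is flat.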
Computational Intelligence in Robotics and Automation | 1999
Ian Horswill
In this paper, I describe a simple functional programming language, GRL, in which most of the characteristic features of the popular behavior-based robot architectures can be concisely written as reusable software abstractions. This makes it easier to write clear, modular code, to “mix and match” arbitration mechanisms, and to experiment with variations on existing mechanisms. I describe the compilation process for the language, our experiences with it, and issues of efficiency, expressiveness, and code size relative to other languages.
IEEE Transactions on Computational Intelligence and AI in Games | 2009
Ian Horswill
In this paper, we describe Twig, a fast, AI-friendly procedural animation system that supports easy authoring of new behaviors. The system provides a simplified dynamic simulation that is specifically designed to be easy to control. Characters are controlled by applying external forces directly to body parts, rather than by simulating joint torques. This “puppetry-style” of control provides the simplicity of kinematic control within an otherwise dynamic simulation. Although less realistic than motion capture or full biomechanical simulation, Twig produces compelling, responsive character behavior. Moreover, it is fast, stable, supports believable physical interactions between characters such as hugging, punching, and dragging, and makes it easy to author new behaviors.
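The control idea described above — pulling a body part toward a goal with an external spring-like force inside a damped dynamic simulation, instead of computing joint torques — can be sketched in a few lines. This is a generic illustration under invented parameters, not Twig's actual code.

```python
def puppet_step(pos, vel, target, dt=0.02, k=40.0, damping=8.0):
    """One simulation step for a unit-mass body part.

    An external "puppet string" force pulls the part toward `target`;
    damping keeps the otherwise-dynamic simulation stable. Integration
    is semi-implicit Euler.
    """
    force = k * (target - pos) - damping * vel
    vel = vel + force * dt   # unit mass: acceleration == force
    pos = pos + vel * dt
    return pos, vel
```

Driven this way, the part converges smoothly to the target, giving the kinematic-feeling control the abstract describes while still participating in the dynamics.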
Autonomous Robots | 1998
Ian Horswill
We describe a uniform technique for representing both sensory data and the attentional state of an agent using a subset of modal logic with indexicals. The resulting representation maps naturally into feed-forward parallel networks or can be implemented on stock hardware using bit-mask instructions. The representation has “circuit-semantics” (Nilsson, 1994, Rosenschein and Kaelbling, 1986), but can efficiently represent propositions containing modals, unary predicates, and functions. We describe an example using Kludge, a vision-based mobile robot programmed to perform simple natural language instructions involving fetching and following tasks.
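The claim that the representation can be implemented "using bit-mask instructions" has a direct illustration: encode each unary predicate as a bit-vector over a fixed set of attended referents, so logical connectives become single bitwise operations. The predicate names and domain size below are hypothetical examples, not drawn from the Kludge system.

```python
NUM_REFERENTS = 8  # hypothetical fixed pool of attended objects

def pred(*indices):
    """Build a bit-vector asserting a unary predicate of the given referents."""
    v = 0
    for i in indices:
        v |= 1 << i
    return v

red    = pred(0, 2, 5)  # red(x) holds of referents 0, 2, 5
nearby = pred(2, 3)     # nearby(x) holds of referents 2, 3

red_and_nearby = red & nearby                        # conjunction: one AND
not_red = ~red & ((1 << NUM_REFERENTS) - 1)          # negation within the domain
exists_red_nearby = red_and_nearby != 0              # existential query
```

Each connective costs one machine instruction per word of the bit-vector, which is what makes the representation practical for real-time control.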
Journal of Experimental and Theoretical Artificial Intelligence | 1997
Ian Horswill
Traditional architectures have fundamental epistemological problems. Perception is inherently resource limited, so controlling perception involves all the same AI-complete problems of reasoning about time and resources as the full-scale planning problem. Allowing a planner to transparently assume that the information it needs will automatically be present and up-to-date in the model thus presupposes a solution to a problem at least as difficult as planning itself. Although one can imagine many possible solutions to this problem, such as allowing the planner to recurse on its own epistemological problems, there have been no convincing attempts at this. In this paper, I compare behaviour-based and traditional systems in terms of their representational power and the strengths of their implicit epistemological theories. I argue that both have serious limitations and that those limitations are not addressed simply by joining the two into a hybrid. I discuss my work using vision to support real-time activity.
Lecture Notes in Computer Science | 2003
Magy Seif El-Nasr; Ian Horswill
Lighting design is an important element of scene composition. Designers use light to influence viewers’ perception by evoking moods, directing their gaze to important areas, and conveying dramatic tension. Lighting is a very time-consuming task; designers typically spend hours manipulating lights’ colors, positions, and angles to create a lighting design that accommodates dramatic action and tension. Such manual design is inappropriate for interactive narrative, because the scene’s spatial and dramatic characteristics, including dramatic tension and character actions, change unpredictably, necessitating continual redesign as the scene progresses. In this paper, we present a lighting design system, called ELE (Expressive Lighting Engine), that automatically adjusts the angles, positions, and colors of lights in real time to accommodate variations in the scene’s dramatic and spatial characteristics, following cinematic and theatrical lighting design theory. ELE uses constraint-based non-linear optimization algorithms to configure lights.
International Conference on Robotics and Automation | 2002
Aaron Khoo; Ian Horswill
Most physically implemented multi-robot controllers are based on extensions of behavior-based systems. While efficient, such techniques suffer from a paucity of representational power. Symbolic systems, on the other hand, have more sophisticated representations but are computationally complex and have model coherency issues. We describe HIVEMind, a tagged behavior-based architecture for small teams of cooperative robots. In HIVEMind, robots share inferences and sensory data by treating other team members as virtual sensors connected by wireless links. A representation based on bit-vectors allows team members to share intentional, attentional, and sensory information using relatively low-bandwidth connections. We describe an application of the architecture to the problem of systematic spatial search.
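Treating teammates as virtual sensors over low-bandwidth links reduces team-state fusion to OR-ing bit-vectors: a proposition holds for the team if any member asserts it. A minimal sketch of that fusion step, with an invented function name:

```python
def fuse_team_state(local, remote_states):
    """Combine this robot's bit-vector with those received from teammates.

    Each vector encodes intentional, attentional, or sensory propositions;
    bitwise OR yields the team-wide union in one instruction per word.
    """
    fused = local
    for r in remote_states:
        fused |= r
    return fused
```

Because each update is just a few machine words, the shared state fits comfortably within the relatively low-bandwidth wireless connections the abstract mentions.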
Robotics and Autonomous Systems | 2003
Aaron Khoo; Ian Horswill
Traditional symbolic reasoning systems are typically built on a transaction model of computation, which complicates the process of synchronizing their world models with changes in a dynamic environment. This problem is exacerbated in the multi-robot case, where there are now n world models to keep in sync. In this paper, we describe an inference grounding and coordination mechanism for robot teams based on tagged behavior-based systems. This approach supports a large subset of classical AI techniques while providing a novel representation that allows team members to share information efficiently. We illustrate our approach on two problems involving systematic spatial search.