David Wilkes
University of Toronto
Publications
Featured research published by David Wilkes.
Autonomous Robots | 1996
Gregory Dudek; Michael Jenkin; Evangelos E. Milios; David Wilkes
A key difficulty in the design of multi-agent robotic systems is the size and complexity of the space of possible designs. In order to make principled design decisions, an understanding of the many possible system configurations is essential. To this end, we present a taxonomy that classifies multi-agent systems according to communication, computational and other capabilities. We survey existing efforts involving multi-agent systems according to their positions in the taxonomy. We also present additional results concerning multi-agent systems, both to illustrate how the taxonomy simplifies discourse about the properties of robot collectives and to demonstrate that a collective can be demonstrably more powerful than a single unit of the collective.
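The taxonomy's axes lend themselves to a compact encoding as a data structure. Below is a minimal Python sketch of a design-space point; the axis names and value sets (size, communication range, topology, homogeneity) are illustrative stand-ins loosely based on the abstract, not the paper's exact taxonomy.

```python
from dataclasses import dataclass
from enum import Enum

# Hypothetical taxonomy axes, invented for illustration only.

class CommRange(Enum):
    NONE = "no direct communication"
    NEAR = "communication with nearby units"
    ANY = "communication with any unit"

class Topology(Enum):
    BROADCAST = "broadcast"
    ADDRESSED = "point-to-point"

@dataclass(frozen=True)
class CollectiveDesign:
    """One point in a (hypothetical) design space for robot collectives."""
    size: int              # number of units
    comm_range: CommRange  # how far a unit's messages reach
    topology: Topology     # how messages are routed
    homogeneous: bool      # identical vs. heterogeneous units

    def describe(self) -> str:
        kind = "homogeneous" if self.homogeneous else "heterogeneous"
        return (f"{self.size}-unit {kind} collective, "
                f"{self.comm_range.value}, {self.topology.value}")

# Place two example systems in the design space and compare them.
swarm = CollectiveDesign(100, CommRange.NEAR, Topology.BROADCAST, True)
team = CollectiveDesign(3, CommRange.ANY, Topology.ADDRESSED, False)
print(swarm.describe())
print(team.describe())
```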
International Conference on Robotics and Automation | 1991
Gregory Dudek; Michael Jenkin; Evangelos E. Milios; David Wilkes
The problem of robotic exploration of a graph-like world, in which no distance or orientation metric is assumed, is addressed. The robot is assumed to be able to autonomously traverse graph edges, recognize when it has reached a vertex, and enumerate edges incident upon the current vertex relative to the edge via which it entered. The robot cannot measure distances, and it has no compass. It is demonstrated that this exploration problem is unsolvable in general without markers; to solve it, the robot is equipped with one or more distinct markers that can be put down or picked up at will and that the robot can recognize when they are at the same vertex as itself. An exploration algorithm is developed and proven correct. Its performance is shown on several example worlds, and heuristics for improving its performance are discussed.
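The core trick, disambiguating otherwise identical-looking vertices with a droppable marker, can be shown in a toy simulation. The sketch below invents a three-vertex world and a minimal Robot API, and uses absolute per-vertex edge indices rather than the paper's entry-edge-relative ordering; it illustrates one marker test, not the full proven-correct algorithm.

```python
# Toy illustration of the one-marker test used during metric-free graph
# exploration. The world, vertex labels, and Robot API are all invented.

WORLD = {          # ground-truth adjacency lists; hidden from the logic
    "a": ["b", "c"],
    "b": ["a", "c"],
    "c": ["a", "b"],
}

class Robot:
    """Only what the robot can actually do: move along edges, use its marker."""
    def __init__(self, world, start):
        self._world, self._here = world, start
        self._marker_at = None

    def traverse(self, edge_index):
        self._here = self._world[self._here][edge_index]

    def drop_marker(self):
        self._marker_at = self._here

    def sees_marker(self):
        return self._marker_at == self._here

robot = Robot(WORLD, "a")

# The robot has already mapped its start V0 ("a") and V1 ("b", via edge 0).
# It now takes edge 1 out of V0. Without metrics it cannot tell whether the
# endpoint is V1 again or a brand-new vertex, so it applies the marker test.
robot.traverse(1)                 # arrive at the mystery endpoint (truly "c")
robot.drop_marker()               # tag it
robot.traverse(0)                 # return ("c" lists "a" as its edge 0)

seen = [robot.sees_marker()]      # marker at V0?
robot.traverse(0)                 # walk to V1 ("a" -> "b")
seen.append(robot.sees_marker())  # marker at V1?

print("endpoint was a known vertex" if any(seen)
      else "endpoint is a new vertex")   # -> endpoint is a new vertex
```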
Intelligent Robots and Systems | 1993
Gregory Dudek; Michael Jenkin; Evangelos E. Milios; David Wilkes
In many cases several mobile robots (autonomous agents) can be used together to accomplish tasks that would be either more difficult or impossible for a robot acting alone. Many different models have been suggested for the makeup of such collections of robots. In this paper the authors present a taxonomy of the different ways in which such a collection of autonomous robotic agents can be structured. It is shown that certain swarms provide little or no advantage over having a single robot, while other swarms can obtain better-than-linear speedup over a single robot. There exist both trivial and non-trivial problems for which a swarm of robots can succeed where a single robot will fail. Swarms are more than just networks of independent processors: they are potentially reconfigurable networks of communicating agents capable of coordinated sensing and interaction with the environment.
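The linear-versus-superlinear distinction is just the standard speedup ratio S(n) = T(1)/T(n) compared against n. A tiny worked example with invented timings:

```python
# Speedup of an n-robot swarm over a single robot: S(n) = T(1) / T(n).
# All timing numbers below are invented purely for illustration.

def speedup(t_single: float, t_swarm: float) -> float:
    return t_single / t_swarm

n = 4
t1 = 120.0  # seconds for one robot to finish the task (assumed)
tn = 25.0   # seconds for the 4-robot swarm on the same task (assumed)

s = speedup(t1, tn)
print(f"S({n}) = {s:.2f}")          # -> S(4) = 4.80
print("better than linear" if s > n else "at most linear")
# Better-than-linear speedup is possible when robots share partial results
# (e.g., map fragments) and thereby prune each other's remaining work.
```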
Computer Vision and Pattern Recognition | 1992
David Wilkes; John K. Tsotsos
The concept of active object recognition is introduced, and a proposal for its solution is described. The camera is mounted on the end of a robot arm on a mobile base. The system exploits the mobility of the camera by using low-level image data to drive the camera to a standard viewpoint with respect to an unknown object. From such a viewpoint, the object recognition task is reduced to a two-dimensional pattern recognition problem. The system uses an efficient tree-based, probabilistic indexing scheme to find the model object that is likely to have generated the observed data, and, for line tracking, a modification of the token-based tracking scheme of J.L. Crowley et al. (1988). The system has been successfully tested on a set of origami objects. Given sufficiently accurate low-level data, recognition time is expected to grow only logarithmically with the number of objects stored.
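A hedged sketch of what tree-based probabilistic indexing can look like: discrete features measured from the standard viewpoint select a path through a fixed-depth tree, and the leaf stores how likely each stored model is to have generated those measurements. The features, objects, and probabilities below are invented for illustration and are not the paper's actual scheme.

```python
# Toy probabilistic index: a two-level tree over invented discrete features.

INDEX = {
    # level 1: number of long edges visible from the standard viewpoint
    3: {
        # level 2: does the silhouette contain a right angle?
        True:  {"crane": 0.70, "boat": 0.20, "box": 0.10},
        False: {"crane": 0.10, "boat": 0.80, "box": 0.10},
    },
    4: {
        True:  {"crane": 0.05, "boat": 0.15, "box": 0.80},
        False: {"crane": 0.30, "boat": 0.50, "box": 0.20},
    },
}

def index_lookup(n_long_edges: int, has_right_angle: bool) -> str:
    """Descend the tree and return the most probable model object.

    The walk touches one node per feature, so lookup cost depends on tree
    depth rather than on the number of stored models -- the intuition behind
    the expected logarithmic growth of recognition time."""
    leaf = INDEX[n_long_edges][has_right_angle]
    return max(leaf, key=leaf.get)

print(index_lookup(3, False))  # -> "boat"
```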
Robotics and Autonomous Systems | 1998
S. B. Nickerson; Piotr Jasiobedzki; David Wilkes; Michael Jenkin; Evangelos E. Milios; John K. Tsotsos; Allan D. Jepson; O. N. Bains
The ARK mobile robot project has designed and implemented a series of mobile robots capable of navigating within industrial environments without relying on artificial landmarks or beacons. The ARK robots employ a novel sensor, Laser Eye, that combines vision and laser ranging to efficiently locate the robot in a map of its environment. Laser Eye allows self-location of the robot in both walled and open areas. Navigation in walled areas is carried out by matching 2D laser range scans, while navigation in open areas is carried out by visually detecting landmarks and measuring their azimuth, elevation and range with respect to the robot. In addition to solving the core tasks of pose estimation and navigation, the ARK robots address the tasks of sensing for safety and operator interaction.
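The open-area case reduces, in its simplest planar form, to recovering position from one landmark's measured range and azimuth given a known heading. A minimal 2D sketch under those assumptions follows; elevation is dropped in this simplification, and the names and numbers are illustrative, not the ARK system's actual computation.

```python
import math

# Minimal 2D landmark-based self-location sketch (invented, not ARK's code).

def locate(landmark_xy, rng, azimuth, heading):
    """Robot position from a single landmark sighting.

    azimuth: bearing to the landmark relative to the robot's heading (rad).
    heading: robot heading in map coordinates (rad), assumed known.
    """
    bearing = heading + azimuth  # landmark bearing in map coordinates
    lx, ly = landmark_xy
    return (lx - rng * math.cos(bearing), ly - rng * math.sin(bearing))

# Landmark mapped at (10, 5); the robot faces along +x and sees it 30
# degrees to its left at a range of 4 m.
x, y = locate((10.0, 5.0), rng=4.0, azimuth=math.radians(30), heading=0.0)
print(f"robot at ({x:.2f}, {y:.2f})")  # -> robot at (6.54, 3.00)
```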
Robotics and Autonomous Systems | 1997
Gregory Dudek; Michael Jenkin; Evangelos E. Milios; David Wilkes
This paper deals with the validation of topological maps of an environment by an active agent (such as a mobile robot), and the localization of an agent in a given map. The agent is assumed to have neither compass nor other instruments for measuring orientation or distance, and therefore no associated metrics. The topological maps considered are similar to conventional graphs. The robot is assumed to have enough sensory capability to traverse graph edges autonomously, recognize when it has reached a vertex, and enumerate edges incident upon the current vertex, locating them relative to the edge via which it entered the current vertex. In addition, the robot has access to a set of visually detectable, portable, distinct markers. We present algorithms, along with worst-case complexity bounds and experimental results for representative classes of graphs, for two fundamental problems. The first is the validation problem: given an input map and the robot's current position and orientation with respect to the map, determine whether the map is correct. The second is the self-location problem: given only a map of the environment, determine the position of the robot and its “orientation” (i.e., the correspondence between edges of the map and edges in the world at the robot's position). Finally, we consider the power of some other non-metric aids in exploration.
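The validation primitive can be illustrated in a few lines: drop the marker, execute a closed tour that the map predicts returns to the start, and check that the marker is there on arrival. One passed check does not validate a whole map (the paper gives the full algorithm and worst-case bounds); the world, the hypothesized tour, and the Robot API below are all invented for this toy.

```python
# Toy sketch of one map-validation check in a metric-free graph world.

WORLD = {"a": ["b", "c"], "b": ["c", "a"], "c": ["a", "b"]}  # ground truth
TOUR = [0, 0, 0]  # the map's claim: three hops via edge 0 return home

class Robot:
    def __init__(self, world, start):
        self._world, self._here, self._marker = world, start, None
    def traverse(self, i):
        self._here = self._world[self._here][i]
    def drop_marker(self):
        self._marker = self._here
    def sees_marker(self):
        return self._marker == self._here

robot = Robot(WORLD, "a")
robot.drop_marker()              # tag the start vertex
for i in TOUR:                   # execute the map-predicted closed tour
    robot.traverse(i)
print("tour consistent with the map" if robot.sees_marker()
      else "map refuted: the tour did not return to the marker")
```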
Intelligent Robots and Systems | 1995
Gregory Dudek; Michael Jenkin; Evangelos E. Milios; David Wilkes
This paper deals with coordinating behaviour in a multi-autonomous robot system. When two or more autonomous robots must interact in order to accomplish some common goal, communication between the robots is essential. Different inter-robot communications strategies give rise to different overall system performance and reliability. After a brief consideration of some theoretical approaches to multiple robot collections, we present concrete implementations of different strategies for convoy-like behaviour. The convoy system is based around two RWI B12 mobile robots and uses only passive visual sensing for inter-robot communication. The issues related to different communication strategies are considered.
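As a rough sketch of what a vision-driven convoy controller must do each cycle: turn toward the leader's estimated bearing and regulate speed against a target following distance. The gains, target distance, and measurements below are assumptions for illustration; the paper describes the actual RWI B12 convoy and its passive-vision inter-robot sensing.

```python
# One cycle of a hypothetical follow-the-leader proportional controller.

TARGET_DIST = 1.5            # desired following distance in metres (assumed)
K_STEER, K_SPEED = 1.2, 0.8  # proportional gains (assumed)

def follow_step(bearing_rad: float, distance_m: float):
    """One control step from a single visual fix on the leader."""
    turn_rate = K_STEER * bearing_rad             # rad/s, steer toward leader
    speed = K_SPEED * (distance_m - TARGET_DIST)  # m/s
    return turn_rate, max(speed, 0.0)             # never drive backwards

# Leader sighted 0.2 rad to the right at 2.3 m:
turn, speed = follow_step(-0.2, 2.3)
print(f"turn {turn:+.2f} rad/s, drive {speed:.2f} m/s")
```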
Visual Communications and Image Processing | 1990
Gregory Dudek; Michael Jenkin; Evangelos E. Milios; David Wilkes
A fundamental problem in robotics is that of exploring an unknown environment. Most current approaches to exploration make use of a global distance metric that is used to relate past sensory experiences to local measurements. Rather than rely on such an assumption we consider the more general problem of exploration without a distance metric, as is typical of exploring using only visual information: we propose robot exploration as graph building. In earlier papers we have shown that it is not possible for a robot to successfully explore a metricless environment without aid, but that by augmenting the robot with a single marker (which can be put down or picked up at will) it is possible for a robot to map its environment [1]. In this paper we present the extension of our algorithm to the case of k markers, and comment on the resulting decrease in time for exploration. By defining a minimal model for the world and the sensory ability of the robot, we separate spatial reasoning from visual perception. In this paper we deal only with the spatial reasoning component of the exploration problem, and assume that visual perception can identify the marker and the edges incident on the current location.
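A crude way to see why k markers reduce exploration time: each "is this endpoint a new vertex?" question needs a drop-and-verify tour, and k markers let up to k such questions share a single tour. The cost model below is a back-of-the-envelope invention for illustration, not the paper's analysis.

```python
import math

# Invented cost model: verification tours needed to resolve a batch of
# loop-closing questions when k endpoints can be tagged per tour.

def verify_tours(questions: int, k: int) -> int:
    return math.ceil(questions / k)

for k in (1, 2, 4):
    print(f"k={k}: {verify_tours(12, k)} tours for 12 loop-closing questions")
# k=1: 12 tours; k=2: 6 tours; k=4: 3 tours
```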
IEEE Transactions on Pattern Analysis and Machine Intelligence | 1999
Sven J. Dickinson; David Wilkes; John K. Tsotsos
We quantify the observation by Kender and Freudenstein (1987) that degenerate views occupy a significant fraction of the viewing sphere surrounding an object. For a perspective camera geometry, we introduce a computational model that can be used to estimate the probability that a view degeneracy will occur in a random view of a polyhedral object. For a typical recognition system parameterization, view degeneracies occur with a probability of about 20 percent and, depending on the parameterization, as high as 50 percent. We discuss the impact of view degeneracy on the problem of object recognition and, for a particular recognition framework, relate the cost of object disambiguation to the probability of view degeneracy. To reduce this cost, we incorporate our model of view degeneracy in an active focal length control paradigm that balances the probability of view degeneracy with the camera field of view. To validate both our view degeneracy model and our active focal length control model, a set of experiments is reported using a real recognition system operating on real images.
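The focal-length control idea can be sketched as a one-dimensional tradeoff: longer focal lengths are assumed to reduce the chance of accidental feature alignments, but they also narrow the field of view. Both risk curves below are invented stand-ins (the paper derives the real degeneracy model); the sketch only shows the balancing act.

```python
# Balancing two invented risk curves over focal length (mm).

def p_degeneracy(f_mm: float) -> float:
    return min(1.0, 8.0 / f_mm)    # assumed: falls as focal length grows

def p_object_outside_fov(f_mm: float) -> float:
    return min(1.0, f_mm / 200.0)  # assumed: rises as the view narrows

def total_risk(f_mm: float) -> float:
    return p_degeneracy(f_mm) + p_object_outside_fov(f_mm)

best = min(range(10, 151), key=lambda f: total_risk(float(f)))
print(f"focal length ~{best} mm, "
      f"P(degenerate)={p_degeneracy(best):.2f}, "
      f"P(outside FOV)={p_object_outside_fov(best):.2f}")
# -> focal length ~40 mm under these made-up curves
```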
International Conference on Computer Vision | 1995
David Wilkes; Sven J. Dickinson; John K. Tsotsos
We quantify the observation by Kender and Freudenstein (1987) that degenerate views occupy a significant fraction of the viewing sphere surrounding an object. This demonstrates that systems for recognition must explicitly account for the possibility of view degeneracy. We show that view degeneracy cannot be detected from a single camera viewpoint. As a result, systems designed to recognize objects from a single arbitrary viewpoint must be able to function in spite of possible undetected degeneracies, or else operate with imaging parameters that cause acceptably low probabilities of degeneracy. To address this need, we give a prescription for active control of focal length that allows a principled tradeoff between the camera field of view and probability of view degeneracy.