Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Michael Jenkin is active.

Publication


Featured research published by Michael Jenkin.


Autonomous Robots | 1996

A Taxonomy for Multi-Agent Robotics

Gregory Dudek; Michael Jenkin; Evangelos E. Milios; David Wilkes

A key difficulty in the design of multi-agent robotic systems is the size and complexity of the space of possible designs. In order to make principled design decisions, an understanding of the many possible system configurations is essential. To this end, we present a taxonomy that classifies multi-agent systems according to communication, computational and other capabilities. We survey existing efforts involving multi-agent systems according to their positions in the taxonomy. We also present additional results concerning multi-agent systems, with the dual purposes of illustrating the usefulness of the taxonomy in simplifying discourse about robot collective properties, and demonstrating that a collective can be more powerful than a single unit of the collective.
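
As a rough illustration of how such a taxonomy can be encoded, the sketch below classifies a design along two simplified axes named in the abstract, communication and composition; the discrete values are hypothetical stand-ins, not the paper's actual dimensions.

```python
from dataclasses import dataclass
from enum import Enum

# Hypothetical axis values: the paper defines its own discrete levels
# for each dimension of the taxonomy.
class CommRange(Enum):
    NONE = "no direct communication"
    NEAR = "communication with nearby units only"
    GLOBAL = "communication with any unit"

class Composition(Enum):
    HOMOGENEOUS = "identical units"
    HETEROGENEOUS = "differing units"

@dataclass(frozen=True)
class CollectiveDesign:
    size: int               # number of units in the collective
    comm_range: CommRange
    composition: Composition

    def taxonomy_key(self):
        """Position of this design in the (simplified) taxonomy."""
        return (self.size > 1, self.comm_range, self.composition)

# Example: a pair of identical robots that can only talk when close.
pair = CollectiveDesign(size=2,
                        comm_range=CommRange.NEAR,
                        composition=Composition.HOMOGENEOUS)
print(pair.taxonomy_key())
```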


CVGIP: Image Understanding | 1991

Phase-based disparity measurement

David J. Fleet; Allan D. Jepson; Michael Jenkin

The measurement of image disparity is a fundamental precursor to binocular depth estimation. Recently, Jenkin and Jepson (in Computational Processes in Human Vision (Z. Pylyshyn, Ed.), Ablex, New Jersey, 1988) and Sanger (Biol. Cybernet. 59, 1988, 405–418) described promising methods based on the output phase behavior of bandpass Gabor filters. Here we discuss further justification for such techniques based on the stability of bandpass phase behavior as a function of typical distortions that exist between left and right views. In addition, despite this general stability, we show that phase signals are occasionally very sensitive to spatial position and to variations in scale, in which cases incorrect measurements occur. We find that the primary cause for this instability is the existence of singularities in phase signals. With the aid of the local frequency of the filter output (provided by the phase derivative) and the local amplitude information, the regions of phase instability near the singularities are detected so that potentially incorrect measurements can be identified. In addition, we show how the local frequency can be used away from the singularity neighbourhoods to improve the accuracy of the disparity estimates. Some experimental results are reported.
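
The core computation is compact enough to sketch. The fragment below estimates 1-D disparity as the wrapped phase difference of left and right Gabor responses divided by the local frequency (the phase derivative), and flags estimates near phase singularities via low filter amplitude. The filter parameters and amplitude threshold are illustrative, not the authors' values.

```python
import numpy as np

def gabor_response(signal, k0, sigma):
    """Complex response of a 1-D Gabor filter with carrier frequency k0."""
    x = np.arange(-4 * sigma, 4 * sigma + 1)
    kernel = np.exp(-x**2 / (2 * sigma**2)) * np.exp(1j * k0 * x)
    return np.convolve(signal, kernel, mode="same")

def phase_disparity(left, right, k0=0.5, sigma=8.0, amp_floor=0.1):
    rl = gabor_response(left, k0, sigma)
    rr = gabor_response(right, k0, sigma)
    # Phase difference between the two views, wrapped to (-pi, pi].
    dphi = np.angle(rl * np.conj(rr))
    # Local frequency from the phase derivative; near singularities it
    # deviates wildly from k0, which is what makes estimates unstable.
    local_freq = np.gradient(np.unwrap(np.angle(rl)))
    with np.errstate(divide="ignore", invalid="ignore"):
        disparity = dphi / local_freq
    # Flag unreliable estimates where filter amplitude is small,
    # i.e. in the neighbourhood of phase singularities.
    amp = np.minimum(np.abs(rl), np.abs(rr))
    disparity[amp < amp_floor * amp.max()] = np.nan
    return disparity

# Synthetic test: the right view is the left view shifted by 3 pixels.
i = np.arange(512)
left = np.sin(0.45 * i) + 0.3 * np.sin(0.6 * i)
right = np.roll(left, 3)
print(np.nanmedian(phase_disparity(left, right)))  # close to 3
```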


International Conference on Robotics and Automation | 1991

Robotic exploration as graph construction

Gregory Dudek; Michael Jenkin; Evangelos E. Milios; David Wilkes

Addressed is the problem of robotic exploration of a graphlike world, where no distance or orientation metric is assumed of the world. The robot is assumed to be able to autonomously traverse graph edges, recognize when it has reached a vertex, and enumerate edges incident upon the current vertex relative to the edge via which it entered the current vertex. The robot cannot measure distances, and it does not have a compass. It is demonstrated that this exploration problem is unsolvable in general without markers, and, to solve it, the robot is equipped with one or more distinct markers that can be put down or picked up at will and that can be recognized by the robot if they are at the same vertex as the robot. An exploration algorithm is developed and proven correct. Its performance is shown on several example worlds, and heuristics for improving its performance are discussed.
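
The role of the marker can be sketched in a few lines. Below, a robot that cannot tell vertices apart by appearance tests whether an edge loops back to the current vertex by dropping its marker, traversing the edge, and checking for the marker on arrival. This is only the core marker test under simplifying assumptions (absolute edge indices), not the paper's full exploration algorithm with its relative edge ordering.

```python
class GraphWorld:
    """Undirected graph-like world; vertices carry no distinguishing labels
    the robot can sense, and edges carry no distance or orientation."""
    def __init__(self, edges):
        self.adj = {}
        for u, v in edges:
            self.adj.setdefault(u, []).append(v)
            self.adj.setdefault(v, []).append(u)
        self.marker_at = None  # vertex currently holding the marker

def edge_leads_back(world, here, edge_index):
    """Does edge `edge_index` out of `here` return to `here`?"""
    world.marker_at = here                    # drop the marker
    arrived = world.adj[here][edge_index]     # traverse the edge
    found = (world.marker_at == arrived)      # marker present? it's a loop
    world.marker_at = None                    # pick the marker back up
    return found

world = GraphWorld([("a", "b"), ("a", "a")])  # one self-loop at "a"
print(edge_leads_back(world, "a", 0))  # False: this edge leads to "b"
print(edge_leads_back(world, "a", 1))  # True: the self-loop
```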


Experimental Brain Research | 2000

Visual and non-visual cues in the perception of linear self-motion

Laurence R. Harris; Michael Jenkin; Daniel C. Zikovitz

Surprisingly little is known of the perceptual consequences of visual or vestibular stimulation in updating our perceived position in space as we move around. We assessed the roles of visual and vestibular cues in determining the perceived distance of passive, linear self-motion. Subjects were given cues to constant-acceleration motion: either optic flow presented in a virtual reality display, physical motion in the dark or combinations of visual and physical motions. Subjects indicated when they perceived they had traversed a distance that had been previously given to them either visually or physically. The perceived distance of motion evoked by optic flow was accurate relative to a previously presented visual target but was perceptually equivalent to about half the physical motion. The perceived distance of physical motion in the dark was accurate relative to a previously presented physical motion but was perceptually equivalent to a much longer visually presented distance. The perceived distance of self-motion when both visual and physical cues were present was more closely perceptually equivalent to the physical motion experienced rather than the simultaneous visual motion, even when the target was presented visually. We discuss this dominance of the physical cues in determining the perceived distance of self-motion in terms of capture by non-visual cues. These findings are related to emerging studies that show the importance of vestibular input to neural mechanisms that process self-motion.
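
A toy model makes the experimental logic concrete. The sketch below computes traversed distance under the constant-acceleration profiles the study used and combines visual and physical distance estimates with a weight favoring the physical cue, qualitatively matching the reported capture by non-visual cues; the weight and the visual-estimate value are hypothetical placeholders, not the paper's fitted parameters.

```python
def traversed_distance(accel, t):
    """Distance from rest under constant acceleration: s = a * t**2 / 2."""
    return 0.5 * accel * t**2

def perceived_distance(physical=None, visual=None, w_physical=0.8):
    """Toy weighted combination in which the physical (vestibular) estimate
    dominates, qualitatively matching 'capture by non-visual cues'."""
    if physical is None:
        return visual            # optic flow alone
    if visual is None:
        return physical          # physical motion in the dark
    return w_physical * physical + (1 - w_physical) * visual

d = traversed_distance(accel=0.5, t=4.0)             # 4.0 m after 4 s
print(perceived_distance(physical=d, visual=2 * d))  # pulled toward d
```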


IEEE Virtual Reality Conference | 2001

Tolerance of temporal delay in virtual environments

Robert S. Allison; Laurence R. Harris; Michael Jenkin; Urszula Jasiobedzka; James E. Zacher

To enhance presence, facilitate sensorimotor performance, and avoid disorientation or nausea, virtual-reality applications require the perception of a stable environment. End-to-end tracking latency (display lag) degrades this illusion of stability and has been identified as a major fault of existing virtual-environment systems. Oscillopsia refers to the perception that the visual world appears to swim about or oscillate in space and is a manifestation of this loss of perceptual stability of the environment. The effects of end-to-end latency and head velocity on perceptual stability in a virtual environment were investigated psychophysically. Subjects became significantly more likely to report oscillopsia during head movements when end-to-end latency or head velocity was increased. It is concluded that perceptual instability of the world arises with increased head motion and increased display lag. Oscillopsia is expected to be more apparent in tasks requiring real locomotion or rapid head movement.
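
The dependence on both factors follows from simple geometry: with end-to-end latency dt, the displayed scene lags the head by roughly omega * dt degrees during a rotation at angular velocity omega. The sketch below works this out for a few head velocities; the numbers are illustrative, not the study's measured thresholds.

```python
def scene_lag_deg(head_velocity_deg_s, latency_s):
    """Angular error between head pose and displayed pose during rotation."""
    return head_velocity_deg_s * latency_s

# The same 100 ms of display lag is barely visible for slow head movements
# but produces a large world-fixed error during fast ones.
for omega in (10, 50, 200):                  # head velocity, deg/s
    print(omega, "deg/s ->", scene_lag_deg(omega, 0.100), "deg of lag")
```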


Intelligent Robots and Systems | 1993

A taxonomy for swarm robots

Gregory Dudek; Michael Jenkin; Evangelos E. Milios; David Wilkes

In many cases several mobile robots (autonomous agents) can be used together to accomplish tasks that would be either more difficult or impossible for a robot acting alone. Many different models have been suggested for the makeup of such collections of robots. In this paper the authors present a taxonomy of the different ways in which such a collection of autonomous robotic agents can be structured. It is shown that certain swarms provide little or no advantage over having a single robot, while other swarms can obtain better than linear speedup over a single robot. There exist both trivial and non-trivial problems for which a swarm of robots can succeed where a single robot will fail. Swarms are more than just networks of independent processors - they are potentially reconfigurable networks of communicating agents capable of coordinated sensing and interaction with the environment.
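
A toy task model illustrates both ends of this range. In the sketch below, a load that needs more force than one unit can supply is impossible for a single robot (infinite completion time) but straightforward for a swarm; the task parameters are hypothetical, not drawn from the paper.

```python
def completion_time(n_robots, force_needed=2.0, force_per_robot=1.0, work=12.0):
    """Time for n robots to finish; infinite if they cannot move the load."""
    total_force = n_robots * force_per_robot
    if total_force < force_needed:
        return float("inf")
    return work / total_force

t1 = completion_time(1)   # inf: a single robot fails outright
t4 = completion_time(4)   # 3.0: four robots share the work
print(t1, t4, "speedup over one robot:", t1 / t4)
```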


IEEE Computer | 2007

AQUA: An Amphibious Autonomous Robot

Gregory Dudek; Philippe Giguère; Chris Prahacs; Shane Saunderson; Junaed Sattar; Luz Abril Torres-Méndez; Michael Jenkin; Andrew German; Andrew Hogue; Arlene Ripsman; James E. Zacher; Evangelos E. Milios; Hui Liu; Pifu Zhang; Martin Buehler; Christina Georgiades

AQUA, an amphibious robot that swims via the motion of its legs rather than using thrusters and control surfaces for propulsion, can walk along the shore, swim along the surface in open water, or walk on the bottom of the ocean. The vehicle uses a variety of sensors to estimate its position with respect to local visual features and provide a global frame of reference.


Archive | 2001

Vision and Attention

Michael Jenkin; Laurence R. Harris

The term “visual attention” embraces many aspects of vision. It refers to processes that find, pull out, and may even help to define features in the visual environment. All these processes take the form of interactions between the observer and the environment: attention is drawn by some aspects of the visual scene, but the observer is critical in defining which aspects are selected.


Intelligent Robots and Systems | 2005

A visually guided swimming robot

Gregory Dudek; Michael Jenkin; Chris Prahacs; Andrew Hogue; Junaed Sattar; Philippe Giguère; Andrew German; Hui Liu; Shane Saunderson; Arlene Ripsman; Saul Simhon; Luz Abril Torres; Evangelos E. Milios; Pifu Zhang; Ioannis Rekleitis

We describe recent results obtained with AQUA, a mobile robot capable of swimming, walking and amphibious operation. Designed to rely primarily on visual sensors, the AQUA robot uses vision to navigate underwater using servo-based guidance, and also to obtain high-resolution range scans of its local environment. This paper describes some of the pragmatic and logistic obstacles encountered, and provides an overview of some of the basic capabilities of the vehicle and its associated sensors. Moreover, this paper presents the first ever amphibious transition from walking to swimming.
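
The servo-based guidance the abstract mentions can be illustrated with a minimal proportional controller that turns the vehicle to keep a tracked feature centered in the image; the gain, image size, and sign convention below are assumptions, not the AQUA implementation.

```python
IMAGE_WIDTH_PX = 640    # assumed image width
K_YAW = 0.005           # hypothetical proportional gain (rad/s per pixel)

def yaw_rate_command(feature_x_px):
    """Proportional visual servo: turn to drive the feature's horizontal
    offset from the image centre to zero (positive yaw = turn left)."""
    error_px = feature_x_px - IMAGE_WIDTH_PX / 2
    return -K_YAW * error_px   # negative feedback on the pixel error

# Feature detected 100 px right of centre -> command a right turn.
print(yaw_rate_command(420.0))   # -0.5 rad/s
```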


Intelligent Robots and Systems | 2004

AQUA: an aquatic walking robot

Christina Georgiades; Andrew German; Andrew Hogue; Hui Liu; Chris Prahacs; Arlene Ripsman; Robert Sim; Luz-Abril Torres; Pifu Zhang; Martin Buehler; Gregory Dudek; Michael Jenkin; Evangelos E. Milios

This paper describes an underwater walking robotic system being developed under the name AQUA, the goals of the AQUA project, the overall hardware and software design, the basic hardware and sensor packages that have been developed, and some initial experiments. The robot is based on the RHex hexapod robot and uses a suite of sensing technologies, primarily based on computer vision and INS, to allow it to navigate and map clear shallow-water environments. The sensor-based navigation and mapping algorithms are based on the use of both artificial floating visual and acoustic landmarks as well as on naturally occurring underwater landmarks and trinocular stereo.
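
For any one camera pair in the trinocular rig, depth follows from the standard pinhole relation Z = f * B / d, where f is the focal length in pixels, B the baseline, and d the disparity; the third camera provides a second pair whose estimate can be cross-checked to reject bad matches. The numbers in the sketch below are illustrative.

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Depth (in metres) of a feature matched across one stereo pair."""
    if disparity_px <= 0:
        raise ValueError("feature at infinity or a bad match")
    return focal_px * baseline_m / disparity_px

# Example: f = 800 px, 10 cm baseline, 40 px disparity -> 2 m range.
print(depth_from_disparity(800.0, 0.10, 40.0))
```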

Collaboration


Dive into Michael Jenkin's collaborations.

Top Co-Authors

Bill Kapralos

University of Ontario Institute of Technology
