
Publication


Featured research published by Geoffrey R. Taylor.


International Conference on Robotics and Automation | 2005

Interactive SLAM using Laser and Advanced Sonar

Geoffrey R. Taylor; Lindsay Kleeman

This paper presents a novel approach to mapping for mobile robots that exploits user interaction to semiautonomously create a labelled map of the environment. The robot autonomously follows the user and is provided with a verbal commentary on the current location with phrases such as “Robot, we are in the office”. At the same time, a metric feature map is generated using fusion of laser and advanced sonar measurements in a Kalman filter based SLAM framework, which is later used for localization. When mapping is complete, the robot generates an occupancy grid for use in global task planning. The occupancy grid is created using a novel laser scan registration scheme that relies on storing the path of the robot along with associated local SLAM features during mapping, and later recovering the path by matching the associated local features to the final SLAM map. The occupancy grid is segmented into labelled rooms using an algorithm based on watershed segmentation and integration of the verbal commentary. Experimental results demonstrate our mobile robot creating SLAM and segmented occupancy grid maps of rooms along a 70 metre corridor, and then using these maps to navigate between rooms.
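The laser-and-sonar fusion at the heart of the mapping stage can be illustrated with a standard linear Kalman measurement update. This is a toy sketch, not the paper's EKF-SLAM implementation: the 1-D feature, the sensor readings and the noise covariances below are all invented for the example.

```python
import numpy as np

def kalman_update(x, P, z, R, H):
    """One linear Kalman measurement update: fuse measurement z
    (noise covariance R, observation model H) into estimate (x, P)."""
    S = H @ P @ H.T + R               # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)    # Kalman gain
    x = x + K @ (z - H @ x)           # corrected state
    P = (np.eye(len(x)) - K @ H) @ P  # corrected covariance
    return x, P

# A 1-D feature position refined by fusing a laser and a sonar range.
x = np.array([2.0]); P = np.array([[1.0]])   # vague prior
H = np.array([[1.0]])
x, P = kalman_update(x, P, np.array([2.4]), np.array([[0.01]]), H)  # laser (precise)
x, P = kalman_update(x, P, np.array([2.1]), np.array([[0.25]]), H)  # sonar (coarser)
print(x[0], P[0, 0])
```

Each update shrinks the covariance, so the fused estimate is more certain than either sensor alone; the same recursion underlies fusing both sensors into one SLAM feature map.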


Intelligent Robots and Systems | 2004

Integration of robust visual perception and control for a domestic humanoid robot

Geoffrey R. Taylor; Lindsay Kleeman

This paper describes a complete vision-based framework that enables a humanoid robot to perform simple manipulations in a domestic environment. Our system emphasizes autonomous operation with minimal a priori knowledge in an unstructured environment, with robustness to visual distractions and calibration errors. For each new task, the robot first acquires a dense 3D image of the scene using our novel stereoscopic light stripe scanner that rejects secondary reflections and cross-talk. A data-driven analysis of the range map identifies and models simple objects using geometric primitives. Objects are reliably tracked through clutter and occlusions by exploiting multimodal cues (colour, texture and edges). Finally, manipulations are performed by controlling the end-effector using a hybrid position-based visual servoing scheme that fuses visual and kinematic measurements and compensates for calibration errors. Two domestic tasks are implemented to evaluate the performance of the framework: identifying and grasping a yellow box without any prior knowledge of the object, and pouring rice from an interactively selected cup into a bowl.
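The multimodal-cue idea — no single distracted cue should dominate the tracker — can be sketched by ranking hypotheses with a weighted geometric mean of per-cue likelihoods. This is a simplified stand-in, not the paper's tracker; the candidate objects, weights and likelihood values below are invented for illustration.

```python
def fuse_cues(candidates, weights):
    """Rank candidate tracks by a weighted geometric mean of per-cue
    likelihoods; a hypothesis strong in one cue but weak in the
    others (e.g. a colour distractor) scores poorly overall."""
    def score(likelihoods):
        s = 1.0
        for cue, w in weights.items():
            s *= likelihoods[cue] ** w
        return s
    return max(candidates, key=lambda c: score(c["likelihoods"]))

weights = {"colour": 0.5, "texture": 0.3, "edges": 0.2}
candidates = [
    {"id": "cup",        "likelihoods": {"colour": 0.90, "texture": 0.8, "edges": 0.7}},
    {"id": "distractor", "likelihoods": {"colour": 0.95, "texture": 0.2, "edges": 0.3}},
]
best = fuse_cues(candidates, weights)
print(best["id"])  # the consistent hypothesis wins despite the brighter distractor
```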


Intelligent Robots and Systems | 2002

Robust colour and range sensing for robotic applications using a stereoscopic light stripe scanner

Geoffrey R. Taylor; Lindsay Kleeman; Åke Wernersson

This paper presents an integrated, low-level approach to removing sensor noise, cross-talk and spurious specular reflections, and to solving the association problem in a light stripe scanner. Most single-camera scanners rely on the laser stripe being the brightest feature in the image. Our system uses two cameras to measure the stripe and combines knowledge of the light plane orientation to produce useful validation properties. The key result is a condition relating image-plane measurements and camera intrinsic parameters, which allows validation/association to be performed independently of 3D reconstruction. The same equations are used to improve ranging accuracy compared to single-camera systems. We also show how the system may be self-calibrated using measurements of an arbitrary non-planar target. Since validation allows operation in ambient light, registered colour and range are captured by the same sensor. An experimental scanner demonstrates the effectiveness of the proposed techniques.
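The two-camera validation idea can be illustrated with a simplified stand-in. Unlike the paper's image-plane condition, which avoids 3D reconstruction, this sketch back-projects the camera-1 pixel onto the known light plane, reprojects into camera 2, and accepts the pair only if the prediction lands near the camera-2 measurement. The intrinsics, stereo geometry and plane equation below are invented example values.

```python
import numpy as np

def validate_stripe(u1, u2, n, d, K1, K2, R, t, tol=1.5):
    """Accept a candidate stripe pair (u1 in cam 1, u2 in cam 2) only
    if the point implied by u1 and the light plane n.X = d reprojects
    within tol pixels of u2; reflections and cross-talk fail this."""
    ray = np.linalg.inv(K1) @ np.array([u1[0], u1[1], 1.0])
    lam = d / (n @ ray)               # ray/light-plane intersection
    X = lam * ray                     # 3-D point in camera-1 frame
    x2 = K2 @ (R @ X + t)             # transform and project into camera 2
    u2_pred = x2[:2] / x2[2]
    return np.linalg.norm(u2_pred - np.array(u2)) < tol

# Example geometry: identical cameras, 0.1 m baseline, plane x = 0.5.
K = np.array([[500., 0., 320.], [0., 500., 240.], [0., 0., 1.]])
R, t = np.eye(3), np.array([-0.1, 0., 0.])
n, d = np.array([1., 0., 0.]), 0.5
ok  = validate_stripe((445., 240.), (420., 240.), n, d, K, K, R, t)  # true stripe
bad = validate_stripe((445., 240.), (430., 240.), n, d, K, K, R, t)  # spurious blob
```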


Intelligent Robots and Systems | 2004

Hybrid position-based visual servoing with online calibration for a humanoid robot

Geoffrey R. Taylor; Lindsay Kleeman

This paper addresses the problem of visual servo control for a humanoid robot in an unstructured domestic environment. The important issues in this application are autonomous planning, robustness to camera and kinematic model errors, large pose errors, occlusions and reliable visual tracking. Conventional image-based or position-based visual servoing schemes do not address these issues, which motivated the proposed hybrid position-based scheme exploiting fusion of visual and kinematic measurements. Kinematic measurements provide robustness to visual distractions, and allow servoing to continue when the end-effector leaves the field of view. Visual measurements provide the complementary benefits of accurate pose tracking and online estimation of the hand-eye transformation for kinematic calibration. Furthermore, it is shown that calibration errors in the focal length and baseline can be approximated as an unknown scale of the end-effector, which can be estimated in the tracking filter to overcome camera calibration errors. The improved accuracy and robustness compared to conventional position-based servoing is demonstrated experimentally.
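The observation that focal-length and baseline errors act as an unknown scale of the end-effector suggests a simple estimator. The sketch below is a toy batch least-squares version, not the paper's online tracking-filter formulation, and the sample coordinates are invented: vision reads about 10% large, as a mis-calibrated baseline would produce.

```python
def estimate_scale(visual, kinematic):
    """Least-squares scale s minimizing sum (v_i - s*k_i)^2 over
    paired visual and kinematic end-effector coordinates."""
    num = sum(v * k for v, k in zip(visual, kinematic))
    den = sum(k * k for k in kinematic)
    return num / den

# Stacked end-effector coordinates from vision vs. kinematics.
s = estimate_scale([1.1, 2.2, 3.3], [1.0, 2.0, 3.0])
print(round(s, 3))  # 1.1
```

Dividing visual measurements by the recovered scale brings the two sensing streams into agreement without recalibrating the cameras.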


International Journal of Humanoid Robotics | 2004

Multi-Sensory Synergies in Humanoid Robotics

R. Andrew Russell; Geoffrey R. Taylor; Lindsay Kleeman; Anies Hannawati Purnamadjaja

Sensing is a key element for any intelligent robotic system. This paper describes the current progress of a project in the Intelligent Robotics Research Center at Monash University that has the aim of developing a synergistic set of sensory systems for a humanoid robot. Currently, sensing modes for colour vision, stereo vision, active range, smell and airflow are being developed in a size and form that is compatible with the humanoid appearance. Essential considerations are sensor calibration and the processing of sensor data to give reliable information about properties of the robot's environment. In order to demonstrate the synergistic use of all of the available sensory modes, a high level supervisory control scheme is being developed for the robot. All time-stamped sensor data together with derived information about the robot's environment are organized in a blackboard system. Control action sequences are then derived from the blackboard data based on a task description. The paper presents details of each of the robot's sensory systems, sensor calibration, and supervisory control. Results are also presented of a demonstration project that involves identifying and selecting mugs containing household chemicals. Proposals for future development of the humanoid robot are also presented.
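The blackboard organization of time-stamped sensor data can be sketched as a small shared store: each sensing mode posts entries under a topic, and the supervisor reads the freshest entry per topic. This is a minimal illustration, not the paper's system; the topics and values are invented.

```python
import time

class Blackboard:
    """Minimal blackboard: sensors post time-stamped entries under a
    topic name; the supervisory controller reads the latest per topic."""
    def __init__(self):
        self._entries = {}

    def post(self, topic, value, stamp=None):
        # Default to wall-clock time if the sensor gives no stamp.
        self._entries.setdefault(topic, []).append(
            (stamp if stamp is not None else time.time(), value))

    def latest(self, topic):
        # Newest (stamp, value) pair, or None if nothing was posted.
        return max(self._entries.get(topic, []), default=None)

bb = Blackboard()
bb.post("smell", "acetone", stamp=1.0)
bb.post("vision", {"object": "mug", "colour": "red"}, stamp=1.2)
bb.post("smell", "ammonia", stamp=2.0)
print(bb.latest("smell")[1])
```

A task description can then be reduced to queries against such a store, which is what decouples the individual sensing modes from the control-sequence planner.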


The International Journal of Robotics Research | 2004

Stereoscopic Light Stripe Scanning: Interference Rejection, Error Minimization and Calibration

Geoffrey R. Taylor; Lindsay Kleeman

This paper addresses the problem of rejecting interference due to secondary specular reflections, cross-talk and other mechanisms in an active light stripe scanner for robotic applications. Conventional scanning methods control the environment to ensure the brightness of the stripe exceeds that of all other features. However, this assumption is likely to be violated for a robot operating in an uncontrolled environment. Robust scanning methods already exist, but suffer from problems including assumed scene structure, acquisition delay, lack of error recovery, and incorrect modeling of measurement noise. We propose a robust technique that overcomes these problems, using two cameras and knowledge of the light plane orientation to disambiguate the primary reflection from spurious measurements. Unlike other robust techniques, our validation and reconstruction algorithms are optimal with respect to sensor noise. Furthermore, we propose a procedure to calibrate the system using measurements of an arbitrary non-planar target, providing robust validation independently of ranging accuracy. Finally, our robust techniques allow the sensor to operate in ambient indoor light, allowing color and range to be implicitly registered. An experimental scanner demonstrates the effectiveness of the proposed techniques. Source code and sample data are provided in the multimedia extensions.


Archive | 2003

Fusion of Multimodal Visual Cues for Model-Based Object Tracking

Geoffrey R. Taylor; Lindsay Kleeman


Archive | 2002

Grasping Unknown Objects with a Humanoid Robot

Geoffrey R. Taylor; Lindsay Kleeman


SIP | 2003

Robust Range Data Segmentation using Geometric Primitives for Robotic Applications

Geoffrey R. Taylor; Lindsay Kleeman


Archive | 2001

Flexible Self-Calibrated Visual Servoing for a Humanoid Robot

Geoffrey R. Taylor; Lindsay Kleeman

Collaboration


Dive into Geoffrey R. Taylor's collaborations.

Top Co-Authors
Åke Wernersson

Luleå University of Technology
