Publication


Featured research published by Andrew P. Gee.


IEEE Transactions on Robotics | 2008

Discovering Higher Level Structure in Visual SLAM

Andrew P. Gee; Denis Chekhlov; Andrew D. Calway; Walterio W. Mayol-Cuevas

In this paper, we describe a novel method for discovering and incorporating higher level map structure in a real-time visual simultaneous localization and mapping (SLAM) system. Previous approaches use sparse maps populated by isolated features such as 3-D points or edgelets. Although this facilitates efficient localization, it yields very limited scene representation and ignores the inherent redundancy among features resulting from physical structure in the scene. In this paper, higher level structure, in the form of lines and surfaces, is discovered concurrently with SLAM operation and then incorporated into the map in a rigorous manner, attempting to maintain important cross-covariance information and allow consistent update of the feature parameters. This is achieved by using a bottom-up process, in which subsets of low-level features are "folded in" to a parameterization of an associated higher level feature, thus collapsing the state space as well as building structure into the map. We demonstrate and analyze the effects of the approach for the cases of line and plane discovery, both in simulation and within a real-time system operating with a handheld camera in an office environment.
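
The state-collapse idea can be illustrated with a small numpy sketch: given a plane already estimated in the map, 3-D points known to lie on it are re-parameterised by their 2-D in-plane coordinates, and the covariance is propagated through the (linear) re-parameterisation. All names and the simplified parameterisation below are hypothetical; this is a sketch of the general technique, not the paper's implementation.

```python
import numpy as np

def collapse_points_onto_plane(x, P, n, d, point_idx):
    """Hypothetical sketch: re-parameterise 3-D map points assumed to lie on the
    plane n.p = d by 2-D in-plane coordinates, shrinking the filter state.

    x         : full state vector (camera pose + map features), numpy array
    P         : full state covariance
    n, d      : unit plane normal (numpy array) and offset, assumed already estimated
    point_idx : start indices of the 3-D points to fold in
    """
    # Build an orthonormal basis (u, v) spanning the plane.
    u = np.cross(n, [0.0, 0.0, 1.0])
    if np.linalg.norm(u) < 1e-6:            # normal nearly parallel to the z-axis
        u = np.cross(n, [0.0, 1.0, 0.0])
    u /= np.linalg.norm(u)
    v = np.cross(n, u)

    keep = np.ones(len(x), dtype=bool)
    rows, new_entries = [], []
    for i in point_idx:
        p = x[i:i + 3]
        new_entries.extend([u @ p, v @ p])  # 2-D in-plane coordinates
        for axis in (u, v):
            row = np.zeros(len(x))
            row[i:i + 3] = axis
            rows.append(row)
        keep[i:i + 3] = False

    # All other state entries (camera pose, remaining features) are kept unchanged.
    for j in np.flatnonzero(keep):
        row = np.zeros(len(x))
        row[j] = 1.0
        rows.append(row)

    J = np.vstack(rows)                     # linear collapse map
    x_new = np.concatenate([new_entries, x[keep]])
    P_new = J @ P @ J.T                     # cross-covariances propagated through J
    return x_new, P_new
```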


Intelligent Robots and Systems | 2012

Egocentric Real-time Workspace Monitoring using an RGB-D camera

Dima Damen; Andrew P. Gee; Walterio W. Mayol-Cuevas; Andrew D. Calway

We describe an integrated system for personal workspace monitoring based around an RGB-D sensor. The approach is egocentric, facilitating full flexibility, and operates in real-time, providing object detection and recognition, and 3D trajectory estimation whilst the user undertakes tasks in the workspace. A prototype on-body system developed in the context of work-flow analysis for industrial manipulation and assembly tasks is described. The system is evaluated on two tasks with multiple users, and results indicate that the method is effective, achieving good accuracy.
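
The 3D trajectory estimation ultimately rests on standard pinhole back-projection of detected object centroids using the RGB-D depth map. A minimal sketch under that assumption, with hypothetical names (this is not the paper's detector or tracker):

```python
import numpy as np

def backproject(u, v, depth, fx, fy, cx, cy):
    """Back-project a pixel (u, v) with metric depth into a 3-D camera-frame point."""
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.array([x, y, z])

def object_trajectory(detections, intrinsics):
    """detections: list of (u, v, depth) object centroids, one per RGB-D frame."""
    fx, fy, cx, cy = intrinsics
    return np.array([backproject(u, v, d, fx, fy, cx, cy) for u, v, d in detections])
```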


International Symposium on Mixed and Augmented Reality | 2007

Ninja on a Plane: Automatic Discovery of Physical Planes for Augmented Reality Using Visual SLAM

Denis Chekhlov; Andrew P. Gee; Andrew D. Calway; Walterio W. Mayol-Cuevas

Most work in visual augmented reality (AR) employs predefined markers or models that simplify the algorithms needed for sensor positioning and augmentation, but at the cost of imposing restrictions on the areas of operation and on interactivity. This paper presents a simple game in which an AR agent has to navigate using real planar surfaces on objects that are dynamically added to an unprepared environment. An extended Kalman filter (EKF) simultaneous localisation and mapping (SLAM) framework with automatic plane discovery is used to enable the player to interactively build a structured map of the game environment using a single, agile camera. By using SLAM, we are able to achieve real-time interactivity and maintain rigorous estimates of the system's uncertainty, which enables the effects of high-quality estimates to be propagated to other features (points and planes) even if they are outside the camera's current field of view.
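
Automatic plane discovery over mapped 3-D points can be sketched as a toy RANSAC routine; the thresholds, interface, and scoring below are assumed for illustration only and are not taken from the paper:

```python
import numpy as np

def ransac_plane(points, iters=200, tol=0.02, min_inliers=8, rng=None):
    """Toy RANSAC plane discovery over an (N, 3) array of mapped 3-D points.

    Returns (normal, offset, inlier_mask) or None if no plane is well supported.
    Thresholds are hypothetical placeholders.
    """
    rng = np.random.default_rng() if rng is None else rng
    best = None
    for _ in range(iters):
        sample = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(n)
        if norm < 1e-9:
            continue                         # degenerate (collinear) sample
        n /= norm
        d = n @ sample[0]
        inliers = np.abs(points @ n - d) < tol
        if inliers.sum() >= min_inliers and (best is None or inliers.sum() > best[2].sum()):
            best = (n, d, inliers)
    return best
```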


International Conference on Human-Computer Interaction | 2009

In-Situ 3D Indoor Modeler with a Camera and Self-contained Sensors

Tomoya Ishikawa; Kalaivani Thangamani; Masakatsu Kourogi; Andrew P. Gee; Walterio W. Mayol-Cuevas; Keechul Jung; Takeshi Kurata

We propose a 3D modeler for effectively supporting in-situ indoor modeling. The modeler allows a user to easily create models from a single photo through interaction and visualization techniques that take advantage of the features of indoor spaces. In order to integrate the models, the modeler provides automatic integration functions using Visual SLAM and pedestrian dead-reckoning (PDR), as well as interactive tools to modify the result. Moreover, to prevent a shortage of texture images for the models, the modeler automatically searches the 3D models created by the user for un-textured regions and intuitively visualizes the shooting positions from which to photograph them. These functions make it possible for the user to easily create, on the fly, photorealistic indoor 3D models with sufficient textures.


British Machine Vision Conference | 2007

Discovering Planes and Collapsing the State Space in Visual SLAM

Andrew P. Gee; Denis Chekhlov; Walterio W. Mayol-Cuevas; Andrew D. Calway

Recent advances in real-time visual SLAM have been based primarily on mapping isolated 3-D points. This presents difficulties when seeking to extend operation to wide areas, as the system state becomes large, requiring increasing computational effort. In this paper we present a novel approach to this problem in which planar structural components are embedded within the state to represent mapped points lying on a common plane. This collapses the state size, reducing computation and improving scalability, as well as giving a higher level scene description. Critically, the plane parameters are augmented into the SLAM state in a proper fashion, maintaining inherent uncertainties via a full covariance representation. Results for simulated data and for real-time operation demonstrate that the approach is effective.
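
Augmenting a derived feature such as a plane into an EKF state while preserving cross-covariances follows the standard Jacobian-based augmentation. A minimal numpy sketch with a hypothetical interface (g is the derived parameter vector computed from the current state, G its Jacobian with respect to that state):

```python
import numpy as np

def augment_state(x, P, g, G):
    """Append a derived feature g(x) (e.g. plane parameters estimated from mapped
    points) to the EKF state, propagating cross-covariances via the Jacobian G.
    Hypothetical sketch, not the paper's exact parameterisation.
    """
    x_aug = np.concatenate([x, g])
    top = np.hstack([P, P @ G.T])
    bottom = np.hstack([G @ P, G @ P @ G.T])
    P_aug = np.vstack([top, bottom])
    return x_aug, P_aug
```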


International Symposium on Visual Computing | 2006

Real-time model-based SLAM using line segments

Andrew P. Gee; Walterio W. Mayol-Cuevas

Existing monocular vision-based SLAM systems favour interest point features as landmarks, but these are easily occluded and can only be reliably matched over a narrow range of viewpoints. Line segments offer an interesting alternative, as line matching is more stable with respect to viewpoint changes and lines are robust to partial occlusion. In this paper we present a model-based SLAM system that uses 3D line segments as landmarks. Unscented Kalman filters are used to initialise new line segments and generate a 3D wireframe model of the scene that can be tracked with a robust model-based tracking algorithm. Uncertainties in the camera position are fed into the initialisation of new model edges. Results show the system operating in real-time with resilience to partial occlusion. The maps of line segments generated during the SLAM process are physically meaningful and their structure is measured against the true 3D structure of the scene.
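
Predicting where a mapped 3-D line segment should appear in the current image, so that it can be matched against detected edges, reduces to projecting its endpoints through a pinhole model. A minimal sketch under that assumption (no lens distortion; all names hypothetical):

```python
import numpy as np

def project_segment(p0, p1, R, t, K):
    """Project the endpoints of a 3-D line segment into the image.

    p0, p1 : segment endpoints in world coordinates
    R, t   : world-to-camera rotation and translation
    K      : 3x3 camera intrinsic matrix
    """
    pts = []
    for p in (p0, p1):
        pc = R @ p + t                 # world -> camera frame
        if pc[2] <= 0:
            return None                # endpoint behind the camera
        uvw = K @ pc
        pts.append(uvw[:2] / uvw[2])   # perspective division
    return np.array(pts)               # 2x2 array of image endpoints
```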


PLOS ONE | 2015

Cognitive Learning, Monitoring and Assistance of Industrial Workflows Using Egocentric Sensor Networks

Gabriele Bleser; Dima Damen; Ardhendu Behera; Gustaf Hendeby; Katharina Mura; Markus Miezal; Andrew P. Gee; Nils Petersen; Gustavo Maçães; Hugo Domingues; Dominic Gorecky; Luis Almeida; Walterio W. Mayol-Cuevas; Andrew D. Calway; Anthony G. Cohn; David C. Hogg; Didier Stricker

Today, the workflows involved in industrial assembly and production activities are becoming increasingly complex. Performing these workflows efficiently and safely is demanding on the workers, in particular when it comes to infrequent or repetitive tasks. This burden on the workers can be eased by introducing smart assistance systems. This article presents a scalable concept and an integrated system demonstrator designed for this purpose. The basic idea is to learn workflows from observing multiple expert operators and then transfer the learnt workflow models to novice users. Being entirely learning-based, the proposed system can be applied to various tasks and domains. The above idea has been realized in a prototype, which combines hardware and software components that push the state of the art and are designed with interoperability in mind. The emphasis of this article is on the algorithms developed for the prototype: 1) fusion of inertial and visual sensor information from an on-body sensor network (BSN) to robustly track the user's pose in magnetically polluted environments; 2) learning-based computer vision algorithms to map the workspace, localize the sensor with respect to the workspace and capture objects, even as they are carried; 3) domain-independent and robust workflow recovery and monitoring algorithms based on spatiotemporal pairwise relations deduced from object and user movement with respect to the scene; and 4) context-sensitive augmented reality (AR) user feedback using a head-mounted display (HMD). A distinguishing key feature of the developed algorithms is that they all operate solely on data from the on-body sensor network and that no external instrumentation is needed. The feasibility of the chosen approach for the complete action-perception-feedback loop is demonstrated on three increasingly complex datasets representing manual industrial tasks. These limited-size datasets indicate and highlight the potential of the chosen technology as a combined entity, as well as pointing out limitations of the system.
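
The inertial-visual fusion in item 1) can be illustrated, in highly simplified form, as a single Kalman correction step that folds an absolute visual pose measurement into an inertially predicted state. The actual on-body sensor-network filter is considerably more elaborate; the interface below is a generic stand-in:

```python
import numpy as np

def fuse_visual_fix(x, P, z, R_meas, H):
    """One linear Kalman correction step: fuse an absolute visual measurement z
    (with covariance R_meas and measurement matrix H) into an inertially
    predicted state x with covariance P. Hypothetical, linearised sketch.
    """
    S = H @ P @ H.T + R_meas           # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
    x_new = x + K @ (z - H @ x)
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new
```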


Advanced Robotics | 2013

Real-time 3D simultaneous localization and map-building for a dynamic walking humanoid robot

Sukjune Yoon; Seungyong Hyung; Minhyung Lee; Kyung Shik Roh; Sunghwan Ahn; Andrew P. Gee; Pished Bunnun; Andrew D. Calway; Walterio W. Mayol-Cuevas

In this paper, we develop an onboard real-time 3D visual simultaneous localization and mapping system for a dynamic walking humanoid robot. Given the constraints of processing and real-time operation, the system uses a lightweight localization and mapping approach based around the well-known extended Kalman filter, featuring a robust real-time relocalization system that allows loop-closing and robust localization in 6D. The robot is controlled by torque references at the joints using its dynamic properties. This results in more energy-efficient motion but also in larger movement than that found in a conventional ZMP-based humanoid, which carefully maintains the position of the center of mass on the plane. These more agile motions pose challenges for a visual mapping system that has to operate in real time. The developed system features a combination of a stereo camera, robust visual descriptors, and motion model switching to compensate for the larger motion and uncertainty. We provide practical implementation details of the system and methods, and test on the real humanoid robot. We compare our results with motion obtained with a motion capture system.
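
Motion model switching can be sketched as choosing between process-noise levels for a constant-velocity camera model depending on the walking phase; the phase labels and noise values below are assumed purely for illustration and are not taken from the paper:

```python
import numpy as np

def process_noise(dt, walking_phase):
    """Pick the EKF process-noise matrix for a constant-velocity camera model.

    Hypothetical sketch of the motion-model-switching idea: use small
    acceleration noise during smooth phases and inflate it around foot
    impacts, when camera shake is largest.
    """
    sigma_acc = 0.5 if walking_phase == "smooth" else 4.0   # m/s^2, assumed values
    q = sigma_acc ** 2
    # Per-axis constant-velocity block for a (position, velocity) pair.
    Q_axis = q * np.array([[dt**4 / 4, dt**3 / 2],
                           [dt**3 / 2, dt**2]])
    return np.kron(np.eye(3), Q_axis)   # stacked per-axis blocks for x, y, z
```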


International Conference on Computer Vision | 2010

Visual mapping and multi-modal localisation for anywhere AR authoring

Andrew P. Gee; Andrew D. Calway; Walterio W. Mayol-Cuevas

This paper presents an Augmented Reality system that combines a range of localisation technologies, including GPS, UWB, user input and Visual SLAM, to enable both retrieval and creation of annotations in most places. The system works for multiple users and enables sharing and visualisation of annotations with a control centre. The process is divided into two main steps: i) global localisation and ii) 6D local mapping. For the case of visual relocalisation we develop and evaluate a method to rank local maps that improves performance over prior art. We demonstrate the system working over a wide area and for a range of environments.
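
The local-map ranking for visual relocalisation can be illustrated with a simple per-map descriptor-matching score; the ratio test and scoring below are a generic stand-in, not the paper's ranking method:

```python
import numpy as np

def rank_local_maps(query_descriptors, local_maps, ratio=0.8):
    """Rank stored local maps by how many of the current frame's descriptors
    find a confident nearest-neighbour match among each map's landmarks.

    query_descriptors : (N, D) array for the current frame
    local_maps        : dict of map name -> (M, D) array of landmark descriptors
    Hypothetical interface and thresholds.
    """
    scores = {}
    for name, descs in local_maps.items():
        count = 0
        for q in query_descriptors:
            d = np.linalg.norm(descs - q, axis=1)
            order = np.argsort(d)
            # Lowe-style ratio test between best and second-best match.
            if len(d) > 1 and d[order[0]] < ratio * d[order[1]]:
                count += 1
        scores[name] = count
    return sorted(scores, key=scores.get, reverse=True)
```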



Collaboration


Dive into Andrew P. Gee's collaboration.

Top Co-Authors

Kalaivani Thangamani

National Institute of Advanced Industrial Science and Technology


Masakatsu Kourogi

National Institute of Advanced Industrial Science and Technology


Takeshi Kurata

National Institute of Advanced Industrial Science and Technology


Tomoya Ishikawa

National Institute of Advanced Industrial Science and Technology
