
Publication


Featured research published by Walterio W. Mayol-Cuevas.


international symposium on visual computing | 2006

Real-time and robust monocular SLAM using predictive multi-resolution descriptors

Denis Chekhlov; Mark Pupilli; Walterio W. Mayol-Cuevas; Andrew D Calway

We describe a robust system for vision-based SLAM using a single camera which runs in real-time, typically around 30 fps. The key contribution is a novel utilisation of multi-resolution descriptors in a coherent top-down framework. The resulting system provides superior performance over previous methods in terms of robustness to erratic motion, camera shake, and the ability to recover from measurement loss. SLAM itself is implemented within an unscented Kalman filter framework based on a constant position motion model, which is also shown to provide further resilience to non-smooth camera motion. Results are presented illustrating successful SLAM operation for challenging hand-held camera movement within desktop environments.
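
The abstract mentions an unscented Kalman filter built on a constant position motion model. As a rough, hypothetical sketch (not the authors' code; the state layout and noise values are assumptions), the prediction step of such a model leaves the state mean unchanged and only inflates the camera-pose covariance with process noise, which is what absorbs erratic, non-smooth motion:

```python
import numpy as np

def predict_constant_position(x, P, Q_pose, pose_dim=6):
    """Constant-position prediction: the state mean is unchanged; only the
    camera-pose block of the covariance is inflated by process noise, so
    erratic motion shows up as extra uncertainty rather than a wrong velocity."""
    x_pred = x.copy()                       # f(x) = x
    P_pred = P.copy()
    P_pred[:pose_dim, :pose_dim] += Q_pose  # inflate pose uncertainty only
    return x_pred, P_pred

# Toy usage: a 6-DoF pose followed by two 3D landmarks (state length 12).
x = np.zeros(12)
P = np.eye(12) * 1e-3
Q_pose = np.eye(6) * 1e-2
x, P = predict_constant_position(x, P, Q_pose)
```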


IEEE Transactions on Robotics | 2008

Discovering Higher Level Structure in Visual SLAM

Andrew P. Gee; Denis Chekhlov; Andrew D Calway; Walterio W. Mayol-Cuevas

In this paper, we describe a novel method for discovering and incorporating higher level map structure in a real-time visual simultaneous localization and mapping (SLAM) system. Previous approaches use sparse maps populated by isolated features such as 3-D points or edgelets. Although this facilitates efficient localization, it yields very limited scene representation and ignores the inherent redundancy among features resulting from physical structure in the scene. In this paper, higher level structure, in the form of lines and surfaces, is discovered concurrently with SLAM operation and then incorporated into the map in a rigorous manner, attempting to maintain important cross-covariance information and allow consistent update of the feature parameters. This is achieved by using a bottom-up process, in which subsets of low-level features are "folded in" to a parameterization of an associated higher level feature, thus collapsing the state space as well as building structure into the map. We demonstrate and analyze the effects of the approach for the cases of line and plane discovery, both in simulation and within a real-time system operating with a handheld camera in an office environment.
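
As a loose illustration of the "folding in" idea (a simplification that ignores the paper's cross-covariance bookkeeping; the function name and parameterization are assumptions), the sketch below replaces N free 3D points with one plane plus 2D in-plane coordinates, collapsing 3N parameters to 4 + 2N:

```python
import numpy as np

def fold_points_into_plane(points):
    """Fit a plane to (N, 3) points and re-express them as 2D in-plane
    coordinates, shrinking 3N free parameters to a plane (4) plus 2N."""
    centroid = points.mean(axis=0)
    # Normal = direction of least variance (right singular vector with the
    # smallest singular value).
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[2]
    d = -normal @ centroid                       # plane: n.p + d = 0
    u, v = vt[0], vt[1]                          # in-plane basis
    coords_2d = (points - centroid) @ np.stack([u, v], axis=1)
    return (normal, d), (centroid, u, v), coords_2d

pts = np.array([[0.0, 0.0, 1.0], [1.0, 0.0, 1.01], [0.0, 1.0, 0.99], [1.0, 1.0, 1.0]])
plane, frame, uv = fold_points_into_plane(pts)
```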


intelligent robots and systems | 2012

Egocentric Real-time Workspace Monitoring using an RGB-D camera

Dima Damen; Andrew P. Gee; Walterio W. Mayol-Cuevas; Andrew D Calway

We describe an integrated system for personal workspace monitoring based around an RGB-D sensor. The approach is egocentric, facilitating full flexibility, and operates in real-time, providing object detection and recognition, and 3D trajectory estimation whilst the user undertakes tasks in the workspace. A prototype on-body system developed in the context of work-flow analysis for industrial manipulation and assembly tasks is described. The system is evaluated on two tasks with multiple users, and the results indicate that the method is effective, with good accuracy.


international symposium on mixed and augmented reality | 2008

OutlinAR: an assisted interactive model building system with reduced computational effort

Pished Bunnun; Walterio W. Mayol-Cuevas

This paper presents a system that allows online building of 3D wireframe models through a combination of user interaction and automated methods from a handheld camera-mouse. Crucially, the model being built is used to concurrently compute camera pose, permitting extendable tracking while enabling the user to edit the model interactively. In contrast to other model building methods that are either off-line and/or automated but computationally intensive, the aim here is to have a system that has low computational requirements and that enables the user to define what is relevant (and what is not) at the time the model is being built. OutlinAR hardware is also developed, which simply combines a camera with a wide field-of-view lens and a wheeled computer mouse.
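
As a hypothetical illustration of how a partially built model can feed back into tracking (an OpenCV-based sketch, not the OutlinAR implementation; the vertex, pixel, and intrinsics values are made up), known 3D wireframe vertices matched to their image projections yield the camera pose via PnP:

```python
import numpy as np
import cv2  # used only to illustrate the pose-from-model step

# Four wireframe vertices already in the model (a 0.2 m x 0.1 m rectangle on a desk)
model_vertices = np.array([[0.0, 0.0, 0.0],
                           [0.2, 0.0, 0.0],
                           [0.2, 0.1, 0.0],
                           [0.0, 0.1, 0.0]])
# Where those vertices appear in the current frame (made-up pixel coordinates)
image_points = np.array([[320.0, 240.0],
                         [420.0, 242.0],
                         [418.0, 190.0],
                         [322.0, 188.0]])
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])           # assumed pinhole intrinsics

ok, rvec, tvec = cv2.solvePnP(model_vertices, image_points, K, None)
if ok:
    print("camera rotation (Rodrigues vector):", rvec.ravel())
    print("camera translation:", tvec.ravel())
```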


international symposium on mixed and augmented reality | 2007

Ninja on a Plane: Automatic Discovery of Physical Planes for Augmented Reality Using Visual SLAM

Denis Chekhlov; Andrew P. Gee; Andrew D Calway; Walterio W. Mayol-Cuevas

Most work in visual augmented reality (AR) employs predefined markers or models that simplify the algorithms needed for sensor positioning and augmentation, but at the cost of imposing restrictions on the areas of operation and on interactivity. This paper presents a simple game in which an AR agent has to navigate using real planar surfaces on objects that are dynamically added to an unprepared environment. An extended Kalman filter (EKF) simultaneous localisation and mapping (SLAM) framework with automatic plane discovery is used to enable the player to interactively build a structured map of the game environment using a single, agile camera. By using SLAM, we are able to achieve real-time interactivity and maintain rigorous estimates of the system's uncertainty, which enables the effects of high-quality estimates to be propagated to other features (points and planes) even if they are outside the camera's current field of view.
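
A minimal sketch of automatic plane discovery from already-mapped 3D points, here via plain RANSAC rather than the paper's SLAM-integrated, uncertainty-aware procedure; the threshold and iteration count are illustrative:

```python
import numpy as np

def ransac_plane(points, iters=200, thresh=0.01, seed=0):
    """Return ((normal, d), inlier_mask) for the best-supported plane n.p + d = 0."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(points), dtype=bool)
    best_plane = None
    for _ in range(iters):
        sample = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(n)
        if norm < 1e-9:                  # degenerate (nearly collinear) sample
            continue
        n = n / norm
        d = -n @ sample[0]
        inliers = np.abs(points @ n + d) < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_plane = inliers, (n, d)
    return best_plane, best_inliers

pts = np.vstack([np.column_stack([np.random.rand(50), np.random.rand(50), np.zeros(50)]),
                 np.random.rand(10, 3)])          # a plane at z = 0 plus clutter
plane, mask = ransac_plane(pts)
```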


human factors in computing systems | 2012

MUSTARD: A Multi User See Through AR Display

Abe A Karnik; Walterio W. Mayol-Cuevas; Sriram Subramanian

We present MUSTARD, a multi-user dynamic random hole see-through display, capable of delivering viewer dependent information for objects behind a glass cabinet. Multiple viewers are allowed to observe both the physical object(s) being augmented and their location dependent annotations at the same time. The system consists of two liquid-crystal (LC) panels within which physical objects can be placed. The back LC panel serves as a dynamic mask while the front panel serves as the data. We first describe the principle of MUSTARD and then examine various functions that can be used to minimize crosstalk between multiple viewer positions. We compare different conflict management strategies using PSNR and the quality mean opinion score of HDR-VDP2. Finally, through a user-study we show that users can clearly identify images and objects even when the images are shown with strong conflicting regions; demonstrating that our system works even in the most extreme of circumstances.
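
The abstract compares conflict-management strategies using PSNR (among other metrics); a minimal sketch of that measure is shown below, applied to a made-up intended image and a degraded copy standing in for crosstalk:

```python
import numpy as np

def psnr(reference, delivered, peak=255.0):
    """Peak signal-to-noise ratio in dB between two images of equal shape."""
    mse = np.mean((reference.astype(np.float64) - delivered.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(0)
intended = rng.integers(0, 256, (480, 640)).astype(np.float64)              # image meant for one viewer
leaked = np.clip(intended + rng.normal(0.0, 8.0, intended.shape), 0, 255)   # crosstalk stand-in
print(f"PSNR: {psnr(intended, leaked):.1f} dB")
```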


international conference on human-computer interaction | 2009

In-Situ 3D Indoor Modeler with a Camera and Self-contained Sensors

Tomoya Ishikawa; Kalaivani Thangamani; Masakatsu Kourogi; Andrew P. Gee; Walterio W. Mayol-Cuevas; Keechul Jung; Takeshi Kurata

We propose a 3D modeler for effectively supporting in-situ indoor modeling. The modeler allows a user to easily create models from a single photo through interaction and visualization techniques that exploit the characteristics of indoor spaces. To integrate the models, the modeler provides automatic integration functions using Visual SLAM and pedestrian dead-reckoning (PDR), together with interactive tools to modify the result. Moreover, to prevent a shortage of texture images for the models, our modeler automatically searches the 3D models created by the user for un-textured regions and intuitively visualizes the shooting positions from which photos of those regions should be taken. These functions make it possible for the user to easily create, on the fly, photorealistic indoor 3D models with sufficient textures.


british machine vision conference | 2012

Real-time Learning and Detection of 3D Texture-less Objects: A Scalable Approach

Dima Damen; Pished Bunnun; Andrew D Calway; Walterio W. Mayol-Cuevas

We present a method for the learning and detection of multiple rigid texture-less 3D objects intended to operate at frame-rate speeds for video input. The method is geared for fast and scalable learning and detection by combining tractable extraction of edgelet constellations with library lookup based on rotation- and scale-invariant descriptors. The approach learns object views in real-time, and is generative, enabling more objects to be learnt without the need for re-training. During testing, a random sample of edgelet constellations is tested for the presence of known objects. We perform testing of single and multi-object detection on a dataset of 30 objects, showing detection of any of them within milliseconds of the object becoming visible. The results show the scalability of the approach and its frame-rate performance.
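
As a rough sketch of the descriptor idea (not the paper's exact formulation), relative orientations and distance ratios along a short path of edgelets are invariant to rotation and uniform scaling, so their quantized values can key a library lookup; the path length and bin counts here are arbitrary choices:

```python
import numpy as np

def constellation_key(positions, orientations, bins=16):
    """positions: (k, 2) edgelet centres; orientations: (k,) edgelet angles in radians.
    Consecutive distance ratios are scale-invariant and relative orientations are
    rotation-invariant, so the quantized tuple can index a lookup table."""
    d = np.linalg.norm(np.diff(positions, axis=0), axis=1)   # consecutive distances
    ratios = d[1:] / d[0]
    rel_angles = np.mod(np.diff(orientations), np.pi)
    q = np.concatenate([
        np.floor(np.clip(ratios, 0.0, 4.0) / 4.0 * (bins - 1)),
        np.floor(rel_angles / np.pi * (bins - 1)),
    ]).astype(int)
    return tuple(q)

library = {}   # descriptor key -> list of (object_id, view_id) entries learnt so far
key = constellation_key(np.array([[10.0, 10.0], [30.0, 12.0], [28.0, 40.0]]),
                        np.array([0.1, 1.2, 2.0]))
library.setdefault(key, []).append(("mug", 3))
```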


computer vision and pattern recognition | 2009

SUSurE: Speeded Up Surround Extrema feature detector and descriptor for realtime applications

Mosalam Ebrahimi; Walterio W. Mayol-Cuevas

There has been significant research into the development of visual feature detectors and descriptors that are robust to a number of image deformations. Some of these methods have emphasized the need to improve on computational speed and compact representations so that they can enable a range of real-time applications with reduced computational requirements. In this paper we present modified detectors and descriptors based on the recently introduced CenSurE [1], and show experimental results that aim to highlight the computational savings that can be made with limited reduction in performance. The developed methods are based on exploiting the concept of sparse sampling which may be of interest to a range of other existing approaches.
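
A hypothetical sketch of the kind of centre-surround box response (computed with an integral image) that CenSurE-style detectors build on, evaluated only on a sparse grid to suggest the sparse-sampling speed-up; the filter sizes and stride are illustrative, not the paper's:

```python
import numpy as np

def box_sum(ii, x, y, half):
    """Sum of pixels in a (2*half+1)^2 box centred at (x, y), from an integral image."""
    x0, y0, x1, y1 = x - half, y - half, x + half + 1, y + half + 1
    return ii[y1, x1] - ii[y0, x1] - ii[y1, x0] + ii[y0, x0]

def sparse_center_surround(img, inner=3, outer=7, stride=4):
    """Bi-level centre-surround response evaluated only every `stride` pixels."""
    ii = np.pad(img, ((1, 0), (1, 0))).cumsum(axis=0).cumsum(axis=1)
    h, w = img.shape
    inner_area = (2 * inner + 1) ** 2
    outer_area = (2 * outer + 1) ** 2 - inner_area
    responses = {}
    for y in range(outer, h - outer, stride):        # sparse sampling grid
        for x in range(outer, w - outer, stride):
            centre = box_sum(ii, x, y, inner)
            surround = box_sum(ii, x, y, outer) - centre
            responses[(x, y)] = centre / inner_area - surround / outer_area
    return responses

img = np.random.default_rng(0).integers(0, 256, (64, 64)).astype(np.float64)
resp = sparse_center_surround(img)
```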


international conference on robotics and automation | 2013

Fast place recognition with plane-based maps

Eduardo Fernandez-Moral; Walterio W. Mayol-Cuevas; Vicente Arévalo; Javier Gonzalez-Jimenez

This paper presents a new method for recognizing places in indoor environments based on the extraction of planar regions from range data provided by a hand-held RGB-D sensor. We propose to build a plane-based map (PbMap) consisting of a set of 3D planar patches described by simple geometric features (normal vector, centroid, area, etc.). This world representation is organized as a graph where the nodes represent the planar patches and the edges connect planes that are close by. This map structure makes it possible to efficiently select subgraphs representing the local neighborhood of observed planes, which are then compared with subgraphs corresponding to local neighborhoods of planes acquired previously. To find a candidate match between two subgraphs we employ an interpretation tree that permits working with partially observed and missing planes. The candidates from the interpretation tree are then verified by a rigid registration test, which also gives us the relative pose between the matched places. The experimental results indicate that the proposed approach is an efficient way to solve this problem, working satisfactorily even when there are substantial changes in the scene (lifelong maps).
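
A minimal, assumed sketch of a plane-based map as described: planar patches with simple geometric attributes, linked to nearby patches in a graph whose local subgraphs are the units compared for place recognition (the class layout and the proximity rule are illustrative, not the paper's code):

```python
from dataclasses import dataclass, field
import numpy as np

@dataclass
class PlanarPatch:
    normal: np.ndarray              # unit normal vector
    centroid: np.ndarray            # 3D centroid
    area: float                     # patch area
    neighbours: set = field(default_factory=set)   # indices of nearby patches

class PbMap:
    def __init__(self, proximity=1.5):
        self.patches = []
        self.proximity = proximity  # distance under which two planes count as "close by"

    def add_patch(self, patch):
        idx = len(self.patches)
        for j, other in enumerate(self.patches):
            if np.linalg.norm(patch.centroid - other.centroid) < self.proximity:
                patch.neighbours.add(j)
                other.neighbours.add(idx)
        self.patches.append(patch)
        return idx

    def local_subgraph(self, idx):
        """The patch plus its neighbours: the unit compared between places."""
        return {idx} | self.patches[idx].neighbours

pbmap = PbMap()
a = pbmap.add_patch(PlanarPatch(np.array([0.0, 0.0, 1.0]), np.array([0.0, 0.0, 0.0]), 2.0))
b = pbmap.add_patch(PlanarPatch(np.array([0.0, 1.0, 0.0]), np.array([1.0, 0.0, 0.0]), 1.0))
print(pbmap.local_subgraph(a))      # {0, 1}
```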

Collaboration


Dive into Walterio W. Mayol-Cuevas's collaborations.

Top Co-Authors

Luis Contreras

National Autonomous University of Mexico

Takeshi Kurata

National Institute of Advanced Industrial Science and Technology
