Publication


Featured research published by David Meger.


Robotics and Autonomous Systems | 2008

Curious George: An attentive semantic robot

David Meger; Per-Erik Forssén; Kevin Lai; Scott Helmer; Sancho McCann; Tristram Southey; Matthew A. Baumann; James J. Little; David G. Lowe

State-of-the-art methods have recently achieved impressive performance for recognising the objects present in large databases of pre-collected images. There has been much less focus on building embodied systems that recognise objects present in the real world. This paper describes an intelligent system that attempts to perform robust object recognition in a realistic scenario, where a mobile robot moving through an environment must use the images collected from its camera directly to recognise objects. To perform successful recognition in this scenario, we have chosen a combination of techniques including a peripheral-foveal vision system, an attention system combining bottom-up visual saliency with structure from stereo, and a localisation and mapping technique. The result is a highly capable object recognition system that can be easily trained to locate the objects of interest in an environment, and subsequently build a spatial-semantic map of the region. This capability has been demonstrated during the Semantic Robot Vision Challenge, and is further illustrated with a demonstration of semantic mapping. We also empirically verify that the attention system outperforms an undirected approach even with a significantly lower number of foveations.
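The attention system described above blends two cues, bottom-up saliency and structure from stereo, into a single map used to choose foveation targets. A rough sketch of that combination follows; the blending weights, grid size, and normalization are assumptions of this illustration, not the paper's design.

```python
# Illustrative combination of a bottom-up saliency map with a stereo-derived
# structure map to rank foveation targets. All values are invented.
import numpy as np

def combine_attention(saliency, structure, w_sal=0.5, w_struct=0.5):
    """Normalize each cue to [0, 1] and blend into one attention map."""
    def norm(m):
        rng = m.max() - m.min()
        return (m - m.min()) / rng if rng > 0 else np.zeros_like(m)
    return w_sal * norm(saliency) + w_struct * norm(structure)

def top_foveations(attention, k=3):
    """Return the k highest-scoring (row, col) cells to foveate on."""
    flat = np.argsort(attention, axis=None)[::-1][:k]
    return [tuple(int(v) for v in np.unravel_index(i, attention.shape))
            for i in flat]

saliency = np.array([[0.1, 0.9], [0.2, 0.4]])   # image-based conspicuity
structure = np.array([[0.0, 0.8], [0.9, 0.1]])  # stereo "stuff is here" cue
att = combine_attention(saliency, structure)
print(top_foveations(att, k=2))
```

Foveating only the top-ranked cells is what lets a directed system beat an undirected one with far fewer foveations.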


Robotics and Autonomous Systems | 2006

Simultaneous Planning, Localization, and Mapping in a Camera Sensor Network

Ioannis M. Rekleitis; David Meger; Gregory Dudek

In this paper we examine issues of localization, exploration, and planning in the context of a hybrid robot/camera-network system. We exploit the ubiquity of camera networks to use them as a source of localization data. Since the Cartesian position of the cameras in most networks is not known accurately, we consider the issue of how to localize such cameras. To solve this hybrid localization problem, we divide it into a local problem of camera-parameter estimation combined with a global planning and navigation problem. We solve the local camera-calibration problem by using fiducial markers attached to the robot and by selecting robot trajectories in front of each camera that provide good calibration and field-of-view accuracy. We propagate information among the cameras and the successive positions of the robot using an Extended Kalman filter. Finally, we move the robot between the camera positions to explore the network using heuristic exploration strategies. The paper includes experimental data from an indoor office environment as well as tests on simulated data sets.
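The Extended Kalman filter's propagate/update cycle can be sketched in one dimension; this toy linear filter only illustrates the structure of EKF propagation between robot poses and camera observations, and the motion, measurement, and noise values are invented.

```python
# Minimal 1D stand-in for the EKF's predict/update cycle. The full system
# estimates robot pose and camera parameters jointly; here a scalar position
# with illustrative noise values shows the two steps.

def ekf_predict(x, P, u, Q):
    """Propagate the state estimate through motion u with process noise Q."""
    x_pred = x + u          # motion model: simple additive displacement
    P_pred = P + Q          # uncertainty grows during motion
    return x_pred, P_pred

def ekf_update(x, P, z, R):
    """Fuse a position measurement z (e.g. from a calibrated camera)."""
    K = P / (P + R)         # Kalman gain for a scalar state (H = 1)
    x_new = x + K * (z - x) # correct the estimate toward the measurement
    P_new = (1 - K) * P     # uncertainty shrinks after fusion
    return x_new, P_new

x, P = 0.0, 1.0
x, P = ekf_predict(x, P, u=1.0, Q=0.5)   # robot moves forward
x, P = ekf_update(x, P, z=1.2, R=0.5)    # a camera observes the robot
print(x, P)
```

The same predict/update pattern, with vector states and Jacobians, is what propagates information among the cameras and the robot's successive positions.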


International Conference on Robotics and Automation | 2008

Informed visual search: Combining attention and object recognition

Per-Erik Forssén; David Meger; Kevin Lai; Scott Helmer; James J. Little; David G. Lowe

This paper studies the sequential object recognition problem faced by a mobile robot searching for specific objects within a cluttered environment. In contrast to current state-of-the-art object recognition solutions which are evaluated on databases of static images, the system described in this paper employs an active strategy based on identifying potential objects using an attention mechanism and planning to obtain images of these objects from numerous viewpoints. We demonstrate the use of a bag-of-features technique for ranking potential objects, and show that this measure outperforms geometric matching for invariance across viewpoints. Our system implements informed visual search by prioritising map locations and re-examining promising locations first. Experimental results demonstrate that our system is a highly competent object recognition system that is capable of locating numerous challenging objects amongst distractors.
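The bag-of-features ranking idea can be illustrated with a toy vocabulary: each detection is reduced to a histogram of quantized feature ("visual word") counts, and candidates are ranked by histogram similarity to a query model. The vocabulary size and the cosine measure are assumptions of this sketch, not the paper's exact setup.

```python
# Toy bag-of-features ranking: histogram quantized features, rank candidate
# objects by cosine similarity to the query histogram. Values are invented.
import math

def bof_histogram(word_ids, vocab_size):
    """Count visual-word occurrences into a fixed-length histogram."""
    h = [0] * vocab_size
    for w in word_ids:
        h[w] += 1
    return h

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

query = bof_histogram([0, 0, 2, 3], vocab_size=4)
candidates = {"mug": bof_histogram([0, 2, 3], 4),
              "shoe": bof_histogram([1, 1], 4)}
ranked = sorted(candidates, key=lambda k: cosine(query, candidates[k]),
                reverse=True)
print(ranked)
```

Because the histogram discards feature geometry, a measure like this stays stable across viewpoint changes that would break a geometric match.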


Canadian Conference on Computer and Robot Vision | 2009

Automated Spatial-Semantic Modeling with Applications to Place Labeling and Informed Search

Pooja Viswanathan; David Meger; Tristram Southey; James J. Little; Alan K. Mackworth

This paper presents a spatial-semantic modeling system featuring automated learning of object-place relations from an online annotated database, and the application of these relations to a variety of real-world tasks. The system is able to label novel scenes with place information, as we demonstrate on test scenes drawn from the same source as our training set. We have designed our system for future enhancement of a robot platform that performs state-of-the-art object recognition and creates object maps of realistic environments. In this context, we demonstrate the use of spatial-semantic information to perform clustering and place labeling of object maps obtained from real homes. This place information is fed back into the robot system to inform an object search planner about likely locations of a query object. As a whole, this system represents a new level in spatial reasoning and semantic understanding for a physical platform.
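A toy version of the object-place learning step might look like the following: annotation counts become conditional probabilities P(place | object), and a scene is labeled by the place its objects jointly support. The object and place names, and the sum-of-probabilities scoring, are invented for this sketch.

```python
# Toy object-place relation learning and place labeling. Counts from an
# annotated database become P(place | object); a scene is labeled by the
# place its objects jointly support. All names and data are illustrative.
from collections import Counter, defaultdict

def learn_relations(annotations):
    """annotations: list of (object, place) pairs from a labeled database."""
    by_object = defaultdict(Counter)
    for obj, place in annotations:
        by_object[obj][place] += 1
    # normalize counts into conditional probabilities P(place | object)
    return {obj: {p: c / sum(cnt.values()) for p, c in cnt.items()}
            for obj, cnt in by_object.items()}

def label_scene(objects, relations):
    """Score each place by summing the objects' conditional probabilities."""
    scores = Counter()
    for obj in objects:
        for place, p in relations.get(obj, {}).items():
            scores[place] += p
    return scores.most_common(1)[0][0] if scores else None

data = [("stove", "kitchen"), ("kettle", "kitchen"),
        ("bed", "bedroom"), ("kettle", "office")]
rel = learn_relations(data)
print(label_scene(["stove", "kettle"], rel))
```

Run in reverse, the same relations tell a search planner where a query object is likely to be found.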


British Machine Vision Conference | 2012

Fine-grained Categorization for 3D Scene Understanding

Michael Stark; Jonathan Krause; Bojan Pepik; David Meger; James J. Little; Bernt Schiele; Daphne Koller

Fine-grained categorization of object classes is receiving increased attention, since it promises to automate classification tasks that are difficult even for humans, such as the distinction between different animal species. In this paper, we consider fine-grained categorization for a different reason: following the intuition that fine-grained categories encode metric information, we aim to generate metric constraints from fine-grained category predictions, for the benefit of 3D scene-understanding. To that end, we propose two novel methods for fine-grained classification, both based on part information, as well as a new fine-grained category data set of car types. We demonstrate superior performance of our methods to state-of-the-art classifiers, and show first promising results for estimating the depth of objects from fine-grained category predictions from a monocular camera.
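The link from a fine-grained category to a metric constraint can be illustrated with a pinhole camera model: once the category fixes the object's real-world height, its apparent pixel height yields depth. The focal length and car heights below are made-up illustrative numbers, not values from the paper.

```python
# Pinhole-model illustration of turning a fine-grained category prediction
# into a depth estimate. Category heights and focal length are invented.

# assumed real heights (meters) per fine-grained category
CATEGORY_HEIGHT = {"sedan": 1.45, "suv": 1.75}

def depth_from_category(category, pixel_height, focal_px):
    """depth = f * real_height / pixel_height (pinhole projection)."""
    real_height = CATEGORY_HEIGHT[category]
    return focal_px * real_height / pixel_height

# a sedan imaged 100 px tall by a camera with an 800 px focal length
print(depth_from_category("sedan", pixel_height=100, focal_px=800))
```

The tighter the category (sedan vs. "car"), the tighter the height prior, and hence the tighter the depth constraint a monocular camera can recover.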


International Conference on Robotics and Automation | 2010

Viewpoint detection models for sequential embodied object category recognition

David Meger; Ankur Gupta; James J. Little

This paper proposes a method for learning viewpoint detection models for object categories that facilitate sequential object category recognition and viewpoint planning. We have examined such models for several state-of-the-art object detection methods. Our learning procedure has been evaluated using an exhaustive multiview category database recently collected for multiview category recognition research. Our approach has been evaluated on a simulator that is based on real images that have previously been collected. Simulation results verify that our viewpoint planning approach requires fewer viewpoints for confident recognition. Finally, we illustrate the applicability of our method as a component of a completely autonomous visual recognition platform that has previously been demonstrated in an object category recognition competition.
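One way to picture viewpoint planning is a greedy next-best-view rule: prefer the unvisited viewpoint whose learned detector scores best separate the candidate categories. This sketch and its reliability numbers are illustrative assumptions, not the paper's planner.

```python
# Greedy next-best-view selection from per-viewpoint detector reliabilities
# learned offline. Viewpoints, categories, and scores are invented.

def next_best_view(reliability, visited):
    """reliability: {viewpoint: {category: detector confidence}}.
    Pick the unvisited viewpoint with the largest gap between the best and
    second-best category score, i.e. the most discriminative view."""
    best_vp, best_margin = None, -1.0
    for vp, scores in reliability.items():
        if vp in visited:
            continue
        ordered = sorted(scores.values(), reverse=True)
        margin = ordered[0] - ordered[1] if len(ordered) > 1 else ordered[0]
        if margin > best_margin:
            best_vp, best_margin = vp, margin
    return best_vp

reliability = {
    "front": {"mug": 0.6, "bowl": 0.55},   # ambiguous from the front
    "side":  {"mug": 0.9, "bowl": 0.3},    # the handle resolves it
}
print(next_best_view(reliability, visited=set()))
```

Choosing discriminative views first is what lets a planner reach a confident label with fewer viewpoints than a fixed tour.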


Asian Conference on Computer Vision | 2010

Multiple viewpoint recognition and localization

Scott Helmer; David Meger; Marius Muja; James J. Little; David G. Lowe

This paper presents a novel approach for labeling objects based on multiple spatially-registered images of a scene. We argue that such a multi-view labeling approach is a better fit for applications such as robotics and surveillance than traditional object recognition where only a single image of each scene is available. To encourage further study in the area, we have collected a data set of well-registered imagery for many indoor scenes and have made this data publicly available. Our multiview labeling approach is capable of improving the results of a wide variety of image-based classifiers, and we demonstrate this by producing scene labelings based on the output of both the Deformable Parts Model of [1] as well as a method for recognizing object contours which is similar to chamfer matching. Our experimental results show that labeling objects based on multiple viewpoints leads to a significant improvement in performance when compared with single image labeling.
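The multi-view fusion idea can be reduced to a toy: per-view classifier scores for one spatially-registered object hypothesis are combined, here by averaging, and the fused score decides the label. Averaging and the threshold are assumptions of this sketch, not the paper's method.

```python
# Toy multi-view label fusion: average per-view classifier scores for one
# registered object hypothesis. Scores and threshold are illustrative.

def fuse_views(view_scores, threshold=0.5):
    """view_scores: per-view classifier scores for one object hypothesis.
    Returns (fused score, accepted?)."""
    fused = sum(view_scores) / len(view_scores)
    return fused, fused >= threshold

# a single weak view would miss this object, but three views agree enough
single = fuse_views([0.45])
multi = fuse_views([0.45, 0.7, 0.6])
print(single, multi)
```

Because fusion operates on scores, it can sit on top of any image-based classifier, which is why the approach improves both the parts-model and contour detectors.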


British Machine Vision Conference | 2011

Explicit Occlusion Reasoning for 3D Object Detection

David Meger; Christian Wojek; Bernt Schiele; James J. Little

Consider the problem of recognizing an object that is partially occluded in an image. The visible portions are likely to match learned appearance models for the object, but hidden portions will not. The (hypothetical) ideal system would consider only the visible object information, correctly ignoring all occluded regions. In purely 2D recognition, this requires inferring the occlusion present, which is a significant challenge since the number of possible occlusion masks is, in principle, exponential. We simplify the problem, considering only a small subset of the most likely occlusions (top, bottom, left, and right halves) and noting that some mismatch is tolerable. We train partial-object detectors tailored exactly to each of these few cases. In addition, we reason about objects in 3D and incorporate sensed geometry, as from an RGB-depth camera, along with visual imagery. This allows explicit occlusion masks to be constructed for each object hypothesis. The masks specify how much to trust each partial template, based on their overlap with visible object regions. Only the visible evidence contributes to our object reasoning.
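The mask idea can be rendered in miniature: each half-object template (top, bottom, left, right) is trusted in proportion to how much of its region the sensed geometry says is visible. The grids, scores, and weighting scheme below are illustrative, not the paper's implementation.

```python
# Toy occlusion reasoning: trust each partial-object template according to
# the visible fraction of its region, then fuse the partial detector scores.
import numpy as np

def partial_weights(visible_mask):
    """visible_mask: 2D boolean array, True where the object is unoccluded.
    Returns trust weights for the four half-object templates."""
    h, w = visible_mask.shape
    halves = {
        "top": visible_mask[: h // 2, :], "bottom": visible_mask[h // 2 :, :],
        "left": visible_mask[:, : w // 2], "right": visible_mask[:, w // 2 :],
    }
    return {name: float(region.mean()) for name, region in halves.items()}

def fuse_partial_scores(scores, weights):
    """Weight each partial detector's score by its region's visibility."""
    total_w = sum(weights.values())
    return sum(scores[k] * weights[k] for k in scores) / total_w

# object occluded on its right side
mask = np.ones((4, 4), dtype=bool)
mask[:, 2:] = False
w = partial_weights(mask)
score = fuse_partial_scores({"top": 0.8, "bottom": 0.6,
                             "left": 0.9, "right": 0.1}, w)
print(w["left"], w["right"], score)
```

The fully occluded right half receives zero weight, so only visible evidence contributes to the fused score, which is the point of the explicit mask.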


Canadian Conference on Computer and Robot Vision | 2010

Curious George: An Integrated Visual Search Platform

David Meger; Marius Muja; Scott Helmer; Ankur Gupta; Catherine Gamroth; Tomas Hoffman; Matthew A. Baumann; Tristram Southey; Pooyan Fazli; Walter Wohlkinger; Pooja Viswanathan; James J. Little; David G. Lowe; James Orwell

This paper describes an integrated robot system, known as Curious George, that has demonstrated state-of-the-art capabilities to recognize objects in the real world. We describe the capabilities of this system, including: the ability to access web-based training data automatically and in near real-time, the ability to model the visual appearance and 3D shape of a wide variety of object categories, navigation abilities such as exploration, mapping and path following, the ability to decompose the environment based on 3D structure, allowing for attention to be focused on regions of interest, the ability to capture high-quality images of objects in the environment, and finally, the ability to correctly label those objects with high accuracy. The competence of the combined system has been validated by entry into an international competition where Curious George has been among the top performing systems each year. We discuss the implications of such successful object recognition for society, and provide several avenues for potential improvement.


Intelligent Robots and Systems | 2008

Heuristic search planning to reduce exploration uncertainty

David Meger; Ioannis M. Rekleitis; Gregory Dudek

The path followed by a mobile robot while mapping an environment (i.e. an exploration trajectory) plays a large role in determining the efficiency of the mapping process and the accuracy of any resulting metric map of the environment. This paper examines some important aspects of path planning in this context: the trade-offs between the speed of the exploration process versus the accuracy of resulting maps; and alternating between exploration of new territory and planning through known maps. The resulting motion planning strategy and associated heuristic are targeted to a robot building a map of an environment assisted by a Sensor Network composed of uncalibrated monocular cameras. An adaptive heuristic exploration strategy based on A* search over a combined distance and uncertainty cost function allows for adaptation to the environment and improvement in mapping accuracy. We assess the technique using an illustrative experiment in a real environment and a set of simulations in a parametric family of idealized environments.
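The combined distance-and-uncertainty cost can be sketched with a compact A* over a toy grid; the grid, cost weights, and uncertainty field are invented for illustration, and the real planner works over maps rather than a 2x2 array.

```python
# A* over a grid where each step pays distance plus an uncertainty penalty,
# echoing a combined distance-and-uncertainty cost. All values are invented.
import heapq

def astar(grid_uncertainty, start, goal, w_dist=1.0, w_unc=1.0):
    """Cheapest path where step cost = w_dist + w_unc * cell uncertainty."""
    rows, cols = len(grid_uncertainty), len(grid_uncertainty[0])
    def h(p):  # admissible Manhattan heuristic (ignores uncertainty)
        return w_dist * (abs(p[0] - goal[0]) + abs(p[1] - goal[1]))
    frontier = [(h(start), 0.0, start, [start])]
    best = {}
    while frontier:
        f, g, cur, path = heapq.heappop(frontier)
        if cur == goal:
            return g, path
        if best.get(cur, float("inf")) <= g:
            continue
        best[cur] = g
        r, c = cur
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols:
                ng = g + w_dist + w_unc * grid_uncertainty[nr][nc]
                heapq.heappush(frontier,
                               (ng + h((nr, nc)), ng, (nr, nc), path + [(nr, nc)]))
    return float("inf"), []

# the planner detours around the high-uncertainty cell at (0, 1)
unc = [[0.0, 9.0],
       [0.0, 0.0]]
cost, path = astar(unc, (0, 0), (1, 1))
print(cost, path)
```

Raising w_unc relative to w_dist trades exploration speed for map accuracy, which is exactly the trade-off the paper examines.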

Collaboration


Dive into David Meger's collaborations.

Top Co-Authors


James J. Little

University of British Columbia


Scott Helmer

University of British Columbia


David G. Lowe

University of British Columbia


Tristram Southey

University of British Columbia


Sancho McCann

University of British Columbia


Ioannis M. Rekleitis

University of South Carolina
