
Publication


Featured research published by James J. Little.


European Conference on Computer Vision | 2004

A Boosted Particle Filter: Multitarget Detection and Tracking

Kenji Okuma; Ali Taleghani; Nando de Freitas; James J. Little; David G. Lowe

The problem of tracking a varying number of non-rigid objects has two major difficulties. First, the observation models and target distributions can be highly non-linear and non-Gaussian. Second, the presence of a large, varying number of objects creates complex interactions with overlap and ambiguities. To surmount these difficulties, we introduce a vision system that is capable of learning, detecting and tracking the objects of interest. The system is demonstrated in the context of tracking hockey players using video sequences. Our approach combines the strengths of two successful algorithms: mixture particle filters and Adaboost. The mixture particle filter [17] is ideally suited to multi-target tracking as it assigns a mixture component to each player. The crucial design issues in mixture particle filters are the choice of the proposal distribution and the treatment of objects leaving and entering the scene. Here, we construct the proposal distribution using a mixture model that incorporates information from the dynamic models of each player and the detection hypotheses generated by Adaboost. The learned Adaboost proposal distribution allows us to quickly detect players entering the scene, while the filtering process enables us to keep track of the individual players. The result of interleaving Adaboost with mixture particle filters is a simple, yet powerful and fully automatic multiple object tracking system.
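
As a rough illustration of the proposal distribution described above, the sketch below mixes a dynamic-model prediction with a Gaussian centred on a detection, for one mixture component (one player). It is a minimal toy under assumed 2-D states and Gaussian noise; the mixing weight alpha, the noise scales, and the likelihood are illustrative placeholders, not the paper's tuned values.

```python
# Minimal sketch of a boosted proposal for one mixture component;
# all parameters and the likelihood are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def propose(particles, detection, alpha=0.3, dyn_std=2.0, det_std=1.0):
    """Draw new particles from a mixture of the dynamic model and a
    Gaussian centred on an AdaBoost detection hypothesis."""
    n = len(particles)
    from_det = rng.random(n) < alpha            # which component fires
    dyn = particles + rng.normal(0, dyn_std, particles.shape)
    det = detection + rng.normal(0, det_std, particles.shape)
    return np.where(from_det[:, None], det, dyn)

def reweight(particles, observation, obs_std=1.5):
    """Weight particles by a toy Gaussian observation likelihood."""
    d2 = np.sum((particles - observation) ** 2, axis=1)
    w = np.exp(-0.5 * d2 / obs_std ** 2)
    return w / w.sum()

def resample(particles, weights):
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx]

# one filtering step for a single tracked player
particles = rng.normal([10.0, 5.0], 1.0, size=(200, 2))
detection = np.array([11.0, 5.5])    # hypothetical AdaBoost detection
observation = np.array([10.8, 5.4])  # hypothetical measurement
particles = propose(particles, detection)
particles = resample(particles, reweight(particles, observation))
```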


The International Journal of Robotics Research | 2002

Mobile Robot Localization and Mapping with Uncertainty using Scale-Invariant Visual Landmarks

Stephen Se; David G. Lowe; James J. Little

A key component of a mobile robot system is the ability to localize itself accurately and, simultaneously, to build a map of the environment. Most of the existing algorithms are based on laser range finders, sonar sensors or artificial landmarks. In this paper, we describe a vision-based mobile robot localization and mapping algorithm, which uses scale-invariant image features as natural landmarks in unmodified environments. The invariance of these features to image translation, scaling and rotation makes them suitable landmarks for mobile robot localization and map building. With our Triclops stereo vision system, these landmarks are localized and robot ego-motion is estimated by least-squares minimization of the matched landmarks. Feature viewpoint variation and occlusion are taken into account by maintaining a view direction for each landmark. Experiments show that these visual landmarks are robustly matched, robot pose is estimated and a consistent three-dimensional map is built. As image features are not noise-free, we carry out error analysis for the landmark positions and the robot pose. We use Kalman filters to track these landmarks in a dynamic environment, resulting in a database map with landmark positional uncertainty.
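
The landmark-tracking step lends itself to a compact illustration: each landmark's 3-D position can be maintained by a Kalman filter whose state is the position itself and whose measurements are stereo-triangulated observations. The sketch below is a minimal version under those assumptions (static landmark, measurement matrix H = I); the noise values are made up for illustration.

```python
# Minimal sketch: Kalman update of one landmark's 3-D position,
# with illustrative (assumed) noise covariances.
import numpy as np

def kf_update(x, P, z, R):
    """Measurement update for a static landmark: state = 3-D position,
    measurement = stereo-triangulated position (H = I)."""
    S = P + R                      # innovation covariance
    K = P @ np.linalg.inv(S)       # Kalman gain
    x = x + K @ (z - x)            # corrected position
    P = (np.eye(3) - K) @ P        # corrected (shrinking) covariance
    return x, P

x = np.array([1.0, 0.5, 3.0])      # initial triangulated position
P = np.eye(3) * 0.2                # initial positional uncertainty
R = np.eye(3) * 0.05               # per-observation measurement noise
for z in [np.array([1.02, 0.48, 2.95]), np.array([0.99, 0.52, 3.05])]:
    x, P = kf_update(x, P, z, R)   # uncertainty shrinks with each match
```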


International Conference on Robotics and Automation | 2001

Vision-based mobile robot localization and mapping using scale-invariant features

Stephen Se; David G. Lowe; James J. Little

A key component of a mobile robot system is the ability to localize itself accurately and build a map of the environment simultaneously. In this paper, a vision-based mobile robot localization and mapping algorithm is described which uses scale-invariant image features as landmarks in unmodified dynamic environments. These 3D landmarks are localized and robot ego-motion is estimated by matching them, taking into account the feature viewpoint variation. With our Triclops stereo vision system, experiments show that these features are robustly matched between views, 3D landmarks are tracked, robot pose is estimated and a 3D map is built.
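
Estimating ego-motion by least squares over matched 3-D landmarks is a standard rigid-alignment problem, and a common closed-form solution (the Kabsch/Horn method, via SVD) is sketched below. This shows the general technique, not necessarily the authors' exact formulation; the sanity check at the end is illustrative.

```python
# Minimal sketch: closed-form least-squares rigid alignment of
# matched 3-D landmark sets (general Kabsch/Horn technique).
import numpy as np

def rigid_fit(A, B):
    """Least-squares R, t minimising ||B - (R A + t)||^2 over matched
    3-D landmark sets A (previous view) and B (current view)."""
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    U, _, Vt = np.linalg.svd((A - ca).T @ (B - cb))
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T             # D guards against a reflection
    return R, cb - R @ ca

# sanity check with a known motion (90-degree yaw plus translation)
rng = np.random.default_rng(0)
A = rng.normal(size=(20, 3))
R_true = np.array([[0.0, -1.0, 0.0],
                   [1.0,  0.0, 0.0],
                   [0.0,  0.0, 1.0]])
B = A @ R_true.T + np.array([0.3, 0.0, 0.1])
R, t = rigid_fit(A, B)             # recovers R_true and t
```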


IEEE Transactions on Robotics | 2005

Vision-based global localization and mapping for mobile robots

Stephen Se; David G. Lowe; James J. Little

We have previously developed a mobile robot system which uses scale-invariant visual landmarks to localize and simultaneously build three-dimensional (3-D) maps of unmodified environments. In this paper, we examine global localization, where the robot localizes itself globally, without any prior location estimate. This is achieved by matching distinctive visual landmarks in the current frame to a database map. A Hough transform approach and a RANSAC approach for global localization are compared, showing that RANSAC is much more efficient for matching specific features, but much worse for matching nonspecific features. Moreover, robust global localization can be achieved by matching a small submap of the local region built from multiple frames. This submap alignment algorithm for global localization can be applied to map building, which can be regarded as alignment of multiple 3-D submaps. A global minimization procedure is carried out using the loop closure constraint to avoid the effects of slippage and drift accumulation. Landmark uncertainty is taken into account in the submap alignment and the global minimization process. Experiments show that global localization can be achieved accurately using the scale-invariant landmarks. Our approach of pairwise submap alignment with backward correction in a consistent manner produces a better global 3-D map.
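
For the RANSAC side of the comparison, a minimal sketch of pose hypothesisation from putative landmark matches follows, assuming 3-D point landmarks and precomputed correspondences cur[i] <-> db[i]; the minimal sample size, iteration count, and inlier tolerance are illustrative choices, not the paper's.

```python
# Minimal RANSAC sketch for global localization from putative
# landmark matches; parameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)

def fit_rigid(A, B):
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    U, _, Vt = np.linalg.svd((A - ca).T @ (B - cb))
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    return R, cb - R @ ca

def ransac_pose(cur, db, iters=200, tol=0.1):
    """Hypothesise a pose from minimal samples of matches
    (cur[i] <-> db[i]) and keep the best-supported hypothesis."""
    best, best_inl = None, 0
    for _ in range(iters):
        idx = rng.choice(len(cur), 3, replace=False)   # minimal sample
        R, t = fit_rigid(cur[idx], db[idx])
        err = np.linalg.norm(cur @ R.T + t - db, axis=1)
        inl = int((err < tol).sum())
        if inl > best_inl:
            best, best_inl = (R, t), inl
    return best, best_inl

# toy usage: the database map is a rotated/translated copy of the
# current-view landmarks, with a few corrupted (outlier) matches
theta = 0.4
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
cur = rng.normal(size=(30, 3))
db = cur @ Rz.T + np.array([1.0, -0.5, 0.0])
db[:5] += rng.normal(0, 2.0, (5, 3))      # outlier matches
(R, t), inliers = ransac_pose(cur, db)    # ~25 inliers expected
```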


Autonomous Robots | 2000

Using Real-Time Stereo Vision for Mobile Robot Navigation

Don Ray Murray; James J. Little

This paper describes a working vision-based mobile robot that navigates and autonomously explores its environment while building occupancy grid maps of the environment. We present a method for reducing stereo vision disparity images to two-dimensional map information. Stereo vision has several attributes that set it apart from other sensors more commonly used for occupancy grid mapping. We discuss these attributes, the errors that some of them create, and how to overcome them. We reduce errors by segmenting disparity images based on continuous disparity surfaces to reject “spikes” caused by stereo mismatches. Stereo vision processing and map updates are done at 5 Hz and the robot moves at speeds of 300 cm/s.
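
A minimal sketch of the disparity-to-grid reduction is below, assuming a pinhole stereo pair with illustrative intrinsics (Z = fB/d) and a simple additive cell update; it omits the disparity-surface segmentation the paper uses to reject mismatch spikes.

```python
# Minimal sketch: one disparity row reduced to 2-D occupancy
# evidence; camera parameters and update weight are assumptions.
import numpy as np

def disparity_to_points(disp, f=400.0, baseline=0.12, cx=160.0):
    """Convert one disparity row into (x, z) ground-plane points:
    Z = f*B/d, X = (u - cx)*Z/f."""
    u = np.arange(disp.size)
    valid = disp > 1.0
    z = f * baseline / disp[valid]
    x = (u[valid] - cx) * z / f
    return np.stack([x, z], axis=1)

def update_grid(grid, pts, res=0.05, hit=0.4):
    """Log-odds-style occupancy update at 5 cm resolution,
    robot at the grid centre."""
    ij = np.floor(pts / res).astype(int) + np.array(grid.shape) // 2
    ok = (ij >= 0).all(axis=1) & (ij < grid.shape).all(axis=1)
    for i, j in ij[ok]:
        grid[i, j] += hit
    return grid

grid = np.zeros((300, 300))
disp = np.full(320, 8.0)          # background at ~6 m
disp[150:170] = 20.0              # a synthetic obstacle at ~2.4 m
grid = update_grid(grid, disparity_to_points(disp))
```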


Biological Cybernetics | 1991

Inverse perspective mapping simplifies optical flow computation and obstacle detection

Hanspeter A. Mallot; H. H. Bülthoff; James J. Little; S. Bohrer

We present a scheme for obstacle detection from optical flow which is based on strategies of biological information processing. Optical flow is established by a local “voting” (non-maximum suppression) over the outputs of correlation-type motion detectors similar to those found in the fly visual system. The computational theory of obstacle detection is discussed in terms of space-variances of the motion field. An efficient mechanism for the detection of disturbances in the expected motion field is based on “inverse perspective mapping”, i.e., a coordinate transform or retinotopic mapping applied to the image. It turns out that besides obstacle detection, inverse perspective mapping has additional advantages for regularizing optical flow algorithms. Psychophysical evidence for body-scaled obstacle detection and related neurophysiological results are discussed.
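
Inverse perspective mapping itself is easy to state in code: every cell of a ground-plane grid is projected through the camera and the image is sampled there, so that after the warp a flat ground plane yields a uniform expected motion field and obstacles show up as deviations. The sketch below assumes a calibrated pinhole camera (K, R, t mapping world points into the camera) and a ground plane z = 0, with nearest-neighbour sampling for brevity.

```python
# Minimal sketch of inverse perspective mapping onto the ground
# plane z = 0; ranges and resolution are illustrative assumptions.
import numpy as np

def ipm(image, K, R, t, x_range=(0.0, 5.0), y_range=(-2.0, 2.0),
        res=0.02):
    """Warp a grayscale `image` onto the ground plane by projecting
    each ground cell through the camera and sampling the pixel."""
    xs = np.arange(*x_range, res)
    ys = np.arange(*y_range, res)
    X, Y = np.meshgrid(xs, ys)
    pts = np.stack([X, Y, np.zeros_like(X), np.ones_like(X)], axis=-1)
    P = K @ np.hstack([R, t[:, None]])          # 3x4 projection matrix
    uvw = pts @ P.T
    u = (uvw[..., 0] / uvw[..., 2]).round().astype(int)
    v = (uvw[..., 1] / uvw[..., 2]).round().astype(int)
    h, w = image.shape[:2]
    out = np.zeros_like(X, dtype=image.dtype)
    ok = (u >= 0) & (u < w) & (v >= 0) & (v < h) & (uvw[..., 2] > 0)
    out[ok] = image[v[ok], u[ok]]               # nearest-neighbour sample
    return out
```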


Computer Vision and Pattern Recognition | 2007

A Linear Programming Approach for Multiple Object Tracking

Hao Jiang; Sidney S. Fels; James J. Little

We propose a linear programming relaxation scheme for the class of multiple object tracking problems where the inter-object interaction metric is convex and the intra-object term quantifying object state continuity may use any metric. The proposed scheme models object tracking as a multi-path searching problem. It explicitly models track interaction, such as object spatial layout consistency or mutual occlusion, and optimizes multiple object tracks simultaneously. The proposed scheme does not rely on track initialization and complex heuristics. It has much less average complexity than previous efficient exhaustive search methods such as extended dynamic programming and is found to be able to find the global optimum with high probability. We have successfully applied the proposed method to multiple object tracking in video streams.
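
To make the linear-programming view concrete, here is a toy instance under strong assumptions (two targets, three frames, two detections per frame, no births or deaths, Euclidean edge costs): tracks become unit flows through a detection trellis, and the relaxation is solved with scipy. In this small network the LP solution comes out integral, mirroring the paper's observation that the relaxation attains the global optimum with high probability; this is a sketch of the general multi-path idea, not the paper's exact formulation.

```python
# Toy LP relaxation of tracking-as-multi-path-search; the instance
# and constraint structure are illustrative assumptions.
import numpy as np
from scipy.optimize import linprog

# detections (x, y) in three frames; two well-separated targets
frames = [np.array([[0.0, 0.0], [5.0, 0.0]]),
          np.array([[0.5, 0.2], [5.2, 0.1]]),
          np.array([[1.0, 0.3], [5.5, 0.2]])]

def edge_costs(a, b):
    return np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)

# variable order: e0[i,j] (indices 0..3), e1[j,k] (indices 4..7)
c = np.concatenate([edge_costs(frames[0], frames[1]).ravel(),
                    edge_costs(frames[1], frames[2]).ravel()])
A_eq, b_eq = [], []
for i in range(2):            # each frame-0 detection starts one track
    row = np.zeros(8); row[2 * i:2 * i + 2] = 1
    A_eq.append(row); b_eq.append(1)
for j in range(2):            # flow conservation at frame-1 detections
    row = np.zeros(8)
    row[[j, 2 + j]] = 1
    row[4 + 2 * j:4 + 2 * j + 2] = -1
    A_eq.append(row); b_eq.append(0)
for k in range(2):            # each frame-2 detection ends one track
    row = np.zeros(8); row[[4 + k, 6 + k]] = 1
    A_eq.append(row); b_eq.append(1)

res = linprog(c, A_eq=np.array(A_eq), b_eq=b_eq, bounds=(0, 1))
print(res.x.round(2))         # integral flows: the two tracks recovered
```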


European Conference on Computer Vision | 2006

Robust visual tracking for multiple targets

Yizheng Cai; Nando de Freitas; James J. Little

We address the problem of robust multi-target tracking within the application of hockey player tracking. The particle filter technique is adopted and modified to fit into the multi-target tracking framework. A rectification technique is employed to find the correspondence between the video frame coordinates and the standard hockey rink coordinates so that the system can compensate for camera motion and improve the dynamics of the players. A global nearest neighbor data association algorithm is introduced to assign boosting detections to the existing tracks for the proposal distribution in particle filters. The mean-shift algorithm is embedded into the particle filter framework to stabilize the trajectories of the targets for robust tracking during mutual occlusion. Experimental results show that our system is able to automatically and robustly track a variable number of targets and correctly maintain their identities regardless of background clutter, camera motion and frequent mutual occlusion between targets.
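
The global nearest neighbor step can be sketched as an optimal one-to-one assignment plus a distance gate, with scipy's Hungarian solver doing the heavy lifting; the positions and gate value below are illustrative.

```python
# Minimal sketch of global nearest neighbour data association;
# positions and the gate threshold are illustrative assumptions.
import numpy as np
from scipy.optimize import linear_sum_assignment

def gnn_associate(tracks, detections, gate=3.0):
    """One-to-one assignment minimising total distance, then a gate
    that rejects implausible track/detection pairs."""
    cost = np.linalg.norm(tracks[:, None] - detections[None, :], axis=2)
    rows, cols = linear_sum_assignment(cost)   # optimal assignment
    return [(r, c) for r, c in zip(rows, cols) if cost[r, c] < gate]

tracks = np.array([[10.0, 5.0], [20.0, 8.0]])   # predicted track positions
dets = np.array([[20.5, 8.2], [10.3, 4.9], [40.0, 1.0]])
print(gnn_associate(tracks, dets))  # [(0, 1), (1, 0)]; far detection unmatched
```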


Intelligent Robots and Systems | 2002

Global localization using distinctive visual features

Stephen Se; David G. Lowe; James J. Little

We have previously developed a mobile robot system which uses scale-invariant visual landmarks to localize and simultaneously build a 3D map of the environment. In this paper, we look at global localization, also known as the kidnapped robot problem, where the robot localizes itself globally, without any prior location estimate. This is achieved by matching distinctive landmarks in the current frame to a database map. A Hough transform approach and a random sample consensus (RANSAC) approach for global localization are compared, showing that RANSAC is much more efficient. Moreover, robust global localization can be achieved by matching a small sub-map of the local region built from multiple frames.
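
For the Hough side of that comparison, a translation-only toy is sketched below: each putative landmark match votes for the robot displacement it implies, and the best-supported bin wins. Real global localization would also bin over orientation; the resolution and extent here are illustrative assumptions.

```python
# Minimal Hough-voting sketch for global localization
# (translation only; bin size and extent are assumptions).
import numpy as np

def hough_localise(cur, db_matches, res=0.25, extent=10.0):
    """Accumulate votes for the displacement implied by each putative
    landmark match and return the bin with the most support."""
    bins = int(2 * extent / res)
    acc = np.zeros((bins, bins), dtype=int)
    for p, q in zip(cur, db_matches):         # q (map) minus p (view)
        dx, dy = q[:2] - p[:2]
        i = int((dx + extent) / res)
        j = int((dy + extent) / res)
        if 0 <= i < bins and 0 <= j < bins:
            acc[i, j] += 1
    i, j = np.unravel_index(acc.argmax(), acc.shape)
    return (i * res - extent, j * res - extent), int(acc.max())

cur = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
db = cur + np.array([2.0, -1.0])
print(hough_localise(cur, db))   # ((2.0, -1.0), 3), up to bin resolution
```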


Robotics and Autonomous Systems | 2008

Curious George: An attentive semantic robot

David Meger; Per-Erik Forssén; Kevin Lai; Scott Helmer; Sancho McCann; Tristram Southey; Matthew A. Baumann; James J. Little; David G. Lowe

State-of-the-art methods have recently achieved impressive performance for recognising the objects present in large databases of pre-collected images. There has been much less focus on building embodied systems that recognise objects present in the real world. This paper describes an intelligent system that attempts to perform robust object recognition in a realistic scenario, where a mobile robot moving through an environment must use the images collected from its camera directly to recognise objects. To perform successful recognition in this scenario, we have chosen a combination of techniques including a peripheral-foveal vision system, an attention system combining bottom-up visual saliency with structure from stereo, and a localisation and mapping technique. The result is a highly capable object recognition system that can be easily trained to locate the objects of interest in an environment, and subsequently build a spatial-semantic map of the region. This capability has been demonstrated during the Semantic Robot Vision Challenge, and is further illustrated with a demonstration of semantic mapping. We also empirically verify that the attention system outperforms an undirected approach even with a significantly lower number of foveations.
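
As a loose illustration of combining bottom-up saliency with stereo structure for attention, the sketch below scores peripheral-image pixels by a weighted blend of the two cues and foveates the maximum. The blend weight and both input maps are assumptions for illustration; the paper does not prescribe this exact combination.

```python
# Illustrative sketch only: blending saliency with stereo support
# to pick a foveation target; `beta` is an assumed mixing weight.
import numpy as np

def next_foveation(saliency, stereo_support, beta=0.5):
    """Pick the peripheral-camera pixel to foveate next from a
    bottom-up saliency map and a stereo-structure support map
    (both HxW, values in [0, 1])."""
    score = (1.0 - beta) * saliency + beta * stereo_support
    return np.unravel_index(np.argmax(score), score.shape)

rng = np.random.default_rng(0)
v, u = next_foveation(rng.random((48, 64)), rng.random((48, 64)))
```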

Collaboration


Dive into James J. Little's collaborations.

Top Co-Authors

David G. Lowe, University of British Columbia
Alan K. Mackworth, University of British Columbia
Elizabeth A. Croft, University of British Columbia
Jianhui Chen, University of British Columbia
Tristram Southey, University of British Columbia
Pooja Viswanathan, University of British Columbia
Don Ray Murray, University of British Columbia
Frederick Tung, University of British Columbia
Julieta Martinez, University of British Columbia