
Publication


Featured research published by Ben Southall.


International Conference on Robotics and Automation (ICRA) | 2001

Real-time vision-based control of a nonholonomic mobile robot

Aveek K. Das; Rafael Fierro; R. Vijay Kumar; Ben Southall; John R. Spletzer; Camillo J. Taylor

This paper considers the problem of vision-based control of a nonholonomic mobile robot. We describe the design and implementation of real-time estimation and control algorithms on a car-like robot platform using a single omni-directional camera as a sensor, without explicit use of odometry. We provide experimental results for each of these vision-based control objectives. The algorithms are packaged as control modes and can be combined hierarchically to perform higher-level tasks involving multiple robots.


The International Journal of Robotics Research | 2002

A Framework and Architecture for Multi-Robot Coordination

Rafael Fierro; Aveek K. Das; John R. Spletzer; Joel M. Esposito; Vijay Kumar; James P. Ostrowski; George J. Pappas; Camillo J. Taylor; Yerang Hur; Rajeev Alur; Insup Lee; Gregory Z. Grudic; Ben Southall

In this paper, we present a framework and the software architecture for the deployment of multiple autonomous robots in an unstructured and unknown environment, with applications ranging from scouting and reconnaissance, to search and rescue, to manipulation tasks, to cooperative localization and mapping, and formation control. Our software framework allows a modular and hierarchical approach to programming deliberative and reactive behaviors in autonomous operation. Formal definitions for sequential composition, hierarchical composition, and parallel composition allow the bottom-up development of complex software systems. We demonstrate the algorithms and software on an experimental testbed that involves a group of carlike robots, each using a single omnidirectional camera as a sensor without explicit use of odometry.
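The sequential and parallel composition the abstract mentions can be pictured as operators that build new behaviors out of simpler ones. The sketch below is a minimal illustration of that idea in Python; the `Behavior` type and the example behaviors are invented for illustration and are not the paper's actual API.

```python
# Sketch of sequential and parallel composition of robot behaviors.
# The Behavior type and example behaviors are illustrative assumptions.
from typing import Callable, List

# A behavior maps a state (here just a dict) to a command (a string).
Behavior = Callable[[dict], str]

def go_to_goal(state: dict) -> str:
    return "drive" if not state.get("at_goal") else "stop"

def avoid_obstacle(state: dict) -> str:
    # Returns "" (no command) when there is nothing to avoid.
    return "swerve" if state.get("obstacle") else ""

def sequential(first: Behavior, second: Behavior,
               done: Callable[[dict], bool]) -> Behavior:
    """Run `first` until `done(state)` holds, then hand over to `second`."""
    def composed(state: dict) -> str:
        return second(state) if done(state) else first(state)
    return composed

def parallel(behaviors: List[Behavior]) -> Behavior:
    """Run behaviors side by side; earlier entries take priority."""
    def composed(state: dict) -> str:
        for b in behaviors:
            cmd = b(state)
            if cmd:
                return cmd
        return "idle"
    return composed
```

For example, `parallel([avoid_obstacle, go_to_goal])` yields a controller that swerves when an obstacle is present and otherwise drives toward the goal, mirroring how reactive and deliberative behaviors can be layered bottom-up.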


The International Journal of Robotics Research | 2002

An Autonomous Crop Treatment Robot: Part I. A Kalman Filter Model for Localization and Crop/Weed Classification

Ben Southall; Tony Hague; John A. Marchant; Bernard F. Buxton

This work is concerned with a machine vision system for an autonomous vehicle designed to treat horticultural crops. The vehicle navigates by following rows of crop (individual cauliflower plants) that are planted in a reasonably regular array typical of commercial practice. We adopt an extended Kalman filter approach where the observation model consists of a grid which is matched to the crop planting pattern in the perspective view through the vehicle camera. Plant features are extracted by thresholding near infrared images of the scene evolving before the camera. A clustering method collects the features into groups representing single plants. An important aspect of the approach is that it provides both localization information and crop/weed discrimination within a single framework, since we can assume that features not matching the planting pattern are weeds. Off-line tests with two image sequences are carried out to compare the tracking with assessment by three different humans. These show that the extended Kalman filter is a viable method for tracking and that the model parameters derived from the filter are consistent with human assessment. We conclude that the performance will be good enough for accurate in-field navigation.
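The extended Kalman filter at the core of this system follows the standard predict/update recursion. As a minimal sketch of that recursion (a generic scalar case, not the paper's crop-grid observation model):

```python
# Minimal one-dimensional Kalman filter sketch: the predict/update
# recursion underlying the tracker described above. Generic scalar
# example; the crop-grid observation model is not reproduced here.

def kf_predict(x, p, q):
    """Predict step: constant-state model with process noise variance q."""
    return x, p + q

def kf_update(x, p, z, r):
    """Update step: fuse measurement z with measurement noise variance r."""
    k = p / (p + r)          # Kalman gain
    x_new = x + k * (z - x)  # corrected state estimate
    p_new = (1.0 - k) * p    # reduced uncertainty
    return x_new, p_new

# One filter cycle: predict, then correct with a measurement.
x, p = 0.0, 1.0
x, p = kf_predict(x, p, q=0.1)
x, p = kf_update(x, p, z=1.0, r=0.5)
```

In the paper's setting the state is the vehicle pose relative to the crop grid and the update uses matched plant features, but the structure of the cycle is the same.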


Computer Vision and Pattern Recognition (CVPR) | 2009

Real-time vehicle detection for highway driving

Ben Southall; Mayank Bansal; Jayan Eledath

We present a new multi-stage algorithm for car and truck detection from a moving vehicle. The algorithm performs a search for pertinent features in three dimensions, guided by a ground plane and lane boundary estimation sub-system, and assembles these features into vehicle hypotheses. A number of classifiers are applied to the hypotheses in order to remove false detections. Quantitative analysis on real-world test data shows a detection rate of 99.4% and a false positive rate of 1.77%; a result that compares favourably with other systems in the literature.
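The false-detection removal step described above amounts to a cascade: a hypothesis survives only if every classifier stage accepts it. A minimal sketch of that pattern, with toy stage functions that are invented placeholders rather than the paper's learned classifiers:

```python
# Cascade sketch: keep only hypotheses accepted by every classifier
# stage. The stage predicates below are toy placeholders.

def cascade(hypotheses, stages):
    """Return the hypotheses that pass all classifier stages."""
    survivors = []
    for h in hypotheses:
        if all(stage(h) for stage in stages):
            survivors.append(h)
    return survivors

# Toy stages: plausible width and aspect ratio for a road vehicle.
stages = [
    lambda h: 1.0 <= h["width_m"] <= 3.0,
    lambda h: h["aspect"] < 2.0,
]
hyps = [{"width_m": 1.8, "aspect": 1.2},   # passes both stages
        {"width_m": 5.0, "aspect": 1.1}]   # rejected: too wide
```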


Computer Vision and Pattern Recognition (CVPR) | 2005

Stereo-Based Object Detection, Classification, and Quantitative Evaluation with Automotive Applications

Peng Chang; David Hirvonen; Theodore Camus; Ben Southall

A real-time stereo-based pre-crash object detection and classification system is presented. The system employs a model based stereo object detection algorithm to find candidate objects from the scene, followed by a Bayesian classification framework to assign each candidate to its proper class. Our current system detects and classifies several types of objects commonly seen for automotive applications, namely vehicles, pedestrians/bikes, and poles. We describe both the detection and classification algorithms in detail along with real-time implementation issues. A quantitative analysis of performance on a static data set is also presented.
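Assigning each candidate to its most probable class, as the Bayesian framework above does, reduces to maximizing prior times likelihood over the classes. A minimal sketch (the class list matches the abstract; the numbers are illustrative, not the paper's learned models):

```python
# Bayesian class assignment sketch: posterior ∝ prior × likelihood.
# Class priors and likelihood values below are toy numbers.

def classify(likelihoods, priors):
    """Return the class maximizing prior * likelihood."""
    scores = {c: priors[c] * likelihoods[c] for c in priors}
    return max(scores, key=scores.get)

priors = {"vehicle": 0.5, "pedestrian": 0.3, "pole": 0.2}
likelihoods = {"vehicle": 0.1, "pedestrian": 0.6, "pole": 0.2}
```

Here `classify(likelihoods, priors)` picks "pedestrian", since 0.3 × 0.6 exceeds the other prior-likelihood products.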


International Conference on Robotics and Automation (ICRA) | 2011

Rapid multi-robot exploration with topometric maps

Anthony Cowley; Camillo J. Taylor; Ben Southall

Multi-robot map building has advanced to the point where high quality occupancy grid data may be collected by multiple robots collaborating with only intermittent connectivity. However, the tasking of these agents to most efficiently build the map is a problem that has seen less attention. Unfamiliar, highly cluttered environments can confound exploration strategies that rely solely on occupancy grid frontier identification or even semantic classification methods keyed on geometric features. To reason about partial maps of novel, highly cluttered locations, hypotheses about significant structure in the disposition of free space may be used to guide exploration task assignment. A parsing of map data into places with semantic significance to the exploration task provides a foundation from which one may infer an efficient exploration strategy.
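The frontier identification that the paper takes as a baseline is simple to state: a frontier cell is a free cell adjacent to unknown space. A minimal sketch, assuming a toy grid encoding (0 = free, 1 = occupied, -1 = unknown) that is an illustration rather than the paper's map representation:

```python
# Occupancy-grid frontier detection sketch.
# Cell encoding (assumed): 0 = free, 1 = occupied, -1 = unknown.

def frontiers(grid):
    """Return (row, col) of free cells bordering unknown space."""
    rows, cols = len(grid), len(grid[0])
    out = []
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] != 0:
                continue  # only free cells can be frontiers
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == -1:
                    out.append((r, c))
                    break
    return out

grid = [[0, 0, -1],
        [1, 0, -1],
        [1, 0,  0]]
```

The paper's point is that in cluttered, novel environments this signal alone is a weak guide, which motivates the semantic parsing of partial maps.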


International Conference on Robotics and Automation (ICRA) | 2011

A LIDAR streaming architecture for mobile robotics with application to 3D structure characterization

Mayank Bansal; Bogdan Calin Mihai Matei; Ben Southall; Jayan Eledath; Harpreet S. Sawhney

We present a novel LIDAR streaming architecture for real-time, on-board processing using unmanned robots. We propose a two-level 3D data structure that allows pipelined and streaming processing of the 3D data as it arrives from a moving robot: (i) at the coarse level, the incoming 3D scans are stored in memory in a dense 3D voxel grid with a relatively large voxel size - this ensures buffering of the most recent data and the availability of sufficient 3D measurements within a specific processing volume at the next level; (ii) at the fine level, we employ a data chunking mechanism guided by the movement of the robot and a rolling dense 3D voxel grid for processing the data in the immediate vicinity of the robot, which enables reuse of previously computed features. The architecture proposed requires a very small memory footprint, minimal data copying, and allows a fast spatial access for 3D data, even at the finest resolutions. We illustrate the proposed streaming architecture on a real-time 3D structure characterization task for detecting doors and stairs in indoor environments and show qualitative results demonstrating the effectiveness of our approach.
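The two-level structure described above can be sketched as a coarse hash-keyed voxel grid that buffers incoming scans, from which a fine-resolution window around the robot is extracted for detailed processing. The voxel sizes, window radius, and function names below are illustrative assumptions, not the paper's implementation:

```python
# Two-level voxel grid sketch: coarse buffering grid plus a fine
# rolling window around the robot. Sizes and names are assumptions.
from collections import defaultdict

COARSE = 1.0   # coarse voxel edge length (m), assumed
FINE = 0.1     # fine voxel edge length (m), assumed

def voxel_key(p, size):
    """Quantize a 3D point to its voxel index at the given edge length."""
    return tuple(int(c // size) for c in p)

def insert_scan(grid, points):
    """Buffer scan points into the coarse grid, keyed by voxel index."""
    for p in points:
        grid[voxel_key(p, COARSE)].append(p)

def fine_window(grid, robot_pos, radius=1):
    """Re-voxelize points within `radius` coarse voxels of the robot
    at the fine resolution, for detailed local processing."""
    cx, cy, cz = voxel_key(robot_pos, COARSE)
    window = defaultdict(list)
    for dx in range(-radius, radius + 1):
        for dy in range(-radius, radius + 1):
            for dz in range(-radius, radius + 1):
                for p in grid.get((cx + dx, cy + dy, cz + dz), []):
                    window[voxel_key(p, FINE)].append(p)
    return window

grid = defaultdict(list)
insert_scan(grid, [(0.05, 0.2, 0.0), (0.95, 0.2, 0.0), (5.0, 5.0, 0.0)])
```

Keying voxels by quantized coordinates in a hash map is what keeps the memory footprint small: only occupied voxels are stored, and spatial lookups stay O(1) per voxel.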


The International Journal of Robotics Research | 2002

An Autonomous Crop Treatment Robot: Part II. Real Time Implementation

Tony Hague; Ben Southall; N.D. Tillett

Implementation of an autonomous vehicle for precision treatment of crop plants is described. The navigation system integrates the vision system described in Part I with inertial and odometric sensing. A modular approach is adopted, where the crop grid observation model is re-formulated as a non-linear compression filter, which combines a set of observations of crop plants into a single pseudo-observation of the position of the crop planting grid relative to the vehicle position. The compression filter encapsulates all internal detail of the vision system (camera calibration, crop layout, etc.). The output from this vision module can be used as an observation in the conventional way by the vehicle's navigation Kalman filter. Plant features classified into crop and weed by the vision module are registered into a treatment map. The need to treat targets from a moving platform requires time delays in the processing of observations to be considered. A method of compensation is introduced which allows time-delayed vision observations to be used; the effectiveness of the technique is illustrated by an example with an artificially extended time delay. Field trials of the vehicle have been performed, and the accuracy of both vehicle navigation and crop treatment is reported. We conclude that the navigation accuracy falls within the 20 mm root mean square error region thought appropriate for precise horticultural operations, and that spray application is sufficiently accurate for the treatment of individual crop plants.
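Compressing many observations into one pseudo-observation has a simple linear, scalar special case: inverse-variance weighting, where the fused value carries the combined information of its inputs. The sketch below shows that special case only; the paper's compression filter is non-linear and operates on the full crop-grid model.

```python
# Scalar observation-compression sketch via inverse-variance weighting.
# This is the linear special case of fusing several noisy observations
# into a single pseudo-observation with a combined variance.

def compress(observations):
    """Fuse (value, variance) pairs into one (value, variance) pair."""
    info = sum(1.0 / var for _, var in observations)        # total information
    value = sum(z / var for z, var in observations) / info  # weighted mean
    return value, 1.0 / info                                # fused variance

# Three noisy measurements of the same quantity; the fused variance
# (0.2) is smaller than any individual one.
z, var = compress([(1.0, 0.5), (2.0, 0.5), (3.0, 1.0)])
```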


European Conference on Computer Vision (ECCV) | 2000

On the Performance Characterisation of Image Segmentation Algorithms: A Case Study

Ben Southall; Bernard F. Buxton; John A. Marchant; Tony Hague

An experimental vehicle is being developed for the purposes of precise crop treatment, with the aim of reducing chemical use and thereby improving quality and reducing both costs and environmental contamination. For differential treatment of crop and weed, the vehicle must discriminate between crop, weed and soil. We present a two stage algorithm designed for this purpose, and use this algorithm to illustrate how empirical discrepancy methods, notably the analysis of type I and type II statistical errors and receiver operating characteristic curves, may be used to compare algorithm performance over a set of test images which represent typical working conditions for the vehicle. Analysis of performance is presented for the two stages of the algorithm separately, and also for the combined algorithm. This analysis allows us to understand the effects of various types of misclassification error on the overall algorithm performance, and as such is a valuable methodology for computer vision engineers.
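The empirical discrepancy measures named above reduce to counting errors at a decision threshold: the false positive rate is the type I error rate, and the true positive rate is one minus the type II error rate; sweeping the threshold traces the ROC curve. A minimal sketch with toy scores and labels:

```python
# ROC-point sketch: type I / type II error rates at a score threshold.
# Scores and labels below are toy values, not the paper's data.

def roc_point(scores, labels, threshold):
    """Return (false_positive_rate, true_positive_rate) at a threshold.
    FPR is the type I error rate; TPR is 1 - the type II error rate."""
    tp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 1)
    fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 0)
    pos = labels.count(1)
    neg = labels.count(0)
    return fp / neg, tp / pos

scores = [0.9, 0.8, 0.4, 0.3, 0.2]
labels = [1,   1,   0,   1,   0]
```

Evaluating `roc_point` at several thresholds and plotting the resulting (FPR, TPR) pairs gives the ROC curve used to compare the algorithm variants.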


Proceedings of SPIE | 2010

Lidar-based door and stair detection from a mobile robot

Mayank Bansal; Ben Southall; Bogdan Matei; Jayan Eledath; Harpreet S. Sawhney

We present an on-the-move LIDAR-based object detection system for autonomous and semi-autonomous unmanned vehicle systems. In this paper we make several contributions: (i) we describe an algorithm for real-time detection of objects such as doors and stairs in indoor environments; (ii) we describe efficient data structures and algorithms for processing 3D point clouds acquired by laser scanners in a streaming manner, minimizing memory copying and access. We show qualitative results demonstrating the effectiveness of our approach on runs in an indoor office environment.

Collaboration


Dive into Ben Southall's collaborations.

Top Co-Authors

Camillo J. Taylor, University of Pennsylvania
Aveek K. Das, University of Pennsylvania
George J. Pappas, University of Pennsylvania
Gregory Z. Grudic, University of Pennsylvania