Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where J.R. Beveridge is active.

Publication


Featured research published by J.R. Beveridge.


IEEE Computer | 2003

High-level language abstraction for reconfigurable computing

Walid A. Najjar; Willem A. P. Bohm; Bruce A. Draper; Jeffrey Hammes; Robert G. Rinker; J.R. Beveridge; Monica Chawathe; Charlie Ross

Reconfigurable computing (RC) systems typically consist of an array of configurable computing elements. The computational granularity of these elements ranges from simple gates, as abstracted by FPGA lookup tables, to complete arithmetic-logic units with or without registers. A rich programmable interconnect completes the array. The RC system developer manually partitions an application into two segments: a hardware component, written in a hardware description language such as VHDL or Verilog, that executes as a circuit on the FPGA, and a software component that executes as a program on the host. Single-assignment C (SA-C) is a C language variant designed to create an automated compilation path from an algorithmic programming language to an FPGA-based reconfigurable computing system.
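
The key property behind the SA-C approach is that a single-assignment, side-effect-free formulation of an image operation exposes a dataflow graph that a compiler can map to hardware. As a rough conceptual sketch (Python/NumPy, not SA-C syntax, purely illustrative), the following contrasts an imperative, element-by-element formulation of a 3x3 mean filter with a single-assignment formulation of the same computation:

    # Conceptual sketch only: contrasts an imperative, in-place update with a
    # single-assignment (pure dataflow) formulation of a 3x3 mean filter.
    # This is NOT SA-C syntax; it only illustrates the single-assignment
    # property that makes automatic mapping to hardware tractable.
    import numpy as np

    def mean3x3_imperative(img):
        """Imperative style: a pre-allocated output array is mutated in place."""
        out = np.zeros_like(img, dtype=float)
        for r in range(1, img.shape[0] - 1):
            for c in range(1, img.shape[1] - 1):
                out[r, c] = img[r - 1:r + 2, c - 1:c + 2].mean()
        return out

    def mean3x3_single_assignment(img):
        """Single-assignment style: every value is defined exactly once as a
        pure function of the inputs, i.e. a dataflow graph with no mutation."""
        img = img.astype(float)
        shifted = [np.roll(np.roll(img, dr, axis=0), dc, axis=1)
                   for dr in (-1, 0, 1) for dc in (-1, 0, 1)]
        return sum(shifted) / 9.0

    if __name__ == "__main__":
        image = np.random.rand(8, 8)
        a = mean3x3_imperative(image)[1:-1, 1:-1]
        b = mean3x3_single_assignment(image)[1:-1, 1:-1]
        print("max difference on interior pixels:", np.abs(a - b).max())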


IEEE Transactions on Image Processing | 2003

Accelerated image processing on FPGAs

Bruce A. Draper; J.R. Beveridge; A.P.W. Bohm; Charlie Ross; Monica Chawathe

The Cameron project has developed a language called single assignment C (SA-C), and a compiler for mapping image-based applications written in SA-C to field programmable gate arrays (FPGAs). The paper tests this technology by implementing several applications in SA-C and compiling them to an Annapolis Microsystems (AMS) WildStar board with a Xilinx XV2000E FPGA. The performance of these applications on the FPGA is compared to the performance of the same applications written in assembly code or C for an 800 MHz Pentium III. (Although no comparison across processors is perfect, these chips were the first of their respective classes fabricated at 0.18 microns, and are therefore of comparable ages.) We find that applications written in SA-C and compiled to FPGAs are between 8 and 800 times faster than the equivalent program run on the Pentium III.


IEEE Transactions on Pattern Analysis and Machine Intelligence | 1997

How easy is matching 2D line models using local search?

J.R. Beveridge; Edward M. Riseman

Local search is a well-established and highly effective method for solving complex combinatorial optimization problems. Here, local search is adapted to solve difficult geometric matching problems. Matching is posed as the problem of finding the optimal many-to-many correspondence mapping between a line segment model and image line segments. Image data is assumed to be fragmented, noisy, and cluttered. The algorithms presented have been used for robot navigation, photo interpretation, and scene understanding. This paper explores how local search performs as model complexity increases, image clutter increases, and additional model instances are added to the image data. Expected run-times to find optimal matches with 95 percent confidence are determined for 48 distinct problems involving six models. Nonlinear regression is used to estimate run-time growth as a function of problem size. Both polynomial and exponential growth models are fit to the run-time data. For problems with random clutter, the polynomial model fits better and growth is comparable to that for tree search. For problems involving symmetric models and multiple model instances, where tree search is exponential, the polynomial growth model is superior to the exponential growth model for one search algorithm and comparable for another.
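
As a rough illustration of the local-search formulation, the sketch below hill-climbs over a binary model-to-image correspondence matrix using single-pair toggle moves. The match-error function is a stand-in (midpoint distance under an assumed pre-alignment plus an omission penalty), not the objective used in the paper.

    # Hypothetical sketch of local search over a many-to-many line correspondence.
    # The error function below is a stand-in, not the paper's objective.
    import numpy as np

    def match_error(corr, model, image, omit_penalty=5.0):
        """corr[i, j] == 1 means model segment i is matched to image segment j.
        Error = summed midpoint distance of matched pairs + penalty per
        unmatched model segment (assumes the model is roughly pre-aligned)."""
        err = 0.0
        for i in range(len(model)):
            js = np.nonzero(corr[i])[0]
            if js.size == 0:
                err += omit_penalty
            else:
                err += sum(np.linalg.norm(model[i] - image[j]) for j in js)
        return err

    def local_search(model, image, iters=200, seed=0):
        rng = np.random.default_rng(seed)
        corr = np.zeros((len(model), len(image)), dtype=int)
        best = match_error(corr, model, image)
        for _ in range(iters):
            i = rng.integers(len(model))
            j = rng.integers(len(image))
            corr[i, j] ^= 1                      # toggle one correspondence
            err = match_error(corr, model, image)
            if err < best:
                best = err                       # keep the improving move
            else:
                corr[i, j] ^= 1                  # otherwise undo it
        return corr, best

    if __name__ == "__main__":
        model = np.random.rand(6, 2) * 10                       # model midpoints
        image = np.vstack([model + 0.1, np.random.rand(8, 2) * 10])  # data + clutter
        corr, err = local_search(model, image)
        print("final error:", round(err, 3))
        print("matches found:", int(corr.sum()))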


IEEE Computer | 1989

ISR: a database for symbolic processing in computer vision

John Brolio; Bruce A. Draper; J.R. Beveridge; Allen R. Hanson

ISR (intermediate symbolic representation), a representation and management system for use at the intermediate (symbolic) level of vision, is described. ISR mediates access to intermediate-level vision data and forms an active interface to the higher-level inference processes that construct an image's interpretation. The system supports important types of data and operations and can be adapted to the changing needs of ongoing research. It provides a centralized data representation that supports integration of results from multiple avenues of research into the overall vision system. ISR's underlying computational paradigm is explained, database requirements for image interpretation are identified, the ISR data management system is described, and the use of ISR is discussed.
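
To make the idea of an intermediate-level symbolic database concrete, here is a small hypothetical sketch of a token store with attribute-based queries. The class and field names are illustrative assumptions and do not reflect the actual ISR interface.

    # Hypothetical sketch of an intermediate-level "token set" store with simple
    # attribute queries. Names are illustrative; this is not the ISR interface.
    from dataclasses import dataclass, field

    @dataclass
    class Token:
        kind: str                      # e.g. "line", "region"
        attrs: dict = field(default_factory=dict)

    class TokenSet:
        def __init__(self):
            self._tokens = []

        def add(self, token):
            self._tokens.append(token)
            return token

        def select(self, kind=None, predicate=lambda t: True):
            """Return tokens of a given kind satisfying an attribute predicate."""
            return [t for t in self._tokens
                    if (kind is None or t.kind == kind) and predicate(t)]

    if __name__ == "__main__":
        ts = TokenSet()
        ts.add(Token("line", {"length": 42.0, "orientation": 0.1}))
        ts.add(Token("line", {"length": 7.5, "orientation": 1.4}))
        ts.add(Token("region", {"area": 300}))
        long_lines = ts.select("line", lambda t: t.attrs["length"] > 20)
        print(len(long_lines), "long line token(s) found")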


IEEE Transactions on Image Processing | 1997

Precise matching of 3-D target models to multisensor data

M.R. Stevens; J.R. Beveridge

This paper presents a three-dimensional (3-D) model-based automatic target recognition (ATR) algorithm that operates simultaneously on imagery from three heterogeneous, approximately boresight-aligned sensors. An iterative search matches models to range and optical imagery by repeatedly predicting detectable features, measuring support for these features in the imagery, and adjusting the transformations relating the target to the sensors in order to improve the match. The result is a locally optimal and globally consistent set of 3-D transformations that precisely relate the best matching target features to combined range, IR, and color images. Results show the multisensor algorithm recovers 3-D target pose more accurately than does a traditional single-sensor algorithm. Errors in registration between images are also corrected during matching. The intended application is imaging from semiautonomous military scout vehicles.
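
The predict-measure-adjust loop described above can be illustrated with a generic hill-climbing sketch: predict features from the current pose, score their support in each sensor's data, and keep pose perturbations that improve the combined score. The 2D rigid pose, feature predictor, and support function below are stand-ins, not the paper's algorithm.

    # Hypothetical sketch of iterative pose refinement against multiple sensors.
    # predict_features() and support() are stand-ins for the paper's feature
    # prediction and image-support measurements.
    import numpy as np

    def predict_features(model_points, pose):
        """Apply a 2D rigid transform (tx, ty, theta) to model points."""
        tx, ty, theta = pose
        c, s = np.cos(theta), np.sin(theta)
        R = np.array([[c, -s], [s, c]])
        return model_points @ R.T + np.array([tx, ty])

    def support(predicted, sensor_points):
        """Stand-in support score: negative summed nearest-neighbor distance."""
        d = np.linalg.norm(predicted[:, None, :] - sensor_points[None, :, :], axis=2)
        return -d.min(axis=1).sum()

    def refine_pose(model_points, sensors, pose, iters=300, step=0.05, seed=0):
        rng = np.random.default_rng(seed)
        best = sum(support(predict_features(model_points, pose), s) for s in sensors)
        for _ in range(iters):
            candidate = pose + rng.normal(scale=step, size=3)   # perturb (tx, ty, theta)
            score = sum(support(predict_features(model_points, candidate), s)
                        for s in sensors)
            if score > best:                                    # keep improving moves
                pose, best = candidate, score
        return pose, best

    if __name__ == "__main__":
        model = np.random.rand(20, 2)
        true_pose = np.array([0.3, -0.2, 0.15])
        # Two "sensors" observing the same features with small, different noise.
        sensors = [predict_features(model, true_pose) + np.random.normal(0, 0.01, (20, 2))
                   for _ in range(2)]
        est, score = refine_pose(model, sensors, pose=np.zeros(3))
        print("estimated pose:", np.round(est, 3))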


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2009

Principal Angles Separate Subject Illumination Spaces in YDB and CMU-PIE

J.R. Beveridge; Bruce A. Draper; Jen-Mei Chang; Michael Kirby; Holger Kley; Chris Peterson

The theory of illumination subspaces is well developed and has been tested extensively on the Yale Face Database B (YDB) and CMU-PIE (PIE) data sets. This paper shows that if face recognition under varying illumination is cast as a problem of matching sets of images to sets of images, then the minimal principal angle between subspaces is sufficient to perfectly separate matching pairs of image sets from nonmatching pairs of image sets sampled from YDB and PIE. This is true even for subspaces estimated from as few as six images and when one of the subspaces is estimated from as few as three images if the second subspace is estimated from a larger set (10 or more). This suggests that variation under illumination may be thought of as useful discriminating information rather than unwanted noise.
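
The paper's central quantity, the minimal principal angle between two illumination subspaces, can be computed by orthonormalizing a basis for each image set and taking the arccosine of the largest singular value of the product of the bases. A minimal NumPy sketch, assuming each image is supplied as a column vector:

    # Minimal sketch: minimal principal angle between the column spans of two
    # image sets (each column is a vectorized face image).
    import numpy as np

    def minimal_principal_angle(A, B):
        """A, B: (n_pixels, n_images) matrices. Returns the smallest principal
        angle (radians) between span(A) and span(B)."""
        Qa, _ = np.linalg.qr(A)          # orthonormal basis for span(A)
        Qb, _ = np.linalg.qr(B)          # orthonormal basis for span(B)
        sigma = np.linalg.svd(Qa.T @ Qb, compute_uv=False)
        # Singular values are cosines of the principal angles; clip for safety.
        return float(np.arccos(np.clip(sigma.max(), -1.0, 1.0)))

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        base = rng.normal(size=(1024, 3))        # a shared 3-dim "subject" subspace
        same = base @ rng.normal(size=(3, 6))    # 6 images from that subspace
        other = rng.normal(size=(1024, 6))       # 6 images from an unrelated subspace
        print("same subject angle:     ", minimal_principal_angle(base, same))
        print("different subject angle:", minimal_principal_angle(base, other))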


workshop on applications of computer vision | 2000

Augmented geophysical data interpretation through automated velocity picking in semblance velocity images

J.R. Beveridge; C. Ross; D. Whitley; B. Fish

Velocity picking is the problem of picking velocity–time pairs based on a coherence metric between multiple seismic signals. Coherence as a function of velocity and time can be expressed as a 2D color semblance velocity image. Currently, humans pick velocities by looking at the semblance velocity image; this process can take days or even weeks to complete for a seismic survey. The problem can be posed as a geometric feature-matching problem. A feature extraction algorithm can recognize islands (peaks) of maximum semblance in the semblance velocity image; a heuristic combinatorial matching process can then be used to find a subset of peaks that maximizes the coherence metric. The peaks define a polyline through the image, and coherence is measured in terms of the summed semblance under the polyline and the smoothness of the polyline. Our best algorithm includes a constraint favoring solutions near the median solution for the local area under consideration. First, each image is processed independently. Then, a second pass of optimization includes proximity to the median as an additional optimization criterion. Our results are similar to those produced by human experts.
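
One simplified way to view the optimization is as a dynamic-programming pick through the semblance image: choose one velocity per time sample so that summed semblance is high while the pick varies smoothly. The sketch below is that simplification with an assumed smoothness weight; it is not the authors' peak-extraction-and-matching heuristic and omits the median constraint.

    # Simplified dynamic-programming pick through a semblance image S[t, v]:
    # maximize summed semblance minus a smoothness penalty on velocity changes.
    # This is a toy stand-in for the peak matching described in the paper.
    import numpy as np

    def pick_velocities(S, smooth_weight=0.5):
        """S: (n_times, n_velocities) semblance image. Returns one velocity
        index per time sample."""
        n_t, n_v = S.shape
        score = np.full((n_t, n_v), -np.inf)
        back = np.zeros((n_t, n_v), dtype=int)
        score[0] = S[0]
        v_idx = np.arange(n_v)
        for t in range(1, n_t):
            # cost of moving from velocity j at t-1 to velocity k at t
            jump = smooth_weight * (v_idx[None, :] - v_idx[:, None]) ** 2
            total = score[t - 1][:, None] - jump            # (prev, current)
            back[t] = total.argmax(axis=0)
            score[t] = S[t] + total.max(axis=0)
        picks = np.zeros(n_t, dtype=int)
        picks[-1] = int(score[-1].argmax())
        for t in range(n_t - 2, -1, -1):
            picks[t] = back[t + 1, picks[t + 1]]
        return picks

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        n_t, n_v = 50, 40
        true_v = np.linspace(5, 30, n_t).astype(int)        # slowly increasing trend
        S = rng.random((n_t, n_v)) * 0.2
        S[np.arange(n_t), true_v] += 1.0                    # ridge of high semblance
        picks = pick_velocities(S)
        print("mean absolute pick error:", np.abs(picks - true_v).mean())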


international conference on pattern recognition | 1996

Interleaving 3D model feature prediction and matching to support multi-sensor object recognition

Mark R. Stevens; J.R. Beveridge

The object recognition system presented combines on-line feature prediction with an iterative multisensor matching algorithm. Matching begins with an initial object type and pose hypothesis. An iterative generate-and-test procedure then refines the pose as well as the sensor-to-sensor registration for separate range and electro-optical sensors. During matching, object features predicted to be visible are updated to reflect changes in hypothesized object pose and sensor registration. The match found is locally optimal in terms of the complete space of possible matches and globally consistent in the sense of preserving the 3D constraints implied by sensor and object geometry. Results on real data are presented which demonstrate the algorithm correcting for up to 30° errors in initial orientation and 5 m errors in initial translation.
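
The on-line prediction of visible features can be illustrated with a simple back-face style test: given the hypothesized pose, keep only model faces whose outward normals point toward the sensor. The cube mesh and sensor placement below are illustrative assumptions, not the paper's feature predictor.

    # Hypothetical sketch of on-line visibility prediction: after each pose
    # update, keep only model faces whose outward normals face the sensor.
    import numpy as np

    def visible_faces(normals, centers, pose_R, pose_t, sensor_origin=np.zeros(3)):
        """normals, centers: (n_faces, 3) in model coordinates.
        pose_R (3x3), pose_t (3,): hypothesized model-to-sensor transform.
        Returns a boolean mask of faces predicted visible from the sensor."""
        n_world = normals @ pose_R.T                    # rotate normals into sensor frame
        c_world = centers @ pose_R.T + pose_t           # transform face centers
        view_dirs = sensor_origin - c_world             # from each face toward the sensor
        return np.einsum("ij,ij->i", n_world, view_dirs) > 0.0

    if __name__ == "__main__":
        # A unit cube's six face normals and centers (model coordinates).
        normals = np.array([[1, 0, 0], [-1, 0, 0], [0, 1, 0],
                            [0, -1, 0], [0, 0, 1], [0, 0, -1]], dtype=float)
        centers = 0.5 * normals
        pose_R = np.eye(3)
        pose_t = np.array([0.0, 0.0, 5.0])              # cube placed in front of the sensor
        mask = visible_faces(normals, centers, pose_R, pose_t)
        print("faces predicted visible:", int(mask.sum()))   # expect 1 for this pose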


international conference on pattern recognition | 2006

A Comparison of Pixel, Edge and Wavelet Features for Face Detection using a Semi-Naive Bayesian Classifier

J.R. Beveridge; J. Saraf; B. Randall

Henry Schneiderman at Carnegie Mellon University developed a face detection algorithm based upon a semi-naive Bayesian classifier and 5/3 linear phase wavelets. This paper explores the relative value of these wavelet features compared to simpler pixel and edge features. Experiments suggest edge features are superior for highly controlled lighting, while pixel features are better and more stable for uncontrolled lighting. Tests use the Notre Dame face data collected in Fall 2003 and Spring 2004 and use over 400,000 face and non-face test image chips.
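
A stripped-down version of the comparison can be sketched with an ordinary (not semi-naive) Gaussian naive Bayes classifier over whatever feature vector is extracted from each image chip; the synthetic "face" and "non-face" features below are placeholders for pixel, edge, or wavelet features.

    # Sketch of an ordinary Gaussian naive Bayes face / non-face classifier
    # over arbitrary per-chip feature vectors. This is NOT Schneiderman's
    # semi-naive formulation.
    import numpy as np

    class NaiveBayes:
        def fit(self, X, y):
            self.classes_ = np.unique(y)
            self.mu_ = np.array([X[y == c].mean(axis=0) for c in self.classes_])
            self.var_ = np.array([X[y == c].var(axis=0) + 1e-6 for c in self.classes_])
            self.prior_ = np.array([np.mean(y == c) for c in self.classes_])
            return self

        def predict(self, X):
            # log p(x | c) + log p(c), with per-feature independent Gaussians
            ll = -0.5 * (((X[:, None, :] - self.mu_) ** 2) / self.var_
                         + np.log(2 * np.pi * self.var_)).sum(axis=2)
            return self.classes_[np.argmax(ll + np.log(self.prior_), axis=1)]

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        faces = rng.normal(loc=1.0, size=(200, 16))        # placeholder "face" features
        nonfaces = rng.normal(loc=0.0, size=(200, 16))     # placeholder "non-face" features
        X = np.vstack([faces, nonfaces])
        y = np.array([1] * 200 + [0] * 200)
        clf = NaiveBayes().fit(X, y)
        acc = (clf.predict(X) == y).mean()
        print("training accuracy:", round(float(acc), 3))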


international conference on pattern recognition | 2000

Pose from color

M.R. Stevens; Bruce A. Draper; J.R. Beveridge

Color is a powerful cue for determining the pose of 3D objects viewed by a single camera. We present a method based on hue histograms for locating multiple objects with respect to a fixed camera. The algorithm is tested on controlled blocks-world scenes and an indoor office scene. On these examples, a pose refinement algorithm performs better when guided by color than when guided by more traditional edge information.
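
A minimal illustration of the hue cue: convert an RGB patch to HSV, histogram the hue channel, and score candidate image regions against an object's stored histogram by histogram intersection. The bin count and synthetic patches are assumptions; matplotlib's rgb_to_hsv is used only for the color conversion.

    # Minimal sketch of the hue-histogram cue: score candidate image regions
    # against a stored object hue histogram using histogram intersection.
    import numpy as np
    from matplotlib.colors import rgb_to_hsv

    def hue_histogram(rgb_patch, bins=16):
        """rgb_patch: (h, w, 3) array with values in [0, 1]."""
        hue = rgb_to_hsv(rgb_patch)[..., 0]
        hist, _ = np.histogram(hue, bins=bins, range=(0.0, 1.0))
        return hist / max(hist.sum(), 1)

    def intersection(h1, h2):
        """Histogram intersection in [0, 1]; 1 means identical distributions."""
        return float(np.minimum(h1, h2).sum())

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        # Placeholder patches: a reddish "object" model and two candidate regions.
        reddish = np.clip(rng.normal([0.8, 0.2, 0.2], 0.05, (32, 32, 3)), 0, 1)
        greenish = np.clip(rng.normal([0.2, 0.8, 0.2], 0.05, (32, 32, 3)), 0, 1)
        model_hist = hue_histogram(reddish)
        print("red candidate score:  ", intersection(model_hist, hue_histogram(reddish)))
        print("green candidate score:", intersection(model_hist, hue_histogram(greenish)))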

Collaboration


Dive into J.R. Beveridge's collaborations.

Top Co-Authors

Bruce A. Draper, Colorado State University
Charlie Ross, Colorado State University
Edward M. Riseman, University of Massachusetts Amherst
Monica Chawathe, Colorado State University
A.P.W. Bohm, Colorado State University
Allen R. Hanson, University of Massachusetts Amherst
B. Randall, Colorado State University
C. Ross, Colorado State University
Chris Peterson, Colorado State University