Beau J. Tippetts
Brigham Young University
Publications
Featured research published by Beau J. Tippetts.
Journal of Real-Time Image Processing | 2016
Beau J. Tippetts; Dah-Jye Lee; Kirt D. Lillywhite; James K. Archibald
A significant amount of research in the field of stereo vision has been published in the past decade. Considerable progress has been made in improving the accuracy of results as well as in achieving real-time performance. This work provides a comprehensive review of stereo vision algorithms with specific emphasis on real-time performance to identify those suitable for resource-limited systems. An attempt has been made to compile and present accuracy and runtime performance data for all stereo vision algorithms developed in the past decade. Algorithms are grouped into three categories: (1) those with published results showing real-time or near real-time performance on standard processors, (2) those with real-time performance on specialized hardware (e.g., GPU, FPGA, DSP, ASIC), and (3) those that have not been shown to achieve near real-time performance. This review is intended to aid those seeking algorithms suitable for real-time implementation on resource-limited systems, and to encourage further research and development in this area by providing a snapshot of the status quo.
Computational Intelligence in Robotics and Automation | 2007
Spencer G. Fowers; Dah-Jye Lee; Beau J. Tippetts; Kirt D. Lillywhite; Aaron W. Dennis; James K. Archibald
Micro Unmanned Air Vehicles are well suited for a wide variety of applications in agriculture, homeland security, military, search and rescue, and surveillance. In response to these opportunities, a quad-rotor micro UAV has been developed at the Robotic Vision Lab at Brigham Young University. The quad-rotor UAV uses a custom, low-power FPGA platform to perform computationally intensive vision processing tasks on board the vehicle, eliminating the need for wireless tethers and computational support on ground stations. Drift stabilization of the UAV has been implemented using Harris feature detection and template matching running in real-time in hardware on the on-board FPGA platform, allowing the quad-rotor to maintain a stable and almost drift-free hover without human intervention.
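As a rough software analogue of the drift-stabilization pipeline described above (the actual system runs Harris detection and template matching in FPGA hardware), the sketch below finds strong Harris corners in one frame and re-locates them in the next by SAD template matching. All function names, window sizes, and parameters are illustrative assumptions, not taken from the paper.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def harris_response(img, k=0.04, win=5):
    """Harris corner response map for a grayscale float image."""
    Iy, Ix = np.gradient(img)
    Sxx = uniform_filter(Ix * Ix, win)
    Syy = uniform_filter(Iy * Iy, win)
    Sxy = uniform_filter(Ix * Iy, win)
    det = Sxx * Syy - Sxy * Sxy
    trace = Sxx + Syy
    return det - k * trace ** 2

def match_template_sad(prev, curr, y, x, tsize=8, search=12):
    """Re-locate the patch around (y, x) in the next frame by
    minimizing the sum of absolute differences (SAD)."""
    t = prev[y - tsize:y + tsize + 1, x - tsize:x + tsize + 1]
    best, best_pos = np.inf, (y, x)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            yy, xx = y + dy, x + dx
            c = curr[yy - tsize:yy + tsize + 1, xx - tsize:xx + tsize + 1]
            sad = np.abs(c - t).sum()
            if sad < best:
                best, best_pos = sad, (yy, xx)
    return best_pos

def estimate_drift(prev, curr, n_features=20, tsize=8, search=12):
    """Median displacement of the strongest corners between frames.
    No non-max suppression here; a real detector would spread features."""
    h, w = prev.shape
    R = harris_response(prev)
    m = tsize + search  # margin so every search window stays in bounds
    pts = []
    for i in np.argsort(R.ravel())[::-1]:
        y, x = np.unravel_index(i, R.shape)
        if m <= y < h - m and m <= x < w - m:
            pts.append((y, x))
        if len(pts) == n_features:
            break
    moves = [np.subtract(match_template_sad(prev, curr, y, x, tsize, search),
                         (y, x)) for y, x in pts]
    return np.median(moves, axis=0)  # robust (dy, dx) drift estimate
```

The median over per-feature displacements is one simple way to tolerate a few bad matches; the hardware pipeline achieves robustness differently, as described in the related entries below.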
Pattern Recognition | 2013
Kirt D. Lillywhite; Dah-Jye Lee; Beau J. Tippetts; James K. Archibald
This paper presents a novel approach for object detection using a feature construction method called Evolution-COnstructed (ECO) features. Most other object recognition approaches rely on human experts to construct features. ECO features are automatically constructed by uniquely employing a standard genetic algorithm to discover series of transforms that are highly discriminative. Using ECO features provides several advantages over other object detection algorithms: no need for a human expert to build feature sets or tune their parameters, the ability to generate specialized feature sets for different objects, and no limitation to certain types of image sources. We show in our experiments that ECO features perform better than or comparably to hand-crafted state-of-the-art object recognition algorithms. An analysis of ECO features is given, including a visualization of the features and improvements made to the algorithm.
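A minimal sketch of the construction idea: a genetic algorithm searches over short sequences of image transforms, scoring each candidate genome by how well a trivial classifier separates the classes on the transformed images. The transform pool, the genome encoding, and the fitness function below are stand-ins, not the paper's actual operator set or evaluation.

```python
import random
import numpy as np
from scipy.ndimage import uniform_filter

# Hypothetical transform pool; the paper draws from a larger set of
# standard image-processing operators with tunable parameters.
TRANSFORMS = {
    "blur":      lambda img, p: uniform_filter(img, size=max(1, p // 32)),
    "gradient":  lambda img, p: np.abs(np.gradient(img)[p % 2]),
    "threshold": lambda img, p: (img > p).astype(float),
}

def random_gene():
    return (random.choice(list(TRANSFORMS)), random.randint(1, 128))

def apply_series(genome, img):
    for name, param in genome:
        img = TRANSFORMS[name](img, param)
    return img

def fitness(genome, imgs, labels):
    """Stand-in fitness: accuracy of a one-threshold classifier on a
    scalar summary of the transformed image. The paper scores genomes
    with a trained classifier instead."""
    feats = np.array([apply_series(g_img := im, genome := genome) is None
                      for im in imgs]) if False else \
            np.array([apply_series(genome, im).mean() for im in imgs])
    pred = (feats > feats.mean()).astype(int)
    return max((pred == labels).mean(), (pred != labels).mean())

def evolve(imgs, labels, pop_size=20, genome_len=4, generations=30):
    pop = [[random_gene() for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda g: fitness(g, imgs, labels), reverse=True)
        elite = pop[: pop_size // 2]            # keep the best half
        children = []
        for _ in range(pop_size - len(elite)):
            a, b = random.sample(elite, 2)
            cut = random.randrange(1, genome_len)
            child = a[:cut] + b[cut:]           # one-point crossover
            if random.random() < 0.2:           # occasional mutation
                child[random.randrange(genome_len)] = random_gene()
            children.append(child)
        pop = elite + children
    return max(pop, key=lambda g: fitness(g, imgs, labels))
```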
IEEE Transactions on Circuits and Systems for Video Technology | 2011
Beau J. Tippetts; Dah-Jye Lee; James K. Archibald; Kirt D. Lillywhite
It is evident that the accuracy of stereo vision algorithms has continued to increase based on commonly used quantitative evaluations of the resulting disparity maps. This paper focuses on the development of promising stereo vision algorithms that efficiently trade off accuracy for large reductions in required computational resources. An intensity profile shape-matching algorithm is introduced as an example of an algorithm that makes such tradeoffs. The proposed algorithm is compared to both a basic sum-of-absolute-differences (SAD) block-matching algorithm and a stereo vision algorithm that is highly ranked for its accuracy based on the Middlebury evaluation criteria. This comparison shows that the proposed algorithm's accuracy on the commonly used Tsukuba stereo image pair is lower than that of many published stereo vision algorithms, but that for unrectified stereo image pairs with even slight differences in brightness, it is potentially more robust than algorithms that rely on SAD block matching. An example application that requires 3-D information is implemented to show that the accuracy of the proposed algorithm is sufficient for this use. Timing results show that this is a very fast dense-disparity stereo vision algorithm when compared to other algorithms capable of running on a standard microprocessor.
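For reference, the SAD block-matching baseline mentioned in the abstract can be written in a few lines. This is a generic textbook version (window size and disparity range are arbitrary choices), not the paper's intensity-profile algorithm, and the naive quadruple loop is far too slow for real time; production versions vectorize or pipeline it.

```python
import numpy as np

def sad_block_matching(left, right, max_disp=32, win=4):
    """Dense disparity by sum-of-absolute-differences block matching
    on a rectified grayscale pair (float arrays, same shape)."""
    h, w = left.shape
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(win, h - win):
        for x in range(win + max_disp, w - win):
            block = left[y - win:y + win + 1, x - win:x + win + 1]
            best, best_d = np.inf, 0
            for d in range(max_disp):
                cand = right[y - win:y + win + 1,
                             x - d - win:x - d + win + 1]
                sad = np.abs(block - cand).sum()
                if sad < best:
                    best, best_d = sad, d
            disp[y, x] = best_d
    return disp
```

Because SAD compares raw intensities, a constant brightness offset between the two cameras corrupts every cost value, which is exactly the failure mode the abstract says the proposed algorithm avoids.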
International Symposium on Visual Computing | 2007
Beau J. Tippetts; Spencer G. Fowers; Kirt D. Lillywhite; Dah-Jye Lee; James K. Archibald
An efficient algorithm to detect, correlate, and track features in a scene was implemented on an FPGA in order to obtain real-time performance. The algorithm implemented was a Harris feature detector combined with a correlator based on a priority queue of feature strengths that considered minimum distances between features. The remaining processing of frame-to-frame movement is completed in software to determine an affine homography including translation, rotation, and scaling. A RANSAC method is used to remove mismatched features and increase accuracy. This implementation was designed specifically for use as an onboard vision solution for determining movement of small unmanned air vehicles that have size, weight, and power limitations.
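The RANSAC step can be sketched as follows: repeatedly fit a similarity transform (translation, rotation, scale) to a minimal sample of two correspondences and keep the model with the most inliers. The two-point solver and thresholds here are generic choices, not the implementation from the paper.

```python
import numpy as np

def similarity_from_two(p, q):
    """Fit z -> a*z + b (complex form of rotation + scale + translation)
    from two point pairs p[i] -> q[i]."""
    zp = p[:, 0] + 1j * p[:, 1]
    zq = q[:, 0] + 1j * q[:, 1]
    a = (zq[1] - zq[0]) / (zp[1] - zp[0])
    b = zq[0] - a * zp[0]
    return a, b

def ransac_similarity(src, dst, iters=200, thresh=2.0, seed=0):
    """src, dst: (N, 2) matched feature coordinates in pixels."""
    zs = src[:, 0] + 1j * src[:, 1]
    zd = dst[:, 0] + 1j * dst[:, 1]
    best_inliers = np.zeros(len(src), dtype=bool)
    best_model = (1 + 0j, 0 + 0j)
    rng = np.random.default_rng(seed)
    for _ in range(iters):
        i, j = rng.choice(len(src), size=2, replace=False)
        if np.abs(zs[i] - zs[j]) < 1e-9:
            continue  # degenerate sample: coincident points
        a, b = similarity_from_two(src[[i, j]], dst[[i, j]])
        err = np.abs(a * zs + b - zd)   # reprojection error in pixels
        inliers = err < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (a, b)
    return best_model, best_inliers    # mismatched features fall outside
```

Representing the similarity as a single complex multiplication keeps the minimal solver to two lines, which is why two correspondences suffice for each hypothesis.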
Pattern Recognition | 2012
Kirt D. Lillywhite; Beau J. Tippetts; Dah-Jye Lee
Object recognition is a well-studied but extremely challenging field. We present a novel approach to feature construction for object detection called Evolution-COnstructed features (ECO features). Most current approaches rely on human experts to construct features for object recognition. ECO features are automatically constructed by uniquely employing a standard genetic algorithm to discover multiple series of transforms that are highly discriminative. Using ECO features provides several advantages over other object detection algorithms: no need for a human expert to build feature sets or tune their parameters, the ability to generate specialized feature sets for different objects, no limitation to certain types of image sources, and the ability to find both global and local feature types. We show in our experiments that ECO features compete well against state-of-the-art object recognition algorithms.
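Conceptually, each constructed feature pairs an image region with an ordered transform series: a whole-image region gives a global feature, a subwindow gives a local one, and several such features are concatenated for classification. The sketch below illustrates that structure only; the region encoding, transforms, and downstream classifier are assumptions for illustration.

```python
import numpy as np

def eco_feature(img, region, transforms):
    """One constructed feature: crop a subregion (None = whole image,
    i.e. a global feature) and push it through an ordered transform
    series, returning the flattened result."""
    if region is not None:
        y0, y1, x0, x1 = region
        img = img[y0:y1, x0:x1]
    for fn in transforms:
        img = fn(img)
    return np.ravel(img)

def feature_vector(img, constructed_features):
    """Concatenate several constructed features into one vector for
    whatever classifier is trained on top."""
    return np.concatenate([eco_feature(img, region, ts)
                           for region, ts in constructed_features])

# Hypothetical usage with two stand-in features:
constructed = [
    (None, [lambda im: np.abs(np.gradient(im)[0])]),               # global
    ((8, 40, 8, 40), [lambda im: (im > im.mean()).astype(float)]), # local
]
vec = feature_vector(np.random.rand(64, 64), constructed)
```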
Journal of Aerospace Computing, Information, and Communication | 2009
Beau J. Tippetts; Dah-Jye Lee; Spencer G. Fowers; James K. Archibald
Vision algorithms were implemented on a field programmable gate array (FPGA) to provide additional information supplementing the insufficient data of a standard inertial measurement unit, creating a previously unrealized, completely onboard vision system for micro unmanned aerial vehicles. The onboard vision system is composed of an FPGA board and a custom interface daughterboard, which allow it to provide data regarding drifting movements of the micro unmanned aerial vehicle that are not detected by inertial measurement units. The algorithms implemented for the vision system include a Harris feature detector, a template-matching feature correlator, similarity-constrained homography by random sample consensus, color segmentation, radial distortion correction, and an extended Kalman filter with a standard-deviation outlier rejection technique. This vision system was designed specifically for use as an onboard vision solution for determining movement of micro unmanned aerial vehicles that have severe size, weight, and power limitations. Results show that the vision system is capable of real-time onboard image processing with sufficient accuracy to allow a micro unmanned aerial vehicle to control itself without power or data tethers to a ground station.
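The standard-deviation outlier rejection named above is commonly realized as an innovation gate on the Kalman update: a measurement whose innovation exceeds some number of standard deviations of its predicted covariance is discarded. A minimal sketch of one such update, with the three-sigma threshold being an assumed value rather than the paper's:

```python
import numpy as np

def ekf_update_with_gate(x, P, z, h, H, R, gate_sigma=3.0):
    """One EKF measurement update with outlier rejection. If the
    innovation lies more than gate_sigma standard deviations from its
    predicted distribution (a Mahalanobis gate), the measurement is
    rejected and the state is left unchanged.

    x: state (n,), P: covariance (n, n), z: measurement (m,),
    h: measurement function, H: its Jacobian (m, n), R: noise (m, m).
    """
    y = z - h(x)                        # innovation
    S = H @ P @ H.T + R                 # innovation covariance
    d2 = float(y @ np.linalg.inv(S) @ y)
    if d2 > gate_sigma ** 2:
        return x, P, False              # outlier: skip the update
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    x = x + K @ y
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P, True
```

Gating before the update is what keeps a single bad vision measurement, such as a mismatched feature, from yanking the pose estimate.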
Machine Vision and Applications | 2012
Beau J. Tippetts; Dah-Jye Lee; James K. Archibald
This paper describes an on-board vision sensor system developed specifically for small unmanned vehicle applications. For small vehicles, vision sensors have many advantages in size, weight, and power consumption over sensors such as radar, sonar, and laser range finders. A vision sensor is also uniquely suited for tasks such as target tracking and recognition that require visual information processing. However, it is difficult to meet the computing needs of real-time vision processing on a small robot. In this paper, we present the development of a field programmable gate array-based vision sensor and use a small ground vehicle to demonstrate that this vision sensor is able to detect and track features on a user-selected target from frame to frame and steer the small autonomous vehicle towards it. The sensor system utilizes hardware implementations of the rank transform for filtering, a Harris corner detector for feature detection, and a correlation algorithm for feature matching and tracking. With additional capabilities supported in software, the operational system communicates wirelessly with a base station, receiving commands, providing visual feedback to the user, and allowing user input such as specifying targets to track. Since this vision sensor system uses reconfigurable hardware, other vision algorithms such as stereo vision and motion analysis can be implemented to reconfigure the system for other real-time vision applications.
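The rank transform used for filtering replaces each pixel with the count of neighbors in a window that are darker than it, which makes subsequent correlation robust to brightness and gain differences. A direct, unoptimized software version (the paper's version runs per-pixel in FPGA hardware; the window size here is an arbitrary choice):

```python
import numpy as np

def rank_transform(img, win=2):
    """Rank transform: each output pixel counts the neighbors in a
    (2*win+1) x (2*win+1) window with intensity below the center pixel.
    Border pixels are left at zero for simplicity."""
    h, w = img.shape
    out = np.zeros((h, w), dtype=np.uint8)
    for y in range(win, h - win):
        for x in range(win, w - win):
            patch = img[y - win:y + win + 1, x - win:x + win + 1]
            out[y, x] = np.count_nonzero(patch < img[y, x])
    return out
```

Because the output depends only on intensity orderings, any monotonic brightness change between frames leaves the transformed image unchanged, which is why it pairs well with simple correlation-based matching.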
Intelligent Robots and Computer Vision XXIV: Algorithms, Techniques, and Active Vision | 2006
Kirt D. Lillywhite; Dah-Jye Lee; Beau J. Tippetts; Spencer G. Fowers; Aaron W. Dennis; Brent E. Nelson; James K. Archibald
In this paper an embedded vision system and control module is introduced that is capable of controlling an unmanned four-rotor helicopter and processing live video for various law enforcement, security, military, and civilian applications. The vision system is implemented on a newly designed compact FPGA board (Helios). The Helios board contains a Xilinx Virtex-4 FPGA chip and memory, making it capable of implementing real-time vision algorithms. A Smooth Automated Intelligent Leveling (SAIL) daughterboard, attached to the Helios board, collects attitude and heading information to be processed in order to control the unmanned helicopter. The SAIL board uses an electrolytic tilt sensor, a compass, voltage level converters, and analog-to-digital converters to perform its operations. While level flight can be maintained, problems stemming from the characteristics of the tilt sensor limit the maneuverability of the helicopter. The embedded vision system has proven to give very good results in its performance of a number of real-time robotic vision algorithms.
Intelligent Robots and Computer Vision XXIV: Algorithms, Techniques, and Active Vision | 2006
Beau J. Tippetts; Kirt D. Lillywhite; Spencer G. Fowers; Aaron W. Dennis; Dah-Jye Lee; James K. Archibald
This paper discusses a simple, inexpensive, and effective implementation of a vision-guided autonomous robot. The implementation is a second-year entry by Brigham Young University students in the Intelligent Ground Vehicle Competition. The objective of the robot was to navigate a course constructed of white boundary lines and orange obstacles for the autonomous competition. An electric wheelchair served as the robot base. The wheelchair was purchased from a local thrift store for