Spencer G. Fowers
Brigham Young University
Publications
Featured research published by Spencer G. Fowers.
Computational Intelligence in Robotics and Automation | 2007
Spencer G. Fowers; Dah-Jye Lee; Beau J. Tippetts; Kirt D. Lillywhite; Aaron W. Dennis; James K. Archibald
Micro Unmanned Air Vehicles are well suited for a wide variety of applications in agriculture, homeland security, military, search and rescue, and surveillance. In response to these opportunities, a quad-rotor micro UAV has been developed at the Robotic Vision Lab at Brigham Young University. The quad-rotor UAV uses a custom, low-power FPGA platform to perform computationally intensive vision processing tasks on board the vehicle, eliminating the need for wireless tethers and computational support on ground stations. Drift stabilization of the UAV has been implemented using Harris feature detection and template matching running in real-time in hardware on the on-board FPGA platform, allowing the quad-rotor to maintain a stable and almost drift-free hover without human intervention.
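The hover stabilization described above runs in FPGA hardware; as a rough software analogue (not the authors' implementation), the Python/OpenCV sketch below detects Harris corners in a reference frame, template-matches small patches around them in the current frame, and averages the offsets to estimate drift. Patch sizes, thresholds, and the helper name are assumptions.

```python
import cv2
import numpy as np

def estimate_drift(ref_gray, cur_gray, max_corners=50, patch=15, search=40):
    """Rough analogue of Harris detection + template matching: estimate the
    average (dx, dy) drift of the current frame relative to a reference.
    Frames are assumed to be the same size; all parameter values are guesses."""
    corners = cv2.goodFeaturesToTrack(ref_gray, maxCorners=max_corners,
                                      qualityLevel=0.01, minDistance=20,
                                      useHarrisDetector=True, k=0.04)
    if corners is None:
        return 0.0, 0.0
    half, offsets = patch // 2, []
    h, w = ref_gray.shape
    for (x, y) in corners.reshape(-1, 2).astype(int):
        # Template around the corner in the reference frame.
        if y - half < 0 or x - half < 0 or y + half + 1 > h or x + half + 1 > w:
            continue
        tmpl = ref_gray[y - half:y + half + 1, x - half:x + half + 1]
        # Search window around the same location in the current frame.
        y0, y1 = max(0, y - search), min(h, y + search + 1)
        x0, x1 = max(0, x - search), min(w, x + search + 1)
        win = cur_gray[y0:y1, x0:x1]
        if win.shape[0] < patch or win.shape[1] < patch:
            continue
        res = cv2.matchTemplate(win, tmpl, cv2.TM_CCOEFF_NORMED)
        _, score, _, loc = cv2.minMaxLoc(res)
        if score > 0.8:  # keep confident matches only
            offsets.append((loc[0] + x0 + half - x, loc[1] + y0 + half - y))
    if not offsets:
        return 0.0, 0.0
    dx, dy = np.mean(offsets, axis=0)
    return float(dx), float(dy)  # drift estimate fed to the hover controller
```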
International Symposium on Visual Computing | 2007
Beau J. Tippetts; Spencer G. Fowers; Kirt D. Lillywhite; Dah-Jye Lee; James K. Archibald
An efficient algorithm to detect, correlate, and track features in a scene was implemented on an FPGA in order to obtain real-time performance. The algorithm implemented was a Harris Feature Detector combined with a correlator based on a priority queue of feature strengths that considered minimum distances between features. The remaining processing of frame to frame movement is completed in software to determine an affine homography including translation, rotation, and scaling. A RANSAC method is used to remove mismatched features and increase accuracy. This implementation was designed specifically for use as an onboard vision solution in determining movement of small unmanned air vehicles that have size, weight, and power limitations.
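The detection and correlation stages above run in FPGA hardware; the software half, estimating a translation/rotation/scale transform from putative matches with RANSAC outlier rejection, can be sketched with OpenCV as below. The function name and threshold are assumptions, and OpenCV's similarity-transform estimator stands in for the paper's own fitting code.

```python
import cv2
import numpy as np

def frame_to_frame_motion(pts_prev, pts_cur):
    """Given putatively matched feature locations (Nx2 arrays), estimate
    translation, rotation, and scale between frames and reject mismatched
    features with RANSAC."""
    M, inliers = cv2.estimateAffinePartial2D(
        pts_prev.astype(np.float32), pts_cur.astype(np.float32),
        method=cv2.RANSAC, ransacReprojThreshold=3.0)
    if M is None:
        return None
    dx, dy = float(M[0, 2]), float(M[1, 2])
    scale = float(np.hypot(M[0, 0], M[1, 0]))
    angle = float(np.degrees(np.arctan2(M[1, 0], M[0, 0])))
    return {"dx": dx, "dy": dy, "scale": scale, "angle_deg": angle,
            "num_inliers": int(inliers.sum())}
```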
Journal of Aerospace Computing, Information, and Communication | 2009
Beau J. Tippetts; Dah-Jye Lee; Spencer G. Fowers; James K. Archibald
Vision algorithms were implemented on a field-programmable gate array to provide additional information to supplement the insufficient data of a standard inertial measurement unit, creating a previously unrealized, completely onboard vision system for micro unmanned aerial vehicles. The onboard vision system is composed of a field-programmable gate array board and a custom interface daughterboard, which allow it to provide data on drifting movements of the micro unmanned aerial vehicle that are not detected by inertial measurement units. The algorithms implemented for the vision system include a Harris feature detector, a template-matching feature correlator, similarity-constrained homography by random sample consensus, color segmentation, radial distortion correction, and an extended Kalman filter with a standard-deviation outlier rejection technique. This vision system was designed specifically for use as an onboard vision solution for determining movement of micro unmanned aerial vehicles that have severe size, weight, and power limitations. Results show that the vision system is capable of real-time onboard image processing with sufficient accuracy to allow a micro unmanned aerial vehicle to control itself without power or data tethers to a ground station.
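Of the components listed, the standard-deviation outlier rejection step is generic enough to sketch; the snippet below is a minimal stand-in (hypothetical threshold k, scalar measurements assumed), not the system's actual gating logic.

```python
import numpy as np

def reject_outliers(measurements, k=1.5):
    """Standard-deviation outlier rejection, as a generic stand-in for the
    gating applied before the extended Kalman filter update: keep only
    measurements within k standard deviations of the sample mean."""
    m = np.asarray(measurements, dtype=float)
    mu, sigma = m.mean(), m.std()
    if sigma == 0.0:
        return m
    return m[np.abs(m - mu) <= k * sigma]

# e.g. reject_outliers([1.1, 0.9, 1.0, 1.2, 9.7]) drops the 9.7 spike
```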
IEEE Transactions on Circuits and Systems for Video Technology | 2013
Spencer G. Fowers; Dah-Jye Lee; Dan Ventura; James K. Archibald
This paper presents a feature descriptor well suited for limited-resource applications such as unmanned aerial vehicle embedded systems, small microprocessors, and small low-power field-programmable gate array (FPGA) fabric. The basis sparse-coding inspired similarity (BASIS) descriptor utilizes sparse coding to create dictionary images that model the regions in the human visual cortex. Due to the reduced amount of computation required for computing BASIS descriptors, the reduced descriptor size, and the ability to create the descriptors without floating-point arithmetic, this approach is an excellent candidate for FPGA hardware implementation. The bit-level-accurate BASIS descriptor was tested on a dataset of real aerial images with the task of calculating a frame-to-frame homography and compared to software versions of the scale-invariant feature transform (SIFT) and speeded-up robust features (SURF). Experimental results show that the BASIS descriptor outperforms SIFT and performs comparably to SURF on frame-to-frame aerial feature point matching. BASIS descriptors require less memory storage than other descriptors and can be computed entirely in hardware, allowing the descriptor to operate at real-time frame rates on a low-power embedded platform such as an FPGA.
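The exact dictionary learning and similarity measure are defined in the paper; purely as an illustration of the integer-only idea, the sketch below describes a feature patch by its sum-of-absolute-differences similarity to a set of dictionary images assumed to have been learned offline via sparse coding.

```python
import numpy as np

def basis_like_descriptor(patch, dictionary):
    """Illustrative only: describe a feature patch by its integer similarity
    to each dictionary image, here using sum of absolute differences (SAD).
    `patch`: HxW uint8 region around a feature point.
    `dictionary`: iterable of HxW uint8 images learned offline (assumed)."""
    p = patch.astype(np.int32)
    # One SAD score per dictionary image; no floating point required.
    return np.array([np.abs(p - d.astype(np.int32)).sum() for d in dictionary],
                    dtype=np.int32)

def match_distance(desc_a, desc_b):
    """L1 distance between two descriptors; smaller means more similar."""
    return int(np.abs(desc_a - desc_b).sum())
```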
Journal of Aerospace Information Systems | 2014
Spencer G. Fowers; Alok Desai; Dah-Jye Lee; Dan Ventura; Doran Wilde
This paper presents the development of a new feature descriptor, derived from previous work on the basis sparse-coding inspired similarity descriptor, that provides a smaller descriptor size, simpler computations, faster matching speed, and higher accuracy. The TreeBASIS descriptor algorithm uses a binary vocabulary tree that is computed offline from basis sparse-coding inspired similarity dictionary images (derived from sparse coding) and a test set of feature region images. The resulting tree is stored in memory for high-speed online searching during feature matching. During the online matching stage, a feature region image is binary quantized and the resulting quantized vector is passed into the basis sparse-coding inspired similarity tree. A Hamming distance is computed between the feature region image and the basis sparse-coding inspired similarity dictionary images at the current node to determine the branch taken. The path the feature region image takes through the tree is saved as the descriptor, ...
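A minimal sketch of the online stage as described, assuming a node representation that stores two binary dictionary images; tree construction and dictionary learning are offline steps not shown here.

```python
import numpy as np

class Node:
    """Vocabulary-tree node holding two binary dictionary images (assumed
    representation); leaves have no children."""
    def __init__(self, dict_left=None, dict_right=None, left=None, right=None):
        self.dict_left, self.dict_right = dict_left, dict_right
        self.left, self.right = left, right

def hamming(a, b):
    return int(np.count_nonzero(a != b))

def treebasis_descriptor(patch, root, threshold=128):
    """Online stage sketch: binary-quantize the feature region image, then
    descend the tree, branching toward the closer dictionary image by Hamming
    distance. The sequence of branch decisions is the descriptor."""
    q = (patch >= threshold)           # binary quantization of the patch
    path, node = [], root
    while node.left is not None and node.right is not None:
        go_right = hamming(q, node.dict_right) < hamming(q, node.dict_left)
        path.append(1 if go_right else 0)
        node = node.right if go_right else node.left
    return path                        # compact, integer-only descriptor
```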
Intelligent Robots and Computer Vision XXIV: Algorithms, Techniques, and Active Vision | 2006
Kirt D. Lillywhite; Dah-Jye Lee; Beau J. Tippetts; Spencer G. Fowers; Aaron W. Dennis; Brent E. Nelson; James K. Archibald
In this paper an embedded vision system and control module is introduced that is capable of controlling an unmanned four-rotor helicopter and processing live video for various law enforcement, security, military, and civilian applications. The vision system is implemented on a newly designed compact FPGA board (Helios). The Helios board contains a Xilinx Virtex-4 FPGA chip and memory, making it capable of implementing real-time vision algorithms. A Smooth Automated Intelligent Leveling (SAIL) daughterboard, attached to the Helios board, collects attitude and heading information to be processed in order to control the unmanned helicopter. The SAIL board uses an electrolytic tilt sensor, a compass, voltage level converters, and analog-to-digital converters to perform its operations. While level flight can be maintained, problems stemming from the characteristics of the tilt sensor limit the maneuverability of the helicopter. The embedded vision system has performed well on a number of real-time robotic vision algorithms.
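The sensor interfacing is board specific; purely as a hypothetical illustration, the sketch below maps raw ADC counts from the tilt sensor and compass to roll, pitch, and heading with a linear calibration. The constants and function name are invented placeholders, not the SAIL board's actual conversion.

```python
def adc_to_attitude(adc_roll, adc_pitch, adc_heading,
                    counts_per_degree=10.0, zero_count=2048):
    """Hypothetical linear calibration from raw ADC counts to angles in
    degrees; real values depend on the tilt sensor, compass, and ADCs used."""
    roll = (adc_roll - zero_count) / counts_per_degree
    pitch = (adc_pitch - zero_count) / counts_per_degree
    heading = ((adc_heading - zero_count) / counts_per_degree) % 360.0
    return roll, pitch, heading
```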
Intelligent Robots and Computer Vision XXIV: Algorithms, Techniques, and Active Vision | 2006
Beau J. Tippetts; Kirt D. Lillywhite; Spencer G. Fowers; Aaron W. Dennis; Dah-Jye Lee; James K. Archibald
This paper discusses a simple, inexpensive, and effective implementation of a vision-guided autonomous robot. This implementation is a second-year entry by Brigham Young University students in the Intelligent Ground Vehicle Competition. The objective of the robot was to navigate a course constructed of white boundary lines and orange obstacles for the autonomous competition. A used electric wheelchair, purchased from a local thrift store for $28, served as the robot base. The base was modified to include Kegresse tracks using a friction drum system. This modification allowed the robot to perform better on a variety of terrains, resolving issues with the previous year's design. To control the wheelchair while retaining its robust motor controls, the joystick was removed and replaced with a printed circuit board that emulated joystick operation and could receive commands through a serial port connection. Three different algorithms were implemented and compared: a purely reactive approach, a potential fields approach, and a machine learning approach. Each of the algorithms used color segmentation methods to interpret data from a digital camera in order to identify the features of the course. This paper will be useful to those interested in implementing an inexpensive vision-based autonomous robot.
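The entry above relies on color segmentation of white boundary lines and orange obstacles; a minimal HSV-thresholding sketch of that idea is shown below, with guessed threshold values rather than the paper's calibrated ones.

```python
import cv2

def segment_course(bgr):
    """Rough HSV color segmentation for white boundary lines and orange
    obstacles; threshold values are illustrative guesses."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    # White: low saturation, high value.
    white = cv2.inRange(hsv, (0, 0, 200), (179, 40, 255))
    # Orange: hue roughly 5-20 on OpenCV's 0-179 scale.
    orange = cv2.inRange(hsv, (5, 120, 120), (20, 255, 255))
    return white, orange  # binary masks fed to the navigation algorithms
```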
Journal of Aerospace Information Systems | 2013
Spencer G. Fowers; Dah-Jye Lee; Dan Ventura; Beau J. Tippetts
International Scholarly Research Notices | 2012
Spencer G. Fowers; Dah-Jye Lee
Feature point matching is an important step for many vision-based unmanned-aerial-vehicle applications. This paper presents the development of a new feature descriptor for feature point matching that is well suited for micro unmanned aerial vehicles equipped with a low-resource, compact, lightweight, low-power embedded vision sensor. The Basis Sparse-Coding Inspired Similarity descriptor uses theory taken from sparse coding to provide an efficient image feature description method for frame-to-frame feature point matching. This descriptor requires simple mathematical operations, uses comparatively small memory storage, and can support color and grayscale feature descriptions. It is an excellent candidate for implementation on low-resource systems that require real-time performance, where complex mathematical operations are prohibitively expensive. To demonstrate its performance, the feature matching result was used to calculate a frame-to-frame homography that is essential to unmanned-aerial-vehicle applic...
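The frame-to-frame homography mentioned at the end of this abstract can be computed from the matched feature points with a RANSAC fit; a minimal OpenCV sketch is below, assuming the matches are already available as Nx2 point arrays.

```python
import cv2
import numpy as np

def frame_homography(pts_prev, pts_cur):
    """Estimate a frame-to-frame homography from matched feature points,
    rejecting bad matches with RANSAC. Returns the 3x3 matrix (or None)
    and the inlier count."""
    H, mask = cv2.findHomography(pts_prev.astype(np.float32),
                                 pts_cur.astype(np.float32),
                                 method=cv2.RANSAC, ransacReprojThreshold=3.0)
    return H, (int(mask.sum()) if mask is not None else 0)
```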
International Journal of Reconfigurable Computing | 2014
Spencer G. Fowers; Alok Desai; Dah-Jye Lee; Dan Ventura; James K. Archibald
The important task of library book inventory, or shelf-reading, requires humans to remove each book from a library shelf, open the front cover, scan a barcode, and reshelve the book. It is a labor-intensive and often error-prone process. Technologies such as 2D barcode scanning or radio frequency identification (RFID) tags have recently been proposed to improve this process. They both incur significant upfront costs and require a large investment of time to fit books with special tags before the system can be productive. A vision-based automation system is proposed to improve this process without those prohibitively high upfront costs. This low-cost shelf-reading system uses a hand-held imaging device such as a smartphone to capture book spine images and a server that processes feature descriptors in these images for book identification. Existing color feature descriptors for feature matching typically use grayscale feature detectors, which omit important color edges. Also, photometric-invariant color feature descriptors require unnecessary computations to provide color descriptor information. This paper presents the development of a simple color enhancement feature descriptor called Color Difference-of-Gaussians SIFT (CDSIFT). CDSIFT is well suited for library inventory process automation, and this paper introduces such a system for this unique application.
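CDSIFT's precise formulation is given in the paper; the sketch below is one loose reading of the stated idea (keep color edges that a grayscale detector would miss): a per-channel Difference-of-Gaussians response combined across channels, with standard SIFT description computed at keypoints found on that response. Parameter values and the two-step split are assumptions, not the authors' pipeline.

```python
import cv2
import numpy as np

def color_dog_response(bgr, sigma1=1.0, sigma2=1.6):
    """Per-channel Difference-of-Gaussians magnitude, combined by taking the
    strongest channel response so color-only edges are preserved."""
    img = bgr.astype(np.float32)
    g1 = cv2.GaussianBlur(img, (0, 0), sigma1)
    g2 = cv2.GaussianBlur(img, (0, 0), sigma2)
    dog = np.abs(g1 - g2)              # HxWx3 per-channel DoG magnitude
    return dog.max(axis=2)             # keep strongest channel response

def describe(bgr):
    """Detect keypoints on the combined color-DoG response, then compute
    standard SIFT descriptors on the grayscale image at those keypoints."""
    resp = cv2.normalize(color_dog_response(bgr), None, 0, 255,
                         cv2.NORM_MINMAX).astype(np.uint8)
    sift = cv2.SIFT_create()
    keypoints = sift.detect(resp, None)
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    return sift.compute(gray, keypoints)  # (keypoints, descriptors)
```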