Kirt D. Lillywhite
Brigham Young University
Publication
Featured research published by Kirt D. Lillywhite.
Journal of Real-time Image Processing | 2016
Beau J. Tippetts; Dah-Jye Lee; Kirt D. Lillywhite; James K. Archibald
A significant amount of research in the field of stereo vision has been published in the past decade. Considerable progress has been made in improving the accuracy of results as well as in achieving real-time performance. This work provides a comprehensive review of stereo vision algorithms with specific emphasis on real-time performance, to identify those suitable for resource-limited systems. An attempt has been made to compile and present accuracy and runtime performance data for all stereo vision algorithms developed in the past decade. Algorithms are grouped into three categories: (1) those with published results of real-time or near real-time performance on standard processors, (2) those with real-time performance on specialized hardware (e.g., GPU, FPGA, DSP, or ASIC), and (3) those that have not been shown to achieve near real-time performance. This review is intended to aid those seeking algorithms suitable for real-time implementation on resource-limited systems, and to encourage further research and development in this area by providing a snapshot of the current state of the field.
Computational Intelligence in Robotics and Automation | 2007
Spencer G. Fowers; Dah-Jye Lee; Beau J. Tippetts; Kirt D. Lillywhite; Aaron W. Dennis; James K. Archibald
Micro unmanned air vehicles (UAVs) are well suited to a wide variety of applications in agriculture, homeland security, military operations, search and rescue, and surveillance. In response to these opportunities, a quad-rotor micro UAV has been developed at the Robotic Vision Lab at Brigham Young University. The quad-rotor UAV uses a custom, low-power FPGA platform to perform computationally intensive vision processing tasks on board the vehicle, eliminating the need for wireless tethers and computational support from ground stations. Drift stabilization has been implemented using Harris feature detection and template matching running in real time in hardware on the on-board FPGA platform, allowing the quad-rotor to maintain a stable and almost drift-free hover without human intervention.
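The Harris detector used for drift stabilization scores each pixel by how strongly image gradients vary in both directions, so corners stand out from edges and flat regions. A minimal NumPy sketch of the response map (the window size and `k` value are illustrative choices, not the parameters used on the FPGA):

```python
import numpy as np

def box_sum(a, r=1):
    # Sum over a (2r+1)x(2r+1) window using shifted copies.
    # np.roll wraps at the borders, which is acceptable for this sketch.
    out = np.zeros_like(a)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            out += np.roll(np.roll(a, dy, axis=0), dx, axis=1)
    return out

def harris_response(img, k=0.04):
    """Harris corner response: large where gradients vary in both directions."""
    gy, gx = np.gradient(img.astype(float))
    sxx = box_sum(gx * gx)   # smoothed structure tensor entries
    syy = box_sum(gy * gy)
    sxy = box_sum(gx * gy)
    return sxx * syy - sxy ** 2 - k * (sxx + syy) ** 2

# A bright square on a dark background: the strongest responses
# land on its four corners, not on its edges or interior.
img = np.zeros((20, 20))
img[5:15, 5:15] = 1.0
R = harris_response(img)
peak = np.unravel_index(np.argmax(R), R.shape)
```

Template matching around such peaks from frame to frame then yields the drift estimate that the controller cancels.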
Pattern Recognition | 2013
Kirt D. Lillywhite; Dah-Jye Lee; Beau J. Tippetts; James K. Archibald
This paper presents a novel approach to object detection using a feature construction method called Evolution-COnstructed (ECO) features. Most other object recognition approaches rely on human experts to construct features. ECO features are constructed automatically by employing a standard genetic algorithm to discover series of transforms that are highly discriminative. Using ECO features provides several advantages over other object detection algorithms: no need for a human expert to build feature sets or tune their parameters, the ability to generate specialized feature sets for different objects, and no limitation to certain types of image sources. Our experiments show that ECO features perform better than or comparably to hand-crafted state-of-the-art object recognition algorithms. An analysis of ECO features is given, including a visualization of the features and improvements made to the algorithm.
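The idea of evolving a series of transforms can be illustrated with a toy genetic algorithm. Everything below is an illustrative stand-in, not the paper's actual operators: a tiny transform pool, a Fisher-style separation score as fitness, and trivially separable synthetic data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy transform pool (stand-ins for real image transforms).
TRANSFORMS = {
    "grad":   lambda a: np.abs(np.diff(a, axis=1, prepend=0)),
    "blur":   lambda a: (a + np.roll(a, 1, 1) + np.roll(a, -1, 1)) / 3.0,
    "square": lambda a: a * a,
    "neg":    lambda a: -a,
}

def apply_series(series, img):
    for name in series:
        img = TRANSFORMS[name](img)
    return img.mean()  # collapse to a scalar feature

# Two synthetic classes: high-frequency stripes vs. flat noise patches.
def make_sample(textured):
    base = rng.normal(0, 0.05, (8, 8))
    if textured:
        base += np.tile([0.0, 1.0], (8, 4))
    return base

data = [(make_sample(True), 1) for _ in range(20)] + \
       [(make_sample(False), 0) for _ in range(20)]

def fitness(series):
    feats = {0: [], 1: []}
    for img, label in data:
        feats[label].append(apply_series(series, img))
    m0, m1 = np.mean(feats[0]), np.mean(feats[1])
    s = np.std(feats[0]) + np.std(feats[1]) + 1e-9
    return abs(m1 - m0) / s  # Fisher-style class separation

# Minimal GA: random init, truncation selection, point mutation.
pool = list(TRANSFORMS)
popn = [list(rng.choice(pool, size=3)) for _ in range(12)]
for gen in range(15):
    popn.sort(key=fitness, reverse=True)
    survivors = popn[:6]
    children = []
    for p in survivors:
        c = list(p)
        c[rng.integers(len(c))] = rng.choice(pool)  # mutate one gene
        children.append(c)
    popn = survivors + children

best = max(popn, key=fitness)
```

The evolved `best` series is a discriminative feature constructor discovered without any hand engineering, which is the core idea the paper scales up to real images and real transform sets.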
IEEE Transactions on Circuits and Systems for Video Technology | 2011
Beau J. Tippetts; Dah-Jye Lee; James K. Archibald; Kirt D. Lillywhite
It is evident from commonly used quantitative evaluations of the resulting disparity maps that the accuracy of stereo vision algorithms has continued to increase. This paper focuses on the development of promising stereo vision algorithms that efficiently trade off accuracy for large reductions in required computational resources. An intensity-profile shape-matching algorithm is introduced as an example of an algorithm that makes such tradeoffs. The proposed algorithm is compared to both a basic sum-of-absolute-differences (SAD) block-matching algorithm and a stereo vision algorithm that is highly ranked for its accuracy by the Middlebury evaluation criteria. This comparison shows that the proposed algorithm's accuracy on the commonly used Tsukuba stereo image pair is lower than that of many published stereo vision algorithms, but that for unrectified stereo image pairs with even slight differences in brightness it is potentially more robust than algorithms that rely on SAD block matching. An example application that requires 3-D information is implemented to show that the accuracy of the proposed algorithm is sufficient for this use. Timing results show that this is a very fast dense-disparity stereo vision algorithm compared to other algorithms capable of running on a standard microprocessor.
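The baseline SAD block-matching comparison can be sketched in a few lines; the window radius and disparity range below are arbitrary choices for illustration:

```python
import numpy as np

def sad_disparity(left, right, max_disp=8, r=2):
    """Dense disparity by SAD block matching: for each left-image pixel,
    pick the horizontal shift whose (2r+1)x(2r+1) block in the right
    image has the lowest sum of absolute differences."""
    h, w = left.shape
    disp = np.zeros((h, w), dtype=int)
    for y in range(r, h - r):
        for x in range(r + max_disp, w - r):
            block = left[y - r:y + r + 1, x - r:x + r + 1]
            costs = [np.abs(block - right[y - r:y + r + 1,
                                          x - d - r:x - d + r + 1]).sum()
                     for d in range(max_disp + 1)]
            disp[y, x] = int(np.argmin(costs))
    return disp

# Synthetic pair: the right image is the left shifted by 3 pixels, so the
# recovered disparity should be 3 everywhere away from the borders.
rng = np.random.default_rng(0)
left = rng.random((20, 40))
right = np.roll(left, -3, axis=1)
disp = sad_disparity(left, right)
```

The brute-force cost loop is what makes SAD matching expensive, and comparing each left-image block only against intensity-profile shapes rather than raw pixel differences is where the proposed algorithm recovers speed and brightness robustness.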
International Symposium on Visual Computing | 2007
Beau J. Tippetts; Spencer G. Fowers; Kirt D. Lillywhite; Dah-Jye Lee; James K. Archibald
An efficient algorithm to detect, correlate, and track features in a scene was implemented on an FPGA in order to obtain real-time performance. The implemented algorithm combined a Harris feature detector with a correlator based on a priority queue of feature strengths that considered minimum distances between features. The remaining processing of frame-to-frame movement is completed in software to determine an affine homography comprising translation, rotation, and scaling. A RANSAC method is used to remove mismatched features and increase accuracy. This implementation was designed specifically for use as an on-board vision solution for determining the movement of small unmanned air vehicles that have size, weight, and power limitations.
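RANSAC's role of rejecting mismatched features can be sketched with the simplest motion model, a pure translation (the system above fits a full affine homography; the iteration count and tolerance here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

def ransac_translation(src, dst, iters=200, tol=0.5):
    """Fit a 2-D translation between matched points, rejecting mismatches.
    Repeatedly hypothesize from a minimal sample (one pair), keep the
    hypothesis with the largest consensus set, then refit on its inliers."""
    best_inliers = np.zeros(len(src), dtype=bool)
    for _ in range(iters):
        i = rng.integers(len(src))
        t = dst[i] - src[i]                       # minimal-sample hypothesis
        err = np.linalg.norm(src + t - dst, axis=1)
        inliers = err < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    t = (dst[best_inliers] - src[best_inliers]).mean(axis=0)
    return t, best_inliers

# 30 good matches translated by (4, -2) plus noise, and 8 gross mismatches.
src = rng.uniform(0, 10, (38, 2))
dst = src + np.array([4.0, -2.0]) + rng.normal(0, 0.05, (38, 2))
dst[30:] = rng.uniform(0, 10, (8, 2))
t, inliers = ransac_translation(src, dst)
```

The same hypothesize-score-refit loop generalizes to the affine case by sampling three point pairs per hypothesis instead of one.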
Pattern Recognition | 2012
Kirt D. Lillywhite; Beau J. Tippetts; Dah-Jye Lee
Object recognition is a well-studied but extremely challenging field. We present a novel approach to feature construction for object detection called Evolution-COnstructed features (ECO features). Most current approaches rely on human experts to construct features for object recognition. ECO features are constructed automatically by employing a standard genetic algorithm to discover multiple series of transforms that are highly discriminative. Using ECO features provides several advantages over other object detection algorithms: no need for a human expert to build feature sets or tune their parameters, the ability to generate specialized feature sets for different objects, no limitation to certain types of image sources, and the ability to find both global and local feature types. Our experiments show that ECO features compete well against state-of-the-art object recognition algorithms.
Workshop on Applications of Computer Vision | 2009
Kirt D. Lillywhite; Dah-Jye Lee; Dong Zhang
Human detection has always been an important part of computer vision, but many implementations lack the real-time performance that real-world applications require. This paper presents a real-time implementation of human detection in video using the state-of-the-art histograms of oriented gradients (HOG) method. Each image in the video sequence is tested at multiple scales using a sliding window. A histogram of oriented gradients is created for each window and passed to a support vector machine, which classifies the window as human or not. The method is implemented on a GPU using the NVIDIA CUDA architecture. The implementation significantly speeds up computation, achieving approximately 38 frames per second on VGA video while testing 11,160 windows per frame. Accuracy remains comparable to the CPU implementation. The flexibility and computational power the GPU affords users are discussed; these discussions should benefit researchers interested in using a GPU for high-performance computing tasks.
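The core of the descriptor is a per-cell orientation histogram; the full method tiles such cells over each sliding window, block-normalizes them, and feeds the concatenated vector to a linear SVM. A minimal sketch of one cell (real HOG also interpolates votes between neighboring bins, omitted here):

```python
import numpy as np

def hog_cell_histogram(cell, nbins=9):
    """Orientation histogram for one HOG cell: each pixel votes its
    gradient magnitude into the bin of its unsigned orientation."""
    gy, gx = np.gradient(cell.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0   # unsigned, in [0, 180)
    bins = (ang / (180.0 / nbins)).astype(int) % nbins
    hist = np.zeros(nbins)
    np.add.at(hist, bins.ravel(), mag.ravel())     # magnitude-weighted votes
    return hist

# A vertical edge: all gradient energy is horizontal, so every vote
# lands in the first (0-degree) orientation bin.
cell = np.zeros((8, 8))
cell[:, 4:] = 1.0
hist = hog_cell_histogram(cell)
```

Because each cell's histogram is independent, the per-window work parallelizes naturally, which is what the CUDA implementation exploits.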
Intelligent Robots and Computer Vision XXIV: Algorithms, Techniques, and Active Vision | 2006
Kirt D. Lillywhite; Dah-Jye Lee; Beau J. Tippetts; Spencer G. Fowers; Aaron W. Dennis; Brent E. Nelson; James K. Archibald
In this paper an embedded vision system and control module is introduced that is capable of controlling an unmanned four-rotor helicopter and processing live video for various law enforcement, security, military, and civilian applications. The vision system is implemented on a newly designed compact FPGA board (Helios). The Helios board contains a Xilinx Virtex-4 FPGA chip and memory, making it capable of implementing real-time vision algorithms. A Smooth Automated Intelligent Leveling (SAIL) daughter board, attached to the Helios board, collects attitude and heading information that is processed to control the unmanned helicopter. The SAIL board uses an electrolytic tilt sensor, a compass, voltage level converters, and analog-to-digital converters to perform its operations. While level flight can be maintained, problems stemming from the characteristics of the tilt sensor limit the maneuverability of the helicopter. The embedded vision system has given very good results on a number of real-time robotic vision algorithms.
Intelligent Robots and Computer Vision XXIV: Algorithms, Techniques, and Active Vision | 2006
Beau J. Tippetts; Kirt D. Lillywhite; Spencer G. Fowers; Aaron W. Dennis; Dah-Jye Lee; James K. Archibald
This paper discusses a simple, inexpensive, and effective implementation of a vision-guided autonomous robot, the second-year entry of Brigham Young University students in the Intelligent Ground Vehicle Competition. The objective of the robot was to navigate a course constructed of white boundary lines and orange obstacles for the autonomous competition. The robot base was a used electric wheelchair purchased from a local thrift store for $28. The base was modified to include Kegresse tracks using a friction drum system; this modification allowed the robot to perform better on a variety of terrains, resolving issues with the previous year's design. To control the wheelchair while retaining its robust motor controls, the joystick was removed and replaced with a printed circuit board that emulated joystick operation and could receive commands through a serial port connection. Three different algorithms were implemented and compared: a purely reactive approach, a potential fields approach, and a machine learning approach. Each algorithm used color segmentation to interpret data from a digital camera and identify the features of the course. This paper will be useful to those interested in implementing an inexpensive vision-based autonomous robot.
International Journal of Reconfigurable Computing | 2014
Beau J. Tippetts; Dah-Jye Lee; Kirt D. Lillywhite; James K. Archibald
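The color segmentation that identifies course features for the IGVC robot described above can be illustrated with simple RGB thresholds; the threshold values here are hypothetical, chosen only to work on the synthetic example:

```python
import numpy as np

def segment_course(rgb):
    """Label white boundary-line pixels and orange-obstacle pixels with
    fixed RGB thresholds (threshold values are illustrative only)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    white = (r > 200) & (g > 200) & (b > 200)
    orange = (r > 180) & (g > 60) & (g < 160) & (b < 80)
    return white, orange

# Synthetic course view: green grass, one white line, one orange obstacle.
img = np.zeros((10, 10, 3), dtype=np.uint8)
img[...] = (40, 120, 40)            # grass background
img[:, 2] = (255, 255, 255)         # white boundary line (column 2)
img[5:8, 5:8] = (230, 120, 40)      # orange obstacle patch
white, orange = segment_course(img)
```

The resulting binary masks are what a reactive, potential-fields, or learned controller would consume to steer around obstacles while staying inside the lines.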