
Publication


Featured research published by Scott E. Budge.


International Conference on Acoustics, Speech, and Signal Processing | 1985

Compression of color digital images using vector quantization in product codes

Scott E. Budge; Richard L. Baker

There is a growing interest in the use of vector quantization for coding digital images. A key issue to be resolved is how to achieve perceptually pleasing results while limiting encoding complexity to tolerable levels. In this paper, product codes are described which improve the quality of the encoded edges and textures for a given level of complexity. These product codes separate the mean and orientation information from each source vector and encode this information independently to allow the residual to be vector quantized more accurately. The color image coder also reduces the required bit rate by taking advantage of spectral redundancy. Experimental results indicate that an improvement of almost 1.4 dB in SNR can be achieved over a Discrete Cosine Transform block coder of comparable complexity, with negligible computational complexity added by the product structure.
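The mean/residual separation at the core of the product code can be sketched as follows. This is only an illustration of the idea with a made-up codebook and step size; the paper's coder also separates orientation information and exploits spectral redundancy across color planes, none of which is shown here.

```python
import numpy as np

def mean_residual_vq(blocks, codebook, mean_step=8):
    """Encode each source vector as (quantized mean, residual codeword
    index): the mean is scalar-quantized and removed, so the residual
    codebook only has to cover shape, not brightness."""
    encoded = []
    for b in blocks:
        mean = b.mean()
        q_mean = mean_step * round(mean / mean_step)  # scalar quantizer for the mean
        residual = b - mean
        # full-search nearest neighbour over the residual codebook
        idx = int(np.argmin(((codebook - residual) ** 2).sum(axis=1)))
        encoded.append((q_mean, idx))
    return encoded

def mean_residual_decode(encoded, codebook):
    """Rebuild each block as its quantized mean plus the residual codeword."""
    return [q_mean + codebook[idx] for q_mean, idx in encoded]
```

Because mean and residual are coded independently, the product codebook effectively covers (number of mean levels) x (number of residual codewords) reconstructions at the search cost of the residual codebook alone.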


Asilomar Conference on Signals, Systems and Computers | 2007

A Handheld Texel Camera for Acquiring Near-Instantaneous 3D Images

Brandon Boldt; Scott E. Budge; Robert T. Pack; Paul Israelsen

A Texel camera is a device which synchronously captures depth information via a ladar and digital imagery of the same scene. The ladar and digital camera are co-boresighted to eliminate parallax. This configuration fuses the ladar data to the digital image at the pixel level, eliminating complex post-processing to register the datasets. This paper describes a handheld version of a Texel camera which can be used to create near-instantaneous 3D imagery. The hardware configuration of the Texel camera, issues and methods associated with ladar/camera calibration, and representative imagery are presented.
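The pixel-level fusion idea can be sketched with an idealized co-boresighted pinhole model: each ladar range sample is back-projected through the camera intrinsics and tagged with its pixel's color. The intrinsics `fx, fy, cx, cy` and the treatment of range as depth along the optical axis are simplifying assumptions here; the paper's calibration addresses the real-device effects this ignores.

```python
import numpy as np

def make_texels(depth, rgb, fx, fy, cx, cy):
    """Back-project each pixel's depth through a pinhole camera model
    and attach its color, giving per-pixel (x, y, z, r, g, b) texels."""
    h, w = depth.shape
    us, vs = np.meshgrid(np.arange(w), np.arange(h))
    # pinhole back-projection of pixel (u, v) at depth z
    x = (us - cx) * depth / fx
    y = (vs - cy) * depth / fy
    return np.dstack([x, y, depth, rgb[..., 0], rgb[..., 1], rgb[..., 2]])
```

Because the 3D point and the color sample come from the same pixel, no separate registration step between the point cloud and the image is needed.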


Proceedings of SPIE, the International Society for Optical Engineering | 2006

Simulation and modeling of return waveforms from a ladar beam footprint in USU LadarSIM

Scott E. Budge; Brad Leishman; Robert T. Pack

Ladar systems are an emerging technology with applications in many fields. Consequently, simulations for these systems have become a valuable tool in the improvement of existing systems and the development of new ones. This paper discusses the theory and issues involved in reliably modeling the return waveform of a ladar beam footprint in the Utah State University LadarSIM simulation software. Emphasis is placed on modeling system-level effects that allow an investigation of engineering tradeoffs in preliminary designs and validation of behaviors in fabricated designs. Efforts have been made to decrease the necessary computation time while still maintaining a usable model. A full waveform simulation is implemented that models the optical signals received at the detector, followed by the electronic signals and discriminators commonly encountered in contemporary direct-detection ladar systems. Waveforms are modeled using a novel hexagonal sampling process applied across the ladar beam footprint. Each sample is weighted using a Gaussian spatial profile for a well-formed laser footprint. Model fidelity is also improved by using a bidirectional reflectance distribution function (BRDF) for target reflectance. Once photons are converted to electrons, waveform processing is used to detect first, last, or multiple return pulses. The detection methods discussed in this paper are a threshold detection method, a constant fraction method, and a derivative zero-crossing method. Various detection phenomena, such as range error, walk error, dropouts, and false alarms, can be studied using these detection methods.
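The three discrimination methods named above can be sketched on a sampled return waveform. The code below is a simplified illustration only; the actual LadarSIM discriminators operate on the full radiometric and electronic signal chain, which is not modeled here.

```python
import numpy as np

def threshold_detect(wave, thresh):
    """Return the index of the first sample at or above a fixed
    threshold (leading-edge detection); None if never crossed."""
    above = np.nonzero(wave >= thresh)[0]
    return int(above[0]) if above.size else None

def constant_fraction_detect(wave, fraction=0.5):
    """Trigger at a fixed fraction of the peak amplitude, which
    reduces the amplitude-dependent walk error of a fixed threshold."""
    return threshold_detect(wave, fraction * wave.max())

def zero_crossing_detect(wave):
    """Trigger at the pulse peak, located as the first negative-going
    zero crossing of the discrete derivative."""
    d = np.diff(wave)
    for i in range(len(d) - 1):
        if d[i] > 0 and d[i + 1] <= 0:
            return i + 1
    return None
```

On a clean pulse the three methods agree up to a fixed offset; their differences emerge with noise, varying return amplitude, and overlapping pulses, which is where phenomena such as walk error and false alarms can be studied.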


Optical Engineering | 2013

Calibration method for texel images created from fused flash lidar and digital camera images

Scott E. Budge; Neeraj S. Badamikar

Abstract. The fusion of imaging lidar information and digital imagery results in 2.5-dimensional surfaces covered with texture information, called texel images. These data sets, when taken from different viewpoints, can be combined to create three-dimensional (3-D) images of buildings, vehicles, or other objects. This paper presents a procedure for calibration, error correction, and fusing of flash lidar and digital camera information from a single sensor configuration to create accurate texel images. A brief description of a prototype sensor is given, along with a calibration technique used with the sensor, which is applicable to other flash lidar/digital image sensor systems. The method combines systematic error correction of the flash lidar data, correction for lens distortion of the digital camera and flash lidar images, and fusion of the lidar to the camera data in a single process. The result is a texel image acquired directly from the sensor. Examples of the resulting images, with improvements from the proposed algorithm, are presented. Results with the prototype sensor show very good match between 3-D points and the digital image (<2.8 image pixels), with a 3-D object measurement error of <0.5%, compared to a noncalibrated error of ∼3%.
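The lens-distortion correction step can be illustrated with the common Brown radial model; the choice of model is an assumption here, and the paper's single-process calibration jointly handles lidar systematic error, distortion of both imagers, and fusion, which this sketch does not.

```python
import numpy as np

def distort(pts, k1, k2=0.0):
    """Forward Brown radial model: map ideal normalized image
    coordinates to their distorted positions."""
    pts = np.asarray(pts, dtype=float)
    r2 = (pts ** 2).sum(axis=1, keepdims=True)
    return pts * (1.0 + k1 * r2 + k2 * r2 ** 2)

def undistort(pts, k1, k2=0.0, iters=20):
    """Invert the radial model by fixed-point iteration: repeatedly
    divide the observed points by the distortion factor evaluated at
    the current estimate of the ideal coordinates."""
    pts = np.asarray(pts, dtype=float)
    x = pts.copy()
    for _ in range(iters):
        r2 = (x ** 2).sum(axis=1, keepdims=True)
        x = pts / (1.0 + k1 * r2 + k2 * r2 ** 2)
    return x
```

The fixed-point iteration converges quickly for the mild distortion coefficients typical of calibrated lenses.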


IEEE Transactions on Systems, Man, and Cybernetics | 1994

Classification using set-valued Kalman filtering and Levi's decision theory

Todd K. Moon; Scott E. Budge

We consider the problem of using Levi's expected epistemic decision theory for classification when the hypotheses are of different informational values, conditioned on convex sets obtained from a set-valued Kalman filter. The background of epistemic utility decision theory with convex probabilities is outlined, and a brief introduction to set-valued estimation is given. The decision theory is applied to a classifier in a multiple-target tracking scenario. A new probability density, appropriate for classification using the ratio of intensities, is introduced.


Optical Engineering | 2014

Automatic registration of fused lidar/digital imagery (texel images) for three-dimensional image creation

Scott E. Budge; Neeraj S. Badamikar; Xuan Xie

Abstract. Several photogrammetry-based methods have been proposed that derive three-dimensional (3-D) information from digital images taken from different perspectives, and lidar-based methods have been proposed that merge lidar point clouds and texture the merged point clouds with digital imagery. Image registration alone has difficulty with smooth regions with low contrast, whereas point cloud merging alone has difficulty with outliers and a lack of proper convergence in the merging process. This paper presents a method to create 3-D images that uses the unique properties of texel images (pixel-fused lidar and digital imagery) to improve the quality and robustness of fused 3-D images. The proposed method uses both image processing and point-cloud merging to combine texel images in an iterative technique. Since the digital image pixels and the lidar 3-D points are fused at the sensor level, more accurate 3-D images are generated because registration of image data automatically improves the merging of the point clouds, and vice versa. Examples illustrate the value of this method over other methods. The proposed method also includes modifications for the situation where an estimate of the sensor's position and attitude is known, as obtained from low-cost global positioning system (GPS) and inertial measurement unit (IMU) sensors.


International Conference on Acoustics, Speech, and Signal Processing | 1999

Locally optimal, buffer-constrained motion estimation and mode selection for video sequences

Christian B. Peel; Scott E. Budge; Kyminh Liang; Chien-Min Huang

We describe a method of using a Lagrange multiplier to make a locally optimal trade-off between rate and distortion in the motion search for video sequences, while maintaining a constant-bit-rate channel. Simulation of this method shows that it gives up to 3.5 dB PSNR improvement in a high-motion sequence. A locally rate-distortion (R-D) optimal mode selection mechanism is also described, which likewise gives a significant quality benefit over the nominal method. Although the benefit of these techniques is significant when used separately, when the optimal mode selection is combined with the R-D optimal motion search, it does not perform much better than the codec does with only the R-D optimal motion search.
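The Lagrangian selection underlying this kind of motion search can be sketched as follows. The candidate distortions and bit costs below are illustrative placeholders, and the buffer feedback that adapts the multiplier to hold the channel rate constant is omitted.

```python
def rd_motion_search(distortion, rate, lam):
    """Choose the motion-vector candidate minimizing the Lagrangian
    cost J = D + lam * R, where lam trades distortion against coded
    bits; lam = 0 reduces to a pure distortion (SAD/SSD) search."""
    best_mv, best_cost = None, float("inf")
    for mv, d in distortion.items():
        j = d + lam * rate[mv]
        if j < best_cost:
            best_mv, best_cost = mv, j
    return best_mv, best_cost
```

A large motion vector that slightly lowers distortion but costs many bits to code loses to a cheaper vector once the multiplier is large enough, which is exactly the trade-off the method exploits.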


Proceedings of SPIE | 2013

Automatic Registration of Multiple Texel Images (Fused Lidar/Digital Imagery) for 3D Image Creation

Scott E. Budge; Neeraj S. Badamikar

Creation of 3D images through remote sensing is a topic of interest in many applications such as terrain/building modeling and automatic target recognition (ATR). Several photogrammetry-based methods have been proposed that derive 3D information from digital images from different perspectives, and lidar-based methods have been proposed that merge lidar point clouds and texture the merged point clouds with digital imagery. Image registration alone has difficulty with smooth regions with low contrast, whereas point cloud merging alone has difficulty with outliers and lack of proper convergence in the merging process. This paper presents a method to create 3D images that uses the unique properties of texel images (pixel-fused lidar and digital imagery) to improve the quality and robustness of fused 3D images. The proposed method uses both image processing and point-cloud merging to combine texel images in an iterative technique. Since the digital image pixels and the lidar 3D points are fused at the sensor level, more accurate 3D images are generated because registration of image data automatically improves the merging of the point clouds, and vice versa. Examples illustrate the value of this method over other methods.


International Conference on Digital Signal Processing | 2009

A Laboratory-Based Course in Real-Time Digital Signal Processing Implementation

Scott E. Budge

A four-credit-hour laboratory course in real-time digital signal processing (DSP) implementation is described. This course has been developed and taught at Utah State University for over 12 years, incorporating feedback from students and extensive classroom experience. The educational goal of the course is to teach seniors and first-year graduate students how to select, implement, and evaluate DSP systems for real-time signal processing. A major component of the class is a series of seven laboratories in which the student must perform real-time processing on hardware based on modern digital signal processors and FPGAs. Student feedback has indicated that the course has been very successful in helping them become effective in real-time DSP-related jobs.


Proceedings of SPIE, the International Society for Optical Engineering | 2006

The simulation of automatic ladar sensor control during flight operations using USU LadarSIM software

Robert T. Pack; David Saunders; R. Rees Fullmer; Scott E. Budge

USU LadarSIM Release 2.0 is a ladar simulator that has the ability to feed high-level mission scripts into a processor that automatically generates scan commands during flight simulations. The scan generation depends on specified flight trajectories and scenes consisting of terrain and targets. The scenes and trajectories can consist of either simulated or actual data. The first modeling step produces an outline of scan footprints in xyz space. Once mission goals have been analyzed and it is determined that the scan footprints are appropriately distributed or placed, specific scans can then be chosen for the generation of complete radiometry-based range images and point clouds. The simulation is capable of quickly modeling the ray-trace geometry associated with (1) various focal plane arrays and scanner configurations and (2) various scenes and trajectories associated with particular maneuvers or missions.
