Agnieszka C. Miguel
Seattle University
Publications
Featured research published by Agnieszka C. Miguel.
international conference on image processing | 1999
Agnieszka C. Miguel; Alexander E. Mohr; Eve A. Riskin
We present a simple and efficient scheme for using the Set Partitioning in Hierarchical Trees (SPIHT) image compression algorithm in a generalized multiple description framework. To combat packet loss, controlled amounts of redundancy are added to the original data during the compression process. Unequal loss protection is implemented by varying the amount of redundancy with the importance of data. The algorithm achieves graceful degradation of image quality in the presence of increasing description loss; high image quality is obtained even when over half of the descriptions are lost.
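A minimal sketch of the unequal-loss-protection idea described above, assuming a toy segmentation of a progressive bitstream (this is not the authors' MD-SPIHT implementation): earlier, more important segments are copied into more descriptions than later ones, so losing some descriptions still preserves the globally important data.

```python
def pack_descriptions(bitstream_segments, num_descriptions, copies_per_segment):
    """bitstream_segments: list of byte strings, ordered most to least important.
    copies_per_segment: how many descriptions carry each segment; decreasing
    with importance implements unequal loss protection."""
    descriptions = [[] for _ in range(num_descriptions)]
    for seg_index, (segment, copies) in enumerate(zip(bitstream_segments, copies_per_segment)):
        for k in range(copies):
            # spread the copies of this segment across different descriptions
            descriptions[(seg_index + k) % num_descriptions].append(segment)
    return descriptions

# Example: the most important segment is placed in all 4 descriptions, the least
# important in only 1, so any single received description still carries the base layer.
segments = [b"header", b"coarse", b"fine", b"finest"]
packed = pack_descriptions(segments, num_descriptions=4, copies_per_segment=[4, 3, 2, 1])
```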
data compression conference | 2004
Agnieszka C. Miguel; Amanda R. Askew; Alexander Chang; Scott Hauck; Richard E. Ladner; Eve A. Riskin
This paper presents an algorithm for lossy compression of hyperspectral images for implementation on field programmable gate arrays (FPGAs). To greatly reduce the bit rate required to code images, linear prediction is used between the bands to exploit the large amount of inter-band correlation. The prediction residual is compressed using the set partitioning in hierarchical trees algorithm. To reduce the complexity of the predictive encoder, this paper proposes a bit-plane-synchronized closed-loop predictor that does not require full decompression of a previous band at the encoder. The new technique achieves almost the same compression ratio as standard closed-loop predictive coding and has a simpler on-board implementation.
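A hedged sketch of inter-band linear prediction, assuming a simple affine fit between bands (the paper's FPGA design and exact predictor form are not reproduced here): band k is predicted from a previously coded band and only the residual is handed to the bit-plane coder.

```python
import numpy as np

def predict_band(reference_band, current_band):
    """Least-squares affine prediction of current_band from reference_band."""
    x = reference_band.ravel().astype(np.float64)
    y = current_band.ravel().astype(np.float64)
    a, b = np.polyfit(x, y, 1)                       # slope and intercept
    prediction = a * reference_band + b
    residual = current_band.astype(np.float64) - prediction
    return residual, (a, b)

# The residual typically has much lower energy and entropy than the raw band,
# which is what makes the subsequent SPIHT coding effective.
bands = np.random.randint(0, 4096, size=(3, 64, 64))  # toy hyperspectral cube
residual, coeffs = predict_band(bands[0], bands[1])
```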
Archive | 2006
Agnieszka C. Miguel; Richard E. Ladner; Eve A. Riskin; Scott Hauck; Dane K. Barney; Amanda R. Askew; Alexander Chang
Algorithms for lossless and lossy compression of hyperspectral images are presented. To greatly reduce the bit rate required to code images and to exploit the large amount of inter-band correlation, linear prediction between the bands is used. Each band, except the first, is predicted by a previously transmitted band. Once the prediction is formed, it is subtracted from the original band, and the residual (difference image) is compressed. To find the best prediction algorithm, the impact of various band orderings and measures of prediction quality on the compression ratios is studied. The resulting lossless compression algorithm displays performance that is comparable with other recently published results. To reduce the complexity of the lossy predictive encoder, a bit-plane-synchronized closed-loop predictor that does not require full decompression of a previous band at the encoder is proposed. The new technique achieves compression ratios similar to those of standard closed-loop predictive coding and has a simpler implementation.
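One illustrative measure of prediction quality for the band-ordering study, assuming correlation as the criterion (the report's actual orderings and measures may differ): for each band, choose the previously transmitted band with the highest correlation as its reference.

```python
import numpy as np

def best_reference(bands, current_index):
    """Return the index of the previously coded band most correlated with band current_index."""
    current = bands[current_index].ravel().astype(np.float64)
    best, best_corr = None, -1.0
    for j in range(current_index):                    # only previously transmitted bands
        ref = bands[j].ravel().astype(np.float64)
        corr = abs(np.corrcoef(ref, current)[0, 1])
        if corr > best_corr:
            best, best_corr = j, corr
    return best, best_corr

cube = np.random.randint(0, 4096, size=(5, 32, 32))   # toy hyperspectral cube
ref_index, corr = best_reference(cube, current_index=3)
```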
international conference on image processing | 2016
Agnieszka C. Miguel; Sara Beery; Erica Flores; Loren Klemesrud; Rana Bayrakcismith
Camera trapping is used by conservation biologists to study snow leopards. In this research, we introduce techniques that find motion in camera trap images. Images are grouped into sets and a common background image is computed for each set. The background and superpixel-based features are then used to segment each image into objects that correspond to motion. The proposed methods are robust to changes in illumination due to time of day or the presence of camera flash.
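A minimal sketch of the shared-background idea, assuming a per-pixel median as the background estimate and a fixed deviation threshold (the paper's exact background computation and superpixel-based segmentation are not shown): images from one camera-trap set are combined into a common background, and large deviations from it are flagged as candidate motion.

```python
import numpy as np

def estimate_background(image_stack):
    """image_stack: array of shape (num_images, height, width), grayscale."""
    return np.median(image_stack, axis=0)

def motion_mask(image, background, threshold=25.0):
    """Pixels whose deviation from the background exceeds the (assumed) threshold."""
    return np.abs(image.astype(np.float64) - background) > threshold

stack = np.random.randint(0, 256, size=(10, 120, 160)).astype(np.float64)
background = estimate_background(stack)
mask = motion_mask(stack[0], background)
```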
IEEE Transactions on Sustainable Energy | 2012
Henry Louie; Agnieszka C. Miguel
Substantial quantities of wind plant data are being accumulated as interest and investment in renewable energy grows. These data sets can approach tens of terabytes in size, making their management, storage, manipulation, and transmission burdensome. Lossless compression of the data sets can mitigate these challenges without sacrificing accuracy. This paper develops and analyzes lossless compression algorithms that can be applied to data used in integration studies and data used in wind plant monitoring and operation. The algorithms exploit wind speed-to-wind power relationships, and the temporal and spatial correlations in the data. The Shannon entropy of wind power and speed data is computed to gain insight on the uncertainty of wind power and speed and to benchmark performance of the compression algorithms. The algorithms are applied to the National Renewable Energy Laboratory's Western and Eastern Data Sets and to actual wind turbine data. The resulting compression ratios are up to 50% higher than those obtained by direct application of off-the-shelf lossless compression methods.
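A hedged sketch of the benchmarking idea only: estimate the Shannon entropy of a quantized wind power series (a lower bound on lossless bits per sample) and compare it with an off-the-shelf compressor applied to temporal differences. The quantization, the toy series, and the use of bzip2 are assumptions; the paper's algorithms also exploit speed-to-power and spatial structure.

```python
import bz2
import numpy as np

def empirical_entropy(symbols):
    """Shannon entropy in bits per symbol of an integer sequence."""
    _, counts = np.unique(symbols, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

power = np.cumsum(np.random.randn(10000))             # toy temporally correlated series
quantized = np.round(power).astype(np.int32)
deltas = np.diff(quantized)                            # temporal correlation -> small deltas

print("entropy (bits/sample):", empirical_entropy(quantized))
print("bzip2 size of deltas (bytes):", len(bz2.compress(deltas.tobytes())))
```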
data compression conference | 2000
Agnieszka C. Miguel; Eve A. Riskin
Summary form only given. We present a simple and efficient scheme for protecting a region of interest (ROI) in an image sent across generalized multiple description networks. To increase the probability that the ROI is received with high quality, we extend the unequal loss protection framework of MD-SPIHT by adding more redundancy to the ROI than to other parts of the image. The SPIHT algorithm orders data progressively and sends the globally important information first. By adding more redundancy to the earlier parts of the bit stream than to the later parts, MD-SPIHT protects data that are globally important to the image quality more than less important data. In this work, wavelet coefficients corresponding to the ROI are scaled by a large factor so that locally important information is sent in the earlier parts of the bit stream. As a result, the ROI is coded to a higher bit rate than the rest of the image and MD-SPIHT automatically assigns more redundancy to the localized ROI than to the background. Therefore, the ROI is heavily protected at the expense of lower protection in the background. The ROI has a higher probability of being received intact and its quality will be higher than the quality of the background. We show two cases in which 3 out of 8 descriptions are received for a 35% redundancy assignment. Without ROI scaling, redundancy is spread to all parts of the image and the quality of the ROI is low. When the ROI is scaled, redundancy is concentrated in the ROI and the quality of the received ROI is higher.
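An illustrative sketch of the ROI-scaling step, assuming a particular scale factor (the value used in the paper is not given here): wavelet coefficients inside the ROI mask are multiplied by a large factor, so a magnitude-ordered progressive coder such as SPIHT emits them earlier and the MD framework assigns them more redundancy.

```python
import numpy as np

def scale_roi_coefficients(coefficients, roi_mask, scale_factor=32.0):
    """coefficients: 2-D array of wavelet coefficients;
    roi_mask: boolean array of the same shape marking the ROI support."""
    scaled = coefficients.astype(np.float64).copy()
    scaled[roi_mask] *= scale_factor                  # boost locally important data
    return scaled

coeffs = np.random.randn(64, 64)
mask = np.zeros((64, 64), dtype=bool)
mask[16:32, 16:32] = True                             # toy ROI support in the wavelet domain
scaled = scale_roi_coefficients(coeffs, mask)
```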
international conference on image processing | 2006
Agnieszka C. Miguel; Jenny Liu; Dane K. Barney; Richard E. Ladner; Eve A. Riskin
Algorithms for near-lossless compression of hyperspectral images are presented. They guarantee that the intensity of any pixel in the decompressed image(s) differs from its original value by no more than a user-specified quantity. To reduce the bit rate required to code images while providing significantly more compression than lossless algorithms, linear prediction between the bands is used. Each band is predicted by a previously transmitted band. The prediction is subtracted from the original band, and the residual is compressed with a bit plane coder which uses context-based adaptive binary arithmetic coding. To find the best prediction algorithm, the impact of various band orderings and optimization techniques on the compression ratios is studied.
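A minimal sketch of how a near-lossless error bound can be guaranteed, assuming uniform residual quantization (the paper's bit-plane coder and context modeling are not shown): quantizing the prediction residual with step 2*delta + 1 and reconstructing at bin centers bounds the per-pixel error by the user-specified delta.

```python
import numpy as np

def quantize_residual(residual, delta):
    step = 2 * delta + 1
    return np.round(residual / step).astype(np.int64)

def dequantize_residual(indices, delta):
    return indices * (2 * delta + 1)

original = np.random.randint(0, 4096, size=(64, 64))
# toy stand-in for the inter-band prediction; in practice the predictor uses a
# previously decoded band, not the original
prediction = original + np.random.randint(-10, 11, size=original.shape)
delta = 2
indices = quantize_residual(original - prediction, delta)
reconstruction = prediction + dequantize_residual(indices, delta)
assert np.max(np.abs(reconstruction - original)) <= delta   # near-lossless guarantee
```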
data compression conference | 2006
Agnieszka C. Miguel; John F. Keane; Jeffrey R. Whiteaker; Heidi Zhang; Amanda G. Paulovich
The unrelenting growth of mass spectrometry (MS) based proteomic data to gigabytes per sample and terabytes per experiment motivates this investigation into compression methods suited to MS signal sources. The data for this study was derived from peptides of hand-mixed protein samples passed through a high performance liquid chromatography system (HPLC) and an electrospray ionization time-of-flight (ESI-TOF) mass spectrometer. Several lossless data compression methods were applied and yielded up to a 25:1 compression ratio relative to the original files containing base64 encoding of the data.
Signal, Image and Video Processing | 2012
Agnieszka C. Miguel; Eve A. Riskin; Richard E. Ladner; Dane K. Barney
We investigate the ability to derive meaningful information from decompressed imaging spectrometer data. Hyperspectral images are compressed with near-lossless and lossy coding methods. Linear prediction between the bands is used in both cases. Each band is predicted by a previously transmitted band. The residual is formed by subtracting the prediction from the original data and then is compressed either with a near-lossless bit-plane coder or with the lossy JPEG2000 algorithm. We study the effects of these two types of compression on hyperspectral image processing such as mineral and vegetation content classification using whole- and mixed-pixel analysis techniques. The results presented in this paper indicate that an efficient lossy coder outperforms the near-lossless method in terms of its impact on final hyperspectral data applications.
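A hedged sketch of one whole-pixel analysis step, assuming spectral-angle matching as the example classifier (the paper's exact classification techniques may differ): measure how often nearest-spectral-angle labels change after decompression.

```python
import numpy as np

def spectral_angle(a, b):
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.arccos(np.clip(cos, -1.0, 1.0))

def classify(pixels, library):
    """pixels: (n, bands) spectra; library: (classes, bands) reference spectra."""
    labels = []
    for p in pixels:
        angles = [spectral_angle(p, ref) for ref in library]
        labels.append(int(np.argmin(angles)))
    return np.array(labels)

library = np.abs(np.random.randn(5, 200)) + 0.1            # toy endmember spectra
original = np.abs(np.random.randn(100, 200)) + 0.1
decompressed = original + 0.01 * np.random.randn(100, 200)  # simulated coding error
agreement = np.mean(classify(original, library) == classify(decompressed, library))
```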
computer-based medical systems | 2006
Agnieszka C. Miguel; John F. Keane; Jeffrey R. Whiteaker; Heidi Zhang; Amanda G. Paulovich
Summary form only given. The unrelenting growth of liquid chromatography-mass spectrometry (LC-MS) based proteomic data to gigabytes per sample and terabytes per experiment motivates this investigation into compression methods suited to MS signal sources. Compression is needed to facilitate storage, searching, archiving, retrieval, and communication of proteomic MS data. We demonstrate compression techniques that reduce the average file size by a factor of 25 without any loss of accuracy. We have designed two main methods to code the MS data. The first method predicts the mass-to-charge ratio based on the intensity values and encodes the residual with bzip2. The second algorithm maps the original intensity values onto a universal grid and either directly encodes them with bzip2 or applies an arithmetic coder to the results of run-length coding. The latter method achieves the highest compression ratios.
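An illustrative sketch of the second approach, assuming a particular grid spacing and substituting bzip2 for the final arithmetic coder: intensity samples are snapped onto a common grid, the many repeated entries are run-length coded, and the result is handed to a general-purpose compressor.

```python
import bz2
import numpy as np

def map_to_grid(values, grid_step):
    """Snap intensity values onto a universal grid with assumed spacing grid_step."""
    return np.round(values / grid_step).astype(np.int32)

def run_length_encode(symbols):
    """Return (value, run_length) pairs for consecutive repeats."""
    runs = []
    i = 0
    while i < len(symbols):
        j = i
        while j + 1 < len(symbols) and symbols[j + 1] == symbols[i]:
            j += 1
        runs.append((int(symbols[i]), j - i + 1))
        i = j + 1
    return runs

# Toy spectrum: mostly zeros with sparse peaks, which run-length coding exploits.
intensities = np.abs(np.random.randn(100000)) * (np.random.rand(100000) > 0.95)
gridded = map_to_grid(intensities, grid_step=0.01)
runs = run_length_encode(gridded)
compressed = bz2.compress(np.array(runs, dtype=np.int32).tobytes())
```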