Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Brent M. Bradburn is active.

Publication


Featured research published by Brent M. Bradburn.


Proceedings of SPIE | 2013

Binary image compression using conditional entropy-based dictionary design and indexing

Yandong Guo; Dejan Depalov; Peter Bauer; Brent M. Bradburn; Jan P. Allebach; Charles A. Bouman

The JBIG2 standard is widely used for binary document image compression, primarily because it achieves much higher compression ratios than conventional facsimile encoding standards such as T.4, T.6, and T.82 (JBIG1). A typical JBIG2 encoder works by first separating the document into connected components, or symbols. Next, it creates a dictionary by encoding a subset of symbols from the image, and finally it encodes all the remaining symbols using the dictionary entries as a reference. In this paper, we propose a novel method for measuring the distance between symbols based on a conditional-entropy estimation (CEE) distance measure. The CEE distance measure is used both to construct the dictionary and to index its entries. The advantage of the CEE distance measure, as compared to conventional measures of symbol similarity, is that the CEE provides a much more accurate estimate of the number of bits required to encode a symbol. In experiments on a variety of documents, we demonstrate that incorporating the CEE distance measure reduces the overall bitrate of the JBIG2-encoded bitstream by approximately 14% compared to the best conventional dissimilarity measures.
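
As a rough illustration of how such a conditional-entropy estimate can work, the Python sketch below counts the empirical bits needed to encode a symbol pixel by pixel, given a small causal context plus the co-located pixel of a candidate dictionary entry. This is an assumed formulation for illustration, not the authors' exact CEE measure, and the function name is hypothetical.

```python
# Hedged sketch: estimate the bits needed to encode binary symbol `sym`
# given aligned reference `ref` via an empirical conditional entropy.
import numpy as np
from collections import Counter

def cee_distance(sym: np.ndarray, ref: np.ndarray) -> float:
    """Approximate bits to encode `sym` (0/1 array) given aligned `ref`."""
    assert sym.shape == ref.shape
    h, w = sym.shape
    joint = Counter()    # counts of (context, pixel) pairs
    ctx_tot = Counter()  # counts of each context alone
    for y in range(h):
        for x in range(w):
            left = sym[y, x - 1] if x > 0 else 0  # causal neighbors
            up = sym[y - 1, x] if y > 0 else 0
            ctx = (int(left), int(up), int(ref[y, x]))
            joint[(ctx, int(sym[y, x]))] += 1
            ctx_tot[ctx] += 1
    bits = 0.0
    for (ctx, px), n in joint.items():
        p = n / ctx_tot[ctx]        # empirical P(pixel | context)
        bits += -n * np.log2(p)     # n pixels, -log2(p) bits each
    return bits                     # lower = better dictionary match
```

Under this sketch, the dictionary entry minimizing cee_distance would be chosen as the reference for a symbol, directly approximating the coding cost rather than a geometric dissimilarity.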


International Conference on Image Processing | 2013

Dynamic hierarchical dictionary design for multi-page binary document image compression

Yandong Guo; Dejan Depalov; Peter Bauer; Brent M. Bradburn; Jan P. Allebach; Charles A. Bouman

The JBIG2 standard is widely used for binary document image compression, primarily because it achieves much higher compression ratios than conventional facsimile encoding standards. In this paper, we propose a dynamic hierarchical dictionary design method (DH) for multi-page binary document image compression with JBIG2. Our DH method outperforms other multi-page compression methods by exploiting the information redundancy among pages through the following techniques. First, we build a hierarchical dictionary to retain more information per page for future use. Second, we dynamically update the dictionary in memory to keep as much information as possible subject to the memory constraint. Third, we incorporate our conditional entropy estimation algorithm to use the retained information more effectively. Our experimental results show that the DH method improves the compression ratio by about 15% compared to the best existing multi-page encoding method.
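
The memory-constrained dictionary update can be pictured with the sketch below. The DynamicDictionary class and its hits-per-byte eviction policy are assumptions for illustration, not the paper's actual DH algorithm.

```python
# Hedged sketch of a memory-bounded symbol dictionary: when the budget is
# exceeded, evict the entries that have contributed least per byte retained.
class DynamicDictionary:
    def __init__(self, budget_bytes: int):
        self.budget = budget_bytes
        self.entries = {}  # symbol_id -> [bitmap_bytes, hit_count]

    def add(self, symbol_id, bitmap_bytes: int):
        # bitmap_bytes is assumed positive (a non-empty symbol bitmap)
        self.entries[symbol_id] = [bitmap_bytes, 0]
        self._evict_if_needed()

    def hit(self, symbol_id):
        self.entries[symbol_id][1] += 1  # symbol reused as a reference

    def _evict_if_needed(self):
        used = sum(size for size, _ in self.entries.values())
        while used > self.budget and len(self.entries) > 1:
            # victim = fewest hits per byte of memory occupied
            victim = min(self.entries,
                         key=lambda k: self.entries[k][1] / self.entries[k][0])
            used -= self.entries[victim][0]
            del self.entries[victim]
```

The design point this illustrates is that entries are not discarded per page; they persist across pages until the memory budget forces out the least useful ones.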


International Conference on Image Processing | 2013

Document image binarization via one-pass local classification

Haitao Xue; Charles A. Bouman; Peter Bauer; Dejan Depalov; Brent M. Bradburn; Jan P. Allebach

Binarization algorithms are used to create a binary representation of a raster document image, typically with the intent of identifying text and separating it from background content. In this paper, we propose a binarization algorithm based on one-pass local classification. The algorithm first generates an initial binarization by local thresholding, then corrects the result with a one-pass local classification strategy, followed by component inversion. The experimental results demonstrate that our algorithm achieves a somewhat lower binarization error rate than the state-of-the-art algorithm COS [1], while requiring significantly less computation.
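
A minimal sketch of that two-step structure is shown below. The window size, bias factor, and the causal majority-vote correction are assumptions for illustration, not the authors' classifier.

```python
# Hedged sketch: local mean thresholding, then a single raster-order pass
# that flips pixels disagreeing with their already-visited neighbors.
import numpy as np
from scipy.ndimage import uniform_filter

def binarize_one_pass(gray: np.ndarray, win: int = 31,
                      bias: float = 0.9) -> np.ndarray:
    # Step 1: initial binarization by local (windowed mean) thresholding
    local_mean = uniform_filter(gray.astype(float), size=win)
    binary = (gray < bias * local_mean).astype(np.uint8)  # 1 = text

    # Step 2: one raster-order correction pass over causal neighbors
    h, w = binary.shape
    for y in range(1, h):
        for x in range(1, w - 1):
            votes = binary[y, x - 1] + binary[y - 1, x] + binary[y - 1, x + 1]
            if votes >= 2 and binary[y, x] == 0:
                binary[y, x] = 1   # join a locally dominant text region
            elif votes == 0 and binary[y, x] == 1:
                binary[y, x] = 0   # drop an isolated foreground pixel
    return binary
```

Because the correction pass only consults pixels that have already been finalized, the whole algorithm remains a single sweep over the image, which is where the computational saving over iterative methods would come from.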


Proceedings of SPIE | 2012

A color quantization algorithm based on minimization of modified Lp norm error in a CIELAB space

Haitao Xue; Peter Bauer; Dejan Depalov; Brent M. Bradburn; Jan P. Allebach; Charles A. Bouman

Color quantization algorithms are used to select a small number of colors that can accurately represent the content of a particular image. In this research, we introduce a novel color quantization algorithm based on the minimization of a modified Lp norm rather than the more traditional L2 norm associated with mean squared error (MSE). We demonstrate that the Lp optimization approach has two advantages. First, it distributes the colors more uniformly over the regions of the image; and second, the norm's value can be used as an effective criterion for selecting the minimum number of colors necessary to achieve an accurate representation of the image. One potential disadvantage of the modified Lp norm criterion is that it can increase the computational cost of the associated clustering methods. However, we address this problem by introducing a two-stage clustering procedure in which the first stage (pre-clustering) agglomerates the full set of pixels into a relatively large number of discrete colors, and the second stage (post-clustering) performs modified Lp norm minimization using the reduced number of discrete colors resulting from the pre-clustering step. The number of groups used in the post-clustering is then chosen to be the smallest number that achieves a selected threshold value of the normalized Lp norm. This two-stage clustering process dramatically reduces computation by merging colors together before the computationally expensive modified Lp norm minimization is applied.
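
The two-stage procedure can be illustrated with the Python sketch below, which substitutes scikit-learn's KMeans for both stages and a plain weighted Lp norm for the paper's modified norm; the function name, parameter values, and threshold are hypothetical.

```python
# Hedged sketch of two-stage quantization in CIELAB: pre-cluster pixels into
# many colors, then find the smallest palette whose normalized Lp error
# falls below a tolerance.
import numpy as np
from sklearn.cluster import KMeans
from skimage.color import rgb2lab

def quantize(rgb: np.ndarray, p: float = 4.0, tol: float = 2.5,
             pre_k: int = 256) -> np.ndarray:
    lab = rgb2lab(rgb).reshape(-1, 3)
    # Stage 1 (pre-clustering): agglomerate pixels into pre_k discrete colors
    pre = KMeans(n_clusters=pre_k, n_init=1, random_state=0).fit(lab)
    centers = pre.cluster_centers_
    counts = np.bincount(pre.labels_, minlength=pre_k)
    # Stage 2 (post-clustering): smallest k meeting the Lp error threshold
    for k in range(2, pre_k + 1):
        post = KMeans(n_clusters=k, n_init=1, random_state=0).fit(
            centers, sample_weight=counts)
        d = np.linalg.norm(centers - post.cluster_centers_[post.labels_],
                           axis=1)
        lp = (np.sum(counts * d ** p) / counts.sum()) ** (1.0 / p)
        if lp <= tol:
            return post.cluster_centers_  # palette of k CIELAB colors
    return post.cluster_centers_
```

The key cost saving mirrors the abstract: the expensive stage operates on at most pre_k weighted colors rather than on every pixel, and raising p above 2 penalizes large per-pixel errors more heavily than MSE would.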


Electronic Imaging | 2015

Flatbed scanner simulation to analyze the effect of detector's size on color artifacts

Mohammed Yousefhussien; Roger L. Easton; Raymond W. Ptucha; Mark Q. Shaw; Brent M. Bradburn; Jerry Wagner; David Larson; Eli Saber

Simulations of flatbed scanners can shorten the development cycle of new designs, estimate image quality, and lower manufacturing costs. In this paper, we present a flatbed scanner simulation of a strobe RGB scanning method that investigates the effect of the sensor height on color artifacts. The image chain model from the remote sensing community was adapted and tailored to fit flatbed scanning applications. This model allows the user to study the relationship between various internal elements of the scanner and the final image quality. Modeled parameters include: sensor height, intensity and duration of the illuminant, scanning rate, sensor aperture, detector modulation transfer function (MTF), and motion blur created by the movement of the sensor during the scanning process. These variables are modeled mathematically using Fourier analysis, functions that model the physical components, convolutions, sampling theorems, and gamma corrections. Special targets used to validate the simulation include a single-frequency pattern, a radial chirp-like pattern, and a high-resolution scanned document. The simulation is demonstrated to model the scanning process effectively at both a theoretical and an experimental level.
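
A much-simplified version of such an image chain can be sketched for one scan line as below. The sinc and Gaussian transfer functions and all parameter values are assumptions chosen to show the structure of the model, not the paper's calibrated simulation.

```python
# Hedged sketch of a frequency-domain image chain for one scan line:
# cascade aperture, motion-blur, and detector MTFs, then gamma-correct.
import numpy as np

def simulate_scan_line(line: np.ndarray, pitch_um: float = 10.0,
                       aperture_um: float = 8.0, motion_um: float = 5.0,
                       gamma: float = 2.2) -> np.ndarray:
    """`line` is a 1-D reflectance profile in [0, 1], sampled at pitch_um."""
    n = line.size
    f = np.fft.rfftfreq(n, d=pitch_um)        # cycles per micron
    mtf_aperture = np.sinc(f * aperture_um)   # square detector aperture
    mtf_motion = np.sinc(f * motion_um)       # blur from sensor motion
    mtf_detector = np.exp(-(f / 0.05) ** 2)   # assumed Gaussian roll-off
    spectrum = np.fft.rfft(line) * mtf_aperture * mtf_motion * mtf_detector
    sensed = np.clip(np.fft.irfft(spectrum, n), 0.0, 1.0)
    return sensed ** (1.0 / gamma)            # display gamma correction
```

Running each of the R, G, and B exposures through such a chain with a different effective sensor position is one way the height-dependent color misregistration described above could be explored.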


Archive | 1998

Adaptive image resolution enhancement technology

Qian Lin; Brent M. Bradburn; Brian E. Hoffmann


Archive | 1998

System for compression of digital images comprising low detail areas

Randall E. Grohs; Brent M. Bradburn


Archive | 1998

Apparatus and method for compressing Huffman encoded data

Randall E. Grohs; Brent M. Bradburn


Archive | 1998

Image compression of background and text tiles

Randall E. Grohs; Brent M. Bradburn


Archive | 2002

Pixel processing system for image production

Brent M. Bradburn

Collaboration


Dive into Brent M. Bradburn's collaborations.
