Publication


Featured research published by Joan L. Mitchell.


IBM Journal of Research and Development | 1988

An overview of the basic principles of the Q-Coder adaptive binary arithmetic coder

William B. Pennebaker; Joan L. Mitchell; Glen G. Langdon; Ronald B. Arps

The Q-Coder is a new form of adaptive binary arithmetic coding. The binary arithmetic coding part of the technique is derived from the basic concepts introduced by Rissanen, Pasco, and Langdon, but extends the coding conventions to resolve a conflict between optimal software and hardware implementations. In addition, a robust form of probability estimation is used in which the probability estimate is derived solely from the interval renormalizations that are part of the arithmetic coding process. A brief tutorial of arithmetic coding concepts is presented, followed by a discussion of the compatible optimal hardware and software coding structures and the estimation of symbol probabilities from interval renormalization.
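The renormalization-driven estimation idea fits in a few lines of code. The C sketch below is not the Q-Coder itself: the constants, thresholds, and update rules are illustrative stand-ins, the multiply is replaced by the subtract approximation the paper describes, and the bit-output machinery is reduced to a counter.

    #include <stdint.h>

    #define KMIN 0x4000u                /* renormalize when the interval
                                           register falls below this      */

    static uint32_t a_reg = 0xFFFFu;    /* current interval width         */
    static uint32_t qe    = 0x0AC1u;    /* LPS probability estimate,
                                           16-bit fixed point             */
    static long renorms;                /* stand-in for the bit output
                                           that renormalization drives    */

    static void renormalize(void)
    {
        while (a_reg < KMIN) {          /* double until back in range     */
            a_reg <<= 1;
            renorms++;                  /* a real coder shifts one code
                                           bit out here                   */
        }
    }

    void encode(int symbol, int mps)
    {
        if (symbol == mps) {
            a_reg -= qe;                /* approximate A*(1 - Qe) by A - Qe */
            if (a_reg < KMIN) {         /* MPS renormalization: only now
                                           is the estimate decreased      */
                if (qe > 0x0100u)
                    qe -= qe >> 2;
                renormalize();
            }
        } else {
            a_reg = qe;                 /* LPS takes the small subinterval */
            if (qe < 0x2AAAu)           /* keep the estimate below KMIN,
                                           so an LPS always renormalizes  */
                qe += qe >> 1;
            renormalize();
        }
    }

Because the LPS estimate is kept below the renormalization threshold, every estimate update coincides with a renormalization, which is the property the abstract highlights.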


IBM Journal of Research and Development | 1988

Probability estimation for the Q-Coder

William B. Pennebaker; Joan L. Mitchell

The Q-Coder is an important new development in binary arithmetic coding. It combines a simple but efficient arithmetic approximation for the multiply operation, a new formalism which yields optimally efficient hardware and software implementations, and a new technique for estimating symbol probabilities which matches the performance of any method known. This paper describes the probability-estimation technique. The probability changes are estimated solely from renormalizations in the coding process and require no additional counters. The estimation process can be implemented as a finite-state machine, and is simple enough to allow precise theoretical modeling of single-context coding. Approximate models have been developed for a more complex multi-rate version of the estimator and for mixed-context coding. Experimental studies verifying the modeling and showing the performance achieved for a variety of image-coding models are presented.
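Such an estimator reduces to a table walk with no counters. The C sketch below shows the shape of the machine; the Qe values and transition targets are illustrative, not the published Q-Coder table.

    #include <stdint.h>

    struct pstate {
        uint16_t qe;         /* LPS probability estimate, fixed point   */
        uint8_t  nmps;       /* next state after an MPS renormalization */
        uint8_t  nlps;       /* next state after an LPS renormalization */
        uint8_t  swap;       /* exchange the MPS sense at Qe near 0.5   */
    };

    static const struct pstate table[] = {
        /*   qe    nmps nlps swap */
        { 0x5601,   1,   0,   1 },   /* Qe near 0.5                     */
        { 0x3401,   2,   0,   0 },
        { 0x1801,   3,   1,   0 },
        { 0x0AC1,   4,   2,   0 },
        { 0x0521,   5,   3,   0 },
        { 0x0221,   5,   4,   0 },   /* most skewed state: MPS renorms
                                        stay here                        */
    };

    /* One coding context is just a state index and the MPS sense. */
    struct context { uint8_t index; uint8_t mps; };

    void on_mps_renorm(struct context *c)
    {
        c->index = table[c->index].nmps;   /* step toward a smaller Qe   */
    }

    void on_lps_renorm(struct context *c)
    {
        if (table[c->index].swap)
            c->mps ^= 1;                   /* the LPS became more likely */
        c->index = table[c->index].nlps;   /* step back toward Qe = 0.5  */
    }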


IBM Journal of Research and Development | 1988

Software implementation of the Q-Coder

Joan L. Mitchell; William B. Pennebaker

The Q-Coder is an important new development in arithmetic coding. It combines a simple but efficient arithmetic approximation for the multiply operation, a new formalism which yields optimally efficient hardware and software implementations, and a new technique for estimating symbol probabilities which matches the performance of any method known. This paper describes implementations of the Q-Coder following both the hardware and software paths. Detailed flowcharts are given.
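For a flavor of the software path, here is a C sketch of a renormalization loop that shifts completed bytes out of the code register. The register layout (active code bits in the low 19 positions, the finished byte just above) and the start-up count are hypothetical, and carry resolution and bit stuffing are left out.

    #include <stdint.h>
    #include <stdio.h>

    static uint32_t a_reg;       /* interval register                    */
    static uint32_t c_reg;       /* code register; low 19 bits active    */
    static int      ct = 8;      /* shifts until the next byte is ready  */

    static void renorm_and_output(FILE *out)
    {
        do {
            a_reg <<= 1;         /* double the interval ...              */
            c_reg <<= 1;         /* ... and the code point in step       */
            if (--ct == 0) {     /* eight new bits: a byte is finished   */
                fputc((int)((c_reg >> 19) & 0xFFu), out);
                c_reg &= 0x7FFFFu;   /* keep only the active code bits   */
                ct = 8;
            }
        } while (a_reg < 0x8000u);   /* until the interval is in range   */
    }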


IBM Journal of Research and Development | 1988

Optimal hardware and software arithmetic coding procedures for the Q-Coder

Joan L. Mitchell; William B. Pennebaker

The Q-Coder is an important new development in arithmetic coding. It combines a simple but efficient arithmetic approximation for the multiply operation, a new formalism which yields optimally efficient hardware and software implementations, and a new form of probability estimation. This paper describes the concepts which allow different, yet compatible, optimal software and hardware implementations. In prior binary arithmetic coding algorithms, efficient hardware implementations favored ordering the more probable symbol (MPS) above the less probable symbol (LPS) in the current probability interval. Efficient software implementation required the inverse ordering convention. In this paper it is shown that optimal hardware and software encoders and decoders can be achieved with either symbol ordering. Although optimal implementation for a given symbol ordering requires the hardware and software code strings to point to opposite ends of the probability interval, either code string can be converted to match the other exactly. In addition, a code string generated using one symbol-ordering convention can be inverted so that it exactly matches the code string generated with the inverse convention. Even where bit stuffing is used to block carry propagation, the code strings can be kept identical.
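The two ordering conventions are easy to contrast in code. In the C sketch below, c is the code point and a the interval width; the function names are illustrative, and renormalization and carry handling are omitted. With the LPS above the MPS, the common MPS path is a single subtract, which is what makes that ordering attractive in software.

    #include <stdint.h>

    /* MPS above LPS (the ordering favored in hardware): the code point
       must advance past the LPS subinterval on every MPS. */
    void encode_mps_above(uint32_t *c, uint32_t *a, uint32_t qe, int is_mps)
    {
        if (is_mps) { *c += qe; *a -= qe; }
        else        { *a  = qe; }
    }

    /* LPS above MPS (the inverse ordering favored in software): the
       common MPS path touches only the interval register. */
    void encode_lps_above(uint32_t *c, uint32_t *a, uint32_t qe, int is_mps)
    {
        if (is_mps) { *a -= qe; }
        else        { *c += *a - qe; *a = qe; }
    }

Run to completion, the two conventions leave their code points at opposite ends of each final interval, which is the compatibility problem the conversion and inversion arguments in the paper address.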


IBM Journal of Research and Development | 1998

The Qx-coder

Michael J. Slattery; Joan L. Mitchell

The IBM Adaptive Bilevel Image Compression (ABIC) algorithm depends upon the hardware-optimized Q-coder. The Joint Bi-level Image Experts Group (JBIG) settled upon a software-optimized QM-coder. This paper explores the incompatibilities of the hardware- and software-optimized binary arithmetic coding conventions and reports on the solution that allowed a merged Qx-coder in hardware. A unique hardware solution is presented for the termination of the JBIG data stream (CLEARBITS). The probability estimation is presented in a common format. Detailed flowcharts are included in the Appendix. An ASIC core is available that supports both the ABIC and JBIG bilevel data compression standards using this merged Qx-coder.


IEEE Transactions on Communications | 1986

Gray-Scale Image Coding for Freeze-Frame Videoconferencing

Dimitris Anastassiou; William B. Pennebaker; Joan L. Mitchell

A new gray-scale image coding technique has been developed, in which an extended DPCM approach has been combined with entropy coding. This technique has been implemented in a freeze-frame videoconferencing system which is now operational at IBM sites throughout the world. Following image preprocessing, the two fields of the interlaced 512 x 480 pixel video frame are compressed sequentially with different algorithms. The reconstructed image quality is improved by subsequent image postprocessing, the final reconstructed image being almost indistinguishable from the original image. Typical gray-scale video images compress to about half a bit per pixel and transmit over 4.8 kbit/s dial-up telephone lines in about half a minute. The gray-scale image processing and compression algorithms are described in this paper.
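As a minimal illustration of the DPCM part, the C sketch below predicts each pixel from its left neighbor and emits residuals for a subsequent entropy coder; the paper's extended DPCM and its pre- and postprocessing are more elaborate.

    #include <stdint.h>
    #include <stddef.h>

    /* Forward DPCM over one scan line: emit prediction residuals for a
       subsequent entropy coder to compress. */
    void dpcm_line(const uint8_t *pix, int16_t *residual, size_t n)
    {
        uint8_t pred = 128;              /* neutral guess at line start  */
        for (size_t i = 0; i < n; i++) {
            residual[i] = (int16_t)pix[i] - pred;  /* small where smooth */
            pred = pix[i];               /* left neighbor predicts next  */
        }
    }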


IBM Systems Journal | 1983

Series/1-based videoconferencing system

Dimitris Anastassiou; Marvin K. Brown; Hugh C. Jones; Joan L. Mitchell; William B. Pennebaker; Keith S. Pennington

Discussed is a new videoconferencing system that has been developed and deployed at several IBM locations. This system transmits high-quality monochrome, freeze-frame images over dial-up telephone lines between two (or three) dedicated videoconferencing rooms. There are two main system components. An IBM Series/1 provides control, communication, data compression, and storage, and a Grinnell GMR-270 image processing display system implements image acquisition, processing, and video buffering functions. Conference participants may choose either a basically black and white rendering of an image for fast transmission or a continuous-tone rendering with a longer transmission time. Details are given regarding the system configuration, function, and operation.


National Computer Conference | 1980

Facsimile image coding

Joan L. Mitchell

Facsimile image coding has recently received considerable attention because of the standardization work being done in this area by the International Telegraph and Telephone Consultative Committee (CCITT). In November 1977, CCITT Study Group XIV standardized a one-dimensional data compression scheme for facsimile images. Two years later this standard was incorporated in a recommended two-dimensional coding scheme.


IEEE Transactions on Communications | 1989

Graphics image coding for freeze-frame videoconferencing

Joan L. Mitchell; William B. Pennebaker; Dimitris Anastassiou; Keith S. Pennington

A technique is presented for coding images that are bilevel in nature but have been captured in continuous-tone format. Following various stages of image processing, a three-level image is generated and compressed to about 0.1 to 0.2 bits/pixel. The technique has been implemented in the IBM freeze-frame videoconferencing system.
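A minimal sketch of the three-level reduction in C, assuming two fixed thresholds; the paper derives its levels from the preceding image-processing stages:

    #include <stdint.h>

    /* Map a continuous-tone sample to black, intermediate, or white.
       The thresholds lo and hi are hypothetical tuning parameters. */
    uint8_t trilevel(uint8_t g, uint8_t lo, uint8_t hi)
    {
        return (g < lo) ? 0 : (g > hi) ? 2 : 1;
    }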


International Parallel and Distributed Processing Symposium | 2005

Enhanced parallel processing in wide registers

Joan L. Mitchell; Arianne T. Hinds

Wide computer registers offer opportunities to exploit parallel processing. Instead of using hardware assists to partition a register into independent non-interacting fields, the multiple data elements can borrow and carry from elements to the left and yet be accurately separated. Algorithms can be designed so that they execute within the allocated precision. Their floating-point or irrational constants (e.g., cosines) are converted into integer numerators with floating-point denominators. The denominators are then merged into scaling terms. To control the dynamic range, and thus require fewer bits of precision per element, right shifts can be used. The effect of the average truncation error is analyzed, and a technique is shown to minimize this average error.
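A minimal sketch of the idea in C, assuming two signed 16-bit samples packed into 32-bit fields of one 64-bit register. A bias keeps each field nonnegative so a negative product cannot borrow from the field to its left, and the bias is subtracted back out on extraction; the constant stands in for an integer numerator of the kind the abstract describes.

    #include <assert.h>
    #include <stdint.h>
    #include <stdio.h>

    #define BIAS 0x8000     /* lifts int16 samples into [0, 0x10000) */

    int main(void)
    {
        int16_t a = -1234, b = 987;
        int32_t c = 12;     /* small integer constant, e.g. a scaled
                               cosine numerator */

        /* Pack: each element gets a 32-bit field, leaving plenty of
           guard bits for the products to grow into. */
        uint64_t packed = ((uint64_t)(uint16_t)(a + BIAS) << 32)
                        |  (uint64_t)(uint16_t)(b + BIAS);

        /* One 64-bit multiply performs both element multiplies. */
        uint64_t r = packed * (uint64_t)c;

        /* Unpack: subtract the scaled bias from each field. */
        int32_t hi = (int32_t)(r >> 32)         - c * BIAS;
        int32_t lo = (int32_t)(r & 0xFFFFFFFFu) - c * BIAS;

        assert(hi == c * a && lo == c * b);
        printf("%d %d\n", hi, lo);   /* prints -14808 11844 */
        return 0;
    }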
