
Publication


Featured research published by Michael W. Marcellin.


Journal of Electronic Imaging | 2013

JPEG2000 Image Compression Fundamentals, Standards and Practice

David Taubman; Michael W. Marcellin

This is an essential reference for engineers and researchers in any field of work that involves the use of compressed imagery. Beginning with a thorough and up-to-date overview of the fundamentals of image compression, the authors move on to provide a complete description of the JPEG2000 standard. They then devote space to the implementation and exploitation of that standard. The final section describes other key image compression systems. The book is particularly relevant to those developing software and hardware solutions for multimedia, internet, and medical imaging applications.


IEEE Transactions on Communications | 1990

Trellis coded quantization of memoryless and Gauss-Markov sources

Michael W. Marcellin; Thomas R. Fischer

Trellis-coded quantization (TCQ) is developed and applied to the encoding of memoryless and Gauss-Markov sources. The theoretical justification for the approach is alphabet-constrained rate-distortion theory, which is a dual to the channel-capacity argument that motivates trellis-coded modulation (TCM). The authors adopt the notions of signal-set expansion, set partitioning, and branch labeling from TCM, but modify these techniques to account for the source distribution, yielding TCQ coders of low complexity with excellent mean-squared-error (MSE) performance. For a memoryless uniform source, TCQ provides an MSE within 0.21 dB of the distortion-rate bound at all positive (integral) rates. This performance is superior to that promised by the coefficient of quantization for all of the best lattices known in dimensions 24 or less. For a memoryless Gaussian source, the TCQ performance at rates of 0.5, 1, and 2 b/sample is superior to all previous results the authors found in the literature. The encoding complexity of TCQ is very modest. TCQ is also incorporated into a predictive coding structure for the encoding of Gauss-Markov sources, and simulation results for first-, second-, and third-order Gauss-Markov sources are presented.
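
To make the encoding procedure concrete, here is a minimal Python sketch of TCQ for a memoryless source, assuming a 4-state trellis, a uniform 8-level codebook, and the cyclic D0-D3 subset labeling; the trellis tables are illustrative rather than the exact design from the paper.

import numpy as np

LEVELS = np.linspace(-3.5, 3.5, 8)            # scalar reproduction alphabet
SUBSET = np.arange(8) % 4                      # cyclic subset labels D0 D1 D2 D3 ...
# NEXT_STATE[s][b] and BRANCH_SUBSET[s][b]: from state s on trellis bit b
NEXT_STATE = [[0, 2], [0, 2], [1, 3], [1, 3]]
BRANCH_SUBSET = [[0, 2], [2, 0], [1, 3], [3, 1]]

def tcq_encode(x):
    # Viterbi search for the minimum-MSE path through the trellis.
    cost = np.full(4, np.inf)
    cost[0] = 0.0
    paths = [[] for _ in range(4)]
    for sample in x:
        new_cost = np.full(4, np.inf)
        new_paths = [None] * 4
        for s in range(4):
            if not np.isfinite(cost[s]):
                continue
            for b in (0, 1):
                # best codeword within the subset carried by this branch
                cand = LEVELS[SUBSET == BRANCH_SUBSET[s][b]]
                level = cand[np.argmin((cand - sample) ** 2)]
                c = cost[s] + (level - sample) ** 2
                ns = NEXT_STATE[s][b]
                if c < new_cost[ns]:
                    new_cost[ns] = c
                    new_paths[ns] = paths[s] + [level]
        cost, paths = new_cost, new_paths
    best = int(np.argmin(cost))
    return np.array(paths[best]), cost[best] / len(x)

x = np.random.default_rng(0).normal(size=2000)
xhat, mse = tcq_encode(x)
print(f"MSE at 2 bits/sample: {mse:.4f}")

Each branch carries one trellis bit plus one bit selecting a codeword within the branch subset, for a rate of 2 bits/sample; the Viterbi search finds the minimum-MSE path.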


Data Compression Conference | 2000

An overview of JPEG-2000

Michael W. Marcellin; Michael J. Gormish; Ali Bilgin; Martin Boliek

JPEG-2000 is an emerging standard for still image compression. This paper provides a brief history of the JPEG-2000 standardization process, an overview of the standard, and a description of the capabilities it provides. Part I of the JPEG-2000 standard specifies the minimum compliant decoder, while Part II describes optional, value-added extensions. Although the standard specifies only the decoder and bitstream syntax, in this paper we describe JPEG-2000 from the point of view of encoding. We take this approach because we believe it lends itself to a more compact description that is more easily understood by most readers.
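
As a concrete illustration of the encoder's first stage, the Python sketch below applies a multi-level 2-D wavelet transform with PyWavelets; 'bior4.4' closely corresponds to the CDF 9/7 filter of the standard's irreversible path, while the subsequent quantization and EBCOT entropy coding of code-blocks are omitted.

import numpy as np
import pywt

image = np.random.default_rng(0).random((256, 256))   # stand-in for an image tile
coeffs = pywt.wavedec2(image, wavelet='bior4.4', level=3)
print("LL3 approximation subband:", coeffs[0].shape)
# coeffs[1:] lists detail subbands from coarsest to finest
for level, (lh, hl, hh) in zip(range(3, 0, -1), coeffs[1:]):
    print(f"level {level} details: LH {lh.shape}, HL {hl.shape}, HH {hh.shape}")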


Proceedings of the IEEE | 2002

JPEG2000: standard for interactive imaging

David Taubman; Michael W. Marcellin

JPEG2000 is the latest image compression standard to emerge from the Joint Photographic Experts Group (JPEG) working under the auspices of the International Organization for Standardization (ISO). Beyond offering superior compression performance to JPEG, JPEG2000 provides a whole new way of interacting with compressed imagery in a scalable and interoperable fashion. This paper provides a tutorial-style review of the new standard, explaining the technology on which it is based and drawing comparisons with JPEG and other compression standards. The paper also describes new work exploiting the capabilities of JPEG2000 in client-server systems for efficient interactive browsing of images over the Internet.
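
The scalability described above can be exercised with off-the-shelf tooling; the Python sketch below uses Pillow (which wraps OpenJPEG) to produce a codestream with embedded quality layers and resolution levels, so a server can send a prefix of the codestream and the client still decodes a usable image. The parameter values are illustrative assumptions, not prescriptions from the paper.

from PIL import Image

img = Image.new("L", (512, 512), color=128)   # stand-in for a real image
img.save(
    "scalable.jp2",
    quality_mode="rates",                     # target compression ratios
    quality_layers=[80, 40, 20],              # three embedded quality layers
    num_resolutions=6,                        # dyadic resolution scalability
    progression="LRCP",                       # layer-first progression order
    irreversible=True,                        # CDF 9/7 wavelet (lossy) path
)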


IEEE Transactions on Geoscience and Remote Sensing | 1995

Compression of hyperspectral imagery using the 3-D DCT and hybrid DPCM/DCT

Glen P. Abousleman; Michael W. Marcellin; Bobby R. Hunt

Two systems for compression of hyperspectral imagery, both utilizing trellis-coded quantization (TCQ), are presented. Specifically, the first system uses TCQ to encode transform coefficients resulting from the application of an 8×8×8 discrete cosine transform (DCT). The second system uses DPCM to spectrally decorrelate the data, while a 2-D DCT coding scheme is used for spatial decorrelation. Side information and rate allocation strategies are discussed. Entropy-constrained codebooks are designed using a modified version of the generalized Lloyd algorithm. These entropy-constrained systems achieve compression ratios greater than 70:1, with average PSNRs of the coded hyperspectral sequences exceeding 40.0 dB.
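
A minimal Python sketch of the first system's transform stage, applying an 8×8×8 DCT block-wise to a synthetic cube; the TCQ encoding and rate allocation stages are omitted.

import numpy as np
from scipy.fft import dctn

B = 8
cube = np.random.default_rng(0).random((32, 64, 64))   # (bands, rows, cols)
coeffs = np.empty_like(cube)
for k in range(0, cube.shape[0], B):
    for i in range(0, cube.shape[1], B):
        for j in range(0, cube.shape[2], B):
            coeffs[k:k+B, i:i+B, j:j+B] = dctn(
                cube[k:k+B, i:i+B, j:j+B], norm="ortho")
# Energy compaction into low-frequency coefficients is what makes
# aggressive quantization (e.g., with TCQ) effective.
print("fraction of energy in the DC coefficients:",
      float((coeffs[::B, ::B, ::B] ** 2).sum() / (coeffs ** 2).sum()))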


IEEE Network | 1997

A survey of MAC protocols proposed for wireless ATM

Jaime Gallegos Sánchez; Ralph Martinez; Michael W. Marcellin

Wireless ATM (W-ATM) networks have been studied extensively. The extension of ATM network services to the wireless environment faces many interesting problems: the original ATM network was designed for high-speed, noiseless, reliable channels, and none of these characteristics apply to the wireless channel. One of the most critical aspects of a W-ATM network is the medium access control (MAC) protocol used by mobile terminals (MTs) to request service from the base station (BS), which must take into account the quality of service (QoS) of the specific application. In this article, the authors analyze several MAC protocols, particularly those for TDMA systems, and discuss their advantages and disadvantages.
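
The request/grant idea common to the surveyed TDMA protocols can be sketched in a few lines of Python; real protocols add contention phases, polling, and error handling that this illustrative scheduler omits.

from dataclasses import dataclass

@dataclass
class Request:
    terminal: str
    slots_needed: int
    priority: int   # lower = higher QoS priority (e.g., CBR < VBR < ABR)

def allocate_frame(requests, slots_per_frame):
    # BS scheduler: grant uplink slots in priority order until the frame fills.
    grants, remaining = {}, slots_per_frame
    for req in sorted(requests, key=lambda r: r.priority):
        granted = min(req.slots_needed, remaining)
        if granted:
            grants[req.terminal] = granted
            remaining -= granted
    return grants

reqs = [Request("MT1", 4, 0), Request("MT2", 6, 1), Request("MT3", 5, 2)]
print(allocate_frame(reqs, slots_per_frame=10))   # {'MT1': 4, 'MT2': 6}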


IEEE Transactions on Consumer Electronics | 2003

Compression of electrocardiogram signals using JPEG2000

Ali Bilgin; Michael W. Marcellin; Maria I. Altbach

JPEG2000 is the latest international standard for compression of still images. Although the JPEG2000 codec is designed to compress images, we show that it can also be used to compress other signals. As an example, we describe how the JPEG2000 codec can be used to compress electrocardiogram (ECG) data. Experiments using the MIT-BIH arrhythmia database demonstrate that the proposed approach outperforms many existing ECG compression schemes. The proposed scheme allows the use of existing hardware and software JPEG2000 codecs for ECG compression, and can be especially useful in eliminating the need for specialized hardware development. The desirable characteristics of the JPEG2000 codec, such as precise rate control and progressive quality, are retained in the presented scheme. The ECG application is presented as an example that can be extended to other signals within the consumer electronics realm.
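
A minimal Python sketch of the approach, arranging a 1-D ECG record as a 2-D array (one segment per row) and handing it to an off-the-shelf JPEG2000 codec via Pillow; the signal here is synthetic rather than from the MIT-BIH database, and the paper's preprocessing is simplified to a plain reshape and 8-bit mapping.

import numpy as np
from PIL import Image

fs, seconds, row_len = 360, 60, 512           # MIT-BIH sampling rate, 1 minute
t = np.arange(fs * seconds) / fs
ecg = np.sin(2 * np.pi * 1.2 * t) + 0.1 * np.random.default_rng(0).normal(size=t.size)

n_rows = ecg.size // row_len                  # cut the record into rows
segments = ecg[:n_rows * row_len].reshape(n_rows, row_len)
lo, hi = segments.min(), segments.max()
gray = np.round(255 * (segments - lo) / (hi - lo)).astype(np.uint8)

# Compress the 2-D arrangement with a standard JPEG2000 codec.
Image.fromarray(gray).save("ecg.jp2", quality_mode="rates", quality_layers=[8])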


IEEE Transactions on Information Theory | 1991

Trellis-coded vector quantization

Thomas R. Fischer; Michael W. Marcellin; Min Wang

Trellis-coded quantization is generalized to allow a vector reproduction alphabet. Three encoding structures are described, several encoder design rules are presented, and two design algorithms are developed. It is shown that for a stationary ergodic vector source, if the optimized trellis-coded vector quantization reproduction process is jointly stationary and ergodic with the source, then the quantization noise is zero-mean with a variance equal to the difference between the source variance and the variance of the reproduction sequence. Several examples illustrate the encoder design procedure and performance.
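
Writing q_k = x_k - \hat{x}_k for the quantization noise, the stated property can be expressed compactly:

\[
  \mathbb{E}[q_k] = 0,
  \qquad
  \sigma_q^2 \;=\; \sigma_x^2 - \sigma_{\hat{x}}^2 ,
\]

that is, part of the source variance is captured by the reproduction sequence and the remainder appears as zero-mean quantization noise.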


International Symposium on Information Theory | 1994

On entropy-constrained trellis coded quantization

Michael W. Marcellin

An entropy-constrained trellis coded quantization (TCQ) scheme is presented for encoding memoryless sources. A simple 8-state trellis is used to encode the memoryless Gaussian source with mean-squared-error (MSE) performance within about 0.5 dB of the rate-distortion function. This performance is achieved at all non-negative encoding rates.
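
Entropy-constrained design replaces the pure MSE path metric of TCQ with a Lagrangian cost; the formulation below is the standard one and may differ in detail from the paper's:

\[
  J \;=\; \sum_k \left( (x_k - \hat{x}_k)^2 + \lambda\, \ell(\hat{x}_k) \right),
  \qquad
  \ell(\hat{x}_k) \approx -\log_2 p(\hat{x}_k),
\]

Sweeping the multiplier \lambda traces out operating points along the operational rate-distortion curve.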


IEEE Transactions on Information Theory | 2012

On the Construction of Structured LDPC Codes Free of Small Trapping Sets

Dung Viet Nguyen; Shashi Kiran Chilappagari; Michael W. Marcellin; Bane Vasic

We present a method to construct low-density parity-check (LDPC) codes with low error floors on the binary symmetric channel. Codes are constructed so that their Tanner graphs are free of certain small trapping sets, drawn from the trapping set ontology for the Gallager A/B decoder and selected based on their relative harmfulness for a given decoding algorithm. We evaluate the relative harmfulness of different trapping sets for the sum-product algorithm by using the topological relations among them and by analyzing the decoding failures on one trapping set in the presence or absence of other trapping sets. We apply this method to construct structured LDPC codes. To facilitate the discussion, we give a new description of structured LDPC codes whose parity-check matrices are arrays of permutation matrices. This description uses Latin squares to define a set of permutation matrices that have disjoint support and to derive a simple necessary and sufficient condition for the Tanner graph of a code to be free of four cycles.
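
For the circulant special case, a four-cycle-freeness condition of the kind the paper derives can be checked directly; the Python sketch below uses the classical condition for arrays of circulant permutation matrices (shift exponents p(i, j) of size-L circulants), which is illustrative rather than the paper's exact Latin-square formulation.

from itertools import combinations
import numpy as np

def four_cycle_free(P, L):
    # P[i, j] = shift exponent of the L x L circulant in block (i, j).
    # A four cycle exists iff the alternating sum of shifts around some
    # 2 x 2 block pattern vanishes mod L.
    m, n = P.shape
    for i1, i2 in combinations(range(m), 2):
        for j1, j2 in combinations(range(n), 2):
            if (P[i1, j1] - P[i1, j2] + P[i2, j2] - P[i2, j1]) % L == 0:
                return False
    return True

L = 31
P = np.array([[(i * j) % L for j in range(5)] for i in range(3)])
print(four_cycle_free(P, L))   # True: for prime L these shifts avoid four cycles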

Collaboration


Dive into Michael W. Marcellin's collaboration.

Top Co-Authors

Joan Serra-Sagristà, Autonomous University of Barcelona
Francesc Auli-Llinas, Autonomous University of Barcelona
David Taubman, University of New South Wales
Ian Blanes, Autonomous University of Barcelona