
Publications


Featured research published by Amir Said.


IEEE Transactions on Circuits and Systems for Video Technology | 1996

A new, fast, and efficient image codec based on set partitioning in hierarchical trees

Amir Said; William A. Pearlman

Embedded zerotree wavelet (EZW) coding, introduced by Shapiro (see IEEE Trans. Signal Processing, vol.41, no.12, p.3445, 1993), is a very effective and computationally simple technique for image compression. We offer an alternative explanation of the principles of its operation, so that the reasons for its excellent performance can be better understood. These principles are partial ordering by magnitude with a set partitioning sorting algorithm, ordered bit plane transmission, and exploitation of self-similarity across different scales of an image wavelet transform. Moreover, we present a new and different implementation based on set partitioning in hierarchical trees (SPIHT), which provides even better performance than our previously reported extension of EZW that surpassed the performance of the original EZW. The image coding results, calculated from actual file sizes and images reconstructed by the decoding algorithm, are either comparable to or surpass previous results obtained through much more sophisticated and computationally complex methods. In addition, the new coding and decoding procedures are extremely fast, and they can be made even faster, with only small loss in performance, by omitting entropy coding of the bit stream by the arithmetic code.
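
As a rough illustration of the set-partitioning principle (a minimal sketch under our own naming, not the paper's algorithm: SPIHT additionally keeps lists of significant pixels and insignificant sets, tests whole spatial-orientation trees before splitting them, and interleaves sign and refinement bits), the magnitude test against thresholds that are powers of two could look like this in Python:

    def significant(coeffs, index_set, n):
        # A set is significant at bit plane n if it contains a
        # coefficient with magnitude >= 2**n; one emitted bit
        # answers the question for the whole set at once.
        return int(max(abs(coeffs[i]) for i in index_set) >= (1 << n))

    def sorting_passes(coeffs):
        # Ordered bit-plane transmission: scan bit planes from the
        # most significant downward, so the largest coefficients are
        # located first and the bit stream is embedded (decodable at
        # any truncation point). Here every set is a single index.
        n_max = max(abs(c) for c in coeffs).bit_length() - 1
        bits = []
        for n in range(n_max, -1, -1):
            for i in range(len(coeffs)):
                bits.append(significant(coeffs, [i], n))
        return bits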


IEEE Transactions on Image Processing | 1996

An image multiresolution representation for lossless and lossy compression

Amir Said; William A. Pearlman

We propose a new image multiresolution transform that is suited for both lossless (reversible) and lossy compression. The new transformation is similar to the subband decomposition, but can be computed with only integer addition and bit-shift operations. During its calculation, the number of bits required to represent the transformed image is kept small through careful scaling and truncations. Numerical results show that the entropy obtained with the new transform is smaller than that obtained with predictive coding of similar complexity. In addition, we propose entropy-coding methods that exploit the multiresolution structure, and can efficiently compress the transformed image for progressive transmission (up to exact recovery). The lossless compression ratios are among the best in the literature, and simultaneously the rate versus distortion performance is comparable to that of the most efficient lossy compression methods.
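
A minimal sketch of the kind of integer "average/difference" step the abstract alludes to (illustrative only: the function names are ours, and the paper's transform also includes careful scaling and a multiresolution pyramid built by repeating such steps):

    def forward_pair(a, b):
        # Truncated mean and difference, using only an integer add
        # and a bit shift. The bit lost by the truncation is not
        # really lost: its parity is carried by the difference.
        low = (a + b) >> 1
        high = a - b
        return low, high

    def inverse_pair(low, high):
        # Exact inverse: (high + 1) >> 1 restores the part of `a`
        # that the truncated mean discarded, so the pair is
        # perfectly reversible and suits lossless compression.
        a = low + ((high + 1) >> 1)
        b = a - high
        return a, b

    assert inverse_pair(*forward_pair(7, -2)) == (7, -2)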


IEEE Transactions on Circuits and Systems for Video Technology | 2004

Efficient, low-complexity image coding with a set-partitioning embedded block coder

William A. Pearlman; Asad Islam; Nithin Nagaraj; Amir Said

We propose an embedded, block-based, image wavelet transform coding algorithm of low complexity. It uses a recursive set-partitioning procedure to sort subsets of wavelet coefficients by maximum magnitude with respect to thresholds that are integer powers of two. It exploits two fundamental characteristics of an image transform: the well-defined hierarchical structure, and energy clustering in frequency and in space. The two partition strategies allow for versatile and efficient coding of several image transform structures, including dyadic, blocks inside subbands, wavelet packets, and discrete cosine transform (DCT). We describe the use of this coding algorithm in several implementations, including reversible (lossless) coding and its adaptation for color images, and show extensive comparisons with other state-of-the-art coders, such as set partitioning in hierarchical trees (SPIHT) and JPEG2000. We conclude that this algorithm, in addition to being very flexible, retains all the desirable features of these algorithms and is highly competitive with them in compression efficiency.
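
To make the recursive set-partitioning idea concrete, here is a hedged sketch of a quadtree significance pass over one square block at one threshold (our simplification: the actual coder also codes signs, runs refinement passes, and uses the paper's two specific partition strategies rather than plain quadrisection):

    def encode_set(coeffs, top, left, size, n, bits):
        # Test a whole size-by-size set against the threshold 2**n.
        # Only sets that prove significant are split into their four
        # quadrants, so a large all-insignificant region costs one bit.
        sig = any(abs(coeffs[r][c]) >= (1 << n)
                  for r in range(top, top + size)
                  for c in range(left, left + size))
        bits.append(int(sig))
        if sig and size > 1:
            half = size // 2
            for dr in (0, half):
                for dc in (0, half):
                    encode_set(coeffs, top + dr, left + dc, half, n, bits)
        return bits

With coeffs a square 2D list whose side is a power of two, encode_set(coeffs, 0, 0, len(coeffs), n, []) emits the significance map for bit plane n.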


International Symposium on Circuits and Systems | 1993

Image compression using the spatial-orientation tree

Amir Said; William A. Pearlman

The zero-tree method for image compression, proposed by J. Shapiro (1992), is studied. The method is presented in a more general perspective, so that its characteristics can be better understood. From this analysis, an improved method is proposed, and it is shown that the new method can increase the PSNR by up to 1.3 dB over the original method.


Visual Communications and Image Processing | 1993

Reversible image compression via multiresolution representation and predictive coding

Amir Said; William A. Pearlman

In this paper, a new image transformation suited for reversible (lossless) image compression is presented. It uses a simple pyramid multiresolution scheme which is enhanced via predictive coding. The new transformation is similar to the subband decomposition, but it uses only integer operations. The number of bits required to represent the transformed image is kept small through careful scaling and truncations. The lossless compression rates are smaller than those obtained with predictive coding of equivalent complexity. It is also shown that the new transform can be effectively used, with the same coding algorithm, for both lossless and lossy compression. When used for lossy compression, its rate-distortion function is comparable to that of other efficient lossy compression methods.


International Conference on Image Processing | 2005

Measuring the strength of partial encryption schemes

Amir Said

Partial encryption (PE) of compressed multimedia can greatly reduce computational complexity by encrypting only a fraction of the data bits. It can also easily provide users with low-quality versions while keeping the high-quality version inaccessible to unauthorized users. However, it is necessary to realistically evaluate its security strength. Some of the cryptanalysis done for these techniques ignored important characteristics of the multimedia files and used overly optimistic assumptions. We demonstrate potential weaknesses of such techniques by studying attacks that exploit the information provided by non-encrypted bits, and the availability of side information (e.g., from analog signals). We show that a more useful measure of encryption strength is the complexity required to reduce distortion, rather than to recover the encryption key. We consider attacks on PE schemes that avoid error propagation (standard-compliant PE), and on PE schemes that try to exploit that property for security. In both cases we show that attacks requiring complexity much lower than exhaustive enumeration of the encrypted/key bits can successfully yield good-quality content. Experimental results are shown for images, but the conclusions can be extended to partial encryption of video and other types of media.
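
One hedged way to formalize the measure this abstract argues for (our notation, not the paper's): rate a scheme not by the work needed to recover the key, but by the work an attacker needs to push reconstruction distortion below a target level D,

    S(D) = \min \{\, W(A) : d(x, \hat{x}_A) \le D \,\},

where A ranges over attacks, W(A) is the attack's computational cost, and \hat{x}_A is the content reconstructed from the non-encrypted bits and any side information. Under this measure a scheme can be weak at coarse quality levels even while key recovery remains infeasible.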


Visual Communications and Image Processing | 1997

Low-complexity waveform coding via alphabet and sample-set partitioning

Amir Said; William A. Pearlman

We propose a new low-complexity entropy-coding method for coding waveform signals. It is based on the combination of two schemes: (1) an alphabet partitioning method that reduces the complexity of the entropy-coding process; (2) a new recursive set-partitioning entropy-coding process that achieves rates smaller than the first-order entropy even with fast adaptive Huffman codecs. Numerical results from its application to lossy and lossless image compression show the efficacy of the new method, which is comparable to the best known methods.
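
A minimal sketch of the alphabet-partitioning idea (using one common grouping, by bit length; the paper's actual partition may differ): the entropy coder sees only a small alphabet of set indices, while the position inside a set is sent as raw bits.

    def partition_symbol(value):
        # Split a nonnegative integer into (set index, offset): the
        # set index (here, the bit length) goes to the entropy coder;
        # the offset within the set is sent uncoded. Symbols inside a
        # set have nearly equal probability, so skipping their entropy
        # coding costs little while greatly shrinking the coded alphabet.
        set_index = value.bit_length()          # 0 only for value == 0
        n_offset_bits = max(set_index - 1, 0)
        offset = value - (1 << n_offset_bits) if value > 0 else 0
        return set_index, offset, n_offset_bits

    def unpartition_symbol(set_index, offset):
        # Inverse mapping: rebuild the integer from its set and offset.
        return 0 if set_index == 0 else (1 << (set_index - 1)) + offset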


International Conference on Image Processing | 1999

Simplified segmentation for compound image compression

Amir Said; Alexander I. Drukarev

There are three basic segmentation schemes for compound image compression: object-based, layer-based, and block-based. This paper discusses the relative advantages of each scheme and architecture, and studies the use of fast classification techniques for a segmentation that can be used together with a chosen compression architecture. In particular, we consider classification techniques working on approximate object boundaries, which reduces the localization precision of the segmentation but in exchange allows faster, one-pass segmentation, low memory requirements, and a segmentation map that is better matched to existing compression methods. We show numerical results obtained in a printing application environment, where rigorous standards of visual quality must be satisfied.


International Conference on Acoustics, Speech, and Signal Processing | 2000

SBHP - a low complexity wavelet coder

Christos Chrysafis; Amir Said; Alexander I. Drukarev; Asad Islam; William A. Pearlman

We present a low-complexity entropy coder originally designed to work in the JPEG2000 image compression standard framework. The algorithm is meant for embedded and non-embedded coding of wavelet coefficients inside a subband, and is called subband-block hierarchical partitioning (SBHP). It was extensively tested following the standard experimental procedures, and it was shown to yield a significant reduction in the complexity of entropy coding, with small loss in compression performance. Furthermore, it is able to seamlessly support all JPEG2000 features. We present a description of the algorithm, an analysis of its complexity, and a summary of the results obtained after its integration into the verification model (VM).


IEEE Transactions on Information Theory | 1998

Bandwidth-efficient coded modulation with optimized linear partial-response signals

Amir Said; John B. Anderson

We study the design of optimal signals for bandwidth-efficient linear coded modulation. Previous results show that for linear channels with intersymbol interference (ISI), reduced-search decoding algorithms have near-maximum-likelihood error performance, but with much smaller complexity than the Viterbi decoder. Consequently, the controlled ISI introduced by a lowpass filter can be practically used for bandwidth reduction. Such spectrum shaping filters comprise an explicit coded modulation, for which we seek the optimal design. We simultaneously constrain the bandwidth and maximize the minimum Euclidean distance between signals. We show that under quite general assumptions the problem can be formulated as a linear program, and solved with well-known efficient optimization techniques. Numerical results are presented, and the performance of the optimal signals, measured by their combined bandwidth and noise immunity, is analyzed. The new codes are comparable to set-partition trellis codes (TCM). Tests of an M-algorithm decoder confirm this and show that the performance is achieved at a small detection complexity.
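
As a hedged sketch of how such a design can be cast as a linear program (a schematic formulation under our own simplifying assumptions; the paper's exact variables and constraints may differ): take the autocorrelation coefficients r_k of the shaping pulse as the variables. Both the squared Euclidean distance of every error event and the power spectrum are then linear in the r_k, giving

    \begin{aligned}
    \max_{t,\, r_0, \dots, r_L} \quad & t \\
    \text{s.t.} \quad & \textstyle\sum_k a_k(e)\, r_k \ \ge\ t
        \quad \text{for each error event } e \text{ in a candidate list}, \\
    & R(f) \ =\ r_0 + 2 \textstyle\sum_{k \ge 1} r_k \cos(2\pi f k) \ \ge\ 0
        \quad \text{for all } f, \\
    & R(f) \ \le\ \varepsilon \quad \text{for } |f| > W
        \quad \text{(bandwidth constraint)}, \\
    & r_0 \ =\ 1 \quad \text{(energy normalization)},
    \end{aligned}

where the coefficients a_k(e) expand the squared distance of error event e as \sum_k a_k(e) r_k, and the spectral constraints are enforced on a grid of frequencies so the problem stays finite.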

Collaboration


Dive into Amir Said's collaborations.

Top Co-Authors

William A. Pearlman
Rensselaer Polytechnic Institute

Luiz W. P. Biscainho
Federal University of Rio de Janeiro

Sergio L. Netto
Federal University of Rio de Janeiro