Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where James A. Storer is active.

Publication


Featured researches published by James A. Storer.


Journal of the ACM | 1982

Data compression via textual substitution

James A. Storer; Thomas G. Szymanski

A general model for data compression which includes most data compression systems in the literature as special cases is presented. Macro schemes are based on the principle of finding redundant strings or patterns and replacing them by pointers to a common copy. Different varieties of macro schemes may be defined by specifying the meaning of a pointer; that is, a pointer may indicate a substring of the compressed string, a substring of the original string, or a substring of some other string such as an external dictionary. Other varieties of macro schemes may be defined by restricting the type of overlapping or recursion that may be used. Trade-offs between different varieties of macro schemes, exact lower bounds on the amount of compression obtainable, and the complexity of encoding and decoding are discussed, as well as how the work of other authors relates to this model.
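The pointer mechanism behind macro schemes can be illustrated with a toy "original string" compressor (an illustrative sketch, not the paper's formal model): repeated substrings are replaced by (position, length) pointers into the already-scanned prefix.

```python
def compress(text, min_len=3):
    """Toy macro scheme: replace repeated substrings with
    (position, length) pointers into the already-scanned prefix."""
    out = []
    i = 0
    while i < len(text):
        best_pos, best_len = -1, 0
        # search the prefix text[:i] for the longest match starting at i
        for j in range(i):
            k = 0
            while i + k < len(text) and text[j + k] == text[i + k] and j + k < i:
                k += 1
            if k > best_len:
                best_pos, best_len = j, k
        if best_len >= min_len:
            out.append((best_pos, best_len))   # pointer to a common copy
            i += best_len
        else:
            out.append(text[i])                # literal character
            i += 1
    return out

def decompress(tokens):
    s = []
    for t in tokens:
        if isinstance(t, tuple):
            pos, length = t
            s.extend(s[pos:pos + length])      # copy from the common copy
        else:
            s.append(t)
    return "".join(s)
```

Restricting where a pointer may point (compressed string, original string, or an external dictionary) and whether copies may overlap yields the different varieties of macro schemes the paper analyzes.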


IEEE Signal Processing Letters | 2005

Low-complexity lossless compression of hyperspectral imagery via linear prediction

Francesco Rizzo; Bruno Carpentieri; Giovanni Motta; James A. Storer

We present a new low-complexity algorithm for hyperspectral image compression that uses linear prediction in the spectral domain. We introduce a simple heuristic to estimate the performance of the linear predictor from a pixel spatial context and a context modeling mechanism with one-band look-ahead capability, which improves the overall compression with marginal usage of additional memory. The proposed method is suitable for on-board spacecraft implementation, where limited hardware and low power consumption are key requirements. Finally, we present a least-squares optimized linear prediction technique that achieves better compression on data cubes acquired by the NASA JPL Airborne Visible/Infrared Imaging Spectrometer (AVIRIS).
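The core idea, predicting each band from the previous one with a least-squares gain and coding only the residual, can be sketched as follows (the cube below is synthetic stand-in data; the context modeling and look-ahead mechanisms of the paper are omitted):

```python
import numpy as np

rng = np.random.default_rng(0)
# synthetic cube: 8 spectral bands of 16x16 pixels, strongly
# correlated across bands, as hyperspectral data typically is
base = rng.random((16, 16))
cube = np.stack([base * (1.0 + 0.05 * b) + 0.01 * rng.random((16, 16))
                 for b in range(8)])

residuals = np.empty_like(cube)
residuals[0] = cube[0]                      # first band sent as-is
for b in range(1, cube.shape[0]):
    prev, cur = cube[b - 1].ravel(), cube[b].ravel()
    alpha = prev @ cur / (prev @ prev)      # least-squares prediction gain
    residuals[b] = cube[b] - alpha * cube[b - 1]

# the residual bands carry far less energy than the raw bands,
# so an entropy coder compresses them much better
print(np.abs(cube[1:]).mean(), np.abs(residuals[1:]).mean())
```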


Journal of the ACM | 1985

Parallel algorithms for data compression

M. E. Gonzalez Smith; James A. Storer

Parallel algorithms for data compression by textual substitution that are suitable for VLSI implementation are studied. Both “static” and “dynamic” dictionary schemes are considered.


Networks | 1984

On minimal‐node‐cost planar embeddings

James A. Storer

The problem of embedding an undirected graph on the planar grid is considered. Two common cost measures for this sort of problem are the area consumed by the embedding and the total length of edges in the embedding. This paper considers a third cost measure, called the node cost measure, which is the total number of bends that are present along edges of the embedding. The node cost measure has applications to light or microwave circuits, where a separate device is required each time a corner is turned. In addition, this problem has limited applications to traditional circuit and VLSI layout. Although it is shown that finding a minimal node-cost embedding is in general an NP-complete problem, three good approximation strategies are given along with worst-case bounds on their performance; one of the strategies is shown to be nearly optimal for a large class of graphs.


Journal of the ACM | 1994

Shortest paths in the plane with polygonal obstacles

James A. Storer; John H. Reif

We present a practical algorithm for finding minimum-length paths between points in the Euclidean plane with (not necessarily convex) polygonal obstacles. Prior to this work, the best known algorithm for finding the shortest path between two points in the plane required Ω(n² log n) time and O(n²) space, where n denotes the number of obstacle edges. Assuming that a triangulation or a Voronoi diagram for the obstacle space is provided with the input (if it is not, either one can be precomputed in O(n log n) time), we present an O(kn) time algorithm, where k denotes the number of "islands" (connected components) in the obstacle space. The algorithm uses only O(n) space and, given a source point s, produces an O(n) size data structure such that the distance between s and any other point x in the plane (x is not necessarily an obstacle vertex or a point on an obstacle edge) can be computed in O(1) time. The algorithm can also be used to compute shortest paths for the movement of a disk (so that optimal movement for arbitrary objects can be computed to the accuracy of enclosing them with the smallest possible disk).
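As a point of reference, the classical visibility-graph approach to this problem (quadratic in the number of vertices, and not the paper's O(kn) method) can be sketched as follows; the square obstacle used below is a hypothetical example:

```python
import heapq

def orient(o, u, v):
    # signed area of triangle (o, u, v)
    return (u[0]-o[0])*(v[1]-o[1]) - (u[1]-o[1])*(v[0]-o[0])

def seg_cross(p, q, a, b):
    # True when segment pq properly crosses segment ab
    return (orient(p, q, a)*orient(p, q, b) < 0 and
            orient(a, b, p)*orient(a, b, q) < 0)

def inside(pt, poly):
    # ray-casting point-in-polygon test
    x, y = pt
    c = False
    for i in range(len(poly)):
        (x1, y1), (x2, y2) = poly[i], poly[(i+1) % len(poly)]
        if (y1 > y) != (y2 > y) and x < (x2-x1)*(y-y1)/(y2-y1) + x1:
            c = not c
    return c

def shortest_path(src, dst, obstacles):
    """Visibility-graph + Dijkstra sketch: nodes are src, dst, and
    obstacle vertices; an edge exists when the straight segment
    between two nodes avoids every obstacle."""
    nodes = [src, dst] + [v for poly in obstacles for v in poly]
    edges = [(poly[i], poly[(i+1) % len(poly)])
             for poly in obstacles for i in range(len(poly))]
    def visible(u, v):
        if any(seg_cross(u, v, a, b) for a, b in edges):
            return False
        mid = ((u[0]+v[0])/2, (u[1]+v[1])/2)   # reject chords through an interior
        return not any(inside(mid, poly) for poly in obstacles)
    dist = {n: float("inf") for n in nodes}
    dist[src] = 0.0
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            return d
        if d > dist[u]:
            continue
        for v in nodes:
            if v != u and visible(u, v):
                nd = d + ((u[0]-v[0])**2 + (u[1]-v[1])**2) ** 0.5
                if nd < dist[v]:
                    dist[v] = nd
                    heapq.heappush(pq, (nd, v))
    return float("inf")
```

For a single square obstacle sitting between source and target, the optimal path bends around two of its corners.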


Data Compression Conference | 2003

Compression of hyperspectral imagery

Giovanni Motta; Francesco Rizzo; James A. Storer

High dimensional source vectors, such as those that occur in hyperspectral imagery, are partitioned into a number of subvectors of different length and then each subvector is vector quantized (VQ) individually with an appropriate codebook. A locally adaptive partitioning algorithm is introduced that performs comparably in this application to a more expensive globally optimal one that employs dynamic programming. The VQ indices are entropy coded and used to condition the lossless or near-lossless coding of the residual error. Motivated by the need for maintaining uniform quality across all vector components, a percentage maximum absolute error distortion measure is employed. Experiments on the lossless and near-lossless compression of NASA AVIRIS images are presented. A key advantage of the approach is the use of independent small VQ codebooks that allow fast encoding and decoding.
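A minimal sketch of the subvector-partitioning idea, with a fixed partition and random stand-in codebooks in place of trained ones (the locally adaptive partitioning, residual coding, and entropy coding of the paper are omitted):

```python
import numpy as np

rng = np.random.default_rng(1)
D = 12
bounds = [0, 4, 8, 12]             # fixed partition into three subvectors
# hypothetical codebooks: 8 codewords per subvector (trained offline)
books = [rng.random((8, bounds[i+1] - bounds[i])) for i in range(3)]

def encode(vec):
    idx = []
    for i, book in enumerate(books):
        sub = vec[bounds[i]:bounds[i+1]]
        # index of the nearest codeword for this subvector
        idx.append(int(np.argmin(((book - sub) ** 2).sum(axis=1))))
    return idx                      # small indices: entropy-code these

def decode(idx):
    return np.concatenate([books[i][j] for i, j in enumerate(idx)])

v = rng.random(D)
approx = decode(encode(v))         # residual v - approx is coded losslessly
```

Keeping each codebook small and independent is what makes encoding and decoding fast: every subvector search touches only a handful of short codewords.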


Proceedings of the IEEE | 2000

Lossless image coding via adaptive linear prediction and classification

Giovanni Motta; James A. Storer; Bruno Carpentieri

In recent years, there have been several improvements in lossless image compression. All the recently proposed state-of-the-art lossless image compressors can be roughly divided into two categories: single-pass and double-pass compressors. Linear prediction is rarely used in the first category, while TMW, a state-of-the-art double-pass image compressor, relies on linear prediction for its performance. We propose a single-pass adaptive algorithm that uses context classification and multiple linear predictors, locally optimized on a pixel-by-pixel basis. Locality is also exploited in the entropy coding of the prediction error. The results we obtained on a test set of several standard images are encouraging. On average, our ALPC obtains a compression ratio comparable to CALIC while improving on some images.
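A much-simplified sketch of the per-pixel idea: refit a least-squares predictor over the W, N, and NW neighbors on a causal window around each pixel (context classification and the entropy coder are omitted; the image below is synthetic):

```python
import numpy as np

rng = np.random.default_rng(2)
# synthetic test image: smooth gradient plus mild noise
y, x = np.mgrid[0:32, 0:32]
img = 2.0 * x + 3.0 * y + rng.normal(0, 0.1, (32, 32))

def predict(img, r, c, win=6):
    """Least-squares predictor from W, N, NW neighbors, refit per
    pixel on a causal window (a simplification of the ALPC idea)."""
    A, t = [], []
    for i in range(max(1, r - win), r + 1):
        for j in range(max(1, c - win), min(img.shape[1], c + win)):
            if i == r and j >= c:
                break                      # only causal (already-seen) pixels
            A.append([img[i, j-1], img[i-1, j], img[i-1, j-1]])
            t.append(img[i, j])
    coef, *_ = np.linalg.lstsq(np.array(A), np.array(t), rcond=None)
    return coef @ [img[r, c-1], img[r-1, c], img[r-1, c-1]]

# prediction errors on an interior patch; these residuals, not the
# raw pixels, are what the entropy coder would see
err = [abs(img[r, c] - predict(img, r, c))
       for r in range(8, 12) for c in range(8, 12)]
```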


Data Compression Conference | 1992

Parallel algorithms for optimal compression using dictionaries with the prefix property

S. De Agostino; James A. Storer

The authors study parallel algorithms for lossless data compression via textual substitution. Dynamic dictionary compression is known to be P-complete; however, if the dictionary is given in advance, they show that compression can be efficiently parallelized, and a computational advantage is obtained when the dictionary has the prefix property. The approach can be generalized to the sliding window method, where the dictionary is a window that passes continuously from left to right over the input string.


Proceedings of the IEEE | 1994

Improved techniques for single-pass adaptive vector quantization

Cornel Constantinescu; James A. Storer

Constantinescu and Storer presented a new single-pass adaptive vector quantization algorithm that learns a codebook of variable size and shape entries; they presented experiments on a set of test images showing that with no training or prior knowledge of the data, for a given fidelity, the compression achieved typically equals or exceeds that of the JPEG standard. This paper presents improvements in speed (by employing K-D trees), simplicity of codebook entries, and visual quality with no loss in either the amount of compression or the SNR as compared to the original full-search version.


Journal of Parallel and Distributed Computing | 1991

Processor-efficient hypercube algorithms for the knapsack problem

Jianhua Lin; James A. Storer

A processor-efficient parallel algorithm is presented for the 0/1 knapsack problem. The algorithm can run on any number of processors specified by a user and, more importantly, it has optimal time speedup and processor efficiency over the best known sequential algorithm. Most of the existing parallel algorithms for the problem have high processor complexity and low processor efficiency. These algorithms are useful when the problem size is relatively small. One parallel algorithm has been proposed to run on fewer processors but it has much higher time complexity. The parallel algorithm proposed here is more efficient and practical even for large problem sizes. Experimental results on the Connection Machine show that the algorithm performs very well for a wide range of input sizes.
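The data-parallel structure such algorithms exploit is visible even in the sequential dynamic program: within one item step, every capacity entry of the new row depends only on the previous row, so the whole row can be updated concurrently. This is a generic sketch, not the paper's hypercube algorithm:

```python
def knapsack(weights, values, capacity):
    """Classic 0/1 knapsack DP over capacities 0..capacity."""
    dp = [0] * (capacity + 1)
    for w, v in zip(weights, values):
        # each entry of the new row reads only the previous row,
        # so all capacities can be updated in parallel
        dp = [max(dp[c], dp[c - w] + v if c >= w else 0)
              for c in range(capacity + 1)]
    return dp[capacity]
```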

Collaboration


Dive into James A. Storer's collaboration.

Top Co-Authors


Bruno Carpentieri

Free University of Bozen-Bolzano


Dana Shapira

Ashkelon Academic College
