
Publication


Featured research published by Daniel S. Hirschberg.


ACM Computing Surveys | 1987

Data compression

Debra A. Lelewer; Daniel S. Hirschberg

This paper surveys a variety of data compression methods spanning almost 40 years of research, from the work of Shannon, Fano, and Huffman in the late 1940s to a technique developed in 1986. The aim of data compression is to reduce redundancy in stored or communicated data, thus increasing effective data density. Data compression has important application in the areas of file storage and distributed systems. Concepts from information theory as they relate to the goals and evaluation of data compression methods are discussed briefly. A framework for evaluation and comparison of methods is constructed and applied to the algorithms presented. Comparisons of both theoretical and empirical natures are reported, and possibilities for future research are suggested.
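
As a concrete illustration of one of the classical methods the survey covers, here is a minimal sketch of Huffman's construction; the function name and the symbol-to-length return format are choices made here for illustration, not taken from the paper.

```python
import heapq

def huffman_code_lengths(freqs):
    """Return an optimal prefix-code length for each symbol in `freqs`
    (a dict mapping symbol -> weight). Minimal sketch of Huffman's method."""
    if len(freqs) == 1:
        return {next(iter(freqs)): 1}          # degenerate one-symbol alphabet
    # Heap entries: (total weight, tie-breaker, {symbol: depth so far}).
    heap = [(w, i, {s: 0}) for i, (s, w) in enumerate(freqs.items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        w1, _, t1 = heapq.heappop(heap)
        w2, _, t2 = heapq.heappop(heap)
        # Merging the two lightest trees pushes all of their leaves one level deeper.
        merged = {s: d + 1 for s, d in {**t1, **t2}.items()}
        heapq.heappush(heap, (w1 + w2, tie, merged))
        tie += 1
    return heap[0][2]
```

For example, huffman_code_lengths({"a": 5, "b": 2, "c": 1}) yields the lengths {"a": 1, "b": 2, "c": 2}.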


Communications of the ACM | 1975

A linear space algorithm for computing maximal common subsequences

Daniel S. Hirschberg

The problem of finding a longest common subsequence of two strings has been solved in quadratic time and space. An algorithm is presented which will solve this problem in quadratic time and in linear space.
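
The linear-space idea extends to recovering an actual subsequence by divide and conquer; the sketch below follows the approach commonly attributed to this paper (widely known as Hirschberg's algorithm), with function names chosen here for illustration.

```python
def lcs_lengths(a, b):
    """Last row of the LCS length table for a versus b, using O(len(b)) space."""
    prev = [0] * (len(b) + 1)
    for x in a:
        cur = [0]
        for j, y in enumerate(b, 1):
            cur.append(prev[j - 1] + 1 if x == y else max(prev[j], cur[j - 1]))
        prev = cur
    return prev

def hirschberg(a, b):
    """Recover a longest common subsequence of strings a and b in linear space."""
    if not a or not b:
        return ""
    if len(a) == 1:
        return a if a in b else ""
    mid = len(a) // 2
    left = lcs_lengths(a[:mid], b)               # forward pass on the first half
    right = lcs_lengths(a[mid:][::-1], b[::-1])  # backward pass on the second half
    # Split b where the forward and backward scores add up to the maximum.
    k = max(range(len(b) + 1), key=lambda j: left[j] + right[len(b) - j])
    return hirschberg(a[:mid], b[:k]) + hirschberg(a[mid:], b[k:])
```

For example, hirschberg("ABCBDAB", "BDCABA") returns a common subsequence of length 4 while never holding more than two rows of the dynamic-programming table in memory.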


Journal of the ACM | 1976

Bounds on the Complexity of the Longest Common Subsequence Problem

Jeffrey D. Ullman; Alfred V. Aho; Daniel S. Hirschberg

The problem of finding a longest common subsequence of two strings is discussed. This problem arises in data processing applications such as comparing two files and in genetic applications such as studying molecular evolution. The difficulty of computing a longest common subsequence of two strings is examined using the decision tree model of computation, in which vertices represent “equal–unequal” comparisons. It is shown that unless a bound on the total number of distinct symbols is assumed, every solution to the problem can consume an amount of time that is proportional to the product of the lengths of the two strings. A general lower bound as a function of the ratio of alphabet size to string length is derived. The case where comparisons between symbols of the same string are forbidden is also considered and it is shown that this problem is of linear complexity for a two-symbol alphabet and quadratic for an alphabet of three or more symbols.


Communications of the ACM | 1979

Computing connected components on parallel computers

Daniel S. Hirschberg; Ashok K. Chandra; Dilip V. Sarwate

We present a parallel algorithm which uses n² processors to find the connected components of an undirected graph with n vertices in time O(log² n). An O(log² n) time bound also can be achieved using only n⌈n/⌈log₂ n⌉⌉ processors. The algorithm can be used to find the transitive closure of a symmetric Boolean matrix. We assume that the processors have access to a common memory. Simultaneous access to the same location is permitted for fetch instructions but not for store instructions.
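
A sequential sketch of the underlying hook-and-jump idea is given below: in each round every tree root attaches ("hooks") to a smaller-labelled root reachable over one edge, and pointer jumping keeps the trees shallow. It works on a plain edge list and does not model the paper's CREW PRAM procedure or its processor allocation.

```python
def connected_components(n, edges):
    """Component label for each of n vertices; edges is a list of (u, v) pairs.
    Sequential sketch of the hooking + pointer-jumping idea, not the CREW algorithm."""
    D = list(range(n))                  # D[v] is the current parent label of v

    def root(v):
        while D[v] != v:
            D[v] = D[D[v]]              # pointer jumping (path halving)
            v = D[v]
        return v

    while True:
        # Hooking: each root records the smallest smaller root seen across
        # any single edge leaving its tree.
        best = {}
        for u, v in edges:
            ru, rv = root(u), root(v)
            if ru == rv:
                continue
            hi, lo = max(ru, rv), min(ru, rv)
            if lo < best.get(hi, hi):
                best[hi] = lo
        if not best:                    # every component is a single tree
            break
        for r, smaller in best.items():
            D[r] = smaller              # hooks always point to smaller labels
    return [root(v) for v in range(n)]
```

For example, connected_components(5, [(0, 1), (1, 2), (3, 4)]) labels vertices 0-2 with 0 and vertices 3-4 with 3.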


Communications of the ACM | 1980

Decentralized extrema-finding in circular configurations of processors

Daniel S. Hirschberg; James B. Sinclair

This note presents an efficient algorithm, requiring O(n log n) message passes, for finding the largest (or smallest) of a set of n uniquely numbered processors arranged in a circle, in which no central controller exists and the number of processors is not known a priori.
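
The distance-doubling probes behind the O(n log n) bound can be simulated directly; the sketch below plays out the phases on an array of processor numbers and omits the actual message passing, relaying, and reply bookkeeping of the distributed algorithm.

```python
def ring_maximum(ids):
    """Simulate the doubling phases of decentralized extrema-finding on a ring.
    ids[i] is the unique number of the processor at ring position i."""
    n = len(ids)
    active = set(range(n))              # processors still competing
    d = 1                               # probe distance, doubled each phase
    while len(active) > 1:
        survivors = set()
        for i in active:
            reach = min(d, n - 1)       # never probe past the rest of the ring
            neighbours = [ids[(i + off) % n] for off in range(1, reach + 1)]
            neighbours += [ids[(i - off) % n] for off in range(1, reach + 1)]
            if all(ids[i] > x for x in neighbours):
                survivors.add(i)        # i beat everything within distance d
        active = survivors
        d *= 2
    return ids[active.pop()]            # the globally largest number
```

In the real algorithm a candidate keeps competing only while both of its probes come back as winners, which is what limits the total message count to O(n log n); the simulation keeps the same survivors per phase but sends no messages.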


ACM Computing Surveys | 1985

Self-organizing linear search

James H. Hester; Daniel S. Hirschberg

Algorithms that modify the order of linear search lists are surveyed. First the problem, including assumptions and restrictions, is defined. Next a summary of analysis techniques and measurements that apply to these algorithms is given. The main portion of the survey presents algorithms in the literature with absolute analyses when available. The following section gives relative measures that are applied between two or more algorithms. The final section presents open questions.
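
As an example of the simplest rules such surveys analyze, the move-to-front heuristic can be stated in a few lines; the class and method names below are illustrative only.

```python
class MoveToFrontList:
    """Linear-search list that moves each accessed item to the front,
    so frequently requested items migrate toward the head (a sketch)."""
    def __init__(self, items):
        self.items = list(items)

    def access(self, key):
        for i, x in enumerate(self.items):
            if x == key:
                self.items.insert(0, self.items.pop(i))
                return i + 1            # comparisons spent on this access
        return None                     # key not in the list
```

The transpose rule is the obvious variant: swap the accessed element with its immediate predecessor instead of moving it all the way to the front.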


Communications of the ACM | 1978

Fast parallel sorting algorithms

Daniel S. Hirschberg

A parallel bucket-sort algorithm is presented that requires time O(log n) and the use of n processors. The algorithm makes use of a technique that requires more space than the product of processors and time. A realistic model is used in which no memory contention is permitted. A procedure is also presented to sort n numbers in time O(k log n) using n^(1+1/k) processors, for k an arbitrary integer. The model of computation for this procedure permits simultaneous fetches from the same memory location.
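
For reference, the sequential form of bucket sort is shown below; the paper's contribution is carrying out the distribution and collection steps on n processors in O(log n) parallel time, which this sketch does not attempt to model (the key range m is an assumption made here).

```python
def bucket_sort(keys, m):
    """Sort integer keys drawn from range(m) by distributing them into one
    bucket per possible value and concatenating the buckets in order."""
    buckets = [[] for _ in range(m)]
    for k in keys:
        buckets[k].append(k)                            # distribution step
    return [k for bucket in buckets for k in bucket]    # collection step
```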


Journal of the ACM | 1990

A fast algorithm for optimal length-limited Huffman codes

Lawrence L. Larmore; Daniel S. Hirschberg

An O(nL)-time algorithm is introduced for constructing an optimal Huffman code for a weighted alphabet of size n, where each code string must have length no greater than L. The algorithm uses O(n) space.
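
The construction is widely known as package-merge. The sketch below implements the standard textbook formulation of that technique, returning one code length per weight; the representation (each "coin" carries the leaf indices it contains) is a choice made here, so read it as an illustration rather than the paper's exact O(nL)-time, O(n)-space procedure.

```python
def package_merge(weights, L):
    """Code length for each weight in an optimal prefix code with all
    lengths <= L.  Requires 2**L >= len(weights)."""
    n = len(weights)
    if n == 1:
        return [1]                          # degenerate one-symbol alphabet
    assert (1 << L) >= n, "length limit too small for this alphabet"
    # A "coin" is (weight, tuple of leaf indices it contains).
    leaves = sorted((w, (i,)) for i, w in enumerate(weights))
    packages = []                           # packages passed up from the level below
    for _ in range(L - 1):                  # process levels L down to 2
        merged = sorted(leaves + packages)
        packages = [(merged[j][0] + merged[j + 1][0],
                     merged[j][1] + merged[j + 1][1])
                    for j in range(0, len(merged) - 1, 2)]
    # Level 1: keep the 2n - 2 cheapest coins; a leaf's code length is the
    # number of kept coins it occurs in.
    lengths = [0] * n
    for _, members in sorted(leaves + packages)[:2 * n - 2]:
        for i in members:
            lengths[i] += 1
    return lengths
```

For weights [1, 1, 1, 8] and L = 3 this returns [3, 3, 2, 1], matching the unrestricted Huffman code, while L = 2 forces all four lengths to 2.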


Theoretical Computer Science | 1976

Approximate algorithms for some generalized knapsack problems

Ashok K. Chandra; Daniel S. Hirschberg; C. K. Wong

In this paper we construct approximate algorithms for the following problems: the integer multiple-choice knapsack problem, the binary multiple-choice knapsack problem, and the multi-dimensional knapsack problem. The main result can be described as follows: for every ε > 0 one can construct a polynomial-time algorithm for each of the above problems such that the ratio of the value of the objective function obtained by this algorithm to the optimal value is bounded below by 1 − ε.
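
The flavour of such a guarantee is easiest to see on the plain 0/1 knapsack problem, so the sketch below shows the standard scaling-and-rounding scheme for that case only; it is not the paper's construction for the multiple-choice or multi-dimensional variants, and the function name, the assumption that every item fits by itself, and eps > 0 are choices made here.

```python
def knapsack_fptas(values, weights, capacity, eps):
    """Return a set of item indices of weight <= capacity whose total value
    is at least (1 - eps) times the optimum.  Sketch of the classic
    scaling-and-rounding scheme for 0/1 knapsack; assumes eps > 0, positive
    values, and that every single item fits within the capacity."""
    n = len(values)
    K = eps * max(values) / n                # scaling factor
    scaled = [int(v // K) for v in values]   # rounded-down profits

    # DP over total scaled value: min_weight[s] is the lightest subset with
    # scaled value exactly s, and chosen[s] is that subset.
    top = sum(scaled)
    INF = float("inf")
    min_weight = [0.0] + [INF] * top
    chosen = [()] + [None] * top
    for i, (sv, w) in enumerate(zip(scaled, weights)):
        for s in range(top, sv - 1, -1):     # downward: each item used at most once
            if min_weight[s - sv] + w < min_weight[s]:
                min_weight[s] = min_weight[s - sv] + w
                chosen[s] = chosen[s - sv] + (i,)
    best = max(s for s in range(top + 1) if min_weight[s] <= capacity)
    return chosen[best]
```

The running time grows with n times the sum of the scaled profits, i.e. polynomially in n and 1/eps, which is the trade-off that makes the 1 − ε guarantee possible.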


Communications of the ACM | 1990

Efficient decoding of prefix codes

Daniel S. Hirschberg; Debra A. Lelewer

A special case of the data compression problem is presented, in which a powerful encoder transmits a coded file to a decoder that has severely constrained memory. A data structure that achieves minimum storage is presented, and alternative methods that sacrifice a small amount of storage to attain faster decoding are described.
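
One standard way to get compact decoder state is to store the code in canonical form, so that two small arrays (codeword counts and first codewords per length) replace an explicit code tree. The sketch below shows that idea; it is not necessarily the paper's data structure, and the function name and input conventions are assumptions made here.

```python
def canonical_decode(bits, code_lengths):
    """Decode a sequence of 0/1 bits produced by the canonical prefix code
    whose per-symbol lengths are given in code_lengths (symbol -> length).
    Assumes a well-formed bit stream."""
    max_len = max(code_lengths.values())
    count = [0] * (max_len + 1)                 # codewords of each length
    for l in code_lengths.values():
        count[l] += 1
    first = [0] * (max_len + 1)                 # first codeword of each length
    for l in range(2, max_len + 1):
        first[l] = (first[l - 1] + count[l - 1]) << 1
    # Symbols in canonical order (by length, then symbol value).
    symbols = sorted(code_lengths, key=lambda s: (code_lengths[s], s))
    start = [0] * (max_len + 1)                 # index of first symbol of each length
    idx = 0
    for l in range(1, max_len + 1):
        start[l] = idx
        idx += count[l]

    out, code, length = [], 0, 0
    for b in bits:
        code, length = (code << 1) | b, length + 1
        if first[length] <= code < first[length] + count[length]:
            out.append(symbols[start[length] + code - first[length]])
            code, length = 0, 0
    return out
```

With lengths {"a": 1, "b": 2, "c": 2} the canonical codewords are a = 0, b = 10, c = 11, and canonical_decode([1, 0, 0, 1, 1], ...) yields ["b", "a", "c"].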

Collaboration


Dive into Daniel S. Hirschberg's collaborations.

Top Co-Authors

David Eppstein, University of California

Pierre Baldi, University of California