
Publication


Featured research published by Alon Orlitsky.


IEEE Transactions on Information Theory | 1998

Zero-error information theory

János Körner; Alon Orlitsky

The problem of error-free transmission capacity of a noisy channel was posed by Shannon in 1956 and remains unsolved. Nevertheless, partial results for this and similar channel and source coding problems have had a considerable impact on information theory, computer science, and mathematics. We review the techniques, results, information measures, and challenges encountered in this ongoing quest.
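
Shannon's own pentagon channel makes the zero-error notion concrete. The sketch below (our illustration, not taken from the paper) uses five symbols arranged in a cycle, where neighboring symbols can be confused at the output: among single letters only two values are mutually distinguishable, but among length-2 blocks five are, so the zero-error capacity is at least (1/2)·log2(5) ≈ 1.16 bits per symbol.

```python
from itertools import combinations, product

# Shannon's pentagon channel: symbols 0..4, where i and its neighbors
# i +/- 1 (mod 5) can be confused at the receiver.
def confusable(a, b):
    return a == b or (a - b) % 5 in (1, 4)

def distinguishable(u, v):
    # Two codewords survive zero-error decoding iff some coordinate
    # differs unambiguously.
    return not all(confusable(x, y) for x, y in zip(u, v))

# Single letters: at most 2 of the 5 symbols are pairwise distinguishable.
best = max((s for k in range(1, 6) for s in combinations(range(5), k)
            if all(distinguishable((a,), (b,)) for a, b in combinations(s, 2))),
           key=len)
print(len(best))  # 2 -> rate log2(2) = 1 bit per symbol

# Length-2 blocks: {(i, 2i mod 5)} gives 5 pairwise distinguishable
# words, hence rate log2(5)/2 ~ 1.16 bits per symbol.
code = [(i, 2 * i % 5) for i in range(5)]
assert all(distinguishable(u, v) for u, v in combinations(code, 2))
print(len(code))  # 5
```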


IEEE Transactions on Information Theory | 2004

Universal compression of memoryless sources over unknown alphabets

Alon Orlitsky; Narayana P. Santhanam; Junan Zhang

It has long been known that the compression redundancy of independent and identically distributed (i.i.d.) strings increases to infinity as the alphabet size grows. It is also apparent that any string can be described by separately conveying its symbols, and its pattern-the order in which the symbols appear. Concentrating on the latter, we show that the patterns of i.i.d. strings over all, including infinite and even unknown, alphabets, can be compressed with diminishing redundancy, both in block and sequentially, and that the compression can be performed in linear time. To establish these results, we show that the number of patterns is the Bell number, that the number of patterns with a given number of symbols is the Stirling number of the second kind, and that the redundancy of patterns can be bounded using results of Hardy and Ramanujan on the number of integer partitions. The results also imply an asymptotically optimal solution for the Good-Turing probability-estimation problem.
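
To make the notion of a pattern concrete, here is a small Python sketch (ours): it computes the pattern of a string and verifies, for short lengths, that the number of distinct patterns of length n is the nth Bell number, as the paper states.

```python
from itertools import product

def pattern(s):
    # Replace each symbol by the order of its first appearance:
    # "abracadabra" -> (1,2,3,1,4,1,5,1,2,3,1).
    first = {}
    return tuple(first.setdefault(c, len(first) + 1) for c in s)

def bell(n):
    # n-th Bell number via the Bell triangle.
    row = [1]
    for _ in range(n):
        new = [row[-1]]
        for x in row:
            new.append(new[-1] + x)
        row = new
    return row[0]

# The number of distinct patterns of length n equals B_n; an alphabet
# of n symbols suffices, since a length-n pattern uses at most n of them.
for n in range(1, 6):
    pats = {pattern(w) for w in product(range(n), repeat=n)}
    print(n, len(pats), bell(n))  # 1 1, 2 2, 5 5, 15 15, 52 52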


IEEE Transactions on Information Theory | 1990

Worst-case interactive communication. I. Two messages are almost optimal

Alon Orlitsky

The reduction in communication achievable by interaction is investigated. The model assumes two communicators: an informant having a random variable X, and a recipient having a possibly dependent random variable Y. Both communicators want the recipient to learn X with no probability of error, whereas the informant may or may not learn Y. To that end, they alternate in transmitting messages comprising finite sequences of bits. Messages are transmitted over an error-free channel and are determined by an agreed-upon, deterministic protocol for (X,Y) (i.e. a protocol for transmitting X to a person who knows Y). A two-message protocol is described, and its worst-case performance is investigated.
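
A standard toy instance of this model (our construction here, not the paper's general protocol) shows why even one round of feedback helps: team names are n-bit strings, the informant knows the winner X of a match, and the recipient already knows the two teams that played. One-way, the informant must identify X among all 2**n teams, costing n bits; with two messages, the recipient points at a bit position where its two candidates differ and the informant answers with X's bit there, for about log2(n) + 1 bits.

```python
from math import ceil, log2

def two_message_protocol(x, pair):
    # Recipient's message: a position where its two candidates differ.
    a, b = pair
    i = next(k for k in range(len(a)) if a[k] != b[k])
    reply = x[i]                       # informant's one-bit reply
    return a if a[i] == reply else b   # error-free decoding

x, pair = "1011", ("1011", "1001")     # recipient knows x is one of the pair
assert two_message_protocol(x, pair) == x
n = len(x)
print(n, ceil(log2(n)) + 1)  # 4 bits one-way vs. 3 bits with interaction
```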


International Symposium on Information Theory | 2002

Stopping sets and the girth of Tanner graphs

Alon Orlitsky; R. Urbanke; Krishnamurthy Viswanathan; Junan Zhang

Recent work has related the error probability of iterative decoding over erasure channels to the presence of stopping sets in the Tanner graph of the code used. In particular, it was shown that the smallest number of erasures that iterative decoding cannot correct is the size of the graph's smallest stopping set. Relating stopping sets and girth, we consider the size σ(d,g) of the smallest stopping set in any bipartite graph of girth g and left degree d. For g ≤ 8 and any d, we determine σ(d,g) exactly. For larger g we bound σ(d,g) in terms of d, showing that for fixed d, σ(d,g) grows exponentially with g. Since constructions of high-girth graphs are known, one can therefore design codes with good erasure-correction guarantees under iterative decoding.
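
As a concrete illustration (ours, not the paper's construction): a stopping set is a nonempty set S of variable nodes such that no check node has exactly one neighbor in S, which is precisely the condition under which iterative erasure decoding stalls. A brute-force search over a small Tanner graph finds the smallest one.

```python
from itertools import combinations

def smallest_stopping_set(H):
    # Brute force over variable-node subsets S: S is a stopping set iff
    # no check node (row of H) has exactly one neighbor in S, so the
    # peeling decoder can never resolve a symbol of S.
    n = len(H[0])
    for k in range(1, n + 1):
        for S in combinations(range(n), k):
            if all(sum(row[j] for j in S) != 1 for row in H):
                return set(S)

# Parity-check matrix of the (7,4) Hamming code; the smallest stopping
# set has size 3, which for this H equals the minimum distance.
H = [[1, 1, 0, 1, 1, 0, 0],
     [1, 0, 1, 1, 0, 1, 0],
     [0, 1, 1, 1, 0, 0, 1]]
print(smallest_stopping_set(H))  # {0, 1, 2}
```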


Foundations of Computer Science | 1995

Coding for computing

Alon Orlitsky; James R. Roche

A sender communicates with a receiver who wishes to reliably evaluate a function of their combined data. We show that if only the sender can transmit, the number of bits required is a conditional entropy of a naturally defined graph. We also determine the number of bits needed when the communicators exchange two messages.
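
The graph in question is the characteristic graph of the function: values of X are its vertices, and two values are connected whenever some y can accompany both and the function values then differ, since such pairs must be distinguished by any reliable one-way code. A small sketch (our hypothetical toy source, not an example from the paper) builds it:

```python
from itertools import combinations

def characteristic_graph(xs, ys, support, f):
    # Connect x1, x2 when some y can occur with both and the function
    # values differ; only such pairs need distinguishing.
    return {(x1, x2) for x1, x2 in combinations(xs, 2)
            if any((x1, y) in support and (x2, y) in support
                   and f(x1, y) != f(x2, y) for y in ys)}

# Hypothetical toy source: X, Y in {0,1,2} with X != Y, and the
# receiver wants f(X, Y) = [X > Y].
xs = ys = range(3)
support = {(x, y) for x in xs for y in ys if x != y}
print(characteristic_graph(xs, ys, support, lambda x, y: x > y))
# {(0, 2)}: only the values 0 and 2 ever need distinguishing, so far
# fewer bits are required than for sending X itself.
```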


Archive | 1994

Theoretical Advances in Neural Computation and Learning

Vwani P. Roychowdhury; Kai-Yeung Siu; Alon Orlitsky

Foreword (B. Widrow). Foreword (D.E. Rumelhart). Preface.
Part I: Computational Complexity of Neural Networks.
1. Neural Models and Spectral Methods (V. Roychowdhury, Kai-Yeung Siu, A. Orlitsky).
2. Depth-Efficient Threshold Circuits for Arithmetic Functions (T. Hofmeister).
3. Communication Complexity and Lower Bounds for Threshold Circuits (M. Goldmann).
4. A Comparison of the Computational Power of Sigmoid and Boolean Threshold Circuits (W. Maass, G. Schnitger, E.D. Sontag).
5. Computing on Analog Neural Nets with Arbitrary Real Weights (W. Maass).
6. Connectivity versus Capacity in the Hebb Rule (S.S. Venkatesh).
Part II: Learning and Neural Networks.
7. Computational Learning Theory and Neural Networks: a Survey of Selected Topics (G. Turan).
8. Perspectives of Current Research about the Complexity of Learning on Neural Nets (W. Maass).
9. Learning an Intersection of K Halfspaces over a Uniform Distribution (A.L. Blum, R. Kannan).
10. On the Intractability of Loading Neural Networks (B. DasGupta, H.T. Siegelmann, E. Sontag).
11. Learning Boolean Functions via the Fourier Transform (Y. Mansour).
12. LMS and Backpropagation are Minimax Filters (B. Hassibi, A.H. Sayed, T. Kailath).
13. Supervised Learning: Can it Escape its Local Minimum? (P.J. Werbos).
Index.


IEEE Transactions on Information Theory | 1993

Privacy, additional information and communication

Reuven Bar-Yehuda; Benny Chor; Eyal Kushilevitz; Alon Orlitsky

Two parties, each holding one input of a two-variable function, communicate in order to determine the value of the function. Each party wants to expose as little of its input as possible to the other party. The authors prove tight bounds on the minimum amount of information about the individual inputs that must be revealed in the computation of most functions and of some specific ones. They also show that a computation that reveals little information about the individual inputs may require many more message exchanges than a more revealing computation.


Foundations of Computer Science | 2003

Always Good Turing: asymptotically optimal probability estimation

Alon Orlitsky; Narayana P. Santhanam; Junan Zhang

While deciphering the German Enigma code during World War II, I.J. Good and A.M. Turing considered the problem of estimating a probability distribution from a sample of data. They derived a surprising and unintuitive formula that has since been used in a variety of applications and studied by a number of researchers. Borrowing an information-theoretic and machine-learning framework, we define the attenuation of a probability estimator as the largest possible ratio between the per-symbol probability assigned to an arbitrarily long sequence by any distribution, and the corresponding probability assigned by the estimator. We show that some common estimators have infinite attenuation and that the attenuation of the Good-Turing estimator is low, yet larger than one. We then derive an estimator whose attenuation is one, namely, as the length of any sequence increases, the per-symbol probability assigned by the estimator is at least the highest possible. Interestingly, some of the proofs use celebrated results by Hardy and Ramanujan on the number of partitions of an integer. To better understand the behavior of the estimator, we study the probability it assigns to several simple sequences. We show that for some sequences this probability agrees with our intuition, while for others it is rather unexpected.
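
For reference, a short sketch (ours) of the classical Good-Turing estimator the paper analyzes: with N_r the number of distinct symbols seen exactly r times in a sample of size n, the unseen symbols get total probability N_1/n, and a symbol seen r times gets an adjusted count of (r+1)·N_{r+1}/N_r. Practical variants smooth the N_r counts; the fallback and renormalization below are our simplifications.

```python
from collections import Counter

def good_turing(sample):
    # N_r = number of distinct symbols seen exactly r times.
    n = len(sample)
    counts = Counter(sample)
    nr = Counter(counts.values())
    est = {}
    for sym, r in counts.items():
        if nr.get(r + 1, 0):
            # Adjusted count (r + 1) * N_{r+1} / N_r, divided by n.
            est[sym] = (r + 1) * nr[r + 1] / nr[r] / n
        else:
            # Raw formula degenerates when N_{r+1} = 0; fall back to r/n.
            est[sym] = r / n
    unseen = nr.get(1, 0) / n                 # total mass of unseen symbols
    scale = (1 - unseen) / sum(est.values())  # renormalize the seen mass
    return {s: p * scale for s, p in est.items()}, unseen

est, unseen = good_turing("abracadabra")
print(unseen)  # 2/11: 'c' and 'd' each appeared once, so N_1 = 2
print(est)
```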


Journal of Statistical Physics | 1990

Monte Carlo generation of self-avoiding walks with fixed endpoints and fixed length

Neal Madras; Alon Orlitsky; L. A. Shepp

We propose a new class of dynamic Monte Carlo algorithms for generating self-avoiding walks uniformly from the ensemble with fixed endpoints and fixed length in any dimension, and prove that these algorithms are ergodic in all cases. We also prove the ergodicity of a variant of the pivot algorithm.
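
For flavor, a minimal sketch (ours) of the free-endpoint pivot algorithm on the square lattice: pick a pivot site, apply a random lattice symmetry to the tail of the walk, and accept the move only if the result is still self-avoiding. The fixed-endpoint, fixed-length algorithms the paper proposes use different moves; this only illustrates the pivot idea the abstract mentions.

```python
import random

# The eight symmetries of the square lattice (rotations and reflections).
SYMMETRIES = [
    lambda x, y: (x, y),   lambda x, y: (-x, y),  lambda x, y: (x, -y),
    lambda x, y: (-x, -y), lambda x, y: (y, x),   lambda x, y: (-y, x),
    lambda x, y: (y, -x),  lambda x, y: (-y, -x),
]

def pivot_step(walk):
    # Pick a pivot site, transform the tail about it, and keep the move
    # only if the walk remains self-avoiding.
    k = random.randrange(1, len(walk) - 1)
    g = random.choice(SYMMETRIES)
    px, py = walk[k]
    new = walk[:k + 1] + [
        (px + gx, py + gy)
        for gx, gy in (g(x - px, y - py) for x, y in walk[k + 1:])
    ]
    return new if len(set(new)) == len(new) else walk

walk = [(i, 0) for i in range(20)]  # initial straight rod
for _ in range(1000):
    walk = pivot_step(walk)
print(walk[0], walk[-1])  # endpoints wander; the length is preserved
```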


SIAM Journal on Discrete Mathematics | 1993

Interactive communication of balanced distributions and of correlated files

Alon Orlitsky

Collaboration


Dive into Alon Orlitsky's collaborations.

Top Co-Authors

Jayadev Acharya, Massachusetts Institute of Technology
Narayana P. Santhanam, University of Hawaii at Manoa
Hirakendu Das, University of California
Shengjun Pan, University of California
Nikola Jevtic, University of California