

Publication


Featured research published by Tamás Linder.


IEEE Transactions on Information Theory | 1994

On the asymptotic tightness of the Shannon lower bound

Tamás Linder; Ram Zamir

New results are proved on the convergence of the Shannon (1959) lower bound to the rate-distortion function as the distortion decreases to zero. The key convergence result is proved using a fundamental property of informational divergence. As a corollary, it is shown that the Shannon lower bound is asymptotically tight for norm-based distortions when the source vector has a finite differential entropy and a finite αth moment for some α > 0 with respect to the given norm. Moreover, we prove a theorem of Linkov (1965) on the asymptotic tightness of the Shannon lower bound for general difference distortion measures under more relaxed conditions on the source density. We also show that the Shannon lower bound relative to a stationary source and single-letter difference distortion is asymptotically tight under very weak assumptions on the source distribution.
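
For context, the bound in question has the following standard form for a difference distortion measure d(x, y) = ρ(x − y); the scalar squared-error specialization is the textbook statement and is not reproduced from the paper:

    \[
      R(D) \;\ge\; R_{\mathrm{SLB}}(D) \;=\; h(X) \;-\; \max_{Z:\ \mathbb{E}\rho(Z)\le D} h(Z),
      \qquad
      R(D) \;\ge\; h(X) - \tfrac{1}{2}\log\bigl(2\pi e D\bigr) \ \ \text{for } \rho(z) = z^{2}.
    \]

Asymptotic tightness means that the gap R(D) − R_SLB(D) vanishes as D → 0.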


International Symposium on Information Theory | 1994

Rates of convergence in the source coding theorem, in empirical quantizer design, and in universal lossy source coding

Tamás Linder; Gábor Lugosi; Kenneth Zeger

Rate-of-convergence results are established for vector quantization. Convergence rates are given for an increasing vector dimension and/or an increasing training set size. In particular, the following results are shown for memoryless real-valued sources with bounded support at transmission rate R. (1) If a vector quantizer with fixed dimension k is designed to minimize the empirical mean-squared error (MSE) with respect to m training vectors, then its MSE for the true source converges in expectation and almost surely to the minimum possible MSE as O(√(log m / m)). (2) The MSE of an optimal k-dimensional vector quantizer for the true source converges, as the dimension grows, to the distortion-rate function D(R) as O(√(log k / k)). (3) There exists a fixed-rate universal lossy source coding scheme whose per-letter MSE on n real-valued source samples converges in expectation and almost surely to the distortion-rate function D(R) as O(√(log log n / log n)). (4) Consider a training set of n real-valued source samples blocked into vectors of dimension k, and a k-dimensional vector quantizer designed to minimize the empirical MSE with respect to the m = ⌊n/k⌋ training vectors. Then the per-letter MSE of this quantizer for the true source converges in expectation and almost surely to the distortion-rate function D(R) as O(√(log log n / log n)), if one chooses k = ⌊(1/R)(1 − ε) log n⌋ for any ε ∈ (0, 1).
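
A minimal sketch of the empirical design step that results (1) and (4) refer to, assuming squared-error distortion; the Lloyd/k-means iteration below is a common surrogate for exact empirical MSE minimization and is not prescribed by the paper:

    # Design a k-dimensional quantizer from training vectors by (approximately)
    # minimizing the empirical mean-squared error with the Lloyd/k-means iteration.
    import numpy as np

    def design_quantizer(train, rate, iters=50, seed=0):
        """train: array of shape (m, k); codebook size is 2**(rate*k) points."""
        m, k = train.shape
        rng = np.random.default_rng(seed)
        n_codes = min(m, 2 ** (rate * k))     # sketch assumes enough training vectors
        codebook = train[rng.choice(m, n_codes, replace=False)].copy()
        for _ in range(iters):
            # nearest-neighbor (minimum-MSE) partition of the training set
            d2 = ((train[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
            labels = d2.argmin(axis=1)
            # centroid update of every nonempty cell
            for j in range(n_codes):
                pts = train[labels == j]
                if len(pts):
                    codebook[j] = pts.mean(axis=0)
        return codebook

Result (1) then says that the true-source MSE of the empirically optimal codebook exceeds the best achievable MSE by O(√(log m / m)).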


IEEE Transactions on Pattern Analysis and Machine Intelligence | 1993

Fast nearest-neighbor search in dissimilarity spaces

András Faragó; Tamás Linder; Gábor Lugosi

A fast nearest-neighbor algorithm is presented. It works in general spaces in which the known cell techniques cannot be implemented for various reasons, such as the absence of coordinate structure or high dimensionality. The central idea has already appeared several times in the literature with extensive computer simulation results. An exact probabilistic analysis of this family of algorithms is presented, proving its O(1) asymptotic average complexity measured in the number of dissimilarity calculations.
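
A minimal sketch of the kind of elimination scheme such algorithms rely on, assuming the dissimilarity obeys the triangle inequality so that |d(q, p) − d(x, p)| lower-bounds d(q, x); the paper's setting is more general and this is not its exact algorithm:

    # Nearest-neighbor search using only dissimilarity evaluations: precompute
    # dissimilarities to a few reference points, then skip database points whose
    # lower bound already exceeds the best distance found so far.
    import random

    def build_index(points, dissim, n_refs=8, seed=0):
        random.seed(seed)
        refs = random.sample(points, min(n_refs, len(points)))
        table = [[dissim(x, p) for p in refs] for x in points]
        return refs, table

    def nearest(query, points, dissim, refs, table):
        q_to_refs = [dissim(query, p) for p in refs]
        best, best_d = None, float("inf")
        for x, row in zip(points, table):
            lb = max(abs(qr - xr) for qr, xr in zip(q_to_refs, row))
            if lb >= best_d:
                continue              # cannot beat the current best; no evaluation needed
            d = dissim(query, x)      # full dissimilarity only for surviving candidates
            if d < best_d:
                best, best_d = x, d
        return best, best_d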


IEEE Transactions on Neural Networks | 1998

Radial basis function networks and complexity regularization in function learning

Adam Krzyzak; Tamás Linder

In this paper we apply the method of complexity regularization to derive estimation bounds for nonlinear function estimation using a single-hidden-layer radial basis function network. Our approach differs from previous complexity regularization neural-network function learning schemes in that we operate with random covering numbers and ℓ1 metric entropy, making it possible to consider much broader families of activation functions, namely functions of bounded variation. Some constraints previously imposed on the network parameters are also eliminated this way. The network is trained by means of complexity regularization involving empirical risk minimization. Bounds on the expected risk in terms of the sample size are obtained for a large class of loss functions. Rates of convergence to the optimal loss are also derived.
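
A minimal sketch of the complexity-regularization recipe described above: fit RBF networks of increasing size by empirical risk minimization and keep the size that minimizes empirical risk plus a complexity penalty. The Gaussian basis, the ridge solver, and the penalty c·k·log(n)/n are illustrative stand-ins, not the paper's exact choices:

    import numpy as np

    def rbf_design(X, centers, width):
        # Gaussian radial basis functions evaluated at the rows of X
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        return np.exp(-d2 / (2.0 * width ** 2))

    def fit_rbf(X, y, k, width=1.0, ridge=1e-6, seed=0):
        # empirical risk minimization over the output weights, centers picked from data
        rng = np.random.default_rng(seed)
        centers = X[rng.choice(len(X), k, replace=False)]
        Phi = rbf_design(X, centers, width)
        w = np.linalg.solve(Phi.T @ Phi + ridge * np.eye(k), Phi.T @ y)
        return centers, width, w

    def complexity_regularized_rbf(X, y, sizes=(2, 4, 8, 16, 32), c=1.0):
        n = len(X)
        best = None
        for k in sizes:
            centers, width, w = fit_rbf(X, y, k)
            resid = y - rbf_design(X, centers, width) @ w
            score = (resid ** 2).mean() + c * k * np.log(n) / n   # risk + penalty
            if best is None or score < best[0]:
                best = (score, k, centers, width, w)
        return best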


IEEE Transactions on Neural Networks | 1996

Nonparametric estimation and classification using radial basis function nets and empirical risk minimization

Adam Krzyzak; Tamás Linder; Gábor Lugosi

Convergence properties of radial basis function (RBF) networks are studied for a large class of basis functions, and methods and results related to this topic are reviewed. The network parameters are obtained through empirical risk minimization, and the resulting optimal nets are shown to be consistent in the problem of nonlinear function approximation and in nonparametric classification. For the classification problem, two approaches are considered: selection of the RBF classifier via nonlinear function estimation, and the direct method of minimizing the empirical error probability. The tools used in the analysis include distribution-free nonasymptotic probability inequalities and covering numbers for classes of functions.
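
In symbols (notation mine, not the paper's), the two classification approaches contrasted above are the plug-in rule derived from an RBF regression estimate f_n of P(Y = 1 | X = x) and the direct empirical error minimizer over a class G_k of RBF classifiers:

    \[
      g_n(x) = \mathbf{1}\{ f_n(x) \ge 1/2 \},
      \qquad
      \hat g_n = \arg\min_{g \in \mathcal{G}_k} \frac{1}{n} \sum_{i=1}^{n} \mathbf{1}\{ g(X_i) \ne Y_i \}.
    \]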


IEEE Transactions on Information Theory | 2000

Optimal entropy-constrained scalar quantization of a uniform source

András György; Tamás Linder

Optimal scalar quantization subject to an entropy constraint is studied for a wide class of difference distortion measures, including rth-power distortions with r > 0. It is proved that if the source is uniformly distributed over an interval, then for any entropy constraint R (in nats), an optimal quantizer has N = ⌈e^R⌉ interval cells such that N − 1 cells have equal length d and one cell has length c ≤ d. The cell lengths are uniquely determined by the requirement that the entropy constraint be satisfied with equality. Based on this result, a parametric representation of the minimum achievable distortion D_h(R) as a function of the entropy constraint R is obtained for a uniform source. The D_h(R) curve turns out to be nonconvex in general. Moreover, for the squared-error distortion it is shown that D_h(R) is a piecewise-concave function, and that a scalar quantizer achieving the lower convex hull of D_h(R) exists only at rates R = log N, where N is a positive integer.
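
A small numerical sketch of this structure, assuming a source uniform on [0, 1], squared-error distortion, and midpoint reproduction levels: for a given R (in nats) it solves for the N − 1 equal cells of length d and the single cell of length c = 1 − (N − 1)d that satisfy the entropy constraint with equality, then evaluates the distortion, tracing out the parametric D_h(R) curve.

    import math

    def uniform_ecsq(R, tol=1e-12):
        N = max(1, math.ceil(math.exp(R)))        # N = ceil(e^R) cells
        if N == 1:
            return 1, 1.0, 0.0, 1.0 / 12.0        # a single cell covering [0, 1]
        def entropy(d):                           # cell probabilities equal cell lengths
            c = 1.0 - (N - 1) * d
            h = -(N - 1) * d * math.log(d)
            if c > 0:
                h -= c * math.log(c)
            return h
        lo, hi = 1.0 / N, 1.0 / (N - 1)           # entropy decreases in d on this range
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            if entropy(mid) > R:
                lo = mid
            else:
                hi = mid
        d = 0.5 * (lo + hi)
        c = 1.0 - (N - 1) * d
        D = ((N - 1) * d ** 3 + c ** 3) / 12.0    # squared error with midpoint codewords
        return N, d, c, D

    for R in (0.5, 1.0, 1.5, 2.0):                # sweep R to trace the D_h(R) curve
        print(R, uniform_ecsq(R))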


International Symposium on Information Theory | 1997

The minimax distortion redundancy in empirical quantizer design

Peter L. Bartlett; Tamás Linder; Gábor Lugosi

We obtain minimax lower and upper bounds for the expected distortion redundancy of empirically designed vector quantizers. We show that the mean-squared distortion of a vector quantizer designed from n independent and identically distributed (i.i.d.) data points using any design algorithm is at least Ω(n^{-1/2}) away from the optimal distortion for some distribution on a bounded subset of ℝ^d. Together with existing upper bounds, this result shows that the minimax distortion redundancy for empirical quantizer design, as a function of the size of the training data, is asymptotically on the order of n^{-1/2}. We also derive a new upper bound for the performance of the empirically optimal quantizer.
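
In symbols (notation mine, not the paper's), the quantity being characterized is

    \[
      \inf_{Q_n}\ \sup_{\mu}\ \Bigl( \mathbb{E}\, D(\mu, Q_n) - D^{*}(\mu) \Bigr) \;\asymp\; n^{-1/2},
    \]

where Q_n ranges over quantizers designed from n i.i.d. training points, μ over source distributions on a bounded subset of ℝ^d, D(μ, Q) is the mean-squared distortion of Q under μ, and D*(μ) is the minimum distortion achievable at the given codebook size.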


IEEE Transactions on Information Theory | 2001

A zero-delay sequential scheme for lossy coding of individual sequences

Tamás Linder; Gábor Lugosi

We consider adaptive sequential lossy coding of bounded individual sequences when the performance is measured by the sequentially accumulated mean-squared distortion. The encoder and the decoder are connected via a noiseless channel of capacity R and both are assumed to have zero delay. No probabilistic assumptions are made on how the sequence to be encoded is generated. For any bounded sequence of length n, the distortion redundancy is defined as the normalized cumulative distortion of the sequential scheme minus the normalized cumulative distortion of the best scalar quantizer of rate R which is matched to this particular sequence. We demonstrate the existence of a zero-delay sequential scheme which uses common randomization in the encoder and the decoder such that the normalized maximum distortion redundancy converges to zero at a rate n^{-1/5} log n as the length of the encoded sequence n increases without bound.
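
Written out (notation mine), the distortion redundancy of the scheme on a bounded sequence x_1, …, x_n is

    \[
      \hat R_n \;=\; \frac{1}{n}\sum_{t=1}^{n} (x_t - \hat x_t)^2
        \;-\; \min_{Q \in \mathcal{Q}_R}\ \frac{1}{n}\sum_{t=1}^{n} \bigl(x_t - Q(x_t)\bigr)^2,
    \]

where x̂_t is the zero-delay reproduction and Q_R is the class of rate-R scalar quantizers; the result states that the maximum of E[R̂_n] over bounded sequences is O(n^{-1/5} log n).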


IEEE Transactions on Information Theory | 1994

Asymptotic entropy-constrained performance of tessellating and universal randomized lattice quantization

Tamás Linder; Kenneth Zeger

Two results are given. First, using a result of Csiszár (1973), the asymptotic (i.e., high-resolution/low-distortion) performance for entropy-constrained tessellating vector quantization, heuristically derived by Gersho (1979), is proven for all sources with finite differential entropy. This implies, using Gersho's conjecture and Zador's formula, that tessellating vector quantizers are asymptotically optimal for this broad class of sources, and generalizes a rigorous result of Gish and Pierce (1968) from the scalar to the vector case. Second, the asymptotic performance is established for Zamir and Feder's (1992) randomized lattice quantization. With the only assumption that the source has finite differential entropy, it is proven that the low-distortion performance of the Zamir-Feder universal vector quantizer is asymptotically the same as that of the deterministic lattice quantizer.
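
For orientation, the high-resolution entropy-constrained formula in question is, in the scalar case (the Gish-Pierce form, stated here in nats and not reproduced from the paper),

    \[
      D_h(R) \;\approx\; \tfrac{1}{12}\, e^{\,2\bigl(h(X) - R\bigr)} \qquad (R \to \infty),
    \]

and under Gersho's conjecture the constant 1/12 is replaced in dimension k by the normalized second moment of the optimal tessellating cell.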


IEEE Transactions on Signal Processing | 2004

Efficient adaptive algorithms and minimax bounds for zero-delay lossy source coding

András György; Tamás Linder; Gábor Lugosi

Zero-delay lossy source coding schemes are considered for both individual sequences and random sources. Performance is measured by the distortion redundancy, which is defined as the difference between the normalized cumulative mean-squared distortion of the scheme and the normalized cumulative distortion of the best scalar quantizer of the same rate that is matched to the entire sequence to be encoded. By improving and generalizing a scheme of Linder and Lugosi, Weissman and Merhav showed the existence of a randomized scheme that, for any bounded individual sequence of length n, achieves a distortion redundancy O(n^{-1/3} log n). However, both schemes have prohibitive complexity (both space and time), which makes practical implementation infeasible. In this paper, we present an algorithm that computes Weissman and Merhav's scheme efficiently. In particular, we introduce an algorithm with encoding complexity O(n^{4/3}) and distortion redundancy O(n^{-1/3} log n). The complexity can be made linear in the sequence length n at the price of increasing the distortion redundancy to O(n^{-1/4} √(log n)). We also consider the problem of minimax distortion redundancy in zero-delay lossy coding of random sources. By introducing a simplistic scheme and proving a lower bound, we show that for the class of bounded memoryless sources, the minimax expected distortion redundancy is upper- and lower-bounded by constant multiples of n^{-1/2}.
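
An illustrative sketch of the prediction-with-expert-advice idea underlying such zero-delay schemes: exponentially weighted random selection among a finite reference class of scalar quantizers. The reference class, learning rate, and bookkeeping below are illustrative; the paper's algorithms are considerably more refined (efficient computation, common randomization shared with the decoder):

    import math, random

    def make_uniform_quantizer(levels, lo=0.0, hi=1.0):
        step = (hi - lo) / levels
        def q(x):
            i = min(levels - 1, max(0, int((x - lo) / step)))
            return lo + (i + 0.5) * step       # midpoint reproduction level
        return q

    def zero_delay_encode(sequence, rate_bits=2, n_experts=16, eta=2.0, seed=0):
        rng = random.Random(seed)              # stands in for the common randomness
        experts = [make_uniform_quantizer(2 ** rate_bits, lo=-0.5 + 0.05 * j, hi=1.5 - 0.05 * j)
                   for j in range(n_experts)]
        losses = [0.0] * n_experts
        reproductions = []
        for x in sequence:
            weights = [math.exp(-eta * L) for L in losses]
            total = sum(weights)
            r, acc, pick = rng.random() * total, 0.0, 0
            for j, w in enumerate(weights):    # sample a quantizer with probability ~ weight
                acc += w
                if r <= acc:
                    pick = j
                    break
            reproductions.append(experts[pick](x))
            for j, q in enumerate(experts):    # update cumulative losses of all experts
                losses[j] += (x - q(x)) ** 2
        return reproductions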

Collaboration


Dive into Tamás Linder's collaborations.

Top Co-Authors


Kenneth Zeger

University of California
