Publications


Featured research published by András György.


IEEE Transactions on Information Theory | 2000

Optimal entropy-constrained scalar quantization of a uniform source

András György; Tamás Linder

Optimal scalar quantization subject to an entropy constraint is studied for a wide class of difference distortion measures, including rth-power distortions with r > 0. It is proved that if the source is uniformly distributed over an interval, then for any entropy constraint R (in nats), an optimal quantizer has N = ⌈e^R⌉ interval cells such that N − 1 cells have equal length d and one cell has length c ≤ d. The cell lengths are uniquely determined by the requirement that the entropy constraint be satisfied with equality. Based on this result, a parametric representation of the minimum achievable distortion D_h(R) as a function of the entropy constraint R is obtained for a uniform source. The D_h(R) curve turns out to be nonconvex in general. Moreover, for the squared-error distortion it is shown that D_h(R) is a piecewise-concave function, and that a scalar quantizer achieving the lower convex hull of D_h(R) exists only at rates R = log N, where N is a positive integer.
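The cell-length characterization in this abstract is concrete enough to compute. The sketch below is an illustration, not code from the paper: assuming a source uniform on [0, 1], it sets N = ⌈e^R⌉ and bisects for the common cell length d so that the entropy constraint holds with equality (the function name and the unit interval are assumptions).

```python
import math

def entropy_constrained_uniform_quantizer(R):
    """Cell lengths of an optimal entropy-constrained scalar quantizer for a
    source uniform on [0, 1], entropy constraint R in nats: N = ceil(e^R)
    cells, N-1 of equal length d and one of length c <= d, with the
    entropy constraint met with equality (per the abstract's structure)."""
    N = math.ceil(math.exp(R))
    if N == 1:
        return 1, 1.0, 1.0  # a single cell covering the whole interval

    def entropy(d):
        c = 1.0 - (N - 1) * d            # cell lengths sum to 1
        h = -(N - 1) * d * math.log(d)   # uniform source: P(cell) = length
        if c > 0:
            h -= c * math.log(c)
        return h

    # entropy is maximal (ln N) at equal cells and decreases as d grows,
    # so bisect on [1/N, 1/(N-1)) for entropy(d) = R
    lo, hi = 1.0 / N, 1.0 / (N - 1)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if entropy(mid) > R:
            lo = mid
        else:
            hi = mid
    d = 0.5 * (lo + hi)
    c = 1.0 - (N - 1) * d
    return N, d, c
```

For R = 1 nat this yields N = 3 cells, two of equal length d > 1/3 and one shorter cell c, with the cell-length equation and the entropy constraint both satisfied to numerical precision.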


IEEE Transactions on Signal Processing | 2004

Efficient adaptive algorithms and minimax bounds for zero-delay lossy source coding

András György; Tamás Linder; Gábor Lugosi

Zero-delay lossy source coding schemes are considered for both individual sequences and random sources. Performance is measured by the distortion redundancy, defined as the difference between the normalized cumulative mean squared distortion of the scheme and the normalized cumulative distortion of the best scalar quantizer of the same rate that is matched to the entire sequence to be encoded. By improving and generalizing a scheme of Linder and Lugosi, Weissman and Merhav showed the existence of a randomized scheme that, for any bounded individual sequence of length n, achieves a distortion redundancy O(n^(-1/3) log n). However, both schemes have prohibitive complexity (both space and time), which makes practical implementation infeasible. In this paper, we present an algorithm that computes Weissman and Merhav's scheme efficiently. In particular, we introduce an algorithm with encoding complexity O(n^(4/3)) and distortion redundancy O(n^(-1/3) log n). The complexity can be made linear in the sequence length n at the price of increasing the distortion redundancy to O(n^(-1/4) √(log n)). We also consider the problem of minimax distortion redundancy in zero-delay lossy coding of random sources. By introducing a simple scheme and proving a lower bound, we show that for the class of bounded memoryless sources, the minimax expected distortion redundancy is upper and lower bounded by constant multiples of n^(-1/2).


IEEE Transactions on Automatic Control | 2014

Online Markov Decision Processes Under Bandit Feedback

Gergely Neu; András György; Csaba Szepesvári; András Antos

We consider online learning in finite stochastic Markovian environments where in each time step a new reward function is chosen by an oblivious adversary. The goal of the learning agent is to compete with the best stationary policy in hindsight in terms of the total reward received. Specifically, in each time step the agent observes the current state and the reward associated with the last transition; however, the agent does not observe the rewards associated with other state-action pairs. The agent is assumed to know the transition probabilities. The state-of-the-art result for this setting is an algorithm with an expected regret of O(T^(2/3) ln T). In this paper, assuming that stationary policies mix uniformly fast, we show that after T time steps, the expected regret of this algorithm (more precisely, a slightly modified version thereof) is O(T^(1/2) ln T), giving the first rigorously proven, essentially tight regret bound for the problem.


IEEE Transactions on Information Theory | 2008

Tracking the Best Quantizer

András György; Tamás Linder; Gábor Lugosi

An algorithm is presented for online prediction that makes it possible to track the best expert efficiently even when the number of experts is exponentially large, provided that the set of experts has a certain additive structure. As an example, we work out the case where each expert is represented by a path in a directed graph and the loss of each expert is the sum of the weights over the edges in the path. These results are then used to construct universal limited-delay schemes for lossy coding of individual sequences. In particular, we consider the problem of tracking the best scalar quantizer that is adaptively matched to a source sequence with piecewise different behavior. A randomized algorithm is presented which can perform, on any source sequence, asymptotically as well as the best scalar quantization algorithm that is matched to the sequence and is allowed to change the employed quantizer a given number of times. The complexity of the algorithm is quadratic in the sequence length, but at the price of some deterioration in performance, the complexity can be made linear. Analogous results are obtained for sequential multiresolution and multiple description scalar quantization of individual sequences.
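The tracking behavior described in the abstract rests on forecasters that can follow the best expert through switches. A generic fixed-share style sketch of that mechanism is below; it is not the paper's quantizer-specific algorithm, and the interface, eta, and alpha are illustrative assumptions.

```python
import math

def fixed_share(losses, eta=1.0, alpha=0.05):
    """Fixed-share style tracking over a small set of candidate predictors
    (standing in for scalar quantizers): exponential weighting, followed by
    a 'share' step redistributing a fraction alpha of the mass, which lets
    the forecaster switch to a new best predictor when the sequence's
    behavior changes. `losses` is a list of per-round loss vectors;
    returns the index followed at each round."""
    k = len(losses[0])
    w = [1.0 / k] * k
    picks = []
    for round_losses in losses:
        picks.append(max(range(k), key=lambda i: w[i]))
        # exponential update, then the share step
        w = [wi * math.exp(-eta * li) for wi, li in zip(w, round_losses)]
        z = sum(w)
        w = [(1 - alpha) * wi / z + alpha / k for wi in w]
    return picks
```

On a loss sequence where expert 0 is best for the first half and expert 1 for the second, the picks switch to expert 1 within a few rounds of the change.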


IEEE Transactions on Information Theory | 2005

Individual convergence rates in empirical vector quantizer design

András Antos; László Györfi; András György

We consider the rate of convergence of the expected distortion redundancy of empirically optimal vector quantizers. Earlier results show that the mean-squared distortion of an empirically optimal quantizer designed from n independent and identically distributed (i.i.d.) source samples converges uniformly to the optimum at a rate of O(1/√n), and that this rate is sharp in the minimax sense. We prove that for any fixed distribution supported on a given finite set, the convergence rate is O(1/n) (faster than the minimax lower bound), where the corresponding constant depends on the source distribution. For more general source distributions we provide conditions implying a slightly worse O(log n / n) rate of convergence. Although these conditions are in general hard to verify, we show that sources with continuous densities satisfying certain regularity properties (similar to the ones Pollard used to prove a central limit theorem for the code points of empirically optimal quantizers) fall within the scope of this result. In particular, scalar distributions with strictly log-concave densities of bounded support (such as the truncated Gaussian distribution) satisfy these conditions.
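The empirically optimal quantizers analyzed above are, in practice, usually approximated by Lloyd's algorithm (k-means) run on the n training samples. A minimal one-dimensional sketch under squared-error distortion follows; the function names and the random initialization are illustrative assumptions.

```python
import random

def empirical_quantizer(samples, k, iters=100):
    """Approximate an empirically optimal k-level scalar quantizer
    (squared-error distortion) by Lloyd's algorithm on the samples:
    alternate nearest-neighbor partitioning with centroid updates."""
    codebook = sorted(random.sample(samples, k))
    for _ in range(iters):
        # nearest-neighbor partition of the training samples
        cells = [[] for _ in range(k)]
        for x in samples:
            j = min(range(k), key=lambda i: (x - codebook[i]) ** 2)
            cells[j].append(x)
        # centroid (conditional mean) update; keep old point if cell empty
        codebook = [sum(c) / len(c) if c else codebook[i]
                    for i, c in enumerate(cells)]
    return sorted(codebook)

def distortion(samples, codebook):
    """Empirical mean squared distortion of a codebook on the samples."""
    return sum(min((x - c) ** 2 for c in codebook)
               for x in samples) / len(samples)
```

On i.i.d. uniform samples, a 4-level empirically designed codebook reduces the empirical distortion well below that of a single codeword at the midpoint.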


International Conference on Machine Learning | 2013

Online Learning under Delayed Feedback

Pooria Joulani; András György; Csaba Szepesvári

Online learning with delayed feedback has received increasing attention recently due to its many applications in distributed, web-based learning problems. In this paper we provide a systematic study of the topic and analyze the effect of delay on the regret of online learning algorithms. Somewhat surprisingly, it turns out that delay increases the regret multiplicatively in adversarial problems and additively in stochastic problems. We give meta-algorithms that transform, in a black-box fashion, algorithms developed for the non-delayed case into ones that can handle delays in the feedback loop. Modifications of the well-known UCB algorithm are also developed for the bandit problem with delayed feedback, with the advantage over the meta-algorithms that they can be implemented with lower complexity.
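One way to picture the black-box reduction is a pool of copies of the non-delayed base algorithm, where each round is served by a copy that is not currently waiting for feedback. The sketch below is a much-simplified illustration of that idea, not the paper's pseudocode; the act/update interface and the toy greedy learner are assumptions.

```python
class DelayTolerantPool:
    """Black-box reduction sketch for delayed feedback: keep a pool of
    base-algorithm instances; each round is served by an instance with no
    outstanding prediction, and feedback is routed back to the instance
    that made the corresponding prediction."""
    def __init__(self, make_base):
        self.make_base = make_base
        self.free = []       # instances not waiting for feedback
        self.pending = {}    # round id -> instance awaiting feedback

    def act(self, t):
        inst = self.free.pop() if self.free else self.make_base()
        self.pending[t] = inst
        return inst.act()

    def feedback(self, t, reward):
        inst = self.pending.pop(t)
        inst.update(reward)
        self.free.append(inst)

class GreedyArm:
    """Toy non-delayed base algorithm: greedy choice between two arms,
    trying each arm once first (purely illustrative)."""
    def __init__(self):
        self.counts = [0, 0]
        self.sums = [0.0, 0.0]
        self.last = 0

    def act(self):
        means = [s / c if c else float('inf')
                 for s, c in zip(self.sums, self.counts)]
        self.last = max(range(2), key=lambda i: means[i])
        return self.last

    def update(self, reward):
        self.counts[self.last] += 1
        self.sums[self.last] += reward
```

With a fixed feedback delay of 2 rounds, the pool spins up at most three copies, and once their feedback arrives, all copies settle on the better arm.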


IEEE Transactions on Information Theory | 2003

Do optimal entropy-constrained quantizers have a finite or infinite number of codewords?

András György; Tamás Linder; Philip A. Chou; Bradley J. Betts

An entropy-constrained quantizer Q is optimal if it minimizes the expected distortion D(Q) subject to a constraint on the output entropy H(Q). We use the Lagrangian formulation to show the existence and study the structure of optimal entropy-constrained quantizers that achieve a point on the lower convex hull of the operational distortion-rate function D_h(R) = inf_Q {D(Q) : H(Q) ≤ R}. In general, an optimal entropy-constrained quantizer may have a countably infinite number of codewords. Our main results show that if the tail of the source distribution is sufficiently light (resp., heavy) with respect to the distortion measure, the Lagrangian-optimal entropy-constrained quantizer has a finite (resp., infinite) number of codewords. In particular, for the squared-error distortion measure, if the tail of the source distribution is lighter than the tail of a Gaussian distribution, then the Lagrangian-optimal quantizer has only a finite number of codewords, while if the tail is heavier than that of the Gaussian, the Lagrangian-optimal quantizer has an infinite number of codewords.


International Conference on Communications | 2011

Early Identification of Peer-to-Peer Traffic

Béla Hullár; Sándor Laki; András György

To manage and monitor their networks in a proper way, network operators are often interested in identifying the applications generating the traffic traveling through their networks, and doing it as fast (i.e., from as few packets) as possible. State-of-the-art packet-based traffic classification methods are either based on the costly inspection of the payload of several packets of each flow or on basic flow statistics that do not take into account the packet content. In this paper we consider the intermediate approach of analyzing only the first few bytes of the first (or first few) packets of each flow. We propose automatic, machine-learning-based methods achieving remarkably good early classification performance on real traffic traces generated from a diverse set of applications (including several versions of P2P TV and file sharing), while requiring only limited computational and memory resources.
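The feature choice above (the first few bytes of a flow's first packet) can be illustrated with a toy classifier. The paper evaluates proper machine-learning methods; the nearest-centroid stand-in below, its class name, and the example payloads are assumptions made only to show the early-classification setup.

```python
class EarlyClassifier:
    """Toy early traffic classifier: the feature vector is the first k
    payload bytes (zero-padded), and prediction is nearest-centroid over
    per-application centroids. A stand-in for the machine-learning
    classifiers evaluated in the paper."""
    def __init__(self, k=16):
        self.k = k
        self.centroids = {}

    def _features(self, payload):
        head = list(payload[:self.k])          # bytes -> list of ints
        return head + [0] * (self.k - len(head))

    def fit(self, flows):
        """flows: list of (payload_bytes, application_label) pairs."""
        groups = {}
        for payload, label in flows:
            groups.setdefault(label, []).append(self._features(payload))
        for label, feats in groups.items():
            n = len(feats)
            self.centroids[label] = [sum(col) / n for col in zip(*feats)]

    def predict(self, payload):
        f = self._features(payload)
        return min(self.centroids,
                   key=lambda lbl: sum((a - b) ** 2
                                       for a, b in
                                       zip(f, self.centroids[lbl])))
```

Trained on a couple of HTTP request headers and BitTorrent handshakes, the classifier separates the two applications from the first bytes alone.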


Conference on Learning Theory | 2005

Tracking the best of many experts

András György; Tamás Linder; Gábor Lugosi

An algorithm is presented for online prediction that makes it possible to track the best expert efficiently even if the number of experts is exponentially large, provided that the set of experts has a certain structure allowing efficient implementation of the exponentially weighted average predictor. As an example, we work out the case where each expert is represented by a path in a directed graph and the loss of each expert is the sum of the weights over the edges in the path.
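The exponentially weighted average predictor over paths can be exhibited directly on a graph small enough to enumerate. The paper's point is precisely that enumeration is avoidable when losses are additive over edges; the sketch below brute-forces a tiny layered graph only to show the distribution being emulated (the graph and eta are assumptions).

```python
import itertools
import math

# Tiny layered graph: a path picks one parallel edge per layer (illustrative).
EDGES_PER_LAYER = [2, 2]
PATHS = list(itertools.product(*(range(n) for n in EDGES_PER_LAYER)))

def ewa_path_distribution(cum_edge_loss, eta=0.5):
    """Exponentially weighted average over all paths: a path's weight is
    exp(-eta * total cumulative loss of its edges), normalized. Because a
    path's loss is the sum of its edges' losses, this is the distribution
    an efficient algorithm can maintain without enumerating the paths."""
    raw = [math.exp(-eta * sum(cum_edge_loss[layer][e]
                               for layer, e in enumerate(p)))
           for p in PATHS]
    z = sum(raw)
    return [w / z for w in raw]
```

If edge 0 in each layer accumulates loss every round while edge 1 does not, the distribution concentrates on the all-edge-1 path.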


Journal of Artificial Intelligence Research | 2011

Efficient multi-start strategies for local search algorithms

András György; Levente Kocsis

Local search algorithms applied to optimization problems often suffer from getting trapped in a local optimum. The common solution for this deficiency is to restart the algorithm when no progress is observed. Alternatively, one can start multiple instances of a local search algorithm and allocate computational resources (in particular, processing time) to the instances depending on their behavior. Hence, a multi-start strategy has to decide (dynamically) when to allocate additional resources to a particular instance and when to start new instances. In this paper we propose multi-start strategies motivated by work on multi-armed bandit problems and Lipschitz optimization with an unknown constant. The strategies continuously estimate the potential performance of each algorithm instance by assuming that the local search algorithm converges at a known rate up to an unknown constant, and in every phase allocate resources to those instances that could converge to the optimum for a particular range of the constant. Asymptotic bounds are given on the performance of the strategies. In particular, we prove that at most a quadratic increase in the number of times the target function is evaluated is needed to achieve the performance of a local search algorithm started from the attraction region of the optimum. Experiments are provided using SPSA (Simultaneous Perturbation Stochastic Approximation) and k-means as local search algorithms, and the results indicate that the proposed strategies work well in practice and, in all cases studied, need only logarithmically more evaluations of the target function than the theoretically suggested quadratic increase.
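The allocation idea can be caricatured in a few lines: assume each instance approaches its own limit at a known rate and always fund the instance whose optimistic potential is best. The paper's actual strategies additionally handle an unknown rate constant by sweeping a range of constants in phases; everything below (the interface, the fixed constant C, the assumed C/t rate) is an illustrative assumption.

```python
def allocate_steps(instances, step, value, budget, C=1.0):
    """Much-simplified multi-start allocation (minimization): assume
    instance j, after t steps, is within C / t of the limit its local
    search converges to, and always give the next step to the instance
    with the lowest optimistic potential value(j) - C / t."""
    steps = [1] * len(instances)
    for _ in range(budget):
        j = min(range(len(instances)),
                key=lambda i: value(instances[i]) - C / steps[i])
        step(instances[j])
        steps[j] += 1
    return min(instances, key=value)
```

With toy local searches that each halve their gap to a private local optimum, the strategy quickly concentrates its budget on the instance whose basin contains the best optimum.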

Collaboration


Dive into András György's collaborations.

Top Co-Authors

Zoltán Benyó

Budapest University of Technology and Economics


Deniz Gunduz

Imperial College London


Balázs Benyó

Budapest University of Technology and Economics


György Ottucsák

Budapest University of Technology and Economics


Levente Kocsis

Hungarian Academy of Sciences


Péter Szalay

Budapest University of Technology and Economics
