Publication


Featured research published by Tor Lattimore.


Algorithmic Learning Theory | 2012

PAC bounds for discounted MDPs

Tor Lattimore; Marcus Hutter

We study upper and lower bounds on the sample-complexity of learning near-optimal behaviour in finite-state discounted Markov Decision Processes (MDPs). We prove a new bound for a modified version of Upper Confidence Reinforcement Learning (UCRL) with only cubic dependence on the horizon. The bound is unimprovable in all parameters except the size of the state/action space, where it depends linearly on the number of non-zero transition probabilities. The lower bound strengthens previous work by being both more general (it applies to all policies) and tighter. The upper and lower bounds match up to logarithmic factors provided the transition matrix is not too dense.
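
For orientation, the shape of the upper bound can be written out as follows. This is a hedged reconstruction from the abstract's own claims (linear dependence on the number of non-zero transitions, cubic dependence on the horizon), not a quotation of the paper's theorem; here T denotes the number of non-zero transition probabilities, epsilon the target accuracy, delta the failure probability, and gamma the discount factor.

```latex
% Hedged sketch of the bound's shape: linear in the number T of
% non-zero transitions, cubic in the effective horizon 1/(1-\gamma).
\tilde{O}\!\left( \frac{T}{\varepsilon^{2}(1-\gamma)^{3}} \log\frac{1}{\delta} \right)
```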


Algorithmic Learning Theory | 2011

Asymptotically optimal agents

Tor Lattimore; Marcus Hutter

Artificial general intelligence aims to create agents capable of learning to solve arbitrary interesting problems. We define two versions of asymptotic optimality and prove that no agent can satisfy the strong version, while in some cases, depending on the discounting, a non-computable weakly asymptotically optimal agent does exist.
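
For intuition, the two notions are usually formalised along the following lines (a hedged reconstruction of the standard definitions, not a quotation from the paper): a policy pi is strongly asymptotically optimal in environment mu if its value converges to the optimal value along the realised history, and weakly asymptotically optimal if the convergence holds only in Cesàro average.

```latex
% Strong asymptotic optimality: the value gap vanishes along the history.
\lim_{t\to\infty}\Bigl(V^{*}_{\mu}(h_{<t}) - V^{\pi}_{\mu}(h_{<t})\Bigr) = 0
% Weak asymptotic optimality: the gap vanishes in Cesaro average.
\lim_{t\to\infty}\frac{1}{t}\sum_{k=1}^{t}\Bigl(V^{*}_{\mu}(h_{<k}) - V^{\pi}_{\mu}(h_{<k})\Bigr) = 0
```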


arXiv: Learning | 2013

No Free Lunch versus Occam’s Razor in Supervised Learning

Tor Lattimore; Marcus Hutter

The No Free Lunch theorems are often used to argue that domain specific knowledge is required to design successful algorithms. We use algorithmic information theory to argue the case for a universal bias allowing an algorithm to succeed in all interesting problem domains. Additionally, we give a new algorithm for off-line classification, inspired by Solomonoff induction, with good performance on all structured problems under reasonable assumptions. This includes a proof of the efficacy of the well-known heuristic of randomly selecting training data in the hope of reducing misclassification rates.


Algorithmic Learning Theory | 2013

Universal Knowledge-Seeking Agents for Stochastic Environments

Laurent Orseau; Tor Lattimore; Marcus Hutter

We define an optimal Bayesian knowledge-seeking agent, KL-KSA, designed for countable hypothesis classes of stochastic environments and whose goal is to gather as much information about the unknown world as possible. Although this agent works for arbitrary countable classes and priors, we focus on the especially interesting case where all stochastic computable environments are considered and the prior is based on Solomonoff’s universal prior. Among other properties, we show that KL-KSA learns the true environment in the sense that it learns to predict the consequences of actions it does not take. We show that it does not consider noise to be information and avoids taking actions leading to inescapable traps. We also present a variety of toy experiments demonstrating that KL-KSA behaves according to expectation.
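
To make the knowledge-seeking objective concrete, here is a minimal Python sketch (an illustration for a finite hypothesis class, not the paper's KL-KSA construction) that scores an action by its one-step expected information gain, i.e. the expected KL divergence from posterior to prior. Note how a pure-noise action scores zero, matching the abstract's point that noise is not information.

```python
import numpy as np

def expected_info_gain(prior, likelihoods):
    """One-step expected information gain of an action.

    prior:       shape (H,)   -- belief over H candidate environments
    likelihoods: shape (H, O) -- P(observation | environment, this action)

    Returns E_o[ KL(posterior_o || prior) ] under the Bayes-mixture
    predictive distribution over observations.
    """
    predictive = prior @ likelihoods                      # shape (O,)
    gain = 0.0
    for o, p_o in enumerate(predictive):
        if p_o == 0.0:
            continue
        posterior = prior * likelihoods[:, o] / p_o       # Bayes update
        mask = posterior > 0
        gain += p_o * np.sum(posterior[mask] * np.log(posterior[mask] / prior[mask]))
    return gain

# Toy usage: two environments, two observations, two candidate actions.
prior = np.array([0.5, 0.5])
informative = np.array([[0.9, 0.1], [0.1, 0.9]])  # outcome reveals the environment
noise = np.array([[0.5, 0.5], [0.5, 0.5]])        # outcome is independent of it
print(expected_info_gain(prior, informative))     # positive gain
print(expected_info_gain(prior, noise))           # 0.0: noise is not information
```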


Algorithmic Learning Theory | 2011

Time consistent discounting

Tor Lattimore; Marcus Hutter

A possibly immortal agent tries to maximise its summed discounted rewards over time, where discounting is used to avoid infinite utilities and encourage the agent to value current rewards more than future ones. Some commonly used discount functions lead to time-inconsistent behavior where the agent changes its plan over time. These inconsistencies can lead to very poor behavior. We generalise the usual discounted utility model to one where the discount function changes with the age of the agent. We then give a simple characterisation of time- (in)consistent discount functions and show the existence of a rational policy for an agent that knows its discount function is time-inconsistent.
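
As a toy illustration of the phenomenon (not an example from the paper), hyperbolic discounting produces exactly this kind of plan reversal, while geometric discounting does not:

```python
# Compare a small reward at delay d with a larger reward at delay d + 5,
# under a time-inconsistent and a time-consistent discount function.
hyperbolic = lambda t: 1.0 / (1.0 + t)   # time-inconsistent
geometric = lambda t: 0.9 ** t           # time-consistent

for d in (0, 20):
    for name, disc in (("hyperbolic", hyperbolic), ("geometric", geometric)):
        small = disc(d) * 10.0
        large = disc(d + 5) * 30.0
        print(f"delay={d:2d} {name:10s} picks {'small' if small > large else 'large'}")

# The hyperbolic agent prefers the small reward when both are imminent (d=0)
# but the large one when both are distant (d=20): its plan changes with age.
# The geometric agent makes the same choice at both delays.
```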


Congress on Evolutionary Computation | 2014

Free Lunch for Optimisation under the Universal Distribution

Tom Everitt; Tor Lattimore; Marcus Hutter

Function optimisation is a major challenge in computer science. The No Free Lunch theorems state that if all functions with the same histogram are assumed to be equally probable then no algorithm outperforms any other in expectation. We argue against the uniform assumption and suggest a universal prior exists for which there is a free lunch, but where no particular class of functions is favoured over another. We also prove upper and lower bounds on the size of the free lunch.


Theoretical Computer Science | 2014

Near-optimal PAC bounds for discounted MDPs

Tor Lattimore; Marcus Hutter

We study upper and lower bounds on the sample-complexity of learning near-optimal behaviour in finite-state discounted Markov Decision Processes (MDPs). We prove a new bound for a modified version of Upper Confidence Reinforcement Learning (UCRL) with only cubic dependence on the horizon. The bound is unimprovable in all parameters except the size of the state/action space, where it depends linearly on the number of non-zero transition probabilities. The lower bound strengthens previous work by being both more general (it applies to all policies) and tighter. The upper and lower bounds match up to logarithmic factors provided the transition matrix is not too dense.


Algorithmic Learning Theory | 2014

On Learning the Optimal Waiting Time

Tor Lattimore; András György; Csaba Szepesvári

Consider the problem of learning how long to wait for a bus before walking, experimenting each day and assuming that the bus arrival times are independent and identically distributed random variables with an unknown distribution. Similar uncertain optimal stopping problems arise when devising power-saving strategies, e.g., learning the optimal disk spin-down time for mobile computers, or speeding up certain types of satisficing search procedures by switching from a potentially fast search method that is unreliable to one that is reliable but slower. Formally, the problem can be described as a repeated game. In each round of the game an agent is waiting for an event to occur. If the event occurs while the agent is waiting, the agent suffers a loss that is the sum of the event’s “arrival time” and some fixed loss. If the agent decides to give up waiting before the event occurs, it suffers a loss that is the sum of the waiting time and some other fixed loss. It is assumed that the arrival times are independent random quantities with the same distribution, which is unknown, while the agent knows the loss associated with each outcome. Two versions of the game are considered. In the full information case the agent observes the arrival times regardless of its actions, while in the partial information case the arrival time is observed only if it does not exceed the waiting time. After some general structural observations about the problem, we present a number of algorithms for both cases that learn the optimal waiting time with nearly matching minimax upper and lower bounds on their regret.
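
For concreteness, here is a minimal plug-in sketch of the full information case (an illustration only; the fixed losses are assumed constants and this is not one of the paper's algorithms): the agent keeps the empirical distribution of observed arrival times and waits up to the threshold that minimises the empirical expected loss.

```python
import random

def expected_loss(threshold, arrivals, loss_event=1.0, loss_giveup=5.0):
    """Empirical expected loss of waiting until `threshold`.

    If the event arrives at time x <= threshold the loss is x + loss_event;
    otherwise the agent gives up and the loss is threshold + loss_giveup.
    The two fixed losses are hypothetical constants for illustration.
    """
    total = 0.0
    for x in arrivals:
        total += x + loss_event if x <= threshold else threshold + loss_giveup
    return total / len(arrivals)

def best_threshold(arrivals):
    # Under the empirical distribution the loss only jumps at observed
    # arrival times, so an optimal threshold lies in this candidate set.
    candidates = [0.0] + sorted(arrivals)
    return min(candidates, key=lambda b: expected_loss(b, arrivals))

# Toy usage: exponentially distributed bus arrival times with mean 5.
random.seed(0)
history = [random.expovariate(0.2) for _ in range(1000)]
print(best_threshold(history))
```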


Algorithmic Learning Theory | 2014

Bayesian Reinforcement Learning with Exploration

Tor Lattimore; Marcus Hutter

We consider a general reinforcement learning problem and show that carefully combining the Bayesian optimal policy and an exploring policy leads to minimax sample-complexity bounds in a very general class of (history-based) environments. We also prove lower bounds and show that the new algorithm displays adaptive behaviour when the environment is easier than worst-case.
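
As a one-step illustration of this combination (a hedged sketch with assumed quantities and an assumed threshold rule, not the paper's construction), the agent can follow an exploring choice while exploration is still informative and the Bayes-optimal choice otherwise:

```python
import numpy as np

def choose_action(posterior, means, info_gains, epsilon=0.1):
    """posterior:  belief over H candidate environments, shape (H,)
    means:      means[h, a] = expected reward of action a in environment h
    info_gains: info_gains[a] = expected information gain of action a
    """
    if np.max(info_gains) > epsilon:
        return int(np.argmax(info_gains))    # explore while it is informative
    bayes_values = posterior @ means         # expected reward under the belief
    return int(np.argmax(bayes_values))      # otherwise act Bayes-optimally

# Toy usage: two environments, two actions; action 1 is the informative one.
posterior = np.array([0.6, 0.4])
means = np.array([[1.0, 0.0], [0.0, 1.0]])
info_gains = np.array([0.02, 0.3])
print(choose_action(posterior, means, info_gains))  # 1: explore first
```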


Algorithmic Learning Theory | 2013

Concentration and Confidence for Discrete Bayesian Sequence Predictors

Tor Lattimore; Marcus Hutter; Peter Sunehag

Bayesian sequence prediction is a simple technique for predicting future symbols sampled from an unknown measure on infinite sequences over a countable alphabet. While strong bounds on the expected cumulative error are known, there are only limited results on the distribution of this error. We prove tight high-probability bounds on the cumulative error, which is measured in terms of the Kullback-Leibler (KL) divergence. We also consider the problem of constructing upper confidence bounds on the KL and Hellinger errors similar to those constructed from Hoeffding-like bounds in the i.i.d. case. The new results are applied to show that Bayesian sequence prediction can be used in the Knows What It Knows (KWIK) framework with bounds that match the state-of-the-art.
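
For orientation, the expectation-level guarantee the abstract alludes to is the classical bound below (a standard result stated here for context, not a result of this paper): if the true measure mu has prior weight w(mu) in the Bayes mixture xi, the expected cumulative KL error is bounded by the log inverse weight.

```latex
% Classical bound on the expected cumulative KL error of the Bayes
% mixture \xi against the true measure \mu with prior weight w(\mu).
\mathbb{E}_{\mu}\!\left[\sum_{t=1}^{\infty}
  \mathrm{KL}\bigl(\mu(\cdot \mid x_{<t}) \,\big\|\, \xi(\cdot \mid x_{<t})\bigr)\right]
\le \ln\frac{1}{w(\mu)}
```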

Collaboration


Dive into Tor Lattimore's collaborations.

Top Co-Authors

Marcus Hutter
Australian National University

Christoph Dann
Carnegie Mellon University

Emma Brunskill
Carnegie Mellon University

Shuai Li
The Chinese University of Hong Kong

Koby Crammer
Technion – Israel Institute of Technology

Jan Leike
Australian National University