Publication


Featured research published by Thomas Dean.


Computational Intelligence | 1989

A Model for Reasoning About Persistence and Causation

Thomas Dean; Keiji Kanazawa

Reasoning about change requires predicting how long a proposition, having become true, will continue to be so. Lacking perfect knowledge, an agent may be constrained to believe that a proposition persists indefinitely simply because there is no way for the agent to infer a contravening proposition with certainty. In this paper, we describe a model of causal reasoning that accounts for knowledge concerning cause‐and‐effect relationships and knowledge concerning the tendency for propositions to persist or not as a function of time passing. Our model has a natural encoding in the form of a network representation for probabilistic models. We consider the computational properties of our model by reviewing recent advances in computing the consequences of models encoded in this network representation. Finally, we discuss how our probabilistic model addresses certain classical problems in temporal reasoning (e. g., the frame and qualification problems).
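
As a rough illustration of the persistence idea described above, here is a minimal Python sketch; the exponential survivor function and all parameter values are illustrative assumptions, not the paper's actual model.

```python
import math

# Illustrative persistence model: probability that a proposition, once made
# true, still holds after dt time units, using an exponential survivor
# function (an assumption for this sketch, not necessarily the paper's model).
def persistence_prob(dt, decay_rate=0.1):
    return math.exp(-decay_rate * dt)

# One projection step: the proposition either persists from the previous
# belief or is re-established by a causing event occurring in the interval.
def project_belief(p_holds_now, dt, p_cause_event, p_effect_given_cause=0.9,
                   decay_rate=0.1):
    persisted = p_holds_now * persistence_prob(dt, decay_rate)
    re_established = (1.0 - persisted) * p_cause_event * p_effect_given_cause
    return persisted + re_established

p = 1.0  # the proposition was just observed to become true
for step in range(1, 6):
    p = project_belief(p, dt=1.0, p_cause_event=0.05)
    print(f"t+{step}: P(proposition still holds) = {p:.3f}")
```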


Artificial Intelligence | 1987

Temporal data base management

Thomas Dean; Drew V. McDermott

Reasoning about time typically involves drawing conclusions on the basis of incomplete information. Uncertainty arises in the form of ignorance, indeterminacy, and indecision. Despite the lack of complete information, a problem solver is continually forced to make predictions in order to pursue hypotheses and plan for the future. Such predictions are frequently contravened by subsequent evidence. This paper presents a computational approach to temporal reasoning that directly confronts these issues. The approach centers around techniques for managing a data base of assertions corresponding to the occurrence of events and the persistence of their effects over time. The resulting computational framework performs the temporal analog of (static) reason maintenance by keeping track of dependency information involving assumptions about the truth of facts spanning various intervals of time. The system described in this paper extends classical predicate-calculus data bases, such as those used by PROLOG, to deal with time in an efficient and natural manner.
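
A toy illustration of the persistence-clipping idea behind such a temporal data base follows; the class and method names are hypothetical, and the sketch omits the dependency (reason-maintenance) machinery the paper describes.

```python
# Toy "time map": each assertion records when a fact became true and an
# initially open-ended persistence; contravening evidence clips it. The real
# system also records dependency information for reason maintenance, which
# this sketch omits entirely.
INF = float("inf")

class TimeMap:
    def __init__(self):
        self.assertions = []  # entries are [fact, start_time, end_time]

    def assert_fact(self, fact, start):
        self.assertions.append([fact, start, INF])

    def clip(self, fact, end):
        # Terminate the persistence of any still-open assertion of `fact`.
        for entry in self.assertions:
            if entry[0] == fact and entry[2] == INF and entry[1] <= end:
                entry[2] = end

    def holds(self, fact, t):
        return any(f == fact and start <= t < end
                   for f, start, end in self.assertions)

tm = TimeMap()
tm.assert_fact("door_open", start=0)
tm.clip("door_open", end=10)      # later evidence: the door was closed at t=10
print(tm.holds("door_open", 5))   # True  -- persistence still assumed
print(tm.holds("door_open", 12))  # False -- persistence was clipped
```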


Computer Vision and Pattern Recognition | 2013

Fast, Accurate Detection of 100,000 Object Classes on a Single Machine

Thomas Dean; Mark A. Ruzon; Mark Segal; Jonathon Shlens; Sudheendra Vijayanarasimhan; Jay Yagnik

Many object detection systems are constrained by the time required to convolve a target image with a bank of filters that code for different aspects of an object's appearance, such as the presence of component parts. We exploit locality-sensitive hashing to replace the dot-product kernel operator in the convolution with a fixed number of hash-table probes that effectively sample all of the filter responses in time independent of the size of the filter bank. To show the effectiveness of the technique, we apply it to evaluate 100,000 deformable-part models requiring over a million (part) filters on multiple scales of a target image in less than 20 seconds using a single multi-core processor with 20GB of RAM. This represents a speed-up of approximately 20,000 times - four orders of magnitude - when compared with performing the convolutions explicitly on the same hardware. While mean average precision over the full set of 100,000 object classes is around 0.16 due in large part to the challenges in gathering training data and collecting ground truth for so many classes, we achieve a mAP of at least 0.20 on a third of the classes and 0.30 or better on about 20% of the classes.
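
The following sketch illustrates the general idea of trading exact dot products for hash-table probes using random-hyperplane LSH; it is not the specific hashing scheme, filter bank, or scale used in the paper.

```python
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(0)
dim, n_filters, n_bits = 64, 10_000, 8

# A bank of filters (stand-ins for part templates) and random hyperplanes.
filters = rng.standard_normal((n_filters, dim))
planes = rng.standard_normal((n_bits, dim))

def lsh_key(v):
    # Sign pattern of projections onto the random hyperplanes.
    return tuple(bool(b) for b in (planes @ v) > 0)

# Offline: index every filter by its hash bucket.
table = defaultdict(list)
for idx, f in enumerate(filters):
    table[lsh_key(f)].append(idx)

# Online: probe the table with an image-window descriptor instead of taking
# dot products against all n_filters filters; only bucket-mates get scored.
# (Real systems use several tables/probes to control the miss rate.)
window = rng.standard_normal(dim)
candidates = table[lsh_key(window)]
scores = {i: float(filters[i] @ window) for i in candidates}
print(f"scored {len(candidates)} of {n_filters} filters exactly")
```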


Artificial Intelligence | 2000

Bounded-parameter Markov decision processes

Robert Givan; Sonia M. Leach; Thomas Dean

In this paper, we introduce the notion of a bounded-parameter Markov decision process as a generalization of the traditional exact MDP. A bounded-parameter MDP is a set of exact MDPs specified by giving upper and lower bounds on transition probabilities and rewards (all the MDPs in the set share the same state and action space). Bounded-parameter MDPs can be used to represent variation or uncertainty concerning the parameters of sequential decision problems. Bounded-parameter MDPs can also be used in aggregation schemes to represent the variation in the transition probabilities for different base states aggregated together in the same aggregate state. We introduce interval value functions as a natural extension of traditional value functions. An interval value function assigns a closed real interval to each state, representing the assertion that the value of that state falls within that interval. An interval value function can be used to bound the performance of a policy over the set of exact MDPs associated with a given bounded-parameter MDP. We describe an iterative dynamic programming algorithm called interval policy evaluation which computes an interval value function for a given bounded-parameter MDP and specified policy. Interval policy evaluation on a policy computes the most restrictive interval value function that is sound, i.e. that bounds the value function for that policy in every exact MDP in the set defined by the bounded-parameter MDP. A simple modification of interval policy evaluation results in a variant of value iteration [Bellman57] that we call interval value iteration, which computes a policy for a bounded-parameter MDP that is optimal in a well-defined sense.
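
A compact sketch of interval policy evaluation on a hypothetical two-state bounded-parameter MDP follows; the states, intervals, and the simple min/max selection rule are illustrative assumptions rather than the paper's general algorithm.

```python
# Hypothetical two-state bounded-parameter MDP under a fixed policy: the
# probability of reaching the "good" state is only known up to an interval.
# Lower/upper value bounds pick the worst/best admissible probability each
# sweep (valid here because V(good) >= V(bad) throughout in this toy model).
gamma = 0.9
reward = {"good": 1.0, "bad": 0.0}
p_good_interval = {"good": (0.7, 0.9),   # P(good -> good) under the policy
                   "bad": (0.1, 0.2)}    # P(bad  -> good) under the policy

def sweep(v, pick):
    new = {}
    for s, interval in p_good_interval.items():
        p_good = pick(interval)  # min(...) for lower bound, max(...) for upper
        new[s] = reward[s] + gamma * (p_good * v["good"] + (1 - p_good) * v["bad"])
    return new

v_lo = {"good": 0.0, "bad": 0.0}
v_hi = {"good": 0.0, "bad": 0.0}
for _ in range(200):
    v_lo, v_hi = sweep(v_lo, min), sweep(v_hi, max)

for s in ("good", "bad"):
    print(f"V({s}) lies in [{v_lo[s]:.2f}, {v_hi[s]:.2f}]")
```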


Journal of Artificial Intelligence Research | 1999

Decision-theoretic planning: structural assumptions and computational leverage

Craig Boutilier; Thomas Dean; Steve Hanks

Planning under uncertainty is a central problem in the study of automated sequential decision making, and has been addressed by researchers in many different fields, including AI planning, decision analysis, operations research, control theory and economics. While the assumptions and perspectives adopted in these areas often differ in substantial ways, many planning problems of interest to researchers in these fields can be modeled as Markov decision processes (MDPs) and analyzed using the techniques of decision theory. This paper presents an overview and synthesis of MDP-related methods, showing how they provide a unifying framework for modeling many classes of planning problems studied in AI. It also describes structural properties of MDPs that, when exhibited by particular classes of problems, can be exploited in the construction of optimal or approximately optimal policies or plans. Planning problems commonly possess structure in the reward and value functions used to describe performance criteria, in the functions used to describe state transitions and observations, and in the relationships among features used to describe states, actions, rewards, and observations. Specialized representations, and algorithms employing these representations, can achieve computational leverage by exploiting these various forms of structure. Certain AI techniques, in particular those based on the use of structured, intensional representations, can be viewed in this way. This paper surveys several types of representations for both classical and decision-theoretic planning problems, and planning algorithms that exploit these representations in a number of different ways to ease the computational burden of constructing policies or plans. It focuses primarily on abstraction, aggregation and decomposition techniques based on AI-style representations.
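
Since the survey centers on MDP formulations of planning, a minimal value-iteration example on a hypothetical three-state MDP is sketched below; the states, actions, and numbers are made up purely for illustration.

```python
# Minimal value iteration on a hypothetical 3-state, 2-action MDP.
# transitions[s][a] = list of (next_state, probability); rewards[s][a] = r.
gamma = 0.95
states = ["start", "mid", "goal"]
actions = ["safe", "risky"]
transitions = {
    "start": {"safe": [("mid", 1.0)], "risky": [("goal", 0.3), ("start", 0.7)]},
    "mid":   {"safe": [("goal", 0.9), ("mid", 0.1)], "risky": [("goal", 1.0)]},
    "goal":  {"safe": [("goal", 1.0)], "risky": [("goal", 1.0)]},
}
rewards = {
    "start": {"safe": 0.0, "risky": -1.0},
    "mid":   {"safe": 0.0, "risky": -2.0},
    "goal":  {"safe": 1.0, "risky": 1.0},
}

def q_value(v, s, a):
    return rewards[s][a] + gamma * sum(p * v[s2] for s2, p in transitions[s][a])

v = {s: 0.0 for s in states}
for _ in range(500):
    v = {s: max(q_value(v, s, a) for a in actions) for s in states}

policy = {s: max(actions, key=lambda a: q_value(v, s, a)) for s in states}
print(v)
print(policy)
```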


Artificial Intelligence | 1994

Deliberation scheduling for problem solving in time-constrained environments

Mark S. Boddy; Thomas Dean

We are interested in the problem faced by an agent with limited computational capabilities, embedded in a complex environment with other agents and processes not under its control. Careful management of computational resources is important for complex problem-solving tasks in which the time spent in decision making affects the quality of the responses generated by a system. This paper describes an approach to designing systems that are capable of taking their own computational resources into consideration during planning and problem solving. In particular, we address the design of systems that manage their computational resources by using expectations about the performance of decision-making procedures and preferences over the outcomes resulting from applying those procedures. Our approach is called deliberation scheduling. Deliberation scheduling involves the explicit allocation of computational resources to decision-making procedures based on the expected effect of those allocations on the system's performance.
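
An illustrative sketch of deliberation scheduling follows, greedily allocating time slices to anytime procedures by expected marginal gain; the performance profiles and the greedy rule are hypothetical stand-ins, not the paper's analysis.

```python
import math

# Hypothetical performance profiles: expected decision quality as a function
# of computation time allocated to each anytime procedure. Diminishing-returns
# profiles make a greedy marginal-gain rule a reasonable allocation strategy.
profiles = {
    "route_planner": lambda t: 1.0 - math.exp(-0.50 * t),
    "grasp_planner": lambda t: 1.0 - math.exp(-0.25 * t),
    "scheduler":     lambda t: 1.0 - math.exp(-0.10 * t),
}

def deliberation_schedule(total_slices, slice_len=0.5):
    """Greedily assign each time slice to the procedure whose expected
    quality improves the most from receiving that slice."""
    allocated = {name: 0.0 for name in profiles}
    for _ in range(total_slices):
        def gain(name):
            t = allocated[name]
            return profiles[name](t + slice_len) - profiles[name](t)
        best = max(profiles, key=gain)
        allocated[best] += slice_len
    return allocated

print(deliberation_schedule(total_slices=12))
```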


ACM Computing Surveys | 1996

Automated planning

Thomas Dean



Artificial Intelligence | 1988

Reasoning about partially ordered events

Thomas Dean; Mark S. Boddy



Computational Intelligence | 1988

Hierarchical planning involving deadlines, travel time, and resources

Thomas Dean; R. James Firby; David P. Miller



National Conference on Artificial Intelligence | 1992

Inferring finite automata with stochastic output functions and an application to map learning

Thomas Dean; Dana Angluin; Kenneth Basye; Sean P. Engelson; Leslie Pack Kaelbling; Evangelos Kokkevis; Oded Maron


Collaboration


Dive into Thomas Dean's collaborations.

Top Co-Authors


Leslie Pack Kaelbling

Massachusetts Institute of Technology
