Publication


Featured research published by Adam J. Grove.


Artificial Intelligence | 1996

From statistical knowledge bases to degrees of belief

Fahiem Bacchus; Adam J. Grove; Joseph Y. Halpern; Daphne Koller

An intelligent agent will often be uncertain about various properties of its environment, and when acting in that environment it will frequently need to quantify its uncertainty. For example, if the agent wishes to employ the expected-utility paradigm of decision theory to guide its actions, it will need to assign degrees of belief (subjective probabilities) to various assertions. Of course, these degrees of belief should not be arbitrary, but rather should be based on the information available to the agent. This paper describes one approach for inducing degrees of belief from very rich knowledge bases that can include information about particular individuals, statistical correlations, physical laws, and default rules. We call our approach the random-worlds method. The method is based on the principle of indifference: it treats all of the worlds the agent considers possible as being equally likely. It is able to integrate qualitative default reasoning with quantitative probabilistic reasoning by providing a language in which both types of information can be easily expressed. Our results show that a number of desiderata that arise in direct inference (reasoning from statistical information to conclusions about individuals) and default reasoning follow directly from the semantics of random worlds. For example, random worlds captures important patterns of reasoning such as specificity, inheritance, indifference to irrelevant information, and default assumptions of independence. Furthermore, the expressive power of the language used and the intuitive semantics of random worlds allow the method to deal with problems that are beyond the scope of many other nondeductive reasoning systems.
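
As a rough illustration of the counting behind the random-worlds method, the following Python sketch enumerates every possible extension of a single unary predicate P over a small domain, keeps the worlds consistent with a toy statistical constraint ("roughly 80% of individuals are P"), and reports the fraction of those worlds in which a named individual satisfies P. The domain sizes, the tolerance interval, and the name "tweety" are invented for this sketch; the paper works with a far richer first-order language.

from itertools import product

def degree_of_belief(n, lo=0.75, hi=0.85):
    # Degree of belief in P(tweety), given that between 75% and 85% of the
    # n individuals satisfy P.  Each world is one possible extension of P,
    # and all worlds consistent with the constraint are treated as equally likely.
    consistent = 0   # worlds satisfying the statistical constraint
    query_true = 0   # of those, worlds in which individual 0 ("tweety") is P
    for world in product([False, True], repeat=n):
        frac = sum(world) / n
        if lo <= frac <= hi:
            consistent += 1
            if world[0]:
                query_true += 1
    return query_true / consistent

for n in (10, 15, 20):
    print(n, degree_of_belief(n))   # tends toward roughly 0.8 as n grows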


Conference on Learning Theory | 1997

General convergence results for linear discriminant updates

Adam J. Grove; Nick Littlestone; Dale Schuurmans

The problem of learning linear-discriminant concepts can be solved by various mistake-driven update procedures, including the Winnow family of algorithms and the well-known Perceptron algorithm. In this paper we define the general class of “quasi-additive” algorithms, which includes Perceptron and Winnow as special cases. We give a single proof of convergence that covers a broad subset of algorithms in this class, including both Perceptron and Winnow, but also many new algorithms. Our proof hinges on analyzing a generic measure of progress construction that gives insight as to when and how such algorithms converge.
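
A minimal Python sketch of the mistake-driven, "quasi-additive" pattern the abstract refers to: the learner accumulates a purely additive tally z of its mistakes and predicts with weights w = f(z) for some fixed transfer function f. Taking f to be the identity gives a Perceptron-style learner, and taking f to be exponentiation gives a Winnow-style multiplicative update; the function names, learning rates, and encoding below are illustrative choices, not the paper's exact formulation.

import numpy as np

def quasi_additive(X, y, f, eta=1.0, epochs=50):
    # Mistake-driven learner: keep an additive tally z of the mistakes made so
    # far and predict with the transformed weights w = f(z).
    # X: (m, n) array of examples; y: labels in {-1, +1}.
    z = np.zeros(X.shape[1])
    for _ in range(epochs):
        mistakes = 0
        for xi, yi in zip(X, y):
            if yi * np.dot(f(z), xi) <= 0:   # prediction disagrees with the label
                z += eta * yi * xi           # the update itself is always additive
                mistakes += 1
        if mistakes == 0:                    # consistent with the sample: stop
            break
    return f(z)

perceptron_w = lambda X, y: quasi_additive(X, y, f=lambda z: z)        # additive weights
winnow_like_w = lambda X, y: quasi_additive(X, y, f=np.exp, eta=0.5)   # multiplicative weights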


Machine Learning | 2001

General Convergence Results for Linear Discriminant Updates

Adam J. Grove; Nick Littlestone; Dale Schuurmans

The problem of learning linear-discriminant concepts can be solved by various mistake-driven update procedures, including the Winnow family of algorithms and the well-known Perceptron algorithm. In this paper we define the general class of “quasi-additive” algorithms, which includes Perceptron and Winnow as special cases. We give a single proof of convergence that covers a broad subset of algorithms in this class, including both Perceptron and Winnow, but also many new algorithms. Our proof hinges on analyzing a generic measure of progress construction that gives insight as to when and how such algorithms converge. Our measure of progress construction also permits us to obtain good mistake bounds for individual algorithms. We apply our unified analysis to new algorithms as well as existing algorithms. When applied to known algorithms, our method “automatically” produces close variants of existing proofs (recovering similar bounds)—thus showing that, in a certain sense, these seemingly diverse results are fundamentally isomorphic. However, we also demonstrate that the unifying principles are more broadly applicable, and analyze a new class of algorithms that smoothly interpolate between the additive-update behavior of Perceptron and the multiplicative-update behavior of Winnow.
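
The interpolating family mentioned at the end of the abstract can be pictured with the same quasi-additive template sketched after the conference version above, by swapping in a transfer function with a tunable exponent. The particular form below is the common p-norm-style choice and is only meant to suggest the idea; the paper's own family and its normalization may differ.

import numpy as np

def p_norm_transfer(z, p):
    # p = 2 gives sign(z) * |z| = z, i.e. the Perceptron's identity map;
    # larger p weights large tallies ever more aggressively, approaching
    # multiplicative, Winnow-like behavior.
    return np.sign(z) * np.abs(z) ** (p - 1)

# Usage with the quasi_additive sketch above:
#   w = quasi_additive(X, y, f=lambda z: p_norm_transfer(z, p=4.0))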


Journal of Artificial Intelligence Research | 1994

Random worlds and maximum entropy

Adam J. Grove; Joseph Y. Halpern; Daphne Koller

Given a knowledge base KB containing first-order and statistical facts, we consider a principled method, called the random-worlds method, for computing a degree of belief that some formula Φ holds given KB. If we are reasoning about a world or system consisting of N individuals, then we can consider all possible worlds, or first-order models, with domain {1,..., N} that satisfy KB, and compute the fraction of them in which Φ is true. We define the degree of belief to be the asymptotic value of this fraction as N grows large. We show that when the vocabulary underlying Φ and KB uses constants and unary predicates only, we can naturally associate an entropy with each world. As N grows larger, there are many more worlds with higher entropy. Therefore, we can use a maximum-entropy computation to compute the degree of belief. This result is in a similar spirit to previous work in physics and artificial intelligence, but is far more general. Of equal interest to the result itself are the limitations on its scope. Most importantly, the restriction to unary predicates seems necessary. Although the random-worlds method makes sense in general, the connection to maximum entropy seems to disappear in the non-unary case. These observations suggest unexpected limitations to the applicability of maximum-entropy methods.
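
The link between counting worlds and entropy can be seen with a back-of-the-envelope Stirling argument (a sketch of the standard calculation for a vocabulary of unary predicates whose atoms, the maximal consistent conjunctions, are A_1,...,A_k; this is not the paper's precise statement). Grouping the worlds over domain {1,...,N} by how many elements fall into each atom, the number of worlds realizing proportions p = (p_1,...,p_k) is, ignoring the denotations of constants, a multinomial coefficient:

\[
\#\{\text{worlds with proportions } p\} \;=\; \binom{N}{p_1 N,\;\dots,\;p_k N}
\;\approx\; e^{\,N H(p)}, \qquad H(p) \;=\; -\sum_{i=1}^{k} p_i \ln p_i .
\]

Among the proportion vectors consistent with KB, those of maximum entropy therefore account for an exponentially dominant share of the worlds as N grows, which is why the random-worlds degree of belief reduces to a maximum-entropy computation in the unary case.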


Principles and Practice of Constraint Programming | 1995

On the Forward Checking Algorithm

Fahiem Bacchus; Adam J. Grove

The forward checking algorithm for solving constraint satisfaction problems is a popular and successful alternative to backtracking. However, its success has largely been determined empirically, and there has been limited work towards a real understanding of why and when forward checking is the superior approach.
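
For reference, a minimal Python rendering of the forward checking idea: after each tentative assignment, prune the domains of the still-unassigned variables against that assignment, and backtrack as soon as some future domain empties. The binary-CSP encoding and the 4-queens example are generic illustrations, not the specific formulation analyzed in the paper.

def forward_check(assignment, domains, variables, consistent):
    # assignment: dict var -> value; domains: dict var -> candidate values;
    # consistent(v1, a, v2, b): True iff v1 = a and v2 = b can coexist.
    if len(assignment) == len(variables):
        return assignment
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        pruned, ok = {}, True
        for other in variables:
            if other in assignment or other == var:
                continue
            # Forward check: drop future values that conflict with var = value.
            left = [w for w in domains[other] if consistent(var, value, other, w)]
            if not left:          # a future variable has no value left: fail early
                ok = False
                break
            pruned[other] = left
        if ok:
            result = forward_check({**assignment, var: value},
                                   {**domains, **pruned, var: [value]},
                                   variables, consistent)
            if result is not None:
                return result
    return None

# 4-queens, one queen per column: no two queens share a row or a diagonal.
cols = list(range(4))
doms = {c: list(range(4)) for c in cols}
ok = lambda c1, r1, c2, r2: r1 != r2 and abs(r1 - r2) != abs(c1 - c2)
print(forward_check({}, doms, cols, ok))   # {0: 1, 1: 3, 2: 0, 3: 2}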


Artificial Intelligence | 1997

Knowing what doesn't matter: exploiting the omission of irrelevant data

Russell Greiner; Adam J. Grove; Alexander Kogan

Most learning algorithms work most effectively when their training data contain completely specified labeled samples. In many diagnostic tasks, however, the data will include the values of only some of the attributes; we model this as a blocking process that hides the values of those attributes from the learner. While blockers that remove the values of critical attributes can handicap a learner, this paper instead focuses on blockers that remove only conditionally irrelevant attribute values, i.e. values that are not needed to classify an instance, given the values of the other unblocked attributes. We first motivate and formalize this model of “superfluous-value blocking”, and then demonstrate that these omissions can be useful, by proving that certain classes that seem hard to learn in the general PAC model—viz., decision trees and DNF formulae—are trivial to learn in this setting. We then extend this model to deal with (1) theory revision (i.e. modifying an existing formula); (2) blockers that occasionally include superfluous values or exclude required values; and (3) other corruptions of the training data.
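
A toy rendering of why such omissions can help, under an idealized blocker invented for this sketch: suppose that on every positive example the blocker leaves visible exactly the attributes of one satisfied term of a monotone DNF target and hides everything else. Each positive example then hands the learner a complete term, so the target can simply be read off. This is only meant to convey the flavor of the superfluous-value blocking model, not its formal definition.

def learn_monotone_dnf(blocked_positives):
    # Each example is a tuple over {True, False, None}; None marks a blocked
    # (hidden) attribute value.  Under the idealized blocker described above,
    # the visible True attributes of a positive example form exactly one term.
    terms = set()
    for example in blocked_positives:
        terms.add(frozenset(i for i, v in enumerate(example) if v is True))
    return terms

# Target concept over 5 attributes: (x0 AND x1) OR (x2 AND x3).
samples = [
    (True, True, None, None, None),   # satisfied via the term x0 x1
    (None, None, True, True, None),   # satisfied via the term x2 x3
]
print(learn_monotone_dnf(samples))    # the two terms {0, 1} and {2, 3}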


Journal of Logic and Computation | 1993

Naming and Identity in Epistemic Logics Part I: The Propositional Case

Adam J. Grove; Joseph Y. Halpern

Modal epistemic logics for many agents often assume a fixed one-to-one correspondence between agents and the names for agents that occur in the language. This assumption restricts the applicability of any logic because it prohibits, for instance, anonymous agents, agents with many names, named groups of agents, and relative (indexical) reference. Here we examine the principles involved in such cases, and give simple propositional logics that are expressive enough to cope with them all.


Machine Learning | 2001

Linear Concepts and Hidden Variables

Adam J. Grove; Dan Roth

We study a learning problem which allows for a “fair” comparison between unsupervised learning methods—probabilistic model construction, and more traditional algorithms that directly learn a classification. The merits of each approach are intuitively clear: inducing a model is more expensive computationally, but may support a wider range of predictions. Its performance, however, will depend on how well the postulated probabilistic model fits that data. To compare the paradigms we consider a model which postulates a single binary-valued hidden variable on which all other attributes depend. In this model, finding the most likely value of any one variable (given known values for the others) reduces to testing a linear function of the observed values. We learn the model with two techniques: the standard EM algorithm, and a new algorithm we develop based on covariances. We compare these, in a controlled fashion, against an algorithm (a version of Winnow) that attempts to find a good linear classifier directly. Our conclusions help delimit the fragility of using a model that is even “slightly” simpler than the distribution actually generating the data, vs. the relative robustness of directly searching for a good predictor.
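
The "reduces to testing a linear function" claim can be spelled out for the simplest case, prediction of the hidden variable itself (a sketch of the standard naive-Bayes-style calculation; the paper's statement covers predicting any single attribute). With binary attributes x_1,...,x_n conditionally independent given the hidden bit H, and writing p_i^h = Pr(x_i = 1 | H = h), the posterior log-odds are

\[
\log \frac{\Pr(H=1 \mid x)}{\Pr(H=0 \mid x)}
= \log \frac{\Pr(H=1)}{\Pr(H=0)}
+ \sum_{i=1}^{n} \left[ x_i \log \frac{p_i^1}{p_i^0}
+ (1 - x_i) \log \frac{1 - p_i^1}{1 - p_i^0} \right],
\]

which has the form w · x + b, so the most likely value of H given the observed attributes is obtained by a linear threshold test.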


Artificial Intelligence | 1995

Naming and identity in epistemic logic part II: a first-order logic for naming

Adam J. Grove

Modal epistemic logics for many agents sometimes ignore or simplify the distinction between the agents themselves, and the names these agents use when reasoning about each other. We consider problems motivated by practical computer science applications, and show that the simplest theories of naming are often inadequate. The issues we raise are related to some well-known philosophical concerns, such as indexical descriptions, de re knowledge, and the problem of referring to nonexistent objects. However, our emphasis is on epistemic logic as a descriptive tool for distributed systems and artificial intelligence applications, which leads to some nonstandard solutions. The main technical result of this paper is a first-order modal logic, specified both axiomatically and semantically (by a variant of possible-worlds semantics), that is expressive enough to cope with all the difficulties we discuss.


Symposium on the Theory of Computing | 1992

Asymptotic conditional probabilities for first-order logic

Adam J. Grove; Joseph Y. Halpern; Daphne Koller

Motivated by problems that arise in computing degrees of belief, we consider the problem of computing asymptotic conditional probabilities for first-order formulas. That is, given first-order formulas φ and θ, we consider the structures with domain {1,…,N} that satisfy θ, and compute the fraction of them in which φ is true. We then consider what happens to this fraction as N grows large. This extends the classical work on 0-1 laws, which considers the limiting probability of first-order formulas, except that now we are considering asymptotic conditional probabilities. Although work has been done on special cases of asymptotic conditional probabilities, no general theory has been developed. This is probably due in part to the fact that it has been known that, if there is a binary predicate symbol in the vocabulary, asymptotic conditional probabilities do not always exist. We show that in this general case, almost all the questions one might want to ask (such as deciding whether the asymptotic probability exists) are highly undecidable. On the other hand, we show that the situation with unary predicates only is much better. If the vocabulary consists only of unary predicate and constant symbols, it is decidable whether the limit exists, and if it does, there is an effective algorithm for computing it. The complexity depends on two parameters: whether there is a fixed finite vocabulary or an infinite one, and whether there is a bound on the depth of quantifier nesting.
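
For the well-behaved unary case, the quantity in question can be computed by brute force for small N. The Python sketch below uses a toy vocabulary with one unary predicate P and one constant c (chosen for this sketch, not taken from the paper) and watches Pr_N(P(c) | ∃x P(x)) settle toward its limit of 1/2.

from itertools import product

def pr_n(n):
    # Enumerate every structure with domain {0, ..., n-1}: an extension of the
    # unary predicate P plus a denotation for the constant c.
    sat_theta = sat_both = 0
    for P in product([False, True], repeat=n):
        for c in range(n):
            if any(P):                 # theta:  EXISTS x . P(x)
                sat_theta += 1
                if P[c]:               # phi:    P(c)
                    sat_both += 1
    return sat_both / sat_theta        # Pr_n(phi | theta)

for n in (2, 4, 8, 12):
    print(n, pr_n(n))   # 0.667, 0.533, 0.502, 0.5001, ... -> 1/2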

Collaboration


Dive into Adam J. Grove's collaborations.

Top Co-Authors

Ronen Basri

Weizmann Institute of Science

Arthur L. Delcher

Loyola University Maryland
