Network


Latest external collaborations at the country level.

Hotspot


Research topics in which Rina Dechter is active.

Publication


Featured research published by Rina Dechter.


Artificial Intelligence | 1987

Network-based heuristics for constraint satisfaction problems

Rina Dechter; Judea Pearl

Many AI tasks can be formulated as constraint-satisfaction problems (CSP), i.e., the assignment of values to variables subject to a set of constraints. While some CSPs are hard, those that are easy can often be mapped into sparse networks of constraints which, in the extreme case, are trees. This paper identifies classes of problems that lend themselves to easy solutions, and develops algorithms that solve these problems optimally. The paper then presents a method of generating heuristic advice to guide the order of value assignments based on both the sparseness found in the constraint network and the simplicity of tree-structured CSPs. The advice is generated by simplifying the pending subproblems into trees, counting the number of consistent solutions in each simplified subproblem, and comparing these counts to decide among the choices pending in the original problem.
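
The counting step behind this advice scheme is easy to illustrate on a tree-structured CSP, where the number of consistent solutions can be computed in one leaves-to-root pass. The sketch below is a minimal illustration of that counting idea on a hypothetical three-variable problem; it is not the paper's full advice-generation procedure, and the domains and constraints are made up.

```python
# Counting consistent solutions of a tree-structured binary CSP with one
# leaves-to-root dynamic-programming pass (a minimal sketch; the variables,
# domains, and constraints below are hypothetical).

# Tree: X is the root, Y and Z are children of X.
domains = {"X": [1, 2, 3], "Y": [1, 2, 3], "Z": [1, 2, 3]}
children = {"X": ["Y", "Z"], "Y": [], "Z": []}

# Binary constraints between a parent value and a child value.
def consistent(parent, pv, child, cv):
    if (parent, child) == ("X", "Y"):
        return pv < cv          # hypothetical constraint X < Y
    if (parent, child) == ("X", "Z"):
        return pv != cv         # hypothetical constraint X != Z
    return True

def count(var, value):
    """Number of solutions of the subtree rooted at `var`, given var=value."""
    total = 1
    for child in children[var]:
        # Sum over the child's values that are consistent with the parent.
        total *= sum(count(child, cv)
                     for cv in domains[child]
                     if consistent(var, value, child, cv))
    return total

solutions = sum(count("X", v) for v in domains["X"])
print(solutions)   # counts all consistent assignments of the toy tree CSP
```

In the paper, such counts are computed on tree relaxations of the pending subproblems and compared to rank the candidate value assignments.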


Journal of the ACM | 1985

Generalized best-first search strategies and the optimality of A*

Rina Dechter; Judea Pearl

This paper reports several properties of heuristic best-first search strategies whose scoring functions f depend on all the information available from each candidate path, not merely on the current cost g and the estimated completion cost h. It is shown that several known properties of A* retain their form (with the min-max of f playing the role of the optimal cost), which helps establish general tests of admissibility and general conditions for node expansion for these strategies. On the basis of this framework the computational optimality of A*, in the sense of never expanding a node that can be skipped by some other algorithm having access to the same heuristic information that A* uses, is examined. A hierarchy of four optimality types is defined, and three classes of algorithms and four domains of problem instances are considered. Computational performances relative to these algorithms and domains are appraised. For each class-domain combination, we then identify the strongest type of optimality that exists and the algorithm for achieving it. The main results of this paper relate to the class of algorithms that, like A*, return optimal solutions (i.e., are admissible) when all cost estimates are optimistic (i.e., h ≤ h*). On this class, A* is shown to be not optimal and it is also shown that no optimal algorithm exists, but if the performance tests are confined to cases in which the estimates are also consistent, then A* is indeed optimal. Additionally, A* is shown to be optimal over a subset of the latter class containing all best-first algorithms that are guided by path-dependent evaluation functions.
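
For reference, the special case that the paper generalizes is classical A*: best-first search ordered by f(n) = g(n) + h(n), admissible whenever h ≤ h*. The sketch below is a standard textbook A* on a hypothetical weighted graph with an optimistic heuristic; it is not the generalized, path-dependent strategies analyzed in the paper.

```python
import heapq

# Textbook A*: best-first search scored by f(n) = g(n) + h(n). This is only
# the special case the paper generalizes; the graph and heuristic here are
# hypothetical illustrations.
def a_star(graph, h, start, goal):
    frontier = [(h[start], 0, start, [start])]   # (f, g, node, path)
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        for nbr, cost in graph[node]:
            g2 = g + cost
            if g2 < best_g.get(nbr, float("inf")):
                best_g[nbr] = g2
                heapq.heappush(frontier, (g2 + h[nbr], g2, nbr, path + [nbr]))
    return None

graph = {"S": [("A", 1), ("B", 4)], "A": [("B", 2), ("G", 5)],
         "B": [("G", 1)], "G": []}
h = {"S": 3, "A": 2, "B": 1, "G": 0}              # optimistic (h <= h*)
print(a_star(graph, h, "S", "G"))                  # -> (4, ['S', 'A', 'B', 'G'])
```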


Artificial Intelligence | 1989

Tree clustering for constraint networks (research note)

Rina Dechter; Judea Pearl

The paper offers a systematic way of regrouping constraints into hierarchical structures capable of supporting search without backtracking. The method involves the formation and preprocessing of an acyclic database that permits a large variety of queries and local perturbations to be processed swiftly, either by sequential backtrack-free procedures, or by distributed constraint propagation processes.
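
The backtrack-free behaviour targeted by the clustering method is easiest to see on a network that is already acyclic: one leaves-to-root pass of directional arc consistency followed by a root-to-leaves assignment pass never needs to backtrack. The sketch below illustrates only that property on a hypothetical tree-structured network; it is not the clustering construction itself.

```python
# Backtrack-free solving of a tree-structured binary constraint network:
# (1) leaves-to-root directional arc consistency, (2) root-to-leaves
# assignment. The variables, domains, and constraints are hypothetical.

domains = {"A": {1, 2, 3}, "B": {1, 2, 3}, "C": {1, 2, 3}}
parent = {"B": "A", "C": "B"}             # tree rooted at A: A - B - C
order = ["A", "B", "C"]                   # parents listed before children

def allowed(parent_var, pv, child_var, cv):
    if (parent_var, child_var) == ("A", "B"):
        return pv < cv                    # hypothetical constraint A < B
    if (parent_var, child_var) == ("B", "C"):
        return pv != cv                   # hypothetical constraint B != C
    return True

# Pass 1: prune parent values with no consistent child value (leaves to root).
for child in reversed(order):
    if child not in parent:
        continue
    p = parent[child]
    domains[p] = {pv for pv in domains[p]
                  if any(allowed(p, pv, child, cv) for cv in domains[child])}

# Pass 2: assign root to leaves; after pass 1 a consistent value always exists.
solution = {}
for var in order:
    if var not in parent:
        solution[var] = min(domains[var])
    else:
        p = parent[var]
        solution[var] = min(cv for cv in domains[var]
                            if allowed(p, solution[p], var, cv))
print(solution)    # e.g. {'A': 1, 'B': 2, 'C': 1}
```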


Artificial Intelligence | 1999

Bucket elimination: a unifying framework for reasoning

Rina Dechter

Bucket elimination is an algorithmic framework that generalizes dynamic programming to accommodate many problem-solving and reasoning tasks. Algorithms such as directional-resolution for propositional satisfiability, adaptive-consistency for constraint satisfaction, Fourier and Gaussian elimination for solving linear equalities and inequalities, and dynamic programming for combinatorial optimization can all be accommodated within the bucket-elimination framework. Many probabilistic inference tasks can likewise be expressed as bucket-elimination algorithms. These include belief updating, finding the most probable explanation, and expected utility maximization. These algorithms share the same performance guarantees; all are time and space exponential in the induced width of the problem's interaction graph. While elimination strategies have extensive demands on memory, a contrasting class of algorithms called "conditioning search" requires only linear space. Algorithms in this class split a problem into subproblems by instantiating a subset of variables, called a conditioning set, or a cutset. Typical examples of conditioning search algorithms are backtracking (in constraint satisfaction) and branch and bound (for combinatorial optimization). The paper presents the bucket-elimination framework as a unifying theme across probabilistic and deterministic reasoning tasks and shows how conditioning search can be augmented to systematically trade space for time.
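
As a concrete illustration of the bucket-elimination pattern, the sketch below sums a product of small table-based functions over all variables by processing one variable (one bucket) at a time along a fixed ordering. The factors and the ordering are hypothetical toy data; the same skeleton specializes to the tasks listed above by swapping the combination and marginalization operators.

```python
from itertools import product

# Bucket elimination for summing a product of table-based functions over all
# variables (the belief-updating / partition-function flavour). Factors are
# (scope, table) pairs; everything here is a hypothetical toy instance.

domains = {"A": [0, 1], "B": [0, 1], "C": [0, 1]}

def make_factor(scope, fn):
    table = {vals: fn(*vals) for vals in product(*(domains[v] for v in scope))}
    return (scope, table)

factors = [
    make_factor(("A", "B"), lambda a, b: 0.9 if a == b else 0.1),
    make_factor(("B", "C"), lambda b, c: 0.7 if b != c else 0.3),
    make_factor(("C",),     lambda c: 0.6 if c == 1 else 0.4),
]

def multiply_and_sum_out(bucket, var):
    """Multiply all factors in a bucket and sum out `var` from the product."""
    scope = sorted({v for s, _ in bucket for v in s if v != var})
    table = {}
    for vals in product(*(domains[v] for v in scope)):
        assignment = dict(zip(scope, vals))
        total = 0.0
        for x in domains[var]:
            assignment[var] = x
            prod = 1.0
            for s, t in bucket:
                prod *= t[tuple(assignment[v] for v in s)]
            total += prod
        table[vals] = total
    return (tuple(scope), table)

def bucket_elimination(factors, ordering):
    pool = list(factors)
    for var in ordering:                       # eliminate variables one by one
        bucket = [f for f in pool if var in f[0]]
        pool = [f for f in pool if var not in f[0]]
        pool.append(multiply_and_sum_out(bucket, var))
    # Remaining factors have empty scope; combine them.
    result = 1.0
    for _, t in pool:
        result *= t[()]
    return result

print(bucket_elimination(factors, ["A", "B", "C"]))   # sum over all assignments
```

The memory cost of this loop is governed by the largest intermediate table created, which is exactly where the induced width of the ordering enters.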


Artificial Intelligence | 1990

Enhancement schemes for constraint processing: backjumping, learning, and cutset decomposition

Rina Dechter

Researchers in the areas of constraint satisfaction problems, logic programming, and truth maintenance systems have suggested various schemes for enhancing the performance of the backtracking algorithm. This paper defines and compares the performance of three such schemes: "backjumping," "learning," and "cycle-cutset." The backjumping and cycle-cutset methods work best when the constraint graph is sparse, while the learning scheme mostly benefits problem instances with dense constraint graphs. An integrated strategy is described which utilizes the distinct advantages of each scheme. Experiments show that, on hard problems, the average improvement realized by the integrated scheme is 20–25% greater than that of any individual scheme.
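
To give a flavour of how backjumping differs from chronological backtracking, the sketch below implements a recursive conflict-directed backjumping variant for binary CSPs: on a dead end, each level returns the set of earlier levels actually responsible, and levels outside that set are jumped over. This is an illustrative variant only, not the specific backjumping, learning, or cycle-cutset schemes (or their integration) evaluated in the paper, and the toy problem is hypothetical.

```python
# A recursive conflict-directed backjumping sketch for binary CSPs.
# On failure, each level returns the set of earlier levels responsible for
# the dead end; levels not in that set are jumped over. The toy problem at
# the bottom is hypothetical.

def backjump_search(variables, domains, conflicts):
    """conflicts(x, a, y, b) -> True if x=a and y=b violate a constraint."""
    assignment = {}

    def solve(i):
        if i == len(variables):
            return dict(assignment), set()
        var = variables[i]
        conflict_set = set()                 # earlier levels blamed at level i
        for value in domains[var]:
            # Check against earlier assignments; remember the first culprit.
            culprit = next((j for j in range(i)
                            if conflicts(variables[j], assignment[variables[j]],
                                         var, value)), None)
            if culprit is not None:
                conflict_set.add(culprit)
                continue
            assignment[var] = value
            solution, child_conflicts = solve(i + 1)
            del assignment[var]
            if solution is not None:
                return solution, set()
            if i not in child_conflicts:
                return None, child_conflicts  # level i is irrelevant: jump over it
            conflict_set |= child_conflicts - {i}
        return None, conflict_set

    return solve(0)[0]

# Hypothetical toy CSP: X1, X2, X3 over {1, 2, 3}, with X1 != X3 and X2 < X3.
def conflicts(x, a, y, b):
    pair = {x: a, y: b}
    if {"X1", "X3"} <= pair.keys():
        return pair["X1"] == pair["X3"]
    if {"X2", "X3"} <= pair.keys():
        return not (pair["X2"] < pair["X3"])
    return False

print(backjump_search(["X1", "X2", "X3"],
                      {v: [1, 2, 3] for v in ["X1", "X2", "X3"]},
                      conflicts))
```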


Annals of Mathematics and Artificial Intelligence | 1994

Propositional semantics for disjunctive logic programs

Rachel Ben-Eliyahu; Rina Dechter

In this paper we study the properties of the class of head-cycle-free extended disjunctive logic programs (HEDLPs), which includes, as a special case, all nondisjunctive extended logic programs. We show that any propositional HEDLP can be mapped in polynomial time into a propositional theory such that each model of the latter corresponds to an answer set, as defined by stable model semantics, of the former. Using this mapping, we show that many queries over HEDLPs can be determined by solving propositional satisfiability problems. Our mapping has several important implications: It establishes the NP-completeness of this class of disjunctive logic programs; it allows existing algorithms and tractable subsets for the satisfiability problem to be used in logic programming; it facilitates evaluation of the expressive power of disjunctive logic programs; and it leads to the discovery of useful similarities between stable model semantics and Clark's predicate completion.


Artificial Intelligence | 2007

AND/OR search spaces for graphical models

Rina Dechter; Robert Mateescu

The paper introduces an AND/OR search space perspective for graphical models that include probabilistic networks (directed or undirected) and constraint networks. In contrast to the traditional (OR) search space view, the AND/OR search tree explicitly displays some of the independencies present in the graphical model and may sometimes reduce the search space exponentially. Indeed, most algorithmic advances in search-based constraint processing and probabilistic inference can be viewed as searching an AND/OR search tree or graph. Familiar parameters such as the depth of a spanning tree, treewidth, and pathwidth are shown to play a key role in characterizing the effect of AND/OR search graphs vs. the traditional OR search graphs. We compare memory-intensive AND/OR graph search with inference methods, and place various existing algorithms within the AND/OR search space.
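
The exponential saving described above comes from the AND decomposition: once a variable is assigned, the subproblems rooted at its pseudo-tree children are independent, so their counts multiply instead of being enumerated jointly. The sketch below counts the solutions of a hypothetical binary-constraint problem by AND/OR tree search over a given pseudo tree (no caching, i.e., the tree rather than the graph variant); the pseudo tree and constraints are made up for illustration.

```python
# Counting solutions by AND/OR tree search over a pseudo tree: an OR node sums
# over the values of a variable, an AND node multiplies the counts of the
# now-independent child subtrees. In this hypothetical example every
# constraint links a variable to one of its pseudo-tree ancestors.

domains = {"A": [1, 2, 3], "B": [1, 2, 3], "C": [1, 2, 3], "D": [1, 2, 3]}
children = {"A": ["B", "D"], "B": ["C"], "C": [], "D": []}   # pseudo tree, root A

# Constraints between a variable and an ancestor (checked when the variable
# is assigned, using the ancestor's value from the current context).
constraints = {
    "A": [],
    "B": [("A", lambda a, b: a < b)],
    "C": [("B", lambda b, c: b != c), ("A", lambda a, c: a != c)],
    "D": [("A", lambda a, d: a != d)],
}

def count(var, context):
    """Number of solutions of the subtree rooted at `var`, given `context`."""
    total = 0
    for value in domains[var]:
        if all(check(context[anc], value) for anc, check in constraints[var]):
            ctx = {**context, var: value}
            subtotal = 1
            for child in children[var]:          # AND node: independent branches
                subtotal *= count(child, ctx)
            total += subtotal                    # OR node: sum over values
    return total

print(count("A", {}))   # 6 solutions for this toy problem
```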


Journal of the ACM | 2003

Mini-buckets: A general scheme for bounded inference

Rina Dechter; Irina Rish

This article presents a class of approximation algorithms that extend the idea of bounded-complexity inference, inspired by successful constraint propagation algorithms, to probabilistic inference and combinatorial optimization. The idea is to bound the dimensionality of dependencies created by inference algorithms. This yields a parameterized scheme, called mini-buckets, that offers an adjustable trade-off between accuracy and efficiency. The mini-bucket approach to optimization problems, such as finding the most probable explanation (MPE) in Bayesian networks, generates both an approximate solution and bounds on the solution quality. We present empirical results demonstrating successful performance of the proposed approximation scheme for the MPE task, both on randomly generated problems and on realistic domains such as medical diagnosis and probabilistic decoding.
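
The inequality behind the mini-bucket bound for maximization tasks such as MPE is simple: maximizing each part of a split bucket separately can only over-estimate the exact result, since max_x f(x)g(x) ≤ (max_x f(x))(max_x g(x)). The snippet below is just that numeric sanity check on two hypothetical tables; it is not the full parameterized scheme.

```python
# The bound exploited by mini-buckets for maximization: splitting a bucket and
# maximizing each part separately yields an upper bound on the exact result,
# because max_x f(x)*g(x) <= (max_x f(x)) * (max_x g(x)).
# The two tables below are hypothetical.

f = {0: 0.5, 1: 0.25}      # f(x)
g = {0: 0.25, 1: 0.5}      # g(x)

exact = max(f[x] * g[x] for x in f)                      # process f and g jointly
mini_bucket_bound = max(f.values()) * max(g.values())    # process them separately

print(exact, mini_bucket_bound)    # 0.125 <= 0.25: the bound holds but is loose
assert exact <= mini_bucket_bound
```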


Journal of the ACM | 1995

On the minimality and global consistency of row-convex constraint networks

Peter van Beek; Rina Dechter

Constraint networks have been shown to be useful in formulating such diverse problems as scene labeling, natural language parsing, and temporal reasoning. Given a constraint network, we often wish to (i) find a solution that satisfies the constraints and (ii) find the corresponding minimal network where the constraints are as explicit as possible. Both tasks are known to be NP-complete in the general case. Task (i) is usually solved using a backtracking algorithm, and task (ii) is often solved only approximately by enforcing various levels of local consistency. In this paper, we identify a property of binary constraints called row convexity and show its usefulness in deciding when a form of local consistency called path consistency is sufficient to guarantee that a network is both minimal and globally consistent. Globally consistent networks have the property that a solution can be found without backtracking. We show that one can test for the row convexity property efficiently and we show, by examining applications of constraint networks discussed in the literature, that our results are useful in practice. Thus, we identify a class of binary constraint networks for which we can solve both tasks (i) and (ii) efficiently. Finally, we generalize the results for binary constraint networks to networks with nonbinary constraints.
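
Row convexity itself is straightforward to test: viewing a binary constraint as a 0/1 matrix (under a suitable ordering of the domain values), the 1-entries in every row must form one consecutive block. The function below is a minimal check of that property on hypothetical matrices; it does not attempt the value reordering or the path-consistency machinery discussed in the paper.

```python
# Row convexity of a binary constraint given as a 0/1 matrix: in every row,
# the 1-entries must form one consecutive block. The example matrices are
# hypothetical.

def is_row_convex(matrix):
    for row in matrix:
        ones = [j for j, entry in enumerate(row) if entry == 1]
        if ones and ones[-1] - ones[0] + 1 != len(ones):   # gaps in the block
            return False
    return True

print(is_row_convex([[0, 1, 1, 0],
                     [1, 1, 0, 0],
                     [0, 0, 1, 1]]))    # True: each row's 1s are consecutive

print(is_row_convex([[1, 0, 1, 0],
                     [0, 1, 1, 0]]))    # False: first row has a gap
```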

Collaboration


An overview of Rina Dechter's collaborations.

Top Co-Authors

Kalev Kask, University of California
Vibhav Gogate, University of Texas at Dallas
Judea Pearl, University of California
Lars Otten, University of California
Bozhena Bidyuk, University of California
Javier Larrosa, Polytechnic University of Catalonia