

Publication


Featured research published by Rachel Ben-Eliyahu-Zohary.


Artificial Intelligence | 1997

Reasoning with minimal models: efficient algorithms and applications

Rachel Ben-Eliyahu-Zohary; Luigi Palopoli

Reasoning with minimal models is at the heart of many knowledge-representation systems. Yet it turns out that this task is formidable, even when very simple theories are considered. In this paper, we introduce the elimination algorithm, which performs, in linear time, minimal model finding and minimal model checking for a significant subclass of positive CNF theories which we call positive head-cycle-free (HCF) theories. We also prove that the task of minimal entailment is easier for positive HCF theories than it is for the class of all positive CNF theories. Finally, we show how variations of the elimination algorithm can be applied to allow queries posed on disjunctive deductive databases and disjunctive default theories to be answered in an efficient way.
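The notions of model and minimal model for positive CNF theories can be made concrete with a brute-force check. The sketch below is purely illustrative and exponential; it is not the linear-time elimination algorithm the paper introduces, and the function names are mine.

```python
from itertools import combinations

def is_model(theory, atoms):
    # A positive CNF clause (a set of atoms) is satisfied iff it
    # contains at least one atom assigned true.
    return all(clause & atoms for clause in theory)

def is_minimal_model(theory, atoms):
    # A model is minimal iff no proper subset of its true atoms
    # is also a model. Brute force over all proper subsets.
    if not is_model(theory, atoms):
        return False
    return not any(
        is_model(theory, set(subset))
        for k in range(len(atoms))
        for subset in combinations(atoms, k)
    )

# Theory (a ∨ b) ∧ (b ∨ c), clauses as atom sets.
theory = [{"a", "b"}, {"b", "c"}]
print(is_minimal_model(theory, {"b"}))       # True: {b} alone satisfies both clauses
print(is_minimal_model(theory, {"a", "b"}))  # False: {b} is a smaller model
```

Even this toy check makes the cost visible: the subset loop is exponential, which is what motivates restricting attention to head-cycle-free theories.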


Artificial Intelligence | 2000

A modal logic for subjective default reasoning

Shai Ben-David; Rachel Ben-Eliyahu-Zohary

In this paper we introduce DML: Default Modal Logic. DML is a logic endowed with a two-place modal connective that has the intended meaning of “If α, then normally β”. On top of providing a well-defined tool for analyzing common default reasoning, DML allows nesting of the default operator. We present a semantic framework in which many of the known default proof systems can be naturally characterized, and prove soundness and completeness theorems for several such proof systems. Our semantics is a “neighborhood modal semantics”, and it allows for subjective defaults, that is, defaults may vary among different worlds within the same model. The semantics has an appealing intuitive interpretation and may be viewed as a set-theoretic generalization of the probabilistic interpretations of default reasoning. We show that our semantics is most general in the sense that any modal semantics that is sound for some basic axioms for default reasoning is a special case of our semantics. Such a generality result may serve to provide a semantical analysis of the relative strength of different proof systems and to show the nonexistence of semantics with certain properties.


Annals of Mathematics and Artificial Intelligence | 2003

FlexiMine – A Flexible Platform for KDD Research and Application Development

Rachel Ben-Eliyahu-Zohary; Carmel Domshlak; Ehud Gudes; N. Liusternik; Amnon Meisels; Tzachi Rosen; Solomon Eyal Shimony

FlexiMine is a KDD system designed as a testbed for data-mining research, as well as a generic knowledge discovery tool for varied database domains. Flexibility is achieved by an open-ended design for extensibility, thus enabling integration of existing data-mining algorithms, new locally developed algorithms, and utility functions such as visualization and preprocessing. Support for new databases is simple and clean: the system interfaces with a standard database server via SQL queries and thus can handle any application database. With a view to serving remote as well as local users, Internet availability was a design goal. By implementing the system in Java, minor modifications allow us to run the user end of the system either as a Java application or (with some limitations on the user) as a Java applet. This paper reviews the architecture, design, and operation of FlexiMine and presents some of the new ideas incorporated in the data-mining algorithms (association rules, decision trees, Bayesian knowledge bases, and metaqueries).


Symposium on Principles of Database Systems | 2000

Computational properties of metaquerying problems

Fabrizio Angiulli; Rachel Ben-Eliyahu-Zohary; Luigi Palopoli; Giovambattista Ianni

Metaquerying is a data-mining technology by which hidden dependencies among several database relations can be discovered. This tool has already been successfully applied to several real-world applications. Recent papers provide only very preliminary results about the complexity of metaquerying. In this paper we define several variants of metaquerying that encompass, as far as we know, all variants defined in the literature. We study both the combined complexity and the data complexity of these variants. We show that, under the combined complexity measure, metaquerying is generally intractable (unless P = NP), but we are able to single out some interesting tractable metaquerying cases (whose combined complexity is LOGCFL-complete). As for the data complexity of metaquerying, we prove that, in general, it is in P, but lies within AC0 in some interesting cases. Finally, we discuss the issue of equivalence between metaqueries, which is useful for optimization purposes.


Artificial Intelligence | 2010

Outlier detection for simple default theories

Fabrizio Angiulli; Rachel Ben-Eliyahu-Zohary; Luigi Palopoli

It was noted recently that the framework of default logics can be exploited for detecting outliers. Outliers are observations expressed by sets of literals that feature unexpected properties. These observations are not explicitly provided in input (as happens with abduction) but, rather, are hidden in the given knowledge base. Unfortunately, in the two related formalisms for specifying defaults, Reiter's default logic and extended disjunctive logic programs, the most general outlier detection problems turn out to lie at the third level of the polynomial hierarchy. In this note, we analyze the complexity of outlier detection for two very simple classes of default theories, namely NU and DNU, for which the entailment problem is solvable in polynomial time. We show that, for these classes, checking for the existence of an outlier is nevertheless intractable. This result contributes to further showing the inherent intractability of outlier detection in default reasoning.


Artificial Intelligence | 2003

Metaqueries: semantics, complexity, and efficient algorithms

Rachel Ben-Eliyahu-Zohary; Ehud Gudes; Giovambattista Ianni

A metaquery (metapattern) is a data mining tool which is useful for learning rules involving more than one relation in the database. The notion of a metaquery has been proposed as a template or a second-order proposition in a language L that describes the type of pattern to be discovered. This tool has already been successfully applied to several real-world applications. In this paper we advance the state of the art in metaquery research in several ways. First, we argue that the notion of a support value for metaqueries, where a support value is intuitively some indication of the relevance of the rules to be discovered, is not adequately defined in the literature, and hence we propose our own definition. Second, we analyze some of the related computational problems, classify them as NP-hard, and point out some tractable cases. Third, we propose some efficient algorithms for computing support and present preliminary experimental results that indicate the usefulness of our algorithms.
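As a toy illustration of instantiating a metaquery, consider the pattern R(x, y) ∧ S(y, z) ⇒ T(x, z) with the second-order variables R, S, T bound to concrete relations. The sketch below computes the fraction of joined body tuples whose head tuple is present, one simple plausibility measure; it is not the support definition proposed in the paper, and the function name is my own.

```python
def head_fraction(r, s, t):
    # For the instantiated pattern R(x, y) ∧ S(y, z) ⇒ T(x, z):
    # the fraction of joined body tuples whose head tuple is in T.
    joined = [(x, z) for (x, y1) in r for (y2, z) in s if y1 == y2]
    if not joined:
        return 0.0
    return sum((x, z) in t for (x, z) in joined) / len(joined)

r = {(1, 2), (2, 3)}   # R(x, y)
s = {(2, 5), (3, 6)}   # S(y, z)
t = {(1, 5)}           # T(x, z)
print(head_fraction(r, s, t))  # 0.5: of body pairs (1, 5) and (2, 6), only (1, 5) is in T
```

A full metaquery evaluator would additionally enumerate the relation bindings for R, S, and T, which is the source of the NP-hardness discussed in the paper.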


Journal of Logic Programming | 2000

More on tractable disjunctive Datalog

Rachel Ben-Eliyahu-Zohary; Luigi Palopoli; Victoria Zemlyanker

Sometimes it is more natural to express knowledge in disjunctive Datalog rather than in ordinary Datalog. Several highly complex variants of disjunctive Datalog have been proposed in the past and their expressive power has been studied. In this paper we investigate tractable fragments of disjunctive Datalog. Algorithms are presented to answer queries defined using these fragments, and their complexity is analyzed. Furthermore, the expressive power of these tractable subsets is studied. The most expressive of the languages considered here is shown to express, in a sense explained in the paper, all polynomial-time queries. This is the first identified fragment of disjunctive Datalog with this property.


Annals of Mathematics and Artificial Intelligence | 1999

Similarity preservation in default logic

Rachel Ben-Eliyahu-Zohary; Nissim Francez; Michael Kaminski

The paper identifies a problem in default reasoning in Reiter’s Default Logic and related systems: elements which are similar given the axioms only, become distinguishable in extensions. We explain why, sometimes, this is considered undesirable. Two approaches are presented for guaranteeing similarity preservation: One approach formalizes a way of uniformly applying the defaults to all similar elements by introducing generic extensions, which depend only on similarity types of objects. According to the second approach, for a restricted class of default theories, a default theory is viewed as a “shorthand notation” to what is “really meant” by its formulation. In this approach we propose a rewriting of defaults in a form that guarantees similarity preservation of the modified theory. It turns out that the above two approaches yield the same result.


International Conference on Logic Programming | 2001

Algorithms for Computing X-Minimal Models

Chen Avin; Rachel Ben-Eliyahu-Zohary

The problem of computing X-minimal models, that is, models minimal with respect to a subset X of all the atoms in a theory, is very relevant for computing circumscriptions and diagnosis. Unfortunately, the problem is NP-hard. In this paper we present two novel algorithms for computing X-minimal models. The advantage of these new algorithms is that, unlike existing ones, they are capable of generating the models one by one. There is no need to compute a superset of all minimal models before finding the first X-minimal one. Our procedures may use local search techniques or, alternatively, complete methods. We have implemented and tested the algorithms, and the preliminary experimental results are encouraging.
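A brute-force reading of X-minimality: a model is X-minimal when no other model has a strictly smaller projection on X. The sketch below materialises all models first, which is exactly what the paper's one-by-one algorithms avoid; it is for illustration only, and the names are mine.

```python
from itertools import product

def models(theory, atoms):
    # All models of a positive CNF theory (clauses as atom sets), brute force.
    for bits in product([False, True], repeat=len(atoms)):
        m = {a for a, b in zip(atoms, bits) if b}
        if all(clause & m for clause in theory):
            yield m

def x_minimal_models(theory, atoms, x):
    # Keep the models whose projection on X is minimal w.r.t. set inclusion.
    ms = list(models(theory, atoms))
    return [m for m in ms if not any((n & x) < (m & x) for n in ms)]

# Theory a ∨ b, minimised only on X = {a}: the model {a} is ruled out
# because {b} projects to the strictly smaller set ∅ on X.
print(x_minimal_models([{"a", "b"}], ["a", "b"], {"a"}))  # [{'b'}]
```

Note that an X-minimal model need not be minimal overall: minimality is judged only on the atoms in X, so atoms outside X may be freely true.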


International Conference on Logic Programming | 2017

Modular Construction of Minimal Models

Rachel Ben-Eliyahu-Zohary; Fabrizio Angiulli; Fabio Fassetti; Luigi Palopoli

We show that minimal models of positive propositional theories can be decomposed based on the structure of the dependency graph of the theories. This observation can be useful for many applications involving computation with minimal models. As an example of such benefits, we introduce new algorithms for minimal model finding and checking that are based on model decomposition. The algorithms’ temporal worst-case complexity is exponential in the size s of the largest connected component of the dependency graph, but their actual cost depends on the size of the largest source actually encountered, which can be far smaller than s, and on the class of theories to which sources belong. Indeed, if all sources reduce to an HCF or HEF theory, the algorithms are polynomial in the size of the theory.
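The decomposition idea can be illustrated by splitting a theory into independent parts: atoms that never share a clause cannot constrain each other, so minimal models can be assembled per part. The connectivity check below is a simplified stand-in for the dependency-graph analysis in the paper; the function names are mine.

```python
from collections import defaultdict

def components(theory):
    # Connected components of the graph that links atoms
    # occurring together in some clause.
    adj = defaultdict(set)
    for clause in theory:
        for a in clause:
            adj[a] |= clause - {a}
    seen, comps = set(), []
    for start in adj:
        if start in seen:
            continue
        stack, comp = [start], set()
        while stack:
            a = stack.pop()
            if a not in comp:
                comp.add(a)
                stack.extend(adj[a] - comp)
        seen |= comp
        comps.append(comp)
    return comps

# Two independent sub-theories, {a, b, c} and {d, e}: minimal models
# of each can be computed separately and combined freely.
theory = [{"a", "b"}, {"b", "c"}, {"d", "e"}]
print(sorted(sorted(c) for c in components(theory)))  # [['a', 'b', 'c'], ['d', 'e']]
```

This captures only the coarsest case (fully disconnected parts); the paper's algorithms work on the finer source structure of the dependency graph, which is where the exponent s in the worst-case bound comes from.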

Collaboration


Dive into Rachel Ben-Eliyahu-Zohary's collaboration.

Top Co-Authors

- Luigi Palopoli, University of California
- Ehud Gudes, Ben-Gurion University of the Negev
- Solomon Eyal Shimony, Ben-Gurion University of the Negev
- Amnon Meisels, Ben-Gurion University of the Negev
- Carmel Domshlak, Ben-Gurion University of the Negev
- Chen Avin, Ben-Gurion University of the Negev