Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Sophie Tourret is active.

Publication


Featured research published by Sophie Tourret.


International Joint Conference on Automated Reasoning | 2014

A Rewriting Strategy to Generate Prime Implicates in Equational Logic

Mnacho Echenim; Nicolas Peltier; Sophie Tourret

Generating the prime implicates of a formula consists in finding its most general consequences. This has many applications in automated reasoning, such as planning and diagnosis, and although the subject has been extensively studied (and still is) in propositional logic, few approaches address the problem in more expressive logics because of its intrinsic complexity. This paper presents one such approach for flat ground equational logic. Aiming at efficiency, it intertwines an existing method that generates all the prime implicates of a formula with a rewriting technique that uses atomic equations to simplify the problem by removing constants during the search. The soundness, completeness and termination of the algorithm are proven. The algorithm has been implemented and an experimental analysis is provided.
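The notion of prime implicates is easiest to see in the propositional setting (the paper itself works in the harder flat ground equational logic). A minimal propositional sketch, assuming clauses are sets of integer literals with negation by sign: saturate the clause set under resolution, then keep only the subsumption-minimal clauses.

```python
from itertools import combinations

def resolve(c1, c2):
    """All non-tautological resolvents of two clauses
    (literals are ints; -x is the negation of x)."""
    res = []
    for lit in c1:
        if -lit in c2:
            r = (c1 - {lit}) | (c2 - {-lit})
            if not any(-l in r for l in r):  # drop tautologies
                res.append(frozenset(r))
    return res

def prime_implicates(clauses):
    """Saturate under resolution, then keep subsumption-minimal clauses."""
    clauses = {frozenset(c) for c in clauses}
    changed = True
    while changed:
        changed = False
        for c1, c2 in combinations(list(clauses), 2):
            for r in resolve(c1, c2):
                if r not in clauses:
                    clauses.add(r)
                    changed = True
    # a clause is prime iff no strictly smaller clause in the set entails it
    return {c for c in clauses if not any(d < c for d in clauses)}
```

For instance, from (a ∨ b) ∧ (¬b ∨ c) the resolvent a ∨ c is also a prime implicate. This brute-force saturation is exactly the inefficiency the paper's rewriting strategy is designed to mitigate in the equational case.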


Inductive Logic Programming | 2017

Inductive Learning from State Transitions over Continuous Domains

Tony Ribeiro; Sophie Tourret; Maxime Folschette; Morgan Magnin; Domenico Borzacchiello; Francisco Chinesta; Olivier F. Roux; Katsumi Inoue

Learning from interpretation transition (LFIT) automatically constructs a model of the dynamics of a system from observations of its state transitions. So far, the systems that LFIT handles have been restricted to discrete variables, or require a discretization of continuous data. However, when working with real data, the discretization choices are critical to the quality of the model learned by LFIT. In this paper, we focus on a method that learns the dynamics of the system directly from continuous time-series data. For this purpose, we propose to model continuous dynamics by logic programs composed of rules whose conditions and conclusions represent continua of values.
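The paper extends LFIT to continuous domains; the discrete Boolean setting it generalizes can be sketched in a few lines. This is an illustrative toy, not the authors' implementation: states are dicts mapping variable names to booleans, and two rule bodies that differ only in the sign of one variable are generalized by dropping that variable (the resolution-like generalization used in basic LFIT).

```python
def generalize(b1, b2):
    """If two rule bodies differ only in the sign of one variable,
    return the body with that variable dropped; otherwise None."""
    d1, d2 = dict(b1), dict(b2)
    if set(d1) != set(d2):
        return None
    diff = [v for v in d1 if d1[v] != d2[v]]
    if len(diff) != 1:
        return None
    return frozenset((u, b) for u, b in d1.items() if u != diff[0])

def lfit(transitions):
    """Toy LFIT over Boolean states. Returns, per head variable, the
    set of rule bodies under which the head is true in the next state."""
    rules = {}
    for state, nxt in transitions:
        for v, val in nxt.items():
            if not val:
                continue
            body = frozenset(state.items())
            bodies = rules.setdefault(v, set())
            merged = True
            while merged:  # fold the new body into existing rules
                merged = False
                for b in list(bodies):
                    g = generalize(body, b)
                    if g is not None:
                        bodies.discard(b)
                        body = g
                        merged = True
            bodies.add(body)
    return rules
```

For the dynamics "x is always true next", the two bodies {x} and {¬x} merge into the empty body, i.e. the fact x. Replacing these Boolean conditions with interval constraints is, roughly, the move the paper makes for continuous data.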


Journal of Artificial Intelligence Research | 2017

Prime Implicate Generation in Equational Logic

Mnacho Echenim; Nicolas Peltier; Sophie Tourret

We present an algorithm for the generation of prime implicates in equational logic, that is, of the most general consequences of formulae containing equations and disequations between first-order terms. This algorithm is defined by a calculus that is proved to be correct and complete. We then focus on the case where the considered clause set is ground, i.e., contains no variables, and devise a specialized tree data structure that is designed to efficiently detect and delete redundant implicates. The corresponding algorithms are presented along with their termination and correctness proofs. Finally, an experimental evaluation of this prime implicate generation method is conducted in the ground case, including a comparison with state-of-the-art propositional and first-order prime implicate generation tools.
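The specialized tree data structure mentioned above serves to detect redundant (subsumed) implicates quickly. As a hedged, generic illustration (not the paper's exact structure), a trie over sorted integer literals can answer "is some stored clause a subset of this one?" without scanning the whole set:

```python
class ClauseTrie:
    """Trie over sorted literals. A stored clause C makes a query
    clause D redundant iff C is a subset of D (C subsumes D)."""

    def __init__(self):
        self.children = {}
        self.end = False  # a stored clause ends at this node

    def insert(self, clause):
        node = self
        for lit in sorted(clause):
            node = node.children.setdefault(lit, ClauseTrie())
        node.end = True

    def subsumes(self, clause):
        """True iff some stored clause is a subset of `clause`."""
        lits = sorted(clause)

        def walk(node, i):
            if node.end:          # a full stored clause matched
                return True
            for j in range(i, len(lits)):  # skip literals not in the branch
                child = node.children.get(lits[j])
                if child is not None and walk(child, j + 1):
                    return True
            return False

        return walk(self, 0)
```

Because branches are shared among clauses with common prefixes, a subsumption query only explores branches compatible with the query clause, which is the kind of saving the paper's tree structure exploits at scale.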


Conference on Automated Deduction | 2015

Quantifier-Free Equational Logic and Prime Implicate Generation

Mnacho Echenim; Nicolas Peltier; Sophie Tourret

An algorithm for generating the prime implicates of sets of ground equational clauses is presented. It extends the standard superposition calculus with rules that attach hypotheses to clauses in order to perform additional inferences. The hypotheses that lead to a refutation represent implicates of the original set of clauses. The set of prime implicates of a clausal set can thus be obtained by saturating this set. Data structures and algorithms are also devised to represent sets of constrained clauses in an efficient and concise way.


International Joint Conference on Artificial Intelligence | 2018

Prime Implicate Generation in Equational Logic (extended abstract)

Mnacho Echenim; Nicolas Peltier; Sophie Tourret

A procedure is proposed to efficiently generate sets of ground implicates of first-order formulas with equality. It is based on a tuning of the superposition calculus [Nieuwenhuis and Rubio, 2001], enriched with rules that add new hypotheses on demand during the proof search. Experimental results are presented, showing that the proposed approach is more efficient than state-of-the-art systems.


Inductive Logic Programming | 2018

Derivation Reduction of Metarules in Meta-interpretive Learning

Andrew Cropper; Sophie Tourret

Meta-interpretive learning (MIL) is a form of inductive logic programming. MIL uses second-order Horn clauses, called metarules, as a form of declarative bias. Metarules define the structures of learnable programs and thus the hypothesis space. Deciding which metarules to use is a trade-off between efficiency and expressivity. The hypothesis space increases given more metarules, so we wish to use fewer metarules, but if we use too few metarules then we lose expressivity. A recent paper used Progol’s entailment reduction algorithm to identify irreducible, or minimal, sets of metarules. In some cases, as few as two metarules were shown to be sufficient to entail all hypotheses in an infinite language. Moreover, it was shown that compared to non-minimal sets, learning with minimal sets of metarules improves predictive accuracies and lowers learning times. In this paper, we show that entailment reduction can be too strong and can remove metarules necessary to make a hypothesis more specific. We describe a new reduction technique based on derivations. Specifically, we introduce the derivation reduction problem, the problem of finding a finite subset of a Horn theory from which the whole theory can be derived using SLD-resolution. We describe a derivation reduction algorithm which we use to reduce sets of metarules. We also theoretically study whether certain sets of metarules can be derivationally reduced to minimal finite subsets. Our experiments compare learning with entailment and derivation reduced sets of metarules. In general, using derivation reduced sets of metarules outperforms using entailment reduced sets of metarules, both in terms of predictive accuracies and learning times.
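The entailment reduction that the paper argues can be too strong is easiest to see propositionally. The paper itself works with second-order metarules and SLD-resolution; the following is only a minimal propositional illustration of the baseline idea, with clauses written as (head, body) pairs: a clause is dropped whenever the remaining theory entails it, checked by assuming the clause's body as facts and forward-chaining.

```python
def entails(theory, clause):
    """Does a propositional definite-clause theory entail `clause`?
    Assume the clause's body as facts and forward-chain; the clause
    is entailed iff its head becomes derivable."""
    head, body = clause
    facts = set(body)
    changed = True
    while changed:
        changed = False
        for h, b in theory:
            if h not in facts and set(b) <= facts:
                facts.add(h)
                changed = True
    return head in facts

def reduce_theory(theory):
    """Greedy entailment reduction: drop any clause entailed by the rest."""
    kept = list(theory)
    for clause in list(kept):
        rest = [c for c in kept if c != clause]
        if entails(rest, clause):
            kept = rest
    return kept
```

Here c :- a is removed because c :- b and b :- a together entail it; derivation reduction, by contrast, only removes a clause when the others can actually derive it by resolution, which is the more conservative criterion the paper advocates.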


International Joint Conference on Artificial Intelligence | 2013

An Approach to Abductive Reasoning in Equational Logic

Mnacho Echenim; Nicolas Peltier; Sophie Tourret


ILP (Short Papers) | 2016

Learning from Interpretation Transition using Feed-Forward Neural Networks

Enguerrand Gentet; Sophie Tourret; Katsumi Inoue


ILP (Late Breaking Papers) | 2017

Learning Logic Program Representation for Delayed Systems With Limited Training Data

Yin Jun Phua; Tony Ribeiro; Sophie Tourret; Katsumi Inoue


PAAR@IJCAR | 2014

A Deductive-Complete Constrained Superposition Calculus for Ground Flat Equational Clauses

Sophie Tourret; Mnacho Echenim; Nicolas Peltier

Collaboration


Dive into Sophie Tourret's collaborations.

Top Co-Authors

Mnacho Echenim (Centre national de la recherche scientifique)
Nicolas Peltier (Centre national de la recherche scientifique)
Katsumi Inoue (National Institute of Informatics)
Tony Ribeiro (Graduate University for Advanced Studies)
Morgan Magnin (Centre national de la recherche scientifique)
Enguerrand Gentet (National Institute of Informatics)
Olivier F. Roux (Institut de Recherche en Communications et Cybernétique de Nantes)