Publication


Featured research published by François Lévy.


IEEE Transactions on Systems, Man, and Cybernetics | 2004

Conflicts versus analytical redundancy relations: a comparative analysis of the model based diagnosis approach from the artificial intelligence and automatic control perspectives

Marie-Odile Cordier; Philippe Dague; François Lévy; Jacky Montmain; M. Staroswiecki; Louise Travé-Massuyès

Two distinct and parallel research communities have been working along the lines of the model-based diagnosis approach: the fault detection and isolation (FDI) community and the diagnosis (DX) community, which have evolved in the fields of automatic control and artificial intelligence, respectively. This paper clarifies and links the concepts and assumptions that underlie the FDI analytical redundancy approach and the DX consistency-based logical approach. A formal framework is proposed in order to compare the two approaches, and a theoretical proof of their equivalence, together with the necessary and sufficient conditions under which it holds, is provided.
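
On the DX side, diagnosis candidates are derived from conflicts: sets of components that cannot all be behaving normally given the observations, with minimal diagnoses computed as minimal hitting sets of the conflicts (violated analytical redundancy relations play the corresponding role on the FDI side). The brute-force sketch below illustrates that step on a made-up three-component example; the component names and conflict sets are assumptions for illustration, not taken from the paper.

```python
from itertools import combinations

def minimal_diagnoses(components, conflicts):
    """Brute-force minimal diagnoses, computed as minimal hitting sets of the conflicts
    (the consistency-based, DX-style view of model-based diagnosis)."""
    diagnoses = []
    for size in range(len(components) + 1):
        for candidate in combinations(sorted(components), size):
            cand = set(candidate)
            # A diagnosis must intersect ("hit") every conflict set.
            if all(cand & set(conflict) for conflict in conflicts):
                # Keep it only if no smaller diagnosis already found is contained in it.
                if not any(set(d) <= cand for d in diagnoses):
                    diagnoses.append(candidate)
    return diagnoses

# Toy example: three components, two conflicts detected from the observations.
components = {"M1", "M2", "A1"}
conflicts = [{"M1", "A1"}, {"M2", "A1"}]
print(minimal_diagnoses(components, conflicts))   # [('A1',), ('M1', 'M2')]
```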


IFAC Proceedings Volumes | 2000

AI and Automatic Control Approaches of Model-Based Diagnosis: Links and Underlying Hypotheses

M-O. Cordier; Philippe Dague; Michel Dumas; François Lévy; Jacky Montmain; M. Staroswiecki; Louise Travé-Massuyès

Two distinct communities have been working along the Model-Based Diagnosis approach. This paper clarifies and links the concepts and hypotheses that underlie the FDI analytical redundancy approach and the DX consistency-based logical approach. This work results from the collaboration existing within the French IMALAIA group, supported by the French National Programs on Automatic Control (GDR Automatique) and on Artificial Intelligence (GDR I3).


European Conference on Symbolic and Quantitative Approaches to Reasoning and Uncertainty | 1991

Computing Extensions of Default Theories

François Lévy

We present a theoretical characterisation of the sets of defaults that generate an extension of a default theory: complete and regular universes. We then define the concept of valid exclusion and use it to give another characterisation of complete and regular universes. This last result allows us to encode a default theory as a set of clauses involving typed literals. Resolution followed by some deletion algorithms then computes every extension of the default theory. Unlike previously published methods, this one works for any propositional default theory.
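
For intuition about what "computing extensions" means, here is a small brute-force guess-and-check enumerator for finite propositional default theories, based on Reiter's fixed-point characterisation. It is only an illustrative sketch (the example theory is the usual Nixon-diamond pattern, assumed here), not the resolution-based encoding the paper describes.

```python
from itertools import product, chain, combinations

def entails(premises, formula, atoms):
    """Propositional entailment by truth-table enumeration (fine for toy theories)."""
    for values in product([False, True], repeat=len(atoms)):
        env = dict(zip(atoms, values))
        if all(eval(p, {}, env) for p in premises) and not eval(formula, {}, env):
            return False
    return True

def extensions(W, D, atoms):
    """Enumerate the extensions of a finite propositional default theory (W, D).
    Each default is a triple (prerequisite, justification, consequent), each written
    as a Python boolean expression over `atoms`."""
    found = []
    for guess in chain.from_iterable(combinations(D, r) for r in range(len(D) + 1)):
        E = list(W) + [c for _, _, c in guess]        # candidate extension, as a base
        # Rebuild the generating defaults with respect to the candidate E.
        applied, changed = [], True
        while changed:
            changed = False
            base = list(W) + [c for _, _, c in applied]
            for d in D:
                pre, just, _ = d
                if d not in applied and entails(base, pre, atoms) \
                        and not entails(E, "not (" + just + ")", atoms):
                    applied.append(d)
                    changed = True
        base = list(W) + [c for _, _, c in applied]
        # Fixed point: the reconstruction must yield exactly the guessed theory.
        if all(entails(base, f, atoms) for f in E) and all(entails(E, f, atoms) for f in base):
            if sorted(guess) not in (sorted(g) for g in found):
                found.append(list(guess))
    return found

# Assumed toy theory: quakers are normally pacifists, republicans normally are not.
atoms = ["q", "r", "p"]
W = ["q", "r"]
D = [("q", "p", "p"), ("r", "not p", "not p")]
for ext in extensions(W, D, atoms):
    print([consequent for _, _, consequent in ext])   # ['p'] and ['not p']
```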


International Conference on Tools with Artificial Intelligence | 2010

An Environment for the Joint Management of Written Policies and Business Rules

François Lévy; Abdoulaye Guissé; Adeline Nazarenko; Nouha Omrane; Sylvie Szulman

The contemporary world produces huge bodies of policies and regulations, while the underlying procedures tend to be automated in decision systems, which are designed to define, deploy, execute, monitor and maintain the various rules with which an organization or enterprise has to comply. It is important that the written documentation be integrated into such decision systems, in order to refer to the texts to explain decisions, to update the systems when the policy evolves or, conversely, to amend the source documents if some of the rules turn out to be inconsistent. The problem is that the information to be searched for is too complex for automated processing, while its volume prohibits manual processing. Arguing that the integration of policies in decision systems can be better achieved through semantic annotation than through full parsing of the source documentation, this paper presents a technical environment that enables the building and exploitation of such semantic annotations.
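
As a rough illustration of what such an annotation can look like, the sketch below links a span of policy text to the business rules derived from it, so that a decision can be traced back to its textual source. The field names, example sentence and rule identifier are hypothetical; this is not the environment's actual data model.

```python
from dataclasses import dataclass, field

@dataclass
class RuleAnnotation:
    """Links a span of source policy text to the business rule(s) derived from it."""
    doc_id: str          # identifier of the policy document
    start: int           # character offset where the annotated fragment starts
    end: int             # character offset where it ends
    concepts: list = field(default_factory=list)   # ontology terms recognised in the span
    rule_ids: list = field(default_factory=list)   # identifiers of the derived business rules

policy = "A customer younger than 21 must not rent a vehicle of group C."
ann = RuleAnnotation(doc_id="rental-policy", start=0, end=len(policy),
                     concepts=["customer", "rental", "vehicle group"],
                     rule_ids=["R-017"])

def explain(rule_id, annotations, documents):
    """Trace a decision rule back to the text fragments it was derived from."""
    return [documents[a.doc_id][a.start:a.end]
            for a in annotations if rule_id in a.rule_ids]

print(explain("R-017", [ann], {"rental-policy": policy}))
```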


Archive | 2013

Theory, Practice, and Applications of Rules on the Web

Leora Morgenstern; Petros S. Stefaneas; François Lévy; Adam Z. Wyner; Adrian Paschke

We present textual logic (TL), a novel approach that enables rapid semi-automatic acquisition of rich logical knowledge from text. The resulting axioms are expressed as defeasible higher-order logic formulas in Rulelog, a novel extended form of declarative logic programs. A key element of TL is textual terminology, a phrasal style of knowledge in which words/word-senses are used directly as logical constants. Another key element of TL is a method for rapid interactive disambiguation as part of logic-based text interpretation. Existential quantifiers are frequently required, and we describe Rulelog's approach to making existential knowledge defeasible. We describe results from a pilot experiment that represented the knowledge from several thousand English sentences in the domain of college-level cell biology, for purposes of question answering.
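
"Defeasible" here means a conclusion can be overridden by a stronger conflicting rule rather than producing an outright contradiction. The toy Python sketch below shows that idea in its simplest form, with priorities deciding between conflicting rules on the classic bird/penguin example; it is only a schematic illustration, not Rulelog's syntax or semantics.

```python
# Toy defeasible rules: a higher-priority rule defeats a lower-priority one
# that concludes the opposite. Example facts and rule names are made up.
facts = {"penguin(tweety)", "bird(tweety)"}

# (name, priority, body atoms, (head atom, truth value))
rules = [
    ("r_bird_flies",     1, {"bird(tweety)"},    ("flies(tweety)", True)),
    ("r_penguin_no_fly", 2, {"penguin(tweety)"}, ("flies(tweety)", False)),
]

def conclude(atom):
    """Return the conclusion of the highest-priority applicable rule, if any."""
    applicable = [(prio, value) for _, prio, body, (head, value) in rules
                  if head == atom and body <= facts]
    if not applicable:
        return None                  # no rule says anything about this atom
    return max(applicable)[1]        # highest priority wins the conflict

print(conclude("flies(tweety)"))     # False: the penguin rule defeats the bird rule
```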


International Journal of Intelligent Systems | 1994

A survey of belief revision and updating in classical logic

François Lévy

This paper summarizes and analyzes the technical aspects of some significant approaches to belief change where beliefs are expressed in classical logic. First, proposals that have played an important role in the elaboration of the problem are considered. Then, we turn to the analysis of various forms of preference relations, which are central to understanding the difficulties of the problem.
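
To make the problem concrete, the sketch below implements one simple member of the family surveyed: revising a finite belief base by keeping the largest sub-bases that remain consistent with the new information. This is a crude, cardinality-based stand-in for the finer preference relations the paper discusses, and the example beliefs are made up.

```python
from itertools import product, combinations

def consistent(formulas, atoms):
    """Satisfiability check by truth-table enumeration (adequate for small examples)."""
    return any(all(eval(f, {}, dict(zip(atoms, values))) for f in formulas)
               for values in product([False, True], repeat=len(atoms)))

def revise(base, new, atoms):
    """Keep the largest sub-bases of `base` that stay consistent with `new`, then add
    `new` to each of them; a simplified, cardinality-based base-revision operator."""
    for size in range(len(base), -1, -1):
        survivors = [list(subset) + [new]
                     for subset in combinations(base, size)
                     if consistent(list(subset) + [new], atoms)]
        if survivors:
            return survivors
    return [[new]]

# Believing "rain" and "rain implies wet grass", then learning the grass is not wet
# forces giving up one of the two old beliefs.
atoms = ["rain", "wet"]
print(revise(["rain", "(not rain) or wet"], "not wet", atoms))
```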


Rules and Rule Markup Languages for the Semantic Web | 2013

Formalization of natural language regulations through SBVR Structured English

François Lévy; Adeline Nazarenko

This paper presents an original use of SBVR to help build a set of business rules out of regulatory documents. The formalization is analyzed as a three-step process, in which SBVR-SE stands in an intermediate position between natural language on the one hand and the formal language on the other. The rules are extracted, clarified and simplified at the general regulatory level (an expert task) before being refined according to the business application (an engineer task). A methodology for these first two steps is described, with the different operations composing each step. It is illustrated with examples from the literature and from the Ontorule use cases.
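
A hypothetical example of the three levels, with the sentence, the SBVR-SE wording and the final rule all invented for illustration (they are not taken from the Ontorule use cases):

```python
# Assumed regulatory sentence at the natural-language level:
natural_language = "Drivers must be at least 21 years old to rent a vehicle."

# Intermediate, controlled-English level, in the style of SBVR Structured English:
sbvr_structured_english = ("It is obligatory that the age of the driver "
                           "of each rental is at least 21.")

# Application-level formal rule (here written as a Prolog-like constraint):
formal_rule = "violation(R) :- rental(R), driver(R, D), age(D, A), A < 21."
```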


Industrial and Engineering Applications of Artificial Intelligence and Expert Systems | 2000

A constraint-based approach to simulate faults in telecommunication networks

Aomar Osmani; François Lévy

To study the consequences of fault situations in telecommunication management networks (TMN), we have proposed a model-based diagnosis approach based on the CSP paradigm. First the TMN is modeled, then a set of breakdown situations is simulated in order to build a fault-training base. Finally, a rule-based system is generated and used to detect faults in the real system. In this work, we are interested in the simulation process. Fault simulation comes down to propagating through the network the information emitted by the failed components. To take into account the inaccuracies of transmissions in the network and the event-processing times, we represent temporal knowledge by time intervals. The components are modeled by extended transducers. Consequently, the behavior of the components is sensitive to the order in which events occur. To reason about these behaviors, a CSP formalism is studied. The main aim is not to compute one possible behavior of a component but to compute all its behaviors, making it possible to build a fault-training database. Polynomial algorithms are proposed to compute solutions and to update temporal constraints.
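
As a very small illustration of the temporal side, where intervals stand for imprecisely known transmission and processing delays, the sketch below propagates an alarm's emission-time interval along a chain of components using interval arithmetic. The component names and delay values are invented, and the real system solves this as a CSP over all possible behaviors rather than a single forward pass.

```python
def add_intervals(a, b):
    """Sum of two time intervals given as (lo, hi) pairs."""
    return (a[0] + b[0], a[1] + b[1])

def propagate(emission, path, delays):
    """Interval of possible reception times of an event emitted during `emission`
    after crossing the components listed in `path`, each adding an uncertain delay."""
    current = emission
    for component in path:
        current = add_intervals(current, delays[component])
    return current

# Assumed per-component delay intervals, in milliseconds.
delays = {"mux_1": (2, 5), "switch_a": (1, 3), "supervisor": (0, 1)}
print(propagate((10, 12), ["mux_1", "switch_a", "supervisor"], delays))  # (13, 21)
```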


European Conference on Symbolic and Quantitative Approaches to Reasoning and Uncertainty | 1993

Weak Extensions for Default Theories

François Lévy

We propose a notion of weak extension of a default theory, motivated by a kind of paraconsistent view of default reasoning, which coincides with Reiter's extensions when they exist. When the default theory is inconsistent, in the sense that it has no extension, we determine the defaults that are not involved in the inconsistency, preserving the possibility of drawing some default conclusions. Moreover, any finite default theory has weak extensions, though the question remains open for infinite default theories. Weak extensions correspond to certain consistent labelings of the defaults, according to a notion of consistency close to the standard one in the TMS, so computing weak extensions can adapt standard TMS methods. We also compare weak extensions to proposals based on maximal consistent sets of defaults, and show that maximal consistency generates far more extensions. Finally, we briefly sketch how our results can be extended to Autoepistemic Logic and to Logic Programming.


International Conference on Legal Knowledge and Information Systems | 2017

On annotation of the textual contents of Scottish legal instruments

Adam Z. Wyner; Fraser Gough; François Lévy; Matt Lynch; Adeline Nazarenko

We thank the University of Aberdeen's Impact, Knowledge Exchange, and Commercialisation Award for funding this 10-week study. This work was also supported by the French National Research Agency (ANR-10-LABX-0083) in the context of the Labex EFL. We also thank the student staff: A. Andonov, A. Faulds, E. Onwa, L. Schelling, R. Stoyanov, and O. Toloch.
