Marco Schaerf
Sapienza University of Rome
Publications
Featured research published by Marco Schaerf.
International Conference on Web Services | 2004
Gwen Salaün; Lucas Bordeaux; Marco Schaerf
We argue that essential facets of Web services, and especially those useful for understanding their interaction, can be described using process-algebraic notations. Web service description and execution languages such as BPEL are essentially process description languages; they are based on primitives for behaviour description and message exchange which can also be found in more abstract process algebras. One legitimate question is therefore whether the formal approach and the sophisticated tools introduced for process algebra can be used to improve the effectiveness and the reliability of Web service development. Our investigations suggest a positive answer, and we claim that process algebras provide thorough and satisfactory support for the whole process of Web service development. We show, on a case study, that readily available tools based on process algebra are effective at verifying that Web services conform to their requirements and satisfy desired properties. We advocate their use both at the design stage and for reverse engineering. More prospectively, we discuss how they can help tackle choreography issues.
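For illustration only (the process names and the CCS-style notation here are ours, not taken from the paper), a simple request-reply interaction between a client and a service can be written as process-algebraic terms, with 'a denoting an output action and the restriction forcing the two processes to synchronise:

    Client  = 'request . reply . 0
    Service = request . 'reply . 0
    System  = (Client | Service) \ {request, reply}

Standard tools for process algebras can then check properties of System such as deadlock freedom, which is the kind of conformance verification the abstract refers to.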
Artificial Intelligence | 1995
Marco Schaerf; Marco Cadoli
Problems in logic are well known to be hard to solve in the worst case. Two different strategies for dealing with this aspect are known from the literature: language restriction and theory approximation. In this paper we are concerned with the second strategy. Our main goal is to define a semantically well-founded logic for approximate reasoning, which is justifiable from the intuitive point of view, and to provide fast algorithms for dealing with it even when using expressive languages. We also want our logic to be useful for performing approximate reasoning in different contexts. We define a method for the approximation of decision reasoning problems, based on multivalued logics. Our work expands and generalizes, in several directions, ideas presented by other researchers. The major features of our technique are: (1) approximate answers give semantically clear information about the problem at hand; (2) approximate answers are easier to compute than answers to the original problem; (3) approximate answers can be improved, and eventually they converge to the right answer; (4) both sound approximations and complete ones are described. The method we propose is flexible enough to be applied to a wide range of reasoning problems. In our research we considered the approximation of several decidable problems with different worst-case complexity, involving both propositional and first-order languages. In particular, we defined approximation techniques for propositional logic, fragments of first-order logic (concept description languages), and modal logic. In our research we also addressed the issue of representing the knowledge of a reasoner with limited resources and how to use such knowledge for approximate reasoning purposes.
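A rough formal sketch of points (1)-(4), assuming the abstract refers to the S-1/S-3 entailment relations of the authors' approximation framework (the exact notation may differ): for a set S of propositional letters,

    T \models^3_S \alpha  implies  T \models \alpha    (S-3 entailment is a sound approximation)
    T \models \alpha      implies  T \models^1_S \alpha    (S-1 entailment is a complete approximation)

Both relations are cheaper to decide than classical entailment when S is small, and both coincide with classical entailment \models when S contains all the letters of the language, which is the sense in which the approximations "converge to the right answer".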
Journal of Logic Programming | 1993
Marco Cadoli; Marco Schaerf
This paper surveys the main results appearing in the literature on the computational complexity of non-monotonic inference tasks. We not only give results about the tractability/intractability of the individual problems but we also analyze sources of complexity and explain intuitively the nature of easy/hard cases. We focus mainly on non-monotonic formalisms, like default logic, autoepistemic logic, circumscription, closed-world reasoning, and abduction, whose relations with logic programming are clear and well studied. Complexity as well as recursion-theoretic results are surveyed.
Journal of Automated Reasoning | 2002
Marco Cadoli; Marco Schaerf; Andrea Giovanardi; Massimo Giovanardi
The high computational complexity of advanced reasoning tasks such as reasoning about knowledge and planning calls for efficient and reliable algorithms for reasoning problems harder than NP. In this paper we propose Evaluate, an algorithm for evaluating quantified Boolean formulae (QBFs). Algorithms for evaluation of QBFs are suitable for experimental analysis of problems that belong to a wide range of complexity classes, a property not easily found in other formalisms. Evaluate is a generalization of the Davis–Putnam procedure for SAT and is guaranteed to work in polynomial space. Before presenting the algorithm, we discuss several abstract properties of QBFs that we singled out to make it more efficient. We also discuss various options that were investigated about heuristics and data structures and report the main results of the experimental analysis. In particular, Evaluate is orders of magnitude more efficient than a nested backtracking procedure that resorts to a Davis–Putnam algorithm for handling the innermost set of quantifiers. Moreover, experiments show that randomly generated QBFs exhibit regular patterns such as phase transition and easy-hard-easy distribution.
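As a minimal sketch of the splitting idea that underlies Davis-Putnam-style QBF evaluation (an illustration in Python, not the authors' Evaluate implementation; the input is assumed to be a closed prenex QBF with a CNF matrix):

    # prefix: list of (quantifier, variable) pairs, e.g. [('forall', 1), ('exists', 2)]
    # clauses: CNF matrix as a list of sets of integer literals (-v stands for "not v")

    def simplify(clauses, lit):
        """Assign lit to true: drop satisfied clauses, shrink the others."""
        result = []
        for clause in clauses:
            if lit in clause:
                continue                  # clause satisfied, drop it
            reduced = clause - {-lit}
            if not reduced:
                return None               # empty clause: matrix falsified
            result.append(reduced)
        return result

    def evaluate(prefix, clauses):
        """Return True iff the closed prenex QBF (prefix, clauses) is true."""
        if clauses is None:
            return False                  # an empty clause was produced
        if not clauses:
            return True                   # every clause is satisfied
        # For a closed QBF the prefix cannot be exhausted while clauses remain.
        quantifier, variable = prefix[0]
        rest = prefix[1:]
        positive = evaluate(rest, simplify(clauses, variable))
        if quantifier == 'exists' and positive:
            return True                   # one true branch is enough
        if quantifier == 'forall' and not positive:
            return False                  # one false branch is enough
        return evaluate(rest, simplify(clauses, -variable))

    # Example: forall x1 exists x2 . (x1 or not x2) and (not x1 or x2) is true.
    print(evaluate([('forall', 1), ('exists', 2)], [{1, -2}, {-1, 2}]))  # True

The recursion branches on the outermost quantified variable: an existential variable needs one true branch, a universal one needs both, and only the current assignment is kept on the stack, which is why such procedures run in polynomial space.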
Artificial Intelligence | 1996
Marco Cadoli; Francesco M. Donini; Marco Schaerf
Several studies of the complexity of NMR have shown that inference in non-monotonic knowledge bases is significantly harder than reasoning in monotonic ones. This contrasts with the general idea that NMR can be used to make knowledge representation and reasoning simpler, not harder. In this paper we show that, to some extent, NMR has fulfilled its goal. In particular, we prove that circumscription allows for a more compact and natural representation of knowledge. Results about the intractability of circumscription can therefore be interpreted as the price one has to pay for such an extra-compact representation. On the other hand, sometimes NMR really does make reasoning simpler; we give prototypical scenarios where closed-world reasoning provides a faster, though unsound, approximation of classical reasoning.
Annals of Mathematics and Artificial Intelligence | 1996
Marco Cadoli; Marco Schaerf
Multivalued logics have a long tradition in the philosophy and logic literature that originates in the work of Łukasiewicz in the 1920s. More recently, many AI researchers have been interested in this topic for both semantic and computational reasons. Multivalued logics have indeed been frequently used both for their semantic properties and as tools for designing tractable reasoning systems. We focus here on the computational aspects of multivalued logics. The main result of this paper is a detailed picture of the impact that the semantic definition, the syntactic form and the assumptions on the relative sizes of the inputs have on the complexity of entailment checking. In particular, we show new polynomial cases and generalize polynomial cases already known in the literature for various popular multivalued logics. Such polynomial cases are obtained by means of two simple algorithms that share a common method.
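As a concrete point of reference (standard textbook material, not a result of the paper), the strong Kleene three-valued connectives over the truth values f < u < t are:

    not:  ¬t = f,  ¬u = u,  ¬f = t
    and:  x ∧ y = min(x, y)
    or:   x ∨ y = max(x, y)

Entailment is then typically defined via designated values: a set of premises entails a conclusion if every valuation giving all premises a designated value (e.g. t) also designates the conclusion. The complexity analysis in the paper is parameterised by exactly such choices: the truth tables, the designated values, the syntactic form of the formulae, and the relative sizes of the inputs.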
International Journal of Business Process Integration and Management | 2006
Gwen Salaün; Lucas Bordeaux; Marco Schaerf
We argue that essential facets of Web services, and especially those useful for understanding their interaction, can be described using process-algebraic notations. Web service description and execution languages such as BPEL are essentially process description languages; they are based on primitives for behaviour description and message exchange which can also be found in more abstract process algebras. One legitimate question is therefore whether the formal approach and the sophisticated tools introduced for process algebra can be used to improve the effectiveness and the reliability of Web service development. Our investigations suggest a positive answer, and we claim that process algebras provide thorough and satisfactory support for the whole process of Web service development. We show, on a case study, that readily available tools based on process algebra are effective at verifying that Web services conform to their requirements and satisfy desired properties. We advocate their use both at the design stage and for reverse engineering. More prospectively, we discuss how they can help tackle choreography issues.
Theoretical Computer Science | 1997
Marco Cadoli; Francesco M. Donini; Marco Schaerf; Riccardo Silvestri
We prove that — unless the polynomial hierarchy collapses at the second level — the size of a purely propositional representation of the circumscription CIRC(T) of a propositional formula T grows faster than any polynomial as the size of T increases. We then analyze the significance of this result in the related field of closed-world reasoning.
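For reference, in its simplest propositional form (all letters minimised, none fixed or varying), circumscription can be defined through minimal models: identifying an interpretation M with the set of letters it makes true,

    M \models CIRC(T)   iff   M \models T and there is no M' \subsetneq M with M' \models T.

A purely propositional representation of CIRC(T) is then a formula whose classical models are exactly these minimal models; the result above states that, in general, no such formula of size polynomial in the size of T exists unless the polynomial hierarchy collapses at the second level.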
Journal of Artificial Intelligence Research | 2000
Marco Cadoli; Francesco M. Donini; Paolo Liberatore; Marco Schaerf
We investigate the space efficiency of a Propositional Knowledge Representation (PKR) formalism. Intuitively, the space efficiency of a formalism F in representing a certain piece of knowledge α is the size of the shortest formula of F that represents α. In this paper we assume that knowledge is either a set of propositional interpretations (models) or a set of propositional formulae (theorems). We provide a formal way of talking about the relative ability of PKR formalisms to compactly represent a set of models or a set of theorems. We introduce two new compactness measures and the corresponding classes, and we show that the relative space efficiency of a PKR formalism in representing models/theorems is directly related to such classes. In particular, we consider formalisms for nonmonotonic reasoning, such as circumscription and default logic, as well as belief revision operators and the stable model semantics for logic programs with negation. One interesting result is that formalisms with the same time complexity do not necessarily belong to the same space efficiency class.
Artificial Intelligence | 1999
Marco Cadoli; Francesco M. Donini; Paolo Liberatore; Marco Schaerf
In this paper we address a specific computational aspect of belief revision: the size of the propositional formula obtained by revising a formula with a new one. In particular, we focus on the size of the smallest formula which is logically equivalent to the revised knowledge base. The main result of this paper is that not all formalizations of belief revision are equal from this point of view. For some of them, we show that the revised knowledge base can be represented by a polynomial-size formula (we call these results “compactability” results). On the other hand, for others the revised knowledge base does not always admit a polynomial-size representation, unless the polynomial hierarchy collapses at a sufficiently low level (“non-compactability” results). We also show that the time complexity of query answering for the revised knowledge base definitely has an impact on being able to represent the result of the revision compactly. Nevertheless, formalisms with the same complexity may have different compactability properties. We also study compactability properties for a weaker form of equivalence, called query equivalence, which allows the introduction of new propositional symbols. Moreover, we extend our analysis to the special case in which the new formula has constant size and to the case of sequences of revisions (i.e., iterated belief revision). A complete analysis along these four coordinates is presented.
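As one concrete operator of the kind studied (given here for illustration; the paper covers a range of revision operators, and this particular choice is an assumption), Dalal's revision selects the models of the new formula that are closest to the old knowledge base in Hamming distance:

    mod(K *_D \mu) = { M \in mod(\mu) : dist(M, mod(K)) is minimal },

where dist(M, mod(K)) is the minimum, over the models M' of K, of the number of propositional letters on which M and M' disagree. The compactability question is then whether K *_D \mu always admits a logically equivalent formula of size polynomial in the sizes of K and \mu.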