Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Baudouin Le Charlier is active.

Publication


Featured research published by Baudouin Le Charlier.


ACM Transactions on Programming Languages and Systems | 1994

Experimental evaluation of a generic abstract interpretation algorithm for PROLOG

Baudouin Le Charlier; Pascal Van Hentenryck

Abstract interpretation of PROLOG programs has attracted many researchers in recent years, partly because of the potential for optimization in PROLOG compilers and partly because of the declarative nature of logic programming languages, which makes them more amenable to optimization than procedural languages. Most of the work, however, has remained at the theoretical level, focusing on the development of frameworks and the definition of abstract domains. This paper reports our effort to verify experimentally the practical value of this area of research. It describes the design and implementation of the generic abstract interpretation algorithm GAIA that we originally proposed in Le Charlier et al. [1991], its instantiation to a sophisticated abstract domain (derived from Bruynooghe and Janssens [1988]) containing modes, types, sharing, and aliasing, and its evaluation both in terms of performance and accuracy. The overall implementation (over 5000 lines of Pascal) has been systematically analyzed on a variety of programs and compared with the complexity analysis of Le Charlier et al. [1991] and the specific analysis systems of Hickey and Mudambi [1989], Taylor [1989; 1990], Van Roy and Despain [1990], and Warren et al. [1988].
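For readers unfamiliar with the approach, the following is a minimal Python sketch of what a generic, domain-parametric fixpoint analyzer for logic programs can look like. The toy mode domain, the simplified program encoding, and the way clause bodies are composed are assumptions made purely for illustration; they do not reproduce the actual GAIA algorithm or its Pascal implementation.

```python
# Minimal sketch of a generic, domain-parametric fixpoint analyzer for
# logic programs, in the spirit of a generic abstract interpretation
# algorithm.  The domain interface and program encoding are simplified
# assumptions for illustration only.

class ModeDomain:
    """Toy abstract domain: each variable is 'ground' or 'any'."""
    BOTTOM = None

    def lub(self, a, b):                      # least upper bound of two abstract values
        if a is None: return b
        if b is None: return a
        return {v: (a[v] if a[v] == b.get(v) else "any") for v in a}

    def leq(self, a, b):                      # True if a is at least as precise as b
        if a is None: return True
        if b is None: return False
        return all(b[v] == "any" or a[v] == b[v] for v in a)

# A program is a map: predicate -> list of clauses; each clause body is a
# list of called predicates.  Abstract clause execution is reduced to
# composing the results of the body calls (a deliberate oversimplification:
# real algorithms thread substitutions through abstract unification).
PROGRAM = {
    "p": [["q"], ["q", "p"]],   # p :- q.   p :- q, p.   (recursive)
    "q": [[]],                  # q.
}

def analyze(program, domain, entry, input_abs):
    memo = {}                                   # (pred, frozen input) -> output abstraction
    def solve(pred, call_abs):
        key = (pred, tuple(sorted(call_abs.items())))
        if key in memo:
            return memo[key]
        memo[key] = domain.BOTTOM               # provisional answer for recursive calls
        changed = True
        while changed:                          # iterate until the answer stabilises
            changed = False
            result = domain.BOTTOM
            for body in program[pred]:
                clause_abs = call_abs
                for call in body:
                    clause_abs = solve(call, clause_abs)
                    if clause_abs is domain.BOTTOM:
                        break
                result = domain.lub(result, clause_abs)
            if not domain.leq(result, memo[key]):
                memo[key] = domain.lub(memo[key], result)
                changed = True
        return memo[key]
    return solve(entry, input_abs)

print(analyze(PROGRAM, ModeDomain(), "p", {"X": "ground"}))   # {'X': 'ground'}
```

The genericity shows in `analyze`: it relies only on the domain's `lub` and `leq` operations, so a different abstract domain can be plugged in without touching the fixpoint logic.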


European Symposium on Research in Computer Security | 1992

ASAX: Software Architecture and Rule-Based Language for Universal Audit Trail Analysis

Naji Habra; Baudouin Le Charlier; Abdelaziz Mounji; Isabelle Mathieu

After a brief survey of the problems related to audit trail analysis and of some approaches to deal with them, the paper outlines the ASAX project, which aims at providing an advanced tool to support such analysis. One key feature of ASAX is its elegant architecture, built on top of a universal analysis tool that allows any audit trail to be analysed after a straightforward format adaptation. Another key feature of the project is the language RUSSEL, used to express queries on audit trails. RUSSEL is a rule-based language tailor-made for the analysis of sequential files in one and only one pass. The design of RUSSEL strikes a good compromise between the required efficiency on the one hand and a suitably declarative style on the other. The language is illustrated by examples of rules for the detection of some representative classical security breaches.
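The abstract contains no RUSSEL code; as a hedged illustration of the general idea of rule-based, single-pass analysis of a sequential audit trail, here is a small Python sketch. The record format, the `failed_login_rule`, and its threshold are invented for the example and do not reflect RUSSEL's actual syntax or the ASAX architecture.

```python
# Hedged sketch of one-pass, rule-driven analysis of a sequential audit
# trail.  The record fields and the sample rule are invented for
# illustration; they do not reproduce RUSSEL's semantics.

from collections import defaultdict

# Each audit record is a dict after "format adaptation" to a uniform shape.
AUDIT_TRAIL = [
    {"time": 1, "event": "login_failed", "user": "alice"},
    {"time": 2, "event": "login_failed", "user": "alice"},
    {"time": 3, "event": "login_failed", "user": "alice"},
    {"time": 4, "event": "login_ok",     "user": "bob"},
]

def failed_login_rule(threshold=3):
    """Rule: flag a user who fails to log in `threshold` times in a row."""
    counts = defaultdict(int)           # state carried from record to record
    def on_record(record, alerts):
        if record["event"] == "login_failed":
            counts[record["user"]] += 1
            if counts[record["user"]] == threshold:
                alerts.append(f"{record['user']}: {threshold} failed logins")
        else:
            counts[record["user"]] = 0  # a success resets the counter
    return on_record

def analyse(trail, rules):
    """Single sequential pass: every active rule sees every record once."""
    alerts = []
    for record in trail:                # one and only one pass over the file
        for rule in rules:
            rule(record, alerts)
    return alerts

print(analyse(AUDIT_TRAIL, [failed_login_rule()]))
# ['alice: 3 failed logins']
```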


Journal of Logic Programming | 1995

Type analysis of Prolog using type graphs

Pascal Van Hentenryck; Agostino Cortesi; Baudouin Le Charlier

Type analysis of Prolog is of primary importance for high-performance compilers, since type information may lead to better indexing and to sophisticated specializations of unification and built-in predicates, to name a few. However, these optimizations often require a sophisticated type inference system capable of inferring disjunctive and recursive types, and hence expensive in computation time. The purpose of this paper is to describe a type analysis system for Prolog based on abstract interpretation and type graphs (i.e., disjunctive rational trees) with this functionality. The system (about 15,000 lines of C) consists of the combination of a generic fixpoint algorithm, a generic pattern domain, and a type graph domain. The main contribution of the paper is to show that this approach can be engineered to be practical for medium-sized programs without sacrificing accuracy. The main technical contribution to achieve this result is a novel widening operator for type graphs which appears to be accurate and effective in keeping the sizes of the graphs, and hence the computation time, reasonably small.
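For a rough feel of the kind of object such an analysis manipulates, the sketch below represents types as nested functor terms and applies a standard depth-bounded widening. The `Type` encoding and the depth-k cut-off are simplifying assumptions: they are not the paper's type graphs, and the widening shown is a common textbook alternative, not the paper's novel operator.

```python
# Hedged sketch: types as (possibly growing) descriptions of sets of terms,
# with a widening that caps growth so the fixpoint iteration terminates.
# The depth-k widening below is a simple standard technique, NOT the
# widening operator introduced in the paper.

from dataclasses import dataclass
from typing import Tuple

ANY = "any"   # top: the set of all terms

@dataclass(frozen=True)
class Type:
    functor: str                 # e.g. "cons", "nil", "int"
    args: Tuple["Type", ...] = ()

def union(t1, t2):
    """Very coarse upper bound: keep structure only where both sides agree."""
    if t1 == ANY or t2 == ANY or t1.functor != t2.functor:
        return ANY
    return Type(t1.functor, tuple(union(a, b) for a, b in zip(t1.args, t2.args)))

def widen(t, depth=3):
    """Depth-bounded widening: collapse everything below `depth` to ANY."""
    if t == ANY or depth == 0:
        return ANY
    return Type(t.functor, tuple(widen(a, depth - 1) for a in t.args))

# A list type that keeps growing over successive iterations of the analysis:
nil = Type("nil")
t2 = Type("cons", (Type("int"), Type("cons", (Type("int"), nil))))
t3 = Type("cons", (Type("int"), t2))
print(widen(union(t2, t3)))      # growth is cut off after a bounded depth
```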


Symposium on Principles of Programming Languages | 1994

Combinations of abstract domains for logic programming

Agostino Cortesi; Baudouin Le Charlier; Pascal Van Hentenryck

Abstract interpretation [7] is a systematic methodology to design static program analyses which has been studied extensively in the logic programming community, because of the potential for optimizations in logic programming compilers and the sophistication of the analyses which require conceptual support. With the emergence of efficient generic abstract interpretation algorithms for logic programming, the main burden in building an analysis is the abstract domain, which gives a safe approximation of the concrete domain of computation. However, accurate abstract domains for logic programming are often complex because of the variety of analyses to perform, their interdependence, and the need to maintain structural information. The purpose of this paper is to propose conceptual and software support for the design of abstract domains. It contains two main contributions: the notion of open product and a generic pattern domain. The open product is a new way of combining abstract domains, allowing each combined domain to benefit from information from the other components through the notions of queries and open operations. The open product is general-purpose and can be used for other programming paradigms as well. The generic pattern domain Pat(ℜ) automatically upgrades a domain D with structural information, yielding a more accurate domain Pat(D) without additional design or implementation cost. The two contributions are orthogonal and can be combined in various ways to obtain sophisticated domains while imposing minimal requirements on the designer. Both contributions are characterized theoretically and experimentally and were used to design very complex abstract domains such as Pat(OProp ⊗ OMode ⊗ OPS) which would be very difficult to design otherwise. On this last domain, designers need only contribute about 20% (about 3,400 lines) of the complete system (about 17,700 lines).
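As a hedged illustration of the cooperation idea, the sketch below combines two toy components that exchange information through a query callback. The `ModeComponent`, `SharingComponent`, and the single `is_ground` query are invented for the example and do not follow the paper's formal definition of the open product or of open operations.

```python
# Hedged sketch of domain combination in which each component can query the
# others ("open product"-style cooperation).  The two toy domains and their
# single query are illustrative inventions.

class ModeComponent:
    """Tracks, per variable, whether it is definitely ground."""
    def __init__(self):
        self.ground = set()

    def bind_to_constant(self, var, query):
        self.ground.add(var)

    def is_ground(self, var):
        return var in self.ground

class SharingComponent:
    """Tracks pairs of variables that may share; grounding a variable
    (learned by querying the other component) removes sharing pairs."""
    def __init__(self):
        self.may_share = set()

    def add_sharing(self, v1, v2, query):
        # Ask the combination whether either variable is already ground.
        if not (query("is_ground", v1) or query("is_ground", v2)):
            self.may_share.add(frozenset((v1, v2)))

class OpenProduct:
    """Runs the same abstract operation on every component, passing a
    `query` callback so components can exchange information."""
    def __init__(self, *components):
        self.components = components

    def query(self, name, *args):
        return any(getattr(c, name)(*args)
                   for c in self.components if hasattr(c, name))

    def op(self, name, *args):
        for c in self.components:
            if hasattr(c, name):
                getattr(c, name)(*args, self.query)

d = OpenProduct(ModeComponent(), SharingComponent())
d.op("bind_to_constant", "X")       # X = 42
d.op("add_sharing", "X", "Y")       # X and Y would share, but X is ground
print(d.components[1].may_share)    # set(): the sharing pair is refined away
```

The point of the combination is visible in `add_sharing`: the sharing component alone could not discard the pair, but the query lets it reuse the mode component's knowledge that X is ground.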


Journal of Logic Programming | 1995

Evaluation of the domain PROP

Pascal Van Hentenryck; Agostino Cortesi; Baudouin Le Charlier

The domain Prop [11, 30] is a conceptually simple and elegant abstract domain to compute groundness information for Prolog programs, where abstract substitutions are represented by Boolean functions. Prop has raised much theoretical interest recently, but little is known about the practical accuracy and efficiency of this domain. Experimental evaluation of Prop is particularly important since Prop theoretically needs to solve a co-NP-complete problem. However, this complexity issue may not matter much in practice because the size of the abstract substitutions is bounded, since Prop would only work on the clause variables in many frameworks. The purpose of this paper is to study the performance of the domain Prop. Its first contribution is to describe an implementation of the domain Prop and to use it to instantiate a generic abstract interpretation algorithm [17, 23, 27]. A key feature of the implementation is the use of ordered binary decision graphs to provide a compact representation of many Boolean functions. Its second contribution is to describe the design and implementation of a new domain, Pat(Prop), combining the domain Prop with structural information about the subterms. This new domain may significantly improve the accuracy of the domain Prop on programs manipulating difference-lists. Both implementations (resp. 6000 and 12,000 lines of C) have been evaluated systematically, and their efficiency and accuracy for groundness inference have been compared with several other abstract domains. The interest of Pat(Prop) and Prop for on-line analysis is also investigated.
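A minimal sketch of the underlying idea, assuming a toy three-variable example: an abstract substitution is a Boolean function over the clause variables, where "variable is true" reads as "variable is bound to a ground term", and a variable is definitely ground iff it is true in every model. The paper relies on ordered binary decision graphs; the sketch below enumerates truth assignments instead, which is only viable for a handful of variables.

```python
# Hedged sketch of the Prop idea: groundness as a Boolean function over the
# clause variables.  Brute-force model enumeration replaces the ordered
# binary decision graphs used in the actual implementation.

from itertools import product

def models(formula, variables):
    """All truth assignments (as dicts) satisfying the formula."""
    return [dict(zip(variables, values))
            for values in product([False, True], repeat=len(variables))
            if formula(dict(zip(variables, values)))]

VARS = ["X", "Y", "Z"]

# The binding X = f(Y, Z) contributes the constraint X <-> (Y and Z);
# the call pattern asserts that Y is ground on entry.
formula = lambda a: (a["X"] == (a["Y"] and a["Z"])) and a["Y"]

def definitely_ground(var, formula, variables):
    """A variable is ground iff it is true in every model of the formula."""
    return all(m[var] for m in models(formula, variables))

print(definitely_ground("Y", formula, VARS))   # True: asserted by the call
print(definitely_ground("X", formula, VARS))   # False: still depends on Z
print(definitely_ground("Z", formula, VARS))   # False
```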


European Conference on Object-Oriented Programming | 2001

Distinctness and Sharing Domains for Static Analysis of Java Programs

Isabelle Pollet; Baudouin Le Charlier; Agostino Cortesi

The application field of static analysis techniques for object-oriented programming is getting broader, ranging from compiler optimizations to security issues. This leads to the need for methodologies that support reusability not only at the code level but also at higher (semantic) levels, in order to minimize the effort of proving correctness of the analyses. Abstract interpretation may be the most appropriate approach in that respect. This paper is a contribution towards the design of a general framework for abstract interpretation of Java programs. We introduce two generic abstract domains that express type, structural, and sharing information about dynamically created objects. These generic domains can be instantiated to get specific analyses either for optimization or verification issues. The semantics of the domains are precisely defined by means of concretization functions based on mappings between concrete and abstract locations. The main abstract operations, i.e., upper bound and assignment, are discussed. An application of the domains to source-to-source program specialization is sketched to illustrate the effectiveness of the analysis.
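To give a flavour of such domains, the sketch below maps reference variables to abstract locations and keeps a symmetric may-share relation, with a join that only retains facts valid on both branches. The representation, the `assign` rule, and the `upper_bound` operation are simplified inventions, not the paper's domains or their concretization functions.

```python
# Hedged sketch of a distinctness/sharing-style abstract state for a Java-like
# language: variables -> abstract locations, plus a may-share relation.
# Everything here is a simplification assumed for illustration.

from dataclasses import dataclass, field

@dataclass
class AbstractState:
    env: dict = field(default_factory=dict)        # variable -> abstract location
    may_share: set = field(default_factory=set)    # frozensets of abstract locations

    def assign(self, lhs, rhs):
        """Abstract effect of `lhs = rhs;` for reference variables:
        both names now denote the same abstract location."""
        if rhs in self.env:
            self.env[lhs] = self.env[rhs]

    def definitely_distinct(self, v1, v2):
        l1, l2 = self.env.get(v1), self.env.get(v2)
        return (l1 is not None and l2 is not None and l1 != l2
                and frozenset((l1, l2)) not in self.may_share)

def upper_bound(s1, s2):
    """Join of two abstract states: keep only facts true in both branches;
    sharing pairs are unioned, which loses precision but stays safe."""
    joined = AbstractState()
    for v in s1.env.keys() & s2.env.keys():
        if s1.env[v] == s2.env[v]:
            joined.env[v] = s1.env[v]
    joined.may_share = s1.may_share | s2.may_share
    return joined

# Branch 1: x and y point to distinct fresh objects; branch 2: y aliases x.
branch1 = AbstractState(env={"x": "L1", "y": "L2"})
branch2 = AbstractState(env={"x": "L1", "y": "L1"})
after_if = upper_bound(branch1, branch2)
print(branch1.definitely_distinct("x", "y"))   # True
print(after_if.definitely_distinct("x", "y"))  # False: y may alias x
```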


Formal Methods | 1998

Specifications are necessarily informal or: some more myths of formal methods

Baudouin Le Charlier; Pierre Flener

We reconsider the concept of specification in order to bring new insights into the debate of formal versus non-formal methods in computer science. In our view, the correctness of a useful program corresponds to an objective fact, which must have a simple, precise, and understandable formulation. As a consequence, a specification can (and must) only make precise the link existing between the program (formality) and its purpose (informality). Moreover, program correctness can be argued only by means of non-formal reasonings, which should be as explicit as possible. This allows us to explain why specifications cannot be written in a strictly formal language. Our view of specifications does not imply a rejection of all ideas put forward in the literature on formal methods. On the contrary, we agree with the proponents of formal methods on most of their arguments, except on those following from the assumption that specifications could (or should) be formal. Finally, we examine why the role and nature of specifications are so often misunderstood.


Software: Practice and Experience | 1993

Generic abstract interpretation algorithms for Prolog: two optimization techniques and their experimental evaluation

Vincent Englebert; Baudouin Le Charlier; Didier Roland; Pascal Van Hentenryck

The efficient implementation of generic abstract interpretation algorithms for Prolog is reconsidered after References 1 and 2. Two new optimization techniques are proposed and applied to the original algorithm of Reference 1: dependency on clause prefixes and caching of operations. The first improvement avoids re-evaluating a clause prefix when no abstract value on which it depends has been updated. The second improvement consists of caching all operations on substitutions and reusing the results whenever possible. The algorithm and the two optimization techniques have been implemented in C (about 8000 lines of code each), tested on a large number of Prolog programs, and compared with the original implementation on an abstract domain containing modes, types and sharing. In conjunction with refinements of the domain algorithms, they produce an average reduction of more than 58 per cent in computation time. Extensive experimental results on the programs are given, including computation times, memory consumption, hit ratios for the caches, the number of operations performed, and the time distribution. As a main result, the improved algorithms exhibit the same efficiency as the specific tools of References 3 and 4, despite the fact that our abstract domain is more sophisticated and accurate. The abstract operations also take 90 per cent of the computation time, indicating that the overhead of the control is very limited. Results on a simpler domain are also given and show that even extremely basic domains can benefit from the optimizations. The general-purpose character of the optimizations is also discussed.
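A hedged sketch of the second technique, caching of abstract operations: results of costly domain operations are memoised on their arguments so that the fixpoint iteration can reuse them, and hit ratios can be measured. The `OperationCache` wrapper and the toy `lub_modes` operation are illustrative assumptions, not the paper's C implementation.

```python
# Hedged sketch of operation caching: domain operations are memoised on their
# (hashable) arguments; repeated calls during the fixpoint iteration hit the
# cache instead of recomputing.  The operation shown is a toy stand-in.

class OperationCache:
    def __init__(self):
        self.table, self.hits, self.misses = {}, 0, 0

    def cached(self, op):
        def wrapper(*args):
            key = (op.__name__, args)
            if key in self.table:
                self.hits += 1
            else:
                self.misses += 1
                self.table[key] = op(*args)
            return self.table[key]
        return wrapper

cache = OperationCache()

@cache.cached
def lub_modes(a, b):
    """Upper bound of two mode descriptions given as tuples of
    (variable, mode) pairs; 'any' absorbs any disagreement."""
    b = dict(b)
    return tuple((v, m if m == b.get(v) else "any") for v, m in a)

s1 = (("X", "ground"), ("Y", "var"))
s2 = (("X", "ground"), ("Y", "ground"))
for _ in range(3):                     # the fixpoint loop recomputes the same join
    lub_modes(s1, s2)
print(cache.hits, cache.misses)        # 2 1
```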


Partial Evaluation and Semantics-Based Program Manipulation | 1993

Groundness analysis for Prolog: implementation and evaluation of the domain Prop

Baudouin Le Charlier; Pascal Van Hentenryck

The domain Prop [22, 8] is a conceptually simple and elegant abstract domain to compute groundness information for Prolog programs. In particular, abstract substitutions are represented by Boolean functions built using the logical connectives ⇔, ∨, ∧. Prop has raised much theoretical interest recently but little is known about the practical accuracy and efficiency of this domain. In this paper, we describe an implementation of Prop and we use it to instantiate a generic abstract interpretation algorithm [14, 10, 17, 15]. A key feature of the implementation is the use of ordered binary decision graphs. The implementation has been compared systematically to two other abstract domains, Mode and Pattern, from the point of view of groundness analysis. The experimental results indicate that (1) Prop is very accurate to infer groundness information; (2) this domain is quite practical in terms of efficiency, although it is theoretically exponential (in the number of clause variables).
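As a hedged illustration of why ordered binary decision graphs help, the sketch below builds a reduced, shared decision-graph representation of the Boolean functions produced by bindings such as X = f(Y, Z), combined with the connectives ⇔, ∧, ∨. This is a compact textbook construction with an invented encoding, not the paper's implementation.

```python
# Hedged sketch of a reduced ordered binary decision graph: identical
# subgraphs are shared through a unique table, and the Boolean connectives
# are computed by the standard recursive "apply" operation.

TRUE, FALSE = True, False          # terminal nodes
UNIQUE = {}                        # (var, low, high) -> shared internal node

def node(v, low, high):
    if low == high:                # redundant test: skip the node
        return low
    return UNIQUE.setdefault((v, low, high), (v, low, high))

def var(v):
    return node(v, FALSE, TRUE)    # the function "v is ground"

def apply_op(op, f, g):
    if isinstance(f, bool) and isinstance(g, bool):
        return op(f, g)
    # Split on the smallest (earliest-ordered) variable occurring in f or g.
    fv = f[0] if not isinstance(f, bool) else None
    gv = g[0] if not isinstance(g, bool) else None
    v = min(x for x in (fv, gv) if x is not None)
    f0, f1 = (f[1], f[2]) if fv == v else (f, f)
    g0, g1 = (g[1], g[2]) if gv == v else (g, g)
    return node(v, apply_op(op, f0, g0), apply_op(op, f1, g1))

AND = lambda a, b: a and b
OR  = lambda a, b: a or b
IFF = lambda a, b: a == b

# X = f(Y, Z) contributes X <-> (Y /\ Z); two clauses are joined with \/.
clause1 = apply_op(IFF, var("X"), apply_op(AND, var("Y"), var("Z")))
clause2 = apply_op(IFF, var("X"), var("Y"))
pred = apply_op(OR, clause1, clause2)
print(len(UNIQUE))                 # number of distinct internal nodes created
```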


WSA '93 Proceedings of the Third International Workshop on Static Analysis | 1993

The Impact of Granularity in Abstract Interpretation of Prolog

Pascal Van Hentenryck; Olivier Degimbe; Baudouin Le Charlier; Laurent Michel

Abstract interpretation of Prolog has received much attention in recent years, leading to the development of many frameworks and algorithms. One reason for this proliferation comes from the fact that program analyses can be defined at various granularities, achieving different trade-offs between efficiency and precision. The purpose of this paper is to study this trade-off experimentally. We review the most frequently proposed granularities, which can be expressed as a two-dimensional space parametrized by the form of the inputs and outputs. The resulting algorithms are evaluated on three abstract domains with very different functionalities (Mode, Prop, and Pattern) to assess the impact of granularity on efficiency and accuracy. This is, to our knowledge, the first study of granularity at the algorithm level, and some of the results are particularly surprising.

Collaboration


Dive into Baudouin Le Charlier's collaborations.

Top Co-Authors

Agostino Cortesi
Ca' Foscari University of Venice

Sabina Rossi
Ca' Foscari University of Venice

Laurent Michel
University of Connecticut

François Gobert
Université catholique de Louvain

Gustavo A. Ospina
Université catholique de Louvain