Wei-Ngan Chin
National University of Singapore
Publications
Featured research published by Wei-Ngan Chin.
verification, model checking and abstract interpretation | 2007
Huu Hai Nguyen; Cristina David; Shengchao Qin; Wei-Ngan Chin
Despite their popularity and importance, pointer-based programs remain a major challenge for program verification. In this paper, we propose an automated verification system that is concise, precise and expressive for ensuring the safety of pointer-based programs. Our approach uses user-definable shape predicates to allow programmers to describe a wide range of data structures with their associated size properties. To support automatic verification, we design a new entailment checking procedure that can handle well-founded inductive predicates using unfold/fold reasoning. We have proven the soundness and termination of our verification system, and have built a prototype system.
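For illustration, a user-definable shape predicate in the spirit of this system might describe a linked list together with its length n (a schematic rendering; the paper's concrete syntax may differ):

```latex
\mathit{ll}(x, n) \;\equiv\; (x = \mathrm{null} \wedge n = 0)
  \;\vee\; \exists\, v, q, m \cdot x \mapsto \mathrm{node}(v, q) \,*\, \mathit{ll}(q, m) \wedge n = m + 1
```

Entailment checking then unfolds such a predicate at a heap location and folds it back once the recursive structure has been re-established.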
Science of Computer Programming | 2012
Wei-Ngan Chin; Cristina David; Huu Hai Nguyen; Shengchao Qin
In recent years, separation logic has emerged as a contender for formal reasoning about heap-manipulating imperative programs. Recent works have focused on specialised provers that are mostly based on fixed sets of predicates. To improve expressivity, we have proposed a prover that can automatically handle user-defined predicates. These shape predicates allow programmers to describe a wide range of data structures with their associated size properties. In the current work, we enhance this prover with support for a new type of constraint, namely bag (multi-set) constraints. With this extension, we can capture the reachable nodes (or values) inside a heap predicate as a bag constraint. Consequently, we are able to prove properties about the actual values stored inside a data structure.
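As a hedged illustration (schematic notation, predicate name hypothetical), a linked-list predicate can additionally collect the multiset S of the values it stores:

```latex
\mathit{llB}(x, S) \;\equiv\; (x = \mathrm{null} \wedge S = \emptyset)
  \;\vee\; \exists\, v, q, S_1 \cdot x \mapsto \mathrm{node}(v, q) \,*\, \mathit{llB}(q, S_1) \wedge S = \{v\} \sqcup S_1
```

A specification for insertion can then state that the output bag is S ⊔ {v}, i.e. the result holds exactly the input values plus the newly inserted one.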
Sigplan Notices | 1999
Wei-Ngan Chin; Siau-Cheng Khoo
Many program optimisations and analyses, such as array-bound checking, termination analysis, etc., depend on knowing the sizes of a function's inputs and outputs. However, size information can be difficult to compute. Firstly, accurate size computation requires detecting size relations between the different inputs of a function. Secondly, different optimisations and analyses may require slightly different size information, and thus slightly different computation. The literature on size computation has mainly concentrated on size checking, rather than size inference. In this paper, we provide a generic framework in which different size variants can be expressed and computed. We also describe an effective algorithm for inferring, instead of checking, size information. Size information is expressed in terms of Presburger formulae, and our algorithm utilises the Omega Calculator to compute size information that is as exact as possible within the limits of linear arithmetic.
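As a small sketch (ours, not drawn from the paper), consider list append; a size inference in this framework would aim to derive the relation between input and output lengths as a Presburger formula:

```haskell
-- The size relation to be inferred for append, in Presburger form:
--   n_res = n_xs + n_ys  /\  n_xs >= 0  /\  n_ys >= 0
-- where n_xs, n_ys and n_res are the lengths of xs, ys and the result.
append :: [a] -> [a] -> [a]
append []     ys = ys
append (x:xs) ys = x : append xs ys
```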
partial evaluation and semantic-based program manipulation | 1993
Wei-Ngan Chin
The tupling transformation strategy can be used to merge loops together by combining recursive calls, and also to eliminate redundant calls for a class of programs. The clever (and difficult) step of this transformation strategy is to find an appropriate tuple of calls, called the eureka tuple, which allows each set of calls to be computed recursively from its previous set. In many cases, this transformation can produce super-linear speedup. In this paper, we present an analysis method that can find eureka tuples for a wide range of functional programs. Our work extends a number of past techniques that used dependency graphs of function calls to analyse redundancy patterns. Our main contribution is the use of appropriate call orderings, based on recursion parameters, to systematically search for eureka tuples in dependency graphs. These orderings allow us to construct sequences of tuples, called continuous sequences of tuples, in which the existence of a matched pair of tuples corresponds to a suitable eureka tuple. An extension of the basic analysis method that uses trees of tuples, instead of sequences, is also presented. The basic and extended analysis methods are shown to be applicable to a wide range of programs. They can be guaranteed to terminate either by bounding the search or by applying suitable restrictions on the class of applicable programs. The latter approach yields a safe automated procedure for tupling transformation.
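The classic instance of tupling (our illustration, not an example taken from the paper) is the Fibonacci function, where the eureka tuple pairs each call with its neighbour and thereby eliminates the redundant calls:

```haskell
-- Naive version: overlapping recursive calls are recomputed,
-- giving exponential running time.
fib :: Int -> Integer
fib 0 = 1
fib 1 = 1
fib n = fib (n - 1) + fib (n - 2)

-- Eureka tuple (fib n, fib (n+1)): each tuple of calls is computed
-- from the previous tuple, so the transformed program runs in
-- linear time, a super-linear speedup.
fibT :: Int -> (Integer, Integer)
fibT 0 = (1, 1)
fibT n = let (a, b) = fibT (n - 1) in (b, a + b)

fib' :: Int -> Integer
fib' = fst . fibT
```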
static analysis symposium | 2005
Wei-Ngan Chin; Huu Hai Nguyen; Shengchao Qin; Martin C. Rinard
We present a new type system for an object-oriented (OO) language that characterizes the sizes of data structures and the amount of heap memory required to successfully execute methods that operate on these data structures. Key components of this type system include type assertions that use symbolic Presburger arithmetic expressions to capture data structure sizes, the effect of methods on the data structures that they manipulate, and the amount of memory that methods allocate and deallocate. For each method, we conservatively capture the amount of memory required to execute it as a function of the sizes of the method's inputs. The safety guarantee is that the method will never attempt to use more memory than its type expressions specify. We have implemented a type checker to verify the memory usage of OO programs. Our experience is that the type system can precisely and effectively capture memory bounds for a wide range of programs.
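As a schematic illustration (our rendering, with hypothetical notation; the paper's concrete syntax may differ), a method type in such a system could pair a symbolic size change with a memory effect:

```latex
% push on a stack of size n: the size grows by one, and the method
% requires one node of heap memory; nothing is released.
\mathtt{void\;push(Object\;o)}\quad
  \mathbf{where}\;\; n' = n + 1;\;\;
  \mathit{alloc} = \{(\mathrm{node}, 1)\};\;\;
  \mathit{release} = \{\}
```

The checker would then ensure at every call site that at least one node's worth of free memory is available before push executes.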
symposium on principles of programming languages | 2008
Wei-Ngan Chin; Cristina David; Huu Hai Nguyen; Shengchao Qin
Conventional specifications for object-oriented (OO) programs must adhere to behavioral subtyping in support of class inheritance and method overriding. However, this requirement inherently weakens the specifications of overridden methods in superclasses, leading to imprecision during program reasoning. To address this, we advocate a fresh approach to OO verification that distinguishes, and relates, specifications that cater to calls with static dispatching and those for calls with dynamic dispatching. We formulate a novel specification subsumption that can avoid code re-verification, where possible. Using a predicate mechanism, we propose a flexible scheme for supporting class invariants and lossless casting. Our aim is to lay the foundation for a practical verification system that is precise, concise and modular for sequential OO programs. We exploit the separation logic formalism to achieve this.
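To give a flavour of specification subsumption (our schematic formulation, not the paper's exact definition), a spec with a weaker precondition and a stronger postcondition subsumes another:

```latex
\langle \mathit{pre}_1, \mathit{post}_1 \rangle \;\text{subsumes}\; \langle \mathit{pre}_2, \mathit{post}_2 \rangle
  \quad\text{if}\quad
  \mathit{pre}_2 \vdash \mathit{pre}_1
  \;\;\text{and}\;\;
  \mathit{post}_1 \vdash \mathit{post}_2
```

By the rule of consequence, a method body verified once against a subsuming spec need not be re-verified against any spec that it subsumes.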
symposium on principles of programming languages | 1998
Zhenjiang Hu; Masato Takeichi; Wei-Ngan Chin
The problems involved in developing efficient parallel programs have proved harder than those in developing efficient sequential ones, both for programmers and for compilers. Although program calculation has been found to be a promising way to solve these problems in the sequential world, we believe that much more effort is needed to study its effective use in the parallel world. In this paper, we propose a calculational framework for the derivation of efficient parallel programs with two main innovations. First, we propose a novel inductive synthesis lemma, on the basis of which an elementary but powerful parallelization theorem is developed. Second, we make the first attempt to construct a calculational algorithm for parallelization, deriving associative operators from data type definitions and making full use of existing fusion and tupling calculations. Being more constructive, our method is not only helpful for the design of efficient parallel programs in general, but also promising for the construction of parallelizing compilers. Several interesting examples are used for illustration.
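A minimal sketch of the underlying idea (ours, not the paper's algorithm): a function that is a list homomorphism over an associative operator can be evaluated over a balanced split of its input, and hence in parallel:

```haskell
-- h is a list homomorphism when h (xs ++ ys) = h xs `op` h ys
-- for an associative op; the two halves can then run in parallel.
hom :: (b -> b -> b) -> (a -> b) -> b -> [a] -> b
hom _  _ e []  = e
hom _  f _ [x] = f x
hom op f e xs  = hom op f e ls `op` hom op f e rs
  where (ls, rs) = splitAt (length xs `div` 2) xs

-- sum is such a homomorphism, since (+) is associative.
psum :: [Int] -> Int
psum = hom (+) id 0
```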
Higher-Order and Symbolic Computation / Lisp and Symbolic Computation | 2001
Wei-Ngan Chin; Siau-Cheng Khoo
Many program optimizations and analyses, such as array-bounds checking, termination analysis, etc., depend on knowing the sizes of a function's inputs and outputs. However, size information can be difficult to compute. Firstly, accurate size computation requires detecting size relations between the different inputs of a function. Secondly, different optimizations and analyses may require slightly different size information, and thus slightly different computation. The literature on size computation has mainly concentrated on size checking, rather than size inference. In this paper, we provide a generic framework in which different size variants can be expressed and computed. We also describe an effective algorithm for inferring, instead of checking, size information. Size information is expressed in terms of Presburger formulae, and our algorithm utilizes the Omega Calculator to compute size information that is as exact as possible within the limits of linear arithmetic.
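For instance (our illustration), the exact output size of zip is the minimum of its input lengths; although min is not itself a linear term, Presburger arithmetic captures it exactly through a disjunction:

```latex
n_{\mathit{res}} = \min(n_{\mathit{xs}}, n_{\mathit{ys}})
\;\equiv\;
(n_{\mathit{xs}} \le n_{\mathit{ys}} \wedge n_{\mathit{res}} = n_{\mathit{xs}})
\;\vee\;
(n_{\mathit{xs}} > n_{\mathit{ys}} \wedge n_{\mathit{res}} = n_{\mathit{ys}})
```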
international symposium on memory management | 2008
Wei-Ngan Chin; Huu Hai Nguyen; Corneliu Popeea; Shengchao Qin
Embedded systems are becoming more widely used, but these systems are often resource-constrained. Programming models for such systems should formally account for resources such as stack and heap space. In this paper, we show how memory resource bounds can be inferred for assembly-level programs. Our inference process captures the memory needs of each method in terms of the symbolic values of its parameters. For better precision, we infer path-sensitive information through a novel guarded expression format. Our current proposal relies on a Presburger solver to capture memory requirements symbolically, and to perform fixpoint analysis for loops and recursion. Apart from guaranteeing memory adequacy, our proposal can provide estimates of memory costs for embedded devices and improve performance via fewer runtime checks against memory bounds.
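To give a flavour of the guarded expression format (a hypothetical example, not taken from the paper), a method's symbolic heap requirement over a parameter n might be expressed path-sensitively as:

```latex
\mathit{mem}(n) \;=\; (\, n \ge 1 \rightarrow 2n + 3 \;;\;\; n < 1 \rightarrow 0 \,)
```

Each guard selects the bound for one class of execution paths, and the fixpoint analysis propagates such expressions through loops and recursive calls.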
ASIAN'06 Proceedings of the 11th Asian computing science conference on Advances in computer science: secure software and related issues | 2006
Corneliu Popeea; Wei-Ngan Chin
Polyhedral analysis [9] is an abstract interpretation used for the automatic discovery of invariant linear inequalities among the numerical variables of a program. Convexity of this abstract domain allows efficient analysis, but also loses precision via the convex-hull and widening operators. To selectively recover this loss of precision, sets of polyhedra (disjunctive elements) may be used to capture more precise invariants. However, a balance must be struck between precision and cost. We introduce the notion of affinity to characterize how closely related a pair of polyhedra is. Finding related elements in the (base) polyhedron domain allows the formulation of precise hull and widening operators lifted to the disjunctive (powerset extension of the) polyhedron domain. We have implemented a modular static analyzer based on disjunctive polyhedral analysis, in which the relational domain and the proposed operators can progressively enhance precision at a reasonable cost.
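A standard one-variable illustration of the precision issue (ours, not from the paper): for two unrelated polyhedra, the convex hull admits every point in between, whereas the disjunctive set stays exact; an affinity measure would merge only closely related pairs and keep such distant ones apart:

```latex
P_1 = \{\, x \le 0 \,\}, \quad P_2 = \{\, x \ge 10 \,\}
\quad\Rightarrow\quad
\mathit{hull}(P_1, P_2) = \top \;\;(\text{all of } \mathbb{R}),
\quad \{P_1, P_2\} \;\text{remains exact}
```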