William D. Clinger
Northeastern University
Publications
Featured research published by William D. Clinger.
Programming Language Design and Implementation | 1998
William D. Clinger
The IEEE/ANSI standard for Scheme requires implementations to be properly tail recursive. This ensures that portable code can rely upon the space efficiency of continuation-passing style and other idioms. On its face, proper tail recursion concerns the efficiency of procedure calls that occur within a tail context. When examined closely, proper tail recursion also depends upon the fact that garbage collection can be asymptotically more space-efficient than Algol-like stack allocation. Proper tail recursion is not the same as ad hoc tail-call optimization in stack-based languages. Proper tail recursion often precludes stack allocation of variables, but yields a well-defined asymptotic space complexity that can be relied upon by portable programs. This paper offers a formal and implementation-independent definition of proper tail recursion for Scheme. It also shows how an entire family of reference implementations can be used to characterize related safe-for-space properties, and proves the asymptotic inequalities that hold between them.
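Proper tail recursion is a language-level space guarantee, not merely a compiler optimization. As a loose illustration only (not the paper's formalism), a trampoline in Python, a language that is not properly tail recursive, recovers the constant-stack behavior that Scheme guarantees for calls in tail position:

```python
# Illustrative sketch: a trampoline runs tail calls in constant stack
# space, mimicking the space guarantee proper tail recursion provides.
# (Python itself would overflow its stack on deep direct recursion.)

def countdown(n):
    # Tail-recursive in form: instead of calling itself, it returns a
    # thunk for the trampoline to invoke, so the stack never grows.
    if n == 0:
        return 0
    return lambda: countdown(n - 1)

def trampoline(result):
    # Repeatedly force thunks until a non-callable final value appears.
    while callable(result):
        result = result()
    return result

print(trampoline(countdown(1_000_000)))  # completes without stack overflow
```

A directly recursive `countdown` would exceed Python's recursion limit long before a million calls; the trampolined version uses constant stack, which is the asymptotic property portable Scheme programs may rely upon.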
Programming Language Design and Implementation | 1990
William D. Clinger
Converting decimal scientific notation into binary floating point is nontrivial, but this conversion can be performed with the best possible accuracy without sacrificing efficiency. Consider the problem of converting decimal scientific notation for a number into the best binary floating point approximation to that number, for some fixed precision. This problem cannot be solved using arithmetic of any fixed precision. Hence the IEEE Standard for Binary Floating-Point Arithmetic does not require the result of such a conversion to be the best approximation. This paper presents an efficient algorithm that always finds the best approximation. The algorithm uses a few extra bits of precision to compute an IEEE-conforming approximation while testing an intermediate result to determine whether the approximation could be other than the best. If the approximation might not be the best, then the best approximation is determined by a few simple operations on multiple-precision integers, where the precision is determined by the input. When using 64 bits of precision to compute IEEE double precision results, the algorithm avoids higher-precision arithmetic over 99% of the time. The input problem considered by this paper is the inverse of an output problem considered by Steele and White: Given a binary floating point number, print a correctly rounded decimal representation of it using the smallest number of digits that will allow the number to be read without loss of accuracy. The Steele and White algorithm assumes that the input problem is solved; an imperfect solution to the input problem, as allowed by the IEEE standard and ubiquitous in current practice, defeats the purpose of their algorithm.
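Modern runtimes have since adopted both halves of this round-trip story. As a small demonstration (using CPython, whose `float()` performs correctly rounded conversion and whose `repr()` solves the shortest-digits output problem of Steele and White):

```python
from fractions import Fraction

# Correctly rounded input conversion: float() returns the best double
# approximation to the decimal input, the property this paper's
# algorithm guarantees.
x = float("0.1")

# Shortest-digits output (the Steele/White direction): repr() prints the
# fewest digits that read back to exactly the same double.
print(repr(x))                  # '0.1'
assert float(repr(x)) == x      # exact round trip

# The stored double is not exactly 1/10; Fraction exposes its exact value.
print(Fraction(x))              # 3602879701896397/36028797018963968
```

As the abstract notes, the output algorithm's round-trip property only holds because the input conversion is correctly rounded; an input routine that returned merely a close approximation would break the `float(repr(x)) == x` invariant.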
Programming Language Design and Implementation | 1997
William D. Clinger; Lars Thomas Hansen
If a fixed exponentially decreasing probability distribution function is used to model every object's lifetime, then the age of an object gives no information about its future life expectancy. This radioactive decay model implies that there can be no rational basis for deciding which live objects should be promoted to another generation. Yet there remains a rational basis for deciding how many objects to promote, when to collect garbage, and which generations to collect. Analysis of the model leads to a new kind of generational garbage collector whose effectiveness does not depend upon heuristics that predict which objects will live longer than others. This result provides insight into the computational advantages of generational garbage collection, with implications for the management of objects whose life expectancies are difficult to predict.
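The key property of the radioactive decay model is memorylessness: under an exponential lifetime distribution, survivors of any age have the same expected remaining lifetime as fresh objects. A small simulation (with an arbitrary, hypothetical decay rate) makes this concrete:

```python
# Memorylessness of the exponential (radioactive-decay) lifetime model:
# an object's age tells us nothing about its remaining life expectancy.
import random

random.seed(0)
lam = 0.01  # hypothetical hazard rate; mean lifetime is 1/lam = 100
lifetimes = [random.expovariate(lam) for _ in range(200_000)]

mean_all = sum(lifetimes) / len(lifetimes)

# Remaining lifetimes of objects that have already survived to age 50.
survivors = [t - 50 for t in lifetimes if t > 50]
mean_remaining = sum(survivors) / len(survivors)

print(round(mean_all), round(mean_remaining))  # both near 100
```

Because the survivors' expected remaining lifetime equals the overall mean, no age-based promotion heuristic can do better than chance under this model, which is exactly the paper's premise.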
Higher-Order and Symbolic Computation / Lisp and Symbolic Computation | 1999
William D. Clinger; Anne Hartheimer; Eric M. Ost
Scheme and Smalltalk continuations may have unlimited extent. This means that a purely stack-based implementation of continuations, as suffices for most languages, is inadequate. We review several implementation strategies for continuations and compare their performance using instruction counts for the normal case and continuation-intensive synthetic benchmarks for other scenarios, including coroutines and multitasking. All of the strategies constrain a compiler in some way, resulting in indirect costs that are hard to measure directly. We use related measurements on a set of benchmarks to calculate upper bounds for these indirect costs.
Journal of Functional Programming | 2001
Mitchell Wand; William D. Clinger
Destructive array update optimization is critical for writing scientific codes in functional languages. We present set constraints for an interprocedural update optimization that runs in polynomial time. This is a multi-pass optimization, involving interprocedural flow analyses for aliasing and liveness. We characterize and prove the soundness of these analyses using small-step operational semantics. We also prove that any sound liveness analysis induces a correct program transformation.
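The transformation this analysis licenses can be sketched in a few lines. The Python below is an illustration of the idea only, with hypothetical helper names, not the paper's set-constraint formulation: a pure functional update copies its argument, while the optimized in-place form is sound exactly when the analyses show the old array is dead and unaliased afterwards.

```python
# Sketch of the update optimization: replace a copying functional update
# with an in-place one when liveness/aliasing analysis proves it safe.

def update(arr, i, v):
    # Pure semantics: O(n) copy, the original array is untouched.
    new = list(arr)
    new[i] = v
    return new

def update_in_place(arr, i, v):
    # Optimized form: sound only when `arr` is dead and unaliased after
    # this call -- the condition the interprocedural analyses establish.
    arr[i] = v
    return arr

a = [0, 1, 2]
b = update(a, 1, 42)          # a is unchanged; b is a fresh array
c = update_in_place(a, 2, 9)  # a is consumed; c aliases a
print(a, b, c)
```

The paper's soundness theorem corresponds to the claim that any liveness analysis meeting its conditions may rewrite calls of the first form into the second without changing observable behavior.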
International Conference on Functional Programming | 2002
Lars Thomas Hansen; William D. Clinger
Generational collection has improved the efficiency of garbage collection in fast-allocating programs by focusing on collecting young garbage, but has done little to reduce the cost of collecting a heap containing large amounts of older data. A new generational technique, older-first collection, shows promise in its ability to manage older data. This paper reports on an implementation study that compared two older-first collectors to traditional (younger-first) generational collectors. One of the older-first collectors performed well and was often effective at reducing the first-order cost of collection relative to younger-first collectors. Older-first collectors perform especially well when objects have queue-like or random lifetimes.
Celebrating the 50th Anniversary of Lisp | 2008
William D. Clinger
Scheme began as a sequential implementation of the Actor model, from which Scheme acquired its proper tail recursion and first class continuations; other consequences of its origins include lexical scoping, first class procedures, uniform evaluation, and a unified environment. As Scheme developed, it spun off important new technical ideas such as delimited continuations and hygienic macros while enabling research in compilers, semantics, partial evaluation, and other areas. Dozens of implementations support a wide variety of users and aspirations, exerting pressure on the processes used to specify Scheme.
Science of Computer Programming | 2006
William D. Clinger; Fabio V. Rojas
A program's distribution of object lifetimes is one of the factors that determines whether and how much it will benefit from generational garbage collection, and from what kind of generational collector. Linear combinations of radioactive decay models appear adequate for modelling object lifetimes in many programs, especially when the goal is to analyze the relative or theoretical performance of simple generational collectors. The boundary between models that favor younger-first generational collectors and models that favor older-first generational collectors is mathematically complex, even for highly idealized collectors. For linear combinations of radioactive decay models, non-generational collection is rarely competitive with idealized generational collection, even at that boundary.
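Unlike a single exponential, a linear combination of radioactive decay models is not memoryless: as objects age, survivors are increasingly likely to come from the long-lived component, so age does carry information. A small simulation (with arbitrary, hypothetical mixture weights and rates) illustrates why such mixtures can favor generational collection:

```python
# A mixture of two radioactive-decay (exponential) models. Rates and
# weights are hypothetical, chosen to mimic a heap where most objects
# die young but a minority are long-lived.
import random

random.seed(1)

def lifetime():
    # 90% short-lived (mean 1), 10% long-lived (mean 100).
    if random.random() < 0.9:
        return random.expovariate(1.0)
    return random.expovariate(0.01)

samples = [lifetime() for _ in range(200_000)]
mean_all = sum(samples) / len(samples)

# Remaining lifetimes of objects that survived to age 5: almost all of
# these are from the long-lived component.
old = [t - 5 for t in samples if t > 5]
mean_remaining = sum(old) / len(old)

print(round(mean_all, 1), round(mean_remaining, 1))
```

Here the survivors' expected remaining lifetime is several times the overall mean, so a collector that treats old objects differently from young ones has something real to exploit, consistent with the paper's finding that non-generational collection is rarely competitive under such mixtures.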
Dynamic Languages Symposium | 2011
Felix S. Klock; William D. Clinger
Regional garbage collection is scalable, with theoretical worst-case bounds for gc latency, MMU, and throughput that are independent of mutator behavior and the volume of reachable storage. Regional collection improves upon the worst-case pause times and MMU seen in most other general-purpose collectors, including garbage-first and concurrent mark/sweep collectors.
Communications of the ACM | 2015
William D. Clinger
and temporal locality. Heretofore, most of those proofs have dealt with imperative algorithms over arrays. To extend the technique to functional algorithms, we need to make assumptions about the locality of object allocation and garbage collection. Blelloch and Harper suggest we analyze the costs of functional algorithms by assuming that objects are allocated sequentially in cache memory, with each new object adjacent to the previously allocated object, and that garbage collection preserves this correspondence between allocation order and memory order. Their key abstraction defines a compact data structure as one for which there exists a fixed constant k, independent of the size of the structure, such that the data structure can be traversed with at most k cache misses per object. Using that abstraction and those two assumptions, the authors show how efficient cache-oblivious functional algorithms over lists and trees can be expressed and analyzed as easily as imperative algorithms over arrays. Not all storage allocators and garbage collectors satisfy the critical assumptions of this paper, but some do. In time, as cache-oblivious functional algorithms become more common, we can expect even more implementations to satisfy those assumptions (or to come close enough). After all, we computer scientists have a big advantage over the physicists: we can improve both the simplicity and the accuracy of our models by improving the reality we are modeling.