Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Christopher T. Haynes is active.

Publication


Featured research published by Christopher T. Haynes.


SIGPLAN Notices | 1998

Revised⁵ Report on the Algorithmic Language Scheme

Norman I. Adams; D. H. Bartley; G. Brooks; R. K. Dybvig; Daniel P. Friedman; R. Halstead; C. Hanson; Christopher T. Haynes; Eugene E. Kohlbecker; D. Oxley; K. M. Pitman; G. J. Rozas; Guy L. Steele; Gerald Jay Sussman; Mitchell Wand; H. Abelson

The report gives a defining description of the programming language Scheme. Scheme is a statically scoped and properly tail-recursive dialect of the Lisp programming language invented by Guy Lewis Steele, Jr. and Gerald Jay Sussman. It was designed to have an exceptionally clear and simple semantics and few different ways to form expressions. A wide variety of programming paradigms, including imperative, functional, and message passing styles, find convenient expression in Scheme.
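
As a small illustration of the properly tail-recursive evaluation the report requires (a sketch composed for this profile, not code from the report itself), the following named-let loop runs in constant space because the recursive call is in tail position:

    ;; Sum the integers 1..n iteratively; the call to `loop` is a tail
    ;; call, so a conforming Scheme system reuses the same stack frame.
    (define (sum-to n)
      (let loop ((i 1) (acc 0))
        (if (> i n)
            acc
            (loop (+ i 1) (+ acc i)))))

    (sum-to 1000000)   ; => 500000500000, without growing the stack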


ACM SIGPLAN Lisp Pointers | 1991

Revised⁴ Report on the Algorithmic Language Scheme

H. Abelson; R. K. Dybvig; Christopher T. Haynes; G. J. Rozas; Norman I. Adams; Daniel P. Friedman; Eugene E. Kohlbecker; Guy L. Steele; D. H. Bartley; R. Halstead; D. Oxley; Gerald Jay Sussman; G. Brooks; C. Hanson; K. M. Pitman; Mitchell Wand; William D. Clinger; Jonathan Rees

The report gives a defining description of the programming language Scheme. Scheme is a statically scoped and properly tail-recursive dialect of the Lisp programming language invented by Guy Lewis Steele Jr. and Gerald Jay Sussman. It was designed to have an exceptionally clear and simple semantics and few different ways to form expressions. A wide variety of programming paradigms, including imperative, functional, and message passing styles, find convenient expression in Scheme.
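
The "message passing" style mentioned in the abstract needs nothing beyond closures and assignment; the sketch below (with illustrative names such as make-counter, not drawn from the report) dispatches on a message symbol:

    ;; A counter "object": a closure holding private state and
    ;; responding to the messages inc and value.
    (define (make-counter)
      (let ((count 0))
        (lambda (msg)
          (case msg
            ((inc)   (set! count (+ count 1)) count)
            ((value) count)
            (else    'unknown-message)))))

    (define c (make-counter))
    (c 'inc)     ; => 1
    (c 'inc)     ; => 2
    (c 'value)   ; => 2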


Higher-Order and Symbolic Computation / Lisp and Symbolic Computation | 1998

Revised Report on the Algorithmic Language Scheme

Harold Abelson; R. K. Dybvig; Christopher T. Haynes; G. J. Rozas; Norman I. Adams; Daniel P. Friedman; Eugene E. Kohlbecker; Guy L. Steele; D. H. Bartley; R. Halstead; D. Oxley; Gerald Jay Sussman; G. Brooks; C. Hanson; K. M. Pitman; Mitchell Wand

The report gives a defining description of the programming language Scheme. Scheme is a statically scoped and properly tail-recursive dialect of the Lisp programming language invented by Guy Lewis Steele, Jr. and Gerald Jay Sussman. It was designed to have an exceptionally clear and simple semantics and few different ways to form expressions. A wide variety of programming paradigms, including imperative, functional, and message passing styles, find convenient expression in Scheme. The introduction offers a brief history of the language and of the report. The first three chapters present the fundamental ideas of the language and describe the notational conventions used for describing the language and for writing programs in the language. Sections 5 and 6 describe the syntax and semantics of expressions, programs, and definitions. Section 7 describes Scheme's built-in procedures, which include all of the language's data manipulation and input/output primitives. Section 8 provides a formal syntax for Scheme written in extended BNF, along with a formal denotational semantics. An example of the use of the language follows the formal syntax and semantics. The report concludes with a list of references and an alphabetic index and is followed by a short list of clarifications and corrections.


Computer Languages | 1986

Obtaining coroutines with continuations

Christopher T. Haynes; Daniel P. Friedman; Mitchell Wand

Continuations, when made available to the programmer as first class objects, provide a general control abstraction for sequential computation. The power of first class continuations is demonstrated by implementing a variety of coroutine mechanisms using only continuations and functional abstraction. The importance of general abstraction mechanisms such as continuations is discussed.
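
In the spirit of the paper (though not taken from it), here is one such coroutine mechanism: a generator, that is, a coroutine specialized to producing values on demand, built from call-with-current-continuation and closures alone. The names make-list-generator, resume-point, and return are illustrative.

    ;; Yield the elements of a list one at a time.  Two continuations
    ;; alternate: `return` resumes the consumer, `resume-point` resumes
    ;; the suspended producer.
    (define (make-list-generator lst)
      (define return #f)
      (define (resume-point ignored)
        (for-each
         (lambda (x)
           (call-with-current-continuation
            (lambda (k)
              (set! resume-point k)   ; remember where to keep producing
              (return x))))           ; hand x back to the consumer
         lst)
        (return 'done))               ; list exhausted
      (lambda ()
        (call-with-current-continuation
         (lambda (k)
           (set! return k)            ; remember where the consumer waits
           (resume-point #f)))))

    (define next (make-list-generator '(a b c)))
    (next)   ; => a
    (next)   ; => b
    (next)   ; => c
    (next)   ; => done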


International Conference on Functional Programming | 1984

Continuations and coroutines

Christopher T. Haynes; Daniel P. Friedman; Mitchell Wand

The power of first class continuations is demonstrated by implementing a variety of coroutine mechanisms using only continuations and functional abstraction. The importance of general abstraction mechanisms such as continuations is discussed.


International Conference on Functional Programming | 1984

Engines build process abstractions

Christopher T. Haynes; Daniel P. Friedman

Engines are a new programming language abstraction for timed preemption. In conjunction with first class continuations, engines allow the language to be extended with a time-sharing implementation of process abstraction facilities. To illustrate engine programming techniques, we implement a round-robin process scheduler. The importance of simple but powerful primitives such as engines is discussed.
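
The following sketch conveys the engine idea under a simplification: instead of genuine timed preemption, the running computation must call a tick procedure explicitly, and each call consumes one unit of fuel. Only one engine runs at a time (no nesting), and all names here (make-engine, tick, round-robin, count-down) are illustrative rather than the paper's interface.

    (define fuel 0)           ; ticks left for the engine currently running
    (define do-expire #f)     ; called by tick when the fuel runs out
    (define do-complete #f)   ; called when the engine's computation finishes

    (define (tick)
      (set! fuel (- fuel 1))
      (if (<= fuel 0)
          ;; capture the rest of the computation and report expiration
          (call-with-current-continuation
           (lambda (resume) (do-expire resume)))))

    ;; Build an engine from a `run` thunk that eventually reports its
    ;; answer through do-complete.
    (define (engine-from run)
      (lambda (ticks success failure)
        ((call-with-current-continuation
          (lambda (escape)
            (set! do-complete
                  (lambda (value) (escape (lambda () (success value)))))
            (set! do-expire
                  (lambda (resume)
                    (escape
                     (lambda ()
                       (failure (engine-from (lambda () (resume #f))))))))
            (set! fuel ticks)
            (run))))))

    (define (make-engine thunk)
      (engine-from
       (lambda ()
         (let ((value (thunk)))   ; run first, then look up do-complete
           (do-complete value)))))

    ;; A round-robin scheduler: give each engine three ticks per turn.
    (define (round-robin engines)
      (if (null? engines)
          '()
          ((car engines)
           3
           (lambda (value) (cons value (round-robin (cdr engines))))
           (lambda (e) (round-robin (append (cdr engines) (list e)))))))

    ;; Example computation: count i down from n, ticking once per step.
    (define (count-down n)
      (lambda ()
        (let loop ((i n))
          (tick)
          (if (zero? i) n (loop (- i 1))))))

    (round-robin (list (make-engine (count-down 4))
                       (make-engine (count-down 7))))
    ;; => (4 7): the shorter computation finishes first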


ACM Transactions on Programming Languages and Systems | 1987

Embedding continuations in procedural objects

Christopher T. Haynes; Daniel P. Friedman

Continuations, when available as first-class objects, provide a general control abstraction in programming languages. They liberate the programmer from specific control structures, increasing programming language extensibility. Such continuations may be extended by embedding them in procedural objects. This technique is first used to restore a fluid environment when a continuation object is invoked. We then consider techniques for constraining the power of continuations in the interest of security and efficiency. Domain mechanisms, which create dynamic barriers for enclosing control, are implemented using fluids. Domains are then used to implement an unwind-protect facility in the presence of first-class continuations. Finally, we present two mechanisms, wind-unwind and dynamic-wind, that generalize unwind-protect.
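
A small sketch of the fluid-restoration idea using dynamic-wind as it appears in standard Scheme (the variable current-prefix and procedure with-prefix are illustrative): the entry and exit thunks re-establish and restore a dynamically scoped value no matter how control crosses the boundary.

    ;; A fluid (dynamically bound) value simulated with a global cell.
    (define current-prefix "outer")

    (define (with-prefix prefix thunk)
      (let ((saved #f))
        (dynamic-wind
          (lambda ()                          ; runs on every entry
            (set! saved current-prefix)
            (set! current-prefix prefix))
          thunk
          (lambda ()                          ; runs on every exit
            (set! current-prefix saved)))))

    ;; Leaving the body abruptly through a continuation still runs the
    ;; exit thunk, so the outer value is restored.
    (call-with-current-continuation
     (lambda (escape)
       (with-prefix "inner"
         (lambda ()
           (display current-prefix) (newline)  ; prints inner
           (escape 'done)))))
    (display current-prefix) (newline)          ; prints outer again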


Symposium on Principles of Programming Languages | 1985

Constraining control

Daniel P. Friedman; Christopher T. Haynes

Continuations, when available as first-class objects, provide a general control abstraction in programming languages. They liberate the programmer from specific control structures, increasing programming language extensibility. Such continuations may be extended by embedding them in functional objects. This technique is first used to restore a fluid environment when a continuation object is invoked. We then consider techniques for constraining the power of continuations in the interest of security and efficiency. Domain mechanisms, which create dynamic barriers for enclosing control, are implemented using fluids. Domains are then used to implement an unwind-protect facility in the presence of first-class continuations. Finally, we demonstrate two mechanisms, wind-unwind and dynamic-wind, that generalize unwind-protect.
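
For comparison, an unwind-protect can be sketched directly on top of standard dynamic-wind. Note that, unlike a true unwind-protect, the cleanup below runs on every exit from the body, including exits through captured continuations, which is exactly the kind of distinction the wind-unwind and dynamic-wind mechanisms are meant to control. The procedure name unwind-protect and the file name "data.txt" are illustrative.

    ;; A minimal unwind-protect: cleanup runs whenever control leaves body.
    (define (unwind-protect body cleanup)
      (dynamic-wind
        (lambda () #f)   ; nothing special on entry
        body             ; the protected computation, a thunk
        cleanup))        ; the cleanup, also a thunk

    ;; The port is closed even if we escape from the body early.
    (call-with-current-continuation
     (lambda (escape)
       (let ((port (open-input-file "data.txt")))   ; hypothetical file
         (unwind-protect
          (lambda ()
            (let ((datum (read port)))
              (if (eof-object? datum) (escape 'empty) datum)))
          (lambda () (close-input-port port))))))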


Archive | 1984

Programming with Continuations

Daniel P. Friedman; Christopher T. Haynes; Eugene E. Kohlbecker

Progress in programming language design has often been achieved by making an abstraction a “first class object”, one that can be passed to and returned from procedures and entered in data structures. For example, the importance of functional parameters has long been recognized, though it is only more recently that actor semantics [2] and object oriented programming have demonstrated the power of first class functional objects. This paper illustrates, with a novel example, the power of first class control objects, called continuations.
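
A minimal illustration of continuations as first-class values (an example composed for this profile, not the paper's own novel example): the escape continuation is bound to an ordinary variable and invoked to abandon the pending multiplications as soon as a zero is found.

    ;; Multiply a list of numbers, exiting immediately on a zero.
    (define (product lst)
      (call-with-current-continuation
       (lambda (return)                        ; a first-class continuation
         (let loop ((l lst))
           (cond ((null? l) 1)
                 ((zero? (car l)) (return 0))  ; jump out of the whole loop
                 (else (* (car l) (loop (cdr l)))))))))

    (product '(1 2 3 4))    ; => 24
    (product '(1 2 0 4 5))  ; => 0, the remaining elements are never touched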


Journal of Logic Programming | 1987

Logic continuations

Christopher T. Haynes

There is a striking analogy between type raising, as introduced by Montague (1973), and the notion of continuation that has been developed in programming language theory in order to give compositional semantics to control operators (Strachey and Wadsworth, 1974). In fact, this analogy is such that it is possible to see Montague’s semantics as a continuation-based semantics. On the other hand, the notion of continuation allows classical logic to be given a Curry-Howard interpretation (Griffin, 1990). In particular, the double negation law ((A → ⊥) → ⊥) → A is provided with a computational content, which may be used to give a type-logical interpretation of type lowering. Putting the pieces of the picture together, it is possible to use “classical extensions” of the λ-calculus in order to express the semantic components of the lexical entries of Morrill’s (1994) type logical grammars. This solution offers the advantage of not burdening the syntax by enforcing type raising to the worst case.

1. Type raising and continuations. Montague (1973) introduced type raising as a way of providing a compositional semantics to constructs that may give rise to scope ambiguities. Such constructs (typically, quantifiers) have semantic scopes that may be wider than their apparent syntactic scopes. Around the same time, computer scientists were trying to provide a compositional semantics to full jumps (i.e., ‘goto’ statements), which led to the discovery of continuations (Strachey and Wadsworth, 1974). Both problems are similar, and both solutions present striking similarities. Montague’s type raising is based on Leibniz’s principle, which consists of identifying an entity with the set of its properties. Consequently, the type of entities e is replaced by (e → t) → t, where t is the type of propositions. In programming language theory, a continuation semantics (as opposed to a direct semantics) consists in providing the semantic function with the continuation of the program as an explicit parameter. Let P be a program, let ⟦−⟧ be the semantic function, and let s be some initial state. If we consider programs as state transformers, a direct semantics is such that ⟦P⟧ s ∈ State. On the other hand, a continuation semantics gives ⟦P⟧ s ∈ (State → State) → State. In fact, in both cases (type raising and continuation semantics), a type A is replaced by a type (A → O) → O, where O is the type of observable entities or facts.

2. Negative translations and classical logic. In the realm of the λ-calculus, the notion of continuation gave rise to the so-called CPS transformations (Plotkin, 1975). These are continuation-based syntactic transformations of λ-terms that allow given evaluation strategies (typically, call-by-name or call-by-value) to be simulated. For instance, Plotkin’s call-by-value CPS transformation is as follows: ⟦c⟧ = λk. k c; …
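
To make the CPS idea concrete in Scheme terms (a sketch accompanying the abstract above, not material from the paper), compare a direct-style function with its continuation-passing counterpart, in which every result is handed to an explicit continuation k:

    ;; Direct style: the result is returned to an implicit caller.
    (define (fact n)
      (if (zero? n) 1 (* n (fact (- n 1)))))

    ;; Continuation-passing style: the caller supplies k, the procedure
    ;; that consumes the result; pending work lives in the continuation.
    (define (fact-cps n k)
      (if (zero? n)
          (k 1)
          (fact-cps (- n 1)
                    (lambda (result) (k (* n result))))))

    (fact 5)                       ; => 120
    (fact-cps 5 (lambda (x) x))    ; => 120, via the identity continuation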

Collaboration


Dive into Christopher T. Haynes's collaboration.

Top Co-Authors

Daniel P. Friedman

Indiana University Bloomington

Roy Rada

University of Maryland

Venkatesh Choppella

International Institute of Information Technology
