David Sabel
Goethe University Frankfurt
Publications
Featured research published by David Sabel.
Mathematical Structures in Computer Science | 2008
David Sabel; Manfred Schmidt-Schauß
We present a higher-order call-by-need lambda calculus enriched with constructors, case expressions, recursive letrec expressions, a seq operator for sequential evaluation and a non-deterministic operator amb that is locally bottom-avoiding. We use a small-step operational semantics in the form of a single-step rewriting system that defines a (non-deterministic) normal-order reduction. This strategy can be made fair by adding resources for book-keeping. As equational theory, we use contextual equivalence (that is, terms are equal if, when plugged into any program context, their termination behaviour is the same), in which we use a combination of may- and must-convergence, which is appropriate for non-deterministic computations. We show that we can drop the fairness condition for equational reasoning, since the valid equations with respect to normal-order reduction are the same as for fair normal-order reduction. We develop a number of proof tools for proving correctness of program transformations. In particular, we prove a context lemma for both may- and must-convergence that restricts the number of contexts that need to be examined for proving contextual equivalence. Combining this with so-called complete sets of commuting and forking diagrams, we show that all the deterministic reduction rules and some additional transformations preserve contextual equivalence. We also prove a standardisation theorem for fair normal-order reduction. The structure of the ordering ≤c is also analysed: we show that Ω is not a least element and that ≤c already implies contextual equivalence with respect to may-convergence.
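As a concrete illustration of the seq operator's effect on convergence, consider the following small Haskell sketch (the calculus itself is untyped and includes amb, which has no direct Haskell counterpart; the names here are illustrative only):

-- 'seq' forces its first argument to weak head normal form before
-- returning its second, so it can turn convergence into divergence.
convergent :: Int
convergent = const 1 undefined   -- 'undefined' is never demanded: evaluates to 1

divergent :: Int
divergent = undefined `seq` 1    -- 'seq' demands 'undefined': diverges

main :: IO ()
main = print convergent          -- prints 1; printing 'divergent' would fail

This is exactly why adding seq changes which program transformations are correct: any transformation that alters whether a subexpression is demanded can change the termination behaviour.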
Journal of Functional Programming | 2008
Manfred Schmidt-Schauß; David Sabel; Marko Schütz
This paper proves the correctness of Nöcker's method of strictness analysis, implemented in the Clean compiler, which is an effective method for strictness analysis in lazy functional languages based on their operational semantics. We improve upon the work of Clark, Hankin and Hunt on the correctness of the abstract reduction rules in two respects. Our correctness proof is based on a functional core language and a contextual semantics, thus proving a wider range of strictness-based optimizations correct, and our method fully considers the cycle detection rules, which contribute to the strength of Nöcker's strictness analysis. Our algorithm SAL is a reformulation of Nöcker's strictness analysis algorithm in a functional core language LR. This is a higher-order call-by-need lambda calculus with case, constructors, letrec, and seq, which is extended during strictness analysis by set constants like Top or Inf, denoting sets of expressions that indicate different evaluation demands. It is also possible to define new set constants by recursive equations with a greatest fixpoint semantics. The operational semantics of LR is a small-step semantics, and equality of expressions is defined by a contextual semantics that observes termination. Basically, SAL is a nontermination checker. The proof of its correctness, and hence of Nöcker's strictness analysis, is based mainly on an exact analysis of the lengths of evaluations, i.e., of normal-order reduction sequences to WHNF; the main measure is the number of "essential" reductions in an evaluation. Our tools and results provide new insights into call-by-need lambda calculi, the role of sharing in functional programming languages, and strictness analysis in general. The correctness result provides a foundation for Nöcker's strictness analysis in Clean, and also for its use in Haskell.
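To illustrate what such an analysis establishes, here is a small Haskell sketch (sumAcc is a hypothetical example, not taken from the paper):

-- Every evaluation of 'sumAcc acc xs' to WHNF eventually demands 'acc',
-- so the function is strict in 'acc'. A compiler armed with this
-- information may compute 'acc + x' call-by-value, avoiding a long
-- chain of thunks, without changing which programs terminate.
sumAcc :: Int -> [Int] -> Int
sumAcc acc []     = acc
sumAcc acc (x:xs) = sumAcc (acc + x) xs

main :: IO ()
main = print (sumAcc 0 [1 .. 1000000])

The correctness question addressed by the paper is precisely that such eager evaluation of detected-strict arguments preserves the contextual semantics.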
Principles and Practice of Declarative Programming | 2011
David Sabel; Manfred Schmidt-Schauß
In this paper we analyze the semantics of a higher-order functional language with concurrent threads, monadic IO and synchronizing variables as in Concurrent Haskell. To ensure declarativeness of concurrent programming we extend the language by implicit, monadic, and concurrent futures. As semantic model we introduce and analyze the process calculus CHF, which represents a typed core language of Concurrent Haskell extended by concurrent futures. Evaluation in CHF is defined by a small-step reduction relation. Using contextual equivalence based on may- and should-convergence as program equivalence, we show that various transformations preserve program equivalence. We establish a context lemma easing those correctness proofs. An important result is that call-by-need and call-by-name evaluation are equivalent in CHF, since they induce the same program equivalence. Finally we show that the monad laws hold in CHF under mild restrictions on Haskell's seq operator, which for instance justifies the use of the do-notation.
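The implicit futures studied in CHF have no direct counterpart in plain Concurrent Haskell, but their explicit analogue can be sketched with forkIO and an MVar (a minimal sketch; 'future' is our illustrative name, not an API from the paper):

import Control.Concurrent (forkIO)
import Control.Concurrent.MVar (newEmptyMVar, putMVar, readMVar)

-- A minimal explicit future: run the computation in its own thread and
-- return an action that blocks until the result has been written.
future :: IO a -> IO (IO a)
future act = do
  v <- newEmptyMVar
  _ <- forkIO (act >>= putMVar v)
  return (readMVar v)

main :: IO ()
main = do
  getAnswer <- future (return (sum [1 .. 1000000 :: Int]))
  answer <- getAnswer        -- synchronizes on the MVar
  print answer

The MVar plays the role of the synchronizing variable mentioned in the abstract: readers block until the computing thread has delivered its result.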
Electronic Notes in Theoretical Computer Science | 2007
Joachim Niehren; David Sabel; Manfred Schmidt-Schauß; Jan Schwinghammer
We present an observational semantics for λ(fut), a concurrent λ-calculus with reference cells and futures. The calculus λ(fut) models the operational semantics of the concurrent higher-order programming language Alice ML. Our result is a powerful notion of equivalence that is the coarsest nontrivial congruence distinguishing observably different processes. It justifies a maximal set of correct program transformations, and it includes all of λ(fut)'s deterministic reduction rules, in particular call-by-value β-reduction.
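Among the deterministic rules validated by this equivalence is ordinary call-by-value β-reduction, which in standard notation reads (a schematic statement in LaTeX):

(\lambda x.\, e)\ v \;\to\; e[v/x] \qquad \text{where } v \text{ is a value}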
Theoretical Computer Science | 2010
Manfred Schmidt-Schauß; David Sabel
This paper proves several generic variants of context lemmas and thus contributes to improving the tools for observational semantics of deterministic and non-deterministic higher-order calculi that use a small-step reduction semantics. The generic (sharing) context lemmas are provided for may-convergence as well as for two variants of must-convergence, and they hold in a broad class of extended process calculi and extended lambda calculi, provided the calculi satisfy certain natural conditions. As a guideline, the proofs of the context lemmas are valid in call-by-need calculi, in call-by-value calculi if substitution is restricted to variable-by-variable substitution, and in process calculi like variants of the π-calculus. For calculi employing beta-reduction with a call-by-name or call-by-value strategy or similar reduction rules, some iu-variants of ciu-theorems are obtained from our context lemmas. Our results reestablish several context lemmas already proved in the literature, and also provide some new context lemmas as well as some new variants of the ciu-theorem. To make the results widely applicable, we use a higher-order abstract syntax that allows untyped calculi as well as certain simple typing schemes. The approach may lead to a unifying view of higher-order calculi, reduction, and observational equality.
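In its simplest (may-convergence) form, a context lemma of the kind proved here says that quantification over all contexts C can be replaced by quantification over the much more restricted reduction contexts R (a schematic statement in LaTeX, with ↓ denoting may-convergence):

\bigl(\forall R:\ R[s]\downarrow \,\Rightarrow\, R[t]\downarrow\bigr) \;\Longrightarrow\; \bigl(\forall C:\ C[s]\downarrow \,\Rightarrow\, C[t]\downarrow\bigr)

Since reduction contexts are far fewer than arbitrary contexts, this considerably shortens proofs of contextual approximation.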
Logic in Computer Science | 2012
David Sabel; Manfred Schmidt-Schauß
The calculus CHF models Concurrent Haskell extended by concurrent, implicit futures. It is a lambda and process calculus with concurrent threads and monadic concurrent evaluation, and it includes a pure functional lambda calculus PF which comprises data constructors, case-expressions, letrec-expressions, and Haskell's seq. Our main result is conservativity of CHF as an extension of PF. This allows us to argue that compiler optimizations and transformations from pure Haskell remain valid in Concurrent Haskell even if it is extended by futures. We also show that conservativity no longer holds if the extension includes Concurrent Haskell and unsafeInterleaveIO.
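The loss of conservativity in the presence of unsafeInterleaveIO can be made concrete: deferring an effect until its result is demanded makes evaluation order observable. A small Haskell sketch (not the paper's counterexample; note that seq alone does not formally fix an evaluation order, but GHC evaluates its first argument first in practice):

import System.IO.Unsafe (unsafeInterleaveIO)

-- In the pure language, x `seq` (y `seq` e) and y `seq` (x `seq` e) are
-- contextually equivalent. With unsafeInterleaveIO the demand order
-- becomes observable, so this pure transformation is no longer correct:
main :: IO ()
main = do
  x <- unsafeInterleaveIO (putStr "A" >> return ())
  y <- unsafeInterleaveIO (putStr "B" >> return ())
  x `seq` (y `seq` putStrLn "")   -- prints "AB"; the swapped version prints "BA"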
Rewriting Techniques and Applications | 2010
Manfred Schmidt-Schauß; David Sabel; Elena Machkasova
This paper shows the equivalence of applicative similarity and contextual approximation, and hence also of bisimilarity and contextual equivalence, in the deterministic call-by-need lambda calculus with letrec. Bisimilarity simplifies equivalence proofs in the calculus and opens a way for more convenient correctness proofs for program transformations. Although this property may be a natural one to expect, to the best of our knowledge this paper is the first to provide a proof. The proof technique is to transfer contextual approximation into Abramsky's lazy lambda calculus by a fully abstract and surjective translation. This also shows that the natural embedding of Abramsky's lazy lambda calculus into the call-by-need lambda calculus with letrec is an isomorphism between the respective term models. We show that the equivalence property proven in this paper transfers to a call-by-need letrec calculus developed by Ariola and Felleisen.
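Applicative similarity ⪯ is, roughly, the largest relation satisfying the following condition (a schematic LaTeX version for the pure lazy lambda calculus; the letrec calculus requires the refinements developed in the paper):

s \preceq t \;:\Longleftrightarrow\; \bigl(s \Downarrow \lambda x.\,s' \;\Rightarrow\; \exists t'.\; t \Downarrow \lambda x.\,t' \;\wedge\; \forall \text{closed } r:\ s'[r/x] \preceq t'[r/x]\bigr)

Its appeal is that only evaluation to abstractions and application to closed arguments must be checked, rather than behaviour in all program contexts.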
IFIP International Conference on Theoretical Computer Science | 2008
Manfred Schmidt-Schauß; Joachim Niehren; Jan Schwinghammer; David Sabel
We investigate methods and tools for analysing translations between programming languages with respect to observational semantics. The behaviour of programs is observed in terms of may- and must-convergence in arbitrary contexts, and adequacy of translations, i.e., the reflection of program equivalence, is taken to be the fundamental correctness condition. For compositional translations we propose a notion of convergence equivalence as a means for proving adequacy. This technique avoids explicit reasoning about contexts, and is able to deal with the subtle role of typing in implementations of language extensions.
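Writing ∼ and ∼′ for program equivalence in the source and target language, adequacy of a translation τ can be stated schematically in LaTeX as:

\tau \text{ is adequate} \;:\Longleftrightarrow\; \forall s,t:\ \tau(s) \sim' \tau(t) \;\Rightarrow\; s \sim t

Full abstraction additionally requires the converse implication; adequacy alone already guarantees that equivalences proved after translation can be transferred back to the source language.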
Principles and Practice of Declarative Programming | 2015
Manfred Schmidt-Schauß; David Sabel
An improvement is a correct program transformation that optimizes the program, where the criterion is that the number of computation steps until a value is obtained is not increased in any context. This paper investigates improvements in an untyped call-by-need lambda calculus with letrec, case, constructors and seq. Besides showing that several local optimizations are improvements, the main result of the paper is a proof that common subexpression elimination is correct and an improvement, which proves a conjecture and thus closes a gap in the improvement theory of Moran and Sands. We also prove that several different length measures used for improvement in the call-by-need calculus of Moran and Sands and in our calculus are equivalent.
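Common subexpression elimination in a hedged Haskell rendering ('expensive' is an illustrative name; the paper works in an untyped calculus and counts normal-order reduction steps):

-- Before CSE: under call-by-need the two occurrences of 'expensive 1000'
-- are separate expressions and are evaluated separately (at least
-- without further compiler optimisations).
before :: Int
before = expensive 1000 + expensive 1000

-- After CSE: a single shared binding; the thunk is evaluated at most
-- once, so the step count never increases in any context.
after :: Int
after = let x = expensive 1000 in x + x

-- An illustrative placeholder for a costly computation.
expensive :: Int -> Int
expensive n = sum [1 .. n]

main :: IO ()
main = print (before == after)   -- True: the transformation preserves the result

The improvement result says precisely that replacing 'before' by 'after' is safe with respect to both meaning and the number of evaluation steps, in every program context.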
Logical Methods in Computer Science | 2015
Manfred Schmidt-Schauß; David Sabel; Elena Machkasova
This paper shows the equivalence of several versions of applicative similarity and contextual approximation, and hence also of applicative bisimilarity and contextual equivalence, in LR, the deterministic call-by-need lambda calculus with letrec extended by data constructors, case-expressions and Haskell's seq operator. LR models an untyped version of the core language of Haskell. The use of bisimilarities simplifies equivalence proofs in calculi and opens a way for more convenient correctness proofs for program transformations. The proof is by a fully abstract and surjective transfer into a call-by-name calculus, which is an extension of Abramsky's lazy lambda calculus. In the latter calculus, equivalence of our similarities and contextual approximation can be shown by Howe's method. Similarity is transferred back to LR on the basis of an inductively defined similarity. The translation from the call-by-need letrec calculus into the extended call-by-name lambda calculus is the composition of two translations. The first translation replaces the call-by-need strategy by a call-by-name strategy; its correctness is shown by exploiting the infinite trees which emerge by unfolding the letrec expressions. The second translation encodes letrec-expressions using multi-fixpoint combinators; its correctness is shown syntactically by comparing reductions of both calculi. A further result of this paper is an isomorphism between the mentioned calculi, which is also an identity on letrec-free expressions.
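For a single binding, encoding letrec with a fixpoint combinator follows the classical pattern (a schematic LaTeX sketch; the paper's translation uses a family of multi-fixpoint combinators to handle mutually recursive bindings):

\mathrm{letrec}\ x = e\ \mathrm{in}\ e' \;\;\mapsto\;\; (\lambda x.\,e')\,\bigl(Y\,(\lambda x.\,e)\bigr) \qquad \text{with} \qquad Y = \lambda f.\,(\lambda x.\,f\,(x\,x))\,(\lambda x.\,f\,(x\,x))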