Publication


Featured research published by Joshua Dunfield.


International Joint Conference on Automated Reasoning | 2010

Beluga: a framework for programming and reasoning with deductive systems (system description)

Brigitte Pientka; Joshua Dunfield

Beluga is an environment for programming and reasoning about formal systems given by axioms and inference rules. It implements the logical framework LF for specifying and prototyping formal systems via higher-order abstract syntax. It also supports reasoning: the user implements inductive proofs about formal systems as dependently typed recursive functions. A distinctive feature of Beluga is that it not only represents binders using higher-order abstract syntax, but directly supports reasoning with contexts. Contextual objects represent hypothetical and parametric derivations, leading to compact and elegant proofs. Our test suite includes standard examples such as the Church-Rosser theorem, type uniqueness, proofs about compiler transformations, and preservation and progress for various ML-like languages. We also implemented proofs of structural properties of expressions and paths in expressions. Stating these properties requires nesting of quantifiers and implications, demonstrating the expressive power of Beluga.
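
The following OCaml fragment is a minimal sketch of the higher-order abstract syntax idea that Beluga builds on; it is illustrative only, uses OCaml rather than Beluga/LF syntax, and all names are made up for the example.

    (* Higher-order abstract syntax: object-level binders are represented by
       host-language functions, so substitution is inherited from the host. *)
    type tm =
      | Lam of (tm -> tm)      (* the binder is an OCaml function *)
      | App of tm * tm

    (* Beta reduction at the root: substitution is just function application. *)
    let beta = function
      | App (Lam f, arg) -> f arg
      | t -> t

    (* Example: applying the identity function to itself. *)
    let id = Lam (fun x -> x)
    let _ = beta (App (id, id))

Beluga goes further by letting such representations depend on explicit contexts of assumptions, which plain OCaml datatypes cannot express directly.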


Symposium on Principles of Programming Languages | 2004

Tridirectional typechecking

Joshua Dunfield; Frank Pfenning

In prior work we introduced a pure type assignment system that encompasses a rich set of property types, including intersections, unions, and universally and existentially quantified dependent types. This system was shown sound with respect to a call-by-value operational semantics with effects, yet is inherently undecidable. In this paper we provide a decidable formulation for this system based on bidirectional checking, combining type synthesis and analysis following logical principles. The presence of unions and existential quantification requires the additional ability to visit subterms in evaluation position before the context in which they occur, leading to a tridirectional type system. While soundness with respect to the type assignment system is immediate, completeness requires the novel concept of contextual type annotations, introducing a notion from the study of principal typings into the source program.
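
As a rough illustration of the bidirectional (synthesis/analysis) split that the tridirectional system extends, here is a minimal OCaml checker for a simply typed calculus; it omits intersections, unions, and the third, evaluation-position direction, and all names are illustrative.

    type ty = Unit | Arrow of ty * ty
    type tm =
      | Var of string
      | Lam of string * tm
      | App of tm * tm
      | UnitV
      | Anno of tm * ty                (* annotation switches checking to synthesis *)

    type ctx = (string * ty) list

    (* synth: the term determines its type; check: the type is given in advance. *)
    let rec synth (g : ctx) : tm -> ty option = function
      | Var x -> List.assoc_opt x g
      | UnitV -> Some Unit
      | Anno (e, a) -> if check g e a then Some a else None
      | App (e1, e2) ->
          (match synth g e1 with
           | Some (Arrow (a, b)) when check g e2 a -> Some b
           | _ -> None)
      | Lam _ -> None                  (* lambdas are checked, not synthesized *)

    and check (g : ctx) (e : tm) (a : ty) : bool =
      match e, a with
      | Lam (x, body), Arrow (a1, a2) -> check ((x, a1) :: g) body a2
      | _ -> synth g e = Some a        (* fall back to synthesis plus comparison *)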


Principles and Practice of Declarative Programming | 2008

Programming with proofs and explicit contexts

Brigitte Pientka; Joshua Dunfield

This paper explores a new point in the design space of functional programming: functional programming with dependently-typed higher-order data structures described in the logical framework LF. This allows us to program with proofs as higher-order data. We present a decidable bidirectional type system that distinguishes between dependently-typed data and computations. To support reasoning about open data, our foundation makes contexts explicit. This provides us with a concise characterization of open data, which is crucial to elegantly describe proofs. In addition, we present an operational semantics for this language based on higher-order pattern matching for dependently typed objects. Based on this development, we prove progress and preservation.
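
A rough OCaml analogue of the "explicit contexts" idea (not Beluga syntax; the type names are invented for this sketch): open data is never handled bare, but is always packaged with the context of assumptions it may mention.

    type ty = Base | Arrow of ty * ty
    type tm = Var of string | Lam of string * ty * tm | App of tm * tm

    type ctx = (string * ty) list

    (* A contextual object: a possibly open term together with its context. *)
    type contextual = { ctx : ctx; body : tm }

    (* The open term  x y  in the context  x : Base -> Base, y : Base. *)
    let example =
      { ctx = [ ("x", Arrow (Base, Base)); ("y", Base) ];
        body = App (Var "x", Var "y") }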


Foundations of Software Science and Computation Structures | 2003

Type assignment for intersections and unions in call-by-value languages

Joshua Dunfield; Frank Pfenning

We develop a system of type assignment with intersection types, union types, indexed types, and universal and existential dependent types that is sound in a call-by-value functional language. The combination of logical and computational principles underlying our formulation naturally leads to the central idea of type-checking subterms in evaluation order. We thereby provide a uniform generalization and explanation of several earlier isolated systems. The proof of progress and type preservation, usually formulated for closed terms only, relies on a notion of definite substitution.


Programming Language Design and Implementation | 2012

Type-directed automatic incrementalization

Yan Chen; Joshua Dunfield; Umut A. Acar

Application data often changes slowly or incrementally over time. Since incremental changes to input often result in only small changes in output, it is often feasible to respond to such changes asymptotically more efficiently than by re-running the whole computation. Traditionally, realizing such asymptotic efficiency improvements requires designing problem-specific algorithms known as dynamic or incremental algorithms, which are often significantly more complicated than conventional algorithms to design, analyze, implement, and use. A long-standing open problem is to develop techniques that automatically transform conventional programs so that they correctly and efficiently respond to incremental changes. In this paper, we describe a significant step towards solving the problem of automatic incrementalization: a programming language and a compiler that can, given a few type annotations describing what can change over time, compile a conventional program that assumes its data to be static (unchanging over time) to an incremental program. Based on recent advances in self-adjusting computation, including a theoretical proposal for translating purely functional programs to self-adjusting programs, we develop techniques for translating conventional Standard ML programs to self-adjusting programs. By extending the Standard ML language, we design a fully featured programming language with higher-order features, a module system, and a powerful type system, and implement a compiler for this language. The resulting programming language, LML, enables translating conventional programs decorated with simple type annotations into incremental programs that can respond to changes in their data correctly and efficiently. We evaluate the effectiveness of our approach by considering a range of benchmarks involving lists, vectors, and matrices, as well as a ray tracer. For these benchmarks, our compiler incrementalizes existing code with only trivial amounts of annotation. The resulting programs are often asymptotically more efficient, leading to orders of magnitude speedups in practice.
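
The sketch below shows the intended workflow in conventional ML-style code: the function itself stays as ordinarily written, and only the input type would carry a "changeable" annotation. The annotation notation in the comment is hypothetical, not actual LML syntax.

    (* Conventional code, written with no incremental machinery. *)
    (* Hypothetical annotated signature:  val sum : (int list)$C -> int
       where $C marks input data that may change between runs.        *)
    let rec sum = function
      | [] -> 0
      | x :: xs -> x + sum xs

    (* After compilation, the incremental version would reuse prior work when a
       single element changes rather than recomputing the whole sum. *)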


Programming Languages meets Program Verification | 2007

Refined typechecking with Stardust

Joshua Dunfield

We present Stardust, an implementation of a type system for a subset of ML with type refinements, intersection types, and union types, enabling programmers to legibly specify certain classes of program invariants that are verified at compile time. This is the first implementation of unrestricted intersection and union types in a mainstream functional programming setting, as well as the first implementation of a system with both datasort and index refinements. The system, with the assistance of external constraint solvers, supports integer, Boolean, and dimensional index refinements; we apply both value refinements (to check red-black tree invariants) and invaluable refinements (to check dimensional consistency). While typechecking with intersection and union types is intrinsically complex, our experience so far suggests that it can be practical in many instances.
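
As a schematic illustration of the datasort part of such refinements (the refinement declaration below is not Stardust's actual annotation syntax), one can carve the plain type of red-black trees into subsorts so that a red node with a red-rooted child is not even well-sorted; the black-height part of the invariant would use an integer index refinement instead.

    (* Plain OCaml datatype of red-black trees. *)
    type color = Red | Black
    type tree = Leaf | Node of color * tree * int * tree

    (* Schematic datasort refinement of [tree] (illustrative notation only):
         datasort rbt        = Leaf | Node of Black * rbt * int * rbt
                                    | Node of Red * black_root * int * black_root
         and      black_root = Leaf | Node of Black * rbt * int * rbt          *)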


International Conference on Functional Programming | 2013

Complete and easy bidirectional typechecking for higher-rank polymorphism

Joshua Dunfield; Neelakantan R. Krishnaswami

Bidirectional typechecking, in which terms either synthesize a type or are checked against a known type, has become popular for its scalability (unlike Damas-Milner type inference, bidirectional typing remains decidable even for very expressive type systems), its error reporting, and its relative ease of implementation. Following design principles from proof theory, bidirectional typing can be applied to many type constructs. The principles underlying a bidirectional approach to polymorphism, however, are less obvious. We give a declarative, bidirectional account of higher-rank polymorphism, grounded in proof theory; this calculus enjoys many properties such as eta-reduction and predictability of annotations. We give an algorithm for implementing the declarative system; our algorithm is remarkably simple and well-behaved, despite being both sound and complete.
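
A small OCaml example of why higher-rank polymorphism resists plain Damas-Milner inference and benefits from checking against an annotation: the argument must remain polymorphic inside the function body, which OCaml can only express via a record with a polymorphic field (the names here are illustrative).

    (* The rank-2 type (forall a. a -> a) -> int * bool, encoded in OCaml. *)
    type poly_id = { f : 'a. 'a -> 'a }

    let g (p : poly_id) = (p.f 1, p.f true)   (* p.f used at two different types *)

    let () = assert (g { f = (fun x -> x) } = (1, true))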


Electronic Notes in Theoretical Computer Science | 2009

Case Analysis of Higher-Order Data

Joshua Dunfield; Brigitte Pientka

We discuss coverage checking for data that is dependently typed and is defined using higher-order abstract syntax. Unlike previous work on coverage checking for closed data, we consider open data which may depend on some context. Our work may therefore provide insight into coverage checking in Twelf, and serve as a basis for coverage checking in functional languages such as Delphin and Beluga. More generally, our work is a foundation for proofs by case analysis in systems that reason about higher-order abstract syntax.


International Conference on Functional Programming | 2012

Elaborating intersection and union types

Joshua Dunfield

Designing and implementing typed programming languages is hard. Every new type system feature requires extending the metatheory and implementation, which are often complicated and fragile. To ease this process, we would like to provide general mechanisms that subsume many different features. In modern type systems, parametric polymorphism is fundamental, but intersection polymorphism has gained little traction in programming languages. Most practical intersection type systems have supported only refinement intersections, which increase the expressiveness of types (more precise properties can be checked) without altering the expressiveness of terms; refinement intersections can simply be erased during compilation. In contrast, unrestricted intersections increase the expressiveness of terms, and can be used to encode diverse language features, promising an economy of both theory and implementation. We describe a foundation for compiling unrestricted intersection and union types: an elaboration type system that generates ordinary λ-calculus terms. The key feature is a Forsythe-like merge construct. With this construct, not all reductions of the source program preserve types; however, we prove that ordinary call-by-value evaluation of the elaborated program corresponds to a type-preserving evaluation of the source program. We also describe a prototype implementation and applications of unrestricted intersections and unions: records, operator overloading, and simulating dynamic typing.
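
A simplified picture of the elaboration idea, not the paper's formal translation: a merge of two implementations, used at the intersection type (int -> string) /\ (bool -> string), can be compiled to an ordinary pair, with each use site projecting the component that matches the type at which the merged term is used. The "source" syntax shown in comments is hypothetical.

    (* Source, in hypothetical merge syntax:
         show = string_of_int ,, string_of_bool
         show : (int -> string) /\ (bool -> string)                      *)

    (* Elaborated target: an ordinary ML/lambda-calculus program. *)
    let show = (string_of_int, string_of_bool)

    let _ = (fst show 3, snd show true)   (* each use picks the relevant half *)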


International Conference on Functional Programming | 2011

Implicit self-adjusting computation for purely functional programs

Yan Chen; Joshua Dunfield; Matthew A. Hammer; Umut A. Acar

Computational problems that involve dynamic data, such as physics simulations and program development environments, have been an important subject of study in programming languages. Building on this work, recent advances in self-adjusting computation have developed techniques that enable programs to respond automatically and efficiently to dynamic changes in their inputs. Self-adjusting programs have been shown to be efficient for a reasonably broad range of problems, but the approach still requires an explicit programming style, where the programmer must use specific monadic types and primitives to identify, create, and operate on data that can change over time. We describe techniques for automatically translating purely functional programs into self-adjusting programs. In this implicit approach, the programmer need only annotate the (top-level) input types of the programs to be translated. Type inference finds all other types, and a type-directed translation rewrites the source program into an explicitly self-adjusting target program. The type system is related to information-flow type systems and enjoys decidable type inference via constraint solving. We prove that the translation outputs well-typed self-adjusting programs and preserves the source program's input-output behavior, guaranteeing that translated programs respond correctly to all changes to their data. Using a cost semantics, we also prove that the translation preserves the asymptotic complexity of the source program.
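
For contrast, here is a sketch of the explicit monadic style that the implicit approach automates away; the signature is illustrative, in the spirit of self-adjusting-computation libraries, and is not an actual library API.

    (* Explicit style: changeable data is threaded through modifiable
       references via dedicated primitives. *)
    module type SELF_ADJUSTING = sig
      type 'a mod_                        (* a value that may change over time *)
      val create    : 'a -> 'a mod_
      val read      : 'a mod_ -> ('a -> 'b mod_) -> 'b mod_   (* monadic read *)
      val write     : 'a mod_ -> 'a -> unit
      val propagate : unit -> unit        (* re-run only the affected parts *)
    end
    (* In the implicit approach, ordinary purely functional code plus an
       annotation on the input type is translated into this style automatically. *)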

Collaboration


Dive into Joshua Dunfield's collaborations.

Top Co-Authors

Frank Pfenning (Carnegie Mellon University)

Umut A. Acar (Carnegie Mellon University)

Khurram A. Jafery (University of British Columbia)