Gabriele Keller
University of New South Wales
Publications
Featured research published by Gabriele Keller.
international conference on functional programming | 2005
Manuel M. T. Chakravarty; Gabriele Keller; Simon L. Peyton Jones
Haskell programmers often use a multi-parameter type class in which one or more type parameters are functionally dependent on the first. Although such functional dependencies have proved quite popular in practice, they express the programmer's intent somewhat indirectly. Developing earlier work on associated data types, we propose to add functionally dependent types as type synonyms to type-class bodies. These associated type synonyms constitute an interesting new alternative to explicit functional dependencies.
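A minimal sketch of the contrast, using the familiar collection-class example (class and member names here are illustrative, not taken from the paper):

```haskell
{-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies, TypeFamilies #-}

-- with an explicit functional dependency: the element type 'e' is a second
-- class parameter, declared to be determined by the collection type 'c'
class CollectsFD c e | c -> e where
  emptyFD  :: c
  insertFD :: e -> c -> c

-- with an associated type synonym: the dependent type is a type-level
-- function declared inside the class body and named directly as 'Elem c'
class Collects c where
  type Elem c
  empty  :: c
  insert :: Elem c -> c -> c

instance Collects [a] where
  type Elem [a] = a
  empty  = []
  insert = (:)
```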
workshop on declarative aspects of multicore programming | 2011
Manuel M. T. Chakravarty; Gabriele Keller; Sean Lee; Trevor L. McDonell; Vinod Grover
Current GPUs are massively parallel multicore processors optimised for workloads with a large degree of SIMD parallelism. Good performance requires highly idiomatic programs, whose development is work intensive and requires expert knowledge. To raise the level of abstraction, we propose a domain-specific high-level language of array computations that captures appropriate idioms in the form of collective array operations. We embed this purely functional array language in Haskell with an online code generator for NVIDIA's CUDA GPGPU programming environment. We regard the embedded language's collective array operations as algorithmic skeletons; our code generator instantiates CUDA implementations of those skeletons to execute embedded array programs. This paper outlines our embedding in Haskell, details the design and implementation of the dynamic code generator, and reports on initial benchmark results. These results suggest that we can compete with moderately optimised native CUDA code, while enabling much simpler source programs.
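A hedged sketch of what such an embedded array program looks like, using the dot product often used as a running example for this language; the module names follow the accelerate packages of that period and may differ in current releases:

```haskell
import Data.Array.Accelerate as A
-- import Data.Array.Accelerate.CUDA (run)  -- the CUDA backend described above

-- a collective array computation: zipWith and fold are treated as skeletons,
-- which the code generator instantiates with CUDA implementations
dotp :: Acc (Vector Float) -> Acc (Vector Float) -> Acc (Scalar Float)
dotp xs ys = A.fold (+) 0 (A.zipWith (*) xs ys)
```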
workshop on declarative aspects of multicore programming | 2007
Manuel M. T. Chakravarty; Roman Leshchinskiy; Simon L. Peyton Jones; Gabriele Keller; Simon Marlow
We describe the design and current status of our effort to implement the programming model of nested data parallelism in the Glasgow Haskell Compiler. We extended the original programming model and its implementation, both of which were first popularised by the NESL language, in terms of expressiveness as well as efficiency. Our current aim is to provide a convenient programming environment for SMP parallelism, and especially multicore architectures. Preliminary benchmarks show that we are, at least for some programs, able to achieve good absolute performance and excellent speedups.
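The flavour of the programming model can be sketched with the sparse-matrix-vector product frequently used in the authors' papers; the [: :] parallel-array syntax requires GHC's experimental Data Parallel Haskell support, so treat this purely as an illustration:

```haskell
-- a sparse row is a parallel array of (column index, value) pairs
type SparseVector = [: (Int, Float) :]
type SparseMatrix = [: SparseVector :]

-- nested data parallelism: a parallel computation (the inner sum) runs
-- inside every iteration of an outer parallel comprehension
smvm :: SparseMatrix -> [: Float :] -> [: Float :]
smvm m v = [: sumP [: x * (v !: i) | (i, x) <- row :] | row <- m :]
```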
symposium on principles of programming languages | 2005
Manuel M. T. Chakravarty; Gabriele Keller; Simon L. Peyton Jones; Simon Marlow
Haskell's type classes allow ad-hoc overloading, or type-indexing, of functions. A natural generalisation is to allow type-indexing of data types as well. It turns out that this idea directly supports a powerful form of abstraction called associated types, which are available in C++ using traits classes. Associated types are useful in many applications, especially for self-optimising libraries that adapt their data representations and algorithms in a type-directed manner. In this paper, we introduce and motivate associated types as a rather natural generalisation of Haskell's existing type classes. Formally, we present a type system that includes a type-directed translation into an explicitly typed target language akin to System F; the existence of this translation ensures that the addition of associated data types to an existing Haskell compiler only requires changes to the front end.
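A minimal sketch in today's GHC syntax: a finite map whose representation is chosen by the key type, with an illustrative instance (the concrete class and constructor names are not from the paper):

```haskell
{-# LANGUAGE TypeFamilies #-}

import Prelude hiding (lookup)

-- the data type 'Map k v' is associated with the class: each instance
-- picks its own, key-specific representation
class Key k where
  data Map k v
  empty  :: Map k v
  lookup :: k -> Map k v -> Maybe v
  insert :: k -> v -> Map k v -> Map k v

-- for Bool keys, two optional slots suffice
instance Key Bool where
  data Map Bool v = MapBool (Maybe v) (Maybe v)
  empty = MapBool Nothing Nothing
  lookup False (MapBool f _) = f
  lookup True  (MapBool _ t) = t
  insert False v (MapBool _ t) = MapBool (Just v) t
  insert True  v (MapBool f _) = MapBool f (Just v)
```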
international conference on functional programming | 2010
Gabriele Keller; Manuel M. T. Chakravarty; Roman Leshchinskiy; Simon L. Peyton Jones; Ben Lippmeier
We present a novel approach to regular, multi-dimensional arrays in Haskell. The main highlights of our approach are that it (1) is purely functional, (2) supports reuse through shape polymorphism, (3) avoids unnecessary intermediate structures rather than relying on subsequent loop fusion, and (4) supports transparent parallelisation. We show how to embed two forms of shape polymorphism into Haskell's type system using type classes and type families. In particular, we discuss the generalisation of regular array transformations to arrays of higher rank, and introduce a type-safe specification of array slices. We discuss the runtime performance of our approach for three standard array algorithms. We achieve absolute performance comparable to handwritten C code. At the same time, our implementation scales well up to 8 processor cores.
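A small sketch of shape-indexed array types in the style of the later repa package that grew out of this work; the module name and representation tags reflect that package rather than the abstract itself:

```haskell
import Data.Array.Repa

-- DIM2 abbreviates Z :. Int :. Int; the rank of the array is recorded in
-- its type, and backpermute rebuilds the array under a new index mapping
transpose2D :: Array U DIM2 Double -> Array D DIM2 Double
transpose2D a = backpermute (Z :. c :. r) (\(Z :. i :. j) -> Z :. j :. i) a
  where
    (Z :. r :. c) = extent a
```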
european conference on parallel processing | 2001
Manuel M. T. Chakravarty; Gabriele Keller; Roman Lechtchinsky; Wolf Pfannenstiel
This paper discusses an extension of Haskell by support for nested data-parallel programming in the style of the special-purpose language Nesl. The extension consists of a parallel array type, array comprehensions, and primitive parallel array operations. This extension brings a hitherto unsupported style of parallel programming to Haskell. Moreover, nested data parallelism should receive wider attention when available in a standardised language like Haskell.
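The extension's three ingredients can be seen together in a dot product; as with the later DPH work, the [: :] syntax is only available with the proposed language support, so this is purely illustrative:

```haskell
-- [:Float:] is the parallel array type, the comprehension is a parallel
-- array comprehension, and sumP is a primitive parallel array operation
dotp :: [: Float :] -> [: Float :] -> Float
dotp xs ys = sumP [: x * y | x <- xs | y <- ys :]
```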
international conference on functional programming | 2013
Trevor L. McDonell; Manuel M. T. Chakravarty; Gabriele Keller; Ben Lippmeier
Purely functional, embedded array programs are a good match for SIMD hardware, such as GPUs. However, the naive compilation of such programs quickly leads to both code explosion and an excessive use of intermediate data structures. The resulting slow-down is not acceptable on target hardware that is usually chosen to achieve high performance. In this paper, we discuss two optimisation techniques, sharing recovery and array fusion, that tackle code explosion and eliminate superfluous intermediate structures. Both techniques are well known from other contexts, but they present unique challenges for an embedded language compiled for execution on a GPU. We present novel methods for implementing sharing recovery and array fusion, and demonstrate their effectiveness on a set of benchmarks.
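The code-explosion problem can be seen with a toy deep embedding; the names below are illustrative and not Accelerate's API:

```haskell
-- a toy deep embedding of arithmetic expressions
data Exp = Lit Int | Add Exp Exp
  deriving Show

-- host-level sharing: the argument 'x' is bound once in Haskell ...
dbl :: Exp -> Exp
dbl x = Add x x

-- ... but the embedded AST duplicates it: 'Lit 1' occurs four times below.
-- Sharing recovery turns such trees back into DAGs before code generation.
example :: Exp
example = dbl (dbl (Lit 1))
```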
international conference on functional programming | 2000
Manuel M. T. Chakravarty; Gabriele Keller
This paper generalises the flattening transformation---a technique for the efficient implementation of nested data parallelism---and reconciles it with mainstream functional programming. Nested data parallelism is significantly more expressive and convenient to use than the flat data parallelism typically used in conventional parallel languages like High Performance Fortran and C*. The flattening transformation of Blelloch and Sabot is a key technique for the efficient implementation of nested parallelism via flat parallelism, but originally it was severely restricted, as it did not permit general sum types, recursive types, higher-order functions, and separate compilation. Subsequent work, including some of our own, generalised the transformation and allowed higher-order functions and recursive types. In this paper, we take the final step of generalising flattening to cover the full range of types available in modern languages like Haskell and ML; furthermore, we enable the use of separate compilation. In addition, we present a completely new formulation of the transformation, which is based on the standard lambda calculus notation, and replace a previously ad-hoc transformation step by a systematic generic programming technique. First experiments demonstrate the efficiency of our approach.
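The core representation change behind flattening can be sketched with ordinary lists, as a simplification of the flat array types actually used:

```haskell
-- flattening replaces a nested array by a flat data array plus a segment
-- descriptor giving the length of each sub-array
data SegArray a = SegArray
  { segLens  :: [Int]  -- one entry per sub-array
  , flatData :: [a]    -- all elements, concatenated
  } deriving Show

flatten :: [[a]] -> SegArray a
flatten xss = SegArray (map length xss) (concat xss)

-- flatten [[1,2,3],[4],[],[5,6]]
--   == SegArray {segLens = [3,1,0,2], flatData = [1,2,3,4,5,6]}
```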
symposium on principles of programming languages | 2007
Derek Dreyer; Robert Harper; Manuel M. T. Chakravarty; Gabriele Keller
ML modules and Haskell type classes have proven to be highly effective tools for program structuring. Modules emphasize explicit configuration of program components and the use of data abstraction. Type classes emphasize implicit program construction and ad hoc polymorphism. In this paper, we show how the implicitly-typed style of type class programming may be supported within the framework of an explicitly-typed module language by viewing type classes as a particular mode of use of modules. This view offers a harmonious integration of modules and type classes, where type class features, such as class hierarchies and associated types, arise naturally as uses of existing module-language constructs, such as module hierarchies and type components. In addition, programmers have explicit control over which type class instances are available for use by type inference in a given scope. We formalize our approach as a Harper-Stone-style elaboration relation, and provide a sound type inference algorithm as a guide to implementation.
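One way to see the correspondence in Haskell terms (a sketch, not the paper's ML formalism): a class instance behaves like a small module, i.e. a record of operations, except that type inference selects it implicitly rather than the programmer passing it explicitly:

```haskell
-- the implicit, type-class formulation: the instance is chosen by inference
class Eq' a where
  eq :: a -> a -> Bool

instance Eq' Int where
  eq = (==)

memberC :: Eq' a => a -> [a] -> Bool
memberC x = any (eq x)

-- the explicit, module-like formulation: the "instance" is an ordinary
-- value (a dictionary) that the programmer passes and scopes explicitly
data EqDict a = EqDict { eqD :: a -> a -> Bool }

eqInt :: EqDict Int
eqInt = EqDict (==)

memberM :: EqDict a -> a -> [a] -> Bool
memberM d x = any (eqD d x)
```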
generative programming and component engineering | 2004
Sean Seefried; Manuel M. T. Chakravarty; Gabriele Keller
Embedded domain specific languages (EDSLs) provide a specialised language for a particular application area while harnessing the infrastructure of an existing general purpose programming language. The reduction in implementation costs that results from this approach comes at a price: the EDSL often compiles to inefficient code since the host language’s compiler only optimises at the level of host language constructs. The paper presents an approach to solving this problem based on compile-time meta-programming which retains the simplicity of the embedded approach. We use PanTHeon, our implementation of an existing EDSL for image synthesis, to demonstrate the benefits and drawbacks of this approach. Furthermore, we suggest potential improvements to Template Haskell, the meta-programming framework we are using, which would greatly improve its applicability to this kind of task.
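A standard Template Haskell idiom of the kind such a compile-time optimiser relies on (not code from PanTHeon itself): the exponent is unfolded during compilation, so the host compiler sees straight-line multiplications instead of a recursive interpreter over EDSL constructs:

```haskell
{-# LANGUAGE TemplateHaskell #-}
module Power where

import Language.Haskell.TH

-- power n builds the expression \x -> x * x * ... * 1 at compile time;
-- a use site such as $(power 3) 2 in another module is compiled as if
-- 'x * x * x * 1' had been written by hand
power :: Int -> Q Exp
power 0 = [| \_ -> 1 :: Int |]
power n = [| \x -> x * $(power (n - 1)) x |]
```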