
Publication


Featured research published by Ricardo Peña.


Archive | 1998

Implementation of Functional Languages

Phil Trinder; Greg Michaelson; Ricardo Peña

The purpose of the Hume language design is to explore the expressibility/decidability spectrum in resource-constrained systems, such as real-time embedded or control systems. It is unusual in being based on a combination of λ-calculus and finite state machine notions, rather than the more usual propositional logic or flat finite-state machine models. It provides a number of high-level features, including polymorphic types, arbitrary but sized user-defined data structures and automatic memory management, whilst seeking to guarantee strong space/time behaviour and maintaining overall determinacy. A key issue is predictable space behaviour. This paper describes a simple model for calculating stack and heap costs in FSM-Hume, a limited subset of full Hume. This cost model is evaluated against an example taken from the research literature: a simple mine drainage control system. Empirical results suggest that our model is a good predictor of stack and heap usage, and that this can lead to good bounded memory utilisation.
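As a rough illustration of the kind of static cost model the paper describes, the following Haskell sketch assigns worst-case stack and heap bounds to a toy expression language. The expression type and the per-construct costs are invented for illustration and are not FSM-Hume's actual cost rules.

```haskell
-- A toy static cost model (illustrative only; not FSM-Hume's rules).
-- Hypothetical first-order expression language.
data Expr
  = Lit Int
  | Var String
  | Con String [Expr]        -- constructor application
  | Call String [Expr]       -- function call
  | If Expr Expr Expr

-- Worst-case resource bounds in abstract "words".
data Cost = Cost { stackWords :: Int, heapWords :: Int }
  deriving Show

-- Sequential composition: heap allocations add up, stack takes the maximum.
seqCost :: Cost -> Cost -> Cost
seqCost (Cost s1 h1) (Cost s2 h2) = Cost (max s1 s2) (h1 + h2)

-- Per-construct bounds (made up for illustration).
cost :: Expr -> Cost
cost (Lit _)     = Cost 1 0
cost (Var _)     = Cost 1 0
cost (Con _ es)  = foldr (seqCost . cost) (Cost 1 (1 + length es)) es
cost (Call _ es) = foldr (seqCost . cost) (Cost (1 + length es) 0) es
cost (If c t e)  =
  let Cost sc hc = cost c
      Cost st ht = cost t
      Cost se he = cost e
  in Cost (maximum [sc, st, se]) (hc + max ht he)   -- branches: take the worse
```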


Higher-order and Symbolic Computation / Lisp and Symbolic Computation | 2003

Comparing Parallel Functional Languages: Programming and Performance

Hans-Wolfgang Loidl; Fernando Rubio; Norman Scaife; Kevin Hammond; Susumu Horiguchi; Ulrike Klusik; Rita Loogen; Greg Michaelson; Ricardo Peña; Steffen Priebe; Á J. Rebón; Phil Trinder

This paper presents a practical evaluation and comparison of three state-of-the-art parallel functional languages. The evaluation is based on implementations of three typical symbolic computation programs, with performance measured on a Beowulf-class parallel architecture. We assess three mature parallel functional languages: PMLS, a system for implicitly parallel execution of ML programs; GPH, a mainly implicit parallel extension of Haskell; and Eden, a more explicit parallel extension of Haskell designed for both distributed and parallel execution. While all three languages employ a completely implicit approach to communication, each language takes a different approach to specifying and controlling parallelism, ranging from explicit identification of processes as language constructs (Eden), through annotation of potential parallelism (GPH), to automatic detection of parallel skeletons in sequential code (PMLS). We present detailed performance measurements of all three systems on a widely available parallel architecture: a Beowulf cluster of low-cost commodity workstations. We use three representative symbolic applications: a matrix multiplication algorithm, an exact linear system solver, and a simple ray tracer. Our results show how moderate speedups can be achieved with little or no change to the sequential code, and that parallel performance can be significantly improved even within our high-level model of parallel functional programming by controlling key aspects of the program such as load distribution and thread granularity.
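To make the contrast concrete, the sketch below shows GPH-style annotation of potential parallelism using `par` and `pseq` from Control.Parallel, with the Eden and PMLS approaches indicated in comments. The Eden fragment is schematic and its exact library names may differ.

```haskell
-- GPH style: potential parallelism is only annotated; the runtime decides
-- whether to evaluate sparks in parallel.
import Control.Parallel (par, pseq)

parMapGpH :: (a -> b) -> [a] -> [b]
parMapGpH _ []       = []
parMapGpH f (x : xs) = let y  = f x
                           ys = parMapGpH f xs
                       in  y `par` (ys `pseq` (y : ys))  -- spark y, then force ys

-- Eden style (schematic; exact library names may differ): parallelism is
-- explicit, one process per element is created by instantiation:
--
--   parMapEden f xs = [ process f # x | x <- xs ]
--
-- PMLS sits at the other extreme: the sequential ML map is recognised
-- automatically and replaced by a parallel skeleton, with no annotation at all.
```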


Patterns and skeletons for parallel and distributed computing | 2003

Parallelism abstractions in eden

Rita Loogen; Yolanda Ortega; Ricardo Peña; Steffen Priebe; Fernando Rubio

Two important abstractions have contributed to creating a reliable programming methodology for industrial-strength programs. These are functional abstraction (which has received different names in programming languages, such as procedure, subroutine, function, etc.) and data abstraction (also with different names, such as abstract data type, object, package or simply module). In both abstractions, two different pieces of information are distinguished:


high level parallel programming models and supportive environments | 1997

The Eden coordination model for distributed memory systems

Silvia Breitinger; Rita Loogen; Yolanda Ortega-Mallén; Ricardo Peña

Eden is a concurrent declarative language that aims at both the programming of reactive systems and parallel algorithms on distributed memory systems. In this paper, we explain the computation and coordination model of Eden. We show how lazy evaluation in the computation language is fruitfully combined with the coordination language that is specifically designed for multicomputers and that aims at maximum parallelism.
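The following is a purely sequential stand-in for Eden's two coordination constructs, process abstraction and process instantiation, meant only to show the programming interface. In real Eden, instantiation creates a remote process and communicates over implicit channels, and the actual signatures involve a class of transmissible values; everything below is a sketch under that caveat.

```haskell
-- Sequential stand-in for Eden's coordination constructs (interface only;
-- real Eden spawns a remote process and uses implicit channels).
newtype Process a b = Process (a -> b)

-- Process abstraction: turn a function into a process scheme.
process :: (a -> b) -> Process a b
process = Process

-- Process instantiation: in real Eden this creates a child process that
-- receives x and streams its result back; here it is just application.
( # ) :: Process a b -> a -> b
(Process f) # x = f x

-- The computation language stays lazy Haskell: a child process can consume
-- its input stream on demand and produce its output stream incrementally.
doubler :: Process [Int] [Int]
doubler = process (map (* 2))

example :: [Int]
example = doubler # [1 .. 10]
```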


international conference on functional programming | 1996

A new look at pattern matching in abstract data types

Pedro Palao Gostanza; Ricardo Peña; Manuel Núñez

In this paper we present a construction smoothly integrating pattern matching with abstract data types. We review some previous proposals [19, 23, 20, 6, 1] and their drawbacks, and show how our proposal can solve them. In particular we pay attention to equational reasoning about programs containing this new facility. We also give its formal syntax and semantics, as well as some guidelines in order to compile the construction efficiently.
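The paper's construction is its own, but the general idea of pattern matching against an abstract type without exposing its representation can be sketched in plain Haskell with a view function and GHC's ViewPatterns extension. The queue example below is illustrative only and is not the proposal of the paper.

```haskell
{-# LANGUAGE ViewPatterns #-}
-- Clients match on a public "view type" while the representation stays hidden.
module QueueView (Queue, empty, enqueue, view, QueueView (..)) where

-- Hidden representation: a pair of lists (front, reversed back).
data Queue a = Queue [a] [a]

data QueueView a = EmptyQ | Front a (Queue a)

empty :: Queue a
empty = Queue [] []

enqueue :: a -> Queue a -> Queue a
enqueue x (Queue f b) = Queue f (x : b)

-- The view function is the only way clients "pattern match" on a Queue.
view :: Queue a -> QueueView a
view (Queue []      []) = EmptyQ
view (Queue []      b ) = view (Queue (reverse b) [])
view (Queue (x : f) b ) = Front x (Queue f b)

-- Client code matches through the view:
toList :: Queue a -> [a]
toList (view -> EmptyQ)    = []
toList (view -> Front x q) = x : toList q
```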


principles and practice of declarative programming | 2001

Parallel functional programming at two levels of abstraction

Ricardo Peña; Fernando Rubio

The parallel functional language Eden extends Haskell with expressions to define and instantiate process systems. These extensions also allow the easy definition of skeletons as higher-order functions. Parallel programming is possible in Eden at two levels: recursive programming and higher-order programming. At the lower level, processes are explicitly created by using recursive definitions. In this way, skeletons can be defined. This is very unusual, as most skeleton-based languages use an imperative language to create new skeletons. At the higher level, available skeletons are used to create applications or to define new skeletons on top of the existing ones. In this paper, we present five skeletons, most of them well known, covering a wide range of parallel structures. For each one, several Eden implementations are given, together with their corresponding cost models. Finally, some examples of application programming are shown, including predicted and actual results on a Beowulf cluster.
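A sketch of the two levels, reusing the same sequential stand-in for process abstraction and instantiation as in the earlier sketch (hypothetical names, not the real Eden library): a parMap skeleton defined by explicit, recursive process creation, and an application written purely in terms of that skeleton.

```haskell
-- Stand-in for Eden's constructs, as in the earlier sketch (hypothetical).
newtype Process a b = Process (a -> b)

process :: (a -> b) -> Process a b
process = Process

( # ) :: Process a b -> a -> b
(Process f) # x = f x

-- Lower level: a skeleton defined by explicit (recursive) process creation,
-- one child process per list element.
parMap :: (a -> b) -> [a] -> [b]
parMap _ []       = []
parMap f (x : xs) = (process f # x) : parMap f xs

-- Higher level: an application written purely in terms of existing skeletons,
-- with no explicit process handling at all.
sumOfSquares :: [Int] -> Int
sumOfSquares = sum . parMap (^ 2)
```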


implementation and application of functional languages | 1998

Implementing Eden - or: Dreams Become Reality

Ulrike Klusik; Yolanda Ortega-Mallén; Ricardo Peña

The parallel functional programming language Eden was specially designed to be implemented in a distributed setting. In a previous paper [3] we presented an operational specification of DREAM, the distributed abstract machine for Eden. In this paper we go a step further and present the imperative code generated for Eden expressions and how this code interacts with the distributed RunTime System (RTS) for Eden. This translation is done in two steps: first Eden is translated into PEARL (Parallel Eden Abstract Reduction Language), the parallel functional language of DREAM, and then PEARL expressions are translated into imperative code.
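The two-stage translation can be pictured as a typed pipeline. The sketch below uses placeholder types and identity-like translations purely to show the shape of the compilation path; none of these names or representations are the real compiler's.

```haskell
-- Placeholder pipeline (hypothetical types): Eden source is translated to
-- PEARL, then to imperative code executed by the distributed runtime system.
newtype EdenExpr  = EdenExpr  String   -- source-level Eden expression
newtype PearlExpr = PearlExpr String   -- PEARL term for the DREAM machine
newtype ImpCode   = ImpCode   String   -- imperative code run by the RTS

edenToPearl :: EdenExpr -> PearlExpr
edenToPearl (EdenExpr s) = PearlExpr s   -- placeholder translation

pearlToImp :: PearlExpr -> ImpCode
pearlToImp (PearlExpr s) = ImpCode s     -- placeholder code generation

compileEden :: EdenExpr -> ImpCode
compileEden = pearlToImp . edenToPearl
```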


Journal of Functional Programming | 2009

From natural semantics to c: A formal derivation of two stg machines

Alberto de la Encina; Ricardo Peña

The Spineless Tagless G-machine (STG machine) was defined as the target abstract machine for compiling the lazy functional language Haskell. It is at the heart of the Glasgow Haskell Compiler (GHC), which is claimed to be the Haskell compiler that generates the most efficient code. A high-level description of the STG machine can be found in Peyton Jones (In Journal of Functional Programming, 2(2), 127–202, 1992), Marlow & Peyton Jones (In Sigplan Not., 39(9), 4–5, 2004), and Marlow & Peyton Jones (In Journal of Functional Programming, 16(4–5), 415–449, 2006). Should the reader be interested in a more detailed view, then the only additional information available is the Haskell code of GHC and the C code of its runtime system. It is hard to prove that this machine correctly implements the lazy semantics of Haskell. Part of the problem lies in the fact that the STG machine executes a bare-bones functional language, called STGL, much lower level than Haskell. Therefore, part of the correctness should be, and is, established by showing that the translation from Haskell to STGL preserves Haskell's semantics. The other part involves showing that the STG machine correctly implements the lazy semantics of STGL. In this paper we provide a step-by-step formal derivation of the STG machine and of its compilation to C, starting from a natural semantics of STGL. Thus, our starting point is higher level than the descriptions found in Peyton Jones (In Journal of Functional Programming, 2(2), 127–202, 1992) and Marlow & Peyton Jones (In Sigplan Not., 39(9), 4–5, 2004), and our arrival point is lower level than those works. Additionally, there have been substantial changes between the so-called push/enter model of the STG machine described in Peyton Jones (In Journal of Functional Programming, 2(2), 127–202, 1992) and the eval/apply model of the STG machine described in Marlow & Peyton Jones (In Sigplan Not., 39(9), 4–5, 2004). So, in fact, we derive two machines instead of one, starting from the same initial semantics. At each step we provide enough intuitions and explanations to understand the refinement, and then the formal definitions and statements proving that the derivation step is sound and complete. The main contribution of the paper is to show that an efficient machine such as the STG can be presented, understood, and formally reasoned about at different levels of abstraction.
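To give an idea of how bare-bones STGL is compared to Haskell, the following Haskell data type sketches an STG-like language in which arguments are atoms, closures are allocated by explicit lets, and case is the only construct that forces evaluation. The grammar is approximate and is not the paper's exact STGL.

```haskell
-- Approximate grammar of a bare-bones STG-like language (not the exact STGL).
type Var = String
type Con = String

data Atom
  = AVar Var
  | ALit Int

data Expr
  = App Var [Atom]                  -- function applied to atomic arguments
  | Let [(Var, LambdaForm)] Expr    -- allocate closures, then continue
  | Case Expr [Alt]                 -- force evaluation of the scrutinee
  | ConApp Con [Atom]               -- saturated constructor application
  | Lit Int

-- A closure: free variables, update flag, arguments, body.
data LambdaForm = LF [Var] Bool [Var] Expr

data Alt
  = AltCon Con [Var] Expr           -- match a constructor, bind its fields
  | AltDefault Var Expr             -- default alternative
```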


principles and practice of declarative programming | 2003

Formally deriving an STG machine

Alberto de la Encina; Ricardo Peña

Starting from P. Sestoft's semantics for lazy evaluation, we define a new semantics in which normal forms consist of variables pointing to lambdas or constructions. This is in accordance with the more recent changes in the Spineless Tagless G-machine (STG), where constructions only appear in closures (lambdas already appeared only in closures in previous versions). We prove the equivalence between the new semantics and Sestoft's. Then, a sequence of STG machines is derived, formally proving the correctness of each derivation. The last machine consists of a few imperative instructions, and its distance to a conventional language is minimal. The paper also discusses the differences between the final machine and the actual STG machine implemented in the Glasgow Haskell Compiler.
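For readers unfamiliar with Sestoft's semantics, the sketch below implements a generic Sestoft-style abstract machine (heap, control expression, stack of argument pointers and update markers) for a tiny let/lambda language. It is a textbook-style illustration of the kind of starting point such derivations use, not one of the machines derived in the paper; the let is non-recursive here for simplicity.

```haskell
import qualified Data.Map as M

type Name = String

-- Normalised terms: application arguments are variables (Sestoft style).
data Term
  = Var Name
  | Lam Name Term
  | App Term Name
  | Let Name Term Term          -- non-recursive let, for simplicity
  deriving Show

type Heap = M.Map Name Term

-- Stack items: a pending argument pointer, or an update marker.
data StackItem = Arg Name | Upd Name
  deriving Show

-- Configuration: heap, control expression, stack, fresh-name supply.
type Conf = (Heap, Term, [StackItem], Int)

step :: Conf -> Maybe Conf
step (h, Var p, s, n) = do
  e <- M.lookup p h                                   -- enter the closure
  pure (M.delete p h, e, Upd p : s, n)                -- blackhole + update marker
step (h, Lam x e, Upd p : s, n) =
  pure (M.insert p (Lam x e) h, Lam x e, s, n)        -- update with the value
step (h, Lam x e, Arg p : s, n) =
  pure (h, subst x p e, s, n)                         -- beta: bind the pointer
step (h, App e y, s, n) =
  pure (h, e, Arg y : s, n)                           -- push the argument
step (h, Let x e1 e2, s, n) =
  let p = "$" ++ show n                               -- fresh heap pointer
  in  pure (M.insert p e1 h, subst x p e2, s, n + 1)  -- allocate, continue
step _ = Nothing                                      -- a value with no work left

-- Replace source variable x by pointer p; pointers are never binders,
-- so no capture can occur.
subst :: Name -> Name -> Term -> Term
subst x p = go
  where
    go (Var y)       = Var (if y == x then p else y)
    go (Lam y e)     | y == x    = Lam y e
                     | otherwise = Lam y (go e)
    go (App e y)     = App (go e) (if y == x then p else y)
    go (Let y e1 e2) | y == x    = Let y (go e1) e2
                     | otherwise = Let y (go e1) (go e2)

-- Drive the machine to a final configuration.
run :: Term -> Conf
run e = loop (M.empty, e, [], 0)
  where loop c = maybe c loop (step c)
```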


implementation and application of functional languages | 2003

Building an interface between eden and maple: a way of parallelizing computer algebra algorithms

Rafael Martínez; Ricardo Peña

Eden is a parallel functional language extending Haskell with processes. This paper describes the implementation of an interface between the Eden language and the Maple system. The aim of this effort is to parallelize Maple programs by using Eden as a coordination language. The idea is to leave in Maple the computationally intensive functions of the (sequential) algorithm and to use Eden skeletons to set up the parallel process topology in the available parallel machine. A Maple system is instantiated in each processor. Eden processes are responsible for invoking Maple functions with appropriate parameters, for getting back the results, and for performing all the data communication between processes. The interface provides the following services: instantiating and terminating a Maple system in each processor, performing data conversion between Maple and Haskell objects, invoking Maple functions from Eden, and ensuring mutual exclusion in the access to Maple from different concurrent threads in the local processor. A parallel version of Buchberger's algorithm to compute Gröbner bases is presented to illustrate the use of the interface.
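The services listed above might be organised roughly as follows in Haskell. Every type and function name here is hypothetical (the paper's actual interface is not reproduced), the bodies are placeholders, and an MVar stands in for the mutual exclusion around the local Maple engine.

```haskell
import Control.Concurrent.MVar (MVar, newMVar, withMVar)

-- Hypothetical handle and object types (not the interface's real names).
data MapleHandle  = MapleHandle           -- one Maple engine per processor
newtype MapleExpr = MapleExpr String      -- a Maple term in textual form

-- Instantiating and terminating a Maple system on the local processor.
startMaple :: IO MapleHandle
startMaple = pure MapleHandle             -- placeholder body

stopMaple :: MapleHandle -> IO ()
stopMaple _ = pure ()                     -- placeholder body

-- Data conversion between Haskell and Maple objects.
toMaple :: Show a => a -> MapleExpr
toMaple = MapleExpr . show

-- Invoking a named Maple function with converted arguments.
callMaple :: MapleHandle -> String -> [MapleExpr] -> IO MapleExpr
callMaple _ fun _args = pure (MapleExpr (fun ++ "(...)"))   -- placeholder body

-- Mutual exclusion: every concurrent thread on a processor goes through one
-- lock, so only one call reaches the local Maple engine at a time.
newtype MapleLock = MapleLock (MVar MapleHandle)

newMapleLock :: MapleHandle -> IO MapleLock
newMapleLock h = MapleLock <$> newMVar h

callMapleLocked :: MapleLock -> String -> [MapleExpr] -> IO MapleExpr
callMapleLocked (MapleLock m) fun args = withMVar m (\h -> callMaple h fun args)
```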

Collaboration


Dive into Ricardo Peña's collaboration.

Top Co-Authors

Fernando Rubio (Complutense University of Madrid)
Cristóbal Pareja (Complutense University of Madrid)
Manuel Montenegro (Complutense University of Madrid)
Yolanda Ortega-Mallén (Complutense University of Madrid)
Alberto de la Encina (Complutense University of Madrid)
Luis A. Galán (Complutense University of Madrid)
Olha Shkaravska (Radboud University Nijmegen)