Publication


Featured research published by Pablo E. Martínez López.


Information & Software Technology | 2008

A preliminary study on various implementation approaches of domain-specific language

Tomaz Kosar; Pablo E. Martínez López; Pablo Andrés Barrientos; Marjan Mernik

Various implementation approaches for developing a domain-specific language are available in the literature. There are certain common beliefs about the advantages and disadvantages of these approaches. However, it is hard to be objective and speak in favor of a particular one, since these implementation approaches are normally compared over diverse application domains. The purpose of this paper is to provide empirical results from ten diverse implementation approaches for domain-specific languages, but all conducted using the same representative language. The comparison shows that the discussed approaches differ in the effort needed to implement them; however, the effort needed by a programmer to implement a domain-specific language should not be the only factor taken into consideration. Another important factor is the effort needed by an end-user to rapidly write correct programs using the produced domain-specific language. Therefore, this paper also provides empirical results on end-user productivity, which is measured as the lines of code needed to express a domain-specific program, similarity to the original notation, and how error reporting and debugging are supported in a given implementation.
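
One of the implementation approaches usually compared in such studies is embedding the DSL in a host functional language. The following Haskell fragment is only a hedged illustration of that style, with a made-up toy language of shapes; it is not the representative language or the measurement setup used in the paper.

```haskell
-- Illustrative embedded DSL: terms are built with ordinary host-language
-- constructors, so the host's parser, type checker and error reporting are
-- reused for free.
data Shape
  = Circle Double          -- radius
  | Rect   Double Double   -- width, height
  | Beside Shape Shape     -- composition of two shapes
  deriving Show

-- One possible semantics for the toy language: total area.
area :: Shape -> Double
area (Circle r)     = pi * r * r
area (Rect w h)     = w * h
area (Beside s1 s2) = area s1 + area s2

main :: IO ()
main = print (area (Circle 1 `Beside` Rect 2 3))  -- pi + 6.0
```

In an embedded approach of this kind, end-user programs are host-language expressions, which directly affects the productivity measures the paper discusses: program length, closeness to the original notation, and the quality of error reporting and debugging.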


european symposium on programming | 2003

Tagging, encoding, and jones optimality

Olivier Danvy; Pablo E. Martínez López

A partial evaluator is said to be Jones-optimal if the result of specializing a self-interpreter with respect to a source program is textually identical to the source program, modulo renaming. Jones optimality has already been obtained if the self-interpreter is untyped. If the self-interpreter is typed, however, residual programs are cluttered with type tags. To obtain the original source program, these tags must be removed. A number of sophisticated solutions have already been proposed. We observe, however, that with a simple representation shift, ordinary partial evaluation is already Jones-optimal, modulo an encoding. The representation shift amounts to reading the type tags as constructors for higher-order abstract syntax. We substantiate our observation by considering a typed self-interpreter whose input syntax is higher-order. Specializing this interpreter with respect to a source program yields a residual program that is textually identical to the source program, modulo renaming.
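
The "type tags" mentioned in the abstract can be pictured concretely. The Haskell sketch below is purely illustrative (it is not the self-interpreter or the partial evaluator of the paper): a universal value type whose constructors act as tags, next to a datatype whose constructors are instead read as higher-order abstract syntax, i.e. object-level binders represented by host-language functions.

```haskell
-- Universal domain for a typed self-interpreter; VInt and VFun are the
-- "type tags" that clutter residual programs after specialization.
data Value = VInt Int | VFun (Value -> Value)

-- The representation shift: read the tags as constructors of a
-- higher-order abstract syntax for the object language.
data Exp = Lit Int | Lam (Exp -> Exp) | App Exp Exp

-- A tagged identity function and its HOAS counterpart.
identityV :: Value
identityV = VFun (\v -> v)

identityE :: Exp
identityE = Lam (\x -> x)

-- Using a tagged value means unwrapping the tag by pattern matching.
apply :: Value -> Value -> Value
apply (VFun f) v = f v
apply v        _ = v

main :: IO ()
main = case apply identityV (VInt 42) of
         VInt n -> print n              -- 42
         VFun _ -> putStrLn "<function>"
```

Under the reading proposed in the abstract, the residual program produced by ordinary partial evaluation is already the source program, modulo this encoding.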


mathematical foundations of computer science | 1996

From Specifications to Programs: A Fork-Algebraic Approach to Bridge the Gap

Gabriel Alfredo Baum; Marcelo F. Frias; Armando Martin Haeberer; Pablo E. Martínez López

The development of programs from first-order specifications has as its main difficulty that of dealing with universal quantifiers. This work focuses on that point, i.e., on the construction of programs whose specifications involve universal quantifiers. This task is performed within a relational calculus based on fork algebras. The fact that first-order theories can be translated into equational theories in abstract fork algebras suggests that such work can be accomplished in a satisfactory way. Furthermore, the fact that these abstract algebras are representable guarantees that all properties valid in the standard models are captured by the axiomatization given for them, allowing the reasoning formalism to be shifted back and forth between any model and the abstract algebra. In order to cope with universal quantifiers, a new algebraic operation, the relational implication, is introduced. This operation is shown to have deep significance in the relational statement of first-order expressions involving universal quantifiers. Several algebraic properties of the relational implication are stated, showing its usefulness in program calculation. Finally, a non-trivial example of derivation is given to assess the merits of the relational implication as a specification tool, and also in calculation steps, where its algebraic properties are clearly appropriate as transformation rules.
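
The way a relational operation can absorb a universal quantifier can be illustrated with the standard right residual of relation algebra. This is only a textbook analogue written out for orientation; the relational implication introduced in the paper is its own operation and is not reproduced here.

```latex
% Right residual of two binary relations (standard relation algebra).
\[
  S / T \;=\; \overline{\,\overline{S}\,;\,T^{\smile}\,}
\]
% Its pointwise reading hides a universal quantifier:
\[
  x \,(S/T)\, y \iff \forall z\,\bigl( y \,T\, z \implies x \,S\, z \bigr)
\]
% and it satisfies a residuation law of the kind used as a transformation
% rule in program calculation:
\[
  R \,;\, T \subseteq S \iff R \subseteq S / T
\]
```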


Lecture Notes in Computer Science | 1998

Explicit Substitutions for Objects and Functions

Delia Kesner; Pablo E. Martínez López

This paper proposes an implementation of objects and functions via a calculus with explicit substitutions which is confluent and preserves strong normalization. The source calculus corresponds to the combination of the ς-calculus of Abadi and Cardelli [AC96] and the λ-calculus, and the target calculus corresponds to an extension of the former calculus with explicit substitutions. The interesting feature of our calculus is that substitutions are separated into two different kinds, and treated accordingly: those used to encode ordinary substitutions and those encoding invoke substitutions. When working with explicit substitutions, this differentiation is essential to encode the λ-calculus into the ς-calculus in a conservative way, following the style proposed in [AC96].
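
To give a concrete feel for explicit substitutions, here is a minimal Haskell sketch of a λ-calculus with explicit substitutions in de Bruijn style. It is only an illustration of the general technique: it is not the object-and-function calculus of the paper, and it shows only ordinary substitutions, not the invoke substitutions the paper distinguishes.

```haskell
-- λσ-style terms: substitution is an object-level construct (Clos) rather
-- than a meta-level operation, which is the essence of a calculus with
-- explicit substitutions.
data Term
  = Var Int            -- de Bruijn index
  | Lam Term           -- abstraction
  | App Term Term      -- application
  | Clos Term Subst    -- a term carrying a pending (explicit) substitution
  deriving Show

data Subst
  = Shift Int          -- shift all free indices by the given amount
  | Cons Term Subst    -- substitute the head for index 0, the tail for the rest
  deriving Show

-- Resolving an explicit substitution at a variable (one of the usual σ-rules).
lookupSub :: Int -> Subst -> Term
lookupSub n (Shift k)  = Var (n + k)
lookupSub 0 (Cons t _) = t
lookupSub n (Cons _ s) = lookupSub (n - 1) s

-- Beta reduction only creates a closure; the substitution itself is carried
-- out later, step by step, by σ-rules such as lookupSub.
beta :: Term -> Maybe Term
beta (App (Lam b) a) = Just (Clos b (Cons a (Shift 0)))
beta _               = Nothing

main :: IO ()
main = print (beta (App (Lam (Var 0)) (Var 3)))
-- Just (Clos (Var 0) (Cons (Var 3) (Shift 0)))
```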


international conference on functional programming | 1997

Protein folding meets functional programming (poster)

Natalio Krasnogor; Pablo E. Martínez López; Pablo Mocciola; David Pelta

In the last few years an entirely new discipline has emerged: 'Computational Biology'. It tries to solve, based on a strong mathematical and computational background, problems arising from the Biosciences. The Protein Folding Problem (PF for short) is one of the most important open problems in Biology. It can be stated as follows: given an unfolded amino acid sequence, find the 'right' folding of that sequence. In nature, proteins fold to their 'native' state, which determines their functionality. Some lattice-based computational models of the PF were shown to be NP-complete, others remain NP-hard [2, 9], but some approximation algorithms exist [3]. However, its theoretical and practical relevance [8, 9] makes it worthwhile to spend resources and time in modeling the folding process. Usually, strong emphasis is put on the results obtained, rather than on the way they are generated, enlarging the gap between researchers from Computer Science and Biology. We claim that, using the right tools, both communities can collaborate much more closely, enhancing the results at the same time. Historically, 'Functional Programming' (FP for short) [1] has been associated with a small scope of applications, mainly academic. The Computer Science community did not pay enough attention to its potential, perhaps due to the lack of efficiency of functional languages. Now, new theoretical developments in the field of FP [4] are emerging, and better languages (e.g. Haskell [7], Concurrent Haskell [5]) have been defined and implemented. Also, the gap between theory and practice is smaller in this paradigm than in others, making FP a good choice for developing simulation and optimization programs [10]. Traditionally, all programs for optimization problems were written in C, C++ or Ada; this builds a firewall between developers and end-users. PF is suitable to be modeled with a lazy concurrent functional language for many reasons: non-computer-science people can think at a very high abstraction level and map their ideas, almost directly, to functional code; the learning curve of an FP language is smoother than that of an imperative one, bridging the gap between developers and users; functional code is concise; the folding process is intrinsically parallel and FP is especially adequate for managing parallelism; and concurrent processes on the string to be folded can be simulated using easy-to-use features of concurrent functional languages.
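
As a rough illustration of why lattice models sit comfortably in a functional language, the sketch below scores an HP-style fold on the two-dimensional lattice in a few lines of Haskell. It is a hedged, generic example, not the model or the code behind the poster, and it omits the self-avoidance (injectivity) check a real model needs.

```haskell
-- A fold is described as a list of moves on the 2D lattice; the score counts
-- hydrophobic ('H') residues that end up on adjacent lattice nodes without
-- being consecutive in the chain.
data Move = U | D | L | R deriving (Eq, Show)

type Pos = (Int, Int)

step :: Pos -> Move -> Pos
step (x, y) U = (x, y + 1)
step (x, y) D = (x, y - 1)
step (x, y) L = (x - 1, y)
step (x, y) R = (x + 1, y)

-- Residue positions, starting at the origin.
positions :: [Move] -> [Pos]
positions = scanl step (0, 0)

adjacent :: Pos -> Pos -> Bool
adjacent (x1, y1) (x2, y2) = abs (x1 - x2) + abs (y1 - y2) == 1

score :: String -> [Move] -> Int
score chain moves =
  length
    [ ()
    | (i, (a, p)) <- labelled
    , (j, (b, q)) <- labelled
    , i < j - 1                 -- not consecutive in the chain
    , a == 'H', b == 'H'        -- both hydrophobic
    , adjacent p q              -- adjacent on the grid
    ]
  where
    labelled = zip [0 :: Int ..] (zip chain (positions moves))

main :: IO ()
main = print (score "HPHH" [R, U, L])  -- one H-H bond in this square fold
```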


international conference on functional programming | 1997

Modelling string folding with G2L grammars (poster)

Natalio Krasnogor; Pablo E. Martínez López; Pablo Mocciola; David Pelta

In the last few years an entirely new discipline has emerged, 'Computational Biology'. It tries to solve problems arising from the Biosciences using mathematical and computational tools. The Protein Folding Problem is one of the most important open problems in Biology due to its theoretical and pragmatic implications. In order to study the Protein Folding Problem in an abstract way, we will use a generalization of it due to Paterson and Przytycka [4]. In their paper they consider the String Folding Problem, which can be stated as follows: given a finite string S, an integer k, and a grid G, is there a fold of S in G with a score of at least k? A fold of S in G is defined as an injective mapping F : [1..n] → G, where n = |S|, and if 1 ≤ i, j ≤ n and i = j + 1, then F(i) is adjacent to F(j) in G; the score of F is computed by counting the number of identical symbol pairs mapped to adjacent nodes of G, calling those pairs bonds. In their paper, Paterson and Przytycka show that String Folding is NP-complete in Z² and Z³, while other instances of the problem remain NP-hard. In our work we try to model the process of string folding using an extension of L-systems to generate a family of restricted parallel graph grammars. The biologist Aristid Lindenmayer developed what came to be named 'Lindenmayer parallel rewriting systems' while trying to model development in plants [2]. A basic L-system is a grammar G = {Σ, Π, α}, where Σ is a finite set of symbols called the alphabet, Π is the set of rewriting rules, and α ∈ Σ* is the starting string that generates the language. The simplest L-system is context-independent, taking Π with the structure {r : Σ → Σ*}, which means that a single character of a string S maps to a string of Σ*. One of the most important features of L-systems is that the rewriting rules are applied in parallel all over the original string, while in other grammars rules apply sequentially. We can extend L-systems to allow context-sensitivity if the rewriting rules are of the form L{P}R → S, with P ∈ Σ and L, R ∈ Σ*. The traditional interpretation of L-systems is 'Logo-like' drawings. The G2L grammars are a neat extension of L-systems that can be used to specify arbitrary graphs (see [1] for a detailed description). We are researching the use of restricted G2L grammars as a description tool for string foldings. Our aim is to represent the graph induced by the mapping F using this new subfamily of grammars. We must be able to
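
Parallel rewriting in a context-independent L-system can be made concrete in a few lines of Haskell. This is a generic illustration of L-system rewriting only, not the restricted G2L grammars investigated in the poster.

```haskell
import Data.Maybe (fromMaybe)

-- Context-independent L-system: every symbol of the current string is
-- rewritten in parallel at each step, unlike sequential grammars where a
-- single occurrence is rewritten at a time.
type Rule = (Char, String)

rewrite :: [Rule] -> String -> String
rewrite rules = concatMap apply
  where
    apply c = fromMaybe [c] (lookup c rules)   -- symbols without a rule stay

-- Lindenmayer's classic algae example: a -> ab, b -> a.
algae :: [Rule]
algae = [('a', "ab"), ('b', "a")]

-- Successive generations: "a", "ab", "aba", "abaab", "abaababa".
main :: IO ()
main = mapM_ putStrLn (take 5 (iterate (rewrite algae) "a"))
```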


BRICS Report Series | 2003

RS-2 Tagging, Encoding, and Jones Optimality

Olivier Danvy; Pablo E. Martínez López

A partial evaluator is said to be Jones-optimal if the result of specializing a self-interpreter with respect to a source program is textually identical to the source program, modulo renaming. Jones optimality has already been obtained if the self-interpreter is untyped. If the self-interpreter is typed, however, residual programs are cluttered with type tags. To obtain the original source program, these tags must be removed. A number of sophisticated solutions have already been proposed. We observe, however, that with a simple representation shift, ordinary partial evaluation is already Jones-optimal, modulo an encoding. The representation shift amounts to reading the type tags as constructors for higher-order abstract syntax. We substantiate our observation by considering a typed self-interpreter whose input syntax is higher-order. Specializing this interpreter with respect to a source program yields a residual program that is textually identical to the source program, modulo renaming.


international conference on functional programming | 1998

A functional programming approach to hypermedia authoring

Daniel H. Marcos; Pablo E. Martínez López; Walter A. Risi

Hypermedia authoring [Nie95] faces numerous challenges today. The expansive growth of the WWW and the increasing accessibility of multimedia technologies have made hypermedia preferable to traditional media. As expected, this growth has raised the requirements for tools which ease the hypermedia-generation process. While naive approaches to authoring consist in using WYSIWYG tools directly (design and implementation are done in a single phase), a more structured approach is needed when authoring in the large. Several methodologies were proposed for systematic hypermedia design. Methodologies make a clear separation between the different phases in the hypermedia-generation process (conceptual design, navigational design, implementation, etc.); see for example [SB95, SR95]. A common feature of these methodologies is the separation between the design and the implementation of the hypermedia application. In [FNN98], it is argued that these approaches can leave a wide gap between initial design and final production. Consequently, design problems are detected very late and can be very expensive to fix. In [FNN96, NN95] it is stated that, while a structured approach is necessary, low-cost prototyping is also very important for early evaluation of hypermedias. Functional Programming (FP for short) offers a number of advantages to programmers [BW88, Hug89]. It features a high level of abstraction, modularity and a concise declarative style which makes programs easy to read and understand. An attractive feature of FP is that it allows programmers to map their ideas almost directly to functional code, bridging the gap between specification and implementation. This work presents HyCom (Hypermedia Combinators), a hypermedia-authoring framework based on the functional language Haskell [PH+97]. HyCom provides constructions to specify navigational structures (nodes, links) and interface issues in an abstract, platform-independent way, in a style similar to that of [vDM98]. HyCom uses transformers and combinators, a programming technique widely used in FP [Hug95, FJ96, Hud96], to express relationships between hypermedia components. HyCom offers several attractive features. On one hand, a functional language is used as a hypermedia-specification language, and thus authors benefit from FP's well-known expressiveness. On the other hand, using FP provides a high abstraction level in the design process, and also allows a modular, structured approach to design. Furthermore, hypermedia designs expressed in HyCom can be automatically rendered to WWW pages or other platforms by using appropriate rendering functions; thus prototyping and final implementation involve no additional effort from the author. This is a consequence of the short distance between specification and implementation obtained by using FP. We propose using an FP-based framework for specifying and building hypermedias. We think this is an interesting proposal, since FP's expressiveness is well suited for abstract specifications, which indeed can be translated almost directly to working implementations, a desirable feature for achieving low-cost prototyping. Furthermore, we think this is an original contribution, since the hypermedia field has been traditionally distant from FP.
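
To suggest what a combinator-based hypermedia specification can look like, here is a small Haskell sketch in the same spirit. All of the names in it (Node, Content, linkTo, renderHtml) are hypothetical and do not correspond to HyCom's actual API; it only illustrates the idea of an abstract navigational structure with interchangeable rendering functions.

```haskell
-- Abstract navigational structure: nodes with textual content and links.
data Node = Node
  { nodeTitle :: String
  , nodeBody  :: [Content]
  }

data Content
  = Text String
  | Link String Node     -- anchor text and target node

-- A combinator: extend a node with a link to another node.
linkTo :: String -> Node -> Node -> Node
linkTo anchor target n = n { nodeBody = nodeBody n ++ [Link anchor target] }

-- One possible rendering function targeting HTML; other back ends could be
-- written against the same abstract structure.
renderHtml :: Node -> String
renderHtml (Node t body) =
  "<h1>" ++ t ++ "</h1>" ++ concatMap renderContent body
  where
    renderContent (Text s)     = "<p>" ++ s ++ "</p>"
    renderContent (Link a tgt) = "<a href='" ++ nodeTitle tgt ++ ".html'>" ++ a ++ "</a>"

main :: IO ()
main =
  let home  = Node "Home"  [Text "Welcome"]
      about = Node "About" [Text "Credits"]
  in putStrLn (renderHtml (linkTo "about this site" about home))
```

Because a design is just a value of an abstract datatype, swapping renderHtml for a different rendering function retargets the same design to another platform, which is the low-cost prototyping argument made above.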


partial evaluation and semantic-based program manipulation | 2002

Principal type specialisation

Pablo E. Martínez López; John Hughes


V Congreso Argentino de Ciencias de la Computación | 1999

Effective mapping of hypermedia high-level design primitives to implementation environments

Pablo E. Martínez López; Daniel H. Marcos; Walter A. Risi

Collaboration


Dive into Pablo E. Martínez López's collaboration.

Top Co-Authors

Pablo Mocciola (National University of La Plata)
Daniel H. Marcos (National University of La Plata)
Walter A. Risi (National University of La Plata)
David Pelta (National University of La Plata)
Gabriel Alfredo Baum (National University of La Plata)
Federico Feller (National University of La Plata)
Germán Esteban Ruiz (National University of La Plata)