Dive into the research topics where Michel P. Schellekens is active.

Publication


Featured research published by Michel P. Schellekens.


Electronic Notes in Theoretical Computer Science | 1995

The Smyth Completion: A Common Foundation for Denotational Semantics and Complexity Analysis

Michel P. Schellekens

Abstract. The Smyth completion ([15], [16], [18] and [19]) provides a topological foundation for Denotational Semantics. We show that this theory simultaneously provides a topological foundation for the complexity analysis of programs via the new theory of "complexity (distance) spaces". The complexity spaces are shown to be weightable ([13], [8], [10]) and thus belong to the class of S-completable quasi-uniform spaces ([19]). We show that the S-completable spaces possess a sequential Smyth completion. The applicability of the theory to "Divide & Conquer" algorithms is illustrated by a new proof (based on the Banach theorem) of the fact that mergesort has optimal asymptotic average running time.
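The Banach-theorem argument can be illustrated with a small numerical sketch (my illustration, not code from the paper): a stylized mergesort comparison recurrence T(n) = 2T(n/2) + (n - 1), T(1) = 0, is treated as a functional and iterated to its unique fixed point, which grows as n log n.

```python
# Sketch (illustration only): iterating the Divide & Conquer functional
#   Phi(T)(n) = 2*T(n//2) + (n - 1),  T(1) = 0,
# on powers of two. By a Banach-style fixed-point argument, Phi has a
# unique fixed point, and that fixed point is Theta(n log n).
import math

SIZES = [2 ** k for k in range(11)]  # n = 1, 2, ..., 1024

def phi(T):
    """One application of the recurrence functional."""
    return {n: 0 if n == 1 else 2 * T[n // 2] + (n - 1) for n in SIZES}

T = {n: 0 for n in SIZES}  # arbitrary starting guess
for _ in range(len(SIZES)):  # enough iterations to reach the fixed point
    T = phi(T)

# The fixed point agrees with the closed form n*log2(n) - n + 1.
for n in SIZES:
    assert T[n] == n * int(math.log2(n)) - n + 1
```

Each iteration fixes one more level of the recursion tree, so after log2(max n) + 1 steps the iterate no longer changes: a finitary shadow of the contraction argument.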


Topology and its Applications | 1999

Quasi-metric properties of complexity spaces

Salvador Romaguera; Michel P. Schellekens

Abstract. The complexity (quasi-metric) space has been introduced as a part of the development of a topological foundation for the complexity analysis of algorithms (Schellekens, 1995). Applications of this theory to the complexity analysis of Divide and Conquer algorithms have been discussed by Schellekens (1995). Here we obtain several quasi-metric properties of the complexity space. The main results obtained are the Smyth-completeness of the complexity space and the compactness of closed complexity spaces which possess a (complexity) lower bound. Finally, some implications of these results for the above-mentioned complexity analysis techniques are discussed, and the total boundedness of complexity spaces with a lower bound is examined in the light of Smyth's computational interpretation of this property (Smyth, 1991).
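For orientation, the completeness notion at stake can be stated as follows (a standard formulation in this line of work; notation mine, details may differ from the paper): a quasi-metric space (X, d) is Smyth-complete when every left K-Cauchy sequence converges with respect to the associated metric d^s.

```latex
d^{s}(x,y) = \max\{\, d(x,y),\ d(y,x) \,\},
\qquad
\text{left K-Cauchy:}\quad
\forall \varepsilon > 0\ \exists N\ \forall m \ge n \ge N:\ d(x_n, x_m) < \varepsilon .
```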


Theoretical Computer Science | 2006

Partial quasi-metrics

Hans-Peter A. Künzi; Homeira Pajoohesh; Michel P. Schellekens

In this article we introduce and investigate the concept of a partial quasi-metric and some of its applications. We show that many important constructions studied in Matthews's theory of partial metrics can still be used successfully in this more general setting. In particular, we consider the bicompletion of the quasi-metric space that is associated with a partial quasi-metric space and study its applications in groups and BCK-algebras.
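For orientation, a common axiomatization in this line of work (Matthews's partial-metric axioms with symmetry dropped; the paper's exact formulation may differ in detail) requires of a map p : X × X → [0, ∞):

```latex
\begin{aligned}
&p(x,x) \le p(x,y)
  && \text{(small self-distances)}\\
&p(x,z) \le p(x,y) + p(y,z) - p(y,y)
  && \text{(sharpened triangle inequality)}\\
&p(x,x) = p(x,y) = p(y,x) = p(y,y) \;\Longrightarrow\; x = y
  && \text{(separation)}
\end{aligned}
```

Unlike a metric, a point may have nonzero self-distance p(x, x); this is what lets partial (quasi-)metrics model partially defined objects.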


Archive | 2008

A Modular Calculus for the Average Cost of Data Structuring

Michel P. Schellekens

A Modular Calculus for the Average Cost of Data Structuring introduces MOQA, a new domain-specific programming language which guarantees that the average-case time analysis of its programs is modular. Time here refers to a broad notion of cost: it can be used to estimate actual running time, but also other quantitative information such as power consumption. Modularity means that the average time of a program can be computed directly from the times of its constituents, something no programming language of this scope has been able to guarantee so far. MOQA principles can be incorporated in any standard programming language.

MOQA supports the tracking of data and their distributions throughout computations, based on the notion of random bag preservation. This allows a unified approach to average-case time analysis and resolves fundamental bottleneck problems in the area. The main techniques are illustrated in an accompanying Flash tutorial, whose visual nature can provide new teaching ideas for algorithms courses.

This volume, with forewords by Greg Bollella and Dana Scott, presents novel programs based on these advances, including the first randomness-preserving version of Heapsort. Programs are provided, along with derivations of their average-case time, to illustrate the radically different approach to average-case timing. An automated static timing tool applies the Modular Calculus to extract the average-case running time of programs directly from their MOQA code.

The book is designed for a professional audience of researchers and practitioners in industry with an interest in algorithmic analysis as well as static timing and power analysis, areas of growing importance. It is also suitable as an advanced-level text or reference book for students in computer science, electrical engineering and mathematics.

Michel Schellekens obtained his PhD from Carnegie Mellon University, following which he worked as a Marie Curie Fellow at Imperial College London. He is currently an Associate Professor in the Department of Computer Science at University College Cork - National University of Ireland, Cork, where he leads the Centre for Efficiency-Oriented Languages (CEOL) as a Science Foundation Ireland Principal Investigator.
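The modularity claim can be illustrated on a classical case (my illustration, not MOQA code): quicksort's partition step leaves each sub-array uniformly random when the input is uniformly random, a textbook instance of the distribution preservation that random bag preservation generalizes. This licenses a compositional recurrence for the average comparison count, which the sketch below checks against brute-force enumeration over all permutations.

```python
# Sketch (illustration only): compositional average-case analysis of quicksort.
# Because partitioning a uniformly random permutation leaves both sub-arrays
# uniformly random, the average comparison count obeys
#   Q(n) = (n - 1) + (1/n) * sum_{k=1}^{n} (Q(k-1) + Q(n-k)),
# with no need to re-examine the data distribution inside recursive calls.
from itertools import permutations
from fractions import Fraction

def quicksort_comparisons(xs):
    """Comparisons used by basic quicksort, first element as pivot."""
    if len(xs) <= 1:
        return 0
    pivot, rest = xs[0], xs[1:]
    left = [x for x in rest if x < pivot]
    right = [x for x in rest if x >= pivot]
    return len(rest) + quicksort_comparisons(left) + quicksort_comparisons(right)

def brute_force_average(n):
    """Exact average comparisons over all n! input permutations."""
    perms = list(permutations(range(n)))
    return Fraction(sum(quicksort_comparisons(list(p)) for p in perms), len(perms))

def compositional_average(n):
    """Exact average via the compositional recurrence: no enumeration needed."""
    Q = [Fraction(0), Fraction(0)]  # Q[0], Q[1]
    for m in range(2, n + 1):
        Q.append(Fraction(m - 1)
                 + sum(Q[k - 1] + Q[m - k] for k in range(1, m + 1)) / m)
    return Q[n]

for n in range(7):
    assert brute_force_average(n) == compositional_average(n)
```

The brute-force side costs n! evaluations while the compositional side is quadratic; this gap is exactly the payoff modular average-case reasoning is after.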


Annals of the New York Academy of Sciences | 1996

On Upper Weightable Spaces

Michel P. Schellekens

The weightable quasi-pseudo-metric spaces have been introduced by Matthews as part of the study of the denotational semantics of dataflow networks (e.g., [6] and [7]). The study of these spaces has been continued in the context of Nonsymmetric Topology by Künzi and Vajner [4], [5]. We introduce and motivate the class of upper weightable quasi-pseudo-metric spaces. The relationship with the development of a topological foundation for the complexity analysis of programs [10] is discussed, which leads to the study of the weightable optimal (quasi-pseudo-metric) join semilattices.
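Matthews's weightability condition, recalled for orientation (a standard formulation; the "upper" variant introduced in the paper refines it): a quasi-pseudo-metric d on X is weightable when there is a weighting function w : X → [0, ∞) satisfying

```latex
d(x,y) + w(x) \;=\; d(y,x) + w(y) \qquad \text{for all } x, y \in X .
```

The weight w measures the asymmetry of d pointwise; weightable quasi-metrics correspond exactly to Matthews's partial metrics.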


Electronic Notes in Theoretical Computer Science | 2001

Weightable quasi-metric semigroups and semilattices

Salvador Romaguera; Michel P. Schellekens

Abstract. In [Sch00] a bijection has been established, for the case of semilattices, between invariant partial metrics and semivaluations. Semivaluations are a natural generalization of valuations on lattices to the context of semilattices and arise in many different contexts in Quantitative Domain Theory ([Sch00]). Examples of well-known spaces which are semivaluation spaces are the Baire quasi-metric spaces of [Mat95], the complexity spaces of [Sch95] and the interval domain ([EEP97]). In [Sch00a], we have shown that the totally bounded Scott domains of [Smy91] can also be represented as semivaluation spaces. In this extended abstract we explore the notion of a semivaluation space in the context of semigroups. This extension is a natural one, since for each of the above results an invariant partial metric is involved, and the notion of invariance has been well studied for semigroups as well (e.g. [Ko82]). As a further motivation, we discuss three Computer Science examples of semigroups, given by the domain of words ([Smy91]), the complexity spaces ([Sch95], [RS99]) and the interval domain ([EEP97]). An extension of the correspondence theorem of [Sch00] to the context of semigroups is obtained.


The Journal of Logic and Algebraic Programming | 2010

MOQA; unlocking the potential of compositional static average-case analysis

Michel P. Schellekens

Abstract. Compositionality is the "golden key" to static analysis and plays a central role in static worst-case time analysis. We show that compositionality, combined with the capacity for tracking data distributions, unlocks a useful novel technique for average-case analysis. The applicability of the technique has been demonstrated via the static average-case analysis tool DISTRI. The tool automatically extracts average-case time from the source code of programs implemented in the novel programming language MOQA. MOQA enables the prediction of the average number of basic steps performed in a computation, paving the way for static analysis of complexity measures such as average time or average power use. MOQA's unique feature is a guaranteed average-case timing compositionality.

The compositionality property brings a strong advantage for the programmer: the capacity to combine parts of code, where the average time is simply the sum of the times of the parts, is very helpful in static analysis and is not available in current languages. Moreover, re-use is a key factor in the MOQA approach: once the average time is determined for a piece of code, this time holds in any context, so it can be re-used and its timing impact is always the same. Compositionality also improves the precision of static average-case analysis, supporting accurate estimates of the average number of basic operations of MOQA programs. The MOQA "language" essentially consists of a suite of data-structuring operations together with conditionals, for-loops and recursion. As such, MOQA can be incorporated in any traditional programming language, importing all of its benefits in a familiar context.

Compositionality for the average case is subtle, and one may easily be tempted to conclude that it "comes for free". For genuine compositional reasoning, however, one needs to be able to track data and their distributions throughout computations, a non-trivial problem. The lack of an efficient method to track distributions has plagued all prior static average-case analysis approaches. We show how MOQA enables the finitary representation and tracking of the distribution of data states throughout computations, unlocking the true potential of compositional reasoning. Links with reversible computing are discussed. The highly visual aspect of this novel and unified approach to the analysis of algorithms also has a pedagogical advantage, providing students with useful insights into the nature of algorithms and their analysis.


Quaestiones Mathematicae | 2000

The quasi-metric of complexity convergence

Salvador Romaguera; Michel P. Schellekens

For any weightable quasi-metric space (X, d) having a maximum with respect to the associated order ≤_d, the notion of the quasi-metric of complexity convergence on the function space (equivalently, the space of sequences) X^ω is introduced and studied. We observe that its induced quasi-uniformity is finer than the quasi-uniformity of pointwise convergence and weaker than the quasi-uniformity of uniform convergence. We show that it coincides with the quasi-uniformity of pointwise convergence if and only if the quasi-metric space (X, d) is bounded, and that it coincides with the quasi-uniformity of uniform convergence if and only if X is a singleton. We also investigate completeness of the quasi-metric of complexity convergence. Finally, we obtain versions of the celebrated Grothendieck theorem in this context.


Theory of Computing Systems / Mathematical Systems Theory | 2012

The Baire Partial Quasi-Metric Space: A Mathematical Tool for Asymptotic Complexity Analysis in Computer Science

M. A. Cerda-Uguet; Michel P. Schellekens; Oscar Valero

In 1994, S.G. Matthews introduced the notion of partial metric space in order to obtain a suitable mathematical tool for program verification (Ann. N.Y. Acad. Sci. 728:183–197, 1994). He gave an application of this new structure to parallel computing by means of a partial metric version of the celebrated Banach fixed point theorem (Theor. Comput. Sci. 151:195–205, 1995). Later on, M.P. Schellekens introduced the theory of complexity (quasi-metric) spaces as a part of the development of a topological foundation for the asymptotic complexity analysis of programs and algorithms (Electron. Notes Theor. Comput. Sci. 1:211–232, 1995). The applicability of this theory to the asymptotic complexity analysis of Divide and Conquer algorithms was also illustrated by Schellekens. In particular, he gave a new proof, based on the use of the aforementioned Banach fixed point theorem, of the well-known fact that the Mergesort algorithm has optimal asymptotic average running time. In this paper, motivated by the utility of partial metrics in Computer Science, we discuss whether the Matthews fixed point theorem is a suitable tool to analyze the asymptotic complexity of algorithms in the spirit of Schellekens. Specifically, we show that a slight modification of the well-known Baire partial metric on the set of all words over an alphabet constitutes an appropriate tool to carry out the asymptotic complexity analysis of algorithms via fixed point methods, without the need for assuming the convergence condition inherent to the definition of the complexity space in the Schellekens framework. Finally, in order to illustrate and validate the developed theory, we apply our results to analyze the asymptotic complexity of the Quicksort, Mergesort and Largesort algorithms. Concretely, we retrieve through our new approach the well-known facts that the running times of Quicksort (worst-case behaviour), Mergesort and Largesort (average-case behaviour) are in the complexity classes
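For orientation, the classical Baire partial metric on the set of finite and infinite words over an alphabet (the structure the paper modifies; conventions vary across the literature) assigns

```latex
p(x,y) \;=\; 2^{-\ell(x,y)} ,
```

where ℓ(x, y) is the length of the longest common prefix of x and y (with 2^{-∞} read as 0). A finite word x thus has nonzero self-distance p(x, x) = 2^{-|x|}, reflecting that it is only a partial approximation of its infinite extensions.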


International Journal of Computer Mathematics | 2011

The complexity space of partial functions: a connection between complexity analysis and denotational semantics

Salvador Romaguera; Michel P. Schellekens; Oscar Valero


Collaboration


Dive into Michel P. Schellekens's collaboration.

Top Co-Authors:

Salvador Romaguera (Polytechnic University of Valencia)
Ka Lok Man (University College Cork)
Jiaoyan Chen (University College Cork)
K. L. Man (University College Cork)