Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Marco Danelutto is active.

Publications


Featured research published by Marco Danelutto.


Future Generation Computer Systems | 2003

An advanced environment supporting structured parallel programming in Java

Marco Aldinucci; Marco Danelutto; P. Teti

In this work we present Lithium, a pure Java structured parallel programming environment based on skeletons (common, reusable and efficient parallelism exploitation patterns). Lithium is implemented as a Java package and represents both the first skeleton-based programming environment in Java and the first complete skeleton-based Java environment exploiting macro data flow implementation techniques. Lithium supports a set of user code optimizations based on skeleton rewriting techniques. These optimizations improve both absolute performance and resource usage with respect to the original user code. Parallel programs developed using the library run on any network of workstations, provided the workstations support a plain JRE. The paper describes the library implementation, outlines the optimization techniques used and finally presents the performance results obtained on both synthetic and real applications.
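The central idea of a skeleton such as the task farm is that the programmer supplies only the sequential worker function, while the environment handles worker replication and scheduling. The following is a minimal Python sketch of that idea, not Lithium's Java API; the `farm` name and its parameters are illustrative assumptions.

```python
from concurrent.futures import ThreadPoolExecutor

def farm(worker, inputs, degree=4):
    """Task-farm skeleton sketch (illustrative, not Lithium's API):
    apply `worker` to each input using `degree` parallel workers,
    returning results in input order."""
    with ThreadPoolExecutor(max_workers=degree) as pool:
        return list(pool.map(worker, inputs))

# The user writes only the sequential worker; parallelism is implicit.
squares = farm(lambda x: x * x, range(8))
print(squares)  # [0, 1, 4, 9, 16, 25, 36, 49]
```

Because `pool.map` preserves input order, the farm behaves exactly like a sequential `map` from the caller's point of view, which is what makes skeleton rewriting optimizations semantically safe.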


Annales Des Télécommunications | 2009

GCM: a grid extension to Fractal for autonomous distributed components

Françoise Baude; Denis Caromel; Cédric Dalmasso; Marco Danelutto; Vladimir Getov; Ludovic Henrio; Christian Pérez

This article presents an extension of the Fractal component model targeted at programming applications to be run on computing grids: the grid component model (GCM). First, to address the problem of deploying components on the grid, deployment strategies have been defined. Then, as grid applications often result from the composition of many parallel (sometimes identical) components, composition mechanisms supporting collective communications on a set of components are introduced. Finally, because of the constantly evolving environment and requirements of grid applications, the GCM defines a set of features intended to support component autonomicity. All these aspects are developed in this paper with the challenging objective of easing the programming of grid applications, while allowing GCM components to also be the unit of deployment and management.


Future Generation Computer Systems | 1992

A methodology for the development and the support of massively parallel programs

Marco Danelutto; Roberto Di Meglio; Salvatore Orlando; Susanna Pelagatti; Marco Vanneschi

The most important features that a parallel programming language should provide are portability, modularity, and ease of use, as well as performance and efficiency. Current parallel languages offer only some of these features. For instance, most of these languages allow programmers to efficiently exploit the massively parallel target machine, but the performance of each application must usually be estimated by the programmer, without the support of any tool, and the programs produced using such languages are neither portable nor easily modifiable. Here, we present a methodology for easily writing efficient, high-performance and portable massively parallel programs. The methodology is based on the definition of a new explicitly parallel programming language, P3L, and of a set of compiling tools that automatically adapt the program features to the target architecture hardware. The target architectures considered here are general-purpose, distributed-memory MIMD architectures, which provide the scalability and low cost necessary to tackle the goal of massively parallel computing. Following the P3L methodology, the programmer need only specify the kind of parallelism to be exploited (pipeline, farm, data, etc.) in the parallel application. The P3L programming tools then automatically generate the process network that implements and optimizes, for the given target architecture, the particular kind of parallelism the programmer indicated as the most suitable for the application.
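The key property of the P3L approach described above is that skeletons compose: a pipeline stage may itself be a farm, and the compiler derives the process network from the composition. A minimal Python sketch of such composition, purely illustrative and unrelated to P3L's actual syntax or compiler (`farm`, `pipeline` and their parameters are assumed names):

```python
from concurrent.futures import ThreadPoolExecutor

def farm(worker, degree=4):
    """Return a stage that applies `worker` to a whole stream in parallel
    (illustrative sketch; P3L's real farm is compiled to a process network)."""
    def stage(stream):
        with ThreadPoolExecutor(max_workers=degree) as pool:
            return list(pool.map(worker, stream))
    return stage

def pipeline(*stages):
    """Compose stages so the output stream of one feeds the next."""
    def run(stream):
        for stage in stages:
            stream = stage(stream)
        return stream
    return run

# A two-stage pipeline where each stage is internally a farm.
app = pipeline(farm(lambda x: x + 1), farm(lambda x: x * 2))
print(app(range(4)))  # [2, 4, 6, 8]
```

In P3L the same structural information (which skeletons, how nested) is what lets the compiler pick implementation templates and optimize for the target machine without programmer intervention.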


Parallel Computing | 1999

SkIE: a heterogeneous environment for HPC applications

Bruno Bacci; Marco Danelutto; Susanna Pelagatti; Marco Vanneschi

Technological directions for innovative HPC software environments are discussed in this paper. We focus on industrial user requirements for heterogeneous multidisciplinary applications, performance portability, rapid prototyping and software reuse, and the integration and interoperability of standard tools. The various issues are demonstrated with reference to the PQE2000 project and its programming environment, the Skeleton-based Integrated Environment (SkIE). SkIE includes a coordination language, SkIECL, allowing designers to express, in a primitive and structured way, efficient combinations of data parallelism and task parallelism. The goal is achieving fast development and good efficiency for applications in different areas. Modules developed with standard languages and tools are encapsulated into SkIECL structures to form the global application. Performance models associated with the coordination language allow powerful optimizations to be introduced both at run time and at compile time without the direct intervention of the programmer. The paper also discusses the features of the SkIE environment related to debugging, performance analysis tools, visualization and the graphical user interface. A discussion of the results achieved in some applications developed using the environment concludes the paper.


Grid Computing | 2006

ASSIST as a research framework for high-performance grid programming environments

Marco Aldinucci; Massimo Coppola; Marco Vanneschi; Corrado Zoccolo; Marco Danelutto

The research activity of our group at the Department of Computer Science, University of Pisa, is focused on programming models and environments for the development of high-performance multidisciplinary applications. The enabling computing platforms we are considering are complex distributed architectures whose nodes are parallel machines of any kind, including PC/workstation clusters. In general such platforms are characterized by heterogeneity of nodes, and by dynamicity in resource management and allocation. In this context, Grid platforms at various levels of integration [25] are of main interest, including complex distributed structures of general and dedicated subsystems, private heterogeneous networks, and systems for pervasive and ubiquitous computing. In the following, we shall use the term Grid to refer to such an architectural scenario.


International Conference on Parallel Processing | 2011

Accelerating code on multi-cores with fastflow

Marco Aldinucci; Marco Danelutto; Peter Kilpatrick; Massimiliano Meneghin; Massimo Torquati

FastFlow is a programming framework specifically targeting cache-coherent shared-memory multi-cores. It is implemented as a stack of C++ template libraries built on top of lock-free (and memory fence free) synchronization mechanisms. Its philosophy is to combine programmability with performance. In this paper a new FastFlow programming methodology aimed at supporting parallelization of existing sequential code via offloading onto a dynamically created software accelerator is presented. The new methodology has been validated using a set of simple micro-benchmarks and some real applications.
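The offloading methodology above keeps the main control flow sequential while delegating hot tasks to a software accelerator created at run time. The sketch below illustrates that pattern in Python; it is an assumption-laden illustration, not FastFlow's C++ API (the `SoftwareAccelerator` class and its `offload`/`collect` methods are invented names for this example).

```python
import queue
import threading

class SoftwareAccelerator:
    """Illustrative sketch of the offload pattern (NOT FastFlow's API):
    a sequential thread offloads tasks to worker threads and collects
    the results later, leaving the rest of the code unchanged."""
    def __init__(self, fn, workers=2):
        self.tasks, self.results = queue.Queue(), queue.Queue()
        self.threads = [threading.Thread(target=self._loop, args=(fn,), daemon=True)
                        for _ in range(workers)]
        for t in self.threads:
            t.start()

    def _loop(self, fn):
        while True:
            item = self.tasks.get()
            if item is None:          # poison pill: shut this worker down
                break
            self.results.put(fn(item))

    def offload(self, item):          # called from the sequential code
        self.tasks.put(item)

    def collect(self, n):             # gather n results, then stop workers
        out = [self.results.get() for _ in range(n)]
        for _ in self.threads:
            self.tasks.put(None)
        return out

acc = SoftwareAccelerator(lambda x: x * x)
for i in range(4):
    acc.offload(i)                    # sequential code keeps running here
print(sorted(acc.collect(4)))         # [0, 1, 4, 9]
```

Results arrive in completion order, so they are sorted here; FastFlow's accelerator similarly decouples task submission from result collection, which is what makes incremental parallelization of existing sequential loops possible.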


European Conference on Parallel Processing | 1997

Skeletons for Data Parallelism in P3L

Marco Danelutto; Fabrizio Pasqualetti; Susanna Pelagatti

This paper addresses the application of a skeleton/template compiling strategy to structured data parallel computations. In particular, we discuss how data parallelism is expressed and implemented in P3L, a structured parallel language based on skeletons. In the paper, we describe the new set of P3L data parallel skeletons, outline the implementation templates used to compile data parallel computations, and discuss the template-based compiling process and the optimizations that can be carried out. Finally, we give some preliminary implementation results.


International Conference on Parallel Processing | 2012

An efficient unbounded lock-free queue for multi-core systems

Marco Aldinucci; Marco Danelutto; Peter Kilpatrick; Massimiliano Meneghin; Massimo Torquati

The use of efficient synchronization mechanisms is crucial for implementing fine-grained parallel programs on modern shared-cache multi-core architectures. In this paper we study this problem by considering Single-Producer/Single-Consumer (SPSC) coordination using unbounded queues. A novel unbounded SPSC algorithm capable of reducing the raw synchronization latency and speeding up Producer-Consumer coordination is presented. The algorithm has been extensively tested on a shared-cache multi-core platform and a sketch proof of correctness is presented. The proposed queues have been used as basic building blocks to implement the FastFlow parallel framework, which has been demonstrated to offer very good performance for fine-grained parallel applications.
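The reason SPSC queues can avoid locks entirely is the single-writer discipline: the producer is the only thread that ever writes the tail, the consumer the only one that ever writes the head. The sketch below shows an unbounded linked-list SPSC queue in that spirit; it is a simplified illustration under CPython's memory model, not the paper's actual algorithm, and the class and method names are assumptions.

```python
class SPSCQueue:
    """Unbounded single-producer/single-consumer queue sketch:
    the producer only writes `tail`, the consumer only writes `head`,
    so the two threads never contend on the same pointer.
    (Illustrative only; the paper's lock-free algorithm differs.)"""
    class _Node:
        __slots__ = ("value", "next")
        def __init__(self, value=None):
            self.value, self.next = value, None

    def __init__(self):
        # A dummy node lets head and tail start non-null.
        self.head = self.tail = self._Node()

    def push(self, value):            # producer side only
        node = self._Node(value)
        self.tail.next = node         # publish the new node...
        self.tail = node              # ...then advance the tail

    def pop(self):                    # consumer side only
        nxt = self.head.next
        if nxt is None:
            return None               # queue currently empty
        self.head = nxt
        return nxt.value

q = SPSCQueue()
for i in range(3):
    q.push(i)
print([q.pop(), q.pop(), q.pop(), q.pop()])  # [0, 1, 2, None]
```

In a real C++ implementation the pointer publication in `push` would need explicit memory-order guarantees; here the illustration relies on CPython executing the statements in order.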


International Workshop on Component Models and Systems for Grid Applications | 2005

Components for High-Performance Grid Programming in Grid.IT

Marco Aldinucci; Sonia Campa; Massimo Coppola; Marco Danelutto; Domenico Laforenza; Diego Puppin; Luca Scarponi; Marco Vanneschi; Corrado Zoccolo

This chapter presents the main ideas of the high-performance component-based Grid programming environment of the Grid.it project. High-performance components are characterized by a programming model that integrates the concepts of structured parallelism, component interaction, compositionality, and adaptivity. We show that ASSIST, the prototype parallel programming environment currently under development by our group, is a suitable basis to capture all the desired features of the component model in a flexible and efficient manner. For the sake of interoperability, ASSIST modules or programs are automatically encapsulated in standard frameworks; currently, we are experimenting with Web Services and the CORBA Component Model. Grid applications, built as compositions of ASSIST components and possibly other existing (legacy) components, are supported by an innovative Grid Abstract Machine, which includes essential abstractions of standard middleware services and a hierarchical Application Manager (AM). The AM supports static allocation and dynamic reallocation of adaptive applications according to a performance contract, a reconfiguration strategy, and a performance model.


Formal Methods | 2013

The ParaPhrase Project: Parallel patterns for adaptive heterogeneous multicore systems

Kevin Hammond; Marco Aldinucci; Christopher Brown; Francesco Cesarini; Marco Danelutto; Horacio González-Vélez; Peter Kilpatrick; Rainer Keller; Michael Rossbory; Gilad Shainer

This paper describes the ParaPhrase project, a new 3-year targeted research project funded under EU Framework 7 Objective 3.4 (Computer Systems), starting in October 2011. ParaPhrase aims to follow a new approach to introducing parallelism using advanced refactoring techniques coupled with high-level parallel design patterns. The refactoring approach will use these design patterns to restructure programs defined as networks of software components into other forms that are more suited to parallel execution. The programmer will be aided by high-level cost information that will be integrated into the refactoring tools. The implementation of these patterns will then use a well-understood algorithmic skeleton approach to achieve good parallelism.

Collaboration


Dive into Marco Danelutto's collaborations.

Top Co-Authors

Peter Kilpatrick

Queen's University Belfast

Domenico Laforenza

Istituto di Scienza e Tecnologie dell'Informazione