Publication


Featured research published by Sabri Pllana.


Concurrency and Computation: Practice and Experience | 2005

ASKALON: a tool set for cluster and Grid computing

Thomas Fahringer; Alexandru Jugravu; Sabri Pllana; Radu Prodan; Clovis Seragiotto; Hong Linh Truong

Performance engineering of parallel and distributed applications is a complex task that iterates through various phases, ranging from modeling and prediction to performance measurement, experiment management, data collection, and bottleneck analysis. There is no evidence so far that all of these phases should or can be integrated into a single monolithic tool. Moreover, the emergence of computational Grids as a common single wide-area platform for high-performance computing raises the idea of providing tools as interacting Grid services that share resources, support interoperability among different users and tools, and, most importantly, provide omnipresent services over the Grid. We have developed the ASKALON tool set to support performance-oriented development of parallel and distributed (Grid) applications. ASKALON comprises four tools, coherently integrated into a service-oriented architecture. SCALEA is a performance instrumentation, measurement, and analysis tool for parallel and distributed applications. ZENTURIO is a general-purpose experiment management tool with advanced support for multi-experiment performance analysis and parameter studies. AKSUM provides semi-automatic high-level performance bottleneck detection through a special-purpose performance property specification language. Performance Prophet enables the user to model and predict the performance of parallel applications at the early stages of development. In this paper we describe the overall architecture of the ASKALON tool set and outline the basic functionality of the four constituent tools. The structure of each tool is based on the composition and sharing of remote Grid services, thus enabling tool interoperability. In addition, a data repository allows the tools to share the common application performance and output data that have been derived by the individual tools. A service repository is used to store common portable Grid service implementations. A general-purpose Factory service is employed to create service instances on arbitrary remote Grid sites. Discovering and dynamically binding to existing remote services is achieved through registry services. The ASKALON visualization diagrams support both online and post-mortem visualization of performance and output data. We demonstrate the usefulness and effectiveness of ASKALON by applying the tools to real-world applications.


international symposium on microarchitecture | 2011

PEPPHER: Efficient and Productive Usage of Hybrid Computing Systems

Siegfried Benkner; Sabri Pllana; Jesper Larsson Träff; Philippas Tsigas; Uwe Dolinsky; Cédric Augonnet; Beverly Bachmayer; Christoph W. Kessler; David Moloney; Vitaly Osipov

PEPPHER, a three-year European FP7 project, addresses efficient utilization of hybrid (heterogeneous) computer systems consisting of multicore CPUs with GPU-type accelerators. This article outlines the PEPPHER performance-aware component model, performance prediction means, runtime system, and other aspects of the project. A larger example demonstrates performance portability with the PEPPHER approach across hybrid systems with one to four GPUs.


winter simulation conference | 2002

UML based modeling of performance oriented parallel and distributed applications

Sabri Pllana; Thomas Fahringer

We introduce a novel approach for modeling performance-oriented distributed and parallel applications based on the Unified Modeling Language (UML). We utilize the UML extension mechanisms to customize UML for the domain of performance-oriented distributed and parallel computing. We describe a set of UML building blocks that model some of the most important constructs of the message-passing and shared-memory parallel paradigms and that can be used to develop models for large and complex parallel and distributed applications. We illustrate our approach by modeling a parallel many-body physics application that combines message passing and shared memory parallelism.


Lecture Notes in Computer Science | 2002

On Customizing the UML for Modeling Performance-Oriented Applications

Sabri Pllana; Thomas Fahringer

Modeling of parallel and distributed applications was a preoccupation of numerous research groups in the past. The increasing importance of applications that mix shared memory parallelism with message passing has complicated the modeling effort. Despite the fact that UML represents the de-facto standard modeling language, little work has been done to investigate whether UML can be employed to model performance-oriented parallel and distributed applications. This paper provides a critical look at the utility of UML to model shared memory and message passing applications by employing the UML extension mechanisms. The basic idea is to develop UML building blocks for the most important sequential, shared memory, and message passing constructs. These building blocks can be enriched with additional information, for instance, performance and control flow data. Subsequently, building blocks are combined to represent basically arbitrary complex applications. We further describe how to model the mapping of applications onto process topologies.


international conference on computational science | 2004

A-GWL: Abstract Grid Workflow Language

Thomas Fahringer; Sabri Pllana; Alex Villazón

Grid workflow applications are emerging as one of the most interesting application classes for the Grid. In this paper we present A-GWL, a novel Grid workflow language to describe the workflow of Grid applications at a high level of abstraction. A-GWL has been designed to allow the user to concentrate on describing scientific Grid applications. The user is shielded from details of the underlying Grid infrastructure. A-GWL is XML-based and defines a graph of activities that refer to computational tasks or user interactions. Activities are connected by control- and data-flow links. We have defined A-GWL to support the user in orchestrating Grid workflow applications through a rich set of constructs, including sequences of activities, sub-activities, control-flow mechanisms (sequential flow, exclusive choice, and sequential loops), data-flow mechanisms (input/output ports), and data repositories. Moreover, our work differs from most existing Grid workflow languages through advanced workflow constructs such as parallel execution of activities with pre- and post-conditions, parallel loops, event-based synchronization mechanisms, and property-based selection of activities. In addition, the user can specify high-level constraints and properties for activities and data-flow links.
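The abstract describes A-GWL as an XML-based graph of activities connected by control- and data-flow links. A minimal fragment in that spirit might look as follows; the element and attribute names here are illustrative assumptions, not the published A-GWL schema:

```xml
<!-- Hypothetical A-GWL-style workflow fragment. Element and attribute
     names are invented for illustration and do not reproduce the actual
     A-GWL syntax. It sketches two activities connected by one data-flow
     and one control-flow link, as the abstract describes. -->
<workflow name="exampleExperiment">
  <activity name="prepare" type="computation">
    <output port="mesh"/>
  </activity>
  <activity name="simulate" type="computation">
    <input port="mesh"/>
  </activity>
  <dataLink from="prepare/mesh" to="simulate/mesh"/>
  <controlLink from="prepare" to="simulate"/>
</workflow>
```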


design, automation, and test in europe | 2012

Programmability and performance portability aspects of heterogeneous multi-/manycore systems

Christoph W. Kessler; Usman Dastgeer; Samuel Thibault; Raymond Namyst; Andrew Richards; Uwe Dolinsky; Siegfried Benkner; Jesper Larsson Träff; Sabri Pllana

We discuss three complementary approaches that can provide both portability and an increased level of abstraction for the programming of heterogeneous multicore systems. Together, these approaches also support performance portability, as currently investigated in the EU FP7 project PEPPHER. In particular, we consider (1) a library-based approach, here represented by the integration of the SkePU C++ skeleton programming library with the StarPU runtime system for dynamic scheduling and dynamic selection of suitable execution units for parallel tasks; (2) a language-based approach, here represented by the Offload-C++ high-level language extensions and Offload compiler to generate platform-specific code; and (3) a component-based approach, specifically the PEPPHER component system for annotating user-level application components with performance metadata, thereby preparing them for performance-aware composition. We discuss the strengths and weaknesses of these approaches and show how they could complement each other in an integrated programming framework for heterogeneous multicore systems.
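The library-based approach in (1) pairs algorithmic skeletons with a runtime that selects a suitable execution unit per task. As a rough illustration of that idea only (this is not the SkePU or StarPU API; all names and the selection heuristic are invented), a map skeleton with pluggable backends might look like:

```python
# Toy sketch of the skeleton-programming idea from approach (1): a "map"
# skeleton whose backend is chosen at call time, loosely analogous to a
# runtime scheduler picking an execution unit. NOT the SkePU/StarPU API.
from concurrent.futures import ThreadPoolExecutor

def map_skeleton(func, data, backend="auto"):
    """Apply `func` elementwise, selecting a backend for execution."""
    data = list(data)
    if backend == "auto":
        # Trivial stand-in for a scheduling heuristic: parallelize large inputs.
        backend = "threads" if len(data) > 1000 else "sequential"
    if backend == "sequential":
        return [func(x) for x in data]
    with ThreadPoolExecutor() as pool:  # "threads" backend
        return list(pool.map(func, data))

squares = map_skeleton(lambda x: x * x, range(4))
print(squares)  # [0, 1, 4, 9]
```

The point of the pattern is that the call site stays identical while the backend varies, which is what makes per-task dynamic selection, and hence performance portability, possible.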


european conference on parallel processing | 2009

Towards an Intelligent Environment for Programming Multi-core Computing Systems

Sabri Pllana; Siegfried Benkner; Eduard Mehofer; Lasse Natvig; Fatos Xhafa

In this position paper we argue that an intelligent program development environment that proactively supports the user helps a mainstream programmer to overcome the difficulties of programming multi-core computing systems. We propose a programming environment based on intelligent software agents that enables users to work at a high level of abstraction while automating low-level implementation activities. The programming environment supports program composition in a model-driven development fashion using parallel building blocks and proactively assists the user during major phases of program development and performance tuning. We highlight the potential benefits of using such a programming environment with usage-scenarios. An experiment with a parallel building block on a Sun UltraSPARC T2 Plus processor shows how the system may assist the programmer in achieving performance improvements.


international conference on parallel processing | 2005

Performance Prophet: a performance modeling and prediction tool for parallel and distributed programs

Sabri Pllana; Thomas Fahringer

High-performance computing is essential for solving large problems and for reducing the time to solution for a single problem. Current top high-performance computing systems contain thousands of processors. Therefore, new tools are needed to support program development that exploits high degrees of parallelism. The issue of model-based performance evaluation of real-world programs on large-scale systems is addressed in this paper. We present Performance Prophet, which is a performance modeling and prediction tool for parallel and distributed programs. One of the main contributions of this paper is our methodology for reducing the time needed to evaluate the model. In addition, we describe our method for automatic performance model generation. We have implemented Performance Prophet in Java and C++. We illustrate our approach by modeling and simulating a real-world material science parallel program.
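Model-based performance prediction, in its simplest analytic form, estimates runtime from a few program and machine parameters before the program is ever run at scale. The toy model below illustrates only the general idea; it is not Performance Prophet's actual model, and its parameters are invented:

```python
# Toy analytic performance model, illustrating model-based prediction in
# general terms; NOT the Performance Prophet model. Predicted time =
# serial part + parallel part / p + a per-process communication overhead.
def predict_runtime(t_serial, t_parallel, t_comm_per_proc, num_procs):
    """Predict wall-clock time (seconds) on `num_procs` processes."""
    return t_serial + t_parallel / num_procs + t_comm_per_proc * num_procs

t1 = predict_runtime(2.0, 96.0, 0.05, 1)    # 98.05 s on one process
t16 = predict_runtime(2.0, 96.0, 0.05, 16)  # 8.8 s on sixteen processes
print(f"speedup on 16 processes: {t1 / t16:.1f}x")
```

Even this crude model captures why speedup saturates: the communication term grows with the process count while the parallel term shrinks, so evaluating the model is far cheaper than running the experiments it approximates.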


parallel computing | 2012

Using explicit platform descriptions to support programming of heterogeneous many-core systems

Martin Sandrieser; Siegfried Benkner; Sabri Pllana

Heterogeneous many-core systems constitute a viable approach for coping with power constraints in modern computer architectures and can now be found across the whole computing landscape, ranging from mobile devices, to desktop systems and servers, all the way to high-end supercomputers and large-scale data centers. While these systems promise to offer superior performance-power ratios, programming heterogeneous many-core architectures efficiently has been shown to be notoriously difficult. Programmers typically are forced to take into account a plethora of low-level architectural details and usually have to resort to a combination of different programming models within a single application. In this paper we propose a platform description language (PDL) that makes it possible to capture key architectural patterns of commonly used heterogeneous computing systems. PDL architecture descriptions support both programmers and toolchains by providing platform-specific information in a well-defined and explicit manner. We have developed a prototype source-to-source compilation framework that utilizes PDL descriptors to transform sequential task-based programs with source code annotations into a form that is convenient for execution on heterogeneous many-core systems. Our framework relies on a component-based approach that accommodates different implementation variants of tasks, customized for different parts of a heterogeneous platform, and utilizes an advanced runtime system for exploiting parallelism through dynamic task scheduling. We show various usage scenarios of our PDL and demonstrate the effectiveness of our framework for a commonly used scientific kernel and a financial application on different configurations of a state-of-the-art CPU/GPU system.
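As a sketch of what an explicit platform description might convey, the hypothetical fragment below names a CPU/GPU configuration of the kind the abstract targets; the tags and attributes are illustrative assumptions and do not reproduce the actual PDL syntax:

```xml
<!-- Hypothetical platform descriptor in the spirit of the PDL abstract.
     Tag and attribute names are invented for illustration; this is not
     the published PDL syntax. -->
<platform name="cpu-gpu-workstation">
  <processingUnit id="cpu0" type="cpu" cores="8" memory="32GB"/>
  <processingUnit id="gpu0" type="gpu" cores="2048" memory="8GB"/>
  <interconnect from="cpu0" to="gpu0" kind="pcie"/>
</platform>
```

Making such facts explicit is what lets a toolchain pick, for each task, an implementation variant matched to the unit it will run on.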


complex, intelligent and software intensive systems | 2007

Performance Modeling and Prediction of Parallel and Distributed Computing Systems: A Survey of the State of the Art

Sabri Pllana; Ivona Brandic; Siegfried Benkner

Performance is one of the key features of parallel and distributed computing systems. Therefore, in the past a significant research effort was invested in the development of approaches for performance modeling and prediction of parallel and distributed computing systems. In this paper we identify the trends, contributions, and drawbacks of the state-of-the-art approaches. We describe a wide range of performance modeling approaches that spans from high-level mathematical modeling to detailed instruction-level simulation. For each approach we describe how the program and machine are modeled and estimate the model development and evaluation effort, the efficiency, and the accuracy. Furthermore, we present an overall evaluation of the presented approaches.

Collaboration


Dive into Sabri Pllana's collaborations.

Top Co-Authors

Fatos Xhafa, Polytechnic University of Catalonia
Ivona Brandic, Vienna University of Technology
Leonard Barolli, Louisiana State University