
Publications


Featured research published by Vladimir Getov.


Concurrency and Computation: Practice and Experience | 2000

MPJ: MPI‐like message passing for Java

Bryan Carpenter; Vladimir Getov; Glenn Judd; Anthony Skjellum; Geoffrey C. Fox

Recently, there has been a lot of interest in using Java for parallel programming. Efforts have been hindered by the lack of standard Java parallel programming APIs. To alleviate this problem, various groups started projects to develop Java message passing systems modelled on the successful Message Passing Interface (MPI). Official MPI bindings are currently defined only for C, Fortran, and C++, so early MPI-like environments for Java have been divergent. This paper relates an effort undertaken by a working group of the Java Grande Forum, seeking a consensus on an MPI-like API, to enhance the viability of parallel programming using Java.
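To give a concrete feel for the kind of MPI-like Java API the working group was converging on, here is a minimal point-to-point example written in the mpiJava-style conventions. The class and method names (MPI.Init, COMM_WORLD.Send/Recv) are used for illustration only; this sketch is not the normative API agreed by the Forum.

```java
// Minimal MPI-like message passing in Java, in the mpiJava style.
// Rank 0 sends one integer to every other rank; the others receive and print it.
import mpi.MPI;

public class HelloMPJ {
    public static void main(String[] args) throws Exception {
        MPI.Init(args);                        // start the message-passing runtime
        int rank = MPI.COMM_WORLD.Rank();      // id of this process
        int size = MPI.COMM_WORLD.Size();      // total number of processes

        if (rank == 0) {
            int[] msg = { 42 };
            for (int dest = 1; dest < size; dest++) {
                MPI.COMM_WORLD.Send(msg, 0, 1, MPI.INT, dest, 0);
            }
        } else {
            int[] buf = new int[1];
            MPI.COMM_WORLD.Recv(buf, 0, 1, MPI.INT, 0, 0);
            System.out.println("rank " + rank + " received " + buf[0]);
        }
        MPI.Finalize();
    }
}
```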


Concurrency and Computation: Practice and Experience | 1998

High‐performance parallel programming in Java: exploiting native libraries

Vladimir Getov; Susan Flynn Hummel; Sava Mintchev

With most of today's fast scientific software written in Fortran and C, Java has a lot of catching up to do. In this paper we discuss how new Java programs can capitalize on high-performance libraries for other languages. With the help of a tool we have automatically created Java bindings for several standard libraries: MPI, BLAS, BLACS, PBLAS and ScaLAPACK. The purpose of the additional software layer introduced by the bindings is to resolve the interface problems between different programming languages, such as data type mapping, pointers, and multidimensional arrays. For evaluation, performance results are presented for Java versions of two benchmarks from the NPB and PARKBENCH suites on the IBM SP2 using the JDK and IBM's high-performance Java compiler, and on the Fujitsu AP3000 using Toba, a Java-to-C translator. The results confirm that fast parallel computing in Java is indeed possible.
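From the programmer's side, the binding layer described here appears as a class of native method declarations backed by generated stubs. The sketch below shows the shape of such a binding for a single BLAS routine; the class name, wrapper library name, and signature are illustrative assumptions rather than the output of the authors' generator, and running it requires the corresponding native stub to be built.

```java
// Illustrative Java-side view of a binding to native BLAS.
// The JNI stub library ("blasjni") is hypothetical and must be compiled separately.
public class BlasBinding {
    static {
        System.loadLibrary("blasjni");   // load the hypothetical JNI wrapper
    }

    // ddot: dot product of two double-precision vectors, delegated to native BLAS.
    // The binding layer maps Java double[] arrays onto the C pointers BLAS expects.
    public static native double ddot(int n, double[] x, int incx,
                                     double[] y, int incy);

    public static void main(String[] args) {
        double[] x = { 1.0, 2.0, 3.0 };
        double[] y = { 4.0, 5.0, 6.0 };
        System.out.println("x . y = " + ddot(x.length, x, 1, y, 1));  // expects 32.0
    }
}
```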


Annales Des Télécommunications | 2009

GCM: a grid extension to Fractal for autonomous distributed components

Françoise Baude; Denis Caromel; Cédric Dalmasso; Marco Danelutto; Vladimir Getov; Ludovic Henrio; Christian Pérez

This article presents an extension of the Fractal component model targeted at programming applications to be run on computing grids: the grid component model (GCM). First, to address the problem of deploying components on the grid, deployment strategies have been defined. Then, as grid applications often result from the composition of many parallel (sometimes identical) components, composition mechanisms to support collective communications on a set of components are introduced. Finally, because of the constantly evolving environment and requirements of grid applications, the GCM defines a set of features intended to support component autonomicity. All these aspects are developed in this paper with the challenging objective of easing the programming of grid applications, while allowing GCM components to also be the unit of deployment and management.
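The core idea of the component model, stripped of the Fractal/GCM machinery, is that a component offers server interfaces, declares client interfaces, and is wired to other components by an external assembly rather than by hard-coded references. The plain-Java sketch below illustrates only that idea; none of the names are part of the Fractal or GCM APIs.

```java
// Plain-Java illustration of component binding: a Dispatcher declares a client
// interface (Compute) that an external assembler binds to a Worker component.
interface Compute {                        // server interface offered by Worker
    double process(double input);
}

class Worker implements Compute {
    public double process(double input) { return input * input; }
}

class Dispatcher {
    private Compute worker;                // client interface, bound from outside

    void bindWorker(Compute w) { this.worker = w; }   // plays the binding-controller role

    double run(double v) { return worker.process(v); }
}

public class GcmSketch {
    public static void main(String[] args) {
        Dispatcher d = new Dispatcher();
        d.bindWorker(new Worker());        // the composition step, done by the assembly
        System.out.println(d.run(3.0));    // 9.0
    }
}
```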


Communications of The ACM | 2001

Multiparadigm communications in Java for grid computing

Vladimir Getov; Gregor von Laszewski; Michael Philippsen; Ian T. Foster

In this article, we argue that the rapid development of Java technology now makes it possible to support, in a single object-oriented framework, the different communication and coordination structures that arise in scientific applications. We outline how this integrated approach can be achieved, reviewing in the process the state-of-the-art in communication paradigms within Java. We also present recent evaluation results indicating that this integrated approach can be achieved without compromising on performance.
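One of the communication paradigms surveyed in the article is Java RMI, which covers the remote-method-invocation style of coordination. The self-contained sketch below exports a remote object and calls it through the registry; it is a generic illustration of standard Java RMI, not code from the article's integrated framework.

```java
import java.rmi.Remote;
import java.rmi.RemoteException;
import java.rmi.registry.LocateRegistry;
import java.rmi.registry.Registry;
import java.rmi.server.UnicastRemoteObject;

// Remote interface: every method can throw RemoteException.
interface Adder extends Remote {
    int add(int a, int b) throws RemoteException;
}

// Implementation exported as a remote object.
class AdderImpl extends UnicastRemoteObject implements Adder {
    AdderImpl() throws RemoteException { super(); }
    public int add(int a, int b) { return a + b; }
}

public class RmiSketch {
    public static void main(String[] args) throws Exception {
        Registry reg = LocateRegistry.createRegistry(1099);  // in-process registry
        reg.rebind("adder", new AdderImpl());                // publish the remote object
        Adder proxy = (Adder) reg.lookup("adder");           // obtain a stub
        System.out.println(proxy.add(2, 3));                 // call goes through the RMI stub
    }
}
```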


European PVM/MPI Users' Group Meeting on Recent Advances in Parallel Virtual Machine and Message Passing Interface | 1997

Towards Portable Message Passing in Java: Binding MPI

Sava Mintchev; Vladimir Getov

In this paper we present a way of successfully tackling the difficulties of binding MPI to Java with a view to ensuring portability. We have created a tool for automatically binding existing native C libraries to Java, and have applied the Java-to-C Interface generating tool (JCI) to bind MPI to Java. The approach of automatic binding by JCI ensures both portability across different platforms and full compatibility with the MPI specification. To evaluate the resulting combination we have run a Java version of the NAS parallel IS benchmark on a distributed-memory IBM SP2 machine.
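A binding produced this way looks, on the Java side, like a class of native method declarations, one per MPI function, backed by thin C stubs emitted by the tool. The sketch below only conveys that shape; the class, method, and library names are illustrative assumptions rather than actual JCI output, and the native stub library would have to be generated and compiled separately.

```java
// Illustrative Java-side interface to the native C MPI library.
// Each native method corresponds to one MPI call; the C stubs are not shown.
public class MpiNative {
    static { System.loadLibrary("mpijci"); }   // hypothetical generated stub library

    public static native int init();                    // wraps MPI_Init
    public static native int commRank();                // MPI_Comm_rank on MPI_COMM_WORLD
    public static native int commSize();                // MPI_Comm_size on MPI_COMM_WORLD
    public static native int send(int[] buf, int count, int dest, int tag);
    public static native int recv(int[] buf, int count, int source, int tag);
    public static native int finish();                  // wraps MPI_Finalize
}
```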


Proceedings of the ACM 1999 conference on Java Grande | 1999

Design issues for efficient implementation of MPI in Java

Glenn Judd; Mark J. Clement; Quinn Snell; Vladimir Getov

While there is growing interest in using Java for high-performance applications, many in the high-performance computing community do not believe that Java can match the performance of traditional native message passing environments. This paper discusses critical issues that must be addressed in the design of Java-based message passing systems. Efficient handling of these issues allows Java-MPI applications to obtain performance that rivals that of traditional native message passing systems. To illustrate these concepts, the design and performance of a pure Java implementation of MPI are discussed.
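One representative design issue of the kind examined here is how message buffers are represented: packing primitive data into an off-heap direct buffer lets the transport layer hand memory to native code or the network without serializing Java objects on every send. The sketch below shows only that packing step, as a general illustration rather than the specific design adopted in the paper.

```java
import java.nio.ByteBuffer;
import java.nio.DoubleBuffer;

// Pack a double[] payload into a direct ByteBuffer so the message can be
// transmitted without per-object serialization.
public class BufferPacking {
    public static void main(String[] args) {
        double[] data = { 1.0, 2.0, 3.0, 4.0 };

        ByteBuffer raw = ByteBuffer.allocateDirect(data.length * Double.BYTES);
        DoubleBuffer view = raw.asDoubleBuffer();
        view.put(data);                      // copy the payload into off-heap memory

        System.out.println("packed " + view.position() + " doubles, "
                + raw.capacity() + " bytes ready to send");
    }
}
```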


Archive | 2004

Performance analysis and grid computing

Vladimir Getov; Michael Gerndt; Adolfy Hoisie; Allen D. Malony; Barton P. Miller

Parallel computing is a promising approach that provides more powerful computing capabilities for many scientific research fields to solve new problems. However, to take advantage of such capabilities it is necessary to ensure that the applications are successfully designed and that their performance is satisfactory. This implies that the task of the application designer does not finish when the application is free of functional bugs, and that it is necessary to carry out some performance analysis and application tuning to reach the expected performance. This application tuning requires a performance analysis, including the detection of performance bottlenecks, the identification of their causes and the modification of the application to improve behavior. These tasks require a high degree of expertise and are usually time consuming. Therefore, tools that automate some of these tasks are useful, especially for non-expert users. In this paper, we present three tools that cover different approaches to automatic performance analysis and tuning. In the first approach, we apply static automatic performance analysis. The second is based on run-time automatic analysis. The last approach sets out dynamic automatic performance tuning.
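A hand-written miniature of the instrumentation that such tools insert automatically may help fix the idea: time each region of interest, accumulate the results, and report where the time went. The sketch below is purely illustrative and far simpler than the static, run-time, and dynamic approaches the paper describes.

```java
import java.util.HashMap;
import java.util.Map;

// Toy profiler: wrap a region in timed(...) and print accumulated times at the end.
public class TinyProfiler {
    private static final Map<String, Long> totals = new HashMap<>();

    static <T> T timed(String region, java.util.function.Supplier<T> body) {
        long start = System.nanoTime();
        try {
            return body.get();
        } finally {
            totals.merge(region, System.nanoTime() - start, Long::sum);
        }
    }

    public static void main(String[] args) {
        double sum = timed("compute", () -> {
            double s = 0;
            for (int i = 1; i <= 1_000_000; i++) s += 1.0 / i;
            return s;
        });
        System.out.println("result = " + sum);
        totals.forEach((r, ns) -> System.out.println(r + ": " + ns / 1e6 + " ms"));
    }
}
```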


Conference on High Performance Computing (Supercomputing) | 1999

MPI and Java-MPI: Contrasts and Comparisons of Low-Level Communication Performance

Vladimir Getov; Paul A. Gray; Vaidy S. Sunderam

Java is receiving increasing attention as the most popular platform for distributed and collaborative computing. However, it is still subject to significant performance drawbacks in comparison to other programming languages such as C and Fortran. This paper presents the current status of our ongoing project, which aims to conduct a detailed experimental evaluation of the suitability of Java in these environments, with particular focus on its message-passing performance for one-to-one as well as one-to-many and many-to-many data exchange patterns. We also emphasize both methodology and evaluation guidelines in order to ensure reproducibility, sound interpretation, and comparative analysis of performance results. Some of the important parameters that characterize the communication performance of MPI and Java-MPI, such as latency, asymptotic bandwidth and N-half, are investigated. In addition, we introduce two different types of pipeline effects, intra-message and inter-message, that have a significant influence on message-passing performance. For this purpose we have developed a low-level message-passing benchmark suite, which we have used to evaluate and compare different message-passing environments on the IBM SP-2.
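The parameters named here come from the usual linear timing model t(n) = t0 + n / r_inf for a message of n bytes: the latency is t0, the asymptotic bandwidth is r_inf, and the half-performance message length is n_1/2 = t0 * r_inf. The sketch below shows the shape of a ping-pong measurement loop for such a model; it emulates the round trip locally with array copies, whereas the benchmark suite in the paper uses real MPI and Java-MPI sends and receives.

```java
// Skeleton of a ping-pong timing loop: measure the one-way time for a range of
// message sizes. The arraycopy calls stand in for the Send/Recv pair of a real run.
public class PingPongSketch {
    public static void main(String[] args) {
        int[] sizes = { 1, 1_000, 100_000, 1_000_000 };   // message sizes in bytes
        int reps = 100;
        for (int n : sizes) {
            byte[] msg = new byte[n];
            byte[] echo = new byte[n];
            long start = System.nanoTime();
            for (int r = 0; r < reps; r++) {
                System.arraycopy(msg, 0, echo, 0, n);     // "send" to the peer
                System.arraycopy(echo, 0, msg, 0, n);     // "receive" the echo
            }
            double oneWayUs = (System.nanoTime() - start) / (2.0 * reps * 1_000);
            System.out.printf("n = %8d bytes   t = %10.2f us%n", n, oneWayUs);
        }
    }
}
```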


IEEE Computer | 2011

Navigating the Cloud Computing Landscape: Technologies, Services, and Adopters

Savitha Srinivasan; Vladimir Getov

Cloud computing represents a fundamental shift in the delivery of information technology services that has permanently changed the computing landscape.


IEEE Computer | 2009

Extreme-Scale Computing: Where 'Just More of the Same' Does Not Work

Adolfy Hoisie; Vladimir Getov

In addition to enabling science through simulations at unprecedented size and fidelity, extreme-scale computing serves as an incubator of scientific and technological ideas for the computing area in general.

Collaboration


Dive into Vladimir Getov's collaborations.

Top Co-Authors

Adolfy Hoisie (Pacific Northwest National Laboratory)
Artie Basukoski (University of Westminster)
Sophia Corsava (University of Westminster)
Geoffrey C. Fox (Indiana University Bloomington)