
Publications

Featured research published by Nicholas T. Karonis.


Job Scheduling Strategies for Parallel Processing | 1998

A Resource Management Architecture for Metacomputing Systems

Karl Czajkowski; Ian T. Foster; Nicholas T. Karonis; Carl Kesselman; Stuart Martin; Warren Smith; Steven Tuecke

Metacomputing systems are intended to support remote and/or concurrent use of geographically distributed computational resources. Resource management in such systems is complicated by five concerns that do not typically arise in other situations: site autonomy and heterogeneous substrates at the resources, and application requirements for policy extensibility, co-allocation, and online control. We describe a resource management architecture that addresses these concerns. This architecture distributes the resource management problem among distinct local manager, resource broker, and resource co-allocator components and defines an extensible resource specification language to exchange information about requirements. We describe how these techniques have been implemented in the context of the Globus metacomputing toolkit and used to implement a variety of different resource management strategies. We report on our experiences applying our techniques in a large testbed, GUSTO, incorporating 15 sites, 330 computers, and 3600 processors.
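The extensible resource specification language described here evolved into the Globus RSL, in which a co-allocation request is expressed as a "multirequest" naming each resource manager explicitly. A sketch of the flavor of such a request (hostnames invented; attribute names follow later Globus GRAM conventions and are illustrative only):

```
+ ( &(resourceManagerContact="sp.site-a.example.org")
    (count=64)
    (executable=/home/user/sim)
    (maxTime=60) )
  ( &(resourceManagerContact="cluster.site-b.example.edu")
    (count=32)
    (executable=/home/user/sim)
    (maxTime=60) )
```

The leading `+` asks the co-allocator to acquire both sets of nodes simultaneously; each `&`-conjunction is an ordinary single-manager request.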


Conference on High Performance Computing (Supercomputing) | 2001

Supporting Efficient Execution in Heterogeneous Distributed Computing Environments with Cactus and Globus

Gabrielle Allen; Thomas Dramlitsch; Ian T. Foster; Nicholas T. Karonis; Matei Ripeanu; Edward Seidel; Brian R. Toonen

Improvements in the performance of processors and networks make it both feasible and interesting to treat collections of workstations, servers, clusters, and supercomputers as integrated computational resources, or Grids. However, the highly heterogeneous and dynamic nature of such Grids can make application development difficult. Here we describe an architecture and prototype implementation for a Grid-enabled computational framework based on Cactus, the MPICH-G2 Grid-enabled message-passing library, and a variety of specialized features to support efficient execution in Grid environments. We have used this framework to perform record-setting computations in numerical relativity, running across four supercomputers and achieving scaling of 88% (1140 CPUs) and 63% (1500 CPUs). The problem size we were able to compute was about five times larger than any other previous run. Further, we introduce and demonstrate adaptive methods that automatically adjust computational parameters during run time, to increase dramatically the efficiency of a distributed Grid simulation, without modification of the application and without any knowledge of the underlying network connecting the distributed computers.
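One adaptive parameter in a distributed stencil simulation of this kind is the depth of the ghost zones exchanged between sites: deeper ghost zones let a message be sent only every g time steps, amortizing wide-area latency at the cost of redundant boundary computation. The toy cost model below is only a sketch of that tradeoff; the constants are invented, not measurements from the paper.

```python
def step_cost(g, latency, bw_cost, redundant_compute):
    """Average cost per time step when ghost zones of depth g are
    exchanged every g steps: latency is paid once per g steps, but each
    extra layer adds duplicated boundary work and message volume."""
    comm = latency / g + bw_cost        # amortized latency + per-step volume
    extra = (g - 1) * redundant_compute # duplicated boundary computation
    return comm + extra

def best_ghost_depth(latency, bw_cost, redundant_compute, max_g=16):
    """Pick the exchange interval that minimizes average per-step cost."""
    return min(range(1, max_g + 1),
               key=lambda g: step_cost(g, latency, bw_cost, redundant_compute))

# On a high-latency WAN link deep ghost zones win; on a LAN, g = 1 is best.
wan_g = best_ghost_depth(latency=50.0, bw_cost=1.0, redundant_compute=0.5)
lan_g = best_ghost_depth(latency=0.5, bw_cost=1.0, redundant_compute=0.5)
```

With these invented numbers the WAN optimum is a deep exchange interval while the LAN optimum stays at one, which is the qualitative behavior the paper's run-time adaptation exploits.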


Parallel Computing | 1998

Wide-area implementation of the message passing interface

Ian T. Foster; Jonathan Geisler; William Gropp; Nicholas T. Karonis; Ewing L. Lusk; George K. Thiruvathukal; Steven Tuecke

The Message Passing Interface (MPI) can be used as a portable, high-performance programming model for wide-area computing systems. The wide-area environment introduces challenging problems for the MPI implementor, due to the heterogeneity of both the underlying physical infrastructure and the software environment at different sites. In this article, we describe an MPI implementation that incorporates solutions to these problems. This implementation has been constructed by extending the Argonne MPICH implementation of MPI to use communication services provided by the Nexus communication library and authentication, resource allocation, process creation/management, and information services provided by the I-Soft system (initially) and the Globus metacomputing toolkit (work in progress). Nexus provides multimethod communication mechanisms that allow multiple communication methods to be used in a single computation with a uniform interface; I-Soft and Globus provided standard authentication, resource management, and process management mechanisms. We describe how these various mechanisms are supported in the Nexus implementation of MPI and present performance results for this implementation on multicomputers and networked systems. We also discuss how more advanced services provided by the Globus metacomputing toolkit are being used to construct a second-generation wide-area MPI.
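Nexus's multimethod communication lets a single computation carry different messages over different transports behind one uniform interface. The sketch below is a toy model of per-peer method selection; all class and method names are hypothetical and are not the Nexus API.

```python
class MultimethodChannel:
    """Toy model: choose the best transport for each peer when the
    channel is created, then expose one uniform send() regardless of
    which method was chosen underneath."""

    def __init__(self, local_host, peer_host, same_process=False):
        if same_process:
            self.method = "shared-memory"
        elif local_host.split(".", 1)[-1] == peer_host.split(".", 1)[-1]:
            self.method = "vendor-mpi"   # same site: use the fast local method
        else:
            self.method = "tcp"          # wide area: fall back to TCP
        self.log = []

    def send(self, payload):
        # A real implementation dispatches to transport-specific code;
        # here we only record which method carried the message.
        self.log.append((self.method, payload))
        return self.method

wan = MultimethodChannel("n0.anl.gov", "n3.isi.edu")   # cross-site pair
lan = MultimethodChannel("n0.anl.gov", "n1.anl.gov")   # same-site pair
```

The point of the design is that application code calls the same `send()` in both cases; the method choice is made once, out of the critical path.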


High Performance Distributed Computing | 1997

A secure communications infrastructure for high-performance distributed computing

Ian T. Foster; Nicholas T. Karonis; Carl Kesselman; Greg Koenig; Steven Tuecke

Applications that use high-speed networks to connect geographically distributed supercomputers, databases, and scientific instruments may operate over open networks and access valuable resources. Hence, they can require mechanisms for ensuring integrity and confidentiality of communications and for authenticating both users and resources. Security solutions developed for traditional client-server applications do not provide direct support for the program structures, programming tools, and performance requirements encountered in these applications. We address these requirements via a security-enhanced version of the Nexus communication library, which we use to provide secure versions of parallel libraries and languages, including the Message Passing Interface. These tools permit a fine degree of control over what, where, and when security mechanisms are applied. In particular, a single application can mix secure and nonsecure communication, allowing the programmer to make fine-grained security/performance tradeoffs. We present performance results that quantify the performance of our infrastructure.


Challenges of Large Applications in Distributed Environments | 2006

NEKTAR, SPICE and Vortonics: using federated grids for large scale scientific applications

Bruce M. Boghosian; Peter V. Coveney; Suchuan Dong; Lucas Finn; Shantenu Jha; George Em Karniadakis; Nicholas T. Karonis

In response to a joint call from the US's NSF and the UK's EPSRC for applications that aim to utilize the combined computational resources of the US and UK, three computational science groups from UCL, Tufts and Brown Universities teamed up with a middleware team from NIU/Argonne to meet the challenge. Although the groups had three distinct codes and aims, the projects shared the underlying feature that they were large-scale distributed applications which required high-end networking and advanced middleware in order to be effectively deployed. For example, cross-site runs were found to be a very effective strategy to overcome the limitations of a single resource. The seamless federation of a grid-of-grids remains difficult. Even if interoperability at the middleware and software stack levels were to exist, it would not guarantee that the federated grids could be utilized for large-scale distributed applications. There are important additional requirements, for example compatible and consistent usage policies, automated advance reservations and, most important of all, co-scheduling. This paper outlines the scientific motivation and describes why distributed resources are critical for all three projects. It documents the challenges encountered in using a grid-of-grids and some of the solutions devised in response.


Conference on High Performance Computing (Supercomputing) | 2000

MPICH-GQ: Quality-of-Service for Message Passing Programs

Alain Roy; Ian T. Foster; William Gropp; Brian R. Toonen; Nicholas T. Karonis; Volker Sander

Parallel programmers typically assume that all resources required for a program’s execution are dedicated to that purpose. However, in local and wide area networks, contention for shared networks, CPUs, and I/O systems can result in significant variations in availability, with consequent adverse effects on overall performance. We describe a new message-passing architecture, MPICH-GQ, that uses quality of service (QoS) mechanisms to manage contention and hence improve performance of message passing interface (MPI) applications. MPICH-GQ combines new QoS specification, traffic shaping, QoS reservation, and QoS implementation techniques to deliver QoS capabilities to the high-bandwidth bursty flows, complex structures, and reliable protocols used in high-performance applications, characteristics very different from the low-bandwidth, constant bit-rate media flows and unreliable protocols for which QoS mechanisms were designed. Results obtained on a differentiated services testbed demonstrate our ability to maintain application performance in the face of heavy network contention.
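The reservation and shaping machinery in MPICH-GQ is specific to the paper's differentiated-services testbed, but the basic building block of DiffServ marking is simply the IP TOS/DSCP byte on each flow's socket. A minimal Linux-oriented sketch (DSCP 46, Expedited Forwarding, chosen purely for illustration):

```python
import socket

EF_DSCP = 46          # Expedited Forwarding code point (RFC 3246 class)
tos = EF_DSCP << 2    # the DSCP occupies the top six bits of the TOS byte

# Mark a UDP socket so that packets it sends carry the chosen DSCP,
# letting DiffServ-aware routers give the flow preferential treatment.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, tos)

marked = sock.getsockopt(socket.IPPROTO_IP, socket.IP_TOS)  # read it back
sock.close()
```

Marking alone gives no guarantee; it only identifies the flow to the network, which is why the paper pairs it with traffic shaping and reservations.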


Cluster Computing | 1998

Managing security in high-performance distributed computations

Ian T. Foster; Nicholas T. Karonis; Carl Kesselman; Steven Tuecke

We describe a software infrastructure designed to support the development of applications that use high-speed networks to connect geographically distributed supercomputers, databases, and scientific instruments. Such applications may need to operate over open networks and access valuable resources, and hence can require mechanisms for ensuring integrity and confidentiality of communications and for authenticating both users and resources. Yet security solutions developed for traditional client-server applications do not provide direct support for the distinctive program structures, programming tools, and performance requirements encountered in these applications. To address these requirements, we are developing a security-enhanced version of a communication library called Nexus, which is then used to provide secure versions of various parallel libraries and languages, including the popular Message Passing Interface. These tools support the wide range of process creation mechanisms and communication structures used in high-performance computing. They also provide a fine degree of control over what, where, and when security mechanisms are applied. In particular, a single application can mix secure and nonsecure communication, allowing the programmer to make fine-grained security/performance tradeoffs. We present performance results that enable us to quantify the performance of our infrastructure.
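The per-channel mixing of secure and nonsecure communication can be pictured as a flag that decides whether each message pays the cryptographic cost. The sketch below is a toy model only (HMAC integrity protection, no confidentiality; the `Channel` API is hypothetical and is not the Nexus interface, though `hmac` and `hashlib` are real stdlib modules):

```python
import hmac
import hashlib

class Channel:
    """Toy channel: messages on a secure channel carry an HMAC tag,
    while nonsecure channels skip the cryptographic cost entirely."""

    def __init__(self, secure, key=b""):
        self.secure, self.key = secure, key

    def wrap(self, msg):
        """Prepare a message for sending; returns (message, tag-or-None)."""
        if not self.secure:
            return (msg, None)
        tag = hmac.new(self.key, msg, hashlib.sha256).digest()
        return (msg, tag)

    def unwrap(self, msg, tag):
        """Verify a received message; raises if an HMAC check fails."""
        if not self.secure:
            return msg
        expect = hmac.new(self.key, msg, hashlib.sha256).digest()
        if not hmac.compare_digest(tag, expect):
            raise ValueError("integrity check failed")
        return msg

# One application can mix both: control traffic secured, bulk data not.
control = Channel(secure=True, key=b"shared-secret")
bulk = Channel(secure=False)
```

The design choice this illustrates is exactly the fine-grained tradeoff in the abstract: security is a property of the channel, not of the whole program.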


Future Generation Computer Systems | 2003

High-resolution remote rendering of large datasets in a collaborative environment

Nicholas T. Karonis; Michael E. Papka; Justin Binns; John Bresnahan; Joseph A. Insley; David Jones; Joseph M. Link

In a time when computational and data resources are distributed around the globe, users need to interact with these resources and with each other easily and efficiently. The Grid, by definition, represents a connection of distributed resources that can be used regardless of the user's location. We have built a prototype visualization system using the Globus Toolkit, MPICH-G2, and the Access Grid in order to explore how future scientific collaborations may occur over the Grid. We describe our experience in demonstrating our system at iGrid2002, where the United States and the Netherlands were connected via a high-latency, high-bandwidth network. In particular, we focus on issues related to a Grid-based application that couples a collaboration component (including a user interface to the Access Grid) with a high-resolution remote rendering component.


Computing in Science and Engineering | 2005

Cross-site computations on the TeraGrid

Suchuan Dong; George Em Karniadakis; Nicholas T. Karonis

The TeraGrid's collective computing resources can help researchers perform very-large-scale simulations in computational fluid dynamics (CFD) applications, but doing so requires tightly coupled communications among different sites. The authors examine a scaled-down turbulent flow problem, investigating the feasibility and scalability of cross-site simulation paradigms, targeting grand challenges such as blood flow in the entire human arterial tree.


High Performance Distributed Computing | 1999

Accurately measuring MPI broadcasts in a computational grid

B.R. de Supinski; Nicholas T. Karonis

An MPI library's implementation of broadcast communication can significantly affect the performance of applications built with that library. In order to choose between similar implementations or to evaluate available libraries, accurate measurements of broadcast performance are required. As we demonstrate, existing methods for measuring broadcast performance are either inaccurate or inadequate. Fortunately, we have designed an accurate method for measuring broadcast performance, even in a challenging grid environment. Measuring broadcast performance is not easy. Simply sending one broadcast after another allows them to proceed through the network concurrently, thus resulting in inaccurate per-broadcast timings. Existing methods either fail to eliminate this pipelining effect or eliminate it by introducing overheads that are as difficult to measure as the performance of the broadcast itself. This problem becomes even more challenging in grid environments. Latencies along different links can vary significantly. Thus, an algorithm's performance is difficult to predict from its communication pattern. Even when accurate prediction is possible, the pattern is often unknown. Our method introduces a measurable overhead to eliminate the pipelining effect, regardless of variations in link latencies.
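The pipelining effect is easy to see in a toy model: if broadcasts flow back to back through a d-stage tree, k timed broadcasts overlap in the network and the naive per-broadcast estimate approaches the per-stage cost rather than the full depth. A deterministic sketch with a simulated clock (no real network; the stage model is an invented simplification, not the paper's method):

```python
def naive_estimate(depth, stage_time, k):
    """Time k back-to-back broadcasts through a `depth`-stage pipeline
    and divide by k: overlap hides most of the true latency, since a new
    broadcast enters the pipeline every stage_time."""
    total = (depth + k - 1) * stage_time
    return total / k

def ack_gated_estimate(depth, stage_time, k):
    """Wait for an acknowledgment after every broadcast before starting
    the next, so no two broadcasts ever overlap in the network."""
    total = k * depth * stage_time   # the full depth is paid every time
    return total / k

true_latency = 8 * 1.0                  # depth = 8 stages, 1.0 per stage
naive = naive_estimate(8, 1.0, 100)     # far below the true latency
gated = ack_gated_estimate(8, 1.0, 100)
```

This toy charges nothing for the acknowledgment itself; the hard part, which the paper addresses, is that a real ack adds its own cost, and the contribution is making that overhead measurable so it can be subtracted out.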

Collaboration

Nicholas T. Karonis's top co-authors.

Ian T. Foster (Argonne National Laboratory)
Kirk L. Duffin (Northern Illinois University)
Brian R. Toonen (Argonne National Laboratory)
G. Coutrakon (Northern Illinois University)
B. Erdelyi (Northern Illinois University)
Caesar E. Ordonez (Northern Illinois University)