
Publication

Featured research published by James Arthur Kohl.


IEEE International Conference on High Performance Computing Data and Analytics | 2006

A Component Architecture for High-Performance Scientific Computing

Benjamin A. Allan; Robert C. Armstrong; David E. Bernholdt; Felipe Bertrand; Kenneth Chiu; Tamara L. Dahlgren; Kostadin Damevski; Wael R. Elwasif; Thomas Epperly; Madhusudhan Govindaraju; Daniel S. Katz; James Arthur Kohl; Manoj Kumar Krishnan; Gary Kumfert; J. Walter Larson; Sophia Lefantzi; Michael J. Lewis; Allen D. Malony; Lois C. Mclnnes; Jarek Nieplocha; Boyana Norris; Steven G. Parker; Jaideep Ray; Sameer Shende; Theresa L. Windus; Shujia Zhou

The Common Component Architecture (CCA) provides a means for software developers to manage the complexity of large-scale scientific simulations and to move toward a plug-and-play environment for high-performance computing. In the scientific computing context, component models also promote collaboration using independently developed software, thereby allowing particular individuals or groups to focus on the aspects of greatest interest to them. The CCA supports parallel and distributed computing as well as local high-performance connections between components in a language-independent manner. The design places minimal requirements on components and thus facilitates the integration of existing code into the CCA environment. The CCA model imposes minimal overhead to minimize the impact on application performance. The focus on high performance distinguishes the CCA from most other component models. The CCA is being applied within an increasing range of disciplines, including combustion research, global climate simulation, and computational chemistry.
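
The provides/uses port pattern the abstract describes is easiest to see in code. The sketch below is a toy, single-process imitation of that pattern; the class and port names are invented for illustration and are not the actual CCA (gov.cca) API.

```cpp
#include <iostream>
#include <map>
#include <memory>
#include <string>

// Toy "port" interface: components interact only through abstract ports,
// mirroring the CCA's provides/uses port model (names here are invented).
struct Port { virtual ~Port() = default; };

struct IntegratorPort : Port {
    virtual double integrate(double lo, double hi) = 0;
};

// A minimal stand-in for the framework's services handle.
class Services {
    std::map<std::string, std::shared_ptr<Port>> registry_;
public:
    void addProvidesPort(const std::string& name, std::shared_ptr<Port> p) {
        registry_[name] = std::move(p);
    }
    std::shared_ptr<Port> getPort(const std::string& name) {
        return registry_.at(name);
    }
};

// A component provides an implementation behind the port; callers never
// see the concrete type, which is what enables plug-and-play swapping.
struct MidpointIntegrator : IntegratorPort {
    double integrate(double lo, double hi) override {
        double mid = 0.5 * (lo + hi);
        return (hi - lo) * mid * mid;   // integrates f(x) = x^2
    }
};

int main() {
    Services svc;
    svc.addProvidesPort("IntegratorPort",
                        std::make_shared<MidpointIntegrator>());
    auto p = std::static_pointer_cast<IntegratorPort>(
        svc.getPort("IntegratorPort"));
    std::cout << p->integrate(0.0, 1.0) << "\n";   // crude estimate of 1/3
}
```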


Concurrency and Computation: Practice and Experience | 2002

The CCA core specification in a distributed memory SPMD framework

Benjamin A. Allan; Robert C. Armstrong; Alicia P. Wolfe; Jaideep Ray; David E. Bernholdt; James Arthur Kohl

We present an overview of the Common Component Architecture (CCA) core specification and CCAFFEINE, a Sandia National Laboratories framework implementation compliant with the draft specification. CCAFFEINE stands for CCA Fast Framework Example In Need of Everything; that is, CCAFFEINE is fast, lightweight, and it aims to provide every framework service by using external, portable components instead of integrating all services into a single, heavy framework core. By fast, we mean that the CCAFFEINE glue does not get between components in a way that slows down their interactions. We present the CCAFFEINE solutions to several fundamental problems in the application of component software approaches to the construction of single program multiple data (SPMD) applications. We demonstrate the integration of components from three organizations, two within Sandia and one at Oak Ridge National Laboratory. We outline some requirements for key enabling facilities needed for a successful component approach to SPMD application building.
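
As an illustration of the SPMD component style described above, the sketch below shows the same "component" instantiated on every MPI rank, hiding a collective operation behind its interface. The component name is invented; this is not CCAFFEINE code.

```cpp
#include <mpi.h>
#include <cstdio>

// Illustrative only: in the SPMD component model, every process loads the
// same components and wiring; parallelism lives *inside* a component,
// which talks to its peers on other ranks via MPI.
struct NormComponent {
    // Each rank holds a slice of the vector; the component hides the
    // collective from its caller.
    double localSumSq = 0.0;
    double globalSumSq() const {
        double total = 0.0;
        MPI_Allreduce(&localSumSq, &total, 1, MPI_DOUBLE, MPI_SUM,
                      MPI_COMM_WORLD);
        return total;   // caller takes sqrt if a true norm is needed
    }
};

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    NormComponent c;
    c.localSumSq = (rank + 1) * 1.0;   // stand-in for local data

    double n = c.globalSumSq();
    if (rank == 0) std::printf("sum of squares = %g\n", n);
    MPI_Finalize();
}
```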


Future Generation Computer Systems | 1999

Harness: a next generation distributed virtual machine

Micah Beck; Jack J. Dongarra; Graham E. Fagg; G. Al Geist; Paul A. Gray; James Arthur Kohl; Mauro Migliardi; Keith Moore; Terry Moore; Philip Papadopoulos; Stephen L. Scott; Vaidy S. Sunderam

Heterogeneous Adaptable Reconfigurable Networked SystemS (HARNESS) is an experimental metacomputing system [L. Smarr, C.E. Catlett, Communications of the ACM 35 (6) (1992) 45–52] built around the services of a highly customizable and reconfigurable Distributed Virtual Machine (DVM). The successful experience of the HARNESS design team with the Parallel Virtual Machine (PVM) project has taught us both the features which make the DVM model so valuable to parallel programmers and the limitations imposed by the PVM design. HARNESS seeks to remove some of those limitations by taking a totally different approach to creating and modifying a DVM.
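
One way to make the "customizable and reconfigurable" DVM idea concrete is runtime plug-in loading. The sketch below uses the standard POSIX dlopen/dlsym mechanism; the plug-in filename and entry-point convention are assumptions for illustration, not HARNESS's actual interfaces.

```cpp
#include <dlfcn.h>
#include <cstdio>

// A minimal sketch of the "pluggable DVM" idea: the running virtual
// machine extends itself by loading a shared object and looking up a
// registration entry point. The plug-in name and symbol are invented.
int main() {
    void* handle = dlopen("./libharness_plugin.so", RTLD_NOW);
    if (!handle) {
        std::fprintf(stderr, "dlopen failed: %s\n", dlerror());
        return 1;
    }
    // Convention assumed for this sketch: every plug-in exports an init
    // function that registers its services with the DVM.
    using InitFn = int (*)(void);
    auto init = reinterpret_cast<InitFn>(dlsym(handle, "plugin_init"));
    if (init && init() == 0)
        std::puts("plug-in registered with the DVM");
    dlclose(handle);
}
```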


Journal of Parallel and Distributed Computing | 1990

A Tool to Aid in the Design, Implementation, and Understanding of Matrix Algorithms for Parallel Processors

Jack J. Dongarra; Orlie Brewer; James Arthur Kohl; Samuel A. Fineberg

This paper discusses a tool that aids in the design, development, and understanding of parallel algorithms for high-performance computers. The tool provides a vehicle for studying memory access patterns, different cache strategies, and the effects of multiprocessors on matrix algorithms in a Fortran setting. Such a tool puts the user in a better position to understand where performance problems may occur and enhances the likelihood of increasing the program's performance before actual execution on a high-performance computer.
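
A minimal sketch of the kind of study such a tool enables, assuming a toy direct-mapped cache model (all parameters invented): replaying row-major versus column-major traversals of a matrix through the model exposes the difference in miss counts before any run on real hardware.

```cpp
#include <cstdio>
#include <vector>

// Toy version of the idea: replay a matrix algorithm's memory references
// through a simple cache model to see where misses come from. Here: a
// direct-mapped cache with 64-byte lines, and the access stream of
// summing a 256x256 double matrix by rows vs. by columns.
struct DirectMappedCache {
    static constexpr int kLines = 512, kLineBytes = 64;
    std::vector<long> tags = std::vector<long>(kLines, -1);
    long hits = 0, misses = 0;
    void access(long byteAddr) {
        long line = byteAddr / kLineBytes;
        int  slot = static_cast<int>(line % kLines);
        if (tags[slot] == line) ++hits;
        else { tags[slot] = line; ++misses; }
    }
};

int main() {
    const int n = 256;
    DirectMappedCache byRow, byCol;
    for (int i = 0; i < n; ++i)
        for (int j = 0; j < n; ++j) {
            byRow.access(static_cast<long>(i * n + j) * sizeof(double));
            byCol.access(static_cast<long>(j * n + i) * sizeof(double));
        }
    std::printf("row-major: %ld misses; column-major: %ld misses\n",
                byRow.misses, byCol.misses);
}
```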


IEEE Visualization | 2003

Visibility culling using plenoptic opacity functions for large volume visualization

Jinzhu Gao; Jian Huang; Han-Wei Shen; James Arthur Kohl

Visibility culling has the potential to accelerate large data visualization in significant ways. Unfortunately, existing algorithms do not scale well when parallelized, and require full re-computation whenever the opacity transfer function is modified. To address these issues, we have designed a Plenoptic Opacity Function (POF) scheme to encode the view-dependent opacity of a volume block. POFs are computed off-line during a pre-processing stage, only once for each block. We show that using POFs is (i) an efficient, conservative and effective way to encode the opacity variations of a volume block for a range of views, (ii) flexible for re-use by a family of opacity transfer functions without the need for additional off-line processing, and (iii) highly scalable for use in massively parallel implementations. Our results confirm the efficacy of POFs for visibility culling in large-scale parallel volume rendering; we can interactively render the Visible Woman dataset using software ray-casting on 32 processors, with interactive modification of the opacity transfer function on-the-fly.
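
To illustrate the culling logic, here is a minimal sketch of how a precomputed, conservative per-view opacity table could drive block culling. The structure names, view sampling, and threshold are invented for illustration and are not the paper's actual implementation.

```cpp
#include <array>
#include <cstdio>

// Sketch of the Plenoptic Opacity Function idea: for each volume block,
// precompute (offline) a conservative opacity for a small set of view
// directions. At render time, a block behind an already-opaque block can
// be culled without touching its voxels. All numbers here are made up.
constexpr int kViews = 8;   // coarse sampling of view directions

struct BlockPOF {
    // Lower bound on the opacity this block accumulates along each
    // sampled direction; conservative, so culling is never wrong.
    std::array<float, kViews> minOpacity{};
};

// A front block fully occludes a back block for view v if its
// conservative opacity is (numerically) saturated.
bool occludes(const BlockPOF& front, int v) {
    return front.minOpacity[v] >= 0.999f;
}

int main() {
    BlockPOF front;
    front.minOpacity.fill(1.0f);   // e.g., a block of solid bone
    int view = 3;
    if (occludes(front, view))
        std::puts("back block culled: no voxels fetched or rendered");
}
```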


Hawaii International Conference on System Sciences | 1996

The PVM 3.4 tracing facility and XPVM 1.1

James Arthur Kohl; George Al Geist

One of the more bothersome aspects of developing a parallel program is that of monitoring the behavior of the program for debugging and performance tuning. This paper discusses an enhanced tracing facility and tracing tool for PVM (Parallel Virtual Machine), a message passing library for parallel processing in a heterogeneous environment. PVM supports mixed collections of workstation clusters, shared-memory multiprocessors, and MPPs. The upcoming release of PVM, Version 3.4, contains a new and improved tracing facility which provides more flexible and efficient access to run-time program information. This new tracing system supports a buffering mechanism to reduce the perturbation of user applications caused by tracing, and a more flexible trace event definition scheme which is based on a self-defining data format. The new scheme expedites the collection of program execution histories, and allows for integration of user-defined custom trace events. The tracing instrumentation is built into the PVM library, to avoid re-compilation when tracing is desired, and supports on-the-fly adjustments to each task's trace event mask, for control over the level of tracing detail.
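
The buffering idea can be sketched in a few lines: events accumulate locally and are shipped in batches, with each record carrying a self-describing tag. The code below is an invented illustration, not the PVM 3.4 tracing API.

```cpp
#include <cstdio>
#include <cstring>
#include <vector>

// Sketch of buffered tracing: events are appended to a local buffer and
// flushed in batches, so the traced task is not perturbed by one message
// per event. Records carry their own description ("self-defining"),
// sketched here as a name tag plus raw payload. All names are invented.
struct TraceBuffer {
    std::vector<unsigned char> buf;
    size_t flushThreshold = 4096;

    void logEvent(const char* name, const void* payload, size_t n) {
        size_t len = std::strlen(name) + 1;
        buf.insert(buf.end(), name, name + len);       // self-describing tag
        auto* p = static_cast<const unsigned char*>(payload);
        buf.insert(buf.end(), p, p + n);               // payload bytes
        if (buf.size() >= flushThreshold) flush();
    }
    void flush() {
        // A real system would ship the batch to the trace collector here.
        std::printf("flushing %zu bytes of trace data\n", buf.size());
        buf.clear();
    }
};

int main() {
    TraceBuffer t;
    int taskId = 7;
    t.logEvent("msg_send", &taskId, sizeof taskId);
    t.flush();   // final drain at task exit
}
```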


Archive | 2006

Parallel PDE-Based Simulations Using the Common Component Architecture

Lois Curfman McInnes; Benjamin A. Allan; Robert C. Armstrong; Steven J. Benson; David E. Bernholdt; Tamara L. Dahlgren; Lori Freitag Diachin; Manojkumar Krishnan; James Arthur Kohl; J. Walter Larson; Sophia Lefantzi; Jarek Nieplocha; Boyana Norris; Steven G. Parker; Jaideep Ray; Shujia Zhou

The complexity of parallel PDE-based simulations continues to increase as multimodel, multiphysics, and multi-institutional projects become widespread. A goal of component-based software engineering in such large-scale simulations is to help manage this complexity by enabling better interoperability among various codes that have been independently developed by different groups. The Common Component Architecture (CCA) Forum is defining a component architecture specification to address the challenges of high-performance scientific computing. In addition, several execution frameworks, supporting infrastructure, and general-purpose components are being developed. Furthermore, this group is collaborating with others in the high-performance computing community to design suites of domain-specific component interface specifications and underlying implementations.


Measurement and Modeling of Computer Systems | 1998

Efficient and flexible fault tolerance and migration of scientific simulations using CUMULVS

James Arthur Kohl; Philip M. Papadopoulos

Many practical scientific applications would benefit from a simple checkpointing mechanism to provide automatic restart or recovery in response to faults and failures. CUMULVS is a middleware infrastructure for interacting with parallel scientific simulations to support online visualization and computational steering. The base CUMULVS system has been extended to provide a user-level mechanism for collecting checkpoints in a parallel simulation program. Via the same interface that CUMULVS uses to identify and describe data fields for visualization and parameters for steering, the user application can select the minimal program state necessary to restart or migrate an application task. The CUMULVS run-time system uses this information to efficiently recover fault-tolerant applications by restarting failed tasks. Application tasks can also be migrated -- even across heterogeneous architecture boundaries -- to achieve load balancing or to improve the task's locality with a required resource. This paper describes the CUMULVS interface for checkpointing, the issues faced in utilizing this interface when developing fault-tolerant and migrating applications, and the direction of future research in this area.
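
The registration idea can be sketched under invented names (this is not the CUMULVS API): the application declares the minimal restart state, and the runtime snapshots exactly those fields.

```cpp
#include <cstdio>
#include <vector>

// Sketch of user-level checkpointing: the application declares the
// minimal state needed to restart, and the runtime copies exactly those
// fields into a checkpoint image. The API below is invented.
struct Field { const char* name; void* data; size_t bytes; };

struct CheckpointRegistry {
    std::vector<Field> fields;
    void declare(const char* name, void* data, size_t bytes) {
        fields.push_back({name, data, bytes});
    }
    std::vector<unsigned char> snapshot() const {
        std::vector<unsigned char> img;
        for (const auto& f : fields) {   // only declared state is saved
            auto* p = static_cast<const unsigned char*>(f.data);
            img.insert(img.end(), p, p + f.bytes);
        }
        return img;
    }
};

int main() {
    double temperature[100] = {0};
    int step = 42;
    CheckpointRegistry reg;
    reg.declare("temperature", temperature, sizeof temperature);
    reg.declare("step", &step, sizeof step);
    auto img = reg.snapshot();
    std::printf("checkpoint image: %zu bytes\n", img.size());
}
```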


High Performance Distributed Computing | 1998

HARNESS: Heterogeneous Adaptable Reconfigurable NEtworked SystemS

Jack J. Dongarra; Graham E. Fagg; Al Geist; James Arthur Kohl; Philip M. Papadopoulos; Stephen L. Scott; Vaidy S. Sunderam; Mauro Migliardi

We describe our vision, goals and plans for HARNESS, a distributed, reconfigurable and heterogeneous computing environment that supports dynamically adaptable parallel applications. HARNESS builds on the core concept of the personal virtual machine as an abstraction for distributed parallel programming, but fundamentally extends this idea, greatly enhancing dynamic capabilities. HARNESS is being designed to embrace dynamics at every level through a pluggable model that allows multiple distributed virtual machines (DVMs) to merge, split and interact with each other. It provides mechanisms for new and legacy applications to collaborate with each other using the HARNESS infrastructure, and defines and implements new plug-in interfaces and modules so that applications can dynamically customize their virtual environment. HARNESS fits well within the larger picture of computational grids as a dynamic mechanism to hide the heterogeneity and complexity of the nationally distributed infrastructure. HARNESS DVMs allow programmers and users to construct personal subsets of an existing computational grid and treat them as unified network computers, providing a familiar and comfortable environment that provides easy-to-understand scoping.


Journal of Physics: Conference Series | 2006

How the common component architecture advances computational science

Gary Kumfert; David E. Bernholdt; Thomas Epperly; James Arthur Kohl; Lois Curfman McInnes; Steven G. Parker; Jaideep Ray

Computational chemists are using Common Component Architecture (CCA) technology to increase the parallel scalability of their application ten-fold. Combustion researchers are publishing science faster because the CCA manages software complexity for them. Both the solver and meshing communities in SciDAC are converging on community interface standards as a direct response to the novel level of interoperability that CCA presents. Yet, there is much more to do before component technology becomes mainstream computational science. This paper highlights the impact that the CCA has made on scientific applications, conveys some lessons learned from five years of the SciDAC program, and previews where applications could go with the additional capabilities that the CCA has planned for SciDAC 2.

Collaboration

Dive into James Arthur Kohl's collaborations.

Top Co-Authors

David E. Bernholdt, Oak Ridge National Laboratory
George Al Geist, Oak Ridge National Laboratory
John W. Cobb, Oak Ridge National Laboratory
Stephen D. Miller, Oak Ridge National Laboratory
V. E. Lynch, Oak Ridge National Laboratory
Meili Chen, Oak Ridge National Laboratory
Michael A. Reuter, Oak Ridge National Laboratory
Torsten Wilde, Oak Ridge National Laboratory