Publication


Featured research published by Joseph P. Kenny.


Journal of Computational Chemistry | 2007

PSI3: An open‐source Ab Initio electronic structure package

T. Daniel Crawford; C. David Sherrill; Edward F. Valeev; Justin T. Fermann; Rollin A. King; Matthew L. Leininger; Shawn T. Brown; Curtis L. Janssen; Edward T. Seidl; Joseph P. Kenny; Wesley D. Allen

PSI3 is a program system and development platform for ab initio molecular electronic structure computations. The package includes mature programming interfaces for parsing user input, accessing commonly used data such as basis-set information or molecular orbital coefficients, and retrieving and storing binary data (with no software limitations on file sizes or file-system sizes), especially multi-index quantities such as electron repulsion integrals. This platform is useful both for the rapid implementation of standard quantum chemical methods and for the development of new models. Features that have already been implemented include Hartree-Fock, multiconfigurational self-consistent-field, second-order Møller-Plesset perturbation theory, coupled cluster, and configuration interaction wave functions. Distinctive capabilities include the ability to employ Gaussian basis functions with arbitrary angular momentum levels; linear R12 second-order perturbation theory; coupled cluster frequency-dependent response properties, including dipole polarizabilities and optical rotation; and diagonal Born-Oppenheimer corrections with correlated wave functions. This article describes the programming infrastructure and main features of the package. PSI3 is available free of charge under the open-source GNU General Public License.


International Conference on Data Mining | 2005

Higher-order Web link analysis using multilinear algebra

Tamara G. Kolda; Brett W. Bader; Joseph P. Kenny

Linear algebra is a powerful and proven tool in Web search. Techniques such as the PageRank algorithm of Brin and Page and the HITS algorithm of Kleinberg score Web pages based on the principal eigenvector (or singular vector) of a particular non-negative matrix that captures the hyperlink structure of the Web graph. We propose and test a new methodology that uses multilinear algebra to elicit more information from a higher-order representation of the hyperlink graph. We start by labeling the edges in our graph with the anchor text of the hyperlinks so that the associated linear algebra representation is a sparse, three-way tensor. The first two dimensions of the tensor represent the Web pages while the third dimension adds the anchor text. We then use the rank-1 factors of a multilinear PARAFAC tensor decomposition, which are akin to singular vectors of the SVD, to automatically identify topics in the collection along with the associated authoritative Web pages.
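The core computation described in the abstract, a rank-R PARAFAC (CP) decomposition of a three-way tensor, can be sketched with a generic alternating-least-squares loop. This is a minimal illustration on a synthetic tensor, not the paper's implementation or data; the shapes and the random test tensor are made up.

```python
import numpy as np

def unfold(T, mode):
    """Matricize T: the chosen mode becomes the rows, remaining axes are flattened."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def khatri_rao(A, B):
    """Column-wise Kronecker product of two matrices with equal column counts."""
    r = A.shape[1]
    return np.einsum('ir,jr->ijr', A, B).reshape(-1, r)

def parafac(T, rank, n_iter=300, seed=0):
    """Alternating least squares for a rank-R CP/PARAFAC decomposition."""
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((T.shape[0], rank))
    B = rng.standard_normal((T.shape[1], rank))
    C = rng.standard_normal((T.shape[2], rank))
    for _ in range(n_iter):
        # each factor is the least-squares fit against the matching unfolding
        A = unfold(T, 0) @ np.linalg.pinv(khatri_rao(B, C).T)
        B = unfold(T, 1) @ np.linalg.pinv(khatri_rao(A, C).T)
        C = unfold(T, 2) @ np.linalg.pinv(khatri_rao(A, B).T)
    return A, B, C

# demo on a synthetic tensor that is exactly rank 2
rng = np.random.default_rng(42)
A0 = rng.standard_normal((5, 2))
B0 = rng.standard_normal((4, 2))
C0 = rng.standard_normal((3, 2))
T = np.einsum('ir,jr,kr->ijk', A0, B0, C0)
A, B, C = parafac(T, rank=2)
rel_err = np.linalg.norm(np.einsum('ir,jr,kr->ijk', A, B, C) - T) / np.linalg.norm(T)
```

In the paper's setting the tensor axes would be page x page x anchor-text term, and the columns of the recovered factors play the role the abstract describes: each rank-1 component groups a topic with its authoritative pages.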


International Journal of Distributed Systems and Technologies | 2010

A Simulator for Large-Scale Parallel Computer Architectures

Helgi Adalsteinsson; Scott Cranford; David A. Evensky; Joseph P. Kenny; Jackson R. Mayo; Ali Pinar; Curtis L. Janssen

Efficient design of hardware and software for large-scale parallel execution requires detailed understanding of the interactions between the application, computer, and network. The authors have developed a macro-scale simulator, SST/macro, that permits the coarse-grained study of distributed-memory applications. In the presented work, applications using the Message Passing Interface (MPI) are simulated; however, the simulator is designed to allow the inclusion of other programming models. The simulator is driven from either a trace file or a skeleton application. Trace files can be in either the standard Open Trace Format or a more detailed custom format, DUMPI. The simulator architecture is modular, allowing it to be easily extended with additional network models, trace file formats, and more detailed processor models. This paper describes the design of the simulator, provides performance results, and presents studies showing how application performance is affected by machine characteristics.


Journal of Computational Chemistry | 2004

Component-based integration of chemistry and optimization software

Joseph P. Kenny; Steven J. Benson; Yuri Alexeev; Jason Sarich; Curtis L. Janssen; Lois Curfman McInnes; Manojkumar Krishnan; Jarek Nieplocha; Elizabeth Jurrus; Carl Fahlstrom; Theresa L. Windus

Typical scientific software designs make rigid assumptions regarding programming language and data structures, frustrating software interoperability and scientific collaboration. Component‐based software engineering is an emerging approach to managing the increasing complexity of scientific software. Component technology facilitates code interoperability and reuse. Through the adoption of methodology and tools developed by the Common Component Architecture Forum, we have developed a component architecture for molecular structure optimization. Using the NWChem and Massively Parallel Quantum Chemistry packages, we have produced chemistry components that provide capacity for energy and energy derivative evaluation. We have constructed geometry optimization applications by integrating the Toolkit for Advanced Optimization, Portable Extensible Toolkit for Scientific Computation, and Global Arrays packages, which provide optimization and linear algebra capabilities. We present a brief overview of the component development process and a description of abstract interfaces for chemical optimizations. The components conforming to these abstract interfaces allow the construction of applications using different chemistry and mathematics packages interchangeably. Initial numerical results for the component software demonstrate good performance, and highlight potential research enabled by this platform.
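The decoupling the abstract describes, where any chemistry package and any optimizer can be combined through a shared abstract interface, can be illustrated with a small sketch. These class and method names are hypothetical stand-ins, not the CCA interfaces or the NWChem/MPQC components themselves.

```python
# Hedged sketch of component-style decoupling: the optimizer sees only an
# abstract evaluator interface, so chemistry backends are interchangeable.
# All names here are illustrative, not from the CCA specification.
from abc import ABC, abstractmethod
import numpy as np

class ModelEvaluator(ABC):
    """Abstract interface a chemistry component would implement."""
    @abstractmethod
    def energy(self, x: np.ndarray) -> float: ...
    @abstractmethod
    def gradient(self, x: np.ndarray) -> np.ndarray: ...

class QuadraticModel(ModelEvaluator):
    """Stand-in for a real package: a simple quadratic energy surface
    with its minimum at x = 1."""
    def energy(self, x):
        return float(np.sum((x - 1.0) ** 2))
    def gradient(self, x):
        return 2.0 * (x - 1.0)

def steepest_descent(model: ModelEvaluator, x0, step=0.1, n_iter=100):
    """A trivial optimizer component: it uses only the abstract interface,
    so any ModelEvaluator can be plugged in."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        x = x - step * model.gradient(x)
    return x

x_opt = steepest_descent(QuadraticModel(), [0.0, 2.0])
```

Swapping `QuadraticModel` for another `ModelEvaluator` implementation changes the chemistry without touching the optimizer, which is the interchangeability the abstract interfaces are designed to provide.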


Measurement and Modeling of Computer Systems | 2011

Using simulation to design extreme-scale applications and architectures: programming model exploration

Curtis L. Janssen; Helgi Adalsteinsson; Joseph P. Kenny

A key problem facing application developers is that they are expected to utilize extreme levels of parallelism soon after delivery of future leadership class machines, but developing applications capable of exposing sufficient concurrency is a time consuming process requiring experimentation. At the same time, due to the expense of building and operating an exascale machine, it will be necessary to apply tighter engineering margins to their design. Simple metrics such as the computation-communication ratio will not sufficiently specify machine requirements. Simulation fills this gap, allowing the study of extreme-scale architectures with the explicit inclusion of the complex interactions between the various hardware and software components, and can be used for correctness-checking as well as performance estimation. The simulator we discuss in this paper can be driven by reading trace files, typically generated by an actual application that has been run on real hardware, or by using a skeleton application. The skeleton application is designed to have the control flow of a real application, but with expensive computations and large data transfers replaced by discrete events for which the timings are determined by models. Using skeleton applications, we can predict application performance at levels of parallelism unobtainable on any current computational platform. The skeleton application can be modified to experiment with different communication strategies and programming models. Since the machine being simulated is in our control, we can experiment with different network topologies, routing algorithms, bandwidths, latencies, failure modes, core-to-node ratios, etc. In this paper, we use the Structural Simulation Toolkit macroscale components for coarse-grained simulation to illustrate the exploration of alternative programming models at extreme scale.
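The skeleton-application idea above, keeping the control flow of a real code but replacing expensive computation and communication with events whose durations come from models, can be sketched in a few lines. The cost model, its parameters, and the ring-exchange pattern below are made-up placeholders for illustration, not SST/macro's models.

```python
# Toy coarse-grained timing model: compute and message costs are replaced
# by simple analytic expressions. All parameter values are hypothetical.
FLOP_RATE = 1e9   # flop/s, assumed processor model
LATENCY   = 1e-6  # s, assumed network latency
BANDWIDTH = 1e9   # byte/s, assumed network bandwidth

def compute_time(flops):
    return flops / FLOP_RATE

def send_time(nbytes):
    return LATENCY + nbytes / BANDWIDTH

def simulate_ring(p, steps, flops_per_step, msg_bytes):
    """Skeleton of p ranks alternating a compute phase with a blocking
    exchange to the right neighbor; returns the simulated wall time."""
    clock = [0.0] * p
    for _ in range(steps):
        # compute phase: each rank advances by the modeled compute time
        clock = [t + compute_time(flops_per_step) for t in clock]
        # exchange phase: each pair synchronizes, then pays the message cost
        new = list(clock)
        for r in range(p):
            nbr = (r + 1) % p
            done = max(clock[r], clock[nbr]) + send_time(msg_bytes)
            new[r] = max(new[r], done)
            new[nbr] = max(new[nbr], done)
        clock = new
    return max(clock)

total = simulate_ring(p=4, steps=10, flops_per_step=1e6, msg_bytes=1e3)
```

Because the machine parameters are just constants, the experiments the abstract describes, varying bandwidth, latency, topology, or the communication pattern itself, amount to editing the model functions and re-running the skeleton.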


Journal of Computational Chemistry | 2008

Components for integral evaluation in quantum chemistry

Joseph P. Kenny; Curtis L. Janssen; Edward F. Valeev; Theresa L. Windus

Sharing low‐level functionality between software packages enables more rapid development of new capabilities and reduces the duplication of work among development groups. Using the component approach advocated by the Common Component Architecture Forum, we have designed a flexible interface for sharing integrals between quantum chemistry codes. Implementation of these interfaces has been undertaken within the Massively Parallel Quantum Chemistry package, exposing both the IntV3 and Cints/Libint integrals packages to component applications. Benchmark timings for Hartree‐Fock calculations demonstrate that the overhead due to the added interface code varies significantly, from less than 1% for small molecules with large basis sets to nearly 10% for larger molecules with smaller basis sets. Correlated calculations and density functional approaches encounter less severe performance overheads of less than 5%. While these overheads are acceptable, additional performance losses occur when arbitrary implementation details, such as integral ordering within buffers, must be handled. Integral reordering is observed to add an additional overhead as large as 12%; hence, a common standard for such implementation details is desired for optimal performance.


Journal of Physics: Conference Series | 2006

Enabling new capabilities and insights from quantum chemistry by using component architectures

Curtis L. Janssen; Joseph P. Kenny; Ida M. B. Nielsen; Manoj Kumar Krishnan; Vidhya Gurumoorthi; Edward F. Valeev; Theresa L. Windus

Steady performance gains in computing power, as well as improvements in scientific computing algorithms, are making possible the study of coupled physical phenomena of great extent and complexity. The software required for such studies is also very complex and requires contributions from experts in multiple disciplines. We have investigated the use of the Common Component Architecture (CCA) as a mechanism to tackle some of the resulting software engineering challenges in quantum chemistry, focusing on three specific application areas. In our first application, we have developed interfaces permitting solvers and quantum chemistry packages to be readily exchanged. This enables our quantum chemistry packages to be used with alternative solvers developed by specialists, remedying deficiencies we discovered in the native solvers provided in each of the quantum chemistry packages. The second application involves development of a set of components designed to improve utilization of parallel machines by allowing multiple components to execute concurrently on subsets of the available processors. This was found to give substantial improvements in parallel scalability. Our final application is a set of components permitting different quantum chemistry packages to interchange intermediate data. These components enabled the investigation of promising new methods for obtaining accurate thermochemical data for reactions involving heavy elements.


International Conference on Parallel Processing | 2013

Validation and uncertainty assessment of extreme-scale HPC simulation through Bayesian inference

Jeremiah J. Wilke; Khachik Sargsyan; Joseph P. Kenny; Bert J. Debusschere; Habib N. Najm; Gilbert Hendry

Simulation of high-performance computing (HPC) systems plays a critical role in their development, especially as HPC moves toward the co-design model used for embedded systems, tying hardware and software into a unified design cycle. Exploring system-wide tradeoffs in hardware, middleware and applications using high-fidelity cycle-accurate simulation, however, is far too costly. Coarse-grained methods can provide efficient, accurate simulation but require rigorous uncertainty quantification (UQ) before using results to support design decisions. We present here SST/macro, a coarse-grained structural simulator providing flexible congestion models for low-cost simulation. We explore the accuracy limits of coarse-grained simulation by deriving error distributions of model parameters using Bayesian inference. Propagating these uncertainties through the model, we demonstrate SST/macro's utility in making conclusions about performance tradeoffs for a series of MPI collectives. Low-cost and high-accuracy simulations coupled with UQ methodology make SST/macro a powerful tool for rapidly prototyping systems to aid extreme-scale HPC co-design.
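The workflow the abstract outlines, inferring a distribution over a model parameter from measurements and then propagating it through a prediction, can be illustrated with a minimal grid-based Bayesian calibration. The linear cost model, the synthetic "measurements", and every numeric value below are invented for illustration; none of it is SST/macro's actual model or data.

```python
# Hedged sketch of Bayesian calibration + uncertainty propagation for a
# coarse-grained cost model. All numbers are illustrative placeholders.
import numpy as np

def model(nbytes, latency, bandwidth):
    """Simple latency/bandwidth message-cost model (assumed form)."""
    return latency + nbytes / bandwidth

# synthetic "measurements": true bandwidth 5 GB/s plus 5% relative noise
rng = np.random.default_rng(1)
true_bw = 5e9
sizes = np.array([1e3, 1e4, 1e5, 1e6])
obs = model(sizes, 1e-6, true_bw) * (1 + 0.05 * rng.standard_normal(4))

# posterior over bandwidth: flat prior on a grid, Gaussian noise model
bw_grid = np.linspace(1e9, 1e10, 500)
sigma = 0.05 * obs  # assumed relative measurement noise
loglike = np.array([
    -0.5 * np.sum(((obs - model(sizes, 1e-6, bw)) / sigma) ** 2)
    for bw in bw_grid
])
post = np.exp(loglike - loglike.max())
post /= post.sum()

# propagate the parameter uncertainty through a new prediction (10 MB send)
pred = model(1e7, 1e-6, bw_grid)
mean_pred = np.sum(post * pred)
```

The same pattern scales up conceptually: instead of one bandwidth parameter and a closed-form model, the paper's setting involves congestion-model parameters and full simulations, but the inference-then-propagation structure is the same.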


International Conference on Quality Software | 2009

Adaptive Application Composition in Quantum Chemistry

Li Li; Joseph P. Kenny; Meng-Shiou Wu; Kevin A. Huck; Alexander Gaenko; Mark S. Gordon; Curtis L. Janssen; Lois Curfman McInnes; Hirotoshi Mori; Heather Marie Netzloff; Boyana Norris; Theresa L. Windus

Component interfaces, as advanced by the Common Component Architecture (CCA), enable easy access to complex software packages for high-performance scientific computing. A recent focus has been incorporating support for computational quality of service (CQoS), or the automatic composition, substitution, and dynamic reconfiguration of component applications. Several leading quantum chemistry packages have achieved interoperability by adopting CCA components. Running these computations on diverse computing platforms requires selection among many algorithmic and hardware configuration parameters; typical educated guesses or trial and error can result in unexpectedly low performance. Motivated by the need for faster runtimes and increased productivity for chemists, we present a flexible CQoS approach for quantum chemistry that uses a generic CQoS database component to create a training database with timing results and metadata for a range of calculations. The database then interacts with a chemistry CQoS component and other infrastructure to facilitate adaptive application composition for new calculations.
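The CQoS mechanism described above, a training database of past timings that guides configuration choices for new calculations, can be sketched with a tiny lookup. The records, the configuration fields, and the nearest-size selection rule are all invented for illustration; the real component uses a generic database and richer metadata.

```python
# Hedged sketch of CQoS-style adaptive composition: pick the fastest
# recorded configuration for the problem most similar to the new one.
# Training data and selection heuristic are hypothetical.
records = [
    # (molecule_size, config, measured_seconds)
    (10, {"threads": 4,  "algorithm": "direct"},         3.2),
    (10, {"threads": 16, "algorithm": "direct"},         2.9),
    (80, {"threads": 4,  "algorithm": "conventional"}, 410.0),
    (80, {"threads": 16, "algorithm": "direct"},       150.0),
]

def choose_config(molecule_size):
    """Select the fastest recorded config for the nearest trained size."""
    nearest = min({size for size, _, _ in records},
                  key=lambda s: abs(s - molecule_size))
    candidates = [(t, cfg) for size, cfg, t in records if size == nearest]
    return min(candidates, key=lambda c: c[0])[1]
```

A new calculation on, say, a 75-atom molecule would match the size-80 training runs and inherit their fastest configuration, replacing the trial-and-error parameter guessing the abstract describes.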


Journal of Physics: Conference Series | 2005

Component-based software for high-performance scientific computing

Yuri Alexeev; Benjamin A. Allan; Robert C. Armstrong; David E. Bernholdt; Tamara L. Dahlgren; Dennis Gannon; Curtis L. Janssen; Joseph P. Kenny; Manojkumar Krishnan; James Arthur Kohl; Gary Kumfert; Lois Curfman McInnes; Jarek Nieplocha; Steven Parker; Craig Rasmussen; Theresa L. Windus

Recent advances in both computational hardware and multidisciplinary science have given rise to an unprecedented level of complexity in scientific simulation software. This paper describes an ongoing grassroots effort aimed at addressing complexity in high-performance computing through the use of Component-Based Software Engineering (CBSE). Highlights of the benefits and accomplishments of the Common Component Architecture (CCA) Forum and SciDAC ISIC are given, followed by an illustrative example of how the CCA has been applied to drive scientific discovery in quantum chemistry. Thrusts for future research are also described briefly.

Collaboration


Dive into Joseph P. Kenny's collaborations.

Top Co-Authors

Jeremiah J. Wilke (Sandia National Laboratories)
Curtis L. Janssen (Sandia National Laboratories)
Gilbert Hendry (Sandia National Laboratories)
Khachik Sargsyan (Sandia National Laboratories)
Samuel Knight (Sandia National Laboratories)
Cy P. Chan (Lawrence Berkeley National Laboratory)