Publication


Featured research published by Thomas R. Furlani.


Journal of Computational Chemistry | 2000

Q-Chem 2.0: A High-Performance Ab Initio Electronic Structure Program Package

Jing Kong; Christopher A. White; Anna I. Krylov; David Sherrill; Ross D. Adamson; Thomas R. Furlani; Michael S. Lee; Aaron M. Lee; Steven R. Gwaltney; Terry R. Adams; Christian Ochsenfeld; Andrew T. B. Gilbert; Gary S. Kedziora; Vitaly A. Rassolov; David Maurice; Nikhil Nair; Yihan Shao; Nicholas A. Besley; Paul E. Maslen; Jeremy P. Dombroski; Holger Daschel; Weimin Zhang; Prakashan P. Korambath; Jon Baker; Edward F. C. Byrd; Troy Van Voorhis; Manabu Oumi; So Hirata; Chao-Ping Hsu; Naoto Ishikawa

Q‐Chem 2.0 is a new release of an electronic structure program package, capable of performing first principles calculations on the ground and excited states of molecules using both density functional theory and wave function‐based methods. A review of the technical features contained within Q‐Chem 2.0 is presented. This article contains brief descriptive discussions of the key physical features of all new algorithms and theoretical models, together with sample calculations that illustrate their performance.


Journal of Chemical Physics | 1985

Theory of spin‐orbit coupling. Application to singlet–triplet interaction in the trimethylene biradical

Thomas R. Furlani; Harry F. King

Efficient methods are developed for the computation of spin‐orbit coupling constants in polyatomic molecules using complete active space multiconfiguration self‐consistent field wave functions. All electron–nuclear and electron–electron spin‐orbit interactions in the Breit–Pauli Hamiltonian are retained without storing or transforming spin‐orbit integrals. This technique is applied to the calculation of spin‐orbit coupling constants between singlet and triplet electronic states. Allowing nonorthogonality of the singlet and triplet molecular orbitals in the active space improves the quality of the wave functions and presents no serious computational difficulties. To test the method, spin‐orbit coupling constants are computed for the diatomic molecules NH, OH+, PH, and O2 and compared with similar calculations reported in the literature. Calculations are also carried out for the organic biradical trimethylene (ĊH2CH2ĊH2). The coupling constant is found to vary from 0 to 2.5 cm−1 depending upon geometry.
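For reference, the Breit–Pauli spin–orbit operator whose singlet–triplet matrix elements are computed here has the standard textbook form (atomic units, with α the fine-structure constant); the notation below is generic, not quoted from the paper:

```latex
\hat{H}_{\mathrm{SO}} =
  \frac{\alpha^{2}}{2} \sum_{i} \sum_{A} \frac{Z_{A}}{r_{iA}^{3}}
    \left( \mathbf{r}_{iA} \times \mathbf{p}_{i} \right) \cdot \mathbf{s}_{i}
  \;-\;
  \frac{\alpha^{2}}{2} \sum_{i} \sum_{j \neq i} \frac{1}{r_{ij}^{3}}
    \left( \mathbf{r}_{ij} \times \mathbf{p}_{i} \right) \cdot
    \left( \mathbf{s}_{i} + 2\,\mathbf{s}_{j} \right)
```

The first (one-electron) sum is the electron–nuclear term; the second (two-electron) sum collects the spin–same-orbit and spin–other-orbit interactions, and it is these two-electron contributions that the paper evaluates without storing or transforming spin–orbit integrals.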


International Symposium on Pervasive Systems, Algorithms, and Networks | 2009

Towards Thermal Aware Workload Scheduling in a Data Center

Lizhe Wang; Gregor von Laszewski; Jai Dayal; Xi He; Andrew J. Younge; Thomas R. Furlani

High-density blade servers are a popular technology for data centers; as a consequence, the heat dissipation density of data centers is increasing exponentially. There is strong evidence that high temperatures in such data centers lead to higher hardware failure rates and thus increased maintenance costs. Improperly designed or operated data centers may either suffer from overheated servers and potential system failures, or from overcooled systems that incur excessive utility costs. Minimizing the cost of operating a data center (utilities, maintenance, device upgrades, and replacement) is a key issue in both optimizing computing resources and maximizing business outcomes. This paper proposes an analytical model that describes data center resources with heat transfer properties and workloads with thermal features. A thermal-aware task scheduling algorithm is then presented that aims to reduce power consumption and temperatures in a data center. A simulation study is carried out to evaluate the performance of the algorithm. Simulation results show that the algorithm can significantly reduce temperatures in data centers at the cost of a tolerable decline in performance.
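The abstract does not give the algorithm itself, but its core idea, placing each incoming task on the node whose predicted temperature after placement is lowest and deferring tasks that would push every node past a limit, can be sketched. Everything below (the linear node-temperature model, the names, the numbers) is an illustrative assumption, not the authors' code:

```python
# Illustrative thermal-aware placement (not the paper's algorithm or code).
# Assumption: a node's steady-state temperature rises linearly with the
# power drawn by its assigned tasks, a deliberately crude heat model.

from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    inlet_temp_c: float          # supplied-air temperature (deg C)
    deg_per_watt: float          # assumed linear thermal coefficient
    load_watts: float = 0.0
    tasks: list = field(default_factory=list)

    def predicted_temp(self, extra_watts=0.0):
        return self.inlet_temp_c + self.deg_per_watt * (self.load_watts + extra_watts)

def schedule(tasks, nodes, temp_limit_c=35.0):
    """Place each (name, watts) task on the coolest feasible node;
    defer tasks that would push every node past the temperature limit."""
    for name, watts in tasks:
        feasible = [n for n in nodes if n.predicted_temp(watts) <= temp_limit_c]
        if not feasible:
            print(f"{name}: deferred (all nodes would exceed {temp_limit_c} C)")
            continue
        best = min(feasible, key=lambda n: n.predicted_temp(watts))
        best.load_watts += watts
        best.tasks.append(name)
        print(f"{name} -> {best.name} (predicted {best.predicted_temp():.1f} C)")

nodes = [Node("node1", 20.0, 0.05), Node("node2", 24.0, 0.05)]
schedule([("job-a", 150.0), ("job-b", 150.0), ("job-c", 150.0)], nodes)
```

The trade-off the abstract reports falls out naturally from such a scheme: deferring jobs when all nodes run hot lowers peak temperature at the cost of some throughput.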


Journal of Computational Chemistry | 2005

Lennard–Jones parameters for the combined QM/MM method using the B3LYP/6‐31G*/AMBER potential

Marek Freindorf; Yihan Shao; Thomas R. Furlani; Jing Kong

A combined DFT quantum mechanical and AMBER molecular mechanical potential (QM/MM) is presented for use in molecular modeling and molecular simulations of large biological systems. In our approach we evaluate Lennard–Jones parameters describing the interaction between the quantum mechanical (QM) part of a system, which is described at the B3LYP/6‐31+G* level of theory, and the molecular mechanical (MM) part of the system, described by the AMBER force field. The Lennard–Jones parameters for this potential are obtained by calculating hydrogen bond energies and hydrogen bond geometries for a large set of bimolecular systems, in which one hydrogen bond monomer is described quantum mechanically and the other is treated molecular mechanically. We have investigated more than 100 different bimolecular systems, finding very good agreement between hydrogen bond energies and geometries obtained from the combined QM/MM calculations and results obtained at the QM level of theory, especially with respect to geometry. Therefore, based on the Lennard–Jones parameters obtained in our study, we anticipate that the B3LYP/6‐31+G*/AMBER potential will be a precise tool to explore intermolecular interactions inside a protein environment.
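The functional form being parameterized is the standard 12–6 Lennard–Jones potential; writing it out in textbook notation (not quoted from the paper):

```latex
E_{\mathrm{LJ}}(r_{ij}) =
  4\,\varepsilon_{ij} \left[ \left( \frac{\sigma_{ij}}{r_{ij}} \right)^{12}
                           - \left( \frac{\sigma_{ij}}{r_{ij}} \right)^{6} \right]
```

Here i runs over QM atoms and j over MM atoms, and the cross-term parameters ε_ij and σ_ij are the quantities tuned in the study so that QM/MM hydrogen-bond energies and geometries reproduce full QM reference calculations.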


International Performance Computing and Communications Conference | 2009

Thermal aware workload scheduling with backfilling for green data centers

Lizhe Wang; Gregor von Laszewski; Jai Dayal; Thomas R. Furlani

Data centers now play an important role in modern IT infrastructures. Related research has shown that the energy consumed by data center cooling systems has increased significantly in recent years. There is also strong evidence that high temperatures within a data center lead to higher hardware failure rates and thus increased maintenance costs. This paper addresses thermal-aware resource management for data centers. It proposes an analytical model that describes data center resources with heat transfer properties and workloads with thermal features. A thermal-aware task scheduling algorithm with backfilling is then presented that aims to reduce power consumption and temperatures in a data center. A simulation study is carried out to evaluate the performance of the algorithm. Simulation results show that the algorithm can significantly reduce temperatures in data centers at the cost of a tolerable decline in performance.
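Backfilling lets a short job jump the queue when it can finish before the reserved start time of the blocked front job; combined with a thermal admission check, one step of an EASY-style backfilling scheduler looks roughly like the sketch below. The simplified cluster model and all numbers are illustrative assumptions, not the authors' implementation:

```python
# Illustrative EASY-backfilling step with a thermal admission check.
# Simplified models throughout; not the paper's code.

from dataclasses import dataclass

@dataclass
class Job:
    name: str
    nodes: int
    runtime: float                      # user-estimated runtime (s)

class Cluster:
    def __init__(self, total_nodes, hot=False):
        self.free = total_nodes
        self.hot = hot                  # stand-in for a real thermal model
        self.running = []               # list of (job, end_time)

    def can_start(self, job):
        return job.nodes <= self.free

    def thermally_safe(self, job):
        return not self.hot             # a real check would predict temperatures

    def earliest_start(self, job, now):
        """Earliest time enough nodes free up for `job` (simplified)."""
        freed, t = self.free, now
        for j, end in sorted(self.running, key=lambda je: je[1]):
            freed += j.nodes
            t = end
            if freed >= job.nodes:
                break
        return t

    def start(self, job, now):
        self.free -= job.nodes
        self.running.append((job, now + job.runtime))
        print(f"t={now}: start {job.name}")

def backfill_step(queue, cluster, now):
    """Start the head job if possible; otherwise backfill short jobs that
    finish before the head job's reservation and pass the thermal check."""
    if not queue:
        return
    head = queue[0]
    if cluster.can_start(head) and cluster.thermally_safe(head):
        cluster.start(queue.pop(0), now)
        return
    reservation = cluster.earliest_start(head, now)
    for job in list(queue[1:]):
        if (cluster.can_start(job) and cluster.thermally_safe(job)
                and now + job.runtime <= reservation):
            cluster.start(job, now)
            queue.remove(job)

cluster = Cluster(total_nodes=8)
cluster.start(Job("running-1", 6, 100.0), now=0.0)
queue = [Job("big", 4, 50.0), Job("small", 2, 80.0)]
backfill_step(queue, cluster, now=10.0)   # "big" waits; "small" backfills
```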


Journal of Computational Chemistry | 1995

Implementation of a parallel direct SCF algorithm on distributed memory computers

Thomas R. Furlani; Harry F. King

A parallel direct self‐consistent field (SCF) algorithm for distributed memory computers is described. Key features of the algorithm are its ability to achieve a load balance dynamically, its modest memory requirements per processor, and its ability to utilize the full eightfold index permutation symmetry of the two‐electron integrals despite the fact that entire copies of the Fock and density matrices are not present in each processor's local memory. The algorithm is scalable and, accordingly, has the potential to function efficiently on hundreds of processors. With the algorithm described here, a calculation employing several thousand basis functions can be carried out on a distributed memory machine with 100 or more processors, each with just 4 MBytes of RAM and no disk. The Fock matrix build portion of the algorithm has been implemented on a 16‐node Intel iPSC/2. Results from benchmark calculations are encouraging. The algorithm shows excellent load balance when run on 4, 8, or 16 processors and displays almost ideal speed‐up in going from 4 to 16 processors. Preliminary benchmark calculations have also been carried out on an Intel Paragon.
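Dynamic load balancing of this kind is commonly realized with a manager/worker scheme in which idle processors request the next batch of integral work on demand. The sketch below uses modern mpi4py purely for illustration (the original implementation targeted the Intel iPSC/2), and the integral kernel is a hypothetical placeholder:

```python
# Illustrative manager/worker dynamic load balancing over integral batches.
# mpi4py is used for illustration only; the integral kernel is a placeholder.

from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()
TASK, STOP = 0, 1

def manager(batches):
    """Hand out work on demand so faster processors simply take more batches."""
    status = MPI.Status()
    next_batch, stopped = 0, 0
    while stopped < size - 1:
        comm.recv(source=MPI.ANY_SOURCE, tag=MPI.ANY_TAG, status=status)
        worker_rank = status.Get_source()
        if next_batch < len(batches):
            comm.send(batches[next_batch], dest=worker_rank, tag=TASK)
            next_batch += 1
        else:
            comm.send(None, dest=worker_rank, tag=STOP)
            stopped += 1

def worker():
    status = MPI.Status()
    while True:
        comm.send(None, dest=0)          # request the next batch of work
        batch = comm.recv(source=0, tag=MPI.ANY_TAG, status=status)
        if status.Get_tag() == STOP:
            break
        _ = batch  # placeholder: contract this shell-pair batch into the Fock matrix

if rank == 0:
    # Batches of unique shell pairs (i >= j), exploiting permutation symmetry.
    manager([(i, j) for i in range(100) for j in range(i + 1)])
else:
    worker()
```

Because no processor holds full copies of the Fock and density matrices, a real implementation would also partition those matrices and communicate only the blocks each batch touches.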


Molecular Physics | 2010

A parallel implementation of the analytic nuclear gradient for time-dependent density functional theory within the Tamm–Dancoff approximation

Fenglai Liu; Zhengting Gan; Yihan Shao; Chao-Ping Hsu; Martin Head-Gordon; Benjamin T. Miller; Bernard R. Brooks; Jian-Guo Yu; Thomas R. Furlani; Jing Kong

We derived the analytic gradient for the excitation energies from a time-dependent density functional theory calculation within the Tamm–Dancoff approximation (TDDFT/TDA) using Gaussian atomic orbital basis sets, and introduced an efficient serial and parallel implementation. Some timing results are shown from a B3LYP/6-31G**/SG-1-grid calculation on zincporphyrin. We also performed TDDFT/TDA geometry optimizations for low-lying excited states of 20 small molecules, and compared adiabatic excitation energies and optimized geometry parameters to experimental values using the B3LYP and ωB97 functionals. There are only minor differences between TDDFT and TDA optimized excited state geometries and adiabatic excitation energies. Optimized bond lengths are in better agreement with experiment for both functionals than either CC2 or SOS-CIS(D0), while adiabatic excitation energies are in similar or slightly poorer agreement. Optimized bond angles with both functionals are more accurate than CIS values, but less accurate than either CC2 or SOS-CIS(D0) ones.
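For context, the Tamm–Dancoff approximation neglects the de-excitation block B of the full TDDFT linear-response equations, reducing them to a Hermitian eigenvalue problem (standard notation, not taken from the paper):

```latex
\begin{pmatrix} \mathbf{A} & \mathbf{B} \\ \mathbf{B}^{*} & \mathbf{A}^{*} \end{pmatrix}
\begin{pmatrix} \mathbf{X} \\ \mathbf{Y} \end{pmatrix}
= \omega
\begin{pmatrix} \mathbf{1} & \mathbf{0} \\ \mathbf{0} & -\mathbf{1} \end{pmatrix}
\begin{pmatrix} \mathbf{X} \\ \mathbf{Y} \end{pmatrix}
\quad \xrightarrow{\ \mathbf{B} \,=\, 0\ } \quad
\mathbf{A}\,\mathbf{X} = \omega\,\mathbf{X}
```

The analytic nuclear gradient then follows by differentiating the excitation energy ω with respect to the nuclear coordinates, which is the derivation the paper carries out and parallelizes.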


Extreme Science and Engineering Discovery Environment | 2013

Using XDMoD to facilitate XSEDE operations, planning and analysis

Thomas R. Furlani; Barry L. Schneider; Matthew D. Jones; John Towns; David L. Hart; Steven M. Gallo; Robert L. DeLeon; Charng Da Lu; Amin Ghadersohi; Ryan J. Gentner; Abani K. Patra; Gregor von Laszewski; Fugang Wang; Jeffrey T. Palmer; Nikolay Simakov

The XDMoD auditing tool provides, for the first time, a comprehensive tool to measure both utilization and performance of high-end cyberinfrastructure (CI), with an initial focus on XSEDE. Here, we demonstrate through several case studies its utility for providing important metrics regarding resource utilization and performance of TeraGrid/XSEDE that can be used for detailed analysis and planning as well as improving operational efficiency and performance. Measuring the utilization of high-end cyberinfrastructure such as XSEDE helps provide a detailed understanding of how a given CI resource is being utilized and can lead to improved performance of the resource in terms of job throughput or any number of desired job characteristics. In the case studies considered here, a detailed historical analysis of XSEDE usage data using XDMoD clearly demonstrates the tremendous growth in the number of users, overall usage, and scale of the simulations routinely carried out. Not surprisingly, physics, chemistry, and the engineering disciplines are shown to be heavy users of the resources. However, as the data clearly show, the molecular biosciences are now a significant and growing user of XSEDE resources, accounting for more than 20 percent of all SUs consumed in 2012. XDMoD shows that the resources required by the various scientific disciplines are very different. Physics, the astronomical sciences, and the atmospheric sciences tend to solve large problems requiring many cores. Molecular bioscience applications, on the other hand, require many cycles but do not employ core counts that are as large. Such distinctions are important in guiding future cyberinfrastructure design decisions. XDMoD's implementation of a novel application kernel-based auditing system to measure overall CI system performance and quality of service is shown, through several examples, to provide a useful means to automatically detect underperforming hardware and software. This capability is especially critical given the complex composition of today's advanced CI. Examples include an application kernel based on a widely used quantum chemistry program that uncovered a software bug in the I/O stack of a commercial parallel file system, which was subsequently fixed by the vendor in the form of a software patch that is now part of their standard release. This error, which resulted in dramatically increased execution times as well as outright job failures, would likely have gone unnoticed for some time and was only uncovered as a result of the deployment of XDMoD's suite of application kernels.
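A toy version of the utilization breakdown described above, aggregating service units (SUs) by field of science, might look like the following; all field names and numbers are invented for illustration, and real XDMoD queries its data warehouse rather than an in-memory list:

```python
# Toy aggregation of service units (SUs) by field of science.
# All values are invented; this is not XDMoD's code or data.

from collections import defaultdict

jobs = [
    {"field": "Physics", "sus": 1.9e8},
    {"field": "Molecular Biosciences", "sus": 1.4e8},
    {"field": "Chemistry", "sus": 1.1e8},
    {"field": "Atmospheric Sciences", "sus": 6.0e7},
]

totals = defaultdict(float)
for job in jobs:
    totals[job["field"]] += job["sus"]

grand_total = sum(totals.values())
for field, sus in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(f"{field:24s} {sus:10.2e} SUs  {100 * sus / grand_total:5.1f}%")
```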


IEEE International Conference on High Performance Computing, Data, and Analytics | 2014

Comprehensive resource use monitoring for HPC systems with TACC stats

R. Todd Evans; William L. Barth; James C. Browne; Robert L. DeLeon; Thomas R. Furlani; Steven M. Gallo; Matthew D. Jones; Abani K. Patra

This paper reports on a comprehensive, fully automated resource use monitoring package, TACC Stats, which enables consultants, users, and other stakeholders in an HPC system to systematically and actively identify jobs and applications that could benefit from expert support, and which aids in the diagnosis of software and hardware issues. TACC Stats continuously collects and analyzes resource usage data for every job run on a system and differs significantly from conventional profilers in that it requires no action on the part of users or consultants: it is always collecting data on every node for every job. TACC Stats is open source and downloadable, configurable and compatible with general Linux-based computing platforms, and extensible to new CPU architectures and hardware devices. It is meant to provide a comprehensive resource usage monitoring solution. In addition to describing TACC Stats, the paper illustrates its application to identifying production jobs with inefficient resource use characteristics.
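In the spirit of the always-on collection described above, a minimal Linux sampler that reads kernel counters on a fixed cadence, with no user action required, could look like the sketch below. The sampling interval and record format are assumptions; TACC Stats itself gathers far more counters (CPU, memory, network, filesystem, hardware performance counters):

```python
# Minimal always-on node sampler (illustrative only; not TACC Stats code).
# Reads standard Linux procfs counters on a fixed cadence.

import time

def sample():
    """Collect a few raw counters; a real collector gathers many more."""
    with open("/proc/stat") as f:
        # First line: cumulative CPU jiffies (user, nice, system, idle, ...).
        cpu = list(map(int, f.readline().split()[1:5]))
    with open("/proc/meminfo") as f:
        mem_total_kb = int(f.readline().split()[1])   # "MemTotal: N kB"
    return {"time": time.time(), "cpu_jiffies": cpu, "mem_total_kb": mem_total_kb}

def collect(interval_s=600, samples=3):
    """Emit one record per interval; the 10-minute cadence is an assumption."""
    for _ in range(samples):
        print(sample())
        time.sleep(interval_s)

if __name__ == "__main__":
    collect(interval_s=1, samples=2)   # short interval just for demonstration
```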


Concurrency and Computation: Practice and Experience | 2013

Performance metrics and auditing framework using application kernels for high-performance computer systems

Thomas R. Furlani; Matthew D. Jones; Steven M. Gallo; Andrew E. Bruno; Charng-Da Lu; Amin Ghadersohi; Ryan J. Gentner; Abani K. Patra; Robert L. DeLeon; Gregor von Laszewski; Fugang Wang; Ann Zimmerman

This paper describes XSEDE Metrics on Demand (XDMoD), a comprehensive auditing framework for use by high‐performance computing centers, which provides metrics regarding resource utilization, resource performance, and impact on scholarship and research. This role‐based framework is designed to meet the following objectives: (1) provide the user community with a tool to manage their allocations and optimize their resource utilization; (2) provide operational staff with the ability to monitor and tune resource performance; (3) provide management with a tool to monitor utilization, user base, and performance of resources; and (4) provide metrics to help measure scientific impact. Although initially focused on the XSEDE program, XSEDE Metrics on Demand can be adapted to any high‐performance computing environment. The framework includes a computationally lightweight application kernel auditing system that utilizes performance kernels to measure overall system performance. This allows continuous resource auditing to measure all aspects of system performance, including filesystem performance, processor and memory performance, and network latency and bandwidth. Metrics that focus on scientific impact, such as publications, citations, and external funding, will be included to help quantify the important role high‐performance computing centers play in advancing research and scholarship.
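One simple way to realize the continuous auditing described above is to compare each application-kernel run against its own recent history and flag statistical outliers; the threshold rule below is an illustrative assumption, not XDMoD's actual detection logic:

```python
# Illustrative application-kernel regression check (not XDMoD's code):
# flag a run whose wall time exceeds mean + k*stdev of its recent history.

import statistics

def flag_underperformance(history, latest, k=3.0, min_samples=10):
    """Return True if `latest` is an outlier relative to `history` (seconds)."""
    if len(history) < min_samples:
        return False                       # not enough baseline data yet
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    return latest > mean + k * stdev

baseline = [312.0, 305.5, 318.2, 309.9, 315.0,
            311.4, 307.7, 313.3, 310.8, 314.6]
print(flag_underperformance(baseline, 402.0))   # True: investigate the system
print(flag_underperformance(baseline, 316.0))   # False: within normal variation
```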
