Jeff Daily
Pacific Northwest National Laboratory
Publications
Featured research published by Jeff Daily.
Journal of Physics: Conference Series | 2007
Karen L. Schuchardt; Bruce J. Palmer; Jeff Daily; Todd O. Elsethagen; Annette Koontz
Global cloud resolving models at resolutions of 4 km or less create significant challenges for simulation output, data storage, data management, and post-simulation analysis and visualization. To support efficient model output as well as data analysis, new methods for I/O and data organization must be evaluated. The model we are supporting, the Global Cloud Resolving Model being developed at Colorado State University, uses a geodesic grid. The non-monotonic nature of the grid's coordinate variables requires enhancements to existing data processing tools and community standards for describing and manipulating grids. The resolution, size, and extent of the data suggest the need for parallel analysis tools and allow for the possibility of new techniques in data mining, filtering, and comparison to observations. We describe the challenges posed by various aspects of data generation, management, and analysis; our work exploring I/O strategies for the model; and a preliminary architecture, web portal, and tool enhancements which, when complete, will enable broad community access to the data sets in ways familiar to the community.
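A minimal sketch of why non-monotonic geodesic-grid coordinates defeat the usual rectilinear idiom: cells arrive as an unordered 1-D list, so a regional subset is a boolean mask rather than a contiguous coordinate slice. File and variable names ("gcrm_output.nc", "grid_center_lat", etc.) are hypothetical, not from the paper.

```python
# Sketch: subsetting a geodesic-grid field whose cell-center lat/lon
# coordinate variables are non-monotonic, so coordinate-range slicing
# (the rectilinear-grid idiom) does not apply.
import numpy as np
from netCDF4 import Dataset

with Dataset("gcrm_output.nc") as nc:
    lat = nc.variables["grid_center_lat"][:]   # one value per geodesic cell
    lon = nc.variables["grid_center_lon"][:]
    temp = nc.variables["temperature"][0, :]   # first time step, all cells

# Cells are unordered, so a regional subset is a mask, not a slice.
region = (lat > 30.0) & (lat < 60.0) & (lon > -130.0) & (lon < -60.0)
print(f"{region.sum()} cells in region, mean T = {temp[region].mean():.2f}")
```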
International Parallel and Distributed Processing Symposium | 2017
Nitin A. Gawande; Joshua Landwehr; Jeff Daily; Nathan R. Tallent; Abhinav Vishnu; Darren J. Kerbyson
Deep Learning (DL) algorithms have become ubiquitous in data analytics. As a result, major computing vendors, including NVIDIA, Intel, AMD, and IBM, have architectural roadmaps influenced by DL workloads, and several vendors have recently advertised new computing products as accelerating DL workloads. Unfortunately, it is difficult for data scientists to quantify the potential of these different products. This paper provides a performance and power analysis of important DL workloads on two major parallel architectures: NVIDIA DGX-1 (eight Pascal P100 GPUs interconnected with NVLink) and Intel Knights Landing (KNL) CPUs interconnected with Intel Omni-Path. Our evaluation consists of a cross-section of convolutional neural network workloads: the CifarNet, CaffeNet, AlexNet, and GoogLeNet topologies using the CIFAR-10 and ImageNet datasets. The workloads are vendor-optimized for each architecture. Our analysis indicates that although GPUs provide the highest overall raw performance, the gap can close for some convolutional networks, and KNL can be competitive when considering performance per watt. Furthermore, NVLink is critical to GPU scaling.
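A minimal sketch of the performance-per-watt methodology: measure training throughput (images/s) while sampling device power, then report their ratio. The nvidia-smi power query is one concrete option on NVIDIA GPUs; the train_step() workload is a hypothetical stand-in for the vendor-optimized CNN runs evaluated in the paper.

```python
# Sketch: throughput and performance/watt for a training loop.
import subprocess, time

def gpu_power_watts():
    # Query instantaneous board power draw on an NVIDIA GPU.
    out = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=power.draw",
         "--format=csv,noheader,nounits"])
    return float(out.decode().strip().splitlines()[0])

def measure(train_step, batch_size, n_batches):
    samples, start = [], time.time()
    for _ in range(n_batches):
        train_step()                  # one optimizer step on one batch
        samples.append(gpu_power_watts())
    elapsed = time.time() - start
    images_per_sec = batch_size * n_batches / elapsed
    avg_watts = sum(samples) / len(samples)
    return images_per_sec, images_per_sec / avg_watts

# Usage (train_step supplied by the framework being benchmarked):
# ips, ips_per_watt = measure(train_step, batch_size=256, n_batches=100)
```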
International Conference on Big Data | 2016
Charles Siegel; Jeff Daily; Abhinav Vishnu
We present novel techniques to accelerate the convergence of Deep Learning algorithms by conducting low-overhead removal of redundant neurons (apoptosis of neurons) that do not contribute to model learning, during the training phase itself. We provide in-depth theoretical underpinnings of our heuristics (bounding accuracy loss and handling apoptosis of several neuron types), and present methods to conduct adaptive neuron apoptosis. Specifically, we are able to improve the training time for several datasets by 2–3×, while reducing the number of parameters by up to 30× (4–5× on average) on datasets such as ImageNet classification. For the Higgs boson dataset, our implementation improves the classification accuracy (measured by Area Under Curve (AUC)) from 0.88 to 0.94, while reducing the number of parameters by 3× in comparison to the existing literature. The proposed methods achieve a 2.44× speedup in comparison to the default (no apoptosis) algorithm.
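A minimal sketch of the core mechanic on a one-hidden-layer MLP: hidden units whose mean activation magnitude stays below a threshold are removed mid-training by deleting the matching rows and columns of the weight matrices. Thresholding on mean |activation| is a simplification; the paper's heuristics are adaptive and also bound the accuracy loss.

```python
# Sketch: one apoptosis pass over a ReLU hidden layer.
import numpy as np

def apoptosis_step(W1, b1, W2, X, threshold=1e-3):
    """Drop hidden neurons that contribute (almost) nothing on batch X."""
    h = np.maximum(0.0, X @ W1 + b1)           # ReLU hidden activations
    keep = np.abs(h).mean(axis=0) > threshold  # per-neuron contribution
    return W1[:, keep], b1[keep], W2[keep, :], int(keep.sum())

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(32, 256)), np.zeros(256)
W2 = rng.normal(size=(256, 10))
X = rng.normal(size=(64, 32))                  # one minibatch of inputs
W1, b1, W2, alive = apoptosis_step(W1, b1, W2, X, threshold=0.5)
print(f"{alive} of 256 hidden neurons survive")
```

Because apoptosis shrinks both the outgoing and incoming weight matrices, later training iterations run on a strictly smaller model, which is where the reported speedups come from.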
Advances in Computing and Communications | 2017
Jacob Hansen; Thomas W. Edgar; Jeff Daily; Di Wu
With an ever-evolving power grid, concerns regarding how to maintain system stability, efficiency, and reliability remain constant because of increasing uncertainties and decreasing rotating inertia. To alleviate some of these concerns, demand response represents a viable solution and is a virtually untapped resource in the current power grid. This work describes a hierarchical control framework that allows coordination between distributed energy resources and demand response. The framework is composed of two control layers: a coordination layer that ensures aggregations of resources are coordinated to achieve system objectives, and a device layer that controls individual resources to ensure the predetermined power profile is tracked in real time. Studying the hierarchical control at scale required advancements in simulation capability; we detail the technical advancements necessary to investigate and answer control interaction questions, including the Framework for Network Co-Simulation platform and the Arion modeling capability. Using a large-scale integrated transmission system model coupled with multiple distribution systems, we validate the effectiveness of the proposed control framework and gain insights into the interdependencies of controls across a complex system and how they must be tuned.
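A minimal sketch of the two-layer idea, under assumed dynamics: a coordination layer splits an aggregate power target across resource aggregations in proportion to capacity, and each device-layer controller nudges its output toward the assigned share. Proportional tracking here is an illustrative stand-in for the real-time device control described in the paper.

```python
# Sketch: coordination layer + device layer over a few control intervals.
def coordinate(target_kw, capacities_kw):
    """Coordination layer: split the system target by capacity share."""
    total = sum(capacities_kw)
    return [target_kw * c / total for c in capacities_kw]

def device_track(current_kw, setpoint_kw, gain=0.5):
    """Device layer: move toward the assigned setpoint each interval."""
    return current_kw + gain * (setpoint_kw - current_kw)

capacities = [120.0, 80.0, 200.0]        # three aggregations of resources
outputs = [0.0, 0.0, 0.0]
for step in range(5):
    setpoints = coordinate(target_kw=300.0, capacities_kw=capacities)
    outputs = [device_track(p, s) for p, s in zip(outputs, setpoints)]
    print(step, [round(p, 1) for p in outputs])
```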
2017 Workshop on Modeling and Simulation of Cyber-Physical Energy Systems (MSCPES) | 2017
Bryan Palmintier; Dheepak Krishnamurthy; Philip Top; Steve Smith; Jeff Daily; Jason C. Fuller
This paper describes the design rationale for the Hierarchical Engine for Large-scale Infrastructure Co-Simulation (HELICS), a new open-source, cyber-physical-energy co-simulation framework for electric power systems. HELICS is designed to support very-large-scale (100,000+ federates) co-simulations with off-the-shelf power-system, communication, market, and end-use tools. Other key features include cross-platform operating system support, the integration of both event-driven (e.g., packetized communication) and time-series (e.g., power flow) simulations, and the ability to co-iterate among federates to ensure physical model convergence at each time step. After describing the requirements, we evaluate existing co-simulation frameworks, including High-Level Architecture (HLA) and Functional Mockup Interface (FMI), and we conclude that none provide the required features. Then we describe the design for the new, layered HELICS architecture.
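For flavor, a minimal sketch of a time-series federate written against today's HELICS Python bindings, which postdate this design paper, so treat the API as illustrative rather than the version described here; "fed_config.json" and the publication name are hypothetical.

```python
# Sketch: a value federate joins the co-simulation, then repeatedly
# requests time and publishes its result until the stop time.
import helics as h

fed = h.helicsCreateValueFederateFromConfig("fed_config.json")
pub = h.helicsFederateGetPublication(fed, "power_flow/bus_voltage")
h.helicsFederateEnterExecutingMode(fed)

stop_time, t = 24 * 3600.0, 0.0
while t < stop_time:
    t = h.helicsFederateRequestTime(fed, t + 60.0)  # advance ~1 minute
    h.helicsPublicationPublishDouble(pub, 1.02)     # this step's result

h.helicsFederateFinalize(fed)
```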
bioRxiv | 2018
Svetlana Lockwood; Kelly A. Brayton; Jeff Daily; Shira L. Broschat
To explore the concept of a minimal gene set, we clustered 8.76 million protein sequences deduced from 2,307 completely sequenced Proteobacterial genomes. To our knowledge, this is the first study of this scale. Clustering resulted in 707,311 clusters, of which 224,442 ranged in size from 2 to 2,894 sequences. The resulting clusters allowed us to ask the question: Is a set of proteins conserved across all Proteobacteria? We chose four essential proteins representing fundamental cellular functions, the chaperonin GroEL, DNA-dependent RNA polymerase subunits beta and beta’ (RpoB/RpoB’), and DNA polymerase I (PolA), and examined their distribution in the clusters. We found these proteins to be remarkably conserved. Although the groEL gene was universally conserved in all the organisms in the study, the protein was not represented in all the deduced proteomes. The genes for RpoB and RpoB’ were missing from two genomes and merged in 88 genomes, and the sequences were sufficiently divergent that they formed separate clusters for 18 RpoB proteins (seven clusters) and 14 RpoB’ proteins (three clusters). For PolA, 52 organisms lacked an identifiable sequence, and seven sequences were sufficiently divergent that they formed five separate clusters. Interestingly, organisms lacking an identifiable PolA and those with divergent RpoB/RpoB’ were almost all endosymbionts. Furthermore, we present a range of examples of annotation issues that caused the deduced proteins to be incorrectly represented in the proteome. These annotation issues represent a significant obstacle for high-throughput analyses.
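A minimal sketch of the conservation question: given cluster assignments for deduced proteins, does any cluster contain a member from every genome? The toy memberships below are hypothetical stand-ins for the 707,311 clusters over 2,307 genomes analyzed in the paper.

```python
# Sketch: which clusters are universal across all genomes?
from collections import defaultdict

# (cluster_id, genome_id) per clustered protein sequence (toy data)
memberships = [("groEL", "g1"), ("groEL", "g2"), ("groEL", "g3"),
               ("polA", "g1"), ("polA", "g2")]   # polA absent from g3
genomes = {"g1", "g2", "g3"}

genomes_per_cluster = defaultdict(set)
for cluster, genome in memberships:
    genomes_per_cluster[cluster].add(genome)

for cluster, present in sorted(genomes_per_cluster.items()):
    status = ("universal" if present == genomes
              else f"missing from {sorted(genomes - present)}")
    print(f"{cluster}: {status}")
```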
Computer Software and Applications Conference | 2014
Selim Ciraci; Jason C. Fuller; Jeff Daily; Atefe Makhmalbaf; David Callahan
In a standard workflow for the validation of a control system, the control system is implemented as an extension to a simulator. Such simulators are complex software systems, and engineers may unknowingly violate constraints a simulator places on extensions. As such, errors may be introduced in the implementation of either the control system or the simulator, leading to invalid simulation results. This paper presents a novel runtime verification approach for verifying control system implementations within simulators. The major contribution of the approach is the two-tier specification process. In the first tier, engineers model constraints using a domain-specific language tailored to modeling a controller's response to changes in its input. The language is high-level and effectively hides the implementation details of the simulator, allowing engineers to specify design-level constraints independent of low-level simulator interfaces. In the second tier, simulator developers provide mapping rules for mapping design-level constraints to the implementation of the simulator. Using the rules, an automated tool transforms the design-level specifications into simulator-specific runtime verification specifications and generates monitoring code which is injected into the implementation of the simulator. During simulation, these monitors observe the input and output variables of the control system and report changes to the verifier. The verifier checks whether these changes follow the constraints of the control system. We describe the application of this approach to the verification of the constraints of an HVAC control system implemented with the power grid simulator GridLAB-D.
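A minimal sketch of the monitor/verifier split: injected monitors forward every change of a controller's observed variables to a verifier, which checks a design-level constraint. The HVAC-style rule here ("if room temperature exceeds the setpoint, cooling must be on by the next update") is a hypothetical constraint for illustration, not one from the paper.

```python
# Sketch: a generated monitor reporting variable changes to a verifier.
class Verifier:
    def __init__(self, setpoint):
        self.setpoint, self.pending = setpoint, False

    def on_change(self, name, value):
        if name == "room_temp":
            if self.pending and value > self.setpoint:
                print("VIOLATION: cooling not engaged in time")
            self.pending = value > self.setpoint   # arm the constraint
        elif name == "cooling_on" and value:
            self.pending = False                   # constraint satisfied

class Monitor:
    """Injected shim: forwards controller variable writes to the verifier."""
    def __init__(self, verifier):
        self.verifier = verifier

    def write(self, name, value):
        self.verifier.on_change(name, value)

v = Verifier(setpoint=22.0)
m = Monitor(v)
m.write("room_temp", 25.0)   # constraint armed
m.write("room_temp", 26.0)   # still hot, cooling never engaged -> violation
```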
IET Generation, Transmission & Distribution | 2017
Renke Huang; Rui Fan; Jeff Daily; Andrew R. Fisher; Jason C. Fuller
arXiv: Distributed, Parallel, and Cluster Computing | 2018
Jeff Daily; Abhinav Vishnu; Charles Siegel; Thomas Warfel; Vinay C. Amatya
Future Generation Computer Systems | 2018
Nitin A. Gawande; Jeff Daily; Charles Siegel; Nathan R. Tallent; Abhinav Vishnu