Publication


Featured research published by Steven G. Smith.


Monthly Weather Review | 2011

Development of a Coupled Groundwater–Atmosphere Model

Reed M. Maxwell; Julie K. Lundquist; Jeffrey D. Mirocha; Steven G. Smith; Carol S. Woodward; Andrew F. B. Tompson

Complete models of the hydrologic cycle have gained recent attention as research has shown interdependence between the coupled water and energy balances of the subsurface, land surface, and lower atmosphere. PF.WRF is a new model that combines the Weather Research and Forecasting (WRF) atmospheric model with a parallel hydrology model (ParFlow) that fully integrates three-dimensional, variably saturated subsurface flow with overland flow. These models are coupled in an explicit, operator-splitting manner via the Noah land surface model (LSM). Here, the coupled model formulation and equations are presented, and a balance of water between the subsurface, land surface, and atmosphere is verified. The improvement in important physical processes afforded by the coupled model is demonstrated using a number of semi-idealized simulations over the Little Washita watershed in the southern Great Plains. These simulations are initialized with a set of offline spinups to achieve a balanced state of ini...
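
The explicit, operator-splitting coupling described above alternates single time steps of the component models, exchanging fluxes through the land surface between steps. The following minimal sketch uses hypothetical stand-ins for the WRF, ParFlow, and Noah interfaces; none of these names or toy equations come from the actual PF.WRF code.

import numpy as np

def atmosphere_step(state, lsm_fluxes, dt):
    # Hypothetical stand-in for one WRF time step: the lower boundary
    # forcing comes from the land surface model's fluxes.
    state["air_temp"] += dt * 0.01 * lsm_fluxes["sensible_heat"]
    return state

def subsurface_step(state, infiltration, dt):
    # Hypothetical stand-in for one ParFlow step of variably saturated
    # subsurface/overland flow, forced by infiltration from the LSM.
    state["soil_moisture"] = np.clip(state["soil_moisture"] + dt * infiltration, 0.0, 1.0)
    return state

def land_surface(atm, sub):
    # Hypothetical Noah-like LSM: partitions water and energy into the
    # fluxes passed in both directions (the coupling interface).
    sensible = 0.5 * (atm["air_temp"] - 290.0)
    infiltration = 1e-3 * (1.0 - sub["soil_moisture"])
    return {"sensible_heat": sensible}, infiltration

atm = {"air_temp": 295.0}
sub = {"soil_moisture": np.array(0.3)}
dt = 60.0  # seconds
for step in range(10):
    # Explicit operator splitting: each component advances one step with
    # the other's state frozen, then fluxes are re-exchanged via the LSM.
    fluxes, infil = land_surface(atm, sub)
    atm = atmosphere_step(atm, fluxes, dt)
    sub = subsurface_step(sub, infil, dt)
print(atm["air_temp"], float(sub["soil_moisture"]))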


Advances in Water Resources | 1998

Analysis of subsurface contaminant migration and remediation using high performance computing

Andrew F. B. Tompson; Robert D. Falgout; Steven G. Smith; William J. Bosl; Steven F. Ashby

Highly resolved simulations of groundwater flow, chemical migration, and contaminant recovery processes are used to test the applicability of stochastic models of flow and transport in a typical field setting. A simulation domain encompassing a portion of the upper saturated aquifer materials beneath the Lawrence Livermore National Laboratory was developed to hierarchically represent known hydrostratigraphic units and more detailed stochastic representations of geologic heterogeneity within them. Within each unit, Gaussian random field models were used to represent hydraulic conductivity variation, as parameterized from well test data and geologic interpretation of spatial variability. Simulations of groundwater flow, transport, and remedial extraction of two hypothetical contaminants were made in six different statistical realizations of the system. The effective flow and transport behavior observed in the simulations compared reasonably with the predictions of stochastic theories based upon the Gaussian models, even though more exacting comparisons were prevented by inherent nonidealities of the geologic model and flow system. More importantly, however, biases and limitations in the hydraulic data appear to have reduced the applicability of the Gaussian representations and clouded the utility of the simulations and effective behavior based upon them. This suggests a need for better and unbiased methods for delineating the spatial distribution and structure of geologic materials and hydraulic properties in field systems. High performance computing can be of critical importance in these endeavors, especially with respect to resolving transport processes within highly variable media.
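
Gaussian random field models of log hydraulic conductivity, as used above, are commonly generated by spectral filtering of white noise. The sketch below draws one illustrative 2D realization; the covariance shape and every parameter value are assumptions for demonstration, not the paper's fitted model.

import numpy as np

def gaussian_random_field(n, corr_len, sigma, seed=0):
    """One realization of a stationary Gaussian field on an n x n grid,
    generated by filtering white noise in Fourier space (illustrative
    Gaussian-shaped spectrum, not a fitted field covariance)."""
    rng = np.random.default_rng(seed)
    k = np.fft.fftfreq(n)
    kx, ky = np.meshgrid(k, k, indexing="ij")
    # Spectral filter: a longer corr_len concentrates power at low wavenumbers.
    amp = np.exp(-(kx**2 + ky**2) * corr_len**2)
    noise = np.fft.fft2(rng.standard_normal((n, n)))
    field = np.real(np.fft.ifft2(noise * np.sqrt(amp)))
    # Normalize to zero mean and unit variance, then scale.
    field = (field - field.mean()) / field.std()
    return sigma * field

# Hydraulic conductivity K = K_g * exp(Y), with Y the Gaussian field and
# K_g an illustrative geometric mean.
Y = gaussian_random_field(n=128, corr_len=10.0, sigma=1.0, seed=42)
K = 1e-5 * np.exp(Y)
print(K.min(), K.max())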


Parallel Computing | 1994

The design and evolution of Zipcode

Anthony Skjellum; Steven G. Smith; Nathan E. Doss; Alvin P. Leung

Zipcode is a message-passing and process-management system that was designed for multicomputers and homogeneous networks of computers in order to support libraries and large-scale multicomputer software. The system has evolved significantly over the last five years, based on our experiences and identified needs. Features of Zipcode that were originally unique to it were its simultaneous support of static process groups, communication contexts, and virtual topologies, forming the ‘mailer’ data structure. Point-to-point and collective operations reference the underlying group, and use contexts to avoid mixing up messages. Recently, we have added ‘gather-send’ and ‘receive-scatter’ semantics, based on persistent Zipcode ‘invoices’, both as a means to simplify message passing and as a means to reveal more potential runtime optimizations. Key features of Zipcode appear in the forthcoming MPI standard.
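
As the abstract notes, key Zipcode features carried into MPI, where a communicator pairs a process group with a communication context. A minimal mpi4py sketch (run under mpiexec with at least two ranks) of how a duplicated communicator keeps a library's messages from mixing with the application's:

from mpi4py import MPI

world = MPI.COMM_WORLD
rank = world.Get_rank()

# Duplicating a communicator yields the same process group but a fresh
# communication context, so matching (source, tag) pairs on the two
# communicators can never intercept each other's messages. This is the
# MPI realization of Zipcode's group-plus-context "mailer" idea.
lib_comm = world.Dup()

if world.Get_size() >= 2:
    if rank == 0:
        world.send("application data", dest=1, tag=0)
        lib_comm.send("library-internal data", dest=1, tag=0)
    elif rank == 1:
        app_msg = world.recv(source=0, tag=0)
        lib_msg = lib_comm.recv(source=0, tag=0)
        print(app_msg, "|", lib_msg)

lib_comm.Free()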


IEEE International Conference on High Performance Computing, Data, and Analytics | 1999

A Numerical Simulation of Groundwater Flow and Contaminant Transport on the CRAY T3D and C90 Supercomputers

Steven F. Ashby; William J. Bosl; Robert D. Falgout; Steven G. Smith; Andrew F. B. Tompson; Timothy J. Williams

Numerical simulations of groundwater flow and chemical transport through three-dimensional heterogeneous porous media are described. The authors employ two CRAY supercomputers for different parts of the decoupled calculation: the flow field is computed on the T3D massively parallel computer, and the contaminant migration is simulated on the C90 vector supercomputer. The authors compare simulation results for subsurface models based on homogeneous and heterogeneous conceptual models and find that the heterogeneities have a profound impact on the character of contaminant migration.
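
The decoupled workflow, computing the flow field first and then simulating migration through it, can be sketched in two stages. The toy velocity field and particle advection below are illustrative stand-ins for the paper's T3D flow solve and C90 transport calculation.

import numpy as np

rng = np.random.default_rng(1)

# Stage 1 (stand-in for the flow solve): a steady velocity field on a
# 2D grid, here just a mean flow plus frozen random perturbations that
# mimic heterogeneity in the porous medium.
n = 64
vx = 1.0 + 0.3 * rng.standard_normal((n, n))
vy = 0.3 * rng.standard_normal((n, n))

# Stage 2 (stand-in for the transport run): advect a plume of particles
# through the stored velocity field, with a small random walk for
# local-scale dispersion.
particles = rng.uniform(5, 10, size=(1000, 2))
dt, n_steps = 0.1, 200
for _ in range(n_steps):
    i = np.clip(particles[:, 0].astype(int), 0, n - 1)
    j = np.clip(particles[:, 1].astype(int), 0, n - 1)
    particles[:, 0] += dt * vx[i, j] + 0.05 * rng.standard_normal(len(particles))
    particles[:, 1] += dt * vy[i, j] + 0.05 * rng.standard_normal(len(particles))

# Heterogeneity in vx, vy spreads the plume far more than uniform flow would.
print("plume centroid:", particles.mean(axis=0), "spread:", particles.std(axis=0))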


IEEE International Conference on High Performance Computing, Data, and Analytics | 1992

The Multicomputer Toolbox approach to concurrent BLAS and LACS

Robert D. Falgout; Anthony Skjellum; Steven G. Smith; Charles H. Still

The authors describe many of the issues involved in general-purpose concurrent basic linear algebra subprograms (concurrent BLAS, or CBLAS) and discuss data-distribution independence, while further generalizing data distributions. They comment on the utility of linear algebra communication subprograms (LACS). They also describe an algorithm for dense matrix-matrix multiplication and discuss matrix-vector multiplication issues. With regard to communication, they conclude that there is limited leverage in LACS per se as a stand-alone message-passing standard, and propose that the needed capabilities instead be integrated into a general, application-level message-passing standard, focusing attention on CBLAS and large-scale application needs. Most of the proposed LACS features are similar to existing or needed general-purpose primitives anyway. All of the ideas discussed have been implemented or are under current development within the Multicomputer Toolbox open software system.
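
Concurrent dense matrix-matrix multiplication over a 2D logical process grid is commonly organized as a SUMMA-style algorithm; the serial emulation below illustrates that pattern. This is one common choice, not necessarily the Toolbox's actual routine.

import numpy as np

def summa(A, B, p):
    """Emulate a SUMMA-style multiply on a p x p logical process grid.
    Each (i, j) "process" owns one block of A, B, and C; in step k it
    would receive A's block column k and B's block row k by broadcast."""
    n = A.shape[0]
    assert n % p == 0
    b = n // p
    blk = lambda M, i, j: M[i*b:(i+1)*b, j*b:(j+1)*b]
    C = np.zeros_like(A)
    for k in range(p):  # one broadcast step per block column/row
        for i in range(p):
            for j in range(p):
                # On a real machine, A(i, k) is broadcast along process
                # row i and B(k, j) along process column j; serially we
                # can simply index the blocks.
                blk(C, i, j)[:] += blk(A, i, k) @ blk(B, k, j)
    return C

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 8))
B = rng.standard_normal((8, 8))
assert np.allclose(summa(A, B, p=4), A @ B)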


2015 Workshop on Modeling and Simulation of Cyber-Physical Energy Systems (MSCPES) | 2015

A federated simulation toolkit for electric power grid and communication network co-simulation

Brian M. Kelley; Philip Top; Steven G. Smith; Carol S. Woodward; Liang Min

This paper introduces a federated simulation toolkit (FSKIT) that couples continuous time and discrete event simulations (DES) to perform the co-simulation of electric power grids and communication networks. A High Performance Computing (HPC) oriented power system dynamic simulator, GridDyn, was used for the electric power grid simulation. GridDyn is coupled to the open-source network simulator, ns-3, through FSKIT. FSKIT provides time control for advancing the state of federated simulators, and facilitates communication among objects in the federation. A wide-area communication-based electric transmission protection scheme is simulated with FSKIT, using the IEEE 39-bus test system. A communication network for the 39-bus system is built in ns-3, and basic protection relay logic is added to the power system model in order to perform the co-simulation.
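
The time control FSKIT provides can be pictured as a conservative lock-step loop: every federate advances to a common time barrier, then cross-federate events are exchanged. The sketch below uses hypothetical federate classes and toy dynamics, not the FSKIT, GridDyn, or ns-3 APIs.

import heapq

class PowerGridFederate:
    """Hypothetical stand-in for a continuous-time simulator (GridDyn-like)."""
    def __init__(self):
        self.t, self.freq, self.tripped = 0.0, 60.0, False

    def advance_to(self, t):
        # Toy dynamics: frequency sags slowly under a load imbalance.
        self.freq -= 0.02 * (t - self.t)
        self.t = t

    def poll_events(self):
        # Emit one trip message when frequency crosses a threshold,
        # loosely mimicking protection relay logic.
        if self.freq < 59.99 and not self.tripped:
            self.tripped = True
            return [("trip_breaker", self.t)]
        return []

class CommNetworkFederate:
    """Hypothetical stand-in for a discrete-event simulator (ns-3-like)."""
    def __init__(self, latency=0.02):
        self.queue, self.latency = [], latency

    def deliver(self, msg, t):
        heapq.heappush(self.queue, (t + self.latency, msg))

    def advance_to(self, t):
        while self.queue and self.queue[0][0] <= t:
            due, msg = heapq.heappop(self.queue)
            print(f"t={due:.2f}s: network delivered {msg!r}")

grid, net = PowerGridFederate(), CommNetworkFederate()
dt = 0.01  # common time barrier spacing in seconds
for step in range(1, 101):
    t = step * dt
    # Conservative synchronization: both federates advance to the same
    # barrier, then cross-federate events are exchanged for the next round.
    grid.advance_to(t)
    net.advance_to(t)
    for msg, t_send in grid.poll_events():
        net.deliver(msg, t_send)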


Scalable Parallel Libraries Conference | 1993

Modeling groundwater flow on MPPs

Steven F. Ashby; Robert D. Falgout; Steven G. Smith; Andrew F. B. Tompson

The numerical simulation of groundwater flow in three-dimensional heterogeneous porous media is examined. To enable detailed modeling of large contaminated sites, preconditioned iterative methods and massively parallel computing power are combined in a simulator called PARFLOW. After describing this portable and modular code, some numerical results are given, including one that demonstrates the code's scalability.
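
A minimal illustration of the preconditioned iterative approach: conjugate gradients on a 2D Poisson-like system standing in for the discretized flow equation. ParFlow's solvers are described elsewhere as using multigrid-preconditioned CG; the Jacobi preconditioner here is chosen only to keep the sketch short.

import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg, LinearOperator

# 2D Poisson-like operator as a stand-in for the discretized groundwater
# flow (pressure) equation on an n x n grid.
n = 64
I = sp.identity(n)
T = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n))
A = (sp.kron(I, T) + sp.kron(T, I)).tocsr()
b = np.ones(A.shape[0])

# Jacobi (diagonal) preconditioner: apply D^{-1} in each CG iteration.
# A multigrid preconditioner scales far better on large problems, but
# Jacobi keeps this example self-contained.
d_inv = 1.0 / A.diagonal()
M = LinearOperator(A.shape, matvec=lambda x: d_inv * x)

x, info = cg(A, b, M=M)
print("converged" if info == 0 else f"cg returned {info}",
      "residual norm:", np.linalg.norm(A @ x - b))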


Proceedings of the 2015 Workshop on ns-3 | 2015

Improving per processor memory use of ns-3 to enable large scale simulations

Steven G. Smith; David R. Jefferson; Peter D. Barnes; Sergei Nikolaev

In this paper we describe enhancements to improve the scaling of the ns-3 simulator for large problem sizes. The ns-3 simulator has a parallel capability; however, the current implementation instantiates the entire network topology on all ranks (processors), which restricts the problem sizes that can be run. We describe an approach to removing this limitation by distributing the network topology across ranks such that each rank holds only a part of the network topology. Performance studies were conducted to investigate the scaling performance of the modified ns-3 simulator.
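
The approach amounts to each rank instantiating only the nodes assigned to it, plus lightweight ghosts for the remote endpoints of cut links. A generic sketch with a hypothetical round-robin partitioner follows; the real ns-3 implementation is C++ and differs in detail.

def partition(node_ids, n_ranks):
    """Hypothetical partitioner: round-robin assignment of nodes to ranks."""
    return {nid: nid % n_ranks for nid in node_ids}

def build_local_topology(all_nodes, all_links, rank, n_ranks):
    owner = partition(all_nodes, n_ranks)
    local_nodes = {n for n in all_nodes if owner[n] == rank}
    local_links, ghost_nodes = [], set()
    for a, b in all_links:
        if owner[a] == rank or owner[b] == rank:
            local_links.append((a, b))
            # A remote endpoint is kept only as a lightweight ghost, not a
            # fully instantiated node; this is what bounds per-rank memory.
            for end in (a, b):
                if owner[end] != rank:
                    ghost_nodes.add(end)
    return local_nodes, ghost_nodes, local_links

nodes = list(range(12))
links = [(i, (i + 1) % 12) for i in range(12)]  # a 12-node ring
for rank in range(4):
    ln, gn, ll = build_local_topology(nodes, links, rank, 4)
    print(f"rank {rank}: {len(ln)} owned nodes, {len(gn)} ghosts, {len(ll)} links")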


Proceedings of the 2015 Workshop on ns-3 | 2015

Pushing the envelope in distributed ns-3 simulations: one billion nodes

Sergei Nikolaev; Eddy Banks; Peter D. Barnes; David R. Jefferson; Steven G. Smith

In this paper, we describe the results of simulating very large (up to 10^9 nodes), planetary-scale networks using the ns-3 simulator. The modeled networks consist of a small-world core graph of network routers and an equal number of leaf nodes (one leaf node per router). Each bidirectional link in the simulation carries on-off traffic. Using LLNL's high-performance computing (HPC) clusters, we conducted strong and weak scaling studies, and investigated on-node scalability for MPI nodes. The scaling relations for both runtime and memory are derived. In addition, we examine the packet transmission rate in the simulation and its scalability. The performance of the default ns-3 parallel scheduler is compared to the custom-designed NULL-message scheduler.
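
The modeled topology, a small-world router core with one leaf per router, is easy to reproduce at small scale with networkx. The Watts-Strogatz parameters below are illustrative guesses, and the paper's billion-node runs of course never build the whole graph in one process.

import networkx as nx

def core_plus_leaves(n_routers, k=4, p=0.1, seed=0):
    """Small-world router core with one leaf node attached per router."""
    g = nx.watts_strogatz_graph(n_routers, k, p, seed=seed)
    for r in range(n_routers):
        leaf = n_routers + r   # leaf ids follow the router ids
        g.add_edge(r, leaf)    # one bidirectional access link per router
    return g

g = core_plus_leaves(1000)
print(g.number_of_nodes(), "nodes,", g.number_of_edges(), "edges")
# Total node count is 2 * n_routers (router plus leaf); in the paper this
# structure is scaled up to ~10^9 nodes distributed across MPI ranks.
print("avg degree:", 2 * g.number_of_edges() / g.number_of_nodes())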


Power and Energy Society General Meeting | 2012

High-performance computing for electric grid planning and operations

Thomas Epperly; Thomas Edmunds; Alan Lamont; Carol Meyers; Steven G. Smith; Yiming Yao; Glenn Drayton

High-performance computing (HPC) is having a profound impact on scientific discovery and engineering in a variety of areas, and researchers are beginning to demonstrate how HPC can impact problems in energy grid planning and operations. Contemporary supercomputers can perform over 10^15 floating point operations per second and have more than 1.4 petabytes of memory, roughly five orders of magnitude more than a commodity PC workstation. This level of computing power changes the very nature of the problems that can be solved. Researchers at LLNL have used HPC systems to accelerate execution of a renewables planning study by solving a thousand unit commitment and dispatch problems in parallel; this generated new insights and allowed for a more detailed study than would otherwise have been achievable. Ongoing work at LLNL includes the development and testing of new parallel algorithms for unit commitment problems, including multi-scenario stochastic unit commitment. These algorithms will enable greater grid and time resolution and provide more accurate solutions because of the increase in model fidelity.
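
Solving a thousand independent unit commitment and dispatch cases in parallel is an embarrassingly parallel pattern. The sketch below uses a toy merit-order dispatch as a stand-in for the study's actual optimization models and solver.

from multiprocessing import Pool

def solve_scenario(scenario):
    """Toy stand-in for one unit commitment/dispatch solve: meet demand
    with the cheapest units first (a real study would call an MIP solver)."""
    demand, units = scenario["demand"], scenario["units"]  # units: (capacity MW, $/MWh)
    remaining, total_cost, committed = demand, 0.0, 0
    for cap, cost in sorted(units, key=lambda u: u[1]):
        if remaining <= 0:
            break
        dispatch = min(cap, remaining)
        remaining -= dispatch
        total_cost += dispatch * cost
        committed += 1
    return scenario["id"], total_cost, committed

if __name__ == "__main__":
    units = [(100, 20.0), (80, 35.0), (50, 50.0), (120, 28.0)]
    scenarios = [{"id": i, "demand": 150 + i % 100, "units": units}
                 for i in range(1000)]
    # Each scenario is independent, so the cases map cleanly onto parallel
    # resources; the study exploited exactly this structure on HPC systems.
    with Pool() as pool:
        results = pool.map(solve_scenario, scenarios)
    print(results[0], results[-1])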

Collaboration


Dive into Steven G. Smith's collaborations.

Top co-authors:

Robert D. Falgout (Lawrence Livermore National Laboratory)
Andrew F. B. Tompson (Lawrence Livermore National Laboratory)
Steven F. Ashby (Lawrence Livermore National Laboratory)
Peter D. Barnes (Lawrence Livermore National Laboratory)
William J. Bosl (Lawrence Livermore National Laboratory)
Anthony Skjellum (California Institute of Technology)
David R. Jefferson (Lawrence Livermore National Laboratory)
Sergei Nikolaev (Lawrence Livermore National Laboratory)
Carol S. Woodward (Lawrence Livermore National Laboratory)
Charles H. Still (Lawrence Livermore National Laboratory)