John D. Siirola
Sandia National Laboratories
Publications
Featured research published by John D. Siirola.
Computers & Chemical Engineering | 2011
Bri-Mathias S. Hodge; Shisheng Huang; John D. Siirola; Joseph F. Pekny; Gintaras V. Reklaitis
The modern world energy system is highly complex and interconnected, and energy policies may have unintended consequences. Modeling and analysis tools can therefore be crucial to gaining insight into the interactions between system components and formulating policies that will shape the future energy system. We present in this work a multi-paradigm modeling framework that allows for the continual adjustment and refinement of energy system models as the understanding of the system under study increases. This flexible and open framework allows for the consideration of different levels of model aggregation, timescales, and geographic considerations within the same model through the use of different modeling formalisms. We also present a case study of the combined California natural gas and electricity systems that illustrates how the framework may be used to account for the significant uncertainty that exists within the system.
Computers & Chemical Engineering | 2012
Sean Legg; A.J. Benavides-Serrano; John D. Siirola; Jean-Paul Watson; S.G. Davis; A. Bratteteig; Carl D. Laird
A stochastic programming formulation is developed for determining the optimal placement of gas detectors in petrochemical facilities. FLACS, a rigorous gas dispersion package, is used to generate hundreds of scenarios with different leak locations and weather conditions. Three problem formulations are investigated: minimization of expected detection time, minimization of expected detection time including a coverage constraint, and a placement based on coverage alone. The extensive forms of these optimization problems are written in Pyomo and solved using CPLEX. A sampling procedure is used to find confidence intervals on the optimality gap and quantify the effectiveness of detector placements on alternate subsamples of scenarios. Results show that the additional coverage constraint significantly improves performance on alternate subsamples. Furthermore, both optimization-based approaches dramatically outperform the coverage-only approach, making a strong case for the use of rigorous dispersion simulation coupled with stochastic programming to improve the effectiveness of these safety systems.
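The paper's extensive forms are written in Pyomo and solved with CPLEX; as a dependency-free illustration of the two ingredients of the formulation, expected detection time and scenario coverage, the sketch below brute-force enumerates placements over an invented scenario table (all data and names are hypothetical):

```python
from itertools import combinations

# Toy scenario data (hypothetical): detect_time[s][d] is the time for a
# detector at location d to detect the leak in scenario s; None = never.
detect_time = [
    [5.0, None, 12.0, 3.0],
    [None, 4.0, 6.0, None],
    [8.0, 7.0, None, 2.0],
    [None, None, 9.0, 10.0],
]
UNDETECTED_PENALTY = 100.0  # large constant for scenarios no placed detector covers

def expected_detection_time(placement):
    """Mean over scenarios of the earliest detection time among placed detectors."""
    total = 0.0
    for times in detect_time:
        seen = [times[d] for d in placement if times[d] is not None]
        total += min(seen) if seen else UNDETECTED_PENALTY
    return total / len(detect_time)

def coverage(placement):
    """Fraction of scenarios detected by at least one placed detector."""
    covered = sum(1 for times in detect_time
                  if any(times[d] is not None for d in placement))
    return covered / len(detect_time)

def best_placement(budget, min_coverage=0.0):
    """Enumerate all placements of `budget` detectors (fine at toy scale;
    raises ValueError if no placement meets the coverage threshold)."""
    feasible = (p for p in combinations(range(len(detect_time[0])), budget)
                if coverage(p) >= min_coverage)
    return min(feasible, key=expected_detection_time)
```

At realistic scale (hundreds of FLACS scenarios and many candidate locations) enumeration is hopeless, which is why the paper turns to the stochastic-programming extensive form instead.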
Journal of Water Resources Planning and Management | 2016
Arpan Seth; Katherine A. Klise; John D. Siirola; Terranna Haxton; Carl D. Laird
In the event of contamination in a water distribution network (WDN), source identification (SI) methods that analyze sensor data can be used to identify the source location(s). Knowledge of the source location and characteristics is important to inform contamination control and cleanup operations. The various SI strategies developed by researchers differ in their underlying assumptions and solution techniques. This manuscript presents a systematic procedure for testing and evaluating SI methods. The performance of these SI methods is affected by various factors, including the size of the WDN model, measurement error, modeling error, the time and number of contaminant injections, and the time and number of measurements. This paper includes test cases that vary these factors and evaluates three SI methods on the basis of accuracy and specificity. The tests are used to review and compare these different SI methods, highlighting their strengths in handling various identification scenarios.
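As a rough illustration of the two evaluation criteria, the helper below scores a hypothetical SI result (a set of candidate source nodes) against the true injection node; the exact accuracy and specificity definitions used in the paper may differ:

```python
def score_si_result(candidates, true_sources, all_nodes):
    """Accuracy: were all true sources identified?  Specificity: fraction of
    non-source nodes correctly excluded from the candidate set.
    (Illustrative definitions only, not necessarily the paper's.)"""
    candidates, true_sources = set(candidates), set(true_sources)
    accuracy = true_sources <= candidates
    negatives = set(all_nodes) - true_sources
    true_negatives = negatives - candidates
    specificity = len(true_negatives) / len(negatives) if negatives else 1.0
    return accuracy, specificity
```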
Computers & Chemical Engineering | 2013
Zev Friedman; Jack Ingalls; John D. Siirola; Jean-Paul Watson
We present a novel software framework for modeling large-scale engineered systems as mathematical optimization problems. A key motivating feature in such systems is their hierarchical, highly structured topology. Existing mathematical optimization modeling environments do not facilitate the natural expression and manipulation of hierarchically structured systems. Rather, the modeler is forced to “flatten” the system description, hiding structure that may be exploited by solvers, and obfuscating the system that the modeling environment is attempting to represent. To correct this deficiency, we propose a Python-based “block-oriented” modeling approach for representing the discrete components within the system. Our approach is an extension of the Pyomo library for specifying mathematical optimization problems. Through the use of a modeling components library, the block-oriented approach facilitates a clean separation of system superstructure from the details of individual components. This approach also naturally lends itself to expressing design and operational decisions as disjunctive expressions over the component blocks. By expressing a mathematical optimization problem in a block-oriented manner, inherent structure (e.g., multiple scenarios) is preserved for potential exploitation by solvers. In particular, we show that block-structured mathematical optimization problems can be straightforwardly manipulated by decomposition-based multi-scenario algorithmic strategies, specifically in the context of the PySP stochastic programming library. We illustrate our block-oriented modeling approach using a case study drawn from the electricity grid operations domain: unit commitment with transmission switching and N − 1 reliability constraints. Finally, we demonstrate that the overhead associated with block-oriented modeling only minimally increases model instantiation times, and need not adversely impact solver behavior.
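A minimal toy sketch of the block-oriented idea, using invented classes rather than Pyomo's actual Block API: component blocks are built once and reused, the superstructure lives at the top level, and "flattening" for a solver is a mechanical tree walk that leaves the hierarchy available for decomposition:

```python
class Block:
    """Toy stand-in for a hierarchical modeling block (inspired by, but not
    the API of, Pyomo's Block component)."""
    def __init__(self, **subblocks):
        self.vars = {}      # name -> variable (placeholders here)
        self.cons = {}      # name -> constraint (placeholders here)
        self.sub = subblocks

def make_generator():
    """A reusable component block from a hypothetical components library."""
    b = Block()
    b.vars["power"] = None
    b.cons["capacity"] = None
    return b

def flatten(block, prefix=""):
    """Yield the fully qualified component names a solver interface sees;
    the block tree itself stays intact for decomposition-based algorithms."""
    for name in list(block.vars) + list(block.cons):
        yield prefix + name
    for bname, sub in block.sub.items():
        yield from flatten(sub, prefix + bname + ".")

# A two-generator system: the superstructure (the balance constraint) is
# cleanly separated from the repeated component blocks.
system = Block(gen1=make_generator(), gen2=make_generator())
system.cons["balance"] = None
names = list(flatten(system))
```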
Computer-aided chemical engineering | 2009
Yu Zhu; Daniel P. Word; John D. Siirola; Carl D. Laird
While large-scale nonlinear programming (NLP) has seen widespread use within the process industries, the desire to solve larger and more complex problems drives continued improvements in NLP solvers. Because of physical hardware limitations, manufacturers have shifted their focus towards multi-core and other modern parallel computing architectures, and we must focus efforts on the development of parallel computing solutions for large-scale nonlinear programming. In this paper we briefly describe some existing and emerging architectures for parallel computing and demonstrate the potential of some of these architectures for parallel solution of nonlinear programming problems. In particular, we show the scalability of an integrated design and control problem using two techniques within a multi-core architecture.
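The subproblem-level parallelism described here follows a simple map pattern. The sketch below is not the authors' method, just an illustration with a stand-in "subproblem" (1-D bisection on a made-up quadratic); for CPU-bound solver calls, a ProcessPoolExecutor would spread the work across cores:

```python
from concurrent.futures import ThreadPoolExecutor

def solve_subproblem(scenario):
    """Stand-in for one independent NLP subproblem (hypothetical): minimize
    f(x) = (x - scenario)^2 by bisection on its derivative f'(x) = 2(x - scenario)."""
    lo, hi = -100.0, 100.0
    for _ in range(60):
        mid = (lo + hi) / 2
        if 2 * (mid - scenario) < 0:   # derivative negative: minimum lies right
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

scenarios = [1.0, 2.5, -3.0, 7.0]
# Independent subproblems map naturally onto a pool of parallel workers.
with ThreadPoolExecutor(max_workers=4) as pool:
    solutions = list(pool.map(solve_subproblem, scenarios))
```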
Mathematical Programming Computation | 2018
Bethany L. Nicholson; John D. Siirola; Jean-Paul Watson; Victor M. Zavala; Lorenz T. Biegler
We describe pyomo.dae, an open source Python-based modeling framework that enables high-level abstract specification of optimization problems with differential and algebraic equations. The pyomo.dae framework is integrated with the Pyomo open source algebraic modeling language, and is available at http://www.pyomo.org. One key feature of pyomo.dae is that it does not restrict users to standard, predefined forms of differential equations, providing a high degree of modeling flexibility and the ability to express constraints that cannot be easily specified in other modeling frameworks. Other key features of pyomo.dae are the ability to specify optimization problems with high-order differential equations and partial differential equations, defined on restricted domain types, and the ability to automatically transform high-level abstract models into finite-dimensional algebraic problems that can be solved with off-the-shelf solvers. Moreover, pyomo.dae users can leverage existing capabilities of Pyomo to embed differential equation models within stochastic and integer programming models and mathematical programs with equilibrium constraint formulations. Collectively, these features enable the exploration of new modeling concepts and discretization schemes, and the benchmarking of state-of-the-art optimization solvers.
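The automatic transformation pyomo.dae performs can be pictured with a hand-rolled backward-Euler discretization of the simple ODE dx/dt = −x, x(0) = 1, turning the differential equation into a chain of algebraic relations on a time grid (an illustration of the concept only, not pyomo.dae code):

```python
def discretize(x0, rate, t_final, n):
    """Backward-Euler discretization of dx/dt = rate * x on [0, t_final]:
    each grid point satisfies (x[k+1] - x[k]) / h == rate * x[k+1],
    i.e. x[k+1] = x[k] / (1 - rate * h)."""
    h = t_final / n
    x = [x0]
    for _ in range(n):
        x.append(x[-1] / (1 - rate * h))
    return x

# 100 finite-difference intervals on [0, 1]; x(1) should approach e^-1.
profile = discretize(x0=1.0, rate=-1.0, t_final=1.0, n=100)
```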
Computer-aided chemical engineering | 2014
Jia Kang; John D. Siirola; Carl D. Laird
This paper presents a nonlinear stochastic programming formulation for a large-scale contingency-constrained optimal power flow problem. Using a rectangular IV formulation to model AC power flow in the transmission network, we construct a nonlinear, multi-scenario optimization formulation where each scenario considers the failure of an individual transmission element. Given the number of potential failures in the network, these problems are very large, yet they need to be solved rapidly. In this paper, we demonstrate that this multi-scenario problem can be solved quickly using a parallel decomposition approach based on nonlinear interior-point methods. Parallel and serial timing results are shown using a test example from Matpower, a MATLAB-based framework for power flow.
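The multi-scenario structure is easy to picture: one scenario per single-element outage. The sketch below only builds the scenario topologies (line and bus names invented); the paper additionally embeds a rectangular IV AC power-flow NLP inside each scenario:

```python
# Hypothetical three-line network: line name -> (from_bus, to_bus).
base_lines = {"L1": ("busA", "busB"),
              "L2": ("busB", "busC"),
              "L3": ("busA", "busC")}

def n_minus_1_scenarios(lines):
    """One scenario per single-element outage: the contingency name paired
    with the surviving network topology."""
    for outage in lines:
        yield outage, {name: ends for name, ends in lines.items()
                       if name != outage}

scenarios = dict(n_minus_1_scenarios(base_lines))
```

The scenario count grows linearly with the number of transmission elements, which is what makes the parallel decomposition across scenarios attractive.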
Computer-aided chemical engineering | 2009
John D. Siirola
Process Systems Engineering (PSE) is built on the application of computational tools to the solution of physical engineering problems. Over the course of its nearly five-decade history, advances in PSE have relied roughly equally on advancements in desktop computing technology and developments of new tools and approaches for representing and solving problems (Westerberg, 2004). Just as desktop computing development over that period focused on increasing the net serial instruction rate, tool development in PSE has emphasized creating faster general-purpose serial algorithms. However, in recent years the increase in net serial instruction rate has slowed dramatically, with processors first reaching an effective upper limit for clock speed and now approaching apparent limits for microarchitecture efficiency. Current trends in desktop processor development suggest that future performance gains will occur primarily through exploitation of parallelism. For PSE to continue to leverage the “free” advancements from desktop computing technology in the future, the PSE toolset will need to embrace the use of parallelization. Unfortunately, “parallelization” is more than just identifying multiple things to do at once. Parallel algorithm design has two fundamental challenges: first, to match the characteristics of the parallelizable problem workload to the capabilities of the hardware platform, and second, to properly balance parallel computation with the overhead of communication and synchronization on that platform. The performance of any parallel algorithm is thus a strong function of how well the characteristics of the problem and algorithm match those of the hardware platform on which it will run. This has led to a proliferation of highly specialized parallel hardware platforms, each designed around specific problems or problem classes.
While every platform has its own unique characteristics, we can group current approaches into six basic classes: symmetric multiprocessing (SMP), networks of workstations (NOW), massively parallel processing (MPP), specialized coprocessors, multi-threaded shared memory, and hybrids that combine components of the first five classes. Perhaps the most familiar of these is the SMP architecture, which forms the bulk of the current desktop and workstation market. These systems have multiple processing units (processors and/or cores) controlled by a single operating system image and sharing a single common shared memory space. While SMP systems provide only a modest level of parallelism (typically 2-16 processing units), the existence of shared memory and full-featured processing units makes them perhaps the most straightforward development platform. A challenge of SMP platforms is the discrepancy between the speed of the processor and the memory system: both latency and overall memory bandwidth limitations can lead to processors idling while waiting for data. Clusters, a generic term for coordinated groups of independent computers (nodes) connected with high-speed networks, provide the opportunity for a radically different level of parallelism, with the largest clusters having over 25,000 nodes and 100,000 processing units. The challenge with clusters is that memory is distributed across independent nodes. Communication and coordination among nodes must be explicitly managed and occurs over a relatively high-latency network interconnect. Efficient use of this architecture requires applications that decompose into pseudo-independent components that run with high computation-to-communication ratios. The level to which systems utilize commodity components distinguishes the two main types of cluster architectures, with NOW nodes running commodity network interconnects and operating systems, and MPP nodes using specialized or proprietary network layers or microkernels.
Specialized coprocessors, including graphics processing units (GPU) and the Cell Broadband Engine (Cell), are gaining popularity as scientific computing platforms. These platforms employ specialized, host-dependent processing units to speed up fine-grained, repetitive processing. Architecturally, they are reminiscent of vector computing, combining very fast access to a small amount of local memory with processing elements implementing either a single-instruction-multiple-data (SIMD) model (GPU) or a pipelined model (Cell). As application developers must explicitly manage both parallelism on the coprocessor and the movement of data to and from the coprocessor memory space, these architectures can be some of the most challenging to program. Finally, multi-threaded shared memory (MTSM) systems represent a fundamental departure from traditional distributed memory systems like NOW and MPP. Instead of a collection of independent nodes and memory spaces, an MTSM system runs a single system image across all nodes, combining all node memory into a single coherent shared memory space. To a developer, the MTSM appears to be a single very large SMP. However, unlike an SMP, which uses caches to reduce the latency of a memory access, the MTSM tolerates latency by using a large number of concurrent threads. While this architecture lends itself to problems that are not readily decomposable, effective utilization of MTSM systems requires applications to run hundreds or even thousands of concurrent threads. The proliferation of specialized parallel computing architectures presents several significant challenges for developers of parallel modeling and optimization applications. Foremost is the challenge of selecting the “appropriate” platform to target when developing the application.
While it is clear that architectural characteristics can significantly affect the performance of an algorithm, relatively few rules or heuristics exist for selecting a platform based solely on application characteristics. A contributing challenge is that different architectures employ fundamentally different programming paradigms, libraries, and tools. Knowledge and experience on one platform does not necessarily translate to other platforms. This also complicates the process of directly comparing platform performance, as applications are rarely portable: software designed for one platform rarely compiles on another without modification, and the modifications may require a redesign of the fundamental parallelization approach. A final challenge is effectively communicating parallel results. While the relatively homogeneous environment of serial desktop computing facilitated extremely terse descriptions of a test platform, often limited to processor make and clock speed, reporting results for parallel architectures must include not only processor information but also, depending on the architecture, the operating system, network interconnect, coprocessor make, model, and interconnect, and node configuration. There are numerous examples of algorithms and applications designed explicitly to leverage specific architectural features of parallel systems. While by no means comprehensive, three current representative efforts are the development of parallel branch-and-bound algorithms, distributed collaborative optimization algorithms, and multithreaded parallel discrete event simulation. PICO, the Parallel Integer and Combinatorial Optimizer (Eckstein, et al., 2001), is a scalable parallel mixed-integer linear optimizer.
Designed explicitly for cluster environments (both NOW and MPP), PICO leverages the synergy between the inherently decomposable branch and bound tree search and the independent nature of the nodes within a cluster by distributing the independent sub-problems for the tree search across the nodes of the cluster. In contrast, agent-based collaborative optimization (Siirola, et al., 2004, 2007) matches traditionally non-decomposable nonlinear programming algorithms to high-latency clusters (e.g. NOWs or Grids) by replicating serial search algorithms intact and unmodified across the independent nodes of the cluster. The system then enforces collaboration through sharing intermediate “solutions” to the common problem. This creates a decomposable artificial meta-algorithm with a high computation to communication ratio that can scale efficiently on large, high latency, low bandwidth cluster environments. For modeling applications, efficiently parallelizing discrete event simulations has presented a longstanding challenge, with several decades of study and literature (Perumalla, 2006). The central challenge in parallelizing discrete event simulations on traditional distributed memory clusters is efficiently synchronizing the simulation time across the processing nodes during a simulation. A promising alternative approach leverages the Cray XMT (formerly called Eldorado; Feo, et al. 2005). The XMT implements an MTSM architecture and provides a single shared memory space across all nodes, greatly simplifying the time synchronization challenge. Further, the fine-grained parallelism provided by the architecture opens new opportunities for additional parallelism beyond simple event parallelization, for example, parallelizing the event queue management. 
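The collaboration mechanism described for agent-based optimization, replicated serial searches that periodically share an incumbent solution, can be sketched with threads and a lock (a toy hill-climber on a made-up quadratic, not the cited system):

```python
import random
import threading

best = {"x": None, "f": float("inf")}   # shared incumbent
lock = threading.Lock()

def f(x):
    """Made-up objective; minimum at x = 3."""
    return (x - 3.0) ** 2

def search_agent(seed, iters=2000, step=0.5):
    """One replicated serial search: random local perturbations, periodically
    restarting from the shared incumbent (the 'collaboration')."""
    rng = random.Random(seed)
    x = rng.uniform(-10, 10)
    for i in range(iters):
        if i % 50 == 0:                  # occasionally adopt the incumbent
            with lock:
                if best["x"] is not None:
                    x = best["x"]
        cand = x + rng.gauss(0, step)
        if f(cand) <= f(x):              # greedy acceptance
            x = cand
            with lock:                   # publish improvements
                if f(x) < best["f"]:
                    best["x"], best["f"] = x, f(x)

agents = [threading.Thread(target=search_agent, args=(s,)) for s in range(4)]
for t in agents:
    t.start()
for t in agents:
    t.join()
```

Because each agent only reads and writes a single shared value, the computation-to-communication ratio stays high, which is the property that lets the real system scale on high-latency clusters.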
While these three examples are a small subset of current parallel algorithm design, they demonstrate the impact that parallel architectures have had and will continue to have on future developments for modeling and optimization in PSE.
Optica | 2017
Michael Gehl; Christopher M. Long; Doug Trotter; Andrew Starbuck; Andrew Pomerene; Jeremy B. Wright; Seth D. Melgaard; John D. Siirola; Anthony L. Lentine; Christopher T. DeRose
We demonstrate the operation of silicon micro-disk modulators at temperatures as low as 3.8 K. We characterize the steady-state and high-frequency performance and examine the impact of doping concentration.
Archive | 2017
William E. Hart; Carl D. Laird; Jean-Paul Watson; David L. Woodruff; Gabriel A. Hackebeil; Bethany L. Nicholson; John D. Siirola
This chapter provides a primer on optimization and mathematical modeling. It does not provide a complete description of these topics. Instead, this chapter provides enough background information to support reading the rest of the book. For more discussion of optimization modeling techniques see, for example, Williams [86]. Implementations of simple examples of models are shown to provide the reader with a quick start to using Pyomo.
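In the spirit of the chapter's quick-start examples, here is a tiny production-planning model of the kind the book builds in Pyomo; because the book's examples rely on Pyomo and a real solver, this sketch instead solves an invented two-product instance by brute-force enumeration so it stays self-contained:

```python
# Maximize profit subject to a labor-hours budget (all data invented):
#   max  3 w + 5 g
#   s.t. 1 w + 2 g <= 14,   w, g nonnegative integers
profit = {"widgets": 3.0, "gadgets": 5.0}
hours = {"widgets": 1.0, "gadgets": 2.0}
HOURS_AVAILABLE = 14

def solve():
    """Enumerate all integer plans within the hours budget and keep the best;
    an algebraic modeling language hands this search to a MIP solver instead."""
    best_plan, best_profit = None, -1.0
    for w in range(HOURS_AVAILABLE + 1):
        for g in range(HOURS_AVAILABLE + 1):
            if hours["widgets"] * w + hours["gadgets"] * g <= HOURS_AVAILABLE:
                p = profit["widgets"] * w + profit["gadgets"] * g
                if p > best_profit:
                    best_plan, best_profit = (w, g), p
    return best_plan, best_profit
```

Widgets yield 3.0 per hour versus 2.5 for gadgets, so the budget goes entirely to widgets: the optimal plan is 14 widgets for a profit of 42.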