Jared M. Chase
Pacific Northwest National Laboratory
Publication
Featured research published by Jared M. Chase.
Journal of Chemical Information and Modeling | 2007
Karen L. Schuchardt; Brett T. Didier; Todd O. Elsethagen; Lisong Sun; Vidhya Gurumoorthi; Jared M. Chase; Jun Li; Theresa L. Windus
Basis sets are some of the most important input data for computational models in the chemistry, materials, biology, and other science domains that utilize computational quantum mechanics methods. Providing a shared, Web-accessible environment where researchers can not only download basis sets in their required format but also browse the data, contribute new basis sets, and ultimately curate and manage the data as a community will facilitate growth of this resource and encourage the sharing of both data and knowledge. We describe the Basis Set Exchange (BSE), a Web portal that provides advanced browsing and download capabilities, facilities for contributing basis set data, and an environment that incorporates tools to foster development and interaction of communities. The BSE leverages and enables continued development of the basis set library originally assembled at the Environmental Molecular Sciences Laboratory.
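A quick illustration of the "download in the required format" capability described above. This sketch assumes the later open-source basis_set_exchange Python package, the programmatic companion to the BSE portal, rather than the 2007 web interface itself; the basis name, elements, and output format are arbitrary examples.

```python
# Minimal sketch, assuming the open-source basis_set_exchange Python
# package (the later programmatic companion to the BSE web portal).
import basis_set_exchange as bse

# Fetch cc-pVDZ for hydrogen and oxygen, formatted as an NWChem basis block.
basis_text = bse.get_basis("cc-pvdz", elements=[1, 8], fmt="nwchem")
print(basis_text[:400])  # preview the formatted output
```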
IEEE Congress on Services | 2009
Jared M. Chase; Ian Gorton; Chandrika Sivaramakrishnan; Justin Almquist; Adam S. Wynne; George Chin; Terence Critchlow
Scientific applications are often structured as workflows that execute a series of interdependent, distributed software modules to analyze large data sets. The order of execution of the tasks in a workflow is commonly controlled by complex scripts, which over time become difficult to maintain and evolve. In this paper, we describe how we have integrated the Kepler scientific workflow platform with the MeDICi Integration Framework, which has been specifically designed to provide a standards-based, lightweight and flexible integration platform. The MeDICi technology provides a scalable, component-based architecture that efficiently handles integration with heterogeneous, distributed software systems. This paper describes the MeDICi Integration Framework and the mechanisms we used to integrate MeDICi components with Kepler workflow actors. We illustrate this solution with a workflow for an atmospheric sciences application. The resulting solution promotes a strong separation of concerns, simplifying the Kepler workflow description and promoting the creation of a reusable collection of components available for other workflow applications in this domain.
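The separation of concerns described above can be pictured as a thin workflow actor that simply hands a data reference to an integration-framework component and collects the result. The Python sketch below is purely illustrative; the class and method names are hypothetical and do not reflect the actual (Java-based) MeDICi or Kepler APIs.

```python
# Hypothetical sketch of the integration pattern: a thin workflow actor
# delegates processing to a separately deployed integration component,
# so the workflow description itself stays simple.
class IntegrationComponent:
    """Stand-in for a remotely deployed processing module."""

    def process(self, dataset_ref: str) -> str:
        # A real component would invoke distributed codes; here we just
        # return a reference to the derived dataset.
        return dataset_ref + ".processed"


class WorkflowActor:
    """Thin actor: forwards a dataset reference and returns the result."""

    def __init__(self, component: IntegrationComponent) -> None:
        self.component = component

    def fire(self, dataset_ref: str) -> str:
        return self.component.process(dataset_ref)


actor = WorkflowActor(IntegrationComponent())
print(actor.fire("met_station_2009_07.nc"))  # met_station_2009_07.nc.processed
```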
Component-Based Software Engineering | 2009
Ian Gorton; Jared M. Chase; Adam S. Wynne; Justin Almquist; Alan R. Chappell
Scientific applications are often structured as workflows that execute a series of distributed software modules to analyze large data sets. Such workflows are typically constructed using general-purpose scripting languages to coordinate the execution of the various modules and to exchange data sets between them. While such scripts provide a cost-effective approach for simple workflows, as the workflow structure becomes complex and evolves, the scripts quickly become complex and difficult to modify. This makes them a major barrier to easily and quickly deploying new algorithms and exploiting new, scalable hardware platforms. In this paper, we describe the MeDICi Workflow technology that is specifically designed to reduce the complexity of workflow application development, and to efficiently handle data intensive workflow applications. MeDICi integrates standard component-based and service-based technologies, and employs an efficient integration mechanism to ensure large data sets can be efficiently processed. We illustrate the use of MeDICi with a climate data processing example that we have built, and describe some of the new features we are creating to further enhance MeDICi Workflow applications.
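One way to read the abstract's efficiency claim is that pipeline stages exchange references to large data sets rather than copying the data through the workflow layer. The following sketch illustrates that idea only; the stage names are hypothetical and this is not the MeDICi API.

```python
# Illustrative sketch (not the MeDICi API): stages pass dataset
# *references* (paths) so the workflow layer never copies the large files.
from pathlib import Path
from typing import Callable, List

Stage = Callable[[Path], Path]


def regrid(src: Path) -> Path:
    # Heavy processing would run on a cluster; only a reference flows back.
    return src.with_suffix(".regridded.nc")


def monthly_means(src: Path) -> Path:
    return src.with_suffix(".monthly.nc")


def run_pipeline(stages: List[Stage], initial: Path) -> Path:
    ref = initial
    for stage in stages:
        ref = stage(ref)  # each stage hands the next one a reference
    return ref


print(run_pipeline([regrid, monthly_means], Path("cmip_tas_raw.nc")))
```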
IEEE Congress on Services | 2008
Jared M. Chase; Karen L. Schuchardt; George Chin; Jeffrey A. Daily; Timothy D. Scheibe
Numerical simulators are frequently used to assess future risks, support remediation and monitoring program decisions, and assist in design of specific remedial actions with respect to groundwater contaminants. Due to the complexity of the subsurface environment and uncertainty in the models, many alternative simulations must be performed, each producing data that is typically postprocessed and analyzed before deciding on the next set of simulations. Though parts of the process are readily amenable to automation through scientific workflow tools, the larger "research workflow" is not supported by current tools. We present a detailed use case for subsurface modeling, describe the use case in terms of workflow structure, briefly summarize a prototype that seeks to facilitate the overall modeling process, and discuss the many challenges for building such a comprehensive environment.
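As a concrete picture of the larger "research workflow" the abstract refers to, the sketch below loops over batches of alternative simulations, post-processes each run, and derives the next batch of cases from the results. All function names and parameters are hypothetical; this is not the prototype described in the paper.

```python
# Hypothetical sketch of the iterative "research workflow": run a batch of
# alternative simulations, post-process each, then pick the next batch.
from typing import Dict, List


def run_simulation(params: Dict[str, float]) -> str:
    """Launch one subsurface simulation and return a path to its output."""
    return f"runs/k_{params['permeability']:.0e}/plume.h5"


def postprocess(output_path: str) -> float:
    """Reduce raw output to a decision metric (placeholder value here)."""
    return len(output_path) % 10 / 10.0


def next_parameter_sets(metrics: List[float]) -> List[Dict[str, float]]:
    """Choose follow-up cases; in practice this step is analyst-guided."""
    return [{"permeability": 1e-12 * (i + 2)} for i, _ in enumerate(metrics[:2])]


batch: List[Dict[str, float]] = [{"permeability": 1e-12}, {"permeability": 5e-12}]
for iteration in range(3):
    metrics = [postprocess(run_simulation(p)) for p in batch]
    batch = next_parameter_sets(metrics)
```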
Journal of Physics: Conference Series | 2007
Karen L. Schuchardt; Gary D. Black; Jared M. Chase; Todd O. Elsethagen; Lisong Sun
Applying subsurface simulation codes to understand heterogeneous flow and transport problems is a complex process potentially involving multiple models, multiple scales, and spanning multiple scientific disciplines. A typical end-to-end process involves many tools, scripts, and data sources usually shared only through informal channels. Additionally, the process contains many sub-processes that are repeated frequently and could be automated and shared. Finally, keeping records of the models, processes, and correlation between inputs and outputs is currently manual, time-consuming, and error-prone. We are developing a software framework that integrates a workflow execution environment, shared data repository, and analysis and visualization tools to support development and use of new hybrid subsurface simulation codes. We are taking advantage of recent advances in scientific process automation using the Kepler system and advances in data services based on content management. Extensibility and flexibility are key underlying design considerations to support the constantly changing set of tools, scripts, and models available. We describe the architecture and components of this system with early examples of applying it to a continuum subsurface model.
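Since the abstract singles out manual record-keeping of the correlation between inputs and outputs, here is a small sketch of automatic provenance capture for one workflow step. The function name, record format, and file names are hypothetical and not part of the framework described.

```python
# Illustrative sketch (not the framework's actual API): record which inputs
# produced which outputs, so the correlation no longer has to be kept by hand.
import hashlib
import json
import time
from pathlib import Path


def record_provenance(tool: str, inputs: list[str], outputs: list[str],
                      log: Path = Path("provenance.jsonl")) -> None:
    entry = {
        "tool": tool,
        "timestamp": time.time(),
        "inputs": {p: hashlib.sha256(Path(p).read_bytes()).hexdigest()
                   for p in inputs if Path(p).exists()},
        "outputs": outputs,
    }
    with log.open("a") as f:
        f.write(json.dumps(entry) + "\n")


# Example: one workflow step that ran a continuum simulation.
record_provenance("continuum-model", ["site_mesh.grd"], ["plume_0001.h5"])
```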
International Conference on Service-Oriented Computing | 2012
Yan Liu; Jared M. Chase; Ian Gorton
Power grids are increasingly incorporating high quality, high throughput sensor devices inside power distribution networks. These devices are driving an unprecedented increase in the volume and rate of available information. The real-time requirements for handling this data are beyond the capacity of conventional power models running in central utilities. Hence, we are exploring distributed power models deployed at the regional scale. The connection of these models for a larger geographic region is supported by a distributed system architecture. This architecture is built in a service-oriented style, whereby distributed power models running on high performance clusters are exposed as services. Each service is semantically annotated and therefore can be discovered through a service catalog and composed into workflows. The overall architecture has been implemented as an integrated workflow environment useful for power researchers to explore newly developed distributed power models.
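The catalog-and-composition idea above can be sketched as services registered with semantic annotations, looked up by capability, and chained in a workflow. The classes, annotation keys, and service names below are hypothetical illustrations, not the platform's actual interfaces.

```python
# Hypothetical sketch: power models exposed as annotated services, looked
# up by capability in a catalog, then invoked as one step of a workflow.
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class ModelService:
    name: str
    annotations: Dict[str, str]          # e.g. {"capability": "state-estimation"}
    run: Callable[[dict], dict]


class ServiceCatalog:
    def __init__(self) -> None:
        self._services: List[ModelService] = []

    def register(self, svc: ModelService) -> None:
        self._services.append(svc)

    def find(self, capability: str) -> ModelService:
        return next(s for s in self._services
                    if s.annotations.get("capability") == capability)


catalog = ServiceCatalog()
catalog.register(ModelService(
    "regional-se", {"capability": "state-estimation"},
    run=lambda data: {"voltages": [1.0] * len(data["measurements"])}))

# Compose a tiny workflow: take sensor data, then run the discovered model.
sensor_data = {"measurements": [0.98, 1.01, 0.99]}
print(catalog.find("state-estimation").run(sensor_data))
```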
Many-Task Computing on Grids and Supercomputers | 2011
Khushbu Agarwal; Jared M. Chase; Karen L. Schuchardt; Timothy D. Scheibe; Bruce J. Palmer; Todd O. Elsethagen
Continuum scale models have been used to study subsurface flow, transport, and reactions for many years. Recently, pore scale models, which operate at scales of individual soil grains, have been developed to more accurately model pore scale phenomena, such as precipitation, that may not be well represented at the continuum scale. However, particle-based models become prohibitively expensive for modeling realistic domains. Instead, we are developing a hybrid model that simulates the full domain at continuum scale and applies the pore model only to areas of high reactivity. The hybrid model uses a dimension reduction approach to formulate the mathematical exchange of information across scales. Since the location, size, and number of pore regions in the model varies, an adaptive Pore Generator is being implemented to define pore regions at each iteration. A fourth code will provide data transformation from the pore scale back to the continuum scale. These components are coupled into a single hybrid model using the Swift workflow system. Our hybrid model workflow simulates a kinetic controlled mixing reaction in which multiple pore-scale simulations occur for every continuum scale time step. Each pore-scale simulation is itself parallel, thus exhibiting multi-level parallelism. Our workflow manages these multiple parallel tasks simultaneously, with the number of tasks changing across iterations. It also supports dynamic allocation of job resources and visualization processing at each iteration. We discuss the design, implementation and challenges associated with building a scalable, Many Parallel Task, hybrid model to run efficiently on thousands to tens of thousands of processors.
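The multi-level parallelism described above (many pore-scale tasks per continuum time step, with the task count changing each iteration) can be pictured with a simple fan-out/gather loop. The sketch below is illustrative only; the actual coupling uses the Swift workflow system, and each pore-scale task is itself a parallel job on its own node allocation.

```python
# Illustrative sketch of the per-iteration fan-out: each continuum step
# spawns a varying number of pore-scale tasks that run concurrently and
# are gathered before the next step.
from concurrent.futures import ProcessPoolExecutor


def continuum_step(step: int) -> list[dict]:
    """Advance the continuum model; return the currently reactive regions."""
    return [{"region": r, "step": step} for r in range(2 + step % 3)]


def pore_scale_simulation(region: dict) -> dict:
    """Stand-in for one parallel pore-scale run."""
    return {**region, "updated_rates": 1.0}


def upscale(results: list[dict]) -> None:
    """Transform pore-scale results back to continuum-scale parameters."""
    pass


if __name__ == "__main__":
    with ProcessPoolExecutor() as pool:
        for step in range(5):
            regions = continuum_step(step)      # task count varies per iteration
            results = list(pool.map(pore_scale_simulation, regions))
            upscale(results)
```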
Journal of Physics: Conference Series, 180(1), Article No. 012065 | 2009
Karen L. Schuchardt; Jared M. Chase; Jeffrey A. Daily; Todd O. Elsethagen; Bruce J. Palmer; Timothy D. Scheibe
The Support Architecture for Large-Scale Subsurface Analysis (SALSSA) provides an extensible framework, sophisticated graphical user interface (GUI), and underlying data management system that simplifies the process of running subsurface models, tracking provenance information, and analyzing the model results. The SALSSA software framework is currently being applied to validating the Smoothed Particle Hydrodynamics (SPH) model. SPH is a three-dimensional model of flow and transport in porous media at the pore scale. Because fluid flow at velocities common in natural porous media occurs at low Reynolds numbers, it is important to verify that the SPH model produces accurate flow solutions in this regime. Validating SPH requires performing a series of simulations and comparing the resulting flow solutions to analytical results or to numerical results obtained with other methods. This validation study has been greatly aided by the application of the SALSSA framework, which provides capabilities to set up, execute, analyze, and administer these SPH simulations.
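The validation step mentioned above amounts to comparing simulated low-Reynolds-number flow solutions with analytical ones. The sketch below compares a placeholder velocity profile against the analytical plane Poiseuille solution; the numbers are made up and the "simulated" array stands in for reading actual SPH output.

```python
# Hypothetical illustration of one validation check: simulated profile vs.
# the analytical plane Poiseuille solution at low Reynolds number.
import numpy as np

h = 1.0e-3            # channel height (m)
mu = 1.0e-3           # dynamic viscosity (Pa s)
dpdx = -10.0          # applied pressure gradient (Pa/m)

y = np.linspace(0.0, h, 21)
u_analytical = (-dpdx / (2.0 * mu)) * y * (h - y)

# Placeholder for the SPH profile; in practice this is read from model output.
u_simulated = u_analytical * (1.0 + 0.01 * np.sin(np.pi * y / h))

rel_error = np.abs(u_simulated - u_analytical).max() / u_analytical.max()
print(f"max relative error: {rel_error:.3%}")
```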
International Journal of High Performance Computing and Networking | 2016
Yan Liu; Jared M. Chase; Ian Gorton; Mark J. Rice; Adam S. Wynne
Distributed power system models aim to combine multi-source sensor device data with power grid monitoring and control data to improve model accuracy. A software platform that coordinates multiple data sources and integrates with the data communication and computation infrastructure is therefore essential. In this paper, we present a service-oriented coordination platform that exposes parallel programs as services and data sources as ports. We model services and ports by extending classes of the standard common information model (CIM) for energy management and distribution management. These services are then coordinated based on their typed ports in the context of domain-specific power system models. We demonstrate this platform with a use case of deploying distributed state estimation on the IEEE 118 bus model. Our contribution is a service-oriented platform that provides power engineers with an integrated environment for composing distributed power system models, disseminating data, and launching high performance computing jobs.
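The typed-port coordination described above can be illustrated with a small sketch in which services expose typed input and output ports and may be wired together only when the types match. The classes, port types, and service names below are illustrative stand-ins, not the extended CIM classes used by the platform.

```python
# Hypothetical sketch of typed-port composition between model services.
from dataclasses import dataclass


@dataclass(frozen=True)
class Port:
    name: str
    data_type: str        # e.g. "PMU-measurements", "bus-voltages"


@dataclass
class Service:
    name: str
    inputs: tuple[Port, ...]
    outputs: tuple[Port, ...]


def can_connect(producer: Service, consumer: Service) -> bool:
    """A producer may feed a consumer only if every input type is produced."""
    produced = {p.data_type for p in producer.outputs}
    return all(p.data_type in produced for p in consumer.inputs)


pmu_feed = Service("pmu-feed", inputs=(),
                   outputs=(Port("out", "PMU-measurements"),))
state_estimator = Service("distributed-se",
                          inputs=(Port("meas", "PMU-measurements"),),
                          outputs=(Port("state", "bus-voltages"),))

print(can_connect(pmu_feed, state_estimator))   # True: the port types line up
```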
Archive | 2009
Adam S. Wynne; Ian Gorton; Jared M. Chase; Eric G. Stephan
MeDICi (Middleware for Data Intensive Computing) is a platform for developing high performance, distributed streaming analytic and scientific applications. Developed at Pacific Northwest National Laboratory (PNNL), MeDICi has been released under an open source license and is based on enterprise-proven middleware technologies, including a widely used Enterprise Service Bus (ESB), the standard Business Process Execution Language (BPEL), and open source message brokers. Wherever possible, we have built on existing open source, standards-based systems and integrated them into a coherent whole by creating simplified graphical programming tools, such as a Workflow Designer, and an easy-to-use, well-documented integration API. This software development approach allows us to avoid re-creating complex service integration and orchestration systems, to reap the benefits of continual improvements to the technology base, and to focus on creating tools and APIs that support building reusable, component-based applications and workflows. These aspects have facilitated rapid adoption of the platform within PNNL for demonstration and operational applications. MeDICi has been used for a wide range of integration projects, including two sensor integration applications described later in this paper. The remainder of this white paper is organized as follows: Section 2 provides a high-level description of the MeDICi architecture. Section 3 highlights the open aspects of the API and tool development. Section 4 addresses system readiness by presenting relevant demonstrations and deployments. Finally, documentation and licensing details are provided in Section 5.