Adam S. Wynne
Pacific Northwest National Laboratory
Publications
Featured research published by Adam S. Wynne.
Visualization for Computer Security | 2010
Daniel M. Best; Shawn J. Bohn; Douglas V. Love; Adam S. Wynne; William A. Pike
Plentiful, complex, and dynamic data make understanding the state of an enterprise network difficult. Although visualization can help analysts understand baseline behaviors in network traffic and identify off-normal events, visual analysis systems often do not scale well to operational data volumes (in the hundreds of millions to billions of transactions per day) nor to analysis of emergent trends in real-time data. We present a system that combines multiple, complementary visualization techniques coupled with in-stream analytics, behavioral modeling of network actors, and a high-throughput processing platform called MeDICi. This system provides situational understanding of real-time network activity to help analysts take proactive response steps. We have developed these techniques using requirements gathered from the government users for whom the tools are being developed. By linking multiple visualization tools to a streaming analytic pipeline, and designing each tool to support a particular kind of analysis (from high-level awareness to detailed investigation), analysts can understand the behavior of a network across multiple levels of abstraction.
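As a rough illustration of the in-stream behavioral baselining the abstract describes, here is a minimal Python sketch. The class and function names (Baseline, update, is_anomalous, process) are hypothetical; the paper's actual analytics and MeDICi pipeline are not reproduced here.

```python
# Illustrative sketch only: keep a running mean/variance per network actor
# and flag transactions that deviate sharply from that actor's baseline.
from collections import defaultdict
import math

class Baseline:
    """Exponentially weighted mean/variance for one network actor."""
    def __init__(self, alpha=0.05):
        self.alpha = alpha
        self.mean = None
        self.var = 0.0

    def update(self, x):
        if self.mean is None:
            self.mean = x
            return
        diff = x - self.mean
        self.mean += self.alpha * diff
        self.var = (1 - self.alpha) * (self.var + self.alpha * diff * diff)

    def is_anomalous(self, x, threshold=3.0):
        if self.mean is None or self.var == 0.0:
            return False
        return abs(x - self.mean) / math.sqrt(self.var) > threshold

baselines = defaultdict(Baseline)

def process(event):
    """Consume one (actor, bytes) record from the stream."""
    actor, volume = event
    b = baselines[actor]
    flagged = b.is_anomalous(volume)
    b.update(volume)
    return flagged
```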
Working IEEE/IFIP Conference on Software Architecture | 2008
Ian Gorton; Adam S. Wynne; Justin Almquist; Jack Chatterton
Building high performance analytical applications for data streams generated from sensors is a challenging software engineering problem. Such applications typically comprise a complex pipeline of processing components that capture, transform and analyze the incoming data stream. In addition, applications must provide high throughput, be scalable and easily modifiable so that new analytical components can be added with minimum effort. In this paper we describe the MeDICi Integration Framework (MIF), which is a middleware platform we have created to address these challenges. The MIF extends an open source messaging platform with a component-based API for integrating components into analytical pipelines. We describe the features and capabilities of the MIF, and show how it has been used to build a production analytical application for detecting cyber security attacks. The application was composed from multiple independently developed components using several different programming languages. The resulting application was able to process network sensor traffic in real time and provide insightful feedback to network analysts as soon as potential attacks were recognized.
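The abstract does not give the MIF API, so the following Python sketch is purely illustrative of the component-based pipeline style it describes; Component, Pipeline, and the sample stages are all invented names.

```python
# Hypothetical sketch of a component-based pipeline API: each component
# implements one processing step, and a pipeline chains them so that each
# output feeds the next component's input.
class Component:
    def process(self, data):
        raise NotImplementedError

class Capture(Component):
    def process(self, data):
        return data.strip()

class Transform(Component):
    def process(self, data):
        return data.upper()

class Pipeline:
    """Runs components in order over a single data item."""
    def __init__(self, *components):
        self.components = components

    def run(self, data):
        for c in self.components:
            data = c.process(data)
        return data

pipeline = Pipeline(Capture(), Transform())
print(pipeline.run("  sensor reading  "))  # -> "SENSOR READING"
```

Adding a new analytical step then means writing one more Component subclass and inserting it into the chain, which is the modifiability property the abstract emphasizes.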
IEEE Congress on Services | 2009
Jared M. Chase; Ian Gorton; Chandrika Sivaramakrishnan; Justin Almquist; Adam S. Wynne; George Chin; Terence Critchlow
Scientific applications are often structured as workflows that execute a series of interdependent, distributed software modules to analyze large data sets. The order of execution of the tasks in a workflow is commonly controlled by complex scripts, which over time become difficult to maintain and evolve. In this paper, we describe how we have integrated the Kepler scientific workflow platform with the MeDICi Integration Framework, which has been specifically designed to provide a standards-based, lightweight and flexible integration platform. The MeDICi technology provides a scalable, component-based architecture that efficiently handles integration with heterogeneous, distributed software systems. This paper describes the MeDICi Integration Framework and the mechanisms we used to integrate MeDICi components with Kepler workflow actors. We illustrate this solution with a workflow for an atmospheric sciences application. The resulting solution promotes a strong separation of concerns, simplifying the Kepler workflow description and promoting the creation of a reusable collection of components available for other workflow applications in this domain.
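As a sketch of the integration pattern described (a workflow actor delegating to an integration-framework component), the hypothetical Python fragment below invents both classes; Kepler's actual actor interface and MeDICi's component API are not shown in the abstract.

```python
# Hypothetical sketch of the separation of concerns the abstract describes:
# the actor only moves data between workflow ports, while all computation
# lives in the wrapped component.
class ProcessingComponent:
    """Stand-in for a MeDICi component that does the actual work."""
    def process(self, inputs):
        return {"total": sum(inputs["values"])}

class ComponentActor:
    """Stand-in for a Kepler actor that delegates its fire step."""
    def __init__(self, component):
        self.component = component

    def fire(self, port_data):
        return self.component.process(port_data)

actor = ComponentActor(ProcessingComponent())
print(actor.fire({"values": [1, 2, 3]}))  # -> {'total': 6}
```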
Component-Based Software Engineering | 2009
Ian Gorton; Jared M. Chase; Adam S. Wynne; Justin Almquist; Alan R. Chappell
Scientific applications are often structured as workflows that execute a series of distributed software modules to analyze large data sets. Such workflows are typically constructed using general-purpose scripting languages to coordinate the execution of the various modules and to exchange data sets between them. While such scripts provide a cost-effective approach for simple workflows, as the workflow structure becomes complex and evolves, the scripts quickly become complex and difficult to modify. This makes them a major barrier to easily and quickly deploying new algorithms and exploiting new, scalable hardware platforms. In this paper, we describe the MeDICi Workflow technology that is specifically designed to reduce the complexity of workflow application development, and to efficiently handle data intensive workflow applications. MeDICi integrates standard component-based and service-based technologies, and employs an efficient integration mechanism to ensure large data sets can be efficiently processed. We illustrate the use of MeDICi with a climate data processing example that we have built, and describe some of the new features we are creating to further enhance MeDICi Workflow applications.
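One plausible reading of the "efficient integration mechanism" for large data sets is passing references between steps rather than the data itself; the sketch below illustrates only that idea, with invented function names, and does not reflect MeDICi Workflow's actual API.

```python
# Illustrative only: each step receives and returns a file path, so large
# data sets stay on disk and only references flow through the workflow.
import tempfile

def regrid(in_path):
    """Read the input file, write a derived file, return the new path."""
    out_path = in_path + ".regridded"
    with open(in_path) as src, open(out_path, "w") as dst:
        for line in src:
            dst.write(line.lower())  # placeholder transformation
    return out_path

def run_workflow(path, steps):
    for step in steps:
        path = step(path)
    return path

src = tempfile.NamedTemporaryFile(mode="w", suffix=".dat", delete=False)
src.write("STATION-7 273.1\nSTATION-7 274.8\n")
src.close()
print(run_workflow(src.name, [regrid]))  # prints the derived file's path
```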
Intelligence and Security Informatics | 2013
Nathan A. Baker; Jonathan L. Barr; George T. Bonheyo; Cliff Joslyn; Kannan Krishnaswami; Mark E. Oxley; Rich Quadrel; Landon H. Sego; Mark F. Tardiff; Adam S. Wynne
In its most general form, a signature is a unique or distinguishing measurement, pattern, or collection of data that identifies a phenomenon (object, action, or behavior) of interest. The discovery of signatures is an important aspect of a wide range of disciplines from basic science to national security for the rapid and efficient detection and/or prediction of phenomena. Current practice in signature discovery is typically accomplished by asking domain experts to characterize and/or model individual phenomena to identify what might compose a useful signature. What is lacking is an approach that can be applied across a broad spectrum of domains and information sources to efficiently and robustly construct candidate signatures, validate their reliability, measure their quality, and overcome the challenge of detection, all in the face of dynamic conditions, measurement obfuscation, and noisy data environments. Our research has focused on the identification of common elements of signature discovery across application domains and the synthesis of those elements into a systematic process for more robust and efficient signature development. In this way, a systematic signature discovery process lays the groundwork for applying knowledge obtained from signatures to a particular domain or problem area and, more generally, to problems outside that domain. This paper presents the initial results of this research by discussing a mathematical framework for representing signatures and placing that framework in the context of a systematic signature discovery process. Additionally, the basic steps of this process are described with details about the methods available to support the different stages of signature discovery, development, and deployment.
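As a toy rendering of the framework's core notions, the sketch below treats a candidate signature as a function from measurements to a label and its quality as agreement with labeled data. The paper's actual mathematical framework is richer, and every name here is illustrative.

```python
# Illustrative sketch only: a candidate signature maps a measurement to a
# yes/no decision, and a quality measure scores it against labeled data
# (plain accuracy here; the paper discusses richer quality measures).
def candidate_signature(measurement, threshold=0.7):
    """Hypothetical signature: flag the phenomenon when a feature exceeds
    a threshold."""
    return measurement["feature"] > threshold

def quality(signature, labeled_data):
    """Fraction of labeled observations the signature classifies correctly."""
    correct = sum(signature(m) == label for m, label in labeled_data)
    return correct / len(labeled_data)

data = [({"feature": 0.9}, True), ({"feature": 0.2}, False),
        ({"feature": 0.8}, True), ({"feature": 0.6}, False)]
print(quality(candidate_signature, data))  # -> 1.0
```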
Proceedings of the 2012 Workshop on Domain-Specific Modeling | 2012
Ferosh Jacob; Jeff Gray; Adam S. Wynne; Yan Liu; Nathan A. Baker
Domain-agnostic signature discovery entails study across multiple scientific disciplines. The cross-disciplinary nature and breadth of this work requires that existing executable applications be integrated with new capabilities into workflows, representing a wide range of user tasks. An algorithm may be written in multiple programming languages for various hardware platforms, and so workflow composition requires integrating executables from any number of remote hosts. This raises the engineering issue of how to generate web service wrappers for these heterogeneous executables and compose them into a scientific workflow environment (e.g., Taverna). In this position paper, we summarize our work on two simple Domain-Specific Languages (DSLs) that automate these processes. Our Service Description Language (SDL) describes key elements of a signature discovery service and automatically generates its implementation code. The Workflow Description Language (WDL) describes the pipeline of services and generates deployable artifacts for the Taverna workflow management system. We demonstrate our approach with a real-world workflow composed of services wrapping remote executables.
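The SDL and WDL syntax is not given in the abstract, so the hypothetical sketch below compresses the idea to its core: a declarative service description from which a callable wrapper around an executable is generated. The description fields, generate_wrapper, and the echo placeholder are all assumptions.

```python
# Hypothetical sketch: "code generation" reduced to returning a closure
# over a declarative service description.
import subprocess

service_description = {              # stand-in for an SDL document
    "name": "align_sequences",
    "executable": "/usr/bin/echo",   # placeholder for a real analysis tool
    "args": ["--input"],
}

def generate_wrapper(desc):
    """Turn a service description into a callable wrapper."""
    def wrapper(input_path):
        cmd = [desc["executable"], *desc["args"], input_path]
        return subprocess.run(cmd, capture_output=True, text=True).stdout
    wrapper.__name__ = desc["name"]
    return wrapper

align_sequences = generate_wrapper(service_description)
print(align_sequences("reads.fasta"))  # -> "--input reads.fasta"
```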
Computing in Science and Engineering | 2014
Ferosh Jacob; Adam S. Wynne; Yan Liu; Jeff Gray
Domain-agnostic signature discovery supports scientific investigation across domains through algorithm reuse. A new software tool defines two simple domain-specific languages that automate the processes supporting the reuse of existing algorithms in different workflow scenarios. The tool is demonstrated with a signature discovery workflow composed of services that wrap original scripts running high-performance computing tasks.
Information Reuse and Integration | 2010
Arzu Gosney; Christopher S. Oehmen; Adam S. Wynne; Justin Almquist
Large computing systems including clusters, clouds, and grids, provide high-performance capabilities that can be utilized for scientific applications. As the ubiquity of these systems increases and the scope of analysis performed on them expands, there is a growing need for applications that do not require users to learn the details of high-performance computing, and are flexible and adaptive to accommodate the best time-to-solution. In this paper we introduce a new adaptive capability for the MeDICi middleware and describe the applicability of this design to a scientific workflow application for biology. This adaptive framework provides a programming model for implementing a workflow using high-performance systems and enables the compute capabilities at one site to automatically analyze data being generated at another site. This adaptive design improves overall time-to-solution by moving the data analysis task to the most appropriate resource dynamically, automatically reacting to failures and load fluctuations.
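A minimal sketch of the adaptive dispatch idea, assuming a toy cost model: jobs go to the site with the best expected time-to-solution, with automatic failover. The site records, cost formula, and all names are illustrative, not the MeDICi API.

```python
# Hypothetical sketch: choose a compute site by predicted time-to-solution
# and fall back to the next-best site on failure.
def expected_time(site, job_size):
    # Toy model: queue wait grows with load, compute time with job size.
    return site["load"] * site["wait_per_job"] + job_size / site["speed"]

def submit(job_size, sites):
    """Try sites in order of predicted time-to-solution; fail over on error."""
    for site in sorted(sites, key=lambda s: expected_time(s, job_size)):
        try:
            return site["run"](job_size)
        except RuntimeError:
            continue  # site failed or is overloaded: try the next one
    raise RuntimeError("no site could run the job")

def broken_site(n):
    raise RuntimeError("site down")

def healthy_site(n):
    return f"ran {n} units"

sites = [
    {"load": 8, "wait_per_job": 2.0, "speed": 10.0, "run": broken_site},
    {"load": 1, "wait_per_job": 2.0, "speed": 2.0, "run": healthy_site},
]
print(submit(100, sites))  # fastest site is down, so the job fails over
```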
International Conference on Service-Oriented Computing | 2010
Ian Gorton; Adam S. Wynne; Yan Liu
The pipeline software architecture pattern is commonly used in many application domains to structure a software system. A pipeline comprises a sequence of processing steps that progressively transform data to some desired outputs. As pipeline-based systems are required to handle increasingly large volumes of data and provide high throughput services, simple scripting-based technologies that have traditionally been used for constructing pipelines do not scale. In this paper we describe the MeDICi Integration Framework (MIF), which is specifically designed for building flexible, efficient and scalable pipelines that exploit distributed services as elements of the pipeline. We explain the core runtime and development infrastructures that MIF provides, and demonstrate how MIF has been used in two complex applications to improve performance and modifiability.
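To illustrate the point that distinguishes this from plain scripting pipelines (pipeline elements that are remote services rather than local code), here is a hypothetical Python fragment; the URL, transport, and function names are invented, and MIF's actual mechanisms are not described in the abstract.

```python
# Hypothetical fragment: the URL is a placeholder and urllib stands in for
# whatever transport MIF actually uses. One pipeline element is local code,
# the other a call to a distributed service.
import json
from urllib import request

def local_step(data):
    data["normalized"] = data["raw"].strip().lower()
    return data

def remote_step(data, url="http://analysis.example.org/score"):
    body = json.dumps(data).encode()
    with request.urlopen(request.Request(url, data=body)) as resp:
        return json.load(resp)

def run_pipeline(data, steps):
    for step in steps:
        data = step(data)
    return data

# run_pipeline({"raw": "  Sensor-17 ALERT  "}, [local_step, remote_step])
# would post the normalized record to the (placeholder) scoring service.
```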
Sixth International IEEE Conference on Commercial-off-the-Shelf (COTS)-Based Software Systems (ICCBSS'07) | 2007
David A. Thurman; Justin Almquist; Ian Gorton; Adam S. Wynne; Jack Chatterton
Architectures and technologies for enterprise application integration are relatively mature, resulting in a range of standards-based and proprietary COTS middleware technologies. However, in the domain of complex analytical applications, integration architectures are not so well understood. Analytical applications such as those used in scientific discovery and financial and intelligence analysis exert unique demands on their underlying architectures. These demands make existing COTS integration middleware less suitable for use in enterprise analytics environments. In this paper we describe SIFT (Scalable Information Fusion and Triage), an application architecture designed for integrating the various components that comprise enterprise analytics applications. SIFT exploits a common pattern for composing analytical components, and extends an existing messaging platform with dynamic configuration mechanisms and scaling capabilities. We demonstrate the use of SIFT to create a decision support platform for quality control based on large volumes of incoming delivery data. The strengths and weaknesses of the SIFT solution are discussed, and we conclude by describing where further work is required to create a complete solution applicable to a wide range of analytical application domains.
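SIFT's dynamic configuration and scaling mechanisms are not specified in the abstract; as a hedged sketch of one standard realization, the fragment below grows a pool of queue consumers when backlog builds. The thresholds and names are assumptions.

```python
# Hypothetical sketch: a watcher adds consumer threads when the queue
# backlog outgrows the current pool; thresholds are illustrative.
import queue
import threading

work = queue.Queue()
workers = []

def consume():
    while True:
        item = work.get()
        # ... analyze the delivery record ...
        work.task_done()

def scale(max_workers=8, backlog_per_worker=100):
    """Call periodically: add a consumer while the backlog is too deep."""
    while (len(workers) < max_workers
           and work.qsize() > backlog_per_worker * max(len(workers), 1)):
        t = threading.Thread(target=consume, daemon=True)
        t.start()
        workers.append(t)
```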