Jereme N. Haack
Pacific Northwest National Laboratory
Publication
Featured research published by Jereme N. Haack.
International Conference on Connected Vehicles and Expo | 2013
Jereme N. Haack; Bora A. Akyol; Nathan D. Tenney; Brandon J. Carpenter; Richard M. Pratt; Thomas E. Carroll
The VOLTTRON™ platform provides a secure environment for the deployment of intelligent applications in the Smart Grid. The platform's design is based on the needs of control applications running on small form factor devices, namely security and resource guarantees. Services such as resource discovery, secure agent mobility, and interaction with smart and legacy devices are provided by the platform to ease the development of control applications and accelerate their deployment. VOLTTRON has been demonstrated in several different domains that influenced and enhanced its capabilities. This paper discusses the features of VOLTTRON and highlights its use in coordinating electric vehicle charging with home energy usage.
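The coordination the abstract describes, throttling electric vehicle charging against household load, can be sketched as a toy control rule. This is plain Python, not the VOLTTRON agent API; the service limit and charger capacity below are invented for illustration:

```python
# Toy control rule for coordinating EV charging with home energy usage.
# Illustrative only: this is not the VOLTTRON agent API, and the
# service limit and charger capacity are invented numbers.

def ev_charge_rate(home_load_kw: float, service_limit_kw: float,
                   max_charge_kw: float = 7.2) -> float:
    """Charge as fast as possible without exceeding the household service limit."""
    headroom = service_limit_kw - home_load_kw
    return max(0.0, min(max_charge_kw, headroom))

# As appliances turn on, the charger backs off; the rate never goes negative.
print(ev_charge_rate(home_load_kw=3.0, service_limit_kw=10.0))  # 7.0
print(ev_charge_rate(home_load_kw=9.0, service_limit_kw=10.0))  # 1.0
```

A platform agent would periodically re-evaluate such a rule as metered load changes, which is the kind of small-device control loop the paper targets.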
International Conference on Software Engineering | 2004
Ian Gorton; Jereme N. Haack
Understanding an application's functional and non-functional requirements is normally seen as essential for developing a robust product suited to client needs. This paper describes our experiences in a project that, by necessity, commenced well before concrete client requirements could be known. After a first version of the application was successfully released, emerging requirements forced an evolution of the application architecture. The key reasons for this are explained, along with the architectural strategies and software engineering practices that were adopted. The resulting application architecture is highly flexible, modifiable, and scalable, and therefore should provide a solid foundation for the duration of the application's lifetime.
International Conference on Information Technology: New Generations | 2011
Jereme N. Haack; Glenn A. Fink; Wendy M. Maiden; A. David McKinnon; Steven J. Templeton; Errin W. Fulp
We describe a swarming-agent-based, mixed-initiative approach to infrastructure defense in which teams of humans and software agents defend cooperating organizations in tandem by sharing insights and solutions without violating proprietary boundaries. The system places human administrators at the appropriate level: they provide system guidance while lower-level agents carry out tasks that humans cannot perform quickly enough to mitigate today's security threats. Cooperative Infrastructure Defense, or CID, uses our ant-based approach to enable dialogue between humans and agents, fostering a collaborative problem-solving environment, increasing human situational awareness, and exerting influence through visualization and shared control. We discuss theoretical implementation characteristics along with results from recent proof-of-concept implementations.
International Workshop on Software Engineering for Large-Scale Multi-agent Systems | 2003
Ian Gorton; Jereme N. Haack; David McGee; Andrew J. Cowell; Olga Kuchar; Judi Thomson
Research and development organizations are constantly evaluating new technologies in order to implement the next generation of advanced applications. At Pacific Northwest National Laboratory, agent technologies are perceived as an approach that can provide a competitive advantage in the construction of highly sophisticated software systems in a range of application areas. To determine the sophistication, utility, performance, and other critical aspects of such systems, a project was instigated to evaluate three candidate agent toolkits. This paper reports on the outcomes of this evaluation, the knowledge accumulated from carrying out this project, and provides insights into the capabilities of the agent technologies evaluated.
Enterprise Distributed Object Computing | 2003
Ian Gorton; Justin Almquist; Nick Cramer; Jereme N. Haack; Mark Hoza
Large-scale information processing environments must rapidly search through massive streams of raw data to locate useful information. These data streams contain textual and numeric data items, and may be highly structured or mostly freeform text. This project aims to create a high performance and scalable engine for locating relevant content in data streams. Based on the J2EE Java Messaging Service (JMS), the content-based messaging (CBM) engine provides highly efficient message formatting and filtering. This paper describes the design of the CBM engine, and presents empirical results that compare the performance with a standard JMS to demonstrate the performance improvements that are achieved.
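The content-based filtering the CBM engine performs can be sketched in a few lines. This is an illustrative Python analogue of publish/subscribe with per-subscriber predicates, not the JMS-based engine itself; the message fields and predicates are invented:

```python
# Minimal sketch of content-based message filtering (illustrative only;
# the real CBM engine is built on the J2EE Java Messaging Service).

from typing import Callable, Dict, List, Tuple

Message = Dict[str, object]

class ContentFilter:
    """Routes each published message to subscribers whose predicates match it."""

    def __init__(self) -> None:
        self._subs: List[Tuple[Callable[[Message], bool], List[Message]]] = []

    def subscribe(self, predicate: Callable[[Message], bool]) -> List[Message]:
        inbox: List[Message] = []
        self._subs.append((predicate, inbox))
        return inbox

    def publish(self, msg: Message) -> None:
        for predicate, inbox in self._subs:
            if predicate(msg):
                inbox.append(msg)

bus = ContentFilter()
alerts = bus.subscribe(lambda m: m.get("priority", 0) > 5)
bus.publish({"priority": 9, "text": "urgent"})
bus.publish({"priority": 1, "text": "routine"})
print(len(alerts))  # 1: only the high-priority message matched
```

In JMS the same idea is expressed with SQL-like message selectors evaluated by the broker; the engine's contribution is making that formatting and filtering fast at scale.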
Information Visualization | 2006
Andrew J. Cowell; Michelle L. Gregory; Joseph R. Bruce; Jereme N. Haack; Douglas V. Love; Stuart J. Rose; Adrienne H. Andrew
In this paper, we discuss the efforts underway at the Pacific Northwest National Laboratory in understanding the dynamics of multi-party discourse across a number of communication modalities, such as email, instant messaging traffic and meeting data. Two prototype systems are discussed. The Conversation Analysis Tool (ChAT) is an experimental test-bed for the development of computational linguistic components and enables users to easily identify topics or persons of interest within multi-party conversations, including who talked to whom, when, the entities that were discussed, etc. The Retrospective Analysis of Communication Events (RACE) prototype, leveraging many of the ChAT components, is an application built specifically for knowledge workers and focuses on merging different types of communication data so that the underlying message can be discovered in an efficient, timely fashion.
International Midwest Symposium on Circuits and Systems | 2011
Michael B. Crouse; Jacob L. White; Errin W. Fulp; Kenneth S. Berenhaut; Glenn A. Fink; Jereme N. Haack
The difficulty of securing computer infrastructures increases as they grow in size and complexity. Network-based security solutions such as IDSs and firewalls cannot scale because of the exponentially increasing computational cost of detecting the rapidly growing number of threat signatures. Host-based solutions such as virus scanners and IDSs suffer similar issues, which are compounded when enterprises try to monitor them in a centralized manner. Swarm-based autonomous agent systems, such as digital ants and artificial immune systems, can provide a scalable security solution for large network environments. The digital ants approach offers a biologically inspired design in which each ant in the virtual colony can detect atoms of evidence that may help identify a possible threat. By assembling atomic evidence from different ant types, the colony may detect the threat. This decentralized approach can require, on average, fewer computational resources than traditional centralized solutions; however, there are limits to its scalability. This paper describes how dividing a large infrastructure into smaller, managed enclaves allows the digital ants framework to operate effectively in larger environments. Experimental results show that using smaller enclaves allows for a more consistent distribution of agents and results in faster response times.
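The enclave partitioning described above can be sketched as follows. This is a toy Python illustration, not the digital ants framework; the host count, enclave size, and ant density are invented:

```python
# Toy sketch of the enclave idea: partition hosts into small enclaves and
# keep each ant population local, so agents stay evenly distributed
# instead of clustering anywhere network-wide. Illustrative parameters only.

import random

def make_enclaves(hosts, enclave_size):
    """Partition the host list into enclaves of at most enclave_size hosts."""
    return [hosts[i:i + enclave_size] for i in range(0, len(hosts), enclave_size)]

def place_ants(enclaves, ants_per_host=2, seed=0):
    """Scatter ants uniformly within each enclave (never across enclaves)."""
    rng = random.Random(seed)
    placement = {}
    for enclave in enclaves:
        for _ in range(ants_per_host * len(enclave)):
            host = rng.choice(enclave)  # an ant only roams its own enclave
            placement[host] = placement.get(host, 0) + 1
    return placement

hosts = [f"host{i}" for i in range(12)]
enclaves = make_enclaves(hosts, enclave_size=4)
print(len(enclaves))  # 3 enclaves of 4 hosts each
```

Because each ant's roaming is confined to its enclave, the worst-case imbalance is bounded by the enclave size rather than by the whole network, which is the intuition behind the paper's distribution and response-time results.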
Visual Analytics Science and Technology | 2009
Mark A. Whiting; Chris North; Alex Endert; Jean Scholtz; Jereme N. Haack; Caroline F. Varley; James J. Thomas
The IEEE Visual Analytics Science and Technology (VAST) Symposium has held a contest each year since its inception in 2006. These events are designed to provide visual analytics researchers and developers with analytic challenges similar to those encountered by professional information analysts. The VAST contest has had an extended life outside the symposium, however, as its materials are used in universities and other educational settings, either to help teachers of visual analytics-related classes or for student projects. We describe how we develop VAST contest datasets so that the resulting products can be used in different settings, and we review specific examples of the adoption of VAST contest materials in the classroom. The examples are drawn from graduate and undergraduate courses at Virginia Tech and from the Visual Analytics “Summer Camp” run by the National Visualization and Analytics Center in 2008. We finish with a brief discussion of evaluation metrics for education.
ASME 2011 5th International Conference on Energy Sustainability, Parts A, B, and C | 2011
Bora A. Akyol; Jereme N. Haack; Cody W. Tews; Brandon J. Carpenter; Anand V. Kulkarni; Philip A. Craig
The number of sensors connected to the electric power system is expected to grow by several orders of magnitude by 2020. However, the information networks which will transmit and analyze the resulting data are ill-equipped to handle the resulting volume with reliable real-time delivery. Without the ability to manage and use this data, deploying sensors such as phasor measurement units in the transmission system and smart meters in the distribution system will not result in the desired improvements in the power grid. The ability to exploit the massive data being generated by new sensors would allow for more efficient flow of power and increased survivability of the grid. Additionally, the power systems of today are not capable of managing two-way power flow to accommodate distributed generation capabilities due to concerns about system stability and lack of system flexibility. The research that we are performing creates a framework to add “intelligence” to the sensors and actuators being used today in the electric power system. Sensors that use our framework will be capable of sharing information through the various layers of the electric power system to enable two-way information flow to help facilitate integration of distributed resources. Several techniques are considered including use of peer-to-peer communication as well as distributed agents. Specifically, we will have software agents operating on systems with differing levels of computing power. The agents will cooperate to bring computation closer to the data. The types of computation considered are control decisions, data analysis, and demand/response. When paired with distributed autonomous controllers, the sensors form the basis of an information system that supports deployment of both micro-grids and islanding. 
Our efforts in developing the next-generation information infrastructure for sensors in the power grid form the basis of a broader strategy that enables better integration of distributed generation, distribution automation systems, and decentralized control (micro-grids).
Archive | 2015
Bora A. Akyol; Jereme N. Haack; Brandon J. Carpenter; Srinivas Katipamula; Robert G. Lutes; George Hernandez
Transaction-based Building Controls (TBC) offer a control systems platform that provides an agent execution environment meeting the growing requirements for security, resource utilization, and reliability. This report outlines the requirements for a platform that meets these needs and describes an exemplary implementation.