

Publication


Featured research published by Brett J. Borghetti.


IEEE Transactions on Systems, Man, and Cybernetics | 2012

A Review of Anomaly Detection in Automated Surveillance

Angela A. Sodemann; Matthew P. Ross; Brett J. Borghetti

As surveillance becomes ubiquitous, the amount of data to be processed grows along with the demand for manpower to interpret the data. A key goal of surveillance is to detect behaviors that can be considered anomalous. As a result, an extensive body of research in automated surveillance has been developed, often with the goal of automatic detection of anomalies. Research into anomaly detection in automated surveillance covers a wide range of domains, employing a vast array of techniques. This review presents an overview of recent research approaches on the topic of anomaly detection in automated surveillance. The reviewed studies are analyzed across five aspects: surveillance target, anomaly definitions and assumptions, types of sensors used and the feature extraction processes, learning methods, and modeling algorithms.


IEEE Communications Surveys and Tutorials | 2015

A Survey of Distance and Similarity Measures Used Within Network Intrusion Anomaly Detection

David J. Weller-Fahy; Brett J. Borghetti; Angela A. Sodemann

Anomaly detection (AD) use within the network intrusion detection field of research, or network intrusion AD (NIAD), is dependent on the proper use of similarity and distance measures, but the measures used are often not documented in published research. As a result, while the body of NIAD research has grown extensively, knowledge of the utility of similarity and distance measures within the field has not grown correspondingly. NIAD research covers a myriad of domains and employs a diverse array of techniques from simple k-means clustering through advanced multiagent distributed AD systems. This review presents an overview of the use of similarity and distance measures within NIAD research. The analysis provides a theoretical background in distance measures and a discussion of various types of distance measures and their uses. Exemplary uses of distance measures in published research are presented, as is the overall state of the distance measure rigor in the field. Finally, areas that require further focus on improving the distance measure rigor in the NIAD field are presented.
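
As a hedged illustration of why measure choice matters in this setting, the sketch below scores a network-flow feature vector against a baseline centroid using three common distance and similarity measures; the feature names, values, and threshold are placeholders, not examples drawn from the survey.

```python
import numpy as np

def euclidean(x, y):
    """L2 distance: sensitive to a large deviation in any single feature."""
    return np.sqrt(np.sum((x - y) ** 2))

def manhattan(x, y):
    """L1 distance: more robust to a single extreme feature."""
    return np.sum(np.abs(x - y))

def cosine_similarity(x, y):
    """Angle-based similarity: compares direction of the vectors, not magnitude."""
    return np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y))

# Illustrative flow features: [duration, bytes_sent, bytes_received, packet_count]
baseline_centroid = np.array([1.2, 500.0, 800.0, 12.0])     # "normal" traffic profile
observed_flow = np.array([0.3, 50000.0, 900.0, 400.0])      # candidate anomaly

print("euclidean :", euclidean(observed_flow, baseline_centroid))
print("manhattan :", manhattan(observed_flow, baseline_centroid))
print("cosine sim:", cosine_similarity(observed_flow, baseline_centroid))

# A simple anomaly rule with a hypothetical threshold: flag if distance exceeds tau.
tau = 1000.0
print("anomalous?", euclidean(observed_flow, baseline_centroid) > tau)
```

In practice the features would be normalized first, since raw byte counts dominate an unscaled Euclidean or Manhattan distance; sensitivity to such choices is one reason measure selection and documentation matter.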


IEEE Transactions on Smart Grid | 2011

Reputation-Based Trust for a Cooperative Agent-Based Backup Protection Scheme

John F. Borowski; Kenneth M. Hopkinson; Jeffrey W. Humphries; Brett J. Borghetti

This paper explores integrating a reputation-based trust mechanism with an agent-based backup protection system to help protect against malicious or Byzantine failures. A distributed cooperative trust system has the potential to add a layer of protection designed to operate with greater autonomy. This trust component enables the agents in the system to make assessments using an estimate of the trustworthiness of cooperating protection agents, based on their responsiveness and the consistency of their responses when compared with their peers. Results illustrate the improved decision-making capability of agents that incorporate this cooperative trust method in the presence of failures in neighboring relays.
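
The abstract does not give the trust formulation itself; purely as an illustrative sketch, a protection agent could keep a bounded reputation score for each peer relay that rises when the peer responds and agrees with the peer majority and falls otherwise, then weight that peer's protection votes by the score. Every name and update rule below is hypothetical, not taken from the paper.

```python
from dataclasses import dataclass

@dataclass
class PeerReputation:
    """Hypothetical reputation state for one cooperating protection agent."""
    score: float = 0.5          # start neutral, bounded to [0, 1]
    alpha: float = 0.1          # learning rate for updates

    def update(self, responded: bool, agreed_with_majority: bool) -> float:
        # Reward a peer that responds and agrees with the consensus of its peers;
        # penalize silence or inconsistent (possibly Byzantine) responses.
        if not responded:
            target = 0.0
        elif agreed_with_majority:
            target = 1.0
        else:
            target = 0.2
        self.score += self.alpha * (target - self.score)
        self.score = min(1.0, max(0.0, self.score))
        return self.score

# Usage sketch: weight a peer relay's trip/no-trip vote by its current reputation.
relay_b = PeerReputation()
relay_b.update(responded=True, agreed_with_majority=True)
relay_b.update(responded=False, agreed_with_majority=False)
print(f"relay B vote weight: {relay_b.score:.2f}")
```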


Pattern Recognition Letters | 2017

Deep long short-term memory structures model temporal dependencies improving cognitive workload estimation

Ryan G. Hefron; Brett J. Borghetti; James C. Christensen; Christine M. Schubert Kabban

Highlights: a deep LSTM architecture is proposed to improve cross-day EEG feature stationarity; mean, variance, skewness, and kurtosis input features are statistically evaluated; models account for temporal dependencies in brain activity data, improving results; the approach achieves an average classification accuracy of 93.0%, a 59% reduction in error compared to the best previously published results for the dataset.

Using deeply recurrent neural networks to account for temporal dependence in electroencephalograph (EEG)-based workload estimation is shown to considerably improve day-to-day feature stationarity, resulting in significantly higher accuracy (p < .0001) than classifiers which do not consider the temporal dependence encoded within the EEG time-series signal. This improvement is demonstrated by training several deep Recurrent Neural Network (RNN) models including Long Short-Term Memory (LSTM) architectures, a feedforward Artificial Neural Network (ANN), and Support Vector Machine (SVM) models on data from six participants who each perform several Multi-Attribute Task Battery (MATB) sessions on five separate days spread out over a month-long period. Each participant-specific classifier is trained on the first four days of data and tested using the fifth. An average classification accuracy of 93.0% is achieved using a deep LSTM architecture. These results represent a 59% decrease in error compared to the best previously published results for this dataset. This study additionally evaluates the significance of new features: all combinations of mean, variance, skewness, and kurtosis of EEG frequency-domain power distributions. Mean and variance are statistically significant features, while skewness and kurtosis are not. The overall performance of this approach is high enough to warrant evaluation for inclusion in operational systems.
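
The published architecture and hyperparameters are not reproduced here; the sketch below only illustrates the general shape of a stacked (deep) LSTM classifier over windows of EEG-derived features, written in PyTorch. The layer sizes, window length, feature count, and dropout are placeholders rather than the paper's configuration.

```python
import torch
import torch.nn as nn

class DeepLSTMWorkloadClassifier(nn.Module):
    """Stacked LSTM over a sequence of EEG feature vectors, ending in a
    low/high workload prediction (all sizes are illustrative placeholders)."""
    def __init__(self, n_features=64, hidden_size=128, n_layers=2, n_classes=2):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_features,
                            hidden_size=hidden_size,
                            num_layers=n_layers,   # "deep": more than one LSTM layer
                            batch_first=True,
                            dropout=0.5)
        self.head = nn.Linear(hidden_size, n_classes)

    def forward(self, x):
        # x: (batch, time_steps, n_features), e.g. per-epoch band-power features
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])   # classify from the final time step

# Usage sketch with placeholder shapes: 8 sequences of 30 steps, 64 features each.
model = DeepLSTMWorkloadClassifier()
x = torch.randn(8, 30, 64)
logits = model(x)                                      # (8, 2) workload class scores
loss = nn.CrossEntropyLoss()(logits, torch.randint(0, 2, (8,)))
loss.backward()
```

The participant-specific train/test split described in the abstract (first four days for training, the fifth day for testing) is a property of the evaluation protocol, not of the network itself.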


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2016

Coordinated Displays to Assist Cyber Defenders

Alex Z. Vieane; Gregory J. Funke; Vincent Mancuso; Eric T. Greenlee; Gregory Dye; Brett J. Borghetti; Brent Miller; Lauren Menke; Rebecca Brown

Cyber network analysts must gather evidence from multiple sources and ultimately decide whether or not suspicious activity represents a threat to network security. Information relevant to this task is usually presented in an uncoordinated fashion, meaning analysts must manually correlate data across multiple databases. The current experiment examined whether analyst performance efficiency would be improved by coordinated displays, i.e., displays that automatically link relevant information across databases. We found that coordinated displays nearly doubled performance efficiency, in contrast to the standard uncoordinated displays, and coordinated displays resulted in a modest increase in threat detections. These results demonstrate that the benefits of coordinated displays are significant enough to recommend their inclusion in future cyber defense software.


International Conference on Augmented Cognition | 2015

Objective-Analytical Measures of Workload – the Third Pillar of Workload Triangulation?

Christina Rusnock; Brett J. Borghetti; Ian McQuaid

The ability to assess operator workload is important for dynamically allocating tasks in a way that allows efficient and effective goal completion. For over fifty years, human factors professionals have relied upon self-reported measures of workload. However, these subjective-empirical measures have limited use for real-time applications because they are often collected only at the completion of the activity. In contrast, objective-empirical measurements of workload, such as physiological data, can be recorded continuously, and provide frequently-updated information over the course of a trial. Linking the low-sample-rate subjective-empirical measurement to the high-sample-rate objective-empirical measurements poses a significant challenge. While the series of objective-empirical measurements could be down-sampled or averaged over a longer time period to match the subjective-empirical sample rate, this process discards potentially relevant information, and may produce meaningless values for certain types of physiological data. This paper demonstrates the technique of using an objective-analytical measurement produced by mathematical models of workload to bridge the gap between subjective-empirical and objective-empirical measures. As a proof of concept, we predicted operator workload from physiological data using VACP, an objective-analytical measure, which was validated against NASA-TLX scores. Strong predictive results pave the way to use the objective-empirical measures in real-time augmentation (such as dynamic task allocation) to improve operator performance.
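
As a hedged sketch of the bridging idea: an analytical model such as VACP yields a continuous workload trace over the trial, which can be time-aligned with the high-rate physiological features and used as a regression target, so the physiological model can be trained without down-sampling the sensor data to the subjective sample rate. The alignment step and regressor below are illustrative placeholders, not the paper's implementation.

```python
import numpy as np
from sklearn.linear_model import Ridge

# Placeholder data: physiological features sampled at 4 Hz over a 60 s trial,
# and an analytical (VACP-style) workload trace sampled at 1 Hz.
rng = np.random.default_rng(0)
physio = rng.normal(size=(240, 16))          # 240 samples x 16 features
vacp_trace = rng.uniform(1, 7, size=60)      # one analytical workload value per second

# Time-align: repeat each 1 Hz analytical value across its four 4 Hz samples,
# rather than down-sampling the physiology (which would discard information).
targets = np.repeat(vacp_trace, 4)           # shape (240,)

model = Ridge(alpha=1.0).fit(physio, targets)
predicted_workload = model.predict(physio)   # continuous workload estimate per sample
print(predicted_workload[:5])
```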


AAMAS Workshop on Trading Agent Design and Analysis and Agent-Mediated Electronic Commerce (TADA/AMEC 2006): Automated Negotiation and Strategy Design for Electronic Markets | 2006

A market-pressure-based performance evaluator for TAC-SCM

Brett J. Borghetti; Eric Sodomka; Maria L. Gini; John Collins

We propose a novel method to characterize the performance of autonomous agents in the Trading Agent Competition for Supply Chain Management (TAC-SCM). We create a suite of testing tools that reduce the variability of TAC-SCM games, make them replayable, and generate specific market conditions under which autonomous trading agents can be tested. Using these tools, we show how developers can inspect their agents to reveal and correct undesirable behaviors that might otherwise have gone undiscovered. We also discuss how these tools can be used to improve overall trading agent performance in future competitions.


Hawaii International Conference on System Sciences | 2015

Analysis of Implementations to Secure Git for Use as an Encrypted Distributed Version Control System

Russell G. Shirey; Kenneth M. Hopkinson; Kyle E. Stewart; Douglas D. Hodson; Brett J. Borghetti

This paper analyzes two existing methods for securing Git repositories, Git-encrypt and Git-crypt, by comparing their performance relative to the default Git implementation. Securing a Git repository is necessary when the repository contains sensitive or restricted data. This allows the repository to be stored on any third-party cloud provider with assurance that even if the repository data is leaked, it will remain secure. The analysis of current Git encryption methods is done through a series of tests that examines the performance trade-offs made for added security. This performance is analyzed in terms of size, time, and functionality using three different Git repositories of varying size. The three experiments include initializing and populating a repository, compressing a repository through garbage collection, and modifying then committing files to the repository. The results show that Git maintains functionality with each of these two encryption implementations at the cost of time and repository size. The time increase is found to be a factor ranging from 14 to 38 times the original time. The repository size increase over multiple commits of edited files is found to grow linearly with the size of the working set of files.
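
The paper's benchmark harness is not described in the abstract; a minimal sketch of this kind of timing comparison could wrap ordinary Git commands with a wall-clock timer and run them against a plain repository and a git-crypt-enabled copy of the same content. The repository paths below are placeholders.

```python
import subprocess
import time

def timed_git(repo_path, *args):
    """Run a git command in repo_path and return its wall-clock time in seconds."""
    start = time.perf_counter()
    subprocess.run(["git", "-C", repo_path, *args], check=True, capture_output=True)
    return time.perf_counter() - start

# Placeholder repositories: one plain, one initialized with git-crypt.
repos = {"plain": "/tmp/bench/plain-repo", "git-crypt": "/tmp/bench/crypt-repo"}

for name, path in repos.items():
    add_time = timed_git(path, "add", "-A")
    commit_time = timed_git(path, "commit", "--allow-empty", "-m", "bench")
    gc_time = timed_git(path, "gc")
    print(f"{name:9s} add={add_time:.2f}s commit={commit_time:.2f}s gc={gc_time:.2f}s")
```

Repository size could be compared in a similar loop, for example by summing file sizes under each repository's .git directory after the garbage-collection step.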


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2015

Improving Model Cross-Applicability for Operator Workload Estimation

Andrew M. Smith; Brett J. Borghetti; Christina Rusnock

When operators are overwhelmed, judicious employment of automation can help. Ideally, an adaptive system which can accurately estimate current operator workload can more effectively employ automation. Supervised machine learning models can be trained to estimate workload from operator-state parameters sensed by on-body sensors which, for example, collect heart rate or brain activity information. Unfortunately, estimating operator workload using trained models is limited: using a model trained in one context can yield poor estimation of workload in another. This research examines the efficacy of using two regression-tree alternatives (random forests and pruned regression trees) to decrease workload estimation cross-application error. The study is conducted for a remotely piloted aircraft simulation under two context-switch scenarios: 1) across human operators and 2) across task conditions. While cross-task results were inconclusive, both algorithms significantly reduced cross-application error in estimating workload across operators, and random forests performed best in cross-operator applicability.
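
A hedged sketch of the cross-operator evaluation: fit a random forest and a pruned regression tree on data from one operator, then measure error on data from a different operator. The synthetic features, labels, and pruning parameter below are placeholders, and scikit-learn's cost-complexity pruning stands in for whatever pruning procedure the study used.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(42)

def fake_operator_data(coef=1.0, n=500, n_features=10):
    """Synthetic placeholder: physiological features and workload labels for one operator."""
    X = rng.normal(size=(n, n_features))
    y = coef * 2.0 * X[:, 0] + X[:, 1] + rng.normal(scale=0.5, size=n)
    return X, y

X_train, y_train = fake_operator_data(coef=1.0)   # operator A (training data)
X_test, y_test = fake_operator_data(coef=1.5)     # operator B (cross-application test)

forest = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)
pruned_tree = DecisionTreeRegressor(ccp_alpha=0.01, random_state=0).fit(X_train, y_train)

for name, model in [("random forest", forest), ("pruned tree", pruned_tree)]:
    rmse = mean_squared_error(y_test, model.predict(X_test)) ** 0.5
    print(f"{name:13s} cross-operator RMSE: {rmse:.3f}")
```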


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2017

Cyber Human Research from the Cyber Operator’s View

Brett J. Borghetti; Gregory J. Funke; Robert Pastel; Robert S. Gutzwiller

Historically, cyber security research has largely focused on improving the tools that cyber operators use in their daily jobs. Because the main focus is on these tools, the cyber operator has been an afterthought in the human-machine team. Human research in the cyber operations domain is difficult due to operations tempo and the often-sensitive information associated with the operational domain. As a result, human research in this field is under-represented, and it is a great challenge to better understand and address the needs of cyber operators. This panel brings cyber operators to the human factors community at the annual meeting to discuss their operational workflow, needs, and desires at a publicly consumable level. Our goal is to bridge this notable gap and open opportunities for human factors research in cyber security.

Collaboration


Dive into Brett J. Borghetti's collaborations.

Top Co-Authors

Christina Rusnock

Air Force Institute of Technology

Gilbert L. Peterson

Air Force Institute of Technology

Joseph J. Giametta

Air Force Institute of Technology

Kenneth M. Hopkinson

Air Force Institute of Technology

Ryan G. Hefron

Air Force Institute of Technology

Gregory J. Funke

Air Force Research Laboratory

Jeffrey W. Humphries

Air Force Institute of Technology

John F. Borowski

Air Force Institute of Technology
