
Publication


Featured research published by Michael B. Hurley.


IEEE High Performance Extreme Computing Conference | 2017

Static graph challenge: Subgraph isomorphism

Siddharth Samsi; Vijay Gadepally; Michael B. Hurley; Michael Jones; Edward K. Kao; Sanjeev Mohindra; Paul Monticciolo; Albert Reuther; Steven Smith; William S. Song; Diane Staheli; Jeremy Kepner

The rise of graph analytic systems has created a need for ways to measure and compare the capabilities of these systems. Graph analytics present unique scalability difficulties. The machine learning, high performance computing, and visual analytics communities have wrestled with these difficulties for decades and developed methodologies for creating challenges to move these communities forward. The proposed Subgraph Isomorphism Graph Challenge draws upon prior challenges from machine learning, high performance computing, and visual analytics to create a graph challenge that is reflective of many real-world graph analytics processing systems. The Subgraph Isomorphism Graph Challenge is a holistic specification with multiple integrated kernels that can be run together or independently. Each kernel is well defined mathematically and can be implemented in any programming environment. Subgraph isomorphism is amenable to both vertex-centric implementations and array-based implementations (e.g., using the GraphBLAS.org standard). The computations are simple enough that performance predictions can be made based on simple computing hardware models. The surrounding kernels provide the context for each kernel that allows rigorous definition of both the input and the output for each kernel. Furthermore, since the proposed graph challenge is scalable in both problem size and hardware, it can be used to measure and quantitatively compare a wide range of present-day and future systems. Serial implementations in C++, Python, Python with Pandas, Matlab, Octave, and Julia have been implemented, and their single-threaded performance has been measured. Specifications, data, and software are publicly available at GraphChallenge.org.
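
The challenge's array-based formulation can be illustrated with a small triangle-counting kernel. The sketch below is illustrative only and is not the reference GraphChallenge code; it assumes an undirected, simple graph stored as a symmetric 0/1 SciPy sparse adjacency matrix and counts triangles via the identity count = sum(A .* (A @ A)) / 6.

    # Illustrative array-based triangle-counting kernel (not the reference code).
    import numpy as np
    import scipy.sparse as sp

    def count_triangles(A):
        """Count triangles of a symmetric 0/1 sparse adjacency matrix A."""
        A = sp.csr_matrix(A)
        A.setdiag(0)              # ignore self-loops
        A.eliminate_zeros()
        B = A @ A                 # B[i, j] = number of length-2 paths from i to j
        C = A.multiply(B)         # keep only paths closed by an edge (i, j)
        return int(C.sum()) // 6  # each triangle is counted 6 times

    # Usage: a 4-clique contains 4 triangles.
    edges = np.array([[0, 1], [0, 2], [0, 3], [1, 2], [1, 3], [2, 3]])
    A = sp.coo_matrix((np.ones(len(edges)), (edges[:, 0], edges[:, 1])), shape=(4, 4))
    print(count_triangles(A + A.T))  # -> 4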


Proceedings of SPIE | 2010

Information theoretic approach for performance evaluation of multi-class assignment systems

Ryan S. Holt; Peter A. Mastromarino; Edward K. Kao; Michael B. Hurley

Multi-class assignment is often used to aid in the exploitation of data in the Intelligence, Surveillance, and Reconnaissance (ISR) community. For example, tracking systems collect detections into tracks and recognition systems classify objects into various categories. The reliability of these systems is highly contingent upon the correctness of the assignments. Conventional methods and metrics for evaluating assignment correctness only convey partial information about the system performance and are usually tied to the specific type of system being evaluated. Recently, information theory has been successfully applied to the tracking problem in order to develop an overall performance evaluation metric. In this paper, the information-theoretic framework is extended to measure the overall performance of any multiclass assignment system, specifically, any system that can be described using a confusion matrix. The performance is evaluated based upon the amount of truth information captured and the amount of false information reported by the system. The information content is quantified through conditional entropy and mutual information computations using numerical estimates of the association probabilities. The end result is analogous to the Receiver Operating Characteristic (ROC) curve used in signal detection theory. This paper compares these information quality metrics to existing metrics and demonstrates how to apply these metrics to evaluate the performance of a recognition system.
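
As a rough illustration of the information-theoretic scoring described above, the sketch below computes the mutual information between the truth and the reported class, and the conditional entropy of truth given the report, from a confusion matrix of counts. It is a minimal example, not the paper's full metric, which trades captured truth information against reported false information across operating points.

    # Minimal sketch: information content of a confusion matrix.
    import numpy as np

    def info_metrics(confusion):
        """confusion[i, j] = count of truth class i assigned to reported class j."""
        p = confusion / confusion.sum()        # joint distribution p(t, r)
        pt = p.sum(axis=1, keepdims=True)      # marginal p(t)
        pr = p.sum(axis=0, keepdims=True)      # marginal p(r)
        nz = p > 0
        mi = float((p[nz] * np.log2(p[nz] / (pt @ pr)[nz])).sum())  # I(T;R)
        h_t = float(-(pt[pt > 0] * np.log2(pt[pt > 0])).sum())      # H(T)
        return mi, h_t - mi                    # I(T;R) and H(T|R) = H(T) - I(T;R)

    # Usage: a mostly-correct three-class assignment system.
    cm = np.array([[90, 5, 5], [4, 88, 8], [6, 7, 87]])
    mi, h_cond = info_metrics(cm)
    print(f"I(T;R) = {mi:.3f} bits, H(T|R) = {h_cond:.3f} bits")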


IEEE High Performance Extreme Computing Conference | 2017

Streaming graph challenge: Stochastic block partition

Edward K. Kao; Vijay Gadepally; Michael B. Hurley; Michael Jones; Jeremy Kepner; Sanjeev Mohindra; Paul Monticciolo; Albert Reuther; Siddharth Samsi; William S. Song; Diane Staheli; Steven Smith

An important objective for analyzing real-world graphs is to achieve scalable performance on large, streaming graphs. A challenging and relevant example is the graph partition problem. As a combinatorial problem, graph partition is NP-hard, but existing relaxation methods provide reasonable approximate solutions that can be scaled for large graphs. Competitive benchmarks and challenges have proven to be an effective means to advance state-of-the-art performance and foster community collaboration. This paper describes a graph partition challenge with a baseline partition algorithm of sub-quadratic complexity. The algorithm employs rigorous Bayesian inferential methods based on a statistical model that captures characteristics of real-world graphs. This strong foundation enables the algorithm to address limitations of well-known graph partition approaches such as modularity maximization. This paper describes various aspects of the challenge including: (1) the data sets and streaming graph generator, (2) the baseline partition algorithm with pseudocode, (3) an argument for the correctness of parallelizing the Bayesian inference, (4) different parallel computation strategies such as node-based parallelism and matrix-based parallelism, (5) evaluation metrics for partition correctness and computational requirements, (6) preliminary timing of a Python-based demonstration code and the open source C++ code, and (7) considerations for partitioning the graph in streaming fashion. Data sets and source code for the algorithm, as well as metrics and detailed documentation, are available at GraphChallenge.org.
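
Item (5) above refers to metrics for partition correctness. The sketch below shows one common such metric, pairwise precision and recall over node pairs placed in the same block; it is an illustration and not necessarily the exact metric defined by the challenge.

    # Minimal sketch of a pairwise partition-correctness metric (illustrative only).
    from itertools import combinations

    def pairwise_precision_recall(true_blocks, found_blocks):
        """Both arguments map node index -> block label (equal-length sequences)."""
        tp = fp = fn = 0
        for i, j in combinations(range(len(true_blocks)), 2):
            same_true = true_blocks[i] == true_blocks[j]
            same_found = found_blocks[i] == found_blocks[j]
            if same_found and same_true:
                tp += 1
            elif same_found:
                fp += 1
            elif same_true:
                fn += 1
        precision = tp / (tp + fp) if tp + fp else 1.0
        recall = tp / (tp + fn) if tp + fn else 1.0
        return precision, recall

    # Usage: one node of the second true block is mis-assigned to block 0.
    truth = [0, 0, 0, 1, 1, 1]
    found = [0, 0, 0, 0, 1, 1]
    print(pairwise_precision_recall(truth, found))  # -> (~0.571, ~0.667)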


Information Fusion | 2003

An extension of statistical decision theory with information theoretic cost functions to decision fusion: Part I

Michael B. Hurley

A review of information theory and statistical decision theory has led to the recognition that decisions in statistical decision theory can be interpreted as being determined by the similarity between the distribution of probabilities obtained from measurements and characteristic distributions of probabilities representing the members of the set of decisions. This interpretation was found during a review of statistical decision theory for the special case where the cost function of statistical decision theory is an information theoretic cost function. Additional research has found that the resulting information theoretic decision rule has a number of interesting characteristics that may previously have been recognized as mathematically interesting, but until now have not been recognized for their implications for information fusion. Bayesian probability theory has been criticized for problematic changes in decisions when hypotheses and decisions are reorganized to different levels of abstraction, for weak justification of the selection of prior probabilities, and for the requirement that all probability density functions be defined. The characteristics of the information theoretic decision rule show that its decisions are less sensitive than those of Bayesian probability theory to reorganization of the hypothesis and decision sets to different levels of abstraction. Extension of the information theoretic rule to a fusion rule (to be provided in a companion paper) will be shown to provide increased justification for the selection of prior probabilities through the adoption of Laplace's principle of indifference. The criticism concerning the need for all probability density functions can be partially mitigated by arguing that the hypothesis abstraction levels can be selected so that all the probability density functions may be obtained. A fuller refutation of this third criticism would require assuming that the probability density functions are not definitively known but may themselves be ambiguous; that line of inquiry is not pursued in the two companion papers.
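
The central idea, choosing the decision whose characteristic probability distribution is most similar to the distribution obtained from the measurements, can be sketched concretely. In the sketch below the similarity measure is taken to be the Kullback-Leibler divergence; this is an assumption for illustration, and the paper's actual information theoretic cost function and decision rule may differ in detail.

    # Sketch of a distribution-matching decision rule: pick the decision whose
    # characteristic distribution is closest to the measured distribution.
    # The use of KL divergence here is an illustrative assumption.
    import numpy as np

    def kl(p, q, eps=1e-12):
        p = np.asarray(p, dtype=float) + eps
        q = np.asarray(q, dtype=float) + eps
        p, q = p / p.sum(), q / q.sum()
        return float(np.sum(p * np.log(p / q)))

    def decide(measured, characteristic):
        """characteristic maps each decision to its characteristic distribution."""
        return min(characteristic, key=lambda d: kl(measured, characteristic[d]))

    # Usage with three hypotheses and two decisions (values are illustrative).
    measured = [0.70, 0.25, 0.05]
    characteristic = {
        "declare_target":  [0.80, 0.15, 0.05],
        "declare_clutter": [0.10, 0.30, 0.60],
    }
    print(decide(measured, characteristic))  # -> declare_target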


Archive | 2017

Predicting Team Performance Through Human Behavioral Sensing and Quantitative Workflow Instrumentation

Matthew P. Daggett; Kyle O’Brien; Michael B. Hurley; Daniel Hannon

For decades, the social sciences have provided the foundation for the study of humans interacting with systems; however, sparse, qualitative, and often subjective observations can be insufficient in capturing the complex dynamics of modern sociotechnical enterprises. Technical advances in quantitative system-level and physiological instrumentation have made possible greater objective study of human-system interactions, and joint qualitative-quantitative methodologies are being developed to improve human performance characterization. In this paper we detail how these methodologies were applied to assess teams’ abilities to effectively discover information, collaborate, and make risk-informed decisions during serious games. Statistical models of intra-game performance were developed to determine whether behaviors in specific facets of the gameplay workflow were predictive of analytical performance and game outcomes. A study of over seventy instrumented teams revealed that teams who were more effective at face-to-face communication and system interaction performed better at information discovery tasks and had more accurate game decisions.
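
The statistical modeling step can be pictured with a small, purely hypothetical example: predicting a binary game outcome from instrumented behavioral features such as face-to-face communication rate and system-interaction rate. The feature names, the synthetic data, and the use of logistic regression below are assumptions for illustration and are not the study's actual features or model.

    # Hypothetical sketch of an intra-game outcome model (not the study's model).
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n_teams = 70

    # Synthetic per-team features: communication rate, system-interaction rate.
    X = rng.normal(size=(n_teams, 2))
    y = (0.9 * X[:, 0] + 0.7 * X[:, 1] + rng.normal(scale=0.5, size=n_teams) > 0).astype(int)

    model = LogisticRegression().fit(X, y)
    print("coefficients:", model.coef_[0])           # which behaviors predict success
    print("in-sample accuracy:", model.score(X, y))  # held-out data needed in practice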


International Conference on Information Fusion | 2002

An information theoretic justification for covariance intersection and its generalization

Michael B. Hurley


GALA 2015 Revised Selected Papers of the 4th International Conference on Games and Learning Alliance - Volume 9599 | 2015

An Information Theoretic Approach for Measuring Data Discovery and Utilization During Analytical and Decision-Making Processes

Matthew P. Daggett; Kyle O'Brien; Michael B. Hurley


Archive | 2013

Numerical Estimation of Information Theoretic Measures for Large Data Sets

Michael B. Hurley; Edward K. Kao


arXiv: Distributed, Parallel, and Cluster Computing | 2018

GraphChallenge.org: Raising the Bar on Graph Analytic Performance.

Siddharth Samsi; Vijay Gadepally; Michael B. Hurley; Michael Jones; Edward K. Kao; Sanjeev Mohindra; Paul Monticciolo; Albert Reuther; Steven Smith; William S. Song; Diane Staheli; Jeremy Kepner


Archive | 2009

Simple Real-Time Human Detection using a Single Correlation Filter

John Garofolo; Rama Chellappa; Edward K. Kao; Matthew P. Daggett; Michael B. Hurley; James M. Ferryman; Ali Shahrokni; Weina Ge; Robert T. Collins

Collaboration


Dive into Michael B. Hurley's collaborations.

Top Co-Authors

Edward K. Kao
Massachusetts Institute of Technology

Albert Reuther
Massachusetts Institute of Technology

Diane Staheli
Massachusetts Institute of Technology

Jeremy Kepner
Massachusetts Institute of Technology

Matthew P. Daggett
Massachusetts Institute of Technology

Michael Jones
Massachusetts Institute of Technology

Paul Monticciolo
Massachusetts Institute of Technology

Sanjeev Mohindra
Massachusetts Institute of Technology

Siddharth Samsi
Massachusetts Institute of Technology

Steven Smith
Massachusetts Institute of Technology