
Publication


Featured research published by Michael R. Grimaila.


VLSI Test Symposium | 1999

REDO - Random Excitation and Deterministic Observation - First Commercial Experiment

Michael R. Grimaila; Sooryong Lee; Jennifer Dworak; Kenneth M. Butler; B. Stewart; Hari Balachandran; B. Houchins; V. Mathur; Jaehong Park; Li-C. Wang; M.R. Mercer

For many years, non-target defect detection has been simulated by using AND/OR bridges or gross delay faults as surrogates. For example, the defective part level can be estimated based upon surrogate detection when test patterns target stuck-at faults in the circuit. For the first time, test pattern generation techniques that attempt to maximize non-target defect detection have been used to test a real, 100% scanned, commercial chip consisting of 75K logic gates. In this experiment, the defective part level for REDO-based patterns was 1,288 parts per million lower than that achieved by DC stuck-at based patterns generated using today's state-of-the-art tools and techniques.


IEEE Design & Test of Computers | 2001

Defect-oriented testing and defective-part-level prediction

Jennifer Dworak; J.D. Wicker; Sooryong Lee; Michael R. Grimaila; M.R. Mercer; Kenneth M. Butler; B. Stewart; Li-C. Wang

After an integrated circuit (IC) design is complete, but before first silicon arrives from the manufacturing facility, the design team prepares a set of test patterns to isolate defective parts. Applying this test pattern set to every manufactured part reduces the fraction of defective parts erroneously sold to customers as defect-free parts. This fraction is referred to as the defect level (DL). However, many IC manufacturers quote defective part level, which is obtained by multiplying the defect level by one million to give the number of defective parts per million. Ideally, we could accurately estimate the defective part level by analyzing the circuit structure, the applied test-pattern set, and the manufacturing yield. If the expected defective part level exceeded some specified value, then either the test pattern set or (in extreme cases) the design could be modified to achieve adequate quality. Although the IC industry widely accepts stuck-at fault detection as a key test-quality figure of merit, it is nevertheless necessary to detect other defect types seen in real manufacturing environments. A defective-part-level model combined with a method for choosing test patterns that use site observation can predict defect levels in submicron ICs more accurately than simple stuck-at fault analysis.
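
As a rough illustration of the quantities discussed above (and not the prediction model proposed in the article), the classic Williams-Brown approximation relates defect level to manufacturing yield and fault coverage, and the defective part level is simply that fraction expressed per million; a minimal sketch:

```python
def defect_level(yield_fraction: float, fault_coverage: float) -> float:
    """Classic Williams-Brown approximation: DL = 1 - Y**(1 - T),
    where Y is manufacturing yield and T is fault coverage (both in [0, 1]).
    This is a textbook reference model, not the model from the article."""
    return 1.0 - yield_fraction ** (1.0 - fault_coverage)

def defective_parts_per_million(dl_fraction: float) -> float:
    """Defective part level as quoted by manufacturers: the defect level
    (a fraction) multiplied by one million."""
    return dl_fraction * 1_000_000

# Example: 90% yield and 99% stuck-at fault coverage.
dl = defect_level(0.90, 0.99)
print(f"Defect level: {dl:.6f} ({defective_parts_per_million(dl):.0f} DPM)")
```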


Communications of the AIS | 2007

Management of Information Security: Challenges and Research Directions

Joobin Choobineh; Gurpreet Dhillon; Michael R. Grimaila; Jackie Rees

Over the past decade, management of information systems security has emerged as a challenging task. Given the increased dependence of businesses on computer-based systems and networks, vulnerabilities of systems abound. Clearly, exclusive reliance on either the technical or the managerial controls is inadequate. Rather, a multifaceted approach is needed. In this paper, based on a panel presented at the 2007 Americas Conference on Information Systems held in Keystone, Colorado, we provide examples of failures in information security, identify challenges for the management of information systems security, and make a case that these challenges require new theory development via examining reference disciplines. We identify these disciplines, recognize applicable research methodologies, and discuss desirable properties of applicable theories.


Design, Automation and Test in Europe | 2002

A New ATPG Algorithm to Limit Test Set Size and Achieve Multiple Detections of All Faults

Sooryong Lee; B. Cobb; Jennifer Dworak; Michael R. Grimaila; M.R. Mercer

Deterministic observation and random excitation of fault sites during the ATPG process dramatically reduce the overall defective part level. However, multiple observations of each fault site increase test set size and require more tester memory. In this paper, we propose a new ATPG algorithm to find a near-minimal test pattern set that detects faults multiple times and achieves an excellent defective part level. This greedy approach uses 3-value fault simulation to estimate the potential value of each candidate vector at each stage of ATPG. The results show that a near-minimal vector set can, in most cases, be generated using only dynamic compaction techniques. Finally, a systematic method to trade off defective part level against test set size is also presented.
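
A highly simplified sketch of the kind of greedy, simulation-driven vector selection the abstract describes (illustrative only; the fault simulator callback, candidate list, and scoring below are hypothetical stand-ins, not the authors' algorithm):

```python
from typing import Callable, Dict, List, Set

def greedy_multi_detect_selection(
    candidate_vectors: List[str],
    detected_faults: Callable[[str], Set[str]],  # hypothetical fault simulator
    all_faults: Set[str],
    target_detections: int = 2,
) -> List[str]:
    """Greedily pick vectors until every fault has been detected
    target_detections times, or no remaining candidate adds value.
    Each candidate is scored by how much remaining detection 'credit' it earns."""
    remaining: Dict[str, int] = {f: target_detections for f in all_faults}
    selected: List[str] = []
    while any(remaining.values()) and candidate_vectors:
        def score(vec: str) -> int:
            return sum(1 for f in detected_faults(vec) if remaining.get(f, 0) > 0)
        best = max(candidate_vectors, key=score)
        if score(best) == 0:
            break  # no candidate improves multi-detect coverage further
        selected.append(best)
        candidate_vectors.remove(best)
        for f in detected_faults(best):
            if remaining.get(f, 0) > 0:
                remaining[f] -= 1
    return selected
```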


Cyber Security and Information Intelligence Research Workshop | 2009

Towards insider threat detection using web server logs

Justin Myers; Michael R. Grimaila; Robert F. Mills

Malicious insiders represent one of the most difficult categories of threats an organization must consider when mitigating operational risk. Insiders by definition possess elevated privileges; have knowledge about control measures; and may be able to bypass security measures designed to prevent, detect, or react to unauthorized access. In this paper, we discuss our initial research efforts focused on the detection of malicious insiders who exploit internal organizational web servers. The objective of the research is to apply lessons learned in network monitoring domains and enterprise log management to investigate various approaches for detecting insider threat activities using standardized tools and a common event expression framework.
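
As a toy illustration of mining web server logs for suspicious access (not the authors' detection approach; the log format and threshold below are assumptions), one might parse Common Log Format entries and flag clients with unusually high request volumes:

```python
import re
from collections import Counter

# Apache Common Log Format: host ident user [time] "request" status size
LOG_PATTERN = re.compile(r'^(\S+) \S+ \S+ \[[^\]]+\] "[^"]*" \d{3} \S+')

def flag_high_volume_clients(log_lines, threshold=500):
    """Count requests per client host and flag those above a simple threshold.
    Purely illustrative; real insider-threat detection would correlate many
    more signals (time of day, resources accessed, user role, etc.)."""
    counts = Counter()
    for line in log_lines:
        match = LOG_PATTERN.match(line)
        if match:
            counts[match.group(1)] += 1
    return {host: n for host, n in counts.items() if n > threshold}

# Usage:
# with open("access.log") as f:
#     print(flag_high_volume_clients(f))
```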


International Test Conference | 2000

Enhanced DO-RE-ME based defect level prediction using defect site aggregation (MPG-D)

Jennifer Dworak; Michael R. Grimaila; Sooryong Lee; Li-C. Wang; M.R. Mercer

Predicting the final value of the defective part level after the application of a set of test vectors is not a simple problem. In order for the defective part level to decrease, both the excitation and observation of defects must occur. This research shows that the probability of exciting an as-yet-undetected defect does indeed decrease exponentially as the number of observations increases. In addition, a new defective part level model is proposed which accurately predicts the final defective part level (even at high fault coverages) for several benchmark circuits and which continues to provide good predictions even as changes are made to the set of test patterns applied.
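
The exponential trend the abstract reports can be pictured with a simple functional form; the parameters below are invented for illustration and are not values from the paper:

```python
import math

def excitation_probability(num_observations: int,
                           p0: float = 0.05,
                           decay: float = 0.3) -> float:
    """Illustrative model: the probability that a vector excites a
    not-yet-detected defect falls off exponentially with the number of
    observations already made at that site."""
    return p0 * math.exp(-decay * num_observations)

for n in range(6):
    print(n, round(excitation_probability(n), 5))
```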


Computer and Communications Security | 2008

A survey of state-of-the-art in anonymity metrics

Douglas J. Kelly; Richard A. Raines; Michael R. Grimaila; Rusty O. Baldwin; Barry E. Mullins

Anonymization enables organizations to protect their data and systems from a diverse set of attacks and preserve privacy; however, in the area of anonymized network data, few, if any, are able to precisely quantify how anonymized their information is for any particular dataset. Indeed, recent research indicates that many anonymization techniques leak some information. An ability to confidently measure this information leakage and any changes in anonymity levels plays a crucial role in facilitating the free flow of cross-organizational network data sharing and promoting wider adoption of anonymization techniques. Fortunately, multiple methods of analyzing anonymity exist. Typical approaches use simple quantifications and probabilistic models; however, to the best of our knowledge, only one network data anonymization metric has been proposed. More importantly, no one-stop-shop paper exists that comprehensively surveys this area for other candidate measures; therefore, this paper explores the state of the art in anonymity metrics. The objective is to provide a macro-level view of the systematic analysis of anonymity preservation, degradation, or elimination for data anonymization as well as network communications anonymization.
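
One widely cited quantification in this literature (though not necessarily the network-data metric the survey refers to) is the entropy-based degree of anonymity, which normalizes the Shannon entropy of the attacker's probability distribution over candidate senders; a minimal sketch:

```python
import math
from typing import Sequence

def degree_of_anonymity(probabilities: Sequence[float]) -> float:
    """Entropy-based degree of anonymity (in the style of Serjantov/Danezis
    and Diaz et al.): H(X) / H_max, where X is the attacker's distribution
    over candidate senders and H_max = log2(N). Returns a value in [0, 1]."""
    n = len(probabilities)
    if n <= 1:
        return 0.0
    entropy = -sum(p * math.log2(p) for p in probabilities if p > 0)
    return entropy / math.log2(n)

# Uniform suspicion over 4 senders gives perfect anonymity (1.0);
# one sender singled out with high probability scores much lower.
print(degree_of_anonymity([0.25, 0.25, 0.25, 0.25]))  # 1.0
print(degree_of_anonymity([0.85, 0.05, 0.05, 0.05]))  # ~0.42
```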


IEEE Communications Surveys and Tutorials | 2012

Exploring Extant and Emerging Issues in Anonymous Networks: A Taxonomy and Survey of Protocols and Metrics

Douglas J. Kelly; Richard A. Raines; Rusty O. Baldwin; Michael R. Grimaila; Barry E. Mullins

The desire to preserve privacy in cyberspace drives research in the area of anonymous networks. Any entity operating in cyberspace is susceptible to debilitating cyber attacks. As part of the National Strategy to Secure Cyberspace, the United States acknowledges that the speed and anonymity of cyber attacks make distinguishing among the actions of terrorists, criminals, and nation states difficult. Indeed, today's Internet is an incredibly effective, uncontrolled weapon for eavesdropping and spying. Therefore, anonymity and privacy are increasingly important issues. A plethora of existing or proposed anonymous networks achieve diverse levels of anonymity against a variety of adversarial attacks. However, no known taxonomy provides a comprehensive classification of the varied set of wired, wireless, and hybrid anonymous communications networks. We develop a novel cubic taxonomy to facilitate the systematic definition and classification of anonymity in anonymous communications networks. Three key anonymity components are thoroughly explored: anonymity property, adversary capability, and network type. More importantly, an in-depth description of a new tree-based taxonomy for the state of the art in wireless anonymous protocols is offered. For completeness, a tree-based taxonomy of wired and hybrid anonymous protocols is also provided. Lastly, several evolving anonymity metrics which quantify anonymity preservation, degradation, and elimination in existing and future anonymous networks are examined. Hence, this paper explores extant and emerging issues in anonymous networks via an intuitive taxonomy and surveys anonymous protocols and quantifiable metrics essential for any entity determined to assure anonymity and preserve privacy in cyberspace against an adversary.


Computers & Security | 2012

Malware target recognition via static heuristics

Thomas E. Dube; Richard A. Raines; Gilbert L. Peterson; Kenneth W. Bauer; Michael R. Grimaila; Steven K. Rogers

Organizations increasingly rely on the confidentiality, integrity and availability of their information and communications technologies to conduct effective business operations while maintaining their competitive edge. Exploitation of these networks via the introduction of undetected malware ultimately degrades their competitive edge, while taking advantage of limited network visibility and the high cost of analyzing massive numbers of programs. This article introduces the novel Malware Target Recognition (MaTR) system, which combines the decision tree machine learning algorithm with static heuristic features for malware detection. By focusing on contextually important static heuristic features, this research demonstrates superior detection results. Experimental results on large sample datasets demonstrate near-ideal malware detection performance (99.9+% accuracy) with low false positive (8.73e-4) and false negative (8.03e-4) rates at the same point on the performance curve. Test results against a set of publicly unknown malware, including potential advanced competitor tools, show MaTR's superior detection rate (99%) versus the union of detections from three commercial antivirus products (60%). The resulting model is a fine-granularity sensor with the potential to dramatically augment cyberspace situation awareness.
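
A conceptual sketch of the classification setup described above; the feature names, synthetic data, and use of scikit-learn are assumptions for illustration, not the authors' feature set or tooling:

```python
# pip install scikit-learn
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Hypothetical static heuristic features extracted from executables, e.g.
# [number_of_sections, code_section_entropy, import_count, has_packer_signature]
X = [
    [4, 6.1, 120, 0],
    [3, 7.9,   5, 1],
    [5, 6.4, 200, 0],
    [2, 7.8,   8, 1],
    [4, 6.0, 150, 0],
    [3, 7.7,  10, 1],
]
y = [0, 1, 0, 1, 0, 1]  # toy labels: 0 = benign, 1 = malware

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=0)
clf = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
print("toy accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```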


Computational Intelligence and Security | 2007

Towards an Information Asset-Based Defensive Cyber Damage Assessment Process

Michael R. Grimaila; Larry W. Fortson

The use of computers and communication technologies to enhance command and control (C2) processes has yielded enormous benefits in military operations. Commanders are able to make higher quality decisions by accessing a greater number of information resources, obtaining more frequent updates from their information resources, and by correlation between, and across, multiple information resources to reduce uncertainty in the battlespace. However, these benefits do not come without a cost. The reliance on technology results in significant operational risk that is often overlooked and is frequently underestimated. In this research-in-progress paper, we discuss our initial findings in our efforts to improve the defensive cyber battle damage assessment process within US Air Force networks. We have found that the lack of a rigorous, well-documented, information asset-based risk management process results in significant uncertainty and delay when assessing the impact of an information incident.

Collaboration


Dive into Michael R. Grimaila's collaboration.

Top Co-Authors

Robert F. Mills, Air Force Institute of Technology
Douglas D. Hodson, Air Force Institute of Technology
Logan O. Mailloux, Air Force Institute of Technology
Richard A. Raines, Air Force Institute of Technology
Gilbert L. Peterson, Air Force Institute of Technology
Barry E. Mullins, Air Force Institute of Technology
Rusty O. Baldwin, Air Force Institute of Technology
Colin V. McLaughlin, United States Naval Research Laboratory
John M. Colombi, Air Force Institute of Technology