Steven Noel
George Mason University
Publications
Featured research published by Steven Noel.
Visualization for Computer Security | 2004
Steven Noel; Sushil Jajodia
We describe a framework for managing network attack graph complexity through interactive visualization, which includes hierarchical aggregation of graph elements. Aggregation collapses non-overlapping subgraphs of the attack graph to single graph vertices, providing compression of attack graph complexity. Our aggregation is recursive (nested), according to a predefined aggregation hierarchy. This hierarchy establishes rules at each level of aggregation, with the rules being based on either common attribute values of attack graph elements or attack graph connectedness. The higher levels of the aggregation hierarchy correspond to higher levels of abstraction, providing progressively summarized visual overviews of the attack graph. We describe rich visual representations that capture relationships among our semantically relevant attack graph abstractions, and our views support mixtures of elements at all levels of the aggregation hierarchy. While it would be possible to allow arbitrary nested aggregation of graph elements, it is better to constrain aggregation according to the semantics of the network attack problem, i.e., according to our aggregation hierarchy. The aggregation hierarchy also makes efficient automatic aggregation possible. We introduce the novel abstraction of protection domain as a level of the aggregation hierarchy, which corresponds to a fully-connected subgraph (clique) of the attack graph. We avoid expensive detection of attack graph cliques through knowledge of the network configuration, i.e., protection domains are predefined. While significant work has been done in automatically generating attack graphs, this is the first treatment of the management of attack graph complexity for interactive visualization. Overall, computation in our framework has worst-case quadratic complexity, but in practice complexity is greatly reduced because users generally interact with (often negligible) subsets of the attack graph. We apply our framework to a real network, using a software system we have developed for generating and visualizing network attack graphs.
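To make the aggregation idea concrete, here is a minimal sketch of a single level of attribute-based aggregation, assuming a simple dictionary-based graph; the function, attribute names, and example data are illustrative and are not taken from the paper's software.

```python
# Minimal sketch of one level of attribute-based aggregation: vertices that
# share a value for the given attribute (e.g., "protection_domain") collapse
# into a single aggregate vertex, and edges are re-drawn between aggregates.
# The graph structure and names here are illustrative, not the paper's tool.

def aggregate_by_attribute(vertices, edges, attribute):
    """vertices: dict vertex_id -> attribute dict; edges: set of (u, v)."""
    # Map each vertex to its aggregate (its attribute value, or itself if unset).
    group_of = {v: attrs.get(attribute, v) for v, attrs in vertices.items()}

    # Aggregate vertices keep the shared attribute value as their identity.
    agg_vertices = {g: {attribute: g} for g in set(group_of.values())}

    # Collapse edges: keep one edge per pair of distinct aggregates.
    agg_edges = {(group_of[u], group_of[v])
                 for (u, v) in edges if group_of[u] != group_of[v]}
    return agg_vertices, agg_edges

# Example: three exploits in one protection domain, one outside it.
vertices = {
    "e1": {"protection_domain": "dmz"},
    "e2": {"protection_domain": "dmz"},
    "e3": {"protection_domain": "dmz"},
    "e4": {"protection_domain": "internal"},
}
edges = {("e1", "e2"), ("e2", "e3"), ("e3", "e4")}
print(aggregate_by_attribute(vertices, edges, "protection_domain"))
# Intra-domain edges vanish; only the dmz -> internal edge remains.
```

In the paper's framework this collapse is applied recursively up a predefined hierarchy; the sketch shows only one level.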
Annual Computer Security Applications Conference | 2003
Steven Noel; Sushil Jajodia; Brian O'Berry; Michael Jacobs
In-depth analysis of network security vulnerability must consider attacker exploits not just in isolation, but also in combination. The general approach to this problem is to compute attack paths (combinations of exploits), from which one can decide whether a given set of network hardening measures guarantees the safety of given critical resources. We go beyond attack paths to compute actual sets of hardening measures (assignments of initial network conditions) that guarantee the safety of given critical resources. Moreover, for given costs associated with individual hardening measures, we compute assignments that minimize overall cost. By doing our minimization at the level of initial conditions rather than exploits, we resolve hardening irrelevancies and redundancies in a way that cannot be done through previously proposed exploit-level approaches. Also, we use an efficient exploit-dependency representation based on monotonic logic that has polynomial complexity, as opposed to many previous attack graph representations having exponential complexity.
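The following is an illustrative sketch of the underlying idea, not the paper's algorithm: exploits are modeled with monotonic preconditions and postconditions, the safety of the critical resource is checked by forward chaining, and hardening assignments are searched over subsets of initial conditions. All exploit names, conditions, and costs are hypothetical.

```python
# Illustrative sketch (not the paper's algorithm): exploits are modeled with
# monotonic preconditions/postconditions, the goal is checked by forward
# chaining, and hardening assignments are searched over subsets of initial
# conditions, keeping the cheapest assignment that makes the goal unreachable.
from itertools import combinations

# Hypothetical exploit dependency model: each exploit needs all of its
# preconditions and yields its postconditions (monotonic: nothing is retracted).
exploits = {
    "ftp_rhosts": ({"ftp_service", "user_access"}, {"trust"}),
    "rsh_login":  ({"trust"},                      {"remote_shell"}),
    "sshd_bof":   ({"ssh_service", "user_access"}, {"remote_shell"}),
}
initial = {"ftp_service", "ssh_service", "user_access"}
cost = {"ftp_service": 20, "ssh_service": 15, "user_access": 60}
goal = "remote_shell"

def reachable(conditions):
    """Forward-chain exploits to a fixed point; monotonicity keeps this polynomial."""
    known, changed = set(conditions), True
    while changed:
        changed = False
        for pre, post in exploits.values():
            if pre <= known and not post <= known:
                known |= post
                changed = True
    return goal in known

# Brute force over hardening assignments (fine for a small example).
best = None
for k in range(len(initial) + 1):
    for disabled in combinations(sorted(initial), k):
        if not reachable(initial - set(disabled)):
            c = sum(cost[d] for d in disabled)
            if best is None or c < best[0]:
                best = (c, disabled)
print(best)  # cheapest set of initial conditions to disable
```

In this toy model the cheapest safe assignment disables ftp_service and ssh_service rather than the more expensive user_access, illustrating why minimizing at the level of initial conditions matters.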
Computer Communications | 2006
Lingyu Wang; Steven Noel; Sushil Jajodia
In defending one's network against cyber attack, certain vulnerabilities may seem acceptable risks when considered in isolation. But an intruder can often infiltrate a seemingly well-guarded network through a multi-step intrusion, in which each step prepares for the next. Attack graphs can reveal the threat by enumerating possible sequences of exploits that can be followed to compromise given critical resources. However, attack graphs do not directly provide a solution to remove the threat. Finding a solution by hand is error-prone and tedious, particularly for larger and less secure networks whose attack graphs are overly complicated. In this paper, we propose a solution to automate the task of hardening a network against multi-step intrusions. Unlike existing approaches whose solutions require removing exploits, our solution is composed of initially satisfied conditions only. Our solution is thus more enforceable, because the initial conditions can be independently disabled, whereas exploits are usually consequences of other exploits and hence cannot be disabled without removing the causes. More specifically, we first represent given critical resources as a logic proposition of initial conditions. We then simplify the proposition to make hardening options explicit. Among the options, we finally choose solutions with the minimum cost. The key improvements over the preliminary version of this paper include a formal framework of the minimum network hardening problem, and an improved one-pass algorithm for deriving the logic proposition while avoiding logic loops.
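A small sketch of the proposition-based view described above, under stated assumptions: the critical resource is written as a proposition over initial conditions, its negation is normalized so that each disjunct is an explicit hardening option, and the cheapest option is chosen. The condition names, costs, and the goal formula are invented for illustration; this is not the authors' one-pass algorithm.

```python
# Sketch of hardening at the level of initial conditions: express reachability
# of the critical resource as a proposition over initial conditions, negate it,
# and read off candidate hardening sets as the disjuncts of the negation in
# disjunctive normal form. Names and costs are hypothetical.
from sympy import symbols
from sympy.logic.boolalg import And, Or, Not, to_dnf

# Hypothetical initial conditions with per-condition hardening costs.
c1, c2, c3 = symbols("ftp_rhosts user_trust sshd_bof")
cost = {c1: 10, c2: 25, c3: 40}

# Suppose exploit-dependency analysis yields: the critical resource is
# compromisable iff (c1 AND c2) OR c3 hold initially.
goal = (c1 & c2) | c3

# Negate and normalize: each disjunct is a set of conditions to disable.
options = to_dnf(Not(goal), simplify=True)
disjuncts = options.args if isinstance(options, Or) else (options,)

def conditions_to_disable(disjunct):
    literals = disjunct.args if isinstance(disjunct, And) else (disjunct,)
    return [lit.args[0] for lit in literals if isinstance(lit, Not)]

best = min((conditions_to_disable(d) for d in disjuncts),
           key=lambda conds: sum(cost[c] for c in conds))
print("disable:", best)  # the cheapest assignment that guarantees safety
```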
Annual Computer Security Applications Conference | 2004
Steven Noel; Eric B. Robertson; Sushil Jajodia
We map intrusion events to known exploits in the network attack graph, and correlate the events through the corresponding attack graph distances. From this, we construct attack scenarios, and provide scores for the degree of causal correlation between their constituent events, as well as an overall relevancy score for each scenario. While intrusion event correlation and attack scenario construction have been previously studied, this is the first treatment based on association with network attack graphs. We handle missed detections through the analysis of network vulnerability dependencies, unlike previous approaches that infer hypothetical attacks. In particular, we quantify lack of knowledge through attack graph distance. We show that low-pass signal filtering of event correlation sequences improves results in the face of erroneous detections. We also show how a correlation threshold can be applied for creating strongly correlated attack scenarios. Our model is highly efficient, with attack graphs and their exploit distances being computed offline. Online event processing requires only a database lookup and a small number of arithmetic operations, making the approach feasible for real-time applications.
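A rough sketch of the online step under stated assumptions: exploit-to-exploit attack graph distances are precomputed offline, each incoming event is already mapped to an exploit, consecutive events are scored by inverse graph distance, and a simple moving-average low-pass filter smooths out erroneous detections before a threshold groups events into scenarios. The distance table, event names, and threshold are made up for illustration.

```python
# Hypothetical precomputed distance table (offline): distance[(a, b)] = number
# of attack graph steps from exploit a to exploit b.
distance = {
    ("scan", "ftp_rhosts"): 1, ("ftp_rhosts", "rsh_login"): 1,
    ("rsh_login", "local_bof"): 1, ("scan", "local_bof"): 3,
    ("ftp_rhosts", "unrelated_probe"): 9, ("unrelated_probe", "rsh_login"): 9,
}

def correlation(prev_exploit, next_exploit):
    d = distance.get((prev_exploit, next_exploit))
    return 0.0 if d is None else 1.0 / d   # closer in the graph => higher score

def low_pass(scores, window=3):
    # Simple moving average; the point is only that some smoothing helps.
    return [sum(scores[max(0, i - window + 1):i + 1]) /
            len(scores[max(0, i - window + 1):i + 1]) for i in range(len(scores))]

events = ["scan", "ftp_rhosts", "unrelated_probe", "rsh_login", "local_bof"]
raw = [correlation(a, b) for a, b in zip(events, events[1:])]
smoothed = low_pass(raw)
threshold = 0.3
print(raw)       # roughly [1.0, 0.11, 0.11, 1.0]
print(smoothed)  # smoothing damps the isolated dip from the unrelated event
print([s >= threshold for s in smoothed])  # candidate scenario membership
```

The per-event cost is one table lookup plus a few arithmetic operations, which is consistent with the real-time feasibility claim above.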
Annual Computer Security Applications Conference | 2005
Steven Noel; Sushil Jajodia
We apply adjacency matrix clustering to network attack graphs for attack correlation, prediction, and hypothesizing. We self-multiply the clustered adjacency matrices to show attacker reachability across the network for a given number of attack steps, culminating in transitive closure for attack prediction over all possible numbers of steps. This reachability analysis provides a concise summary of the impact of network configuration changes on the attack graph. Using our framework, we also place intrusion alarms in the context of vulnerability-based attack graphs, so that false alarms become apparent and missed detections can be inferred. We introduce a graphical technique that shows multiple-step attacks by matching rows and columns of the clustered adjacency matrix. This allows attack impact/responses to be identified and prioritized according to the number of attack steps to victim machines, and allows attack origins to be determined. Our techniques have quadratic complexity in the size of the attack graph.
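A sketch of the reachability idea, under the simplifying assumption that the clustered adjacency matrix is already given as a 0/1 array: multiplying the matrix by itself exposes machines reachable in two attack steps, and iterating to a fixed point yields the transitive closure used for prediction over any number of steps. The example matrix is invented.

```python
import numpy as np

# Hypothetical 4-machine clustered adjacency matrix: A[i, j] = 1 means an
# attacker on machine i can exploit machine j in one step.
A = np.array([[0, 1, 0, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 1],
              [0, 0, 0, 0]])

def reach_within(A, k):
    """0/1 matrix of machines reachable in at most k attack steps."""
    R, P = A.copy(), A.copy()
    for _ in range(k - 1):
        P = (P @ A > 0).astype(int)     # one more step of attacker movement
        R = ((R + P) > 0).astype(int)   # accumulate everything seen so far
    return R

def transitive_closure(A):
    """Iterate to a fixed point: attack prediction over any number of steps."""
    R = A.copy()
    while True:
        new = ((R + (R @ A > 0)) > 0).astype(int)
        if (new == R).all():
            return R
        R = new

print(reach_within(A, 2))       # two-step attacker reachability
print(transitive_closure(A))    # attack prediction over all numbers of steps
```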
Annual Computer Security Applications Conference | 2002
Ronald W. Ritchey; Brian O'Berry; Steven Noel
The individual vulnerabilities of hosts on a network can be combined by an attacker to gain access that would not be possible if the hosts were not interconnected. Currently available tools report vulnerabilities in isolation and in the context of individual hosts in a network. Topological vulnerability analysis (TVA) extends this by searching for sequences of interdependent vulnerabilities, distributed among the various network hosts. Model checking has been applied to the analysis of this problem with some interesting initial results. However, previous efforts did not take into account a realistic representation of network connectivity. These models were enough to demonstrate the usefulness of the model checking approach but would not be sufficient to analyze real-world network security problems. This paper presents a model of network connectivity at multiple levels of the TCP/IP stack appropriate for use in a model checker. With this enhancement, it is possible to represent realistic networks, including common network security devices such as firewalls, filtering routers, and switches.
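A toy sketch of what a multi-layer connectivity model can look like, not the paper's model-checker encoding: an exploit is only considered possible if the attacking host has transport-layer reachability to the required port on the target after firewall filtering is applied. Host names, ports, and rules are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class FirewallRule:
    src: str
    dst: str
    port: int
    allow: bool

@dataclass
class Network:
    link_connected: set = field(default_factory=set)   # {(host_a, host_b)}
    rules: list = field(default_factory=list)           # ordered firewall rules

    def transport_reachable(self, src, dst, port):
        if (src, dst) not in self.link_connected:        # link-layer connectivity
            return False
        for r in self.rules:                             # first matching rule wins
            if r.src == src and r.dst == dst and r.port == port:
                return r.allow
        return True                                      # default allow, for the toy model

net = Network(
    link_connected={("attacker", "web"), ("web", "db")},
    rules=[FirewallRule("attacker", "web", 22, False),   # ssh blocked at the border
           FirewallRule("attacker", "web", 80, True)],
)
print(net.transport_reachable("attacker", "web", 80))   # True: http exploit possible
print(net.transport_reachable("attacker", "web", 22))   # False: filtered by firewall
print(net.transport_reachable("attacker", "db", 1433))  # False: no link-layer path
```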
IEEE Transactions on Dependable and Secure Computing | 2014
Lingyu Wang; Sushil Jajodia; Anoop Singhal; Pengsu Cheng; Steven Noel
By enabling a direct comparison of different security solutions with respect to their relative effectiveness, a network security metric may provide quantifiable evidence to assist security practitioners in securing computer networks. However, research on security metrics has been hindered by difficulties in handling zero-day attacks exploiting unknown vulnerabilities. In fact, the security risk of unknown vulnerabilities has been considered unmeasurable due to the less predictable nature of software flaws. This poses a major difficulty for security metrics, because a more secure configuration would be of little value if it were equally susceptible to zero-day attacks. In this paper, we propose a novel security metric, k-zero day safety, to address this issue. Instead of attempting to rank unknown vulnerabilities, our metric counts how many such vulnerabilities would be required for compromising network assets; a larger count implies more security because the likelihood of having more unknown vulnerabilities available, applicable, and exploitable all at the same time will be significantly lower. We formally define the metric, analyze the complexity of computing the metric, devise heuristic algorithms for intractable cases, and finally demonstrate through case studies that applying the metric to existing network security practices may generate actionable knowledge.
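A very small brute-force sketch of the metric's core idea, not the authors' formal model or algorithms: find the fewest distinct zero-day vulnerabilities that, if they existed, would let an attacker chain exploits from the initial conditions to the critical asset. The exploit names and the tiny dependency model are hypothetical; a larger resulting count would indicate a more secure configuration.

```python
from itertools import combinations

# Each exploit: (required conditions, granted conditions, zero-day vuln or None).
exploits = [
    ({"net_access"},  {"web_shell"}, "zday_web"),
    ({"web_shell"},   {"db_access"}, None),          # known vulnerability
    ({"db_access"},   {"asset"},     "zday_db"),
    ({"net_access"},  {"asset"},     "zday_direct"),
]
initial, goal = {"net_access"}, "asset"
zero_days = {z for *_, z in exploits if z}

def reachable(allowed_zdays):
    """Forward-chain exploits, allowing only the given zero-day vulnerabilities."""
    known, changed = set(initial), True
    while changed:
        changed = False
        for pre, post, z in exploits:
            usable = z is None or z in allowed_zdays
            if usable and pre <= known and not post <= known:
                known |= post
                changed = True
    return goal in known

# k-zero-day safety: smallest number of zero-days sufficient to reach the asset.
k = next(k for k in range(len(zero_days) + 1)
         for subset in combinations(sorted(zero_days), k)
         if reachable(set(subset)))
print("k0d =", k)  # here 1: a single zero-day (zday_direct) already suffices
```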
Visualization for Computer Security | 2005
Steven Noel; Michael Jacobs; Pramod Kalapa; Sushil Jajodia
While efficient graph-based representations have been developed for modeling combinations of low-level network attacks, relatively little attention has been paid to effective techniques for visualizing such attack graphs. This paper describes a number of new attack graph visualization techniques, each having certain desirable properties and offering different perspectives for solving different kinds of problems. Moreover, the techniques we describe can be applied not only separately, but can also be combined into coordinated attack graph views. We apply improved visual clustering to previously described network protection domains (attack graph cliques), which reduces graph complexity and makes the overall attack flow easier to understand. We also visualize the attack graph adjacency matrix, which shows patterns of network attack while avoiding the clutter usually associated with drawing large graphs. We show how the attack graph adjacency matrix concisely conveys the impact of network configuration changes on attack graphs. We also describe a novel attack graph filtering technique based on the interactive navigation of a hierarchy of attack graph constraints. Overall, our techniques scale quadratically with the number of machines in the attack graph.
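A sketch of the adjacency-matrix view under simple assumptions: machines are grouped by a precomputed cluster label (for example, a protection domain), the matrix rows and columns are permuted so each cluster is contiguous, and the result is rendered as an image, where dense blocks reveal attack structure without drawing individual edges. The cluster labels and random matrix are made up; matplotlib and numpy are assumed.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
labels = np.array([0, 1, 0, 2, 1, 0, 2, 1])          # hypothetical cluster labels
A = (rng.random((8, 8)) < 0.35).astype(int)          # hypothetical attack adjacency

order = np.argsort(labels, kind="stable")            # group machines by cluster
clustered = A[np.ix_(order, order)]                  # permute rows and columns

plt.imshow(clustered, cmap="Greys", interpolation="nearest")
plt.title("Clustered attack graph adjacency matrix")
plt.xlabel("victim machine (grouped by cluster)")
plt.ylabel("attacker machine (grouped by cluster)")
plt.savefig("adjacency.png")                         # or plt.show()
```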
Journal of Network and Systems Management | 2008
Steven Noel; Sushil Jajodia
We optimally place intrusion detection system (IDS) sensors and prioritize IDS alerts using attack graph analysis. We begin by predicting all possible ways of penetrating a network to reach critical assets. The set of all such paths through the network constitutes an attack graph, which we aggregate according to underlying network regularities, reducing the complexity of analysis. We then place IDS sensors to cover the attack graph, using the fewest sensors. This minimizes the cost of sensors, including the effort of deploying, configuring, and maintaining them, while maintaining complete coverage of potential attack paths. The sensor-placement problem we pose is an instance of the NP-hard minimum set cover problem. We solve this problem through an efficient greedy algorithm, which works well in practice. Once sensors are deployed and alerts are raised, our predictive attack graph allows us to prioritize alerts based on attack graph distance to critical assets.
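A minimal greedy set-cover sketch consistent with the problem as posed (the sensor names and coverage sets are invented for illustration): each candidate sensor location covers some set of attack graph edges, and we repeatedly pick the sensor covering the most still-uncovered edges.

```python
def greedy_sensor_placement(coverage, edges_to_cover):
    """coverage: dict sensor -> set of attack graph edges it can monitor."""
    uncovered, chosen = set(edges_to_cover), []
    while uncovered:
        # Pick the sensor that monitors the largest number of uncovered edges.
        best = max(coverage, key=lambda s: len(coverage[s] & uncovered))
        if not coverage[best] & uncovered:
            raise ValueError("remaining edges cannot be covered")
        chosen.append(best)
        uncovered -= coverage[best]
    return chosen

coverage = {
    "sensor_dmz":        {"e1", "e2", "e3"},
    "sensor_core":       {"e3", "e4", "e5", "e6"},
    "sensor_db_segment": {"e6", "e7"},
}
print(greedy_sensor_placement(coverage, {"e1", "e2", "e3", "e4", "e5", "e6", "e7"}))
# ['sensor_core', 'sensor_dmz', 'sensor_db_segment']
```

The greedy choice gives the standard logarithmic approximation guarantee for set cover, which matches the paper's observation that the heuristic works well in practice.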
Military Communications Conference | 2011
Sushil Jajodia; Steven Noel; Pramod Kalapa; Massimiliano Albanese; John Williams
The cyber situational awareness of an organization determines its effectiveness in responding to attacks. Mission success is highly dependent on the availability and correct operation of complex computer networks, which are vulnerable to various types of attacks. Today, situational awareness capabilities are limited in many ways, such as inaccurate and incomplete vulnerability analysis, failure to adapt to evolving networks and attacks, inability to transform raw data into cyber intelligence, and inability to handle uncertainty. We describe advanced capabilities for mission-centric cyber situational awareness, based on defense in depth, provided by the Cauldron tool. Cauldron automatically maps all paths of vulnerability through networks, by correlating, aggregating, normalizing, and fusing data from a variety of sources. It provides sophisticated visualization of attack paths, with automatically generated mitigation recommendations. Flexible modeling supports multi-step analysis of firewall rules as well as host-to-host vulnerability, with attack vectors inside the network as well as from the outside. We describe alert correlation based on Cauldron attack graphs, along with analysis of mission impact from attacks.