Thomas E. Daniels
Iowa State University
Publications
Featured research published by Thomas E. Daniels.
ACM Transactions on Information and System Security | 2008
Wei Wang; Thomas E. Daniels
In this article we develop a novel graph-based approach to network forensics analysis. Central to our approach is the evidence graph model, which facilitates evidence presentation and automated reasoning. Based on the evidence graph, we propose a hierarchical reasoning framework that consists of two levels. Local reasoning aims to infer the functional states of network entities from local observations. Global reasoning aims to identify important entities from the graph structure and extract groups of densely correlated participants in the attack scenario. This article also presents a framework for interactive hypothesis testing, which helps to identify the attacker's non-explicit attack activities from secondary evidence. We developed a prototype system that implements the techniques discussed. Experimental results on various attack datasets demonstrate that our analysis mechanism achieves good coverage and accuracy in attack group and scenario extraction with less dependence on hard-coded expert knowledge.
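To make the global-reasoning step concrete, here is a minimal sketch that ranks the nodes of a toy evidence graph by weighted centrality so that hubs of correlated activity stand out. The graph, the edge weights, and the use of networkx are illustrative assumptions, not the authors' implementation.

import networkx as nx

# Directed evidence graph: edges are observed suspicious interactions,
# weighted by evidence strength (e.g., alert confidence). All values invented.
G = nx.DiGraph()
G.add_weighted_edges_from([
    ("attacker", "stepping_stone", 0.9),
    ("stepping_stone", "victim_a", 0.8),
    ("stepping_stone", "victim_b", 0.7),
    ("scanner", "victim_a", 0.3),
])

# Global reasoning (sketch): weighted eigenvector centrality on the undirected
# projection highlights entities central to the correlated attack activity.
scores = nx.eigenvector_centrality(G.to_undirected(), weight="weight")
for node, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{node}: {score:.3f}")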
international conference on distributed computing systems workshops | 2005
Yongping Tang; Thomas E. Daniels
Networks have become omnipresent in today's world and are part of the basic infrastructure. Security is an important and urgent problem for all network users, but the current situation is severe: not only is it difficult to block network criminals, in many cases it is impossible to identify them at all. There is a growing need for systems that allow not only the detection of complex attacks, but also after-the-fact understanding of what happened. This could be used in a forensic sense or simply as a managerial tool to recover and repair damaged systems. Few network systems support forensic evidence collection, and current systems also lack effective attack attribution. In this paper, we provide a network forensics framework based on distributed techniques, thereby providing an integrated platform for automatic forensic evidence collection and efficient data storage, support for easy integration of known attribution methods, effective cooperation, and an attack attribution graph generation mechanism that illustrates hacking procedures.
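As a rough illustration of the attack attribution graph idea, the sketch below assembles a toy attribution graph from evidence records gathered at distributed sensors; a walk through the resulting edges approximates the hacking procedure step by step. The record fields, addresses, and networkx dependency are assumptions for illustration, not the framework's actual design.

import networkx as nx

# Hypothetical evidence records: (observing sensor, source, destination, event).
records = [
    ("sensor1", "10.0.0.5", "10.0.1.9", "exploit"),
    ("sensor2", "10.0.1.9", "10.0.2.3", "login"),
    ("sensor2", "10.0.1.9", "10.0.2.4", "scan"),
]

# Each record becomes a labeled edge in the attribution graph.
attribution = nx.MultiDiGraph()
for sensor, src, dst, event in records:
    attribution.add_edge(src, dst, sensor=sensor, event=event)

for src, dst, data in attribution.edges(data=True):
    print(f"{src} -> {dst} [{data['event']} seen by {data['sensor']}]")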
annual computer security applications conference | 2005
Wei Wang; Thomas E. Daniels
In this paper, we present techniques for a network forensics analysis mechanism that includes effective evidence presentation, manipulation, and automated reasoning. We propose the evidence graph as a novel graph model to facilitate the presentation and manipulation of intrusion evidence. For automated evidence analysis, we develop a hierarchical reasoning framework that includes local reasoning and global reasoning. Local reasoning aims to infer the roles of suspicious hosts from local observations. Global reasoning aims to identify groups of strongly correlated hosts in the attack and derive their relationships. By using the evidence graph model, we effectively integrate analyst feedback into the automated reasoning process. Experimental results demonstrate the potential and effectiveness of our proposed approaches.
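A toy illustration of the local-reasoning step, assuming simple per-host observations: infer a host's likely role from counts an analyst could extract locally. The features, thresholds, and role labels are invented for illustration and are not taken from the paper.

# Sketch only: rule-based role inference from local observations.
def infer_role(out_conns: int, in_conns: int, alerts: int) -> str:
    if alerts > 0 and out_conns > in_conns:
        return "attacker/stepping-stone"   # initiates suspicious traffic
    if alerts > 0:
        return "victim"                    # mostly receives suspicious traffic
    if out_conns > 50:
        return "scanner"                   # fans out widely with no alerts
    return "benign"

print(infer_role(out_conns=120, in_conns=4, alerts=3))   # attacker/stepping-stone
print(infer_role(out_conns=2, in_conns=40, alerts=1))    # victim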
international conference on computer communications and networks | 2004
Basheer Al-Duwairi; Thomas E. Daniels
Recently, several schemes have been proposed for IP traffic source identification for tracing attacks that employ source address spoofing, such as denial-of-service (DoS) attacks. Most of these schemes are based on packet marking, i.e., augmenting IP packets with partial path information. A major challenge to packet marking schemes is the limited space available in the IP header for marking purposes. In this paper, we focus on this issue and propose topology-based encoding schemes supported by real Internet measurements. In particular, we propose an idealized deterministic edge-append scheme in which we assume that the IP header can be modified to include a marking option field of fixed size. We also propose a deterministic pipelined packet marking scheme that is backward compatible with IPv4 (i.e., requiring no IP header modification). The validity of both schemes depends directly on the statistical information that we extract from large datasets representing Internet maps. Our studies show that it is possible to encode an entire path using 52 bits.
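As a back-of-the-envelope sketch of why edge-append marking can stay compact: each router along the path only needs enough bits to name one of its outgoing links, i.e., ceil(log2(degree)) bits per hop. The degrees and path length below are made up; the paper's 52-bit figure comes from statistics measured on real Internet maps.

import math

def edge_append_bits(degrees_along_path):
    # Bits needed to name the chosen outgoing link at each hop.
    return sum(max(1, math.ceil(math.log2(d))) for d in degrees_along_path)

# A hypothetical 16-hop path whose routers have these out-degrees:
path_degrees = [4, 8, 32, 64, 16, 8, 4, 2, 2, 4, 8, 16, 8, 4, 2, 2]
print(edge_append_bits(path_degrees), "bits to encode the full path")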
technical symposium on computer science education | 2009
Benjamin Anderson; Amy Joines; Thomas E. Daniels
The Xen Worlds project at Iowa State University was designed to provide a virtualized lab environment for the Information Assurance program. The large number of off-campus students, and a desire for high levels of security, drove many of the requirements for the Xen Worlds environment. Some of the requirements established for the project were: The environment needed to be equally accessible and easy to use for both on- and off-campus students. It needed to be isolated from the outside world and other students. The system had to be equally usable for students with limited computing and network resources. Costs had to be kept to a minimum. The Xen Worlds environment has now been used to support several courses at both the undergraduate and graduate level. This virtual environment was equally accessible to on- and off-campus students on a 24/7 basis and supported numerous assignments that fulfilled established curriculum requirements. Finally, surveys of students who used the Xen Worlds environment show that students have a favorable view of the project and consider it a useful and convenient learning tool.
IEEE Transactions on Information Forensics and Security | 2012
Ryan M. Gerdes; Mani Mina; Steve F. Russell; Thomas E. Daniels
This work sets forth a systematic approach for the investigation and utilization of the signal characteristics of digital devices for use in a security context. A methodology, built upon an optimal detector, the matched filter, is proposed that allows for the reliable identification and tracking of wired Ethernet cards by use of their hardware signaling characteristics. The matched filter is found to be sensitive enough to differentiate between devices using only a single Ethernet frame; an adaptive thresholding strategy employing prediction intervals is used to cope with the stochastic nature of the signals. To demonstrate the validity of the methodology, and to determine which portions of the signal are useful for identification purposes, experiments were performed on three different models of 10/100 Ethernet cards, 27 devices in total. In selecting the cards, an effort was made to maximize intramodel similarity and thus present a worst-case scenario. While the primary focus of the work is network-based authentication, forensic applications are also considered. By using data collected from the same devices at different times, it is shown that some models of cards can be reidentified even after a month has elapsed since they were last seen.
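A minimal numpy sketch of the matched-filter idea, assuming a stored per-device template: correlate a captured frame against the template and compare the response to a threshold. The synthetic signals and fixed cutoff are illustrative assumptions; the paper derives its thresholds from prediction intervals over many frames.

import numpy as np

rng = np.random.default_rng(0)
template = np.sin(np.linspace(0, 8 * np.pi, 200))      # stored device signature
capture = template + 0.1 * rng.standard_normal(200)    # noisy observed frame

# Normalized matched-filter statistic (a normalized inner product here).
stat = np.dot(capture, template) / (np.linalg.norm(capture) * np.linalg.norm(template))
threshold = 0.9   # a prediction interval would set this in practice
print("same device" if stat > threshold else "different device", f"(stat={stat:.3f})")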
communications and mobile computing | 2009
Su Chang; Linfeng Zhang; Yong Guan; Thomas E. Daniels
Botnets are among the most serious threats facing the Internet and enterprise networks. To protect against them effectively, researchers should focus not only on known botnets, but also on the inherent relationships among them and on botnets that may appear in the future. In this paper, we first propose a framework capable of characterizing the inherent relationships between all different kinds of current botnets (both existing and suggested in the literature) as well as worms. Based on the proposed framework, we predict a new botnet that we call the Loosely Coupled Peer-to-Peer (P2P) Botnet (lcbot), which is stealthy and can be considered a combination of existing P2P botnet structures. We conduct experiments to compare the performance of lcbot against other P2P botnets in the literature and gain a deeper understanding of P2P botnets. We also discuss potential mechanisms to detect the existence of P2P botnets. To the best of our knowledge, we are the first to propose such a relationship framework for botnets and the lcbot concept in P2P botnet research.
new security paradigms workshop | 2006
Wei Wang; Thomas E. Daniels
In this paper we propose the new paradigm of applying diffusion and graph spectral methods to network forensic analysis. Based on an evidence graph model built from collected evidence, graph spectral methods show potential for identifying key components and patterns of attack by extracting important graph structures. We also present the novel view that the propagation of suspicion in an attack scene can be modelled in analogy with heat diffusion in physical systems. In this paradigm, the evidence graph becomes the basis for a physical construct that derives properties such as conductivity and heat generation from evidence features. We argue that diffusion and graph spectral methods not only provide a mathematically well-grounded approach to network forensic analysis, but also open up the opportunity to apply structured parameter refinement and high-performance computation methods to the forensic analysis field.
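The heat-diffusion analogy can be made concrete in a few lines of linear algebra: suspicion x(t) = exp(-tL) x0 diffuses over the evidence graph through its Laplacian L. The toy graph and diffusion times below are assumptions for illustration only, not parameters from the paper.

import numpy as np
from scipy.linalg import expm

# Symmetric adjacency of a 4-node evidence graph, weighted by evidence.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A        # combinatorial graph Laplacian

x0 = np.array([1.0, 0.0, 0.0, 0.0])   # initial suspicion placed on node 0
for t in (0.1, 0.5, 2.0):
    print(t, np.round(expm(-t * L) @ x0, 3))   # suspicion spreads as t grows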
international conference on emerging security information, systems and technologies | 2009
Su Chang; Thomas E. Daniels
Node behavior profiling is a promising tool for many aspects of network security, yet limited work has been done on it in the literature. Our goal is to couple node behavior profiles with statistical tests, with a focus on enterprise security. In this paper, we first propose a correlation-based node behavior profiling approach to study node behaviors in enterprise network environments. We then propose a formal statistical test on the most common behavior profiles that is able to detect worm propagation. In our initial studies, we evaluate our profiling and detection schemes using real enterprise data (the LBNL traces). The results show that the correlation-based node behavior profiling approach can capture normal behaviors of different types; consequently, the behavior profiles are promising for anomaly detection when coupled with statistical methods.
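A toy sketch of the correlation idea: compare a host's current per-port traffic mix against its historical profile and flag a worm-like shift when the correlation drops. The feature vectors and the 0.5 cutoff are invented for illustration; the paper applies formal statistical tests to profiles built from the LBNL traces.

import numpy as np

profile = np.array([0.7, 0.1, 0.1, 0.1])    # historical per-port traffic mix
normal = np.array([0.65, 0.15, 0.1, 0.1])   # today's mix, still similar
wormy = np.array([0.1, 0.1, 0.1, 0.7])      # sudden shift toward one service

for label, obs in (("normal", normal), ("wormy", wormy)):
    r = np.corrcoef(profile, obs)[0, 1]     # Pearson correlation with profile
    print(label, f"corr={r:.2f}", "-> alert" if r < 0.5 else "-> ok")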
computer and communications security | 2008
Shantanu Gattani; Thomas E. Daniels
Network security research can benefit greatly from testing environments capable of generating realistic, repeatable, and configurable background traffic. In order to conduct network security experiments, researchers require isolated testbeds capable of recreating actual network environments, complete with infrastructure and traffic details. Unfortunately, due to privacy and flexibility concerns, actual network traffic is rarely shared by organizations. Trace data anonymization is one solution to this problem. The research community has responded to this sanitization problem with anonymization tools that aim to remove sensitive information from network traces, and with attacks on anonymized traces that aim to evaluate the efficacy of the anonymization schemes. However, there is still no comprehensive model that distills all elements of the sanitization problem into a functional reference model. In this paper we offer such a comprehensive functional reference model, identifying and binding together all the entities required to formulate the problem of network data anonymization. We also build a new information flow model that illustrates the overly optimistic nature of inference attacks on anonymized traces, provide a probabilistic interpretation of the information model, and develop a privacy metric for anonymized traces.
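One way to make the privacy-metric idea concrete is an entropy-style measure over the attacker's posterior belief about which real host an anonymized address maps to: the higher the entropy, the more residual privacy. The posterior values below are made up, and this measure is only a stand-in for the paper's own probabilistic metric.

import math

def privacy_entropy(posterior):
    # Shannon entropy (bits) of the attacker's belief over candidate hosts.
    return -sum(p * math.log2(p) for p in posterior if p > 0)

uniform = [0.25, 0.25, 0.25, 0.25]   # attacker learned nothing: 2 bits left
leaky = [0.85, 0.05, 0.05, 0.05]     # an inference attack nearly succeeded
print(privacy_entropy(uniform), privacy_entropy(leaky))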