
Publication


Featured research published by Stephen W. Neville.


International Conference on Cloud Computing | 2010

Dynamic Resource Allocation in Computing Clouds Using Distributed Multiple Criteria Decision Analysis

Yagiz Onat Yazir; Chris Matthews; Roozbeh Farahbod; Stephen W. Neville; Adel Guitouni; Sudhakar Ganti; Yvonne Coady

In computing clouds, it is desirable to avoid wasting resources as a result of under-utilization and to avoid lengthy response times as a result of over-utilization. In this paper, we propose a new approach for dynamic autonomous resource management in computing clouds. The main contribution of this work is two-fold. First, we adopt a distributed architecture where resource management is decomposed into independent tasks, each of which is performed by Autonomous Node Agents that are tightly coupled with the physical machines in a data center. Second, the Autonomous Node Agents carry out configurations in parallel through Multiple Criteria Decision Analysis using the PROMETHEE method. Simulation results show that the proposed approach is promising in terms of scalability, feasibility and flexibility.
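The PROMETHEE ranking step that each Autonomous Node Agent performs can be sketched as follows. This is a minimal, hypothetical illustration of PROMETHEE II net-flow ranking, not the paper's implementation: the criteria (free CPU, free RAM, current load), the weights, the candidate hosts, and the usual-criterion preference function are all illustrative assumptions.

```python
# Hypothetical sketch of PROMETHEE II net-flow ranking, as a node agent
# might use it to rank candidate physical machines. All inputs below are
# illustrative, not values from the paper.

def promethee_rank(alternatives, weights, maximize):
    """Rank alternatives (name -> list of criterion scores) by net flow."""
    names = list(alternatives)
    n = len(names)

    def pref(a, b, k):
        # Usual-criterion preference: 1 if a is strictly better than b on k.
        diff = alternatives[a][k] - alternatives[b][k]
        if not maximize[k]:          # cost criterion: smaller is better
            diff = -diff
        return 1.0 if diff > 0 else 0.0

    # Aggregated preference index pi(a, b) = sum_k w_k * P_k(a, b).
    pi = {(a, b): sum(w * pref(a, b, k) for k, w in enumerate(weights))
          for a in names for b in names if a != b}

    # Net flow = normalized outranking flow minus outranked flow.
    net = {a: (sum(pi[(a, b)] for b in names if b != a)
               - sum(pi[(b, a)] for b in names if b != a)) / (n - 1)
           for a in names}
    return sorted(names, key=net.get, reverse=True)

# Example: rank three hosts on (free CPU, free RAM, current load).
hosts = {"host-A": [0.6, 0.7, 0.2],
         "host-B": [0.3, 0.9, 0.5],
         "host-C": [0.8, 0.4, 0.1]}
weights = [0.5, 0.3, 0.2]        # relative importance of each criterion
maximize = [True, True, False]   # load is a cost criterion
ranking = promethee_rank(hosts, weights, maximize)
```

In a distributed setting, each agent would run such a ranking locally over only the hosts it knows about, which is what makes the decomposition into independent per-node tasks possible.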


International Conference on Malicious and Unwanted Software | 2008

Sybil attacks as a mitigation strategy against the Storm botnet

Carlton R. Davis; José M. Fernandez; Stephen W. Neville; John McHugh

The Storm botnet is one of the most sophisticated botnets active today, used for a variety of illicit activities. A key requirement for these activities is the ability of the botnet operators to transmit commands to the bots, or at least to the various segmented portions of the botnet. Disrupting these command and control (C&C) channels therefore becomes an attractive avenue for reducing the botnet's effectiveness and efficiency. Since the command and control infrastructure of Storm is based on peer-to-peer (P2P) networks, previous work has explored the use of index poisoning, a disruption method developed for file-sharing P2P networks, where the network is inundated with false information about the location of files. In contrast, in this paper we explore the feasibility of Sybil attacks as a mitigation strategy against Storm. The aim here is to infiltrate the botnet with a large number of fake nodes (sybils) that seek to disrupt the communication between the bots by inserting themselves in the peer lists of "regular" bots, and eventually re-route or disrupt "real" C&C traffic. An important difference with index poisoning attacks is that sybil nodes must remain active and participate in the underlying P2P protocols in order to remain in the peer lists of regular bot nodes. However, they do not have to respond to the botmaster's commands or participate in illicit activities. First, we outline a methodology for mounting practical Sybil attacks on the Storm botnet. Then, we describe our simulation studies, which provide some insights regarding the number of sybils necessary to achieve the desired level of disruption, with respect to the net growth rate of the botnet. We also explore how certain parameters, such as the duration of the Sybil attack, and botnet design choices, such as the size of a bot's peer list, affect the effectiveness of the attack.
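The core mechanism described above, sybils displacing regular entries in bounded peer lists, can be sketched with a toy model. This is an illustrative simplification under assumed dynamics (sybils announce themselves to random bots and, once inserted, stay resident because they keep answering protocol traffic); it is not the paper's Overnet-level simulation, and all sizes and rates are made up.

```python
import random

# Toy model of sybil infiltration of a P2P botnet's peer lists.
# Assumption: each bot keeps a bounded peer list; a sybil that gets
# inserted is never evicted because it stays protocol-active.

def simulate_sybil_infiltration(n_bots, n_sybils, list_size, rounds, seed=0):
    rng = random.Random(seed)
    bots = list(range(n_bots))
    sybils = list(range(n_bots, n_bots + n_sybils))
    # Initial peer lists contain only regular bots.
    peers = {b: rng.sample([x for x in bots if x != b], list_size)
             for b in bots}

    for _ in range(rounds):
        for s in sybils:
            b = rng.choice(bots)            # sybil announces itself to a bot
            if s not in peers[b]:
                # Displace a random regular peer; sybils remain resident.
                regular = [p for p in peers[b] if p < n_bots]
                if regular:
                    peers[b].remove(rng.choice(regular))
                    peers[b].append(s)

    sybil_entries = sum(1 for b in bots for p in peers[b] if p >= n_bots)
    return sybil_entries / (n_bots * list_size)  # fraction of poisoned entries

frac = simulate_sybil_infiltration(n_bots=200, n_sybils=50, list_size=10,
                                   rounds=30)
```

Because resident sybils are never displaced in this model, the poisoned fraction grows monotonically with attack duration, which mirrors the paper's interest in how attack duration and peer-list size affect disruption.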


European Symposium on Research in Computer Security | 2008

Structured Peer-to-Peer Overlay Networks: Ideal Botnets Command and Control Infrastructures?

Carlton R. Davis; Stephen W. Neville; José M. Fernandez; Jean-Marc Robert; John McHugh

Botnets, in particular the Storm botnet, have been garnering much attention as vehicles for Internet crime. Storm uses a modified version of Overnet, a structured peer-to-peer (P2P) overlay network protocol, to build its command and control (C&C) infrastructure. In this study, we use simulation to determine whether there are any significant advantages or disadvantages to employing structured P2P overlay networks for botnet C&C, in comparison to using unstructured P2P networks or other complex network models. First, we identify some key measures to assess the C&C performance of such infrastructures, and employ these measures to evaluate Overnet, Gnutella (a popular, unstructured P2P overlay network), the Erdős–Rényi random graph model and the Barabási–Albert scale-free network model. Further, we consider the three following disinfection strategies: a) a random strategy that, with effort, can remove randomly selected bots and uses no knowledge of the C&C infrastructure; b) a tree-like strategy where local information obtained from a disinfected bot (e.g. its peer list) is used to more precisely disinfect new machines; and c) a global strategy, where global information, such as the degree of connectivity of bots within the C&C infrastructure, is used to target bots whose disinfection will have maximum impact. Our study reveals that while Overnet is less robust to random node failures or disinfections than the other infrastructures modelled, it outperforms them in terms of resilience against the targeted disinfection strategies introduced above. In that sense, Storm's designers seem to have made a prudent choice! This work underlines the need to better understand how P2P networks are used, and can be used, within the botnet context, this domain being quite distinct from their more commonplace usages.
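The contrast between the random and the degree-targeted ("global") disinfection strategies can be demonstrated on one of the reference models the study evaluates. The sketch below builds a simplified Barabási–Albert-style scale-free graph and measures the surviving largest connected component under each strategy; the generator, sizes, and removal budget are illustrative assumptions, not the paper's experimental setup.

```python
import random
from collections import defaultdict, deque

# Illustrative comparison of random vs. degree-targeted disinfection on a
# simplified Barabási–Albert-style preferential-attachment graph.

def ba_graph(n, m, seed=0):
    rng = random.Random(seed)
    adj = defaultdict(set)
    targets = list(range(m))   # each new node attaches to m existing nodes
    repeated = []              # node list weighted by degree
    for v in range(m, n):
        for t in set(targets):
            adj[v].add(t); adj[t].add(v)
        repeated.extend(targets); repeated.extend([v] * m)
        targets = [rng.choice(repeated) for _ in range(m)]  # preferential
    return adj

def largest_component(adj, removed):
    best, seen = 0, set(removed)
    for s in adj:
        if s in seen:
            continue
        size, q = 0, deque([s]); seen.add(s)
        while q:
            u = q.popleft(); size += 1
            for w in adj[u]:
                if w not in seen:
                    seen.add(w); q.append(w)
        best = max(best, size)
    return best

def disinfect(adj, k, targeted, seed=0):
    rng = random.Random(seed)
    by_degree = sorted(adj, key=lambda u: len(adj[u]), reverse=True)
    removed = by_degree[:k] if targeted else rng.sample(list(adj), k)
    return largest_component(adj, removed)

g = ba_graph(500, 3)
random_lcc = disinfect(g, 50, targeted=False)
targeted_lcc = disinfect(g, 50, targeted=True)  # hub removal fragments more
```

On scale-free graphs, removing hubs fragments the network far more than removing the same number of random nodes, which is the well-known robustness asymmetry that the paper's global strategy exploits.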


International Conference on Cloud Computing | 2012

Maitland: Lighter-Weight VM Introspection to Support Cyber-security in the Cloud

Chris Benninger; Stephen W. Neville; Yagiz Onat Yazir; Chris Matthews; Yvonne Coady

Despite defensive advances, malicious software (malware) remains an ever-present cyber-security threat. Cloud environments are far from malware immune, in that: i) they innately support the execution of remotely supplied code, and ii) escaping their virtual machine (VM) confines has proven relatively easy to achieve in practice. The growing interest in clouds by industries and governments is also creating a core need to be able to formally address cloud security and privacy issues. VM introspection provides one of the core cyber-security tools for analyzing the run-time behaviors of code. Traditionally, introspection approaches have required close integration with the underlying hypervisors and substantial re-engineering when OS updates and patches are applied. Such heavy-weight introspection techniques, therefore, are too invasive to fit well within modern commercial clouds. Instead, lighter-weight introspection techniques are required that provide the same levels of within-VM observability but without the tight hypervisor and OS patch-level integration. This work introduces Maitland, a prototype proof-of-concept implementation of a lighter-weight introspection tool that exploits paravirtualization to meet these end-goals. The work assesses Maitland's performance, highlights its use to perform packer-independent malware detection, and assesses whether, with further optimizations, Maitland could provide a viable approach for introspection in commercial clouds.


Conference on Communication Networks and Services Research | 2008

Secret Key Extraction in Ultra Wideband Channels for Unsynchronized Radios

Masoud Ghoreishi Madiseh; Michael McGuire; Stephen W. Neville; Ali Asghar Beheshti Shirazi

Secure communication schemes for UWB, based on cryptographic keys generated from channel measurements without a trusted third party, have been developed. The fine time resolution of UWB allows high levels of mutual information to be obtained by two parties, A and B, through independent characterizations of their shared communication channel. This mutual information determines the maximum secret key rate available to A and B. Since UWB channel gains change drastically with small antenna movements, it is inherently difficult for eavesdroppers to obtain channel measurements and reduce the secret key rate. In essence, UWB can provide spatially and temporally specific secret keys. Upper bounds on the secret key rate for standard UWB channels are calculated. It is demonstrated that high secret key generation rates are possible, and that these key rates can be achieved over a wide range of signal-to-noise ratios and channel synchronization errors.


International Conference on Communications | 2009

Verification of Secret Key Generation from UWB Channel Observations

M. Ghoreishi Madiseh; S. He; Michael McGuire; Stephen W. Neville; Xiaodai Dong

Theoretical models of ultrawideband (UWB) radio channels indicate that pairs of UWB radio transceivers measure their common radio channel with a high degree of agreement, while third parties are not able to accurately estimate the state of the common channel. These properties allow the generation of secret keys, to support secure communications, from UWB channel measurements. In this paper, the results of UWB propagation studies are presented that validate the properties required to support secret key generation in a typical indoor environment. Key generation algorithms are applied to the measured data, and key lengths on the order of thousands of bits are obtained, capable of supporting the most popular cryptographic systems. The paper also reports measurements of the spatial and temporal correlation of the UWB channel, from which the relative privacy of the secret keys can be determined, as well as the rate at which new secret keys may be generated.
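The basic key-extraction idea behind these channel-reciprocity studies can be sketched numerically. In the toy model below, Gaussian gains stand in for real UWB channel measurements, the noise levels are assumed values, and a simple median-threshold quantizer replaces the papers' key generation algorithms; the point is only to show why Alice's and Bob's bit strings agree closely while a spatially separated eavesdropper's do not.

```python
import random

# Toy sketch of channel-reciprocity key extraction. Assumptions: Alice and
# Bob observe the same channel gains plus small independent measurement
# noise; Eve, being spatially separated, observes a decorrelated channel.

def quantize(samples):
    # One bit per sample: above/below the median of the observation vector.
    med = sorted(samples)[len(samples) // 2]
    return [1 if s > med else 0 for s in samples]

def agreement(bits_a, bits_b):
    return sum(a == b for a, b in zip(bits_a, bits_b)) / len(bits_a)

rng = random.Random(42)
n = 1000
channel = [rng.gauss(0, 1) for _ in range(n)]      # shared reciprocal channel
alice = [h + rng.gauss(0, 0.1) for h in channel]   # low measurement noise
bob = [h + rng.gauss(0, 0.1) for h in channel]
eve = [rng.gauss(0, 1) for _ in range(n)]          # decorrelated observer

ab = agreement(quantize(alice), quantize(bob))  # high: usable after reconciliation
ae = agreement(quantize(alice), quantize(eve))  # near 0.5: no useful leakage
```

The residual Alice-Bob disagreements (samples near the quantizer threshold) are what information-reconciliation steps in practical schemes are designed to correct.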


IEEE Transactions on Information Forensics and Security | 2012

Applying Beamforming to Address Temporal Correlation in Wireless Channel Characterization-Based Secret Key Generation

Masoud Ghoreishi Madiseh; Stephen W. Neville; Michael McGuire

Wireless secret key generation (WKG) methods based on communications channel characterizations have gained significant interest as a mechanism for providing secure point-to-point communications. Ultra-wide band (UWB) WKG solutions provide an advantage over narrow- and wide-band approaches in that they can generate secure keys even when Alice and Bob, the two communicating parties, and all objects within the environment in which they exist, are stationary (i.e., in scenarios involving zero movement). This creates a secondary problem in that the high temporal correlations of such environments can lead to successive WKG processes producing identical or nearly identical secret keys, whereas the ideal is that all generated keys from successive WKG processes should be independent. This work shows that one method available to address this problem is to move to multiple-input multiple-output (MIMO)-based antenna systems and the use of random beamforming. The security of the resulting approach is assessed with respect to the worst-case scenario whereby the eavesdropper, Eve, is assumed to possess perfect knowledge of Alice and Bob's MIMO channel and is able to estimate, with some degree of error, the coefficients of Alice and Bob's independent random beamformers.


Advanced Information Networking and Applications | 2011

STARS: A Framework for Statistically Rigorous Simulation-Based Network Research

Eamon Millman; Deepali Arora; Stephen W. Neville

Simulation has become one of the dominant tools in wired and wireless network research. With the advent of cloud, grid, and cluster computing, it has become feasible to use parallelization to perform richer, larger-scale simulations. Moreover, the computing resources needed to perform statistically rigorous simulations are now easily obtainable. Although a number of parallel network simulation frameworks exist, the issue of statistically rigorous testing has largely not been addressed. This work presents a parallel, MPI-aware network simulation framework that is specifically designed to provide automated support for statistically rigorous experimentation, thereby offloading this significant researcher burden. Unlike prior frameworks, the proposed framework includes a distribution-free statistical analysis feedback loop that automatically deduces the next set of experiments that need to be run. The value of this new framework is highlighted by exploring the well-known issue of assessing the true duration of start-up transients within mobile ad hoc network (MANET) simulations.
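The kind of feedback loop such a framework automates can be sketched as a sequential stopping rule: keep launching replications until the confidence interval on the metric of interest is tight enough. Note the paper's analysis is distribution-free; the normal-approximation interval and the stand-in "simulation" below are simplifying assumptions made purely to keep the sketch short.

```python
import random
import statistics

# Simplified sketch of an automated replication feedback loop: run
# simulation replications until the confidence-interval half-width of the
# metric falls below a tolerance. (A normal-approximation CI is assumed
# here; the paper's framework uses distribution-free statistics.)

def run_replication(rng):
    # Stand-in for one network-simulation run returning a scalar metric,
    # e.g. mean packet delay. A real framework would launch a simulator.
    return 5.0 + rng.gauss(0, 1.0)

def replicate_until_precise(tol, z=1.96, min_runs=10, max_runs=10_000, seed=7):
    rng = random.Random(seed)
    results = [run_replication(rng) for _ in range(min_runs)]
    while True:
        half_width = z * statistics.stdev(results) / len(results) ** 0.5
        if half_width <= tol or len(results) >= max_runs:
            break
        results.append(run_replication(rng))  # schedule one more experiment
    return statistics.mean(results), half_width, len(results)

mean, hw, n_runs = replicate_until_precise(tol=0.2)
```

In a parallel, MPI-aware setting the loop would dispatch whole batches of replications across workers rather than one run at a time, but the stopping logic is the same.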


International Conference on Malicious and Unwanted Software | 2009

Optimising sybil attacks against P2P-based botnets

Carlton R. Davis; José M. Fernandez; Stephen W. Neville

Addressing and mitigating modern global-scale botnets is a pressing Internet security issue, particularly given that these botnets are known to provide attackers with the large-scale, low-cost computing infrastructure required to engage in major spam campaigns, large-scale phishing attacks, etc. Over time, botnets have evolved toward using decentralized peer-to-peer (P2P) command and control (C&C) infrastructures in order to increase their resilience against defender countermeasures, e.g. as seen in Storm's use of Overnet and, more recently, in the appearance of HTTP-tunneled P2P botnets such as Waledac and Conficker. The obvious question is: what are effective countermeasures against these modern botnets? This work focuses on evaluating, via simulation, sybil attack-based countermeasures and how such sybil-based strategies should be tailored to make them both effective and implementable on global scales. Slower-rate sybil infection strategies with random placement of sybils are shown to be nearly as effective as higher-rate infection strategies with targeted placement. This somewhat counter-intuitive result is important, as the former strategy is easier to implement by a loosely coordinated collective of globally scattered defenders.


Advanced Information Networking and Applications | 2012

Assessing the Expected Performance of the OLSR Routing Protocol for Denser Urban Core Ad Hoc Network Deployments

Deepali Arora; Eamon Millman; Stephen W. Neville

The wide-scale adoption of smart phones has begun to provide pragmatic real-world deployment environments for mobile ad hoc networks (e.g., as peer-to-peer game platforms, for emergency services, etc.). Such deployments are likely to occur within urban cores, where device densities would easily exceed those that have traditionally been studied. Moreover, the quality of the resulting solutions will innately rest on the capabilities of the underlying routing protocols. Of current protocols, the OLSR proactive routing protocol makes the strongest arguments regarding its suitability to such larger, denser network environments. This work tests OLSR's true suitability by analyzing its performance within a 360-node network existing within a standard 1 km × 1.5 km communications area (i.e., a network with approximately 3× the node densities typically studied). It is shown that OLSR largely fails for such denser networks, with these failures arising from OLSR's underlying presumption that routing table updates should occur relatively infrequently. This limitation within OLSR has not been previously reported, and this work highlights the reasons why these issues were likely not observed in prior OLSR studies.

Collaboration


Dive into Stephen W. Neville's collaboration.

Top Co-Authors


Kin Fun Li

University of Victoria
