Steven Hand
University of Cambridge
Publications
Featured research published by Steven Hand.
symposium on operating systems principles | 2003
Paul Barham; Boris Dragovic; Keir Fraser; Steven Hand; Tim Harris; Alex Ho; Rolf Neugebauer; Ian Pratt; Andrew Warfield
Numerous systems have been designed which use virtualization to subdivide the ample resources of a modern computer. Some require specialized hardware, or cannot support commodity operating systems. Some target 100% binary compatibility at the expense of performance. Others sacrifice security or functionality for speed. Few offer resource isolation or performance guarantees; most provide only best-effort provisioning, risking denial of service. This paper presents Xen, an x86 virtual machine monitor which allows multiple commodity operating systems to share conventional hardware in a safe and resource-managed fashion, but without sacrificing either performance or functionality. This is achieved by providing an idealized virtual machine abstraction to which operating systems such as Linux, BSD, and Windows XP can be ported with minimal effort. Our design is targeted at hosting up to 100 virtual machine instances simultaneously on a modern server. The virtualization approach taken by Xen is extremely efficient: we allow operating systems such as Linux and Windows XP to be hosted simultaneously for a negligible performance overhead (at most a few percent compared with the unvirtualized case). We considerably outperform competing commercial and freely available solutions in a range of microbenchmarks and system-wide tests.
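The key mechanism here is paravirtualization: rather than transparently trapping privileged instructions, the hypervisor exposes an idealized interface that ported guests call explicitly, letting it validate each request. The Python toy below illustrates that shape only; every class and hypercall name is invented for illustration and is not Xen's real ABI.

```python
# Toy sketch of the paravirtualization idea: a ported guest OS calls an
# explicit, idealized hypercall interface instead of relying on the
# hypervisor to trap privileged instructions. All names are hypothetical.

class Hypervisor:
    def __init__(self, total_pages):
        self.free_pages = list(range(total_pages))
        self.page_owner = {}              # machine page -> guest id

    def hypercall_alloc_page(self, guest_id):
        """Guest explicitly asks the hypervisor for a machine page."""
        page = self.free_pages.pop()
        self.page_owner[page] = guest_id
        return page

    def hypercall_update_page_table(self, guest_id, page, entry):
        """Validated update: a guest may only map pages it owns."""
        if self.page_owner.get(page) != guest_id:
            raise PermissionError("guest tried to map a page it does not own")
        return entry                      # ...install in the real page table...

class ParavirtGuest:
    """A 'ported' OS: its memory-management code calls hypercalls directly."""
    def __init__(self, guest_id, hv):
        self.guest_id, self.hv = guest_id, hv

    def map_new_page(self):
        page = self.hv.hypercall_alloc_page(self.guest_id)
        return self.hv.hypercall_update_page_table(self.guest_id, page, ("rw", page))

hv = Hypervisor(total_pages=1024)
guest = ParavirtGuest(guest_id=1, hv=hv)
print(guest.map_new_page())
```

The point of the validation step is that safety is enforced at the hypercall boundary, which is what lets unmodified application binaries run over a lightly ported kernel at near-native speed.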
european conference on computer systems | 2006
Alex Ho; Michael A. Fetterman; Christopher Clark; Andrew Warfield; Steven Hand
Many software attacks are based on injecting malicious code into a target host. This paper demonstrates the use of a well-known technique, data tainting, to track data received from the network as it propagates through a system and to prevent its execution. Unlike past approaches to taint tracking, which track tainted data by running the system completely in an emulator or simulator, incurring considerable execution overhead, our work demonstrates the ability to dynamically switch a running system between virtualized and emulated execution. Using this technique, we are able to explore hardware support for taint-based protection that is deployable in real-world situations, as emulation is only used when tainted data is being processed by the CPU. By modifying the CPU, memory, and I/O devices to support taint tracking and protection, we guarantee that data received from the network may not be executed, even if it is first written to disk and later read back. We demonstrate near-native speeds for workloads where little tainted data is present.
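The core mechanism is a taint bit shadowing every byte: data arriving from the network is marked, the mark propagates through copies, and execution of marked bytes is refused. Below is a minimal illustrative model of that rule, not the paper's hardware/emulator design.

```python
# Minimal sketch of network-taint tracking: one shadow taint bit per byte,
# taint follows data through copies, and executing tainted bytes is refused.

class TaintedMachine:
    def __init__(self, size):
        self.mem = bytearray(size)
        self.taint = [False] * size       # shadow taint state

    def recv_from_network(self, addr, data):
        self.mem[addr:addr + len(data)] = data
        for i in range(addr, addr + len(data)):
            self.taint[i] = True          # everything from the NIC is tainted

    def copy(self, dst, src, n):
        self.mem[dst:dst + n] = self.mem[src:src + n]
        self.taint[dst:dst + n] = self.taint[src:src + n]  # taint follows data

    def execute(self, addr, n):
        if any(self.taint[addr:addr + n]):
            raise PermissionError("refusing to execute tainted bytes")
        print("executing", self.mem[addr:addr + n].hex())

m = TaintedMachine(64)
m.recv_from_network(0, b"\x90\x90")       # attacker-supplied bytes
m.copy(32, 0, 2)                          # taint survives the copy
try:
    m.execute(32, 2)
except PermissionError as e:
    print(e)
```

The paper's contribution is making this affordable: the system runs virtualized at full speed and drops into emulation only while the CPU is actually touching tainted data.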
architectural support for programming languages and operating systems | 2013
Anil Madhavapeddy; Richard Mortier; Charalampos Rotsos; David J. Scott; Balraj Singh; Thomas Gazagnaire; Steven Smith; Steven Hand; Jon Crowcroft
We present unikernels, a new approach to deploying cloud services via applications written in high-level source code. Unikernels are single-purpose appliances that are compile-time specialised into standalone kernels, and sealed against modification when deployed to a cloud platform. In return they offer significant reductions in image size, improved efficiency and security, and should reduce operational costs. Our Mirage prototype compiles OCaml code into unikernels that run on commodity clouds and offer an order-of-magnitude reduction in code size without significant performance penalty. The architecture combines static type-safety with a single address-space layout that can be made immutable via a hypervisor extension. Mirage contributes a suite of type-safe protocol libraries, and our results demonstrate that the hypervisor is a platform that overcomes the hardware compatibility issues that have made past library operating systems impractical to deploy in the real world.
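The build-time idea is that a unikernel links only the library-OS components the application actually reaches, then seals the result. The rough sketch below illustrates that specialisation step with invented component names and sizes; Mirage does this for real via the OCaml compiler and dead-code elimination.

```python
# Illustrative sketch of unikernel specialisation: instead of layering an
# app on a general-purpose OS image, the build links only the components
# the application uses into a single sealed image. Names/sizes are invented.

LIBRARY_OS = {                  # hypothetical library-OS components (KB)
    "tcpip": 300, "tls": 450, "http": 120, "block": 80,
    "fs": 200, "dns": 60, "shell": 500, "printing": 700,
}

def build_unikernel(app_name, deps):
    """Compile-time specialisation: include only reachable components."""
    size = 50                            # the application code itself
    for dep in deps:
        size += LIBRARY_OS[dep]
    return {"app": app_name,
            "size_kb": size,
            "sealed": True}              # immutable once deployed

web = build_unikernel("static-site", ["tcpip", "tls", "http"])
print(web)   # a single-purpose appliance, far smaller than a full OS image
```

Sealing is what turns the size reduction into a security property: with no shell, no loader, and an immutable address space, the deployed appliance has a much smaller attack surface than a general-purpose VM.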
international conference on autonomic computing | 2009
Evangelia Kalyvianaki; Themistoklis Charalambous; Steven Hand
Data center virtualization allows cost-effective server consolidation, which can increase system throughput and reduce power consumption. Resource management of virtualized servers is an important and challenging task, especially when dealing with fluctuating workloads and complex multi-tier server applications. Recent results in control-theoretic resource management have shown the potential benefits of adjusting allocations to match changing workloads. This paper presents a new resource management scheme that integrates the Kalman filter into feedback controllers to dynamically allocate CPU resources to virtual machines hosting server applications. The novelty of our approach is the use of the Kalman filter (the optimal filtering technique for state estimation in the sum-of-squares sense) to track CPU utilization and update the allocations accordingly. Our basic controllers continuously detect and self-adapt to unforeseen workload intensity changes. Our more advanced controller self-configures to any workload condition without any a priori information; it performs within 4.8% of workload-aware controllers under high-intensity workload changes, and equally well under medium-intensity traffic. In addition, our controllers are enhanced to deal with multi-tier server applications: by using the pairwise resource coupling between application components, they improve server performance by 3% on average when facing large unexpected workload increases, compared to controllers with no such resource-coupling mechanism. We evaluate our techniques by controlling a 3-tier RUBiS benchmark website deployed on a prototype Xen-virtualized cluster.
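The control loop at the heart of the scheme is simple to sketch: a scalar Kalman filter smooths each VM's observed CPU utilization, and the allocation tracks the estimate plus headroom. The gains, noise variances, and headroom factor below are illustrative assumptions, not the paper's tuned values.

```python
# Hedged sketch of Kalman-filter-driven CPU allocation: the filter tracks
# utilisation modelled as a random walk, and the controller allocates the
# estimate plus headroom. All parameters are illustrative.

class KalmanCpuController:
    def __init__(self, q=4.0, r=1.0, headroom=1.2):
        self.x = 50.0                # estimated utilisation (%)
        self.p = 10.0                # estimate variance
        self.q, self.r = q, r        # process / measurement noise variances
        self.headroom = headroom     # allocate above the estimated usage

    def step(self, measured_util):
        # Predict: utilisation modelled as a random walk.
        self.p += self.q
        # Update: blend prediction and measurement via the Kalman gain.
        k = self.p / (self.p + self.r)
        self.x += k * (measured_util - self.x)
        self.p *= (1 - k)
        # Control: next allocation tracks the estimate with headroom.
        return min(100.0, self.headroom * self.x)

ctl = KalmanCpuController()
for util in [48, 52, 70, 71, 69, 30, 28]:     # fluctuating workload
    print(f"measured {util:3d}% -> allocate {ctl.step(util):5.1f}%")
```

The self-configuring controller described in the abstract would, in addition, estimate the noise variances q and r online rather than taking them as givens.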
acm special interest group on data communication | 2003
Jon Crowcroft; Steven Hand; Richard Mortier; Timothy Roscoe; Andrew Warfield
It is widely accepted that the current Internet architecture is insufficient for the future: problems such as address space scarcity, mobility and non-universal connectivity are already with us, and stand to be exacerbated by the explosion of wireless, ad-hoc and sensor networks. Furthermore, it is far from clear that the ubiquitous use of standard transport and name resolution protocols will remain practicable or even desirable. In this paper we propose Plutarch, a new inter-networking architecture. It subsumes existing architectures such as that determined by the Internet Protocol suite, but makes explicit the heterogeneity that contemporary inter-networking schemes attempt to mask. To handle this heterogeneity, we introduce the notions of context and interstitial function, and describe a supporting architecture. We discuss the benefits, present some potential scenarios, and consider the research challenges posed.
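The two notions can be made concrete with a toy: a context is a region of the network that is internally homogeneous (addressing, transport, naming), and an interstitial function bridges a pair of contexts by translating whatever differs between them. Everything below is an invented illustration of the vocabulary, not Plutarch's actual interfaces.

```python
# Conceptual sketch of Plutarch's contexts and interstitial functions.
# The address-translation rule is deliberately arbitrary: the point is the
# explicit bridge at the boundary, not the particular mapping.

class Context:
    def __init__(self, name, addr_scheme):
        self.name, self.addr_scheme = name, addr_scheme

def interstitial_ipv4_to_sensor(packet):
    """Hypothetical bridge: rewrite an IPv4-style address into a sensor-net id."""
    dst = packet["dst"]                        # e.g. "10.0.0.7"
    return dict(packet, dst=("node", sum(int(o) for o in dst.split("."))))

internet = Context("internet", "ipv4")
sensornet = Context("sensornet", "node-id")
bridges = {(internet.name, sensornet.name): interstitial_ipv4_to_sensor}

def deliver(packet, src_ctx, dst_ctx):
    if src_ctx.name != dst_ctx.name:           # crossing contexts: translate
        packet = bridges[(src_ctx.name, dst_ctx.name)](packet)
    return packet

print(deliver({"dst": "10.0.0.7", "payload": b"hi"}, internet, sensornet))
```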
international workshop on peer to peer systems | 2002
Steven Hand; Timothy Roscoe
We present the design of Mnemosyne, a peer-to-peer steganographic storage service. Mnemosyne provides a high level of privacy and plausible deniability by using a large amount of shared distributed storage to hide data. Blocks are dispersed by secure hashing, and loss codes are used for resiliency. We discuss the design of the system, and the challenges posed by traffic analysis.
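The placement scheme is easy to sketch: block locations are derived by hashing a key together with the block index, so without the key the stored data is indistinguishable from the random blocks filling the rest of the substrate. In the sketch below, plain replication stands in for the loss coding the paper uses for resiliency, and all parameters are illustrative.

```python
# Minimal sketch of steganographic block placement: key-derived pseudo-random
# locations, with replication standing in for loss codes.

import hashlib, os

N = 1 << 16                      # substrate size in blocks
store = {}                       # sparse stand-in for blocks of random data

def locations(key: bytes, index: int, replicas: int = 3):
    """Pseudo-random, key-derived block locations via secure hashing."""
    for r in range(replicas):
        h = hashlib.sha256(key + index.to_bytes(4, "big") + bytes([r]))
        yield int.from_bytes(h.digest()[:4], "big") % N

def put(key, index, block):
    for loc in locations(key, index):
        store[loc] = block        # replicas tolerate loss or overwriting

def get(key, index):
    for loc in locations(key, index):
        if loc in store:          # any surviving replica suffices
            return store[loc]
    raise KeyError("all replicas lost")

key = os.urandom(16)
put(key, 0, b"covert block")
del store[next(locations(key, 0))]        # lose one replica
print(get(key, 0))                        # still recoverable
```

Because writers may silently overwrite each other's blocks, resiliency coding (rather than the toy replication above) is what makes the shared substrate usable in practice.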
acm special interest group on data communication | 2003
Jon Crowcroft; Steven Hand; Richard Mortier; Timothy Roscoe; Andrew Warfield
Quality of Service (QoS) has been touted as a technological requirement for many different networks at many different times. However, very few (if any) schemes for providing it have ever been successful, despite a huge amount of research in the area of QoS provision. In this position paper we analyze some of the reasons why so many QoS mechanisms have failed to be widely deployed. We suggest two factors in this failure: the timeliness of QoS mechanisms (they rarely arrive when they are needed), and the inherent contradiction of layering QoS mechanisms over a best-effort network. We also give some thoughts on how future QoS research might increase its chances of successful deployment by better positioning itself relative to other developments in networking.
IEEE Conference on Open Architectures and Network Programming | 2003
Steven Hand; Tim Harris; Evangelos Kotsovinos; Ian Pratt
This paper presents the design of the XenoServer Open Platform: a public infrastructure for wide-area computing, capable of hosting tasks that span the full spectrum of distributed programming. The platform integrates resource management, charging and auditing. We emphasize the control-plane aspects of the system, showing how it supports service deployment with a low cost of entry and how it forms a substrate over which other distributed computing platforms can be deployed.
european conference on computer systems | 2009
Amitabha Roy; Steven Hand; Tim Harris
The advent of multi-core processors means that exploiting parallelism is key to increasing the performance of programs. Many researchers have studied the use of atomic blocks as a way to simplify the construction of scalable parallel programs. However, there is a large body of existing lock-based code, and typically it is incorrect to simply replace lock-based critical sections with atomic blocks. Problems include the need to do I/O within critical sections; the use of primitives such as condition variables; and reliance on underlying lock properties such as fairness or priority inheritance. In this paper we investigate an alternative: a software runtime system that allows threads to speculatively execute lock-based critical sections in parallel. Execution proceeds optimistically, dynamically detecting conflicts between accesses by concurrent threads. However, if there are frequent conflicts, or if there are attempts to perform operations that cannot be done speculatively, then execution can fall back to acquiring a lock. Conversely, implementations of atomic blocks must typically serialise all operations that cannot be performed speculatively. Our runtime system has been designed with the requirements of systems code in mind: in particular it does not require that programs be written in type-safe languages, nor does it require any form of garbage collection. Furthermore, we never require a thread holding a lock to wait for a thread that has speculatively acquired it. This lets us retain any useful underlying properties of a given lock implementation, e.g. fairness or priority inheritance.
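The execution model can be sketched as optimistic attempts with a version check at commit, falling back to real lock acquisition on conflict or on operations that cannot be done speculatively. This single-lock Python toy captures the control flow only; per the abstract, the actual runtime targets systems code without type-safe languages or garbage collection.

```python
# Toy model of speculative execution of a lock-based critical section:
# buffer writes, validate a version number at commit, retry on conflict,
# and fall back to pessimistic locking when speculation cannot proceed.

import threading

class SpeculativeLock:
    def __init__(self):
        self.lock = threading.Lock()
        self.version = 0                  # bumped whenever writes commit

    def _run_locked(self, section, shared):
        with self.lock:                   # pessimistic fallback path
            writes = {}
            section(shared, writes)
            shared.update(writes)
            self.version += 1

    def run(self, section, shared):
        for _ in range(3):                # a few optimistic attempts
            start = self.version
            writes = {}
            try:
                section(shared, writes)   # reads shared, buffers writes
            except RuntimeError:          # unsafe op (e.g. I/O): give up
                break
            with self.lock:               # short commit-time critical section
                if self.version == start: # no conflicting commit since start
                    shared.update(writes)
                    self.version += 1
                    return
        self._run_locked(section, shared) # fall back to the real lock

def increment(shared, writes):
    writes["n"] = writes.get("n", shared.get("n", 0)) + 1

sl, data = SpeculativeLock(), {"n": 0}
threads = [threading.Thread(target=sl.run, args=(increment, data)) for _ in range(8)]
for t in threads: t.start()
for t in threads: t.join()
print(data["n"])    # 8: conflicts were detected and retried or serialised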
database and expert systems applications | 2003
Boris Dragovic; Evangelos Kotsovinos; Steven Hand; Peter R. Pietzuch
This paper describes XenoTrust, the trust management architecture used in the XenoServer Open Platform: a public infrastructure for wide-area computing, capable of hosting tasks that span the full spectrum of distributed paradigms. We suggest that using an event-based publish/subscribe methodology for the storage, retrieval, and aggregation of reputation information can help exploit asynchrony and simplicity, as well as improve scalability.
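The publish/subscribe flavour is straightforward to sketch: raters publish reputation statements as events, and subscribers register aggregation rules that fire asynchronously as statements arrive. The broker API and the mean-based rule below are assumptions made for illustration, not XenoTrust's actual schema.

```python
# Sketch of event-based reputation aggregation: publishers push statements,
# the broker recomputes an aggregate and notifies subscribers (push, not poll).

from collections import defaultdict

class ReputationBroker:
    def __init__(self):
        self.subscribers = defaultdict(list)   # subject -> callbacks
        self.scores = defaultdict(list)        # subject -> published ratings

    def subscribe(self, subject, callback):
        self.subscribers[subject].append(callback)

    def publish(self, rater, subject, score):
        self.scores[subject].append((rater, score))
        for cb in self.subscribers[subject]:
            cb(subject, self.aggregate(subject))   # fire on each event

    def aggregate(self, subject):
        ratings = [s for _, s in self.scores[subject]]
        return sum(ratings) / len(ratings)     # simple mean as the rule

broker = ReputationBroker()
broker.subscribe("server-42", lambda s, rep: print(f"{s}: reputation {rep:.2f}"))
broker.publish("alice", "server-42", 0.9)
broker.publish("bob",   "server-42", 0.5)      # callback fires again
```

Pushing updates to subscribers rather than having clients poll is what buys the asynchrony and scalability the abstract argues for.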