Richard G. Clegg
University College London
Publications
Featured research published by Richard G. Clegg.
International IFIP TC6 Networking Conference | 2011
Ioannis Psaras; Richard G. Clegg; Raul Landa; Wei Koong Chai; George Pavlou
Networking Named Content (NNC) was recently proposed as a new networking paradigm to realise Content Centric Networks (CCNs). The new paradigm changes much about the current Internet, from security and content naming and resolution, to caching at routers, and new flow models. In this paper, we study the caching part of the proposed networking paradigm in isolation from the rest of the suggested features. In CCNs, every router caches packets of content and reuses those that are still in the cache, when subsequently requested. It is this caching feature of CCNs that we model and evaluate in this paper. Our modelling proceeds both analytically and by simulation. Initially, we develop a mathematical model for a single router, based on continuous time Markov-chains, which assesses the proportion of time a given piece of content is cached. This model is extended to multiple routers with some simple approximations. The mathematical model is complemented by simulations which look at the caching dynamics, at the packet-level, in isolation from the rest of the flow.
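The single-router model can be illustrated with a toy two-state continuous-time Markov chain: a given piece of content is either cached or not, entering the cache at the request rate and leaving at an eviction rate, giving a steady-state cached fraction of λ/(λ+μ). The sketch below simulates that chain to estimate the proportion of time the item is cached; the rates and function names are illustrative assumptions, not the paper's actual model.

```python
import random

def cached_fraction(lam, mu, horizon=50000.0, seed=1):
    """Estimate the long-run fraction of time one content item is cached.
    Two-state CTMC: uncached -> cached at request rate lam,
    cached -> uncached at eviction rate mu.
    The analytic steady state is lam / (lam + mu)."""
    rng = random.Random(seed)
    t, cached, time_cached = 0.0, False, 0.0
    while t < horizon:
        # Exponential holding time in the current state
        rate = mu if cached else lam
        dt = rng.expovariate(rate)
        if cached:
            time_cached += min(dt, horizon - t)
        t += dt
        cached = not cached
    return time_cached / horizon
```

With lam=1.0 and mu=3.0 the simulated fraction converges to the analytic value 1/(1+3) = 0.25, which is the kind of closed-form check the Markov-chain analysis enables.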
International Conference on Computer Communications | 2009
Raul Landa; David Griffin; Richard G. Clegg; Eleni Mykoniati; Miguel Rio
Although direct reciprocity (tit-for-tat) contribution systems have been successful in reducing freeloading in peer-to-peer overlays, it has been shown that, unless the contribution network is dense, they tend to be slow (or may even fail) to converge [1]. On the other hand, current indirect reciprocity mechanisms based on reputation systems tend to be susceptible to sybil attacks, peer slander and whitewashing. In this paper we present PledgeRoute, an accounting mechanism for peer contributions that is based on social capital. This mechanism allows peers to contribute resources to one set of peers and use this contribution to obtain services from a different set of peers, at a different time. PledgeRoute is completely decentralised, can be implemented in both structured and unstructured peer-to-peer systems, and it is resistant to the three kinds of attacks mentioned above. To achieve this, we model contribution transitivity as a routing problem in the contribution network of the peer-to-peer overlay, and we present arguments for the routing behaviour and the sybilproofness of our contribution transfer procedures on this basis. Additionally, we present mechanisms for the seeding of the contribution network, and a combination of incentive mechanisms and reciprocation policies that motivate peers to adhere to the protocol and maximise their service contributions to the overlay.
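The "contribution transitivity as routing" idea can be sketched as path-finding in a credit graph: to spend contribution at a distant peer, debit a chain of pairwise pledges along a path. The graph representation, function name, and debit/credit bookkeeping below are hypothetical simplifications, not PledgeRoute's actual procedures.

```python
from collections import deque

def transfer(credit, src, dst, amount):
    """Hypothetical sketch: move `amount` of contribution from src to dst
    by finding a path in the credit graph (edge weight = pledged credit)
    whose every hop can carry the amount, then debiting each hop and
    crediting the reverse direction."""
    parent = {src: None}
    q = deque([src])
    while q:                      # BFS for a feasible path
        u = q.popleft()
        if u == dst:
            break
        for v, c in credit.get(u, {}).items():
            if v not in parent and c >= amount:
                parent[v] = u
                q.append(v)
    if dst not in parent:
        return False              # no path with sufficient credit
    v = dst
    while parent[v] is not None:  # walk back, settling each hop
        u = parent[v]
        credit[u][v] -= amount
        credit.setdefault(v, {})[u] = credit.get(v, {}).get(u, 0) + amount
        v = u
    return True
```

For example, with pledges A→B and B→C of 5 units each, A can obtain 3 units of service from C even though C owes A nothing directly.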
Computer Communications | 2010
Richard G. Clegg; Carla Di Cairano-Gilfedder; Shi Zhou
This paper takes a critical look at the usefulness of power law models of the Internet. The twin focuses of the paper are Internet traffic and topology generation. The aim of the paper is twofold. Firstly it summarises the state of the art in power law modelling particularly giving attention to existing open research questions. Secondly it provides insight into the failings of such models and where progress needs to be made for power law research to feed through to actual improvements in network performance.
IEEE Transactions on Computers | 2013
Richard G. Clegg; Stuart Clayman; George Pavlou; Lefteris Mamatas; Alex Galis
This paper addresses the problem of provisioning management/monitoring nodes within highly dynamic network environments, particularly virtual networks. In a network, where nodes and links may be spontaneously created and destroyed (perhaps rapidly) there is a need for stable and responsive management and monitoring, which does not create a large load (in terms of traffic or processing) for the system. A subset of nodes has to be chosen for management/monitoring, each of which will manage a subset of the nodes in the network. A new, simple, and locally optimal greedy algorithm called Pressure is provided for choice of node position to minimize traffic. This algorithm is combined with a system for predicting the lifespan of nodes, and a tunable parameter is also given so that a system operator could express a preference for elected nodes to be chosen to reduce traffic, to be "stable", or some compromise between these positions. The combined algorithm called PressureTime is lightweight and could be run in a distributed manner. The resulting algorithms are tested both in simulation and in a testbed environment of virtual routers. They perform well, both at reducing traffic and at choosing long lifespan nodes.
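The traffic-minimising placement idea behind Pressure can be sketched as a greedy centrality choice: among candidate positions, pick the node whose total hop-distance to the nodes it would manage is smallest. This toy version ignores the lifespan prediction and the tunable trade-off of PressureTime; the function and graph encoding are illustrative assumptions.

```python
from collections import deque

def pressure_choice(adj, candidates):
    """Greedy sketch in the spirit of Pressure: among candidate nodes,
    pick the one with the smallest total hop-count to all reachable
    nodes, i.e. the locally traffic-minimising placement."""
    def total_distance(src):
        dist = {src: 0}
        q = deque([src])
        while q:                          # plain BFS for hop counts
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        return sum(dist.values())
    return min(candidates, key=total_distance)
```

On a star topology this correctly elects the hub, since its aggregate distance to the leaves is minimal.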
IET Communications | 2012
Richard G. Clegg; Safa Isam; Ioannis Kanaras; Izzat Darwazeh
Spectral efficiency is a key design issue for all wireless communication systems. Orthogonal frequency division multiplexing (OFDM) is a very well-known technique for efficient data transmission over many carriers overlapped in frequency. Recently, several studies have appeared that describe spectrally efficient variations of multi-carrier systems where the condition of orthogonality is dropped. Proposed techniques suffer from two weaknesses: firstly, the complexity of generating the signal is increased. Secondly, the signal detection is computationally demanding. Known methods suffer from either unusably high complexity or high error rates caused by the inter-carrier interference. This study addresses both problems by proposing new transmitter and receiver architectures whose design is based on using the simplification that a rational spectrally efficient frequency division multiplexing (SEFDM) system can be treated as a set of overlapped and interleaved OFDM systems. The efficacy of the proposed designs is shown through detailed simulation of systems with different signal types and carrier dimensions. The decoder is heuristic but in practice produces very good results that are close to the theoretical best performance in a variety of settings. The system is able to produce efficiency gains of up to 20% with negligible impact on the required signal-to-noise ratio.
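The bandwidth saving comes from compressing the sub-carrier spacing by a factor α < 1, at the cost of orthogonality. A minimal sketch of one SEFDM time-domain block is below; the function name and normalisation are illustrative assumptions, and the paper's transmitter/receiver architectures (built on the interleaved-OFDM view) are not reproduced here.

```python
import cmath

def sefdm_block(symbols, alpha, samples):
    """Sketch: one SEFDM time-domain block. Sub-carrier spacing is
    compressed by alpha (< 1), shrinking occupied bandwidth by
    (1 - alpha) -- alpha = 0.8 corresponds to the ~20% saving above.
    With alpha = 1 this reduces to ordinary OFDM (an inverse DFT)."""
    return [sum(s * cmath.exp(2j * cmath.pi * alpha * n * k / samples)
                for n, s in enumerate(symbols)) / (samples ** 0.5)
            for k in range(samples)]
```

A quick sanity check: at α = 1 a matched-filter demodulation recovers each symbol exactly, while at α < 1 the same operation picks up inter-carrier interference, which is precisely what makes SEFDM detection hard.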
IEEE Communications Magazine | 2008
Eleni Mykoniati; Raul Landa; Spiros Spirou; Richard G. Clegg; Lawrence Latif; David Griffin; Miguel Rio
We present a system for streaming live entertainment content over the Internet originating from a single source to a scalable number of consumers without resorting to centralized or provider-provisioned resources. The system creates a peer-to-peer overlay network, which attempts to optimize use of existing capacity to ensure quality of service, delivering low startup delay and lag in playout of the live content. There are three main aspects of our solution: first, a swarming mechanism that constructs an overlay topology for minimizing propagation delays from the source to end consumers; second, a distributed overlay anycast system that uses a location-based search algorithm for peers to quickly find the closest peers in a given stream; and finally, a novel incentive mechanism that encourages peers to donate capacity even when the user is not actively consuming content.
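The swarming objective of minimising propagation delay from the source can be sketched as a parent-selection rule: a joining peer attaches to the existing peer that minimises its resulting source-to-peer delay. All names and the delay bookkeeping below are illustrative assumptions, not the system's actual protocol.

```python
def choose_parent(source_delay, link_delay, new_peer, peers):
    """Toy sketch of the swarming objective: attach `new_peer` to the
    peer whose own delay from the source, plus the parent-to-peer link
    delay, is smallest -- minimising the newcomer's playout lag."""
    return min(peers,
               key=lambda p: source_delay[p] + link_delay[(p, new_peer)])
```

Note that the nearest peer is not always the best parent: a farther peer that is itself closer to the source can yield a lower end-to-end delay, as in the test below.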
Network Operations and Management Symposium | 2014
João Taveira Araújo; Raul Landa; Richard G. Clegg; George Pavlou
Existing methods for traffic resilience at the network and transport layers typically work in isolation, often resorting to inference in fault detection and recovery respectively. This both duplicates functionality across layers, eroding efficiency, and leads to protracted recovery cycles, affecting responsiveness. Such misalignment is particularly at odds with the unprecedented concentration of traffic in data-centers, in which network and hosts are managed in unison. This paper advocates instead a cross-layer approach to traffic resilience. The proposed architecture, INFLEX, builds on the abstractions provided by software-defined networking (SDN) to maintain multiple virtual forwarding planes which the network assigns to flows. In case of path failure, transport protocols pro-actively request to switch plane in a manner which is unilaterally deployable by an edge domain, providing scalable end-to-end forwarding path resilience.
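The end-host side of the idea can be sketched in a few lines: each flow carries a virtual forwarding-plane tag, and on a suspected path failure the transport proactively requests a different plane rather than only retransmitting on the same path. The class and its round-robin plane choice are hypothetical simplifications of INFLEX, not its actual interface.

```python
class InflexFlow:
    """Hypothetical sketch of the host side of cross-layer resilience:
    the flow is pinned to one of n virtual forwarding planes, and a
    suspected path failure triggers a request for a fresh plane."""

    def __init__(self, n_planes):
        self.n_planes = n_planes
        self.plane = 0       # current virtual forwarding plane
        self.failures = 0    # suspected path failures seen so far

    def on_timeout(self):
        """Called by the transport on a retransmission timeout:
        rotate to the next plane and return its id."""
        self.failures += 1
        self.plane = (self.plane + 1) % self.n_planes
        return self.plane
```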
International Conference on Computer Communications and Networks | 2013
Raul Landa; Richard G. Clegg; João Taveira Araújo; Eleni Mykoniati; David Griffin; Miguel Rio
When designing distributed systems and Internet protocols, designers can benefit from statistical models of the Internet that can be used to estimate their performance. However, it is frequently impossible for these models to include every property of interest. In these cases, model builders have to select a reduced subset of network properties, and the rest will have to be estimated from those available. In this paper we present a technique for the analysis of Internet round trip times (RTT) and their relationship with other geographic and network properties. This technique is applied to a novel dataset comprising ~19 million RTT measurements derived from ~200 million RTT samples between ~54 thousand DNS servers. Our main contribution is an information-theoretical analysis that allows us to determine the amount of information that a given subset of geographic or network variables (such as RTT or great circle distance between geolocated hosts) gives about other variables of interest. We then provide bounds on the error that can be expected when using statistical estimators for the variables of interest based on subsets of other variables.
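The core information-theoretic quantity is mutual information: how many bits one variable (say, great-circle distance between geolocated hosts) reveals about another (say, RTT). A minimal plug-in estimator over discretised samples is sketched below; the dataset handling and estimator refinements of the paper are not reproduced.

```python
from collections import Counter
import math

def mutual_information(pairs):
    """Plug-in estimate of I(X;Y) in bits from a list of (x, y) samples,
    assuming the values are already discretised into bins."""
    n = len(pairs)
    px, py = Counter(), Counter()
    pxy = Counter(pairs)
    for x, y in pairs:
        px[x] += 1
        py[y] += 1
    # I(X;Y) = sum p(x,y) * log2( p(x,y) / (p(x) p(y)) )
    return sum((c / n) * math.log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())
```

Independent variables give (near) zero bits, while a variable that perfectly determines another over k equiprobable values gives log2(k) bits; bounds on estimation error then follow from how far the observed mutual information falls short of the target variable's entropy.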
Local Computer Networks | 2011
Fabio Victora Hecht; Thomas Bocek; Richard G. Clegg; Raul Landa; David Hausheer; Burkhard Stiller
The popularity of video sharing over the Internet has increased significantly. High traffic generated by such applications at the source can be better distributed using a peer-to-peer (P2P) overlay. Unlike most P2P systems, LiveShift combines both live and on-demand video streaming — while video is transmitted through the peer-to-peer network in a live fashion, all peers participate in distributed storage. This adds the ability to replay time-shifted streams from other peers in a distributed and scalable manner. This paper describes an adaptive fully-distributed mesh-pull protocol that supports the envisioned use case and a set of policies that enable efficient usage of resources, discussing interesting trade-offs encountered. User-focused evaluation results, including both channel switching and time shifting behavior, show that the proposed system provides good quality of experience for most users, in terms of infrequent stalling, low playback lag, and a small proportion of skipped blocks in all the scenarios studied, even in the presence of churn.
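One of the trade-offs in a mesh-pull protocol is which missing block to request next. The sketch below shows one plausible policy (rarest-first within the playback window, breaking ties toward the playhead); it is an illustrative assumption, not LiveShift's actual policy set.

```python
def next_block(have, availability, playhead, window):
    """Sketch of a mesh-pull piece picker: among blocks missing inside
    the playback window that at least one neighbour holds, fetch the
    rarest one (fewest neighbours holding it), breaking ties toward
    the playhead to keep stalling infrequent."""
    candidates = [b for b in range(playhead, playhead + window)
                  if b not in have and availability.get(b, 0) > 0]
    if not candidates:
        return None
    return min(candidates, key=lambda b: (availability[b], b))
```

Rarest-first spreads scarce blocks through the swarm, while the window cap keeps requests relevant to imminent playback — the same tension between efficiency and quality of experience the abstract describes.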
International Conference on Computer Communications | 2009
Eleni Mykoniati; Laurence Latif; Raul Landa; Ben Yang; Richard G. Clegg; David Griffin; Miguel Rio
In this paper we present the Distributed Overlay Anycast Table, a structured overlay that implements application-layer anycast, allowing the discovery of the closest host that is a member of a given group. One application is in locality-aware peer-to-peer networks, where peers need to discover low-latency peers participating in the distribution of a particular file or stream. The DOAT makes use of network delay coordinates and a space filling curve to achieve locality-aware routing across the overlay, and Bloom filters to aggregate group identifiers. The solution is designed to optimise both accuracy and query time, which are essential for real-time applications. We simulated DOAT using both random and realistic node distributions. The results show that accuracy is high and query time is low.
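The locality-preserving step can be illustrated with a Z-order (Morton) space-filling curve: interleaving the bits of quantised 2-D delay coordinates yields a 1-D overlay key in which nearby hosts tend to get nearby keys. The quantisation and key width below are illustrative assumptions; the DOAT's Bloom-filter aggregation is not shown.

```python
def morton_key(x, y, bits=16):
    """Map quantised 2-D delay coordinates (non-negative ints) to a 1-D
    key by bit interleaving -- a Z-order space-filling curve, so points
    close in the plane tend to be close on the resulting 1-D keyspace."""
    key = 0
    for i in range(bits):
        key |= ((x >> i) & 1) << (2 * i)        # x bits on even positions
        key |= ((y >> i) & 1) << (2 * i + 1)    # y bits on odd positions
    return key
```

Routing a query toward the region of keyspace around one's own coordinate then tends to find low-latency group members first, which is what keeps both query time and inaccuracy low.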