
Publication


Featured research published by Gareth Tyson.


International Conference on Computer Communications and Networks | 2012

A Trace-Driven Analysis of Caching in Content-Centric Networks

Gareth Tyson; Sebastian Kaune; Simon Miles; Yehia Elkhatib; Andreas Mauthe; Adel Taweel

A content-centric network is one which supports host-to-content routing, rather than the host-to-host routing of the existing Internet. This paper investigates the potential of caching data at the router level in content-centric networks. To achieve this, two measurement sets are combined to gain an understanding of the potential caching benefits of deploying content-centric protocols over the current Internet topology. The first set of measurements is a study of the BitTorrent network, which provides detailed traces of content request patterns. This is then combined with CAIDA's ITDK Internet traces to replay the content requests over a real-world topology. Using this data, simulations are performed to measure how effective content-centric networking would have been if it were available to these consumers/providers. We find that larger cache sizes (10,000 packets) can create significant reductions in packet path lengths. On average, 2.02 hops are saved through caching (a 20% reduction), whilst also allowing 11% of data requests to be maintained within the requester's AS. Importantly, we also show that these benefits extend significantly beyond that of edge caching by allowing transit ASes to also reduce traffic.
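The hop savings described above come from routers on the request path holding copies of recently forwarded content. A minimal sketch of that mechanism (not the paper's simulator; class and chunk names are illustrative) is a chain of per-router LRU caches that a request walks until it hits a copy or reaches the origin:

```python
from collections import OrderedDict

class RouterCache:
    """Fixed-capacity LRU cache holding content chunks at one router."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()

    def get(self, item):
        if item in self.store:
            self.store.move_to_end(item)    # refresh LRU position
            return True
        return False

    def put(self, item):
        self.store[item] = True
        self.store.move_to_end(item)
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)  # evict least recently used

def request(path, item):
    """Walk the router path from requester to origin; return hops travelled.

    Routers between the requester and the hit point cache the item on the
    response path (leave-copy-everywhere, as in basic content-centric caching).
    """
    for hops, router in enumerate(path, start=1):
        if router.get(item):
            for r in path[:hops - 1]:
                r.put(item)
            return hops
    # Miss everywhere: fetched from the origin beyond the last router.
    for r in path:
        r.put(item)
    return len(path) + 1

# Tiny example: 3 routers between requester and origin, capacity 2 each.
path = [RouterCache(2) for _ in range(3)]
first = request(path, "chunk-A")   # all the way to the origin: 4 hops
second = request(path, "chunk-A")  # served by the first-hop router: 1 hop
```

The path-length reduction reported in the paper is exactly the gap between the first and second request here, averaged over a real workload and topology.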


International Conference on Network Protocols | 2013

Optimal cache allocation for Content-Centric Networking

Yonggong Wang; Zhenyu Li; Gareth Tyson; Steve Uhlig; Gaogang Xie

Content-Centric Networking (CCN) is a promising framework for evolving the current network architecture, advocating ubiquitous in-network caching to enhance content delivery. Consequently, in CCN, each router has storage space to cache frequently requested content. In this work, we focus on the cache allocation problem: namely, how to distribute the cache capacity across routers under a constrained total storage budget for the network. We formulate this problem as a content placement problem and obtain the exact optimal solution by a two-step method. Through simulations, we use this algorithm to investigate the factors that affect the optimal cache allocation in CCN, such as the network topology and the popularity of content. We find that a highly heterogeneous topology tends to put most of the capacity over a few central nodes. On the other hand, heterogeneous content popularity has the opposite effect, by spreading capacity across far more nodes. Using our findings, we make observations on how network operators could best deploy CCN cache capacity.


Parallel, Distributed and Network-Based Processing | 2009

Modelling the Internet Delay Space Based on Geographical Locations

Sebastian Kaune; Konstantin Pussep; Christof Leng; Aleksandra Kovacevic; Gareth Tyson; Ralf Steinmetz

Existing approaches for modelling the Internet delay space predict end-to-end delays between two arbitrary hosts as static values. Further, they do not capture the characteristics caused by geographical constraints. Peer-to-peer (P2P) systems are, however, often very sensitive to the underlying delay characteristics of the Internet, since these characteristics directly influence system performance. This work proposes a model to predict lifelike delays between a given pair of end hosts. In addition to its low delay computation time, it has only linear memory costs, which allows large-scale P2P simulations to be performed. The model includes realistic delay jitter, subject to the geographical position of the sender and the receiver. Our analysis, using existing Internet measurement studies, reveals that our approach offers an optimal trade-off among a number of conflicting properties of existing approaches.
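The core idea above, a geography-derived baseline delay plus a jitter component, can be sketched as follows. This is an illustrative toy, not the paper's model: the propagation speed (roughly two-thirds of the speed of light in fibre, about 200 km/ms) and the exponential jitter distribution are assumptions for demonstration.

```python
import math
import random

EARTH_RADIUS_KM = 6371.0
PROPAGATION_KM_PER_MS = 200.0  # ~2/3 speed of light in fibre (assumed)

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points, in km."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2)
    return 2 * EARTH_RADIUS_KM * math.asin(math.sqrt(a))

def one_way_delay_ms(src, dst, jitter_scale=0.2, rng=None):
    """Geographic baseline delay plus distance-dependent random jitter.

    src/dst are (lat, lon) tuples; jitter grows with the baseline, a
    stand-in for the region-dependent jitter the model captures.
    """
    rng = rng or random.Random(0)  # seeded for reproducibility
    base = haversine_km(*src, *dst) / PROPAGATION_KM_PER_MS
    jitter = rng.expovariate(1.0 / (jitter_scale * base + 1e-9))
    return base + jitter

# London to New York: ~5,570 km, so a baseline of roughly 28 ms one-way.
delay = one_way_delay_ms((51.5, -0.1), (40.7, -74.0))
```

Because each delay sample is a cheap function of two coordinates, per-pair state is unnecessary, which is what keeps the memory cost of such a model linear in the number of hosts.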


International Conference on Peer-to-Peer Computing | 2010

Unraveling BitTorrent's File Unavailability: Measurements and Analysis

Sebastian Kaune; Gareth Tyson; Andreas Mauthe; Carmen Guerrero; Ralf Steinmetz

BitTorrent suffers from one fundamental problem: the long-term availability of content. This occurs on a massive scale, with 38% of torrents becoming unavailable within the first month. In this paper we explore this problem by performing two large-scale measurement studies covering 46K torrents and 29M users. The studies go significantly beyond any previous work by combining per-node, per-torrent and system-wide observations to ascertain the causes, characteristics and repercussions of file unavailability. The study confirms the conclusion from previous works that seeders have a significant impact on both performance and availability. However, we also present some crucial new findings: (i) the presence of seeders is not the sole factor involved in file availability, (ii) 23.5% of nodes that operate in seedless torrents can finish their downloads, and (iii) BitTorrent availability is discontinuous, operating in cycles of temporary unavailability.


International Conference on Computer Communications | 2013

Towards an information-centric delay-tolerant network

Gareth Tyson; John Bigham; Eliane L. Bodanese

Information-centric networks (ICNs) have emerged as a prominent future Internet architecture. They work on the principle that, in today's society, a network should be optimised towards the delivery of information, rather than the transit of messages between pre-determined end hosts. At the same time, another prominent research direction has been the delay-tolerant networking initiative, which argues that networks must explicitly support the tolerance of high delays and disruptions in end-to-end paths. We believe that this observation is pervasive across a number of applications and network technologies and should therefore be strongly considered in any future ICN designs. This paper explores the potential of combining these concepts into an information-centric delay-tolerant network (ICDTN). Through a qualitative and quantitative investigation, we present arguments in favour of the design, as well as important research challenges.


IEEE Transactions on Computers | 2016

Design and Evaluation of the Optimal Cache Allocation for Content-Centric Networking

Yonggong Wang; Zhenyu Li; Gareth Tyson; Steve Uhlig; Gaogang Xie

Content-centric networking (CCN) is a promising framework to rebuild the Internet's forwarding substrate around the concept of content. CCN advocates ubiquitous in-network caching to enhance content delivery, and thus each router has storage space to cache frequently requested content. In this work, we focus on the cache allocation problem, namely, how to distribute the cache capacity across routers under a constrained total storage budget for the network. We first formulate this problem as a content placement problem and obtain the optimal solution by a two-step method. We then propose a suboptimal heuristic method based on node centrality, which is more practical in dynamic networks with frequent content publishing. We investigate through simulations the factors that affect the optimal cache allocation, and perhaps more importantly we use a real-life Internet topology and video access logs from a large-scale Internet video provider to evaluate the performance of various cache allocation methods. We observe that network topology and content popularity are two important factors that affect where exactly cache capacity should be placed. Further, the heuristic method comes with only a very limited performance penalty compared to the optimal allocation. Finally, using our findings, we provide recommendations for network operators on how best to deploy CCN cache capacity over routers.
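The centrality-based heuristic mentioned above can be sketched in a few lines. This is a simplified illustration, not the paper's algorithm: it uses degree centrality as a stand-in for the centrality metric and splits the budget proportionally, which shows why a heterogeneous (e.g. star-like) topology concentrates capacity on central nodes.

```python
def degree_centrality(adjacency):
    """Degree of each node: a simple stand-in for a centrality metric."""
    return {node: len(neigh) for node, neigh in adjacency.items()}

def allocate_cache(adjacency, total_budget):
    """Split total_budget across routers in proportion to centrality."""
    centrality = degree_centrality(adjacency)
    total = sum(centrality.values())
    return {node: total_budget * c / total
            for node, c in centrality.items()}

# Star topology: the hub attracts half of the total cache budget.
topo = {"hub": ["a", "b", "c"],
        "a": ["hub"], "b": ["hub"], "c": ["hub"]}
alloc = allocate_cache(topo, total_budget=600)
```

On a more homogeneous topology the same rule spreads capacity nearly evenly, mirroring the topology-dependence the simulations report.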


IFIP Networking Conference | 2014

Can SPDY really make the web faster?

Yehia Elkhatib; Gareth Tyson; Michael Welzl

HTTP is a successful Internet technology on top of which a lot of the web resides. However, limitations with its current specification have encouraged some to look for the next generation of HTTP. In SPDY, Google has come up with such a proposal that has growing community acceptance, especially after being adopted by the IETF HTTPbis-WG as the basis for HTTP/2.0. SPDY has the potential to greatly improve web experience with little deployment overhead, but we still lack an understanding of its true potential in different environments. This paper offers a comprehensive evaluation of SPDY's performance using extensive experiments. We identify the impact of network characteristics and website infrastructure on SPDY's potential page loading benefits, finding that these factors are decisive for an optimal SPDY deployment strategy. Through exploring such key aspects that affect SPDY, and accordingly HTTP/2.0, we feed into the wider debate regarding the impact of future protocols.


Conference on Multimedia Computing and Networking | 2009

Visibility of individual packet loss on H.264 encoded video stream: a user study on the impact of packet loss on perceived video quality

Mu Mu; Roswitha Gostner; Andreas Mauthe; Gareth Tyson; Francisco Garcia

Assessing video content transmitted over networked content infrastructures has become a fundamental requirement for service providers. Previous research has shown that there is no direct correlation between traditional network QoS and user-perceived video quality. This paper presents a study investigating the impact of individual packet loss on four types of H.264 main-profile encoded video streams. Four artifact factors that model the degree of artifacts in video frames are defined. Further, the visibility of artifacts, considering the video content characteristics, encoding scheme and error concealment, is investigated in conjunction with a user study. The individual and joint impacts of the artifact factors are explored on the test video sequences. From the results of the user tests, the artifact-factor-based assessment method shows superiority over PSNR-based and network-QoS-based quality assessment.


Proceedings of the 2015 Workshop on Do-it-yourself Networking: an Interdisciplinary Approach | 2015

SCANDEX: Service Centric Networking for Challenged Decentralised Networks

Arjuna Sathiaseelan; Liang Wang; Andrius Aucinas; Gareth Tyson; Jonathon Andrew Crowcroft

Do-It-Yourself (DIY) networks are decentralised networks built by an (often) amateur community. As DIY networks do not rely on the need for backhaul Internet connectivity, these networks are mostly a mix of both offline and online networks. Although DIY networks have their own homegrown services, the current Internet-based cloud services are often useful, and access to some services could be beneficial to the community. Considering that most DIY networks have challenged Internet connectivity, migrating current service virtualisation instances could face great challenges. Service Centric Networking (SCN) has been recently proposed as a potential solution to managing services more efficiently using Information Centric Networking (ICN) principles. In this position paper, we present our arguments for the need for a resilient SCN architecture, propose a strawman SCN architecture that combines multiple transmission technologies for providing resilient SCN in challenged DIY networks and, finally, identify key challenges that need to be explored further to realise the full potential of our architecture.


ACM Transactions on Internet Technology | 2012

Juno: A Middleware Platform for Supporting Delivery-Centric Applications

Gareth Tyson; Andreas Mauthe; Sebastian Kaune; Paul Grace; Adel Taweel; Thomas Plagemann

This article proposes a new delivery-centric abstraction which extends the existing content-centric networking API. A delivery-centric abstraction allows applications to generate content requests agnostic to location or protocol, with the additional ability to stipulate high-level requirements regarding such things as performance, security, and resource consumption. Fulfilling these requirements, however, is complex as often the ability of a provider to satisfy requirements will vary between different consumers and over time. Therefore, we argue that it is vital to manage this variance to ensure an application fulfils its needs. To this end, we present the Juno middleware, which implements delivery-centric support using a reconfigurable software architecture to: (i) discover multiple sources of an item of content; (ii) model each source’s ability to provide the content; then (iii) adapt to interact with the source(s) that can best fulfil the application’s requirements. Juno therefore utilizes existing providers in a backwards compatible way, supporting immediate deployment. This article evaluates Juno using Emulab to validate its ability to adapt to its environment.
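Juno's three-step loop (discover sources, model each source, adapt to the best match) can be sketched as a simple requirement-driven selection. This is an illustrative toy, not Juno's API: the `Provider` fields, requirement parameters, and source names are all hypothetical.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Provider:
    """A discovered source of the content, with its modelled capabilities."""
    name: str
    throughput_mbps: float  # modelled; would vary per consumer and over time
    secure: bool

def select_provider(providers: List[Provider],
                    min_throughput: float = 0.0,
                    require_secure: bool = False) -> Optional[Provider]:
    """Pick the source that best satisfies the application's requirements."""
    eligible = [p for p in providers
                if p.throughput_mbps >= min_throughput
                and (p.secure or not require_secure)]
    if not eligible:
        return None
    # Among eligible sources, prefer the highest modelled throughput.
    return max(eligible, key=lambda p: p.throughput_mbps)

sources = [Provider("http-mirror", 8.0, True),
           Provider("bittorrent-swarm", 20.0, False)]
best = select_provider(sources, require_secure=True)  # picks "http-mirror"
```

Because the selection is driven by modelled, time-varying capabilities, re-running it as conditions change is what lets the middleware reconfigure to a different delivery protocol without the application noticing.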

Collaboration


Dive into Gareth Tyson's collaborations.

Top Co-Authors

Steve Uhlig

Queen Mary University of London

Sebastian Kaune

Technische Universität Darmstadt

Liang Wang

University of Cambridge
