Publication


Featured research published by João Taveira Araújo.


ACM Special Interest Group on Data Communication | 2014

Rekindling network protocol innovation with user-level stacks

Michio Honda; Felipe Huici; Costin Raiciu; João Taveira Araújo; Luigi Rizzo

Recent studies show that more than 86% of Internet paths allow well-designed TCP extensions, meaning that it is still possible to deploy transport layer improvements despite the existence of middleboxes in the network. Hence, the blame for the slow evolution of protocols (with extensions taking many years to become widely used) should be placed on end systems. In this paper, we revisit the case for moving protocol stacks up into user space in order to ease the deployment of new protocols, extensions, or performance optimizations. We present MultiStack, operating system support for user-level protocol stacks. MultiStack runs within commodity operating systems, can concurrently host a large number of isolated stacks, has a fall-back path to the legacy host stack, and is able to process packets at rates of 10 Gb/s. We validate our design by showing that our mux/demux layer can validate and switch packets at line rate (up to 14.88 Mpps) on a 10 Gbit port using 1-2 cores, and that a proof-of-concept HTTP server running over a basic userspace TCP outperforms by 18-90% both the same server and nginx running over the kernel's stack.
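For context, the 14.88 Mpps figure is the theoretical line rate of a 10 Gbit/s Ethernet port carrying minimum-size frames: each 64-byte frame occupies 84 bytes on the wire once the 8-byte preamble and 12-byte inter-frame gap are counted, giving

\[
\frac{10 \times 10^{9}\ \text{bit/s}}{(64 + 8 + 12)\ \text{bytes} \times 8\ \text{bit/byte}} \approx 14.88\ \text{Mpps}.
\]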


IEEE Transactions on Mobile Computing | 2011

Explicit Congestion Control Algorithms for Time Varying Capacity Media

Filipe Abrantes; João Taveira Araújo; Manuel Ricardo

Explicit congestion control (XCC) is emerging as one potential solution for overcoming limitations inherent to the current TCP algorithm, characterized by unstable throughput, high queuing delay, RTT-limited fairness, and a static dynamic range that does not scale well to high bandwidth delay product networks. In XCC, routers provide multibit feedback to sources, which, in turn, adapt throughput more accurately to the path bandwidth with potentially faster convergence times. Such systems, however, require precise knowledge of link capacity for efficient operation. In the presence of variable-capacity media, e.g., 802.11, such information is not entirely obvious or may be difficult to extract. We explore three possible algorithms for XCC which retain efficiency under such conditions by inferring available bandwidth from queue dynamics and test them through simulations with two relevant XCC protocols: XCP and RCP. Additionally, preliminary results from an experimental implementation based on XCP are presented. Finally, we compare our proposals with TCP and show how such algorithms outperform it in terms of efficiency, stability, queuing delay, and flow-rate fairness.
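For readers unfamiliar with XCC, the flavour of multibit feedback involved can be illustrated with the classic XCP aggregate-feedback rule. The sketch below shows the generic textbook form only; it is not the queue-based capacity-estimation algorithms proposed in the paper.

```python
# Sketch of the classic XCP aggregate feedback, computed once per control
# interval at a router. alpha and beta are the stability constants from the
# original XCP design; this is generic XCP, not the paper's variants.

def xcp_aggregate_feedback(capacity_bps, input_rate_bps, queue_bytes,
                           avg_rtt_s, alpha=0.4, beta=0.226):
    """Return feedback (in bytes) to distribute among packets this interval."""
    spare_bytes_per_s = (capacity_bps - input_rate_bps) / 8.0
    phi = alpha * avg_rtt_s * spare_bytes_per_s - beta * queue_bytes
    return phi  # positive: ask sources to speed up; negative: slow down

# Example: 100 Mbit/s link, 90 Mbit/s offered load, 50 KB standing queue, 80 ms RTT
print(xcp_aggregate_feedback(100e6, 90e6, 50_000, 0.08))
```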


Network Operations and Management Symposium | 2014

Software-defined network support for transport resilience

João Taveira Araújo; Raul Landa; Richard G. Clegg; George Pavlou

Existing methods for traffic resilience at the network and transport layers typically work in isolation, often resorting to inference in fault detection and recovery respectively. This both duplicates functionality across layers, eroding efficiency, and leads to protracted recovery cycles, affecting responsiveness. Such misalignment is particularly at odds with the unprecedented concentration of traffic in data centers, in which the network and hosts are managed in unison. This paper instead advocates a cross-layer approach to traffic resilience. The proposed architecture, INFLEX, builds on the abstractions provided by software-defined networking (SDN) to maintain multiple virtual forwarding planes which the network assigns to flows. In case of path failure, transport protocols proactively request to switch plane in a manner which is unilaterally deployable by an edge domain, providing scalable end-to-end forwarding path resilience.
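As a purely illustrative toy of the plane-switching idea (the data structures and names below are hypothetical; the actual INFLEX design signals plane changes in-band and installs SDN forwarding rules as described in the paper):

```python
class PlaneSwitcher:
    """Toy model of assigning flows to one of several virtual forwarding planes."""

    def __init__(self, num_planes):
        self.num_planes = num_planes
        self.assignment = {}          # flow id -> plane id

    def assign(self, flow_id):
        # Hash-based initial assignment keeps per-flow state minimal.
        plane = hash(flow_id) % self.num_planes
        self.assignment[flow_id] = plane
        return plane

    def switch_plane(self, flow_id):
        # Called when the transport layer suspects a path failure: move the
        # flow to the next plane, hopefully avoiding the failed path.
        current = self.assignment.get(flow_id)
        if current is None:
            current = self.assign(flow_id)
        self.assignment[flow_id] = (current + 1) % self.num_planes
        return self.assignment[flow_id]

switcher = PlaneSwitcher(num_planes=4)
flow = ("10.0.0.1", 443, "10.0.0.2", 53124)
print(switcher.assign(flow), switcher.switch_plane(flow))
```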


International Conference on Computer Communications and Networks | 2013

Measuring the Relationships between Internet Geography and RTT

Raul Landa; Richard G. Clegg; João Taveira Araújo; Eleni Mykoniati; David Griffin; Miguel Rio

When designing distributed systems and Internet protocols, designers can benefit from statistical models of the Internet that can be used to estimate their performance. However, it is frequently impossible for these models to include every property of interest. In these cases, model builders have to select a reduced subset of network properties, and the rest will have to be estimated from those available. In this paper we present a technique for the analysis of Internet round-trip times (RTTs) and their relationship with other geographic and network properties. This technique is applied to a novel dataset comprising ~19 million RTT measurements derived from ~200 million RTT samples between ~54 thousand DNS servers. Our main contribution is an information-theoretical analysis that allows us to determine the amount of information that a given subset of geographic or network variables (such as RTT or great circle distance between geolocated hosts) gives about other variables of interest. We then provide bounds on the error that can be expected when using statistical estimators for the variables of interest based on subsets of other variables.
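A minimal sketch of the kind of information-theoretic measurement described, assuming paired RTT and great-circle-distance samples are available; the histogram-based estimator and the synthetic data below are illustrative only and are not the paper's methodology.

```python
import numpy as np

def mutual_information(x, y, bins=30):
    """Estimate I(X;Y) in bits from paired samples via a 2-D histogram."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()                 # joint distribution
    px = pxy.sum(axis=1, keepdims=True)       # marginal of X
    py = pxy.sum(axis=0, keepdims=True)       # marginal of Y
    nz = pxy > 0                              # avoid log(0)
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

# Hypothetical example: how much does great-circle distance tell us about RTT?
rng = np.random.default_rng(0)
distance_km = rng.uniform(0, 15_000, 10_000)
rtt_ms = distance_km / 100 + rng.normal(0, 20, 10_000)   # crude synthetic model
print(mutual_information(distance_km, rtt_ms))
```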


International Conference on Communications | 2013

On the relationship between fundamental measurements in TCP flows

Richard G. Clegg; João Taveira Araújo; Raul Landa; Eleni Mykoniati; David Griffin; Miguel Rio

This paper considers fundamental measurements which drive TCP flows: throughput, RTT and loss. It is clear that throughput is, in some sense, a function of both RTT and loss. In their seminal paper, Padhye et al. [1] begin with a mathematical model of the TCP sliding window evolution process and derive an equation showing that TCP throughput is (roughly) proportional to 1/(RTT√p), where p is the probability of packet loss. Their equation is shown to be consistent with data gathered on several links. This paper takes the opposite approach and analyses a large number of packet traces from well-known sources in order to create a data-driven estimate of the functions which relate TCP throughput, loss and RTT. Regression analysis is used to fit models to connect the quantities. The fitted models show different behaviour from that expected in [1].
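For reference, the square-root relation referred to reads, in its simplified (Mathis et al.) form,

\[
T \approx \frac{MSS}{RTT}\sqrt{\frac{3}{2p}},
\]

so that for a fixed segment size throughput is proportional to \(1/(RTT\sqrt{p})\); Padhye et al. extend this with retransmission-timeout terms.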


Proceedings of the Re-Architecting the Internet Workshop | 2010

A mutualistic resource pooling architecture

João Taveira Araújo; Miguel Rio; George Pavlou

Parallelism pervades the Internet, yet efficiently pooling this increasing path diversity has remained elusive. We argue that the inability to progress beyond a single path paradigm is due to an inflexible resource sharing model, rather than a lack of routing solutions. The tussle between networks and hosts over resource sharing has constricted resource pooling into being redefined by stakeholders according to their own needs, often at the expense of others. In this paper we debate existing approaches to resource pooling and present PREFLEX, an architecture where edge networks and hosts both share the burden and reap the rewards of balancing traffic over multiple paths. Using PREF (Path RE-Feedback), networks suggest outbound paths to hosts, who in turn use LEX (Loss Exposure) to signal transport layer semantics such as loss and flow start to the underlying network. By making apparent network preferences and transport expectations, PREFLEX provides a mutualistic framework where congestion control and traffic engineering can both coexist and evolve independently.


International IFIP TC 6 Networking Conference | 2011

Balancing by PREFLEX: congestion aware traffic engineering

João Taveira Araújo; Richard G. Clegg; Imad Grandi; Miguel Rio; George Pavlou

There has long been a need for a robust and reliable system which distributes traffic across multiple paths. In particular such a system must rarely reorder packets, must not require per-flow state, must cope with different paths having different bandwidths and must be self-tuning in a variety of network contexts. PREFLEX, proposed herein, uses estimates of loss rate to balance congestion. This paper describes a method of automatically adjusting how PREFLEX will split traffic in order to balance loss across multiple paths in a variety of network conditions. Equations are derived for the automatic tuning of the time scale and traffic split at a decision point. The algorithms described allow the load balancer to self-tune to network conditions. The calculations are simple and do not place a large burden on a router which would implement the algorithm. The algorithm is evaluated by simulation using ns-3 and is shown to perform well under a variety of circumstances. The resulting adaptive, end-to-end traffic balancing architecture provides the necessary framework to meet the increasing demands of users while simultaneously offering edge networks more fine-grained control at far shorter timescales.
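A minimal sketch of the general idea of shifting a traffic split away from lossier paths, assuming per-path loss estimates are available; this is illustrative only and is not the tuning equations derived in the paper.

```python
def rebalance(split, loss, gamma=0.5):
    """Move traffic towards paths with lower observed loss.

    split: current fraction of traffic per path (sums to 1)
    loss:  recent loss rate measured on each path
    gamma: how aggressively to move towards the loss-equalising split
    """
    # Target split inversely proportional to loss, with a small floor so that
    # loss-free paths do not absorb all traffic in a single step.
    inv = [1.0 / (l + 1e-3) for l in loss]
    target = [v / sum(inv) for v in inv]
    new = [(1 - gamma) * s + gamma * t for s, t in zip(split, target)]
    total = sum(new)
    return [v / total for v in new]

# Example: two paths, the second currently seeing more loss
print(rebalance([0.5, 0.5], [0.01, 0.05]))
```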


Measurement and Modeling of Computer Systems | 2014

TARDIS: stably shifting traffic in space and time

Richard G. Clegg; Raul Landa; João Taveira Araújo; Eleni Mykoniati; David Griffin; Miguel Rio

This paper describes TARDIS (Traffic Assignment and Retiming Dynamics with Inherent Stability) which is an algorithmic procedure designed to reallocate traffic within Internet Service Provider (ISP) networks. Recent work has investigated the idea of shifting traffic in time (from peak to off-peak) or in space (by using different links). This work gives a unified scheme for both time and space shifting to reduce costs. Particular attention is given to the commonly used 95th percentile pricing scheme. The work has three main innovations: firstly, introducing the Shapley Gradient, a way of comparing traffic pricing between different links at different times of day; secondly, a unified way of reallocating traffic in time and/or in space; thirdly, a continuous approximation to this system is proved to be stable. A trace-driven investigation using data from two service providers shows that the algorithm can create large savings in transit costs even when only small proportions of the traffic can be shifted.
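For readers unfamiliar with 95th percentile billing: providers typically sample traffic every 5 minutes, discard the top 5% of samples in the billing period, and charge for the highest remaining sample. A minimal sketch of that computation follows; exact rounding and sampling conventions vary by provider.

```python
import math

def ninety_fifth_percentile_mbps(samples_mbps):
    """Return the 95th-percentile charging rate for a set of 5-minute samples."""
    ordered = sorted(samples_mbps)
    index = math.ceil(0.95 * len(ordered)) - 1   # highest sample after dropping top 5%
    return ordered[index]

# Example: one day of synthetic 5-minute samples (288 per day), busier mid-day
day = [100 + (50 if 96 <= i < 192 else 0) for i in range(288)]
print(ninety_fifth_percentile_mbps(day))
```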


International Conference on Computer Communications | 2014

A longitudinal analysis of Internet rate limitations

João Taveira Araújo; Raul Landa; Richard G. Clegg; George Pavlou; Kensuke Fukuda

TCP remains the dominant transport protocol for Internet traffic, but the preponderance of its congestion control mechanisms in determining flow throughput is often disputed. This paper analyzes the extent to which network, host and application settings define flow throughput over time and across autonomous systems. Drawing from a longitudinal study spanning five years of passive traces collected from a single transit link, our results show that continuing OS upgrades have reduced the influence of host limitations, owing both to window scale deployment, which by 2011 covered 80% of inbound traffic, and to increased socket buffer sizes. On the other hand, we show that for this data set, approximately half of all inbound traffic remains throttled by constraints beyond network capacity, challenging the traditional model of congestion control in TCP traffic as governed primarily by loss and delay.
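The importance of window scale deployment follows from a simple ceiling: without the window scale option the advertised receive window is capped at 65,535 bytes, so throughput cannot exceed

\[
T_{\max} = \frac{W_{\max}}{RTT} = \frac{65{,}535 \times 8\ \text{bit}}{RTT},
\]

roughly 5.2 Mbit/s at a 100 ms RTT regardless of link capacity.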


Computer Networks | 2017

On rate limitation mechanisms for TCP throughput

João Taveira Araújo; Raul Landa; Richard G. Clegg; George Pavlou; Kensuke Fukuda

TCP remains the dominant transport protocol for Internet traffic. Its sending rate is usually considered to be governed by a sliding window congestion control mechanism. However, in addition to this congestion control, a number of other mechanisms limit TCP throughput: limiting by the application, interference with the TCP window control mechanism, and artificial limitations on maximum window sizes imposed by the operating system. This paper analyses the extent to which network, host and application settings define flow throughput over time and across autonomous systems, drawing on data from a longitudinal study spanning five years of passive traces collected from a single transit link, and uses this large data set to assess the impact of each mechanism. We conclude that more than half of all heavy-hitter inbound traffic remains throttled by constraints beyond network capacity. For this data set, TCP congestion control is no longer the dominant mechanism that moderates throughput.
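As a crude illustration of the kind of per-flow classification involved (the field names and thresholds below are hypothetical simplifications, not the paper's methodology):

```python
def classify_limitation(flight_bytes, rwnd_bytes, loss_events, threshold=0.95):
    """Very rough classification of what limits a TCP flow at a given instant."""
    if rwnd_bytes > 0 and flight_bytes >= threshold * rwnd_bytes:
        return "receiver-window limited"     # sender filled the advertised window
    if loss_events > 0:
        return "congestion limited"          # loss suggests cwnd is the bottleneck
    return "application limited"             # sender simply has nothing more to send

print(classify_limitation(flight_bytes=63_000, rwnd_bytes=65_535, loss_events=0))
```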

Collaboration


Dive into João Taveira Araújo's collaborations.

Top Co-Authors

Miguel Rio (University College London)
Raul Landa (University College London)
George Pavlou (University College London)
David Griffin (University College London)
Eleni Mykoniati (University College London)
Kensuke Fukuda (National Institute of Informatics)
Felipe Huici (University College London)
Imad Grandi (University College London)