
Publication


Featured research published by Michael Welzl.


IEEE Communications Surveys and Tutorials | 2016

Reducing Internet Latency: A Survey of Techniques and Their Merits

Bob Briscoe; Anna Brunstrom; Andreas Petlund; David A. Hayes; David Ros; Ing-Jyh Tsang; Stein Gjessing; Gorry Fairhurst; Carsten Griwodz; Michael Welzl

Latency is increasingly becoming a performance bottleneck for Internet Protocol (IP) networks, but historically, networks have been designed with aims of maximizing throughput and utilization. This paper offers a broad survey of techniques aimed at tackling latency in the literature up to August 2014, as well as their merits. A goal of this work is to be able to quantify and compare the merits of the different Internet latency reducing techniques, contrasting their gains in delay reduction versus the pain required to implement and deploy them. We found that classifying techniques according to the sources of delay they alleviate provided the best insight into the following issues: 1) The structural arrangement of a network, such as placement of servers and suboptimal routes, can contribute significantly to latency; 2) each interaction between communicating endpoints adds a Round Trip Time (RTT) to latency, particularly significant for short flows; 3) in addition to base propagation delay, several sources of delay accumulate along transmission paths, today intermittently dominated by queuing delays; 4) it takes time to sense and use available capacity, with overuse inflicting latency on other flows sharing the capacity; and 5) within end systems, delay sources include operating system buffering, head-of-line blocking, and hardware interaction. No single source of delay dominates in all cases, and many of these sources are spasmodic and highly variable. Solutions addressing these sources often both reduce the overall latency and make it more predictable.
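Point 2 above, that each endpoint interaction adds an RTT and that this dominates short flows, can be made concrete with a back-of-the-envelope model (a sketch under simplifying assumptions: one handshake RTT, lossless slow start doubling from an initial window of 10 segments; all parameter values are illustrative, not taken from the paper):

```python
import math

def transfer_rtts(flow_bytes, rtt_s, mss=1460, init_window=10):
    """Estimate transfer latency for a short TCP flow, counting only
    RTTs: one for the handshake, then slow-start rounds in which the
    window doubles from init_window segments (simplifying assumptions:
    no loss, no delay sources beyond the base RTT)."""
    segments = math.ceil(flow_bytes / mss)
    # After k rounds, init_window * (2**k - 1) segments have been sent.
    rounds = math.ceil(math.log2(segments / init_window + 1))
    return (1 + rounds) * rtt_s

# A 15 KB fetch over a 100 ms path takes 3 RTTs regardless of bandwidth:
print(f"{transfer_rtts(15_000, 0.100):.3f} s")  # prints 0.300 s
```

The point of the model: once flows are this short, halving the RTT (e.g. via server placement) helps far more than adding capacity.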


International Conference on Information Technology | 2007

A Fault-Tolerant Mechanism for Handling Permanent and Transient Failures in a Network on Chip

Muhammad Ali; Michael Welzl; Sven Hessler; Sybille Hellebrand

Networks on chip (NoC) have emerged as a feasible solution to handle the growing number of communicating components on a single chip. The scalability of chips, however, increases the probability of errors, making reliability a major issue in scaling chips. We propose a comprehensive fault-tolerant mechanism for packet-based NoCs that deals with packet loss or corruption due to transient faults, as well as a dynamic routing mechanism to deal with permanent link and/or router failures on-chip.


IEEE Communications Magazine | 2003

Scalability and quality of service: a trade-off?

Michael Welzl; Max Mühlhäuser

During the last decade, two big efforts on Internet quality of service were made. The first, IntServ, promises precise per-flow service provisioning but never really made it as a commercial end-user product, which was mainly attributed to its lack of scalability. Its successor, DiffServ, is more scalable at the cost of coarser service granularity - which may be the reason why it is not yet commercially available to end users either. This leaves us with the question: is there a fundamental trade-off between QoS and scalability? A trade-off that, in the long run, could prevent deployment of QoS for end users altogether?


IEEE Communications Surveys and Tutorials | 2013

Less-than-Best-Effort Service: A Survey of End-to-End Approaches

David Ros; Michael Welzl

This paper provides a survey of transport protocols and congestion control mechanisms that are designed to have a smaller bandwidth and/or delay impact on standard TCP than standard TCP itself when they share a bottleneck with it. Such protocols and mechanisms provide what is sometimes called a less-than-best-effort or lower-than-best-effort service. To a user, such a service can, for instance, be an attractive choice for applications which create traffic that is considered less urgent than that of others—e.g., automatic backup, software updates running in the background, or peer-to-peer applications. The focus of this survey is on end-host approaches for achieving a less-than-best-effort service. This includes, e.g., upper-layer methods, or techniques that leverage standard transport-layer mechanisms so as to reduce the impact on other competing flows.
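One prominent end-host mechanism in this space is LEDBAT (RFC 6817), whose core idea, growing the window only while the measured queuing delay stays below a target, can be sketched as follows (the target and gain constants are the RFC defaults; the update rule is simplified and the function name is our own):

```python
def ledbat_cwnd_update(cwnd, bytes_acked, queuing_delay,
                       mss=1460, target=0.100, gain=1.0,
                       min_cwnd=2 * 1460):
    """One simplified LEDBAT-style congestion window update, in bytes.
    off_target is positive below the 100 ms queuing-delay target (the
    window grows) and negative above it (the window shrinks), so the
    flow backs off before loss-based TCP even notices congestion."""
    off_target = (target - queuing_delay) / target
    cwnd += gain * off_target * bytes_acked * mss / cwnd
    return max(cwnd, min_cwnd)

# Below target the window grows a little; above target it yields:
cwnd = 20 * 1460
grown = ledbat_cwnd_update(cwnd, 1460, queuing_delay=0.020)
shrunk = ledbat_cwnd_update(cwnd, 1460, queuing_delay=0.200)
```

The design choice this illustrates: reacting to delay rather than loss is what makes the service "lower than best effort" when competing with standard TCP.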


IFIP Networking Conference | 2014

Can SPDY really make the web faster?

Yehia Elkhatib; Gareth Tyson; Michael Welzl

HTTP is a successful Internet technology on top of which much of the web resides. However, limitations with its current specification have encouraged some to look for the next generation of HTTP. With SPDY, Google has come up with such a proposal that has growing community acceptance, especially after being adopted by the IETF HTTPbis WG as the basis for HTTP/2.0. SPDY has the potential to greatly improve web experience with little deployment overhead, but we still lack an understanding of its true potential in different environments. This paper offers a comprehensive evaluation of SPDY's performance using extensive experiments. We identify the impact of network characteristics and website infrastructure on SPDY's potential page loading benefits, finding that these factors are decisive for an optimal SPDY deployment strategy. Through exploring such key aspects that affect SPDY, and accordingly HTTP/2.0, we feed into the wider debate regarding the impact of future protocols.


International Journal of High Performance Systems Architecture | 2007

An Efficient Fault-Tolerant Mechanism to Deal with Permanent and Transient Failures in a Network on Chip

Muhammad Ali; Michael Welzl; Sven Hessler; Sybille Hellebrand

Recent advances in silicon technology are enabling VLSI chips to accommodate billions of transistors, leading toward incorporating hundreds of heterogeneous components on a single chip. However, the scalability of chips is posing grave problems for the current interconnect architecture, which is unable to cope with the growing number of components on a chip. To remedy the inefficiency of buses, researchers have explored the areas of computer networks and parallel computing to come up with viable solutions for billion-transistor chips. The outcome is a novel and scalable communication paradigm for future Systems on Chip (SoCs) called Networks on Chip (NoC). However, as chips scale, the probability of both permanent and transient faults increases, making fault tolerance (FT) a key concern. Alpha particle emissions and Gaussian noise on channels are among the causes of transient faults in the data, while electromigration of conductors, corrosion, or aging factors may permanently damage on-chip modules or links. This paper proposes a comprehensive solution to deal with both permanent and transient errors affecting VLSI chips. On the one hand, we present an efficient packet retransmission mechanism to deal with packet corruption or loss due to transient faults; on the other hand, we propose a deterministic routing mechanism which routes packets on alternate paths when a communication link or a router suffers a permanent failure.
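The alternate-path idea for permanent failures can be illustrated with dimension-ordered routing on a 2D mesh (a generic sketch for illustration only, not the specific deterministic mechanism of this paper; the coordinate scheme, the `faulty_links` set and the Y-detour rule are all our own assumptions):

```python
def next_hop(cur, dst, faulty_links):
    """One step of XY routing on a 2D-mesh NoC with a simple detour:
    route toward the destination in the X dimension first, but if that
    output link is marked permanently failed, fall back to the Y
    dimension. Nodes are (x, y) tuples; faulty_links holds directed
    (from_node, to_node) pairs reported by fault detection."""
    x, y = cur
    dx, dy = dst
    step_x = (x + (1 if dx > x else -1), y) if dx != x else None
    step_y = (x, y + (1 if dy > y else -1)) if dy != y else None
    for cand in (step_x, step_y):      # prefer X, detour via Y
        if cand and (cur, cand) not in faulty_links:
            return cand
    return None                        # at the destination (or boxed in)

# With the eastward link out of (0, 0) failed, traffic detours north:
assert next_hop((0, 0), (2, 1), set()) == (1, 0)
assert next_hop((0, 0), (2, 1), {((0, 0), (1, 0))}) == (0, 1)
```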


ACM Special Interest Group on Data Communication | 2009

MulTFRC: providing weighted fairness for multimedia applications (and others too!)

Dragana Damjanovic; Michael Welzl

When data transfers to or from a host happen in parallel, users do not always consider them to have the same importance. Ideally, a transport protocol should therefore allow its users to manipulate the fairness among flows in an almost arbitrary fashion. Since data transfers can also include real-time media streams which need to keep delay, and hence buffers, small, the protocol should also have a smooth sending rate. In an effort to satisfy the above requirements, we present MulTFRC, a congestion control mechanism which is based on the TCP-friendly Rate Control (TFRC) protocol. It emulates the behavior of a number of TFRC flows while maintaining a smooth sending rate. Our simulations and a real-life test demonstrate that MulTFRC performs significantly better than its competitors, potentially making it applicable in a broader range of settings than what TFRC is normally associated with.
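The smooth rate that a single TFRC flow targets is given by the TCP throughput equation of RFC 5348; a direct transcription is below (using the RFC's recommended simplification t_RTO = 4*RTT; note this computes one TFRC flow's rate only, not MulTFRC's n-flow generalization):

```python
from math import sqrt

def tfrc_rate(p, rtt, s=1460, b=1):
    """TCP throughput equation used by TFRC (RFC 5348), in bytes/s:
    p is the loss event rate, rtt the round-trip time in seconds,
    s the segment size in bytes, b the packets acknowledged per ACK."""
    t_rto = 4 * rtt  # retransmission timeout, per the RFC's shortcut
    denom = (rtt * sqrt(2 * b * p / 3)
             + t_rto * (3 * sqrt(3 * b * p / 8)) * p * (1 + 32 * p ** 2))
    return s / denom

# More loss means a lower smooth sending rate, as for TCP itself:
low_loss = tfrc_rate(0.01, 0.1)
high_loss = tfrc_rate(0.04, 0.1)
```

Because the equation responds to a smoothed loss event rate rather than to individual losses, the resulting rate varies slowly, which is what makes it suitable for media streams.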


Systems Communications | 2008

Networks on Chips: Scalable interconnects for future systems on chips

Muhammad Ali; Michael Welzl; Martin Zwicknagl

According to the International Technology Roadmap for Semiconductors (ITRS), before the end of this decade we will be entering the era of a billion transistors on a single chip. It has been stated that soon we will have 50-100 nm chips comprising around 4 billion transistors operating at a frequency of 10 GHz. Such a development means that in the near future we will probably have devices with complex functions, ranging from mere mobile phones to mobile devices controlling satellite functions. But developing such chips is not an easy task: as the number of transistors on a chip increases, so does the complexity of integrating them. Today's SoCs use shared or dedicated buses to interconnect the communicating on-chip resources. However, these buses are not scalable beyond a certain limit, so the current interconnect infrastructure will become a bottleneck for the development of billion-transistor chips. Hence, in this tutorial, we will try to highlight a new design paradigm that has been proposed to counter the inefficiency of buses in future SoCs. This design paradigm has been given a variety of names, but the most common and agreed-upon one is networks on chips (NoCs). We will show how this paradigm shift from ordinary buses to networks on chips can make the kind of SoCs mentioned above very much possible.


EURASIP Journal on Advances in Signal Processing | 2005

Passing corrupt data across network layers: an overview of recent developments and issues

Michael Welzl

Recent Internet developments seem to make a point for passing corrupt data from the link to the network layer and above instead of ensuring data integrity with a checksum and ARQ. We give an overview of these efforts (the UDP Lite and DCCP protocols) and explain which circumstances would justify delivery of erroneous data; clearly, the missing piece in the puzzle is efficient and meaningful interlayer communication.
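The core of the UDP-Lite proposal (RFC 3828) is a checksum that covers only a prefix of the datagram, so bit errors in the uncovered payload reach the application instead of triggering a drop. A minimal sketch of that idea, using the RFC 1071 one's-complement sum over the first `coverage` bytes (header and pseudo-header details of the real protocol are omitted):

```python
def ones_complement_sum16(data: bytes) -> int:
    """16-bit one's-complement Internet checksum (RFC 1071 style)."""
    if len(data) % 2:
        data += b"\x00"  # pad odd-length input
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
    while total >> 16:
        total = (total & 0xFFFF) + (total >> 16)  # fold the carries
    return ~total & 0xFFFF

def partial_checksum(packet: bytes, coverage: int) -> int:
    """UDP-Lite's key idea: checksum only the first `coverage` bytes,
    so bit errors beyond that point do not invalidate the packet."""
    return ones_complement_sum16(packet[:coverage])

pkt = bytes(range(64))
ok = partial_checksum(pkt, 16)
# Corrupting byte 40 (outside the covered prefix) goes undetected:
damaged = pkt[:40] + b"\xff" + pkt[41:]
assert partial_checksum(damaged, 16) == ok
```

This is precisely the behavior that is useful for media codecs that tolerate bit errors better than whole-packet loss, and useless without the interlayer communication the abstract calls the missing piece.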


International Conference on Computer Communications | 2014

The New AQM Kids on the Block: An Experimental Evaluation of CoDel and PIE

Naeem Khademi; David Ros; Michael Welzl

Active Queue Management (AQM) design has again come into the spotlight of network operators, vendors and OS developers. This reflects the growing concern and sensitivity about the end-to-end latency perceived by today's Internet users. CoDel and PIE are two AQM mechanisms that have recently been presented and discussed in the IRTF and the IETF as solutions for keeping latency low. To the best of our knowledge, they have so far only been evaluated or compared against each other using default parameter settings, which naturally presents a rather limited view of their operational range. We thus set out to perform a broader experimental evaluation using real-world implementations in a wired testbed. In addition, we compared them with a decade-old variant of RED called Adaptive RED, which shares with CoDel and PIE the goal of “knob-free” operation. Surprisingly, in several instances results were favorable towards Adaptive RED.
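CoDel's drop law (RFC 8289) is compact enough to sketch: drop once the packet sojourn time has exceeded a 5 ms target for a full 100 ms interval, then space further drops by interval/sqrt(count) so pressure rises until the standing queue drains. The following is a toy version for illustration only (the RFC's re-entry shortcuts and minimum-queue check are omitted; the class name is our own):

```python
from math import sqrt

TARGET = 0.005    # 5 ms sojourn-time target (CoDel default)
INTERVAL = 0.100  # 100 ms observation interval (CoDel default)

class CoDelSketch:
    """Toy version of CoDel's per-dequeue drop decision."""

    def __init__(self):
        self.first_above = None  # deadline set when sojourn first exceeds TARGET
        self.dropping = False
        self.count = 0
        self.drop_next = 0.0

    def on_dequeue(self, now, sojourn):
        """Return True if the packet dequeued at time `now` with the
        given sojourn time (both in seconds) should be dropped."""
        if sojourn < TARGET:
            self.first_above = None      # queue is fine: reset everything
            self.dropping = False
            return False
        if self.first_above is None:
            self.first_above = now + INTERVAL
            return False
        if not self.dropping:
            if now >= self.first_above:  # above target for a full INTERVAL
                self.dropping = True
                self.count = 1
                self.drop_next = now + INTERVAL / sqrt(self.count)
                return True
            return False
        if now >= self.drop_next:        # sqrt control law: drop faster
            self.count += 1
            self.drop_next = now + INTERVAL / sqrt(self.count)
            return True
        return False

# A persistent 20 ms sojourn time triggers no drops for the first
# 100 ms, then increasingly frequent drops:
aqm = CoDelSketch()
drops = [aqm.on_dequeue(t / 100, 0.02) for t in range(30)]
```

PIE reaches a similar goal by a different route, a proportional-integral controller on a drop probability, which is exactly why the paper's parameter-sweep comparison is interesting.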

Collaboration


Dive into Michael Welzl's collaboration.

Top Co-Authors

David A. Hayes - Swinburne University of Technology
Muhammad Ali - University of Innsbruck
Grenville J. Armitage - Swinburne University of Technology
David Ros - Simula Research Laboratory
Max Mühlhäuser - Technische Universität Darmstadt