Robert Birke
IBM
Publications
Featured research published by Robert Birke.
International Conference on Communications | 2010
Andrea Bianco; Robert Birke; Luca Giraudo; M. Palacin
OpenFlow is an open standard that can be implemented in Ethernet switches, routers and wireless access points (APs). In the OpenFlow framework, packet forwarding (data plane) and routing decisions (control plane) run on different devices. OpenFlow switches are in charge of packet forwarding, whereas a controller sets up the switch forwarding tables on a per-flow basis, to enable flow isolation and resource slicing. We focus on the data path and analyze the OpenFlow implementation in Linux-based PCs. We compare OpenFlow switching, layer-2 Ethernet switching and layer-3 IP routing performance. Forwarding throughput and packet latency in underloaded and overloaded conditions are analyzed, with different traffic patterns. System scalability is analyzed using different forwarding table sizes, and fairness in resource distribution is measured.
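A minimal sketch (not the paper's code) of the data/control plane split the abstract describes: the switch only matches packets against a flow table installed by a separate controller, and table misses are punted to the controller. The controller policy shown is a hypothetical placeholder.

```python
class Controller:
    """Stands in for the remote control plane; the policy here is illustrative."""
    def decide(self, pkt):
        # e.g. slice resources by spreading (src, dst) pairs over 4 output ports
        return hash((pkt["src"], pkt["dst"])) % 4

class OpenFlowSwitch:
    def __init__(self, controller):
        self.flow_table = {}          # (src, dst) -> output port, set per flow
        self.controller = controller

    def forward(self, pkt):
        key = (pkt["src"], pkt["dst"])
        if key not in self.flow_table:           # table miss: ask control plane
            self.flow_table[key] = self.controller.decide(pkt)
        return self.flow_table[key]              # data plane: fast-path lookup

sw = OpenFlowSwitch(Controller())
print(sw.forward({"src": "10.0.0.1", "dst": "10.0.0.2"}))
```

The paper's measurements probe exactly this fast path: once a flow entry is installed, forwarding cost is dominated by the table lookup.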
IEEE Communications Magazine | 2011
Robert Birke; Emilio Leonardi; Marco Mellia; Arpad Bakay; Tivadar Szemethy; Csaba Kiraly; Renato Lo Cigno; Fabien Mathieu; Luca Muscariello; Saverio Niccolini; Jan Seedorf; Giuseppe Tropea
Peer-to-peer streaming (P2P-TV) applications have recently emerged as cheap and efficient solutions to provide real-time streaming services over the Internet. For the sake of simplicity, typical P2P-TV systems are designed and optimized following a pure layered approach, thus ignoring the effect of design choices on the underlying transport network. This simple approach, however, may constitute a threat to network providers, due to the congestion that P2P-TV traffic can potentially generate. In this article, we present and discuss the architecture of an innovative, network-cooperative P2P-TV application that is being designed and developed within the STREP project NAPA-WINE. Our application is explicitly targeted at favoring cooperation between the application and the transport network layer.
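A hedged sketch of the network-cooperative idea: when choosing neighbors, a peer weighs transport-layer cost (here RTT and whether a candidate shares its AS) instead of selecting purely at random. The scoring weights and fields are our assumptions for illustration, not NAPA-WINE internals.

```python
def score(candidate, my_asn):
    # Prefer nearby peers; halve the cost of peers inside our own AS
    same_as_discount = 0.5 if candidate["asn"] == my_asn else 1.0
    return candidate["rtt_ms"] * same_as_discount   # lower is better

def pick_neighbors(candidates, my_asn, k=3):
    return sorted(candidates, key=lambda c: score(c, my_asn))[:k]

peers = [
    {"id": "A", "rtt_ms": 80, "asn": 3269},
    {"id": "B", "rtt_ms": 20, "asn": 1299},
    {"id": "C", "rtt_ms": 40, "asn": 3269},
]
print([p["id"] for p in pick_neighbors(peers, my_asn=3269)])
```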
High Performance Switching and Routing | 2005
Andrea Bianco; Robert Birke; Davide Bolognesi; Jorge M. Finochietto; Giulio Galante; Marco Mellia; M.L.N.P.P. Prashant; Fabio Neri
Software routers based on off-the-shelf hardware and open-source operating systems are gaining more and more momentum. The reasons are manifold: first, personal computer (PC) hardware is broadly available at low cost; second, large-scale production and the huge market spur manufacturers to closely track the improvements made available by Moore's Law; third, open-source software leaves the freedom to study the source code, learn from it, modify it to improve performance, and tailor its operation to one's own needs. In this paper we focus only on data-plane performance and compare the default Linux IP stack with the Click modular IP stack in terms of forwarding throughput. The results are surprising and show that a high-end PC easily fits into the multi-gigabit-per-second routing segment, at a price much lower than commercial routers.
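Both forwarding paths compared here hinge on the same data-plane primitive: a longest-prefix-match (LPM) route lookup. The sketch below is an illustrative Python rendition of that primitive with a made-up routing table, not the Linux or Click implementation.

```python
import ipaddress

ROUTES = {                       # prefix -> output interface (made-up table)
    "0.0.0.0/0":   "eth0",
    "10.0.0.0/8":  "eth1",
    "10.1.0.0/16": "eth2",
}

def lpm(dst):
    """Return the interface of the most specific matching prefix."""
    addr = ipaddress.ip_address(dst)
    best = max((ipaddress.ip_network(p) for p in ROUTES
                if addr in ipaddress.ip_network(p)),
               key=lambda n: n.prefixlen)
    return ROUTES[str(best)]

print(lpm("10.1.2.3"))   # -> eth2: the /16 beats the /8 and the default route
```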
Dependable Systems and Networks | 2013
Robert Birke; Andrej Podzimek; Lydia Y. Chen; Evgenia Smirni
Hardware virtualization is the prevalent way to share data centers among different tenants. In this paper we present a large-scale workload characterization study that aims at a better understanding of the state of the practice, i.e., how data centers in the private cloud are used by their customers, how physical resources are shared among different tenants using virtualization, and how virtualization technologies are actually employed. Our study covers all corporate data centers of a major infrastructure provider, geographically dispersed across the entire globe, and reports on their observed usage across a 19-day period. We especially focus on how virtual machines are deployed across different physical resources, with an emphasis on processors and memory, examining resource sharing and usage of physical resources, virtual machine life cycles, and migration patterns and frequencies. Our study illustrates that there is a strong tendency to overprovision resources while remaining conservative toward the possibilities opened up by virtualization (e.g., migration and co-location), showing tremendous potential for the development of policies aimed at reducing data center operational costs.
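A toy illustration of the overprovisioning the study reports, with assumed numbers rather than the paper's data: compare the virtual CPUs allocated to the VMs on a host against what those VMs actually consume.

```python
vms = [  # (vCPUs allocated, average utilization of those vCPUs) - assumed values
    (4, 0.10), (2, 0.25), (8, 0.05), (2, 0.40),
]

allocated = sum(v for v, _ in vms)
used = sum(v * u for v, u in vms)   # effective vCPUs actually consumed
print(f"allocated: {allocated} vCPUs, used: {used:.1f} vCPUs, "
      f"overprovisioning factor: {allocated / used:.1f}x")
```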
International Conference on Cloud Computing | 2012
Robert Birke; Lydia Y. Chen; Evgenia Smirni
With the advancement of virtualization technologies and the benefit of economies of scale, industries are seeking scalable IT solutions, such as data centers hosted either in-house or by a third party. Data center availability, often via a cloud setting, is ubiquitous. Nonetheless, little is known about the in-production performance of data centers, and especially the interaction of workload demands and resource availability. This study fills this gap by conducting a large-scale survey of in-production data center servers within a time period that spans two years. We provide an in-depth analysis of the time evolution of existing data center demands through a holistic characterization of typical data center server workloads, focusing on their basic resource components, including CPU, memory, and storage systems. We especially focus on the seasonality of resource demands and how it is affected by different geographical locations. This survey provides a glimpse of the evolution of data center workloads and a basis for an economic analysis that can be used for effective capacity planning of future data centers.
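A sketch of the kind of seasonality analysis such a survey performs: fold a CPU demand time series onto an hour-of-week profile to expose recurring daily or weekly patterns. The data below is synthetic, generated with a daily sinusoid purely for illustration.

```python
import math
from collections import defaultdict

hours = range(24 * 7 * 4)                      # four synthetic weeks
demand = [50 + 30 * math.sin(2 * math.pi * (h % 24) / 24) for h in hours]

profile = defaultdict(list)
for h, d in zip(hours, demand):
    profile[h % (24 * 7)].append(d)            # bucket samples by hour of week

weekly = {h: sum(v) / len(v) for h, v in profile.items()}
peak = max(weekly, key=weekly.get)
print(f"peak demand at hour-of-week {peak}: {weekly[peak]:.1f}%")
```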
High Performance Interconnects | 2011
Daniel Crisan; Andreea Anghel; Robert Birke; Cyriel Minkenberg; Mitchell Gusat
One of the consequential new features of emerging datacenter networks is losslessness, achieved by means of Priority Flow Control (PFC). Despite PFC's key role in the datacenter and its increasing availability, supported by virtually all Converged Enhanced Ethernet (CEE) products, its impact remains largely unknown. This has motivated us to evaluate the sensitivity of three widespread TCP versions to PFC, as well as to the more involved Quantized Congestion Notification (QCN) congestion management mechanism. As datacenter workloads we have adopted several representative commercial and scientific applications. For evaluation we employ an accurate layer-2 CEE network simulator coupled with a TCP implementation extracted from FreeBSD v9. A somewhat unexpected outcome of this investigation is that PFC significantly improves TCP performance across all tested configurations and workloads, hence our recommendation to enable PFC whenever possible. In contrast, QCN can help or harm depending on its parameter settings, which are currently neither adaptive nor universal for datacenters. To the best of our knowledge this is the first evaluation of TCP performance in lossless CEE networks.
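A minimal model of the PFC mechanism under test (our own sketch, not the paper's simulator): when a priority's ingress queue crosses a high-water threshold, the switch pauses the upstream sender instead of dropping frames, which is what makes the fabric lossless. The XOFF/XON thresholds are assumed values.

```python
XOFF, XON = 80, 40                  # queue thresholds in frames (assumed)

class PfcQueue:
    def __init__(self):
        self.depth, self.paused = 0, False

    def enqueue(self):
        self.depth += 1             # never drops: lossless by construction
        if self.depth >= XOFF and not self.paused:
            self.paused = True      # emit PAUSE to the upstream sender
            print("PAUSE sent at depth", self.depth)

    def dequeue(self):
        self.depth = max(0, self.depth - 1)
        if self.depth <= XON and self.paused:
            self.paused = False     # resume once the queue has drained
            print("UNPAUSE sent at depth", self.depth)

q = PfcQueue()
for _ in range(80): q.enqueue()     # burst fills the queue, triggers PAUSE
for _ in range(41): q.dequeue()     # draining past XON triggers UNPAUSE
```

The paper's finding is that this backpressure, by converting drops into brief pauses, spares TCP its costly loss-recovery behavior.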
Dependable Systems and Networks | 2014
Robert Birke; Ioana Giurgiu; Lydia Y. Chen; Dorothea Wiesmann; Ton Engbersen
In today's commercial data centers, the computation density grows continuously as the number of hardware components and workloads, in units of virtual machines, increases. The service availability guaranteed by data centers heavily depends on the reliability of the physical and virtual servers. In this study, we conduct an analysis of 10K virtual and physical machines hosted in five commercial data centers over an observation period of one year. Our objective is to establish a sound understanding of the differences and similarities between failures of physical and virtual machines. We first capture their failure patterns, i.e., the failure rates, the distributions of times between failures and of repair times, as well as the time and space dependency of failures. Moreover, we correlate failures with resource capacity and run-time usage to identify the characteristics of failing servers. Finally, we discuss how virtual machine management actions, i.e., consolidation and on/off frequency, impact virtual machine failures.
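An illustrative computation of the failure patterns the study extracts, mean time between failures (MTBF) and mean time to repair (MTTR) per server, using synthetic events rather than the study's dataset.

```python
from statistics import mean

# (failure time, repair-complete time) in hours since start of observation
events = {"pm-01": [(100, 102), (400, 401)],
          "vm-17": [(50, 50.5), (60, 61), (300, 302)]}

for server, evs in events.items():
    fails = [f for f, _ in evs]
    tbf = [b - a for a, b in zip(fails, fails[1:])]   # gaps between failures
    ttr = [r - f for f, r in evs]                     # repair durations
    print(f"{server}: MTBF={mean(tbf):.1f}h MTTR={mean(ttr):.1f}h "
          f"failures={len(evs)}")
```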
International Journal of Network Management | 2010
Robert Birke; Marco Mellia; Michele Petracca; Dario Rossi
VoIP (Voice over IP) has widely been regarded as the technology that will change the telecommunication model, opening the path for convergence. Yet this revolution is far from complete, since, as of today, the majority of telephone calls are still originated by circuit-oriented networks. In this paper we present our experience in the real-time monitoring of VoIP calls from a commercial operational network. We discuss and present a methodology and a large dataset of measurements, collected from the backbone of FastWeb, one of the first Telecom operators worldwide to offer VoIP and high-speed data access to the end user. Traffic characterization focuses on several layers, concentrating on both end-user and ISP perspectives. In particular, we highlight that, among loss, delay and jitter, only the first index may affect VoIP call quality. Overall, results show that the technology is mature enough to make the final step, allowing the integration of data and real-time services over the Internet.
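The abstract singles out loss, delay and jitter as the candidate quality indices. As a concrete reference point, this is the standard RTP interarrival jitter estimator from RFC 3550 (the 1/16 smoothing constant comes from the RFC); the transit times below are made up.

```python
def rtp_jitter(transit_times):
    """transit_times: per-packet (arrival time - RTP timestamp) in ms."""
    j, prev = 0.0, transit_times[0]
    for t in transit_times[1:]:
        j += (abs(t - prev) - j) / 16.0   # J = J + (|D(i-1,i)| - J)/16
        prev = t
    return j

print(f"jitter: {rtp_jitter([20, 22, 21, 35, 20, 23]):.2f} ms")
```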
International Conference on Peer-to-Peer Computing | 2012
Stefano Traverso; Luca Abeni; Robert Birke; Csaba Kiraly; Emilio Leonardi; Renato Lo Cigno; Marco Mellia
The performance of P2P-TV systems is driven by the overlay topology that peers form. Several proposals have been made in the past to optimize it, yet few experimental studies have corroborated the results. The aim of this work is to provide a comprehensive experimental comparison of different strategies for the construction and maintenance of the overlay topology in P2P-TV systems. To this goal, we have implemented different fully distributed strategies in a P2P-TV application, called PeerStreamer, which we use to run extensive experimental campaigns in a completely controlled set-up involving thousands of peers and spanning very different networking scenarios. Results show that the topological properties of the overlay have a deep impact on both user quality of experience and network load. Strategies based solely on random peer selection are greatly outperformed by smart, yet simple, strategies that can be implemented with negligible overhead. Even in different and complex scenarios, the best-performing neighborhood filtering strategy we devised delivers almost all chunks to all peers with a play-out delay as low as 6 s, even with system loads close to 1.0. Results are confirmed by experiments on PlanetLab. PeerStreamer is open source, to make results reproducible and to allow further research by the community.
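A hedged sketch of the "smart yet simple" alternative to random peer selection that the experiments favor: periodically drop the worst-performing neighbors (here ranked by measured chunk delivery delay) and refill the neighborhood with random candidates. The field names and drop fraction are our assumptions, not PeerStreamer internals.

```python
import random

def filter_neighborhood(neighbors, candidates, drop_frac=0.3):
    # Keep the neighbors that deliver chunks fastest, drop the slowest tail
    keep = sorted(neighbors, key=lambda n: n["chunk_delay_ms"])
    n_drop = int(len(keep) * drop_frac)
    kept = keep[:len(keep) - n_drop]
    # Replace the dropped slots with fresh random candidates to keep exploring
    fresh = random.sample(candidates, min(n_drop, len(candidates)))
    return kept + fresh

neighbors = [{"id": i, "chunk_delay_ms": random.uniform(100, 3000)}
             for i in range(10)]
candidates = [{"id": 100 + i, "chunk_delay_ms": None} for i in range(20)]
print([n["id"] for n in filter_neighborhood(neighbors, candidates)])
```

Mixing exploitation (keep fast neighbors) with exploration (random refill) is the negligible-overhead ingredient the abstract alludes to.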
High Performance Switching and Routing | 2008
Andrea Bianco; Robert Birke; Jorge M. Finochietto; Luca Giraudo; F. Marenco; Marco Mellia; A. Khan; D. Manjunath
Software routers based on personal computer (PC) architectures are receiving increasing attention in the research community. However, a router based on a single PC suffers from limited bus and central processing unit (CPU) bandwidth, high memory access latency, limited scalability in terms of the number of network interface cards, and a lack of resilience mechanisms. Multi-stage architectures created by interconnecting several PCs are an interesting alternative, since they make it possible to i) increase the performance of single software routers, ii) scale router size, iii) distribute packet-forwarding and control functionalities, iv) recover from single-component failures, and v) incrementally upgrade router performance. However, a crucial issue is to hide the internal details of the interconnected architecture so that it behaves externally as a single router, especially when considering the control and management planes. In this paper, we describe a control protocol for a previously proposed multi-stage architecture based on PC interconnection. The protocol permits information exchange among the internal PCs to support: i) configuration of the interconnected architecture, ii) packet forwarding, iii) routing table distribution, and iv) management of the internal devices. The protocol is operating-system independent, since it interacts with software routing suites such as Quagga and XORP, and it is under test in our labs on a small-scale prototype of the multi-stage router.
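A minimal sketch of the kind of internal message such a control protocol exchanges so the interconnected PCs appear externally as one router: an internal PC announcing a routing-table entry to its peers. The message fields and types are hypothetical, not the paper's wire format.

```python
import json

def make_route_update(sender_id, prefix, next_hop, seq):
    """Serialize an internal routing-table announcement (illustrative schema)."""
    return json.dumps({
        "type": "ROUTE_UPDATE",     # other types might be CONFIG, KEEPALIVE, MGMT
        "sender": sender_id,        # identifier of the internal PC
        "seq": seq,                 # for ordering and duplicate suppression
        "prefix": prefix,
        "next_hop": next_hop,
    })

def apply_update(rib, msg):
    """Install the announced entry into a peer's local routing table."""
    u = json.loads(msg)
    rib[u["prefix"]] = u["next_hop"]

rib = {}
apply_update(rib, make_route_update("pc-3", "192.168.0.0/16", "pc-1:eth2", 7))
print(rib)
```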