Mitch Gusat
IBM
Publications
Featured research published by Mitch Gusat.
IEEE Communications Magazine | 2016
Marco Chiesa; Christoph Dietzel; Gianni Antichi; Marc Bruyere; Ignacio Castro; Mitch Gusat; Thomas King; Andrew W. Moore; Thanh Dang Nguyen; Philippe Owezarski; Steve Uhlig; Marco Canini
While innovation in inter-domain routing has remained stagnant for over a decade, Internet exchange points (IXPs) are consolidating their role as economically advantageous interconnection points for reducing path latencies and exchanging ever increasing amounts of traffic. As such, IXPs appear as a natural place to foster network innovation and assess the benefits of SDN, a recent technological trend that has already boosted innovation within data center networks. In this article, we give a comprehensive overview of use cases for SDN at IXPs, which leverage the superior vantage point of an IXP to introduce advanced features like load balancing and DDoS mitigation. We discuss the benefits of SDN solutions by analyzing real-world data from one of the largest IXPs. We also leverage insights into IXP operations to shape benefits not only for members but also for operators.
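The use cases above rely on installing fine-grained match-action rules in the IXP fabric. As a minimal, hypothetical sketch (not the paper's implementation, and not tied to OpenFlow or any specific controller API), the snippet below expresses a DDoS blackholing rule and a simple per-prefix load-balancing policy as abstract match-action entries; the FlowRule class and all field names are illustrative assumptions.

```python
from dataclasses import dataclass, field

# Illustrative match-action abstraction; names are hypothetical, not a real SDN API.
@dataclass
class FlowRule:
    priority: int
    match: dict
    actions: list = field(default_factory=list)

def ddos_blackhole(victim_prefix: str, udp_src_port: int) -> FlowRule:
    """Drop amplification traffic (e.g., UDP source port 123/NTP) aimed at a member prefix."""
    return FlowRule(
        priority=100,
        match={"ip_dst": victim_prefix, "ip_proto": "udp", "udp_src": udp_src_port},
        actions=[],  # no actions in this toy abstraction == drop
    )

def load_balance(prefix: str, member_ports: list) -> list:
    """Spread traffic for one prefix across several member ports by source-hash bucket."""
    n = len(member_ports)
    return [
        FlowRule(
            priority=50,
            match={"ip_dst": prefix, "src_hash_bucket": b},
            actions=[("output", member_ports[b % n])],
        )
        for b in range(n)
    ]

if __name__ == "__main__":
    rules = [ddos_blackhole("203.0.113.0/24", 123)] + load_balance("198.51.100.0/24", [5, 7, 9])
    for r in rules:
        print(r)
```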
Proceedings of the 8th International Workshop on Interconnection Network Architecture | 2014
Nikolaos Chrysos; Fredy D. Neeser; Mitch Gusat; Cyriel Minkenberg; Wolfgang E. Denzel; Claude Basso
Performance optimized datacenters (PoDs) require efficient PoD interconnects to deal with the increasing volumes of inter-server (east-west) traffic. To cope with these stringent traffic patterns, datacenter networks are abandoning the oversubscribed topologies of the past and moving towards full-bisection fat-tree fabrics. However, these fabrics typically employ either single-path or coarse-grained (flow-level) multi-path routing. In this paper, we use computer simulations and analysis to characterize the waste of bandwidth that is due to routing inefficiencies. Our analysis suggests that, under a randomly selected permutation, the expected throughputs of d-mod-k routing and of flow-level multi-path routing are close to 63% and 47%, respectively. Furthermore, nearly 30% of the flows are expected to undergo an unnecessary 3-fold slowdown. By contrast, packet-level multi-path routing consistently delivers full throughput to all flows and proactively avoids internal hotspots, thus better serving the growing demands of inter-server (east-west) traffic.
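The qualitative gap behind the 63% vs. 47% figures can be illustrated with a small Monte Carlo experiment: route a random permutation over a two-level fat-tree, count the flows colliding on each leaf-spine link, and credit each flow the fair share of its most loaded link. The topology parameters and the single-bottleneck fair-sharing assumption below are simplifications, so the sketch shows the same trend rather than reproducing the paper's exact numbers.

```python
import random
from collections import Counter

def avg_throughput(radix=16, trials=200, policy="dmodk", seed=1):
    """Average per-flow throughput of a random permutation on a two-level fat-tree.

    Simplifying assumptions (not the paper's exact model): radix-port switches,
    radix leaves with radix//2 hosts each, radix//2 spines, and fair sharing
    on each flow's single most loaded link.
    """
    rng = random.Random(seed)
    leaves, spines, per_leaf = radix, radix // 2, radix // 2
    hosts = leaves * per_leaf
    acc = 0.0
    for _ in range(trials):
        perm = list(range(hosts))
        rng.shuffle(perm)
        up, down, flows = Counter(), Counter(), []
        for s, d in enumerate(perm):
            ls, ld = s // per_leaf, d // per_leaf
            if ls == ld:
                continue                      # intra-leaf flows never cross the spine
            if policy == "dmodk":
                spine = d % spines            # deterministic, destination-based choice
            else:                             # "ecmp": random per-flow (hash-like) choice
                spine = rng.randrange(spines)
            up[(ls, spine)] += 1
            down[(spine, ld)] += 1
            flows.append(((ls, spine), (spine, ld)))
        acc += sum(1.0 / max(up[u], down[v]) for u, v in flows) / len(flows)
    return acc / trials

if __name__ == "__main__":
    for p in ("dmodk", "ecmp"):
        # roughly 0.63 vs ~0.5 under these toy assumptions
        print(p, round(avg_throughput(policy=p), 3))
```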
Concurrency and Computation: Practice and Experience | 2017
Pablo Fuentes; Mariano Benito; Enrique Vallejo; José Luis Bosque; Ramón Beivide; Andreea Anghel; German Rodriguez; Mitch Gusat; Cyriel Minkenberg; Mateo Valero
The Graph500 benchmark attempts to steer the design of High-Performance Computing systems to maximize performance under memory-constrained application workloads. A realistic simulation of such benchmarks for architectural research is challenging due to size and detail limitations. By contrast, synthetic traffic workloads constitute one of the least resource-consuming methods to evaluate performance. In this work, we provide a simulation tool for network architects who need to evaluate the suitability of their interconnect for Big Data applications. Our development is a synthetic traffic model with low computation and memory demands that emulates the behavior of the Graph500 communications and is publicly available in an open-source network simulator. The characterization of the network traffic is inferred from a profile of several executions of the benchmark with different input parameters. We verify the validity of the equations in our model against an execution of the benchmark with a different set of parameters. Furthermore, we identify the impact of the node computation capabilities and network characteristics on the execution time of the model in a Dragonfly network.
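The fitted equations of the model come from profiling the benchmark itself; the skeleton below only illustrates the general shape of such a synthetic generator. The per-BFS-level, roughly all-to-all exchange is genuine Graph500 behavior, but the level count, volume profile, and message size here are placeholders, not the paper's fitted parameters.

```python
import random

def synthetic_graph500_traffic(nodes, scale, levels=6, seed=0):
    """Emit (level, src, dst, bytes) records mimicking the per-BFS-level,
    near all-to-all exchange of frontier vertices in a distributed BFS.

    The level weights, jitter, and 48-byte message size are assumptions;
    the actual model derives these from profiled benchmark executions.
    """
    rng = random.Random(seed)
    total_vertices = 2 ** scale
    # Placeholder frontier profile: most vertices appear in the middle BFS levels.
    weights = [1, 8, 40, 30, 5, 1][:levels]
    norm = sum(weights)
    records = []
    for level, w in enumerate(weights):
        frontier = total_vertices * w // norm
        per_pair = max(1, frontier // (nodes * nodes))   # near all-to-all spread
        for src in range(nodes):
            for dst in range(nodes):
                if src == dst:
                    continue
                msgs = max(0, int(rng.gauss(per_pair, per_pair * 0.2)))
                records.append((level, src, dst, msgs * 48))
    return records

if __name__ == "__main__":
    trace = synthetic_graph500_traffic(nodes=16, scale=20)
    print(len(trace), "records; sample:", trace[:3])
```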
Proceedings of the 2013 Workshop on Interconnection Network Architecture: On-Chip, Multi-Chip | 2013
Daniel Crisan; Robert Birke; Nikolaos Chrysos; Mitch Gusat
Key to the economic viability of clouds and datacenters is their elastic scalability. Consequently, most related research focuses on datacenter fabric scalability, efficiency, performance, virtualization, and optimal virtual machine (VM) allocation and migration. Here we ask the following questions: Given a set of tenant workloads running on generic servers interconnected by a 10-100G Ethernet fabric with modern network virtualization and transport protocols, how can the datacenter operator reach the optimal operation region? How is this optimum defined, how is it traded between operator and tenants, and with what metrics is it measured? In this paper we propose an evaluation methodology and a set of simple but descriptive metrics as a first attempt to answer the questions raised above. As a proof of concept, we investigate a multitenant virtualized datacenter network running a 3-tier workload. Our proposal enables a quantitative comparison between competing datacenter fabrics and virtualization architectures.
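As a concrete, hypothetical example of the kind of simple metric such a methodology calls for, one could score each tenant's degradation as the ratio of its flow completion times under shared, virtualized operation to a baseline run in isolation, then report the mean and the worst tenant. The function below is only an illustration of that idea, not the paper's metric set.

```python
import statistics

def tenant_slowdowns(shared_fct, isolated_fct):
    """Per-tenant slowdown = mean FCT when sharing the fabric / mean FCT in isolation.

    shared_fct and isolated_fct map tenant -> list of flow completion times
    (seconds). Purely illustrative; the paper defines its own metrics.
    """
    slow = {}
    for tenant, fcts in shared_fct.items():
        base = statistics.mean(isolated_fct[tenant])
        slow[tenant] = statistics.mean(fcts) / base
    return slow

def summarize(slowdowns):
    vals = sorted(slowdowns.values())
    return {
        "mean": statistics.mean(vals),
        "worst": vals[-1],   # the tail tenant acts as a crude fairness indicator
    }

if __name__ == "__main__":
    shared = {"t1": [1.2, 1.4, 1.3], "t2": [2.5, 2.8]}
    isolated = {"t1": [1.0, 1.1, 1.0], "t2": [1.0, 1.2]}
    s = tenant_slowdowns(shared, isolated)
    print(s, summarize(s))
```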
Proceedings of the 2013 Workshop on Interconnection Network Architecture: On-Chip, Multi-Chip | 2013
Nikolaos Chrysos; Fredy D. Neeser; Mitch Gusat; Rolf Clauberg; Cyriel Minkenberg; Claude Basso; Kenneth M. Valk
Network devices supporting above-100G links are needed today in order to scale communication bandwidth along with the processing capabilities of computing nodes in data centers and warehouse-scale computers. In this paper, we propose a lightweight, fair scheduler for such ultra-high-speed links and an arbitrarily large number of requestors. We show that, in practice, our first algorithm, as well as its predecessor DRR, may result in bursty service even in the common case where flow weights are approximately equal, and we identify applications where this can damage performance. Our second contribution is an enhancement that improves short-term fairness to deliver very smooth service when flow weights are approximately equal, whilst allocating bandwidth in a weighted fair manner.
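For context, the classic Deficit Round Robin baseline mentioned in the abstract can be sketched in a few lines; the burstiness arises because a flow may drain several back-to-back packets once its deficit counter covers them. The sketch below is textbook DRR (Shreedhar and Varghese), not the new scheduler proposed in the paper.

```python
from collections import deque

class DRR:
    """Textbook Deficit Round Robin: on each visit a flow's deficit grows by
    `quantum` and it may send packets back-to-back while the deficit covers
    them, which is the source of the bursty service the abstract points out."""

    def __init__(self, quantum=1500):
        self.quantum = quantum
        self.queues = {}          # flow -> deque of packet sizes (bytes)
        self.deficit = {}         # flow -> accumulated credit (bytes)
        self.active = deque()     # backlogged flows in round-robin order

    def enqueue(self, flow, size):
        if flow not in self.queues:
            self.queues[flow] = deque()
            self.deficit[flow] = 0
        if not self.queues[flow] and flow not in self.active:
            self.active.append(flow)
        self.queues[flow].append(size)

    def visit_next(self):
        """Serve the next backlogged flow for one DRR visit; return the packets sent."""
        if not self.active:
            return []
        flow = self.active.popleft()
        q = self.queues[flow]
        self.deficit[flow] += self.quantum
        sent = []
        while q and self.deficit[flow] >= q[0]:
            size = q.popleft()
            self.deficit[flow] -= size
            sent.append((flow, size))
        if q:
            self.active.append(flow)   # still backlogged: revisit next round
        else:
            self.deficit[flow] = 0     # idle flows keep no credit
        return sent

if __name__ == "__main__":
    drr = DRR(quantum=3000)
    for f in ("A", "B"):
        for _ in range(4):
            drr.enqueue(f, 1500)
    for _ in range(4):
        print(drr.visit_next())   # note the two back-to-back packets per visit
```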
International Conference on Computer Communications | 2013
Daniel Crisan; Robert Birke; Cyriel Minkenberg; Mitch Gusat
Current hypervisor software drops packets in the virtual network. This behavior is suboptimal: it wastes network resources and harms performance. We propose to extend the flow control mechanisms that currently exist only in physical networks into virtual networks. Using a simple setup, we show the advantages of losslessness in a virtualized environment.
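One minimal way to picture the proposal is a PAUSE- or credit-style backpressure loop between a virtual-switch queue and its upstream sender, so the queue never overflows and nothing is dropped. The snippet below is a toy illustration under assumed queue sizes and thresholds, not the hypervisor mechanism from the paper.

```python
class LosslessVQueue:
    """Toy lossless virtual-switch queue: instead of dropping when full, it
    asserts backpressure (in the spirit of Ethernet PFC or credit flow
    control) so the upstream vNIC stops sending. Sizes are illustrative."""

    def __init__(self, capacity=64, xoff=48, xon=16):
        self.buf = []
        self.capacity, self.xoff, self.xon = capacity, xoff, xon
        self.paused = False       # state signalled back to the sender

    def offer(self, pkt):
        """Upstream tries to enqueue; returns False only if it ignored PAUSE."""
        if len(self.buf) >= self.capacity:
            return False          # would be a drop in today's lossy virtual switch
        self.buf.append(pkt)
        if len(self.buf) >= self.xoff:
            self.paused = True    # tell the vNIC to stop (XOFF / credits exhausted)
        return True

    def drain(self, n=1):
        """Downstream consumes packets and may release the sender (XON)."""
        out, self.buf = self.buf[:n], self.buf[n:]
        if self.paused and len(self.buf) <= self.xon:
            self.paused = False
        return out

if __name__ == "__main__":
    q = LosslessVQueue()
    sent = dropped = 0
    for i in range(200):
        if not q.paused:          # a well-behaved sender honours backpressure
            q.offer(i)
            sent += 1
        if i % 3 == 0:
            q.drain(2)
    print("sent", sent, "dropped", dropped)   # dropped stays 0: losslessness
```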
Archive | 2012
Daniel Crisan; Casimer M. DeCusatis; Mitch Gusat; Cyriel Minkenberg
Archive | 2003
Francois Abel; Wolfgang E. Denzel; Antonius Engbersen; Ferdinand Gramsamer; Mitch Gusat; Ronald P. Luijten; Cyriel Minkenberg; Mark Verhappen
Archive | 2003
Michael Colmant; Alan F. Benner; Francois Abel; Michel Poret; Norbert Schumacher; Alain Blanc; Mark Verhappen; Mitch Gusat
Archive | 2013
Daniel Crisan; Casimer M. DeCusatis; Mitch Gusat; Cyriel Minkenberg