Publications
Featured research published by Eric Levy-Abegnoli.
International Conference on Computer Communications | 1999
Eric Levy-Abegnoli; Arun Iyengar; Junehwa Song; Daniel M. Dias
We describe the design, implementation and performance of a Web server accelerator which runs on an embedded operating system and improves Web server performance by caching data. The accelerator resides in front of one or more Web servers. Our accelerator can serve up to 5000 pages/second from its cache on a 200 MHz PowerPC 604. This throughput is an order of magnitude higher than that which would be achieved by a high-performance Web server running on similar hardware under a conventional operating system such as Unix or NT. The superior performance of our system results in part from its highly optimized communications stack. In order to maximize hit rates and maintain updated caches, our accelerator provides an API which allows application programs to explicitly add, delete, and update cached data. The API allows our accelerator to cache dynamic as well as static data. We analyze the SPECweb96 benchmark and show that the accelerator can provide high hit ratios and excellent performance for workloads similar to this benchmark.
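The explicit cache-management API described in this abstract might look roughly like the sketch below. The class and method names (`CacheAccelerator`, `add`, `update`, `delete`) are illustrative assumptions, not the paper's actual interface.

```python
# Hypothetical sketch of an explicit cache-management API like the one
# the abstract describes; names are illustrative, not the paper's API.
class CacheAccelerator:
    def __init__(self, capacity):
        self.capacity = capacity  # max number of cached pages
        self.store = {}           # URL -> page bytes (insertion-ordered)
        self.hits = 0
        self.misses = 0

    def add(self, url, data):
        """Application explicitly inserts a (possibly dynamic) page."""
        if url not in self.store and len(self.store) >= self.capacity:
            self.store.pop(next(iter(self.store)))  # evict oldest insert
        self.store[url] = data

    def update(self, url, data):
        """Application pushes fresh content so the cached copy stays current."""
        self.store[url] = data

    def delete(self, url):
        """Application invalidates a page it knows is stale."""
        self.store.pop(url, None)

    def get(self, url):
        if url in self.store:
            self.hits += 1
            return self.store[url]
        self.misses += 1
        return None  # miss: forward the request to the back-end Web server
```

Because the application itself adds and invalidates entries, dynamic pages can be cached safely: the cache never keeps serving a page the application knows has changed.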
International Symposium on Performance Analysis of Systems and Software | 2000
Junehwa Song; Eric Levy-Abegnoli; Arun Iyengar; Daniel M. Dias
We study design alternatives for, and describe implementations and performance of, a scalable and highly available Web server accelerator. The accelerator runs under an embedded operating system and improves Web server performance by caching data. The basic design alternatives place either a content router or a TCP router (without content routing) in front of a set of Web cache accelerator nodes, with the cache memory distributed across the accelerator nodes. Content-based routing reduces cache node CPU cycles but can make the front-end router a bottleneck. With the TCP router, a request for a cached object may initially be sent to the wrong cache node; this costs additional cache node CPU cycles, but can provide a higher aggregate throughput, because the TCP router becomes a bottleneck at a higher throughput than the content router. Based on measurements of our implementations, we quantify the throughput ranges in which different designs are preferable. We also examine a combination of content-based and TCP routing techniques. Finally, we examine optimizations such as different communication and data delivery methods, replication of hot objects, and cache replacement policies that account for the fact that different resources may become bottlenecks at different times; depending on which resource is likely to become the bottleneck, a different cache replacement algorithm is applied.
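The two front-end designs this abstract compares can be sketched as follows. The hash-based node selection and the forward-to-owner step are assumptions for illustration, not the paper's mechanism.

```python
# Sketch of the two routing designs compared in the abstract: a content
# router that parses the request before choosing a node, versus a TCP
# router that picks a node blindly and may need an extra hop.
# Node selection by hash is an illustrative assumption.

NODES = ["node0", "node1", "node2"]

def owner(url):
    """The cache node whose partition holds this URL."""
    return NODES[hash(url) % len(NODES)]

def content_route(url):
    # Content router: terminates the connection, parses the HTTP request,
    # and sends it straight to the owning cache node. Routing is always
    # correct, but the parsing work can make the router the bottleneck.
    return owner(url), 0  # (target node, extra hops)

def tcp_route(url, conn_hash):
    # TCP router: picks a node without inspecting the URL (e.g. by a
    # per-connection hash), so the first node may be wrong and must
    # forward the request to the owner, costing extra cache-node CPU.
    first = NODES[conn_hash % len(NODES)]
    target = owner(url)
    return target, (0 if first == target else 1)
```

The trade-off the paper quantifies falls out directly: the content router spends router cycles to guarantee zero extra hops, while the TCP router stays cheap per request but wastes cache-node cycles on the misrouted fraction.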
Computer Networks | 2002
Junehwa Song; Arun Iyengar; Eric Levy-Abegnoli; Daniel M. Dias
We describe the design, implementation and performance of a high-performance Web server accelerator which runs on an embedded operating system and improves Web server performance by caching data. It can serve Web data at rates an order of magnitude higher than that which would be achieved by a high-performance Web server running on similar hardware under a conventional operating system such as Unix or NT. The superior performance of our system results in part from its highly optimized communications stack. In order to maximize hit rates and maintain updated caches, our accelerator provides an API which allows application programs to explicitly add, delete, and update cached data. The API allows our accelerator to cache dynamic as well as static data. We describe how our accelerator can be scaled to multiple processors to increase performance and availability. The basic design alternatives include a content router or a TCP router (without content routing) in front of a set of Web cache accelerator nodes, with the cache memory distributed across the accelerator nodes. Content-based routing reduces cache node CPU cycles but can make the front-end router a bottleneck. With the TCP router, a request for a cached object may initially be sent to the wrong cache node; this results in larger cache node CPU cycles, but can provide a higher aggregate throughput, because the TCP router becomes a bottleneck at a higher throughput than the content router. We quantify the throughput ranges in which different designs are preferable. We also examine a combination of content-based and TCP routing techniques. In addition, we present statistics from critical deployments of our accelerator for improving performance at highly accessed Sporting and Event Web sites hosted by IBM.
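The bottleneck-dependent cache replacement idea raised in the 2000 paper above admits a simple sketch: evict by a different criterion depending on which resource is currently saturated. The two policies and the entry fields below are illustrative assumptions, not the papers' actual algorithms.

```python
# Illustrative sketch of bottleneck-dependent cache replacement:
# when CPU is the bottleneck, keep the most frequently hit objects
# (every hit avoided saves request-processing cycles); when memory is
# the bottleneck, evict the largest object to free the most space.
# The policy pairing and entry fields are assumptions for illustration.

def choose_victim(entries, bottleneck):
    """entries: dict url -> {"hits": int, "size": int}; returns URL to evict."""
    if bottleneck == "cpu":
        # CPU-bound: evict the least-frequently-used object.
        return min(entries, key=lambda u: entries[u]["hits"])
    # Memory-bound: evict the largest object.
    return max(entries, key=lambda u: entries[u]["size"])
```

The point of the design is that no single replacement algorithm is best at all times; the cache switches policies as the system's limiting resource changes.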
Archive | 1997
Michael E. Baskey; Donna N. Dillenberger; Germán S. Goldszmidt; Guerney D. H. Hunt; Eric Levy-Abegnoli; Jeffrey M. Nick; Donald W. Schmidt
Archive | 1998
Ghislaine Couland; Guerney D. H. Hunt; Eric Levy-Abegnoli; Daniel Mauduit
Archive | 1999
Marc Lamberton; Eric Levy-Abegnoli; Eric Montagnon; Pascal Thubert
Archive | 1996
Olivier Bertin; Eric Levy-Abegnoli
Archive | 2001
Eric Levy-Abegnoli; Pascal Thubert
Archive | 2001
James R. H. Challenger; Paul M. Dantzig; Daniel M. Dias; Arun Iyengar; Eric Levy-Abegnoli
Archive | 2001
Marc Lamberton; Eric Levy-Abegnoli; Pascal Thubert