
Publications


Featured research published by Alan F. Benner.


Conference on High Performance Computing (Supercomputing) | 2005

On the Feasibility of Optical Circuit Switching for High Performance Computing Systems

Kevin J. Barker; Alan F. Benner; Raymond R. Hoare; Adolfy Hoisie; Darren J. Kerbyson; Dan Li; Rami G. Melhem; Ramakrishnan Rajamony; Eugen Schenfeld; Shuyi Shao; Craig B. Stunkel; Peter A. Walker

The interconnect plays a key role in both the cost and performance of large-scale HPC systems. The cost of future high-bandwidth electronic interconnects mushrooms due to expensive optical transceivers needed between electronic switches. We describe a potentially cheaper and more power-efficient approach to building high-performance interconnects. Through empirical analysis of HPC applications, we find that the bulk of inter-processor communication (barring collectives) is bounded in degree and changes very slowly or never. Thus we propose a two-network interconnect: an Optical Circuit Switching (OCS) network handling long-lived bulk data transfers, using optical switches, and a secondary lower-bandwidth Electronic Packet Switching (EPS) network. An OCS could be significantly cheaper, as it uses fewer optical transceivers than an electronic network. Collectives and transient communication packets traverse the electronic network. We present compiler techniques and dynamic run-time policies for using this two-network interconnect. Simulation results show that our approach provides high performance at low cost.
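A minimal sketch of the traffic-splitting idea described in this abstract: long-lived, bounded-degree bulk flows are steered onto optical circuits, while collectives and transient traffic fall back to the electronic packet-switched network. The flow attributes, thresholds, and circuit limit below are hypothetical placeholders, not values from the paper.

```python
# Illustrative sketch of a two-network (OCS/EPS) routing policy.
# Flow attributes, thresholds, and the per-node circuit limit are assumptions.

from dataclasses import dataclass

@dataclass
class Flow:
    src: int
    dst: int
    bytes_per_window: int   # observed traffic volume in the last measurement window
    is_collective: bool     # collectives always stay on the electronic network

BULK_THRESHOLD = 64 * 1024 * 1024   # flows above 64 MiB/window count as "bulk" (assumed)
MAX_CIRCUITS_PER_NODE = 4           # bounded degree: few optical circuits per node (assumed)

def route_flow(flow: Flow, circuits_in_use: dict[int, int]) -> str:
    """Return 'OCS' for long-lived bulk transfers, 'EPS' otherwise."""
    if flow.is_collective:
        return "EPS"                            # collectives traverse the packet network
    if flow.bytes_per_window < BULK_THRESHOLD:
        return "EPS"                            # transient / low-volume traffic
    if circuits_in_use.get(flow.src, 0) >= MAX_CIRCUITS_PER_NODE:
        return "EPS"                            # no free optical circuit at the source
    circuits_in_use[flow.src] = circuits_in_use.get(flow.src, 0) + 1
    return "OCS"                                # set up (or reuse) an optical circuit

if __name__ == "__main__":
    in_use: dict[int, int] = {}
    flows = [
        Flow(0, 1, 256 * 1024 * 1024, False),   # bulk transfer -> OCS
        Flow(0, 2, 4 * 1024, False),            # small transient -> EPS
        Flow(1, 3, 128 * 1024 * 1024, True),    # collective -> EPS
    ]
    for f in flows:
        print(f.src, "->", f.dst, route_flow(f, in_use))
```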


Optical Fiber Communication Conference | 2011

Optical interconnects in future servers

Jeffrey A. Kash; Alan F. Benner; Fuad E. Doany; Daniel M. Kuchta; Benjamin G. Lee; Petar Pepeljugoski; Laurent Schares; Clint L. Schow; Marc A. Taubenblatt

Optical interconnects are common in today's petascale supercomputers, and will become pervasive at the exascale during this decade. Technologies that can meet the challenging technological and economic requirements for the exascale will be reviewed.


IEEE Communications Magazine | 2007

A Roadmap to 100G Ethernet at the enterprise data center

Alan F. Benner; Petar Pepeljugoski; Renato J. Recio

Ethernet networks operating at 100 Gb/s per link are showing enough technical and market viability that intense development on technical specifications and component technologies is under way. However, the route to 100 Gb/s Ethernet products is still not completely clear because the market and applications for 100 Gb/s links are quite different than for 1 Gb/s and even 10 Gb/s links. This article attempts to provide a roadmap for adoption of 100 Gb/s Ethernet in enterprise data centers, outlining the features that will affect the schedule, as well as the capabilities of Ethernet gear as it is developed. The major opportunities for 100 Gb/s Ethernet appear to be primarily oriented toward server, rather than desktop/client, applications, and toward interconnecting various types of high-performance computing gear for technical and business analytic computing, as well as for media-oriented and web applications in content development and delivery. These applications will place requirements on the timeline and technical definition of networking gear operating at 40 Gb/s and 100 Gb/s.
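As a rough illustration of how these link rates decompose into parallel lanes, the short sketch below computes aggregate rates for the lane configurations standardized in IEEE 802.3ba (4x10 Gb/s for 40GbE, 10x10 Gb/s or 4x25 Gb/s for 100GbE). The figures are nominal and ignore 64b/66b encoding overhead.

```python
# Nominal lane decompositions for 40/100 Gb/s Ethernet (IEEE 802.3ba).
# Encoding overhead (64b/66b) is ignored; values are illustrative only.

LANE_CONFIGS = {
    "40GBASE  (4 x 10G)":  (4, 10.0),
    "100GBASE (10 x 10G)": (10, 10.0),
    "100GBASE (4 x 25G)":  (4, 25.0),
}

for name, (lanes, rate_gbps) in LANE_CONFIGS.items():
    total = lanes * rate_gbps
    print(f"{name}: {lanes} lanes x {rate_gbps} Gb/s = {total:.0f} Gb/s aggregate")
```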


High Performance Interconnects | 2010

Optics in Future Data Center Networks

Laurent Schares; Daniel M. Kuchta; Alan F. Benner

Optical interconnects offer significant advantages for future high performance data center networks. Progress towards integrating new optical technologies deeper into systems is reviewed, and the prospects for optical architectures beyond point-to-point optical links are discussed.


Journal of Instrumentation | 2011

Optical technologies for data communication in large parallel systems

Mark B. Ritter; Y Vlasov; Jeffrey A. Kash; Alan F. Benner

Large, parallel systems have greatly aided scientific computation and data collection, but performance scaling now relies on chip and system-level parallelism. This has happened because power density limits have caused processor frequency growth to stagnate, driving the new multi-core architecture paradigm, which would seem to provide generations of performance increases as transistors scale. However, this paradigm will be constrained by electrical I/O bandwidth limits; first off the processor card, then off the processor module itself. We will present best-estimates of these limits, then show how optical technologies can help provide more bandwidth to allow continued system scaling. We will describe the current status of optical transceiver technology which is already being used to exceed off-board electrical bandwidth limits, then present work on silicon nanophotonic transceivers and 3D integration technologies which, taken together, promise to allow further increases in off-module and off-card bandwidth. Finally, we will show estimated limits of nanophotonic links and discuss breakthroughs that are needed for further progress, and will speculate on whether we will reach Exascale-class machine performance at affordable powers.
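A back-of-the-envelope sketch of the kind of bandwidth-limit comparison the abstract alludes to: the escape bandwidth of an electrically pinned module versus a bank of optical channels. All pin counts, per-pin rates, fiber counts, and wavelength counts below are placeholder assumptions, not figures from the paper.

```python
# Back-of-the-envelope comparison of electrical vs. optical off-module bandwidth.
# All numbers are placeholder assumptions, not values from the paper.

def aggregate_bandwidth_gbps(links: int, per_link_gbps: float) -> float:
    """Total raw bandwidth for a set of identical links."""
    return links * per_link_gbps

# Hypothetical electrical escape: 400 signal pins at 10 Gb/s each
electrical = aggregate_bandwidth_gbps(links=400, per_link_gbps=10.0)

# Hypothetical optical escape: 64 fibers, each carrying 8 wavelengths at 25 Gb/s
optical = aggregate_bandwidth_gbps(links=64 * 8, per_link_gbps=25.0)

print(f"Electrical off-module: {electrical / 1000:.1f} Tb/s")
print(f"Optical off-module:    {optical / 1000:.1f} Tb/s")
print(f"Optical advantage:     {optical / electrical:.1f}x")
```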


European Conference on Optical Communication | 2010

Towards exaflop servers and supercomputers: The roadmap for lower power and higher density optical interconnects

Petar Pepeljugoski; Jeffrey A. Kash; Fuad E. Doany; Daniel M. Kuchta; Laurent Schares; Clint L. Schow; Marc A. Taubenblatt; Bert Jan Offrein; Alan F. Benner

In the last 10 years, interconnects in many high-performance servers and supercomputers have transitioned from copper to optical interconnects. In this presentation, a technological roadmap towards exaflop systems will be reviewed, focusing on the evolution of interconnect power and density efficiencies.


Photonics | 2010

Optical interconnects in exascale supercomputers

Jeffrey A. Kash; Alan F. Benner; Fuad E. Doany; Daniel M. Kuchta; Benjamin G. Lee; Petar Pepeljugoski; Laurent Schares; Clint L. Schow; Marc A. Taubenblatt

Today's petascale supercomputers make substantial use of optical interconnects. For the exascale, optics will become pervasive, but must meet challenging technological and economic requirements. The optics technologies that can meet these requirements are reviewed.


Cluster Computing | 2003

InfiniBand: The “De Facto” Future Standard for System and Local Area Networks or Just a Scalable Replacement for PCI Buses?

Timothy Mark Pinkston; Alan F. Benner; Michael Krause; Irv M. Robinson; Thomas L. Sterling

InfiniBand is a new industry-wide general-purpose interconnect standard designed to provide significantly higher levels of reliability, availability, performance, and scalability than alternative server I/O technologies. More than two years after its official release, many are still trying to understand the profitable uses for this new and promising interconnect technology, and how this technology might evolve. In this article, we provide a summary of several industry and academic perspectives on this issue expressed during a panel discussion at the Workshop for Communication Architecture for Clusters (CAC), held in conjunction with the International Parallel and Distributed Processing Symposium (IPDPS) in April 2001, in hopes of narrowing down the design space for InfiniBand-based systems.


High Performance Interconnects | 2009

Cost-Effective Optics: Enabling the Exascale Roadmap

Alan F. Benner

This paper examines the impact of cost-effective optical interconnect technologies for petascale to exascale clusters, supercomputing systems, and large-scale data centers.


Handbook of Fiber Optic Data Communication (Second Edition) | 2002

Fibre Channel Standard

Alan F. Benner

This chapter describes Fibre Channel in detail, including its current main application areas. It explains how Fibre Channel, in contrast with other network architectures, leverages the advantages of high-speed, high-reliability optical technology to provide high overall data communications performance. A Fibre Channel network is made up of one or more bidirectional point-to-point serial data channels, structured for high-performance capability. The basic data rate over the links is just over 1 Gbps, providing more than 100 MBps of data transmission bandwidth, with half-, quarter-, eighth-, double-, and quadruple-speed links defined and 10 Gbps links under development. The Fibre Channel protocol is configured to match the transmission and technological characteristics of single-mode and multimode optical fibers, but the physical medium used for transmission can also be copper twisted pair or coaxial cable. Fibre Channel is structured as a set of hierarchical functions. Interfaces among the levels are defined, but vendors are not limited to specific interfaces among levels if multiple levels are implemented together.
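The ">100 MBps from a link rate just over 1 Gbps" figure follows from the 1GFC serial line rate of 1.0625 Gbaud with 8b/10b encoding; the short sketch below reproduces that arithmetic. Protocol framing overhead is ignored, so the result is the nominal payload rate only.

```python
# Effective payload bandwidth of a 1GFC Fibre Channel link.
# Line rate and coding are the standard values; framing overhead is ignored.

LINE_RATE_GBAUD = 1.0625          # 1GFC serial line rate
CODING_EFFICIENCY = 8 / 10        # 8b/10b encoding: 8 data bits per 10 transmitted bits

payload_gbps = LINE_RATE_GBAUD * CODING_EFFICIENCY    # ~0.85 Gb/s of data
payload_mbytes_per_s = payload_gbps * 1000 / 8        # ~106 MB/s

print(f"Payload rate: {payload_gbps:.3f} Gb/s ~ {payload_mbytes_per_s:.0f} MB/s")
```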
