Publication


Featured research published by Arthur A. Bright.


Applied Physics Letters | 1991

Low-rate plasma oxidation of Si in a dilute oxygen/helium plasma for low-temperature gate quality Si/SiO2 interfaces

Arthur A. Bright; J. Batey; E. Tierney

Low‐rate plasma oxidation of Si, involving a small oxygen concentration in a low‐power He plasma at low processing temperatures (∼350 °C), is shown to be capable of producing excellent interface properties, good uniformity, and low defect density. As an interfacial layer for plasma‐enhanced chemical vapor deposited (PECVD) SiO2 films, the plasma oxide is key to achieving high‐quality composite (plasma oxide/PECVD) oxide structures that essentially match the electrical quality of thermal oxides. Such low‐temperature oxide films are suitable for critical device applications, such as the gate oxide in metal‐oxide‐semiconductor devices and the base passivation layer in advanced bipolar devices.


International Journal of Parallel Programming | 2007

The Blue Gene/L supercomputer: a hardware and software story

José E. Moreira; Valentina Salapura; George S. Almasi; Charles J. Archer; Ralph Bellofatto; Peter Edward Bergner; Randy Bickford; Matthias A. Blumrich; José R. Brunheroto; Arthur A. Bright; Michael Brian Brutman; José G. Castaños; Dong Chen; Paul W. Coteus; Paul G. Crumley; Sam Ellis; Thomas Eugene Engelsiepen; Alan Gara; Mark E. Giampapa; Tom Gooding; Shawn A. Hall; Ruud A. Haring; Roger L. Haskin; Philip Heidelberger; Dirk Hoenicke; Todd A. Inglett; Gerard V. Kopcsay; Derek Lieber; David Roy Limpert; Patrick Joseph McCarthy

The Blue Gene/L system at the Department of Energy Lawrence Livermore National Laboratory in Livermore, California is the world’s most powerful supercomputer. It has achieved groundbreaking performance on both standard benchmarks and real scientific applications. In the process, it has enabled new science that simply could not be done before. Blue Gene/L was developed by a relatively small team of dedicated scientists and engineers. This article is both a description of the Blue Gene/L supercomputer and an account of how that system was designed, developed, and delivered. It reports on the technical characteristics of the system that made it possible to build such a powerful supercomputer. It also reports on how teams across the world worked around the clock to accomplish this milestone of high-performance computing.


computing frontiers | 2005

Power and performance optimization at the system level

Valentina Salapura; Randy Bickford; Matthias A. Blumrich; Arthur A. Bright; Dong Chen; Paul W. Coteus; Alan Gara; Mark E. Giampapa; Michael Karl Gschwind; Manish Gupta; Shawn A. Hall; Ruud A. Haring; Philip Heidelberger; Dirk Hoenicke; Gerard V. Kopcsay; Martin Ohmacht; Rick A. Rand; Todd E. Takken; Pavlos M. Vranas

The BlueGene/L supercomputer has been designed with a focus on power/performance efficiency to achieve high application performance under the thermal constraints of common data centers. To achieve this goal, emphasis was put on system solutions to engineer a power-efficient system. To exploit thread-level parallelism, the BlueGene/L system can scale to 64 racks with a total of 65,536 compute nodes, each consisting of a single compute ASIC that integrates all system functions, with two industry-standard PowerPC microprocessor cores in a chip-multiprocessor configuration. Each PowerPC processor exploits data-level parallelism with a high-performance SIMD floating-point unit. To support good application scaling on such a massive system, special emphasis was put on efficient communication primitives by including five highly optimized communication networks. After an initial introduction of the BlueGene/L system architecture, we analyze power/performance efficiency for the BlueGene system using performance and power characteristics for the overall system (as exemplified by peak performance numbers). To understand application scaling behavior and its impact on performance and power/performance efficiency, we analyze the NAMD molecular dynamics package using the ApoA1 benchmark. We find that even for strong-scaling problems, BlueGene/L systems deliver superior performance scaling and significant power/performance efficiency. Application benchmark power/performance scaling for the voltage-invariant energy·delay² (ED²) metric demonstrates that choosing a power-efficient 700 MHz embedded PowerPC processor core and relying on application parallelism was the right decision for building a powerful, power/performance-efficient system.
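To make the "voltage-invariant" label on the ED² metric concrete, here is a minimal sketch assuming the textbook first-order CMOS scaling relations (dynamic energy per operation ∝ V², delay roughly ∝ 1/V well above threshold); the numbers are illustrative, not BlueGene/L measurements:

```python
# Minimal sketch of why energy * delay^2 (ED^2) is called voltage-invariant.
# Under idealized CMOS scaling (far above threshold), dynamic energy per
# operation goes as V^2 and circuit delay roughly as 1/V, so scaling the
# supply voltage by a factor s leaves E * D^2 unchanged. Values below are
# arbitrary units, not measurements from the paper.

def ed2(energy: float, delay: float) -> float:
    """Energy-delay-squared metric (lower is better)."""
    return energy * delay**2

e0, d0 = 1.0, 1.0  # baseline design point

for s in (0.7, 1.0, 1.3):        # supply-voltage scaling factor
    e = e0 * s**2                # dynamic energy ~ C * V^2
    d = d0 / s                   # delay          ~ 1 / V (idealized)
    print(f"s={s:.1f}  E={e:.2f}  D={d:.2f}  ED^2={ed2(e, d):.2f}")
# ED^2 stays at 1.00 for every s, so differences in ED^2 between designs
# reflect architecture rather than the chosen voltage/frequency point.
```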


international solid-state circuits conference | 2005

Creating the BlueGene/L supercomputer from low-power SoC ASICs

Arthur A. Bright; Matthew R. Ellavsky; Alan Gara; Ruud A. Haring; Gerard V. Kopcsay; Robert F. Lembach; James A. Marcella; Martin Ohmacht; Valentina Salapura

An overview of the design aspects of the BlueGene/L chip, the heart of the BlueGene/L supercomputer, is presented. Following an SoC approach, processors, memory and communication subsystems are integrated into one low-power chip. The high-density system packaging of the BlueGene/L system provides better power and cost performance.


Applied Physics Letters | 1985

Doping effects in reactive plasma etching of heavily doped silicon

Young Hoon Lee; Mao‐Min Chen; Arthur A. Bright

Etch rates of heavily doped silicon films (n and p type) and undoped polycrystalline silicon film were studied during plasma etching and also during reactive ion etching in a CF4/O2 plasma. The etch rate of undoped Si was lower than the n+‐Si etch rate, but higher than the p+‐Si etch rate, when the rf inductive heating by the eddy current was minimized by using thermal backing to the water‐cooled electrode. This doping effect may be explained by the opposite polarity of the space charge present in the depletion layer of n+‐Si and p+‐Si during reactive plasma etching.


IBM Journal of Research and Development | 2005

Blue Gene/L compute chip: control, test, and bring-up infrastructure

Ruud A. Haring; Ralph Bellofatto; Arthur A. Bright; Paul G. Crumley; Marc Boris Dombrowa; Steve M. Douskey; Matthew R. Ellavsky; Balaji Gopalsamy; Dirk Hoenicke; Thomas A. Liebsch; James A. Marcella; Martin Ohmacht

The Blue Gene®/L compute (BLC) and Blue Gene/L link (BLL) chips have extensive facilities for control, bring-up, self-test, debug, and nonintrusive performance monitoring built on a serial interface compliant with IEEE Standard 1149.1. Both the BLL and the BLC chips contain a standard eServer™ chip JTAG controller called the access macro. For BLC, the capabilities of the access macro were extended 1) to accommodate the secondary JTAG controllers built into embedded PowerPC® cores; 2) to provide direct access to memory for initial boot code load and for messaging between the service node and the BLC chip; 3) to provide nonintrusive access to device control registers; and 4) to provide a suite of chip configuration and control registers. The BLC clock tree structure is described. It accommodates both functional requirements and requirements for enabling multiple built-in self-test domains, differentiated both by frequency and functionality. The chip features a debug port that allows observation of critical chip signals at full speed.
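For readers unfamiliar with the serial interface mentioned above, the following is a generic sketch of the IEEE 1149.1 TAP state machine on which any such JTAG controller is built. It models only the public standard; the BLC access-macro extensions listed in the abstract (memory access, device-control-register access, configuration registers) sit behind controller-specific instructions that are not reproduced here:

```python
# Minimal model of the IEEE 1149.1 TAP state machine. This is a sketch of
# the public standard, not the Blue Gene/L access macro itself.

TAP_TRANSITIONS = {
    # state: (next state if TMS=0, next state if TMS=1)
    "TEST_LOGIC_RESET": ("RUN_TEST_IDLE", "TEST_LOGIC_RESET"),
    "RUN_TEST_IDLE":    ("RUN_TEST_IDLE", "SELECT_DR_SCAN"),
    "SELECT_DR_SCAN":   ("CAPTURE_DR",    "SELECT_IR_SCAN"),
    "CAPTURE_DR":       ("SHIFT_DR",      "EXIT1_DR"),
    "SHIFT_DR":         ("SHIFT_DR",      "EXIT1_DR"),
    "EXIT1_DR":         ("PAUSE_DR",      "UPDATE_DR"),
    "PAUSE_DR":         ("PAUSE_DR",      "EXIT2_DR"),
    "EXIT2_DR":         ("SHIFT_DR",      "UPDATE_DR"),
    "UPDATE_DR":        ("RUN_TEST_IDLE", "SELECT_DR_SCAN"),
    "SELECT_IR_SCAN":   ("CAPTURE_IR",    "TEST_LOGIC_RESET"),
    "CAPTURE_IR":       ("SHIFT_IR",      "EXIT1_IR"),
    "SHIFT_IR":         ("SHIFT_IR",      "EXIT1_IR"),
    "EXIT1_IR":         ("PAUSE_IR",      "UPDATE_IR"),
    "PAUSE_IR":         ("PAUSE_IR",      "EXIT2_IR"),
    "EXIT2_IR":         ("SHIFT_IR",      "UPDATE_IR"),
    "UPDATE_IR":        ("RUN_TEST_IDLE", "SELECT_DR_SCAN"),
}

def step(state: str, tms: int) -> str:
    """Advance the TAP one TCK cycle for a given TMS value."""
    return TAP_TRANSITIONS[state][tms]

# Walking from reset into Shift-IR takes the TMS sequence 0,1,1,0,0:
state = "TEST_LOGIC_RESET"
for tms in (0, 1, 1, 0, 0):
    state = step(state, tms)
assert state == "SHIFT_IR"
```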


IBM Journal of Research and Development | 2005

Blue Gene/L compute chip: synthesis, timing, and physical design

Arthur A. Bright; Ruud A. Haring; Marc Boris Dombrowa; Martin Ohmacht; Dirk Hoenicke; Sarabjeet Singh; James A. Marcella; Robert F. Lembach; Steve M. Douskey; Matthew R. Ellavsky; Christian G. Zoellin; Alan Gara

As one of the most highly integrated system-on-a-chip application-specific integrated circuits (ASICs) to date, the Blue Gene®/L compute chip presented unique challenges that required extensions of the standard ASIC synthesis, timing, and physical design methodologies. We describe the design flow from floorplanning through synthesis and timing closure to physical design, with emphasis on the novel features of this ASIC. Among these are a process to easily inject datapath placements for speed-critical circuits or to relieve wire congestion, and a timing closure methodology that resulted in timing closure for both nominal and worst-case timing specifications. The physical design methodology featured removal of the pre-physical-design buffering to improve routability and visualization of buses, and it featured strategic seeding of buffers to close wiring and timing and end up at 90% utilization of total chip area. Robustness was enhanced by using additional input/output (I/O) and internal decoupling capacitors and by increasing I/O-to-C4 wire widths.


Applied Physics Letters | 1988

Technique for selective etching of Si with respect to Ge

Arthur A. Bright; S. S. Iyer; Steve W. Robey; S. L. Delage

A technique to selectively etch silicon with respect to germanium is described. The method relies on an observed small difference in the effects of polymeric etch‐inhibiting layers on the two materials. In a CF4/H2 plasma, the observed polymer point for Ge is 1–3% lower than for Si. This produces a narrow process window in which Si is etched while etching of Ge is suppressed. This technique has applications to etching of pure Si layers over Ge as well as Si‐Ge alloys for device applications.
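A toy sketch of the process window described above. It assumes, as an interpretation not stated numerically in the abstract beyond the 1–3% difference, that the "polymer point" can be modeled as a critical H2 fraction above which etching stops; all threshold values are hypothetical placeholders:

```python
# Illustrative sketch of the Si-over-Ge selectivity window. In a CF4/H2
# plasma, each material stops etching above its "polymer point"; the
# abstract reports Ge's polymer point is 1-3% lower than Si's. The
# thresholds below are made up only to make the window visible.

POLYMER_POINT_SI = 0.30   # hypothetical H2 fraction where Si stops etching
POLYMER_POINT_GE = 0.28   # 2% lower, per the reported 1-3% difference

def etches(polymer_point: float, h2_fraction: float) -> bool:
    """A material etches only below its polymer point."""
    return h2_fraction < polymer_point

# Operating inside the window etches Si while Ge stays polymer-masked:
for h2 in (0.25, 0.29, 0.32):
    print(f"H2={h2:.2f}  Si etches: {etches(POLYMER_POINT_SI, h2)}  "
          f"Ge etches: {etches(POLYMER_POINT_GE, h2)}")
# H2=0.29 lands in the window: Si etches, Ge does not.
```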


Thin Solid Films | 1984

Reactive etching mechanism of tungsten silicide in CF4-O2 plasma

Young Hoon Lee; Mao‐Min Chen; K. Y. Ahn; Arthur A. Bright

The WSi2 etch rate contains contributions from the directional ion-enhanced etching mechanism, which follows the nth power of the d.c. self-bias voltage across the plasma sheath (n = 1.7 ± 0.5), and from the isotropic chemical etching mechanism, which is linearly proportional to the number density of fluorine atoms in the CF4-O2 r.f. plasma. The “apparent” reaction probability of chemical etching, defined as the chemical etching rate per fluorine atom, increases with the r.f. power because of additional heating by the eddy current induced by the r.f. magnetic field. The r.f. inductive heating in the silicide film raises the temperature of the underlying polycrystalline silicon (poly-Si) film, resulting in a poly-Si chemical etching rate higher than that of a blanket poly-Si film. Etch rates due to chemical etching are proportional to the silicon content in the silicide film for a given plasma condition. At a fixed silicon content (x = 2.6), a film prepared by the chemical vapor deposition technique exhibits a chemical etching rate higher by a factor of 2 than films prepared by either electron-beam co-evaporation or co-sputter deposition.
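The two-component rate law in the first sentence can be written as R = k_ion · V_dc^n + k_chem · n_F. A minimal sketch follows, with hypothetical prefactors k_ion and k_chem; only the functional form and the exponent come from the abstract:

```python
# Hypothetical illustration of the two-component WSi2 etch-rate model:
# a directional ion-enhanced term scaling as the n-th power of the d.c.
# self-bias voltage, plus an isotropic chemical term proportional to the
# fluorine atom density. The prefactors are placeholder values.

def wsi2_etch_rate(v_dc: float, n_f: float,
                   k_ion: float = 1.0e-4, k_chem: float = 5.0e-16,
                   n: float = 1.7) -> float:
    """Total etch rate (arbitrary units).

    v_dc : d.c. self-bias voltage across the plasma sheath [V]
    n_f  : fluorine atom number density [cm^-3]
    n    : measured exponent, 1.7 +/- 0.5 in the paper
    """
    return k_ion * v_dc**n + k_chem * n_f

# Doubling the bias voltage boosts only the ion-enhanced term, by 2**1.7:
base = wsi2_etch_rate(v_dc=200.0, n_f=1e14)
boosted = wsi2_etch_rate(v_dc=400.0, n_f=1e14)
print(base, boosted)
```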


international conference on cluster computing | 2002

Blue Gene/L, a system-on-a-chip

George S. Almasi; G.S. Almasi; D. Beece; Ralph Bellofatto; G. Bhanot; R. Bickford; M. Blumrich; Arthur A. Bright; José R. Brunheroto; Cǎlin Caşcaval; José G. Castaños; Luis Ceze; R. Coteus; S. Chatterjee; D. Chen; G. Chiu; T.M. Cipolla; Paul G. Crumley; A. Deutsch; M.B. Dombrowa; W. Donath; M. Eleftheriou; B. Fitch; Joseph Gagliano; Alan Gara; R. Germain; M.E. Giampapa; Manish Gupta; F. Gustavson; S. Hall

Large, powerful networks coupled to state-of-the-art processors have traditionally dominated supercomputing. As technology advances, this approach is likely to be challenged by a more cost-effective system-on-a-chip approach with higher levels of system integration. The scalability of applications to architectures with tens to hundreds of thousands of processors is critical to the success of this approach. Significant progress has been made in mapping numerous compute-intensive applications, many of them grand challenges, to parallel architectures. Applications hoping to execute efficiently on future supercomputers of any architecture must be coded in a manner consistent with an enormous degree of parallelism. The BG/L program is developing a supercomputer with a nominal peak of 180 TFLOPS (360 TFLOPS for some applications) to serve a broad range of science applications. BG/L generalizes QCDOC, the first system-on-a-chip supercomputer, expected in 2003. BG/L consists of 65,536 nodes and contains five integrated networks: a 3D torus, a combining tree, a Gb Ethernet network, a barrier/global-interrupt network, and JTAG.
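As a sanity check on the quoted peak figures, the arithmetic below reproduces them under the commonly cited BG/L assumptions (not spelled out in this abstract): 700 MHz cores, two cores per node, and a double FPU sustaining two fused multiply-adds per core per cycle:

```python
# Back-of-the-envelope check of the 180/360 TFLOPS figures quoted above.
# Assumptions: 65,536 nodes, two PowerPC 440 cores per node at 700 MHz,
# and a double FPU per core doing 2 fused multiply-adds = 4 flops/cycle.

NODES = 65536
CLOCK_HZ = 700e6
FLOPS_PER_CORE_PER_CYCLE = 4  # 2 FMA pipes x 2 flops each

one_core = NODES * 1 * CLOCK_HZ * FLOPS_PER_CORE_PER_CYCLE
two_cores = NODES * 2 * CLOCK_HZ * FLOPS_PER_CORE_PER_CYCLE

# ~183.5 TFLOPS with one core per node computing (the other driving
# communication), ~367 TFLOPS with both cores computing -- matching the
# quoted "180 TFLOPS (360 TFLOPS for some applications)".
print(f"{one_core / 1e12:.1f} TFLOPS, {two_cores / 1e12:.1f} TFLOPS")
```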
