
Publications


Featured research published by José R. Brunheroto.


IBM Journal of Research and Development | 2015

Active Memory Cube: A processing-in-memory architecture for exascale systems

Ravi Nair; Samuel F. Antao; Carlo Bertolli; Pradip Bose; José R. Brunheroto; Tong Chen; Chen-Yong Cher; Carlos H. Andrade Costa; J. Doi; Constantinos Evangelinos; Bruce M. Fleischer; Thomas W. Fox; Diego S. Gallo; Leopold Grinberg; John A. Gunnels; Arpith C. Jacob; P. Jacob; Hans M. Jacobson; Tejas Karkhanis; Choon Young Kim; Jaime H. Moreno; John Kevin Patrick O'Brien; Martin Ohmacht; Yoonho Park; Daniel A. Prener; Bryan S. Rosenburg; Kyung Dong Ryu; Olivier Sallenave; Mauricio J. Serrano; Patrick Siegl

Many studies point to the difficulty of scaling existing computer architectures to meet the needs of an exascale system (i.e., capable of executing 10^18 floating-point operations per second), consuming no more than 20 MW in power, by around the year 2020. This paper outlines a new architecture, the Active Memory Cube, which reduces the energy of computation significantly by performing computation in the memory module, rather than moving data through large memory hierarchies to the processor core. The architecture leverages a commercially demonstrated 3D memory stack called the Hybrid Memory Cube, placing sophisticated computational elements on the logic layer below its stack of dynamic random-access memory (DRAM) dies. The paper also describes an Active Memory Cube tuned to the requirements of a scientific exascale system. The computational elements have a vector architecture and are capable of performing a comprehensive set of floating-point and integer instructions, predicated operations, and gather-scatter accesses across memory in the Cube. The paper outlines the software infrastructure used to develop applications and to evaluate the architecture, and describes results of experiments on application kernels, along with performance and power projections.
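
The abstract's own numbers fix the energy budget that drives the design:

    20 MW / 10^18 flop/s = 2 x 10^-11 J/flop = 20 pJ per floating-point operation

Since moving an operand off chip and through a deep cache hierarchy can by itself cost more energy than that, performing the computation next to the DRAM dies is a direct response to this budget.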


IBM Journal of Research and Development | 2005

Blue Gene/L programming and operating environment

José E. Moreira; George S. Almasi; Charles J. Archer; Ralph Bellofatto; Peter Bergner; José R. Brunheroto; Michael Brutman; José G. Castaños; Paul G. Crumley; Manish Gupta; Todd Inglett; Derek Lieber; David Limpert; Patrick McCarthy; Mark Megerian; Mark P. Mendell; Michael Mundy; Don Reed; Ramendra K. Sahoo; Alda Sanomiya; Richard Shok; Brian E. Smith; Greg Stewart

With up to 65,536 compute nodes and a peak performance of more than 360 teraflops, the Blue Gene®/L (BG/L) supercomputer represents a new level of massively parallel systems. The system software stack for BG/L creates a programming and operating environment that harnesses the raw power of this architecture with great effectiveness. The design and implementation of this environment followed three major principles: simplicity, performance, and familiarity. By specializing the services provided by each component of the system architecture, we were able to keep each one simple and leverage the BG/L hardware features to deliver high performance to applications. We also implemented standard programming interfaces and programming languages that greatly simplified the job of porting applications to BG/L. The effectiveness of our approach has been demonstrated by the operational success of several prototype and production machines, which have already been scaled to 16,384 nodes.
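
The standard interfaces credited here for easy porting notably include MPI. As a generic illustration only (not code from the paper), this is the kind of unmodified C/MPI program the BG/L environment was built to run at scale:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, nprocs;
        double local = 1.0, global = 0.0;

        MPI_Init(&argc, &argv);                 /* start the MPI runtime */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this task's id */
        MPI_Comm_size(MPI_COMM_WORLD, &nprocs); /* total number of tasks */

        /* a collective reduction: sums 'local' across all compute nodes */
        MPI_Allreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

        if (rank == 0)
            printf("sum over %d tasks = %f\n", nprocs, global);

        MPI_Finalize();
        return 0;
    }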


International Journal of Parallel Programming | 2007

The Blue Gene/L supercomputer: a hardware and software story

José E. Moreira; Valentina Salapura; George S. Almasi; Charles J. Archer; Ralph Bellofatto; Peter Edward Bergner; Randy Bickford; Matthias A. Blumrich; José R. Brunheroto; Arthur A. Bright; Michael Brian Brutman; José G. Castaños; Dong Chen; Paul W. Coteus; Paul G. Crumley; Sam Ellis; Thomas Eugene Engelsiepen; Alan Gara; Mark E. Giampapa; Tom Gooding; Shawn A. Hall; Ruud A. Haring; Roger L. Haskin; Philip Heidelberger; Dirk Hoenicke; Todd A. Inglett; Gerard V. Kopcsay; Derek Lieber; David Roy Limpert; Patrick Joseph McCarthy

The Blue Gene/L system at the Department of Energy Lawrence Livermore National Laboratory in Livermore, California is the world’s most powerful supercomputer. It has achieved groundbreaking performance in both standard benchmarks as well as real scientific applications. In that process, it has enabled new science that simply could not be done before. Blue Gene/L was developed by a relatively small team of dedicated scientists and engineers. This article is both a description of the Blue Gene/L supercomputer as well as an account of how that system was designed, developed, and delivered. It reports on the technical characteristics of the system that made it possible to build such a powerful supercomputer. It also reports on how teams across the world worked around the clock to accomplish this milestone of high-performance computing.


European Conference on Parallel Processing | 2003

An Overview of the Blue Gene/L System Software Organization

George S. Almasi; Ralph Bellofatto; José R. Brunheroto; Calin Cascaval; José G. Castaños; Luis Ceze; Paul G. Crumley; C. Christopher Erway; Joseph Gagliano; Derek Lieber; Xavier Martorell; José E. Moreira; Alda Sanomiya; Karin Strauss

The Blue Gene/L supercomputer will use system-on-a-chip integration and a highly scalable cellular architecture. With 65,536 compute nodes, Blue Gene/L represents a new level of complexity for parallel system software, with specific challenges in the areas of scalability, maintenance and usability. In this paper we present our vision of a software architecture that faces up to these challenges, and the simulation framework that we have used for our experiments.


International Conference on Supercomputing | 2008

Evaluating the effect of replacing CNK with Linux on the compute-nodes of Blue Gene/L

Edi Shmueli; George S. Almasi; José R. Brunheroto; José G. Castaños; Gabor Dozsa; Sameer Kumar; Derek Lieber

The Blue Gene machines in production today run a small single-user, single-process kernel (CNK) with limited functionality. Motivated by the desire to provide applications with a much richer operating environment, we evaluate the effect of replacing CNK with a standard Linux kernel on the compute nodes of Blue Gene/L. We show that with a relatively small amount of effort we were able to improve benchmark performance under Linux up to a level that is comparable to CNK.
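
A common way to see the effect being evaluated is a fixed-work quantum test: time many identical compute loops and inspect the spread, which on a general-purpose kernel reflects daemon and interrupt noise and on a minimal kernel like CNK stays nearly flat. A minimal sketch of the idea (our illustration, not the paper's benchmark):

    #include <stdio.h>
    #include <time.h>

    #define ITERS 10000
    #define WORK  100000

    static double now_sec(void)
    {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return ts.tv_sec + ts.tv_nsec * 1e-9;
    }

    int main(void)
    {
        volatile double x = 0.0;        /* volatile keeps the work loop alive */
        double min = 1e9, max = 0.0;

        for (int i = 0; i < ITERS; i++) {
            double t0 = now_sec();
            for (int j = 0; j < WORK; j++)  /* fixed amount of compute */
                x += 1.0;
            double dt = now_sec() - t0;
            if (dt < min) min = dt;
            if (dt > max) max = dt;
        }
        /* on a noiseless kernel such as CNK, max/min stays close to 1 */
        printf("min %.6f s  max %.6f s  spread %.2fx\n", min, max, max / min);
        return 0;
    }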


Parallel Processing Letters | 2003

An Overview of the BlueGene/L System Software Organization

George S. Almasi; Ralph Bellofatto; José R. Brunheroto; Calin Cascaval; José G. Castaños; Paul G. Crumley; C. Christopher Erway; Derek Lieber; Xavier Martorell; José E. Moreira; Ramendra K. Sahoo; Alda Sanomiya; Luis Ceze; Karin Strauss

BlueGene/L is a 65,536-compute node massively parallel supercomputer, built using system-on-a-chip integration and a cellular architecture. BlueGene/L represents a major challenge for parallel system software, particularly in the areas of scalability, maintainability, and usability. In this paper, we present the organization of the BlueGene/L system software, with emphasis on the features that address those challenges. The system software was developed in parallel with the hardware, relying on an architecturally accurate simulator of the machine. We validated this approach by demonstrating a working system software stack and high performance on real parallel applications just a few weeks after first hardware availability.


Computing Frontiers | 2015

Data access optimization in a processing-in-memory system

Zehra Sura; Arpith C. Jacob; Tong Chen; Bryan S. Rosenburg; Olivier Sallenave; Carlo Bertolli; Samuel F. Antao; José R. Brunheroto; Yoonho Park; Kevin O'Brien; Ravi Nair

The Active Memory Cube (AMC) system is a novel heterogeneous computing system concept designed to provide high performance and power-efficiency across a range of applications. The AMC architecture includes general-purpose host processors and specially designed in-memory processors (processing lanes) that would be integrated in a logic layer within 3D DRAM memory. The processing lanes have large vector register files but no power-hungry caches or local memory buffers. Performance depends on how well the resulting higher effective memory latency within the AMC can be managed. In this paper, we describe a combination of programming language features, compiler techniques, operating system interfaces, and hardware design that can effectively hide memory latency for the processing lanes in an AMC system. We present experimental data to show how this approach improves the performance of a set of representative benchmarks important in high performance computing applications. As a result, we are able to achieve high performance together with power efficiency using the AMC architecture.
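
The latency-hiding idea at the heart of the paper can be sketched in plain C as compiler-style software pipelining with double buffering: loads for the next block are issued before the current block is consumed, so memory access overlaps with arithmetic. This is a hand-written illustration of the general technique, not the paper's toolchain or the AMC instruction set:

    #include <stddef.h>

    /* y[i] = a * x[i] + y[i], restructured so the loads for the next
     * block are issued before the current block is consumed.
     * B models the number of elements one long-latency load batch covers. */
    #define B 8

    void daxpy_pipelined(size_t n, double a, const double *x, double *y)
    {
        double xbuf[2][B], ybuf[2][B];
        size_t nb = n / B;

        /* prologue: fetch the first block */
        for (size_t j = 0; j < B; j++) {
            xbuf[0][j] = x[j];
            ybuf[0][j] = y[j];
        }

        for (size_t blk = 0; blk < nb; blk++) {
            size_t cur = blk & 1, nxt = cur ^ 1;

            /* issue loads for the next block first... */
            if (blk + 1 < nb)
                for (size_t j = 0; j < B; j++) {
                    xbuf[nxt][j] = x[(blk + 1) * B + j];
                    ybuf[nxt][j] = y[(blk + 1) * B + j];
                }

            /* ...then compute on the block fetched earlier, so the loads
             * above can overlap with this arithmetic */
            for (size_t j = 0; j < B; j++)
                y[blk * B + j] = a * xbuf[cur][j] + ybuf[cur][j];
        }
        /* remainder elements (n % B) would be handled separately; omitted */
    }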


International Parallel and Distributed Processing Symposium | 2003

System management in the BlueGene/L supercomputer

George S. Almasi; Leonardo R. Bachega; Ralph Bellofatto; José R. Brunheroto; Calin Cascaval; José G. Castaños; Paul G. Crumley; C. Christopher Erway; Joseph Gagliano; Derek Lieber; Pedro Mindlin; José E. Moreira; Ramendra K. Sahoo; Alda Sanomiya; Eugen Schenfeld; Richard A. Swetz; Myung M. Bae; Gregory D. Laib; Kavitha Ranganathan; Yariv Aridor; Tamar Domany; Y. Gal; Oleg Goldshmidt; Edi Shmueli

The BlueGene/L supercomputer will use system-on-a-chip integration and a highly scalable cellular architecture to deliver 360 teraflops of peak computing power. With 65,536 compute nodes, BlueGene/L represents a new level of scalability for parallel systems. As such, it is natural for many scalability challenges to arise. In this paper, we discuss system management and control, including machine booting, software installation, user account management, system monitoring, and job execution. We address the issue of scalability by organizing the system hierarchically. The 65,536 compute nodes are organized in 1,024 clusters of 64 compute nodes each, called processing sets. Each processing set is under control of a 65th node, called an I/O node. The 1,024 processing sets can then be managed to a great extent as a regular Linux cluster, of which there are several successful examples. Regular cluster management is complemented by BlueGene/L-specific services, performed by a service node over a separate control network. Our software development and experiments have been conducted so far using an architecturally accurate simulator of BlueGene/L, and we are gearing up to test real prototypes in 2003.
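
The hierarchy described here reduces to simple arithmetic: 65,536 compute nodes / 64 per processing set = 1,024 processing sets, each fronted by an I/O node. A small sketch of the mapping (an illustrative numbering scheme, not the machine's actual topology encoding):

    #include <stdio.h>

    #define NODES     65536                 /* compute nodes in the full system */
    #define PSET_SIZE 64                    /* compute nodes per processing set */
    #define NPSETS    (NODES / PSET_SIZE)   /* = 1024 processing sets */

    int main(void)
    {
        int node = 12345;                   /* any compute-node id, 0..65535 */
        int pset = node / PSET_SIZE;        /* processing set it belongs to */
        int slot = node % PSET_SIZE;        /* position within that set */

        /* each processing set is managed by its own I/O node, so the control
         * system sees ~1024 cluster members rather than 65536 individual nodes */
        printf("node %d -> pset %d of %d, slot %d\n", node, pset, NPSETS, slot);
        return 0;
    }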


European Conference on Parallel Processing | 2003

Obtaining Hardware Performance Metrics for the BlueGene/L Supercomputer

Pedro Mindlin; José R. Brunheroto; Luiz DeRose; José E. Moreira



Symposium on Computer Architecture and High Performance Computing | 2005

Data cache prefetching design space exploration for BlueGene/L supercomputer

José R. Brunheroto; Valentina Salapura; Fernando F. Redigolo; Dirk Hoenicke; Alan Gara

