Publication


Featured research published by P. Chris Broekema.


international conference on parallel processing | 2009

Characterizing the Performance of Big Memory on Blue Gene Linux

Kazutomo Yoshii; Kamil Iskra; Harish Gapanati Naik; Pete Beckman; P. Chris Broekema

Efficient use of Linux for high-performance applications on Blue Gene/P (BG/P) compute nodes is challenging because of severe performance hits resulting from translation lookaside buffer (TLB) misses and a hard-to-program torus network DMA controller. To address these difficulties, we present the design and implementation of “Big Memory”, an alternative, transparent memory space for computational processes. Big Memory uses extremely large memory pages available on PowerPC CPUs to create a TLB-miss-free, flat memory area that can be used for application code and data and is easier to use for DMA operations. One of our single-node memory benchmarks shows that the performance gap between regular PowerPC Linux with 4 KB pages and the IBM BG/P compute node kernel (CNK) is about 68% in the worst case. Big Memory narrows the worst-case performance gap to just 0.04%. We verify this result on 1024 nodes of Blue Gene/P using the NAS Parallel Benchmarks and find the performance under Linux with Big Memory to fluctuate within 0.7% of CNK. Originally intended exclusively for compute node tasks, our new memory subsystem turns out to dramatically improve the performance of certain I/O node applications as well. We demonstrate this performance using the central processor of the LOw Frequency ARray (LOFAR) radio telescope as an example.
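
As a rough illustration of the underlying idea, and not the authors' Big Memory implementation (which targets PowerPC Blue Gene/P hardware), the sketch below backs a large buffer with Linux huge pages via mmap with MAP_HUGETLB, so far fewer TLB entries are needed than with 4 KB pages. The buffer size and the assumption that huge pages have been reserved beforehand are illustrative.

```c
/* Hypothetical sketch: backing a large working set with Linux huge pages to
 * reduce TLB pressure. Not the paper's Big Memory subsystem; huge pages must
 * be reserved first, e.g. via /proc/sys/vm/nr_hugepages. */
#define _GNU_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>

#define BUF_SIZE (512UL * 1024 * 1024)   /* 512 MiB working set (assumed) */

int main(void)
{
    /* MAP_HUGETLB requests huge pages (2 MiB on x86_64, larger elsewhere). */
    void *buf = mmap(NULL, BUF_SIZE, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
    if (buf == MAP_FAILED) {
        perror("mmap(MAP_HUGETLB)");     /* fails if no huge pages are reserved */
        return EXIT_FAILURE;
    }

    memset(buf, 0, BUF_SIZE);            /* touch every page once */
    munmap(buf, BUF_SIZE);
    return EXIT_SUCCESS;
}
```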


acm symposium on parallel algorithms and architectures | 2006

Astronomical real-time streaming signal processing on a Blue Gene/L supercomputer

John W. Romein; P. Chris Broekema; Ellen van Meijeren; Kjeld Van Der Schaaf; Walther H. Zwart

LOFAR is the first of a new generation of radio telescopes, which combines the signals from many thousands of simple, fixed antennas rather than from expensive dishes. Its revolutionary design and unprecedented size enable observations in a frequency range that could hardly be observed before, and allow the study of a vast amount of new science cases. In this paper, we describe a novel approach to process real-time, streaming telescope data in software, using a supercomputer. The desire for a flexible and reconfigurable instrument demands a software solution, where traditionally customized hardware was used. This, and LOFAR's exceptional real-time, streaming signal-processing requirements, compel the use of a supercomputer. We focus on the LOFAR CEntral Processing facility (CEP), which combines the signals of all LOFAR stations. CEP consists of a 12,288-core IBM Blue Gene/L supercomputer, embedded in several conventional clusters. We describe a highly optimized implementation that will do the bulk of the central signal processing on the Blue Gene/L, namely polyphase filtering, delay compensation, and correlation. Measurements show that we reach exceptionally high computational performance (up to 98% of the theoretical floating-point peak performance). We also discuss how we handle external I/O performance limitations into and out of the Blue Gene/L, to obtain sufficient bandwidth for LOFAR.
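
One of the steps named above, delay compensation, amounts to a per-channel phase rotation of the complex samples. The sketch below shows that operation in plain C; the function name, data layout, and parameters are assumptions for illustration and do not reflect LOFAR's actual code.

```c
/* Minimal sketch of per-channel delay compensation: a residual geometric
 * delay tau is removed by rotating each channel's complex sample by
 * exp(-2*pi*i * freq * tau). Names and layout are illustrative assumptions. */
#include <complex.h>
#include <math.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

void delay_compensate(float complex *samples,   /* samples[nchan], one time slot */
                      const double *freq_hz,    /* centre frequency per channel  */
                      double tau_s,             /* residual geometric delay      */
                      int nchan)
{
    for (int ch = 0; ch < nchan; ch++) {
        double phase = -2.0 * M_PI * freq_hz[ch] * tau_s;
        float complex rot = cexpf(I * (float)phase);
        samples[ch] *= rot;                     /* phase rotation only */
    }
}
```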


acm sigplan symposium on principles and practice of parallel programming | 2010

The LOFAR correlator: implementation and performance analysis

John W. Romein; P. Chris Broekema; Jan David Mol; Rob V. van Nieuwpoort

LOFAR is the first of a new generation of radio telescopes. Rather than using expensive dishes, it forms a distributed sensor network that combines the signals from many thousands of simple antennas. Its revolutionary design allows observations in a frequency range that has hardly been studied before. Another novel feature of LOFAR is the elaborate use of software to process data, where traditional telescopes use customized hardware. This dramatically increases flexibility and substantially reduces costs, but the high processing and bandwidth requirements compel the use of a supercomputer. The antenna signals are centrally combined, filtered, optionally beam-formed, and correlated by an IBM Blue Gene/P. This paper describes the implementation of the so-called correlator. To meet the real-time requirements, the application is highly optimized, and reaches exceptionally high computational and I/O efficiencies. Additionally, we study the scalability of the system, and show that it scales well beyond the requirements. These optimizations allow us to use only half the planned amount of resources, and process 50% more telescope data, significantly improving the effectiveness of the entire telescope.
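
For readers unfamiliar with correlators, the sketch below shows the unoptimized core of an FX-style correlator: a complex multiply-accumulate over time for every station pair and frequency channel. The names and data layout are assumptions; the production LOFAR correlator is heavily optimized for the Blue Gene/P and structured very differently.

```c
/* Naive FX correlator core (illustrative only): accumulate x_i * conj(x_j)
 * per baseline and channel. The caller sizes vis for
 * nstation*(nstation+1)/2 baselines, autocorrelations included. */
#include <complex.h>

void correlate(const float complex *x,  /* x[station][time][channel], flattened */
               float complex *vis,      /* vis[baseline][channel] accumulators  */
               int nstation, int ntime, int nchan)
{
    int baseline = 0;
    for (int i = 0; i < nstation; i++) {
        for (int j = 0; j <= i; j++, baseline++) {
            for (int ch = 0; ch < nchan; ch++) {
                float complex acc = 0.0f;
                for (int t = 0; t < ntime; t++) {
                    float complex xi = x[(i * ntime + t) * nchan + ch];
                    float complex xj = x[(j * ntime + t) * nchan + ch];
                    acc += xi * conjf(xj);      /* complex multiply-accumulate */
                }
                vis[baseline * nchan + ch] += acc;
            }
        }
    }
}
```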


ieee international conference on high performance computing data and analytics | 2011

Performance and Scalability Evaluation of 'Big Memory' on Blue Gene Linux

Kazutomo Yoshii; Kamil Iskra; Harish Gapanati Naik; Peter H. Beckman; P. Chris Broekema

We address memory performance issues observed in Blue Gene Linux and discuss the design and implementation of ‘Big Memory’, an alternative, transparent memory space introduced to eliminate these issues. We evaluate the performance of Big Memory using custom memory benchmarks, the NAS Parallel Benchmarks, and the Parallel Ocean Program, at a scale of up to 4,096 nodes. We find that Big Memory successfully resolves the performance issues normally encountered in Blue Gene Linux. For the ocean simulation program, we even find that Linux with Big Memory provides better scalability than does the lightweight compute node kernel designed solely for high-performance applications. Originally intended exclusively for compute node tasks, our new memory subsystem dramatically improves the performance of certain I/O node applications as well. We demonstrate this performance using the central processor of the LOw Frequency ARray radio telescope as an example.
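
The kind of memory behaviour that separates 4 KB-page Linux from a flat large-page mapping can be exposed with a simple pointer-chasing microbenchmark, sketched below. This is an assumed illustration of such a custom memory benchmark, not the one used in the paper; the working-set size is arbitrary.

```c
/* Pointer-chasing latency microbenchmark (illustrative). A large random-access
 * working set thrashes the TLB with small pages, while a flat large-page
 * mapping keeps translations cheap; the measured ns/load reflects that. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N (1UL << 24)                     /* 16M pointers, ~128 MiB (assumed) */

int main(void)
{
    size_t *chain = malloc(N * sizeof *chain);
    if (!chain) return EXIT_FAILURE;

    /* Sattolo's algorithm: a single cycle, so every load depends on the last. */
    for (size_t i = 0; i < N; i++) chain[i] = i;
    for (size_t i = N - 1; i > 0; i--) {
        size_t j = (size_t)rand() % i;
        size_t tmp = chain[i]; chain[i] = chain[j]; chain[j] = tmp;
    }

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    size_t idx = 0;
    for (size_t i = 0; i < N; i++) idx = chain[idx];   /* serialized loads */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
    printf("avg latency: %.1f ns/load (idx=%zu)\n", ns / N, idx);
    free(chain);
    return EXIT_SUCCESS;
}
```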


ieee international conference on high performance computing data and analytics | 2012

ExaScale high performance computing in the square kilometer array

P. Chris Broekema; Rob V. van Nieuwpoort; Henri E. Bal

Next generation radio telescopes will require tremendous amounts of compute power. With the current state of the art, the Square Kilometer Array (SKA), currently entering its pre-construction phase, will require in excess of one ExaFlop/s to process and reduce the massive amount of data generated by its sensors. The nature of the processing involved means that conventional high performance computing (HPC) platforms are not ideally suited. Consequently, the SKA project requires active and intensive involvement from both the high performance computing research community and industry to ensure that a suitable system is available when the telescope is built. In this paper we present a first analysis of the processing required, and a tool that will facilitate future analysis and external involvement.


ursi general assembly and scientific symposium | 2011

Processing LOFAR telescope data in real time on a Blue Gene/P supercomputer

John W. Romein; Jan David Mol; Rob V. van Nieuwpoort; P. Chris Broekema

This paper gives an overview of the LOFAR correlator. Unlike traditional telescopes, the correlator is implemented in software, yielding a very flexible and reconfigurable instrument. The term “correlator” understates its capabilities: it filters, corrects, coherently or incoherently beam forms, dedisperses, and transforms the data as well. It supports several observation modes, even simultaneously. The high data rates and processing requirements compel the use of a supercomputer; we use a Blue Gene/P. The software is highly optimized and achieves extremely good computational performance and bandwidths, increasing the performance of the entire LOFAR telescope.
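
Two of the modes mentioned, coherent and incoherent beam forming, differ in whether complex voltages or powers are summed across stations. The sketch below contrasts them; the names and data layout are illustrative assumptions, not LOFAR code.

```c
/* Coherent vs. incoherent beam forming (illustrative).
 * Coherent: sum phase-aligned complex voltages (narrow beam, preserves phase).
 * Incoherent: sum per-station powers (wide beam, no phase alignment needed). */
#include <complex.h>

/* x[station][sample]: assumed already delay/phase corrected toward the beam. */
float complex coherent_sample(const float complex *x, int nstation, int nsamp, int t)
{
    float complex sum = 0.0f;
    for (int s = 0; s < nstation; s++)
        sum += x[s * nsamp + t];                 /* voltages add in phase */
    return sum;
}

float incoherent_sample(const float complex *x, int nstation, int nsamp, int t)
{
    float power = 0.0f;
    for (int s = 0; s < nstation; s++) {
        float complex v = x[s * nsamp + t];
        power += crealf(v) * crealf(v) + cimagf(v) * cimagf(v);   /* |v|^2 */
    }
    return power;
}
```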


Future Generation Computer Systems | 2018

Energy-efficient data transfers in radio astronomy with software UDP RDMA

Przemyslaw Lenkiewicz; P. Chris Broekema; Bernard Metzler

Modern radio astronomy relies on very large amounts of data that need to be transferred between various parts of astronomical instruments, over distances that are often in the range of tens or hundreds of kilometres. The Square Kilometre Array (SKA) will be the world’s largest radio telescope; data rates between its components will exceed terabits per second. This will impose a huge challenge on its data transport system, especially with regard to power consumption. High-speed data transfers using modern off-the-shelf hardware may impose a significant load on the receiving system with respect to CPU and DRAM usage. The SKA has a strict energy budget, which demands a new, custom-designed data transport solution. In this paper we present SoftiWARP UDP, an unreliable datagram-based Remote Direct Memory Access (RDMA) protocol, which can significantly increase the energy efficiency of high-speed data transfers for radio astronomy. We have implemented a fully functional software prototype of this protocol, supporting RDMA Read and Write operations and zero-copy capabilities. We present measurements of power consumption and achieved bandwidth, and investigate the behaviour of all examined protocols when subjected to packet loss.
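
For context, the sketch below shows a conventional UDP receive loop, in which every datagram costs a system call and a kernel-to-user copy; this per-packet CPU and memory load is what a zero-copy, RDMA-style transport avoids. The port number and buffer size are arbitrary assumptions, and the code is not part of SoftiWARP UDP.

```c
/* Conventional UDP receiver (illustrative baseline, not the paper's protocol).
 * Each recvfrom() is one syscall plus one kernel-to-user copy per datagram. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    if (fd < 0) { perror("socket"); return EXIT_FAILURE; }

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof addr);
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(9000);                 /* assumed port */
    if (bind(fd, (struct sockaddr *)&addr, sizeof addr) < 0) {
        perror("bind"); return EXIT_FAILURE;
    }

    static char buf[9000];                       /* room for one jumbo frame */
    for (;;) {
        ssize_t n = recvfrom(fd, buf, sizeof buf, 0, NULL, NULL);
        if (n < 0) { perror("recvfrom"); break; }
        /* ... hand the payload to the processing pipeline ... */
    }
    close(fd);
    return EXIT_SUCCESS;
}
```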


computing frontiers | 2017

Software-defined networks in large-scale radio telescopes

P. Chris Broekema; Damiaan R. Twelker; Daniel Filipe Cabaça Romão; Paola Grosso; Rob V. van Nieuwpoort; Henri E. Bal

Traditional networks are relatively static and rely on a complex stack of interoperating protocols for proper operation. Modern large-scale science instruments, such as radio telescopes, consist of an interconnected collection of sensors generating large quantities of data, transported over high-bandwidth IP over Ethernet networks. The concept of a software-defined network (SDN) has recently gained popularity, moving control over the data flow to a programmable software component, the network controller. In this paper we explore the viability of such an SDN in sensor networks typical of future large-scale radio telescopes, such as the Square Kilometre Array (SKA). Based on experience with the LOw Frequency ARray (LOFAR), a recent radio telescope, we show that the addition of such software control adds to the reliability and flexibility of the instrument. We identify some essential technical SDN requirements for this application, and investigate the level of functional support on three current switches and a virtual software switch. A proof of concept application validates the viability of this concept. While we identify limitations in the SDN implementations and performance of two of our hardware switches, excellent performance is shown on a third.


ieee international conference on high performance computing data and analytics | 2012

DOME: towards the ASTRON & IBM center for exascale technology

P. Chris Broekema; Albert-Jan Boonstra; Victoria Caparros Cabezas; Ton Engbersen; Hanno Holties; Jens Jelitto; Ronald P. Luijten; Peter Maat; Rob V. van Nieuwpoort; Ronald Nijboer; John W. Romein; Bert Jan Offrein


arXiv: Instrumentation and Methods for Astrophysics | 2018

On optimising cost and value in eScience: case studies in radio astronomy

P. Chris Broekema; Verity L Allan; Henri E. Bal

Collaboration



Top Co-Authors

Henri E. Bal (VU University Amsterdam)
Kamil Iskra (Argonne National Laboratory)
Kazutomo Yoshii (Argonne National Laboratory)