
Publication


Featured research published by G. Matthijs van Waveren.


International Workshop on OpenMP | 2012

SPEC OMP2012 -- An Application Benchmark Suite for Parallel Systems Using OpenMP

Matthias S. Müller; John Baron; William C. Brantley; Huiyu Feng; Daniel Hackenberg; Robert Henschel; Gabriele Jost; Daniel Molka; Chris Parrott; Joe Robichaux; Pavel Shelepugin; G. Matthijs van Waveren; Brian Whitney; Kalyan Kumaran

This paper describes SPEC OMP2012, a benchmark developed by the SPEC High Performance Group. It consists of 15 OpenMP parallel applications from a wide range of fields. In addition to a performance metric based on the run time of the applications, the benchmark adds an optional energy metric. The accompanying run rules detail how the benchmarks are executed and how the results are reported; they also cover the energy measurements. The first set of results demonstrates scalability on three different platforms.
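
As an illustration of how a run-time-based metric of this kind is typically aggregated, the sketch below computes a geometric mean of reference-time to measured-time ratios. The reference and measured times are hypothetical, and the authoritative scoring definition is in the SPEC OMP2012 run rules, not this sketch.

! Minimal sketch, assuming a SPEC-style score is the geometric mean of
! reference-time / measured-time ratios; the numbers are hypothetical and
! the actual rules are defined by the SPEC OMP2012 run rules.
program spec_style_score
  implicit none
  integer, parameter :: n = 3
  real :: ref(n)  = (/ 900.0, 1200.0, 600.0 /)   ! hypothetical reference run times (s)
  real :: meas(n) = (/ 450.0,  400.0, 300.0 /)   ! hypothetical measured run times (s)
  real :: score
  score = exp(sum(log(ref / meas)) / real(n))    ! geometric mean of the ratios
  print *, 'overall ratio (geometric mean):', score
end program spec_style_score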


IEEE International Conference on High Performance Computing, Data, and Analytics | 2014

SPEC ACCEL: A Standard Application Suite for Measuring Hardware Accelerator Performance

Guido Juckeland; William C. Brantley; Sunita Chandrasekaran; Barbara M. Chapman; Shuai Che; Mathew E. Colgrove; Huiyu Feng; Alexander Grund; Robert Henschel; Wen-mei W. Hwu; Huian Li; Matthias S. Müller; Wolfgang E. Nagel; Maxim Perminov; Pavel Shelepugin; Kevin Skadron; John A. Stratton; Alexey Titov; Ke Wang; G. Matthijs van Waveren; Brian Whitney; Sandra Wienke; Rengan Xu; Kalyan Kumaran

Hybrid nodes with hardware accelerators are becoming very common in systems today. Users often find it difficult to characterize and understand the performance advantage of such accelerators for their applications. The SPEC High Performance Group (HPG) has developed a set of performance metrics to evaluate the performance and power consumption of accelerators for various science applications. The new benchmark comprises two suites of applications, written in OpenCL and OpenACC respectively, and measures the performance of accelerators with respect to a reference platform. The first set of published results demonstrates the viability and relevance of the new metrics in comparing accelerator performance. This paper discusses the benchmark suites and selected published results in detail.
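
For readers unfamiliar with the directive style used by the OpenACC suite, the fragment below is a generic OpenACC-annotated SAXPY loop. It is illustrative only and is not taken from the SPEC ACCEL applications.

! Illustrative OpenACC fragment (not from SPEC ACCEL): the loop is offloaded
! to an accelerator, with the data clauses describing host/device transfers.
subroutine saxpy(n, a, x, y)
  implicit none
  integer, intent(in) :: n
  real, intent(in)    :: a, x(n)
  real, intent(inout) :: y(n)
  integer :: i
  !$acc parallel loop copyin(x) copy(y)
  do i = 1, n
     y(i) = a * x(i) + y(i)
  end do
end subroutine saxpy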


IEEE International Conference on High Performance Computing, Data, and Analytics | 2004

SPEC HPG benchmarks for high-performance systems

Matthias S. Müller; Kumaran Kalyanasundaram; Greg Gaertner; Wesley B. Jones; Rudolf Eigenmann; Ron Lieberman; G. Matthijs van Waveren; Brian Whitney

In this paper, we discuss the results and characteristics of the benchmark suites maintained by the Standard Performance Evaluation Corporation's (SPEC) High-Performance Group (HPG). Currently, SPEC HPG has two lines of benchmark suites for measuring the performance of large-scale systems: SPEC OMP and SPEC HPC2002. SPEC OMP uses the OpenMP API and includes benchmark suites intended for measuring the performance of modern shared-memory parallel systems. SPEC HPC2002 uses both OpenMP and MPI, and is thus suitable for distributed-memory systems, shared-memory systems, and hybrid systems. SPEC HPC2002 contains benchmarks from three popular application areas: chemistry, seismic processing, and weather forecasting. Each of the three benchmarks in HPC2002 has a small and a medium data set in order to satisfy the need for benchmarking a wide range of high-performance systems. We analyse published results of these benchmark suites with regard to scalability. We also present current efforts of SPEC HPG to create new releases of the benchmark suites.
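
The hybrid style that makes SPEC HPC2002 suitable for distributed-memory, shared-memory, and hybrid systems combines MPI between processes with OpenMP within each process. A minimal sketch of that combination follows; it is illustrative and not code from the suite.

! Minimal hybrid MPI + OpenMP sketch (illustrative, not from SPEC HPC2002):
! MPI distributes the work across processes, OpenMP parallelizes within one.
program hybrid_sketch
  use mpi
  implicit none
  integer :: ierr, rank, i
  real :: local_sum, global_sum
  real :: a(1000)
  call MPI_Init(ierr)
  call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
  a = real(rank + 1)
  local_sum = 0.0
  !$omp parallel do reduction(+:local_sum)
  do i = 1, size(a)
     local_sum = local_sum + a(i)
  end do
  !$omp end parallel do
  call MPI_Reduce(local_sum, global_sum, 1, MPI_REAL, MPI_SUM, 0, &
                  MPI_COMM_WORLD, ierr)
  if (rank == 0) print *, 'global sum =', global_sum
  call MPI_Finalize(ierr)
end program hybrid_sketch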


IEEE International Conference on High Performance Computing, Data, and Analytics | 2003

SPEC HPG Benchmarks for Large Systems

Matthias S. Müller; Kumaran Kalyanasundaram; Greg Gaertner; Wesley B. Jones; Rudolf Eigenmann; Ron Lieberman; G. Matthijs van Waveren; Brian Whitney

Performance characteristics of application programs on large-scale systems are often significantly different from those on smaller systems. In this paper, we discuss such characteristics of the benchmark suites maintained by the Standard Performance Evaluation Corporation's (SPEC) High-Performance Group (HPG), which has developed a set of benchmarks for measuring the performance of large-scale systems using both the OpenMP and MPI parallel programming paradigms. Currently, SPEC HPG has two lines of benchmark suites: SPEC OMP and SPEC HPC2002. SPEC OMP uses the OpenMP API and includes benchmark suites intended for measuring the performance of modern shared-memory parallel systems. SPEC HPC2002 is based on both OpenMP and MPI, and is thus suitable for distributed-memory systems, shared-memory systems, and hybrid systems. SPEC HPC2002 contains benchmarks from three popular application areas: chemistry, seismic processing, and weather forecasting. Each of the three benchmarks in HPC2002 has small and medium data sets in order to satisfy the need for benchmarking a wide range of high-performance systems. We present our experiences regarding the scalability of the benchmark suites. We also analyze published results of these benchmark suites based on application program behavior and the systems' architectural features.


IEEE International Conference on High Performance Computing, Data, and Analytics | 1994

High-Performance Fortran

G. Matthijs van Waveren

The advantages of using parallel processing technology in industrial applications lie in cost reduction and improved turnaround time. In seismic production, for example, the improvement in turnaround time can lead to lower costs and quicker results for clients. Likewise, in product development where computer simulations play a significant role, for instance in drug design, aircraft design and flow meter design, the use of parallel processing technology shortens the time-to-market.


Concurrency and Computation: Practice and Experience | 2002

VPP Fortran and the design of HPF/JA extensions

Hidetoshi Iwashita; Naoki Sueyasu; Sachio Kamiya; G. Matthijs van Waveren

VPP Fortran is a data parallel language that has been designed for the VPP series of supercomputers. In addition to pure data parallelism, it contains certain low-level features that were designed to extract high performance from user programs. A comparison of VPP Fortran and High Performance Fortran (HPF) 2.0 shows that these low-level features are not available in HPF 2.0. The features include asynchronous inter-processor communication, explicit shadow, and the LOCAL directive. They were shown in VPP Fortran to be very useful in handling real-world applications, and they have been included in the HPF/JA extensions described in this paper. The HPF/JA Language Specification Version 1.0 is an extension of HPF 2.0 that aims at practical performance for real-world applications and is the result of collaboration in the Japan Association for HPF (JAHPF). Some practical programming and tuning procedures with the HPF/JA Language Specification are described, using the NAS Parallel Benchmark BT as an example.
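
To make the "explicit shadow" idea concrete, the fragment below shows a block-distributed array with a shadow (halo) region around a stencil loop, written with standard HPF directives. It is a schematic sketch rather than an excerpt from the paper, and the exact directive spellings of the HPF/JA asynchronous-communication and LOCAL extensions are not reproduced here.

! Schematic HPF sketch (not from the paper): a block-distributed array with
! an explicit shadow region, the kind of feature the HPF/JA extensions make
! controllable by the programmer.
subroutine stencil(n, u, unew)
  implicit none
  integer, intent(in) :: n
  real, intent(in)    :: u(n)
  real, intent(out)   :: unew(n)
!HPF$ PROCESSORS procs(4)
!HPF$ DISTRIBUTE (BLOCK) ONTO procs :: u, unew
!HPF$ SHADOW u(1:1)      ! halo of one element on each side of the local block
  integer :: i
!HPF$ INDEPENDENT
  do i = 2, n - 1
     unew(i) = 0.5 * (u(i-1) + u(i+1))
  end do
end subroutine stencil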


IEEE International Conference on High Performance Computing, Data, and Analytics | 2009

Towards a Lightweight HPF Compiler

Hidetoshi Iwashita; Kohichiro Hotta; Sachio Kamiya; G. Matthijs van Waveren

The UXP/V HPF compiler, which was developed for the VPP series of vector-parallel supercomputers, extracts the highest performance from that hardware. However, it is becoming difficult for developers to concentrate on a single specific hardware platform. This paper describes a method of developing an HPF compiler for multiple platforms without losing performance. The approach takes advantage of existing technology: the code generator and runtime system of VPP Fortran are reused for high-end computers, while MPI is employed for general distributed environments such as PC clusters. Following a performance estimation on different systems, we discuss the effectiveness of the method and open issues.


IEEE International Conference on High Performance Computing, Data, and Analytics | 2003

On the Implementation of OpenMP 2.0 Extensions in the Fujitsu PRIMEPOWER Compiler

Hidetoshi Iwashita; Masanori Kaneko; Masaki Aoki; Kohichiro Hotta; G. Matthijs van Waveren

The OpenMP Architecture Review Board released version 2.0 of the OpenMP Fortran language specification in November 2000 and version 2.0 of the OpenMP C/C++ language specification in March 2002. This paper discusses the implementation of the OpenMP Fortran 2.0 WORKSHARE construct, NUM_THREADS clause, COPYPRIVATE clause, and array REDUCTION clause in the Parallelnavi software package. We focus on the WORKSHARE construct and discuss how we attain parallelization through loop fusion.
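
A small example of the construct in question may help: inside WORKSHARE, each array assignment is shared among the threads of the enclosing parallel region, and adjacent assignments such as these are candidates for the loop fusion the paper describes. The fragment is illustrative and not taken from the paper.

! Illustrative OpenMP Fortran 2.0 fragment (not from the paper): the two
! array assignments inside WORKSHARE are divided among the team's threads,
! and a compiler may fuse them into a single parallel loop.
subroutine update(n, a, b, c, d)
  implicit none
  integer, intent(in) :: n
  real, intent(in)    :: b(n), c(n)
  real, intent(out)   :: a(n), d(n)
  !$omp parallel num_threads(4)
  !$omp workshare
  a = b + c
  d = 2.0 * a
  !$omp end workshare
  !$omp end parallel
end subroutine update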


Concurrency and Computation: Practice and Experience | 2002

Code generator for the HPF Library and Fortran 95 transformational functions

G. Matthijs van Waveren; Cliff Addison; Peter Harrison; Dave Orange; Norman Brown; Hidetoshi Iwashita

One of the language features of the core language of HPF 2.0 (High Performance Fortran) is the HPF Library. The HPF Library consists of 55 generic functions. The implementation of this library presents the challenge that all data types, data kinds, array ranks and input distributions need to be supported. For instance, more than 2 billion separate functions are required to support COPY_SCATTER fully. The efficient support of these billions of specific functions is one of the outstanding problems of HPF. We have solved this problem by developing a library generator which utilizes the mechanism of parameterized templates. This mechanism allows the procedures to be instantiated at compile time for arguments with a specific type, kind, rank and distribution over a specific processor array. We describe the algorithms used in the different library functions. The implementation makes it possible to generate a large number of library routines from a single template. The templates can be extended with special code for specific combinations of the input arguments. We describe in detail the implementation and performance of the matrix multiplication template for the Fujitsu VPP5000 platform.
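
The combinatorial problem can be illustrated with ordinary Fortran generics: one generic name has to resolve to a different specific routine for every combination of argument type, kind and rank, and the paper's generator additionally specializes each specific for a data distribution. The sketch below is illustrative and simplifies the actual COPY_SCATTER semantics; it is not the authors' generator.

! Illustrative sketch (not the authors' generator): one generic name maps to
! many specific routines, one per type/kind/rank combination; the paper's
! templates generate such specifics, further specialized per distribution.
module copy_scatter_mod
  implicit none
  interface copy_scatter                     ! one generic name ...
     module procedure copy_scatter_r4_rank1  ! ... many specific routines
     ! further specifics for other types, kinds, ranks and distributions
  end interface
contains
  subroutine copy_scatter_r4_rank1(src, dest, idx)
    real,    intent(in)    :: src(:)
    real,    intent(inout) :: dest(:)
    integer, intent(in)    :: idx(:)
    integer :: i
    do i = 1, size(src)            ! simplified scatter: dest(idx(i)) <- src(i)
       dest(idx(i)) = src(i)
    end do
  end subroutine copy_scatter_r4_rank1
end module copy_scatter_mod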


IEEE International Conference on High Performance Computing, Data, and Analytics | 1997

Application of HPCN to Direct Numerical Simulation of Turbulent Flow

Roel Verstappen; Arthur Veldman; G. Matthijs van Waveren

This poster shows how HPCN can be used as a path-finding tool for turbulence research. The parallelization of direct numerical simulation of turbulent flow using the data-parallel model and Fortran 95 constructs is treated, on both a shared-memory and a distributed-memory computer.
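
As an indication of the kind of Fortran 95 data-parallel constructs referred to here, the fragment below uses whole-array assignment and FORALL for a simple smoothing stencil. It is illustrative and not taken from the DNS code.

! Illustrative Fortran 95 data-parallel fragment (not from the DNS code):
! whole-array assignment and FORALL expose the parallelism to the compiler.
subroutine smooth(u)
  implicit none
  real, intent(inout) :: u(:,:)
  real, allocatable :: tmp(:,:)
  integer :: n, m, i, j
  n = size(u, 1)
  m = size(u, 2)
  allocate(tmp(n, m))
  tmp = u                                    ! whole-array copy
  forall (i = 2:n-1, j = 2:m-1)
     u(i, j) = 0.25 * (tmp(i-1, j) + tmp(i+1, j) + tmp(i, j-1) + tmp(i, j+1))
  end forall
end subroutine smooth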

Collaboration


Dive into G. Matthijs van Waveren's collaborations.

Top Co-Authors

Wesley B. Jones

National Renewable Energy Laboratory

Kalyan Kumaran

Argonne National Laboratory
