Axel Auweter
Bavarian Academy of Sciences and Humanities
Publications
Featured research published by Axel Auweter.
IEEE International Conference on High Performance Computing, Data, and Analytics | 2016
Nikola Rajovic; Alejandro Rico; F. Mantovani; Daniel Ruiz; Josep Oriol Vilarrubi; Constantino Gómez; Luna Backes; Diego Nieto; Harald Servat; Xavier Martorell; Jesús Labarta; Eduard Ayguadé; Chris Adeniyi-Jones; Said Derradji; Hervé Gloaguen; Piero Lanucara; Nico Sanna; Jean-François Méhaut; Kevin Pouget; Brice Videau; Eric Boyer; Momme Allalen; Axel Auweter; David Brayford; Daniele Tafani; Volker Weinberg; Dirk Brömmel; Rene Halver; Jan H. Meinke; Ramón Beivide
High-performance computing (HPC) is recognized as one of the pillars of further progress in science, industry, medicine, and education. Current HPC systems are being developed to overcome emerging architectural challenges in order to reach the Exascale level of performance, projected for the year 2020. The much larger embedded and mobile market allows for rapid development of intellectual property (IP) blocks and provides more flexibility in designing an application-specific system-on-chip (SoC), in turn making it possible to balance performance, energy efficiency, and cost. In the Mont-Blanc project, we advocate building HPC systems from such commodity IP blocks, currently used in embedded and mobile SoCs. As a first demonstrator of this approach, we present the Mont-Blanc prototype: the first HPC system built with commodity SoCs, memories, and network interface cards (NICs) from the embedded and mobile domain, combined with off-the-shelf HPC networking, storage, cooling, and integration solutions. We present the system's architecture and evaluate both its performance and its energy efficiency. Further, we compare the system's capabilities against a production-level supercomputer. Finally, we discuss parallel scalability and estimate the maximum scalability point of this approach across a set of applications.
High Performance Computing Systems and Applications | 2014
Torsten Wilde; Axel Auweter; Michael K. Patterson; Hayk Shoukourian; Herbert Huber; Arndt Bode; Detlef Labrenz; Carlo Cavazzoni
To determine whether a High-Performance Computing (HPC) data center is energy efficient, various aspects have to be taken into account: the data center's power distribution and cooling infrastructure, the HPC system itself, the influence of the system management software, and the HPC workloads all contribute to the overall energy efficiency of the data center. Currently, two well-established metrics are used to determine energy efficiency for HPC data centers and systems: Power Usage Effectiveness (PUE) and FLOPS per watt (as defined by the Green500 in their ranking list). PUE evaluates the overhead of running a data center, while FLOPS per watt characterizes the energy efficiency of a system running the High-Performance Linpack (HPL) benchmark, i.e. the floating-point operations per second achieved with one watt of electrical power. Unfortunately, under closer examination even the combination of both metrics does not characterize the overall energy efficiency of an HPC data center. First, HPL does not constitute a representative workload for most of today's HPC applications, and the rev 0.9 Green500 run rules for power measurements allow for excluding subsystems (e.g. networking, storage, cooling). Second, even a combination of PUE with the FLOPS-per-watt metric neglects the fact that the total energy efficiency of a system can vary with the characteristics of the data center in which it is operated. This is due to the different cooling technologies implemented in HPC systems and the differences in the costs the data center incurs to remove the heat using these technologies. To address these issues, this paper introduces the metrics system PUE (sPUE) and Data center Workload Power Efficiency (DWPE). sPUE calculates the overhead of operating a given system in a certain data center. DWPE is then calculated by determining the energy efficiency of a specific workload and dividing it by the sPUE.
DWPE can then be used to define the energy efficiency of running a given workload on a specific HPC system in a specific data center and is currently the only fully integrated metric suitable for rating an HPC data center's energy efficiency. In addition, DWPE allows for predicting the energy efficiency of different HPC systems in existing HPC data centers, thus making it an ideal approach for guiding HPC system procurement. This paper concludes with a demonstration of the application of DWPE using a set of representative HPC workloads.
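As a rough illustration of the metric definitions in this abstract, the two quantities reduce to simple ratios. The variable names and units below are illustrative assumptions, not taken from the paper:

```python
def spue(system_power_w, infrastructure_power_w):
    # sPUE: overhead of operating this particular system in this
    # particular data center (analogous to PUE, but scoped to one
    # system plus the infrastructure power attributable to it).
    return (system_power_w + infrastructure_power_w) / system_power_w

def dwpe(workload_gflops, system_power_w, infrastructure_power_w):
    # DWPE: workload energy efficiency (here GFLOPS per watt)
    # divided by the system's sPUE.
    efficiency = workload_gflops / system_power_w
    return efficiency / spue(system_power_w, infrastructure_power_w)

# Example: 1000 GFLOPS at 500 W system power, with 100 W of
# attributable cooling and power-distribution overhead.
print(spue(500, 100))        # 1.2
print(dwpe(1000, 500, 100))  # 2.0 GFLOPS/W divided by 1.2
```

Because sPUE depends on the hosting data center, the same system can score a different DWPE in different facilities, which is exactly the effect the paper argues PUE and FLOPS per watt miss when used separately.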
International Conference on High Performance Computing and Simulation | 2015
Hayk Shoukourian; Torsten Wilde; Axel Auweter; Arndt Bode
Efficient scheduling is crucial for the time- and cost-effective utilization of compute resources, especially for high-end systems. A variety of factors need to be considered in scheduling decisions, but power variation across the compute resources of homogeneous large-scale systems has not been considered so far. This paper discusses the impact of this power variation on parallel application scheduling. It addresses the problem of finding the optimal resource configuration for a given application that minimizes the amount of consumed energy, under pre-defined constraints on application execution time and instantaneous average power consumption. The paper presents an efficient algorithm to do so, which also accounts for the power diversity among the compute nodes of a given homogeneous High Performance Computing system, including how this diversity changes at different CPU operating frequencies. Based on this algorithm, the paper presents a plug-in, referred to as the Configuration Adviser, which operates on top of a given resource management and scheduling system and advises on the energy-optimal resource configuration for a given application such that its execution adheres to the specified execution-time and power-consumption constraints. The main goal of this plug-in is to enhance current resource management and scheduling tools to support power capping for future Exascale systems, where a data center might not be able to provide cooling or electrical power for the system's peak consumption but only for the expected power bands.
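A minimal brute-force sketch of the constrained configuration search described in this abstract. The paper's actual algorithm is more efficient than exhaustive enumeration, and the function names, the per-node power table, and the runtime model here are all illustrative assumptions:

```python
from itertools import combinations

def best_configuration(node_power, frequencies, runtime_model,
                       n_nodes, max_runtime, max_avg_power):
    """Pick the node subset and CPU frequency that minimize energy
    (power * runtime) while respecting the execution-time and
    average-power constraints. node_power[i][f] is the measured
    power draw of node i at frequency f, which captures the
    per-node power variation of a homogeneous system."""
    best = None
    for nodes in combinations(range(len(node_power)), n_nodes):
        for f in frequencies:
            power = sum(node_power[i][f] for i in nodes)
            runtime = runtime_model(len(nodes), f)
            if runtime > max_runtime or power > max_avg_power:
                continue  # violates a constraint
            energy = power * runtime
            if best is None or energy < best[0]:
                best = (energy, nodes, f)
    return best  # None if no feasible configuration exists

# Three nodes with slightly different power draws at two frequencies,
# a runtime model that scales inversely with frequency, two nodes
# requested, a 120 s time limit, and a 300 W average-power cap.
node_power = [{2.0: 100.0, 1.5: 80.0},
              {2.0: 110.0, 1.5: 85.0},
              {2.0: 90.0, 1.5: 70.0}]
print(best_configuration(node_power, [2.0, 1.5],
                         lambda n, f: 200.0 / f, 2, 120.0, 300.0))
```

In this toy example the lower frequency is infeasible (it misses the time limit), so the search picks the two lowest-power nodes at 2.0 GHz; on a real system the power table would come from node measurements and the runtime model from application profiling.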
Information and Communication on Technology for the Fight Against Global Warming | 2011
Axel Auweter; Arndt Bode; Matthias Brehm; Herbert Huber; Dieter Kranzlmüller
High Performance Computing (HPC) is a key technology for modern researchers, enabling scientific advances through simulation where experiments are either technically impossible or financially infeasible to conduct and theory is not applicable. However, the high degree of computational power available from today's supercomputers comes at the cost of large quantities of electrical energy being consumed. This paper aims to give an overview of current state-of-the-art and future techniques to reduce the overall power consumption of HPC systems and sites. We believe that a holistic approach to monitoring and operation at all levels of a supercomputing site is necessary. Thus, we concentrate not only on the possibility of improving the energy efficiency of the compute hardware itself, but also of site infrastructure components for power distribution and cooling. Since most of the energy consumed by supercomputers is converted into heat, we also outline possible technologies to re-use waste heat in order to improve the Power Usage Effectiveness (PUE) of the entire supercomputing site.
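For reference, the PUE metric mentioned above is a simple ratio; the related Green Grid metric ERE (Energy Reuse Effectiveness), which is not named in this abstract but directly captures the benefit of waste-heat re-use, credits energy that leaves the data center usefully. A small sketch:

```python
def pue(total_facility_energy_kwh, it_energy_kwh):
    # PUE >= 1.0, and lower is better: 1.0 would mean zero
    # infrastructure overhead for power distribution and cooling.
    return total_facility_energy_kwh / it_energy_kwh

def ere(total_facility_energy_kwh, reused_energy_kwh, it_energy_kwh):
    # ERE is like PUE but subtracts energy re-used outside the data
    # center, e.g. waste heat fed into a district-heating loop, so
    # heat re-use can push it below the site's PUE.
    return (total_facility_energy_kwh - reused_energy_kwh) / it_energy_kwh

# A site drawing 1500 kWh to deliver 1000 kWh to IT equipment,
# re-using 300 kWh of waste heat:
print(pue(1500, 1000))       # 1.5
print(ere(1500, 300, 1000))  # 1.2
```

This makes the paper's point concrete: re-using waste heat does not change the facility's raw overhead, but it improves the effective energy balance of the site.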
IEEE International Conference on High Performance Computing, Data, and Analytics | 2015
Torsten Wilde; Axel Auweter; Hayk Shoukourian; Arndt Bode
Saving energy and, therefore, reducing the Total Cost of Ownership (TCO) of High Performance Computing (HPC) data centers has generated increasing attention in light of rising energy costs and the technical hurdles imposed when powering multi-MW data centers. The broadest impact on data center energy efficiency can be achieved by techniques that do not require application-specific tuning. Improving the Power Usage Effectiveness (PUE), for example, benefits everything that happens in a data center. Less broad, but still more general than individual application tuning, is improving the energy efficiency of the HPC system itself. One property of homogeneous HPC systems that has not been considered so far is the existence of node power variation.
2017 33rd Thermal Measurement, Modeling & Management Symposium (SEMI-THERM) | 2017
Torsten Wilde; Michael Ott; Axel Auweter; Ingmar Meijer; Patrick Ruch; Markus Hilger; Steffen Kuhnert; Herbert Huber
In High Performance Computing (HPC), chiller-less cooling has replaced mechanical-chiller-supported cooling for a significant part of the HPC system, resulting in lower cooling costs. Still, other IT components and IT systems remain that require air or cold-water cooling. This work introduces CooLMUC-2, a high-temperature direct-liquid-cooled (HT-DLC) HPC system which uses a heat-recovery scheme to drive an adsorption refrigeration process. Using an adsorption chiller is at least twice as efficient as using a mechanical chiller for producing the needed cold water. To date, this is the only installation of adsorption chillers in a data center combining a Top500 production-level HPC system with adsorption refrigeration. This prototype installation is one more step towards a 100% mechanical-chiller-free data center. After optimization of the operational parameters of the system, the adsorption chillers of CooLMUC-2 consume just over 6 kW of electrical power not only to remove 95 kW of heat from the supercomputer, but also to produce more than 50 kW of cold water. This paper presents initial measurements characterizing the heat-recovery performance of CooLMUC-2 at different operating conditions.
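A back-of-the-envelope check on the figures quoted in this abstract, expressed as a coefficient of performance (COP); this is just arithmetic on the abstract's numbers, not a calculation from the paper:

```python
def cop(cooling_kw, electrical_kw):
    # Coefficient of performance of a chiller: useful cooling
    # delivered per unit of electrical input power.
    return cooling_kw / electrical_kw

# CooLMUC-2 adsorption chillers: more than 50 kW of cold water
# produced from just over 6 kW of electrical input.
print(round(cop(50.0, 6.0), 1))  # roughly 8.3
```

A COP above 8 from waste-heat-driven adsorption illustrates why the authors call the setup at least twice as efficient as a mechanical chiller for producing the needed cold water.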
IEEE International Conference on High Performance Computing, Data, and Analytics | 2012
Herbert Huber; Axel Auweter; Torsten Wilde; Ingmar Meijer; Charles J. Archer; Torsten Bloth; Achim Bomelburg; Steffen Waitz
This presentation explores energy management, liquid cooling, and heat re-use, as well as contract particularities, at LRZ, the Leibniz-Rechenzentrum (Leibniz Supercomputing Centre).
Archive | 2012
Axel Auweter; Dieter Kranzlmüller; Amirreza Tahamtan; A Min Tjoa
Two IT-system control methods that realize efficient battery usage for battery-powered IT systems targeting developing countries such as India are proposed. The proposed methods control the IT equipment and cooling power collaboratively on the basis of a forecast of power-outage duration. To quantitatively evaluate these methods, power outages in Bangalore, India, were measured, and the methods were evaluated using this measured outage data. According to the evaluation results, the proposed methods can improve a measure of battery efficiency, namely the ratio of IT-used energy (Qt) to battery-used energy (Qu), by 39% compared with conventional IT systems.
International Conference on Supercomputing | 2014
Axel Auweter; Arndt Bode; Matthias Brehm; Luigi Brochard; Nicolay Hammer; Herbert Huber; Raj Panda; Francois Thomas; Torsten Wilde
Computer Science - Research and Development | 2014
Torsten Wilde; Axel Auweter; Hayk Shoukourian