Publication


Featured research published by Radim Vavrik.


Digital Systems Design | 2015

Harnessing Performance Variability: A HPC-Oriented Application Scenario

Giuseppe Massari; Simone Libutti; Antoni Portero; Radim Vavrik; Stepan Kuchar; Vít Vondrák; Luca Borghese; William Fornaciari

Technology scaling towards the 10 nm node of silicon manufacturing is going to introduce variability challenges, mainly due to the growing susceptibility to thermal hot-spots and time-dependent variations (aging) in the silicon chip. The consequences are two-fold: a) unpredictable performance, b) unreliable computing resources. The goal of the HARPA project is to enable next-generation embedded and high-performance heterogeneous many-core processors to effectively address these issues through a cross-layer approach involving several components of the system stack. Each component acts at a different level and time granularity. This paper focuses on one of the components of the HARPA stack, the HARPA-OS, showing early results of a first integration step of the HARPA approach in a real High-Performance Computing (HPC) application scenario.
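To make the run-time adaptation idea concrete, here is a minimal C++ sketch, assuming a hypothetical resource-manager interface: the atomic assigned_workers stub stands in for whatever channel the HARPA-OS actually uses, and the application re-reads its assigned worker count each processing cycle and resizes its thread pool accordingly.

```cpp
// Hypothetical sketch: an HPC kernel that periodically re-reads a resource
// assignment (number of worker threads) decided by an external runtime
// resource manager, instead of assuming a fixed core count. All names here
// (assigned_workers, process_chunk) are illustrative, not the HARPA-OS API.
#include <atomic>
#include <cstdio>
#include <thread>
#include <vector>

// In a real integration this value would come from the resource manager
// (e.g. through shared memory or a library call); here it is just a stub.
std::atomic<int> assigned_workers{2};

void process_chunk(int worker_id, int cycle) {
    // Placeholder for the actual numerical kernel of one worker.
    std::printf("cycle %d: worker %d processing\n", cycle, worker_id);
}

int main() {
    for (int cycle = 0; cycle < 3; ++cycle) {
        // Re-check the assignment at every processing cycle so the
        // application follows run-time decisions of the manager.
        int workers = assigned_workers.load();
        std::vector<std::thread> pool;
        for (int w = 0; w < workers; ++w)
            pool.emplace_back(process_chunk, w, cycle);
        for (auto& t : pool) t.join();

        // Simulate the manager shrinking the allocation (e.g. due to a
        // thermal hot-spot detected on some cores).
        if (cycle == 1) assigned_workers.store(1);
    }
    return 0;
}
```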


International Conference on Embedded Computer Systems: Architectures, Modeling and Simulation | 2015

HARPA: Solutions for dependable performance under physically induced performance variability

Dimitrios Rodopoulos; Simone Corbetta; Giuseppe Massari; Simone Libutti; Francky Catthoor; Yiannakis Sazeides; Chrysostomos Nicopoulos; Antoni Portero; Etienne Cappe; Radim Vavrik; Vít Vondrák; Dimitrios Soudris; Federico Sassi; Agnes Fritsch; William Fornaciari

Transistor miniaturization, combined with the dawn of novel switching semiconductor structures, calls for careful examination of the variability and aging of the computer fabric. Time-zero and time-dependent phenomena need to be carefully considered so that the dependability of digital systems can be guaranteed. Already, architectures contain many mechanisms that detect and correct physically induced reliability violations. In many cases, guarantees on functional correctness come at a quantifiable performance cost. The current paper discusses the FP7-612069-HARPA project of the European Commission and its approach towards dependable performance. This project provides solutions for performance variability mitigation, under the run time presence of fabric variability/aging and built-in reliability, availability and serviceability (RAS) techniques. In this paper, we briefly present and discuss modeling and mitigation techniques developed within HARPA, covering many abstractions of digital system design: from the transistor to the application layer.


European Conference on Modelling and Simulation | 2015

Flood Prediction Model Simulation With Heterogeneous Trade-Offs In High Performance Computing Framework.

Antonio Portero; Radim Vavrik; Stepan Kuchar; Martin Golasowski; Vít Vondrák; Simone Libutti; Giuseppe Massari; William Fornaciari

In this paper, we propose a safety-critical system with run-time resource management that is used to operate an application for flood monitoring and prediction. This application can run with different Quality of Service (QoS) levels depending on the current hydrometeorological situation. The system operation can follow two main scenarios: standard or emergency operation. Standard operation is active when no disaster occurs, but the system still executes short-term prediction simulations and monitors the state of the river discharge and precipitation intensity. Emergency operation is active when an emergency situation is detected or predicted by the simulations. The resource allocation can either be used for decreasing power consumption and minimizing the needed resources in standard operation, or for increasing precision and decreasing response times in emergency operation. This paper shows that it is possible to describe different optimal points at design time and use them to adapt to the current quality of service requirements at run time.
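As a concrete illustration of design-time operating points selected at run time, here is a small hedged C++ sketch; the OperatingPoint fields, thresholds, and numeric values are invented for illustration and are not taken from the paper.

```cpp
// Illustrative sketch (not the Floreon+ code): design-time operating points
// that trade resources for simulation precision, selected at run time from
// the detected hydrometeorological situation. Numbers are invented.
#include <cstdio>

struct OperatingPoint {
    const char* name;
    int cores;            // resources requested from the HPC system
    int ensemble_members; // precision knob: more members, better uncertainty estimate
    int period_minutes;   // how often the simulation is re-run
};

// Points that would be characterised at design time.
const OperatingPoint kStandard  {"standard",   4,  20, 60};
const OperatingPoint kEmergency {"emergency", 64, 400,  6};

OperatingPoint select_point(double rain_mm_per_h, double discharge_ratio) {
    // Emergency operation when precipitation or discharge crosses a
    // (hypothetical) warning threshold; standard operation otherwise.
    bool emergency = rain_mm_per_h > 20.0 || discharge_ratio > 0.8;
    return emergency ? kEmergency : kStandard;
}

int main() {
    OperatingPoint op = select_point(35.0, 0.4);
    std::printf("mode=%s cores=%d members=%d period=%dmin\n",
                op.name, op.cores, op.ensemble_members, op.period_minutes);
    return 0;
}
```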


Proceedings of the International Conference on Numerical Analysis and Applied Mathematics 2014 (ICNAAM-2014) | 2015

Automatic calibration of rainfall-runoff models and its parallelization strategies

Radim Vavrik; Matyáš Theuer; Martin Golasowski; Stepan Kuchar; Michal Podhoranyi; Vít Vondrák

For successful decision making in disaster management, it is necessary to have very accurate information about disaster phenomena and their potential development in time. Rainfall-runoff simulations are an integral part of flood warning and decision making processes. To increase their accuracy, it is crucial to periodically update their parameters in a calibration process. Since calibration is a very time-consuming process, an HPC facility is a convenient tool for its speed-up. However, the required speed-up can be achieved only by avoiding any human-computer interaction, in a so-called automatic calibration. In order to compare the possibilities and efficiency of automatic calibration, three different fully automatic parallel implementation strategies were created and tested with our in-house rainfall-runoff model.
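The following hedged C++ sketch illustrates one possible parallelization strategy of this kind: candidate parameter sets are evaluated concurrently and the one with the lowest error against observed discharge is kept. The toy one-parameter model and the sum-of-squares metric are placeholders, not the in-house rainfall-runoff model.

```cpp
// Minimal sketch of parallel automatic calibration: evaluate many candidate
// parameter sets concurrently and keep the one with the lowest error against
// observed discharge. Model, metric, and data are illustrative only.
#include <cstdio>
#include <future>
#include <vector>

struct Params { double k; };  // single illustrative model parameter

// Toy "model": the observations below pretend the true k is 0.5.
double simulate(const Params& p, double rainfall) { return p.k * rainfall; }

double sum_squared_error(const Params& p,
                         const std::vector<double>& rain,
                         const std::vector<double>& observed) {
    double sse = 0.0;
    for (size_t i = 0; i < rain.size(); ++i) {
        double diff = simulate(p, rain[i]) - observed[i];
        sse += diff * diff;
    }
    return sse;
}

int main() {
    std::vector<double> rain{1.0, 2.0, 4.0};
    std::vector<double> observed{0.5, 1.0, 2.0};  // consistent with k = 0.5

    // Candidate parameter sets evaluated fully in parallel; calibration is
    // embarrassingly parallel, which is why it maps well onto an HPC cluster.
    std::vector<Params> candidates{{0.1}, {0.3}, {0.5}, {0.7}, {0.9}};
    std::vector<std::future<double>> scores;
    for (const Params& c : candidates)
        scores.push_back(std::async(std::launch::async,
                                    sum_squared_error, c, rain, observed));

    size_t best = 0;
    double best_sse = scores[0].get();
    for (size_t i = 1; i < scores.size(); ++i) {
        double sse = scores[i].get();
        if (sse < best_sse) { best_sse = sse; best = i; }
    }
    std::printf("best k = %.1f (SSE = %.3f)\n", candidates[best].k, best_sse);
    return 0;
}
```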


IOP Conference Series: Earth and Environmental Science | 2016

Dynamic computing resource allocation in online flood monitoring and prediction

Stepan Kuchar; Michal Podhoranyi; Radim Vavrik; Antoni Portero

This paper presents tools and methodologies for dynamic allocation of high performance computing resources during operation of the Floreon+ online flood monitoring and prediction system. The resource allocation is done throughout the execution of supported simulations to meet the required service quality levels for system operation. It also ensures flexible reactions to changing weather and flood situations, as it is not economically feasible to operate online flood monitoring systems in the full performance mode during non-flood seasons. Different service quality levels are therefore described for different flooding scenarios, and the runtime manager controls them by allocating only minimal resources currently expected to meet the deadlines. Finally, an experiment covering all presented aspects of computing resource allocation in rainfall-runoff and Monte Carlo uncertainty simulation is performed for the area of the Moravian-Silesian region in the Czech Republic.
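A minimal sketch of the underlying scheduling arithmetic, assuming near-linear scaling of independent Monte Carlo samples over cores; all runtimes, sample counts, and deadlines are invented for illustration.

```cpp
// Hedged sketch: allocate only as many cores as are expected to finish the
// Monte Carlo uncertainty simulation before its deadline, based on a
// measured per-sample runtime. Figures are invented.
#include <algorithm>
#include <cmath>
#include <cstdio>

// Assumes near-linear scaling of independent Monte Carlo samples over cores.
int minimal_cores(int samples, double seconds_per_sample,
                  double deadline_seconds, int max_cores) {
    double total_work = samples * seconds_per_sample;
    int cores = static_cast<int>(std::ceil(total_work / deadline_seconds));
    return std::min(std::max(cores, 1), max_cores);
}

int main() {
    // Non-flood season: few samples, relaxed deadline -> small allocation.
    std::printf("standard:  %d cores\n",
                minimal_cores(100, 12.0, 3600.0, 512));
    // Flood emergency: many samples, tight deadline -> larger allocation.
    std::printf("emergency: %d cores\n",
                minimal_cores(2000, 12.0, 600.0, 512));
    return 0;
}
```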


International Green and Sustainable Computing Conference | 2016

Using an adaptive and time predictable runtime system for power-aware HPC-oriented applications

Antoni Portero; Jiri Sevcik; Martin Golasowski; Radim Vavrik; Simone Libutti; Giuseppe Massari; Francky Catthoor; William Fornaciari; Vít Vondrák

An increasing number of high-performance applications demand some form of time predictability, in particular in scenarios where correctness depends on both performance and timing requirements, and failure to meet either of them is critical. Consequently, a more predictable HPC system is required, particularly for an emerging class of adaptive real-time HPC applications. Here we present our runtime approach, which produces results in predictable time with a minimized allocation of hardware resources. The paper describes the advantages in terms of execution-time reliability and the trade-offs regarding power/energy consumption and temperature of the system compared with the current GNU/Linux governors.
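To illustrate the general idea (not the runtime system itself), the sketch below picks the lowest CPU frequency whose predicted runtime still meets the deadline, which is where the power and energy savings over a throughput-oriented governor would come from. Frequencies, power figures, and cycle counts are hypothetical.

```cpp
// Illustrative sketch of the "time-predictable, power-aware" idea: choose the
// lowest frequency at which the measured workload still meets its deadline,
// instead of letting a generic governor decide. All values are hypothetical.
#include <cstdio>
#include <vector>

struct FreqLevel { double ghz; double watts; };  // illustrative power figures

double predicted_runtime(double cycles, double ghz) {
    return cycles / (ghz * 1e9);  // seconds, assuming a CPU-bound kernel
}

int main() {
    std::vector<FreqLevel> levels{{1.2, 35}, {1.8, 55}, {2.4, 80}, {3.0, 110}};
    double cycles = 3.0e11;     // measured work of one simulation step
    double deadline = 150.0;    // seconds

    // Scan from the lowest frequency upwards; stop at the first one that fits.
    for (const FreqLevel& f : levels) {
        double t = predicted_runtime(cycles, f.ghz);
        if (t <= deadline) {
            std::printf("selected %.1f GHz (%.0f s predicted, ~%.0f W)\n",
                        f.ghz, t, f.watts);
            return 0;
        }
    }
    std::printf("no frequency meets the deadline; request more cores\n");
    return 0;
}
```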


Proceedings of the International Conference on Numerical Analysis and Applied Mathematics 2014 (ICNAAM-2014) | 2015

Framework for scheduling and resource management in time-constrained HPC application

Antoni Portero; Stepan Kuchar; Radim Vavrik; Martin Golasowski; Giuseppe Massari; William Fornaciari; Vít Vondrák

Silicon technology continues to scale down following Moore's law. Device variability increases due to a loss of controllability during silicon chip fabrication. Current methodologies based on error detection and thread re-execution (rollback) may no longer be sufficient when the number of errors grows and reaches a threshold. This dynamic scenario can be very harmful when executing programs on HPC systems where a correct, accurate, and time-constrained solution is expected. The objective of this paper is to show preliminary results of the Barbeque Open Source Project (BOSP) and its potential use in HPC systems.


Archive | 2019

Floreon+ Modules: A Real-World HARPA Application in the High-End HPC System Domain

Antoni Portero; Radim Vavrik; Martin Golasowski; Jiri Sevcik; Giuseppe Massari; Simone Libutti; William Fornaciari; Stepan Kuchar; Vít Vondrák

This chapter is centered around uncertainty computation with on-demand resource allocation for run-off prediction in a High-Performance Computing (HPC) environment. Our research builds on a runtime operating system that automatically adapts resource allocation to the computation in order to provide precise outcomes before the time deadline. In our case, input data come from several gauging stations, and when newly updated data arrive, the models must be re-executed to provide accurate results immediately. Since the models run continuously (24/7), their computational demand differs across hydrological events (e.g. periods with heavy rain and periods without any rain), and computational resources therefore have to be balanced according to event severity. Although these models should run constantly, they are very computationally demanding only during discrete periods of time, for example in the case of heavy rain, when the accuracy of the results must be as close as possible to reality. The work relies on the HARPA runtime resource manager, which adapts resource allocation to the runtime-variable performance demand of applications. The resource assignment is temperature-aware: the application execution is dynamically migrated to the coolest cores, which has a positive impact on system reliability.
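A minimal sketch of a temperature-aware placement policy of the kind described above: among the available cores, the coolest ones are selected for the next execution phase. Core IDs and temperatures are made up.

```cpp
// Simple sketch of temperature-aware core selection: pick the coolest cores
// for the next execution phase. Not the HARPA resource manager itself.
#include <algorithm>
#include <cstdio>
#include <vector>

struct Core { int id; double temp_celsius; };

// Returns the IDs of the `count` coolest cores.
std::vector<int> coolest_cores(std::vector<Core> cores, size_t count) {
    std::sort(cores.begin(), cores.end(),
              [](const Core& a, const Core& b) {
                  return a.temp_celsius < b.temp_celsius;
              });
    std::vector<int> ids;
    for (size_t i = 0; i < count && i < cores.size(); ++i)
        ids.push_back(cores[i].id);
    return ids;
}

int main() {
    std::vector<Core> cores{{0, 71.5}, {1, 58.0}, {2, 64.2}, {3, 55.3}};
    for (int id : coolest_cores(cores, 2))
        std::printf("migrate work to core %d\n", id);  // prints cores 3 and 1
    return 0;
}
```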


Archive | 2019

Evaluating System-Level Monitors and Knobs on Real Hardware

Panagiota Nikolaou; Zacharias Hadjilambrou; Panayiotis Englezakis; Lorena Ndreu; Chrysostomos Nicopoulos; Yiannakis Sazeides; Antoni Portero; Radim Vavrik; Vít Vondrák

This chapter evaluates and defines a methodology for the oracle selection of the monitors and knobs to use when configuring an HPC system running a scientific application, while satisfying the application's requirements and not violating any system constraints. The methodology relies on a heuristic correlation analysis between requirements, monitors, and knobs to determine the minimum subset of monitors to observe and knobs to explore in order to find the optimal system configuration for the HPC application. The setup under examination is the Floreon+ application running on IT4I's cluster. At the end of this analysis, the eleven-dimensional space for monitors was reduced to a three-dimensional space, and the six-dimensional space for knobs was reduced to a three-dimensional space. This reduction shows the potential of the approach and highlights the need for a realistic methodology to help identify such a minimum set of monitors and knobs. Additionally, a characterization of the application is provided, showing that Floreon+ performance requirements are satisfied with a CPU frequency lower than the servers' nominal CPU frequency.
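The hedged sketch below shows the flavour of a correlation-based reduction: monitors whose readings correlate weakly with a requirement metric (here, execution time) are dropped. The data and the 0.7 threshold are illustrative; the chapter's actual heuristic may differ.

```cpp
// Hedged sketch of correlation-based monitor reduction: keep only monitors
// whose readings correlate strongly with the requirement metric.
#include <cmath>
#include <cstdio>
#include <vector>

double pearson(const std::vector<double>& x, const std::vector<double>& y) {
    double n = static_cast<double>(x.size());
    double sx = 0, sy = 0, sxx = 0, syy = 0, sxy = 0;
    for (size_t i = 0; i < x.size(); ++i) {
        sx += x[i]; sy += y[i];
        sxx += x[i] * x[i]; syy += y[i] * y[i]; sxy += x[i] * y[i];
    }
    double num = n * sxy - sx * sy;
    double den = std::sqrt(n * sxx - sx * sx) * std::sqrt(n * syy - sy * sy);
    return den == 0 ? 0 : num / den;
}

int main() {
    // One entry per experiment: a requirement metric and two candidate monitors.
    std::vector<double> exec_time{120, 95, 150, 110};
    std::vector<double> cpu_power{180, 150, 210, 170};    // tracks exec_time
    std::vector<double> fan_speed{3050, 2990, 3010, 2950}; // weakly related here

    struct Monitor { const char* name; const std::vector<double>* data; };
    std::vector<Monitor> monitors{{"cpu_power", &cpu_power},
                                  {"fan_speed", &fan_speed}};
    for (const Monitor& m : monitors) {
        double r = pearson(*m.data, exec_time);
        std::printf("%s: |r| = %.2f -> %s\n", m.name, std::fabs(r),
                    std::fabs(r) >= 0.7 ? "keep" : "drop");
    }
    return 0;
}
```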


Archive | 2019

The HARPA Approach to Ensure Dependable Performance

Nikolaos Zompakis; Michail Noltsis; Panagiota Nikolaou; Panayiotis Englezakis; Zacharias Hadjilambrou; Lorena Ndreu; Giuseppe Massari; Simone Libutti; Antoni Portero; Federico Sassi; Alessandro Bacchini; Chrysostomos Nicopoulos; Yiannakis Sazeides; Radim Vavrik; Martin Golasowski; Jiri Sevcik; Stepan Kuchar; Vít Vondrák; Fritsch Agnes; Hans Cappelle; Francky Catthoor; William Fornaciari; Dimitrios Soudris

The goal of the HARPA solution is to overcome performance variability (PV) by enabling next-generation embedded and high-performance platforms using heterogeneous many-core processors to provide cost-effectively dependable performance: the correct functionality and (where needed) timing guarantees throughout the expected lifetime of a platform. This must be accomplished in the presence of cycle-by-cycle performance variability due to time-dependent variations in silicon devices and wires, under thermal, power, and energy constraints. The common challenge for both embedded and high-performance systems is to harness the unsustainable increases in design and operational margins and yet provide dependable performance; examples of such margins are resources that are statically allocated based on worst-case execution time in real-time applications, or clock frequencies that are lowered to satisfy excessive timing margins in high-performance processors.

Collaboration


Dive into Radim Vavrik's collaboration.

Top Co-Authors

Vít Vondrák (Technical University of Ostrava)
Antoni Portero (Technical University of Ostrava)
Martin Golasowski (Technical University of Ostrava)
Stepan Kuchar (Technical University of Ostrava)
Jiri Sevcik (Technical University of Ostrava)
Michal Podhoranyi (Technical University of Ostrava)