
Publication


Featured research published by Vít Vondrák.


Advances in Engineering Software | 2013

Total FETI domain decomposition method and its massively parallel implementation

Tomáš Kozubek; Vít Vondrák; M. Menšík; David Horák; Zdeněk Dostál; Václav Hapla; P. Kabelíková; M. Čermák

We describe an efficient massively parallel implementation of our variant of the FETI-type domain decomposition method, called Total FETI, with a lumped preconditioner. Special attention is paid to several variants of parallelizing the action of the projections onto the natural coarse grid and to the effective regularization of the stiffness matrices of the subdomains. Both numerical and parallel scalability of the proposed TFETI method are demonstrated on a 2D elastostatic benchmark with up to 314,505,600 unknowns and 4800 cores. The results are also important for the implementation of scalable algorithms for the solution of nonlinear contact problems of elasticity by the TFETI-based domain decomposition method.
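The projection onto the natural coarse grid mentioned above can be sketched in a few lines. This is an illustrative NumPy sketch, not the paper's implementation: the matrix G, the problem sizes, and the function name are assumptions; in TFETI the rows of G would span the subdomain rigid-body modes, and the distribution of the small coarse solve is exactly what the paper's parallelization variants address.

```python
import numpy as np

# Hypothetical sketch of the projector onto the "natural coarse grid"
# used in (T)FETI methods: P = I - G^T (G G^T)^{-1} G.
# G and the dimensions below are illustrative stand-ins.
rng = np.random.default_rng(0)
m, n = 3, 12            # few coarse constraints, many dual unknowns
G = rng.standard_normal((m, n))

def apply_projection(G, x):
    """Apply P x = x - G^T (G G^T)^{-1} G x without forming P explicitly.

    In a parallel implementation the action of (G G^T)^{-1} is the part
    that admits several parallelization variants; here we simply solve
    the small coarse system directly.
    """
    coarse = G @ G.T                      # small m x m coarse matrix
    y = np.linalg.solve(coarse, G @ x)    # coarse solve
    return x - G.T @ y

x = rng.standard_normal(n)
Px = apply_projection(G, x)
# The projected vector lies in the null space of G ...
assert np.allclose(G @ Px, 0.0)
# ... and the projector is idempotent: P(Px) = Px.
assert np.allclose(apply_projection(G, Px), Px)
```

Keeping P implicit matters at scale: forming the dense n-by-n projector for hundreds of millions of dual unknowns would be infeasible, while the coarse matrix stays small.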


Archive | 2008

Scalable FETI Algorithms for Frictionless Contact Problems

Zdeněk Dostál; Vít Vondrák; David Horák; Charbel Farhat; Philip Avery

1 Department of Applied Mathematics, Faculty of Electrical Engineering and Computer Science, VSB-Technical University of Ostrava, and Center of Intelligent Systems and Structures, CISS Institute of Thermomechanics AVCR, 17. listopadu 15, Ostrava-Poruba, 708 33, Czech Republic. {zdenek.dostal,vit.vondrak,david.horak}@vsb.cz 2 Stanford University, Department of Mechanical Engineering and Institute for Computational and Mathematical Engineering, Stanford, CA 94305, USA. {cfarhat,pavery}@stanford.edu


Computer Information Systems and Industrial Management Applications | 2008

A Description of a Highly Modular System for the Emergent Flood Prediction

Ivo Vondrák; Jan Martinovič; Jan Kozusznik; Svatopluk Štolfa; Tomáš Kozubek; Petr Kubicek; Vít Vondrák; Jan Unucka

The main goal of our system is to provide the end user with information about an approaching disaster. The concept is to ensure access to adequate data for all potential users, including citizens, local mayors, governments, and specialists, within one system. There is an obvious knowledge gap between lay users and specialists, so the system must be able to provide this information in a simple format for less informed users while offering more complete information, with computation adjustment and parameterization options, to more qualified users. An important feature is the open structure and modular architecture, which enables the use of different modules. Modules can contain different functions, alternative simulations, or additional features. Since the architectural structure is open, modules can be combined in any way to achieve any desired function in the system. One of the important modules is our own analytic solution of flood waves for a small basin.


European Conference on Modelling and Simulation | 2010

Multiple Scenarios Computing In The Flood Prediction System FLOREON.

Jan Martinovič; Stepan Kuchar; Ivo Vondrák; Vít Vondrák; Boris Nir; Jan Unucka

Floods are the most frequent natural disasters affecting the Moravian-Silesian region, so a system that could predict flood extent and help with operative disaster management was needed. The FLOREON system was created to fulfil these requirements. This article describes the use of HPC (high-performance computing) to run multiple hydrometeorological simulations concurrently in the FLOREON system in order to predict upcoming floods and warn against them. These predictions are based on data inputs from numerical weather forecast systems (NWFS, e.g. ALADIN) that are then used to run the rainfall-runoff and hydrodynamic models. Preliminary results of these experiments are presented in this article.


Archive | 2016

Contact Problems with Friction

Zdeněk Dostál; Tomáš Kozubek; Vít Vondrák

Contact problems become much more complicated when friction is taken into account, even when we restrict our attention to 3D problems of linear elasticity. The problems start with the formulation of friction laws, which are phenomenological in nature. The most popular friction law, the Coulomb law of friction, makes the problem intrinsically non-convex.


Digital Systems Design | 2015

Harnessing Performance Variability: A HPC-Oriented Application Scenario

Giuseppe Massari; Simone Libutti; Antoni Portero; Radim Vavrik; Stepan Kuchar; Vít Vondrák; Luca Borghese; William Fornaciari

Technology scaling towards 10 nm silicon manufacturing is going to introduce variability challenges, mainly due to the growing susceptibility to thermal hot-spots and time-dependent variations (aging) in the silicon chip. The consequences are two-fold: a) unpredictable performance, b) unreliable computing resources. The goal of the HARPA project is to enable next-generation embedded and high-performance heterogeneous many-core processors to effectively address these issues through a cross-layer approach involving several components of the system stack, each acting at a different level and time granularity. This paper focuses on one of the components of the HARPA stack, the HARPA-OS, showing early results of a first integration step of the HARPA approach in a real High-Performance Computing (HPC) application scenario.


Advances in Engineering Software | 2017

Intel Xeon Phi acceleration of Hybrid Total FETI solver

Michal Merta; Lubomir Riha; Ondrej Meca; Alexandros Markopoulos; Tomas Brzobohaty; Tomáš Kozubek; Vít Vondrák

This paper describes an approach for accelerating the Hybrid Total FETI (HTFETI) domain decomposition method using Intel Xeon Phi coprocessors. The HTFETI method is a memory-bound algorithm which uses sparse linear BLAS operations with irregular memory access patterns. The presented local Schur complement (LSC) method has a regular memory access pattern, which allows the solver to fully utilize the fast memory bandwidth of the Intel Xeon Phi. This translates to a speedup of over 10.9 for the HTFETI iterative solver when solving a heat transfer problem (3D Laplace equation) with 3 billion unknowns on almost 400 compute nodes. The comparison is between CPU computation using sparse data structures (the PARDISO sparse direct solver) and LSC computation on the Xeon Phi. For a structural mechanics problem (3D linear elasticity) with 1 billion DOFs, the respective speedup is 3.4. The presented speedups are asymptotic and are reached for problems requiring a high number of iterations (e.g., ill-conditioned, transient, or contact problems). For problems that can be solved in under a hundred iterations, the local Schur complement method is not optimal; for these cases we have also implemented sparse matrix processing using PARDISO on the Xeon Phi accelerators.
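The local Schur complement trade-off described above can be illustrated with a small sketch. This is not the HTFETI code; the matrix partitioning and sizes are invented for illustration. The point is that one up-front elimination of interior unknowns turns the per-iteration work into a dense matrix-vector product with regular memory access, which is what makes the method accelerator-friendly.

```python
import numpy as np

# Illustrative sketch of the local Schur complement idea: eliminate
# interior DOFs once, so each solver iteration only needs a dense
# matrix-vector product with regular memory access.
rng = np.random.default_rng(1)
ni, nb = 8, 4                              # interior / boundary DOF counts (toy sizes)
A = rng.standard_normal((ni + nb, ni + nb))
K = A @ A.T + (ni + nb) * np.eye(ni + nb)  # SPD stiffness-like matrix

Kii = K[:ni, :ni]; Kib = K[:ni, ni:]
Kbi = K[ni:, :ni]; Kbb = K[ni:, ni:]

# One-time setup: dense local Schur complement S = Kbb - Kbi Kii^{-1} Kib.
S = Kbb - Kbi @ np.linalg.solve(Kii, Kib)

# Per-iteration action on boundary data is a single dense GEMV ...
xb = rng.standard_normal(nb)
y_dense = S @ xb
# ... equivalent to the sparse "interior solve + multiply" path it replaces.
y_sparse = Kbb @ xb - Kbi @ np.linalg.solve(Kii, Kib @ xb)
assert np.allclose(y_dense, y_sparse)
```

The trade-off matches the abstract: the dense S costs more to form and store, so it only pays off when the iteration count is high enough to amortize the setup.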


International Conference on Embedded Computer Systems Architectures Modeling and Simulation | 2015

HARPA: Solutions for dependable performance under physically induced performance variability

Dimitrios Rodopoulos; Simone Corbetta; Giuseppe Massari; Simone Libutti; Francky Catthoor; Yiannakis Sazeides; Chrysostomos Nicopoulos; Antoni Portero; Etienne Cappe; Radim Vavrik; Vít Vondrák; Dimitrios Soudris; Federico Sassi; Agnes Fritsch; William Fornaciari

Transistor miniaturization, combined with the dawn of novel switching semiconductor structures, calls for careful examination of the variability and aging of the computer fabric. Time-zero and time-dependent phenomena need to be carefully considered so that the dependability of digital systems can be guaranteed. Already, architectures contain many mechanisms that detect and correct physically induced reliability violations. In many cases, guarantees on functional correctness come at a quantifiable performance cost. The current paper discusses the FP7-612069-HARPA project of the European Commission and its approach towards dependable performance. This project provides solutions for performance variability mitigation, under the run time presence of fabric variability/aging and built-in reliability, availability and serviceability (RAS) techniques. In this paper, we briefly present and discuss modeling and mitigation techniques developed within HARPA, covering many abstractions of digital system design: from the transistor to the application layer.


European Conference on Modelling and Simulation | 2015

Flood Prediction Model Simulation With Heterogeneous Trade-Offs In High Performance Computing Framework.

Antonio Portero; Radim Vavrik; Stepan Kuchar; Martin Golasowski; Vít Vondrák; Simone Libutti; Giuseppe Massari; William Fornaciari

In this paper, we propose a safety-critical system with run-time resource management that is used to operate an application for flood monitoring and prediction. This application can run at different Quality of Service (QoS) levels depending on the current hydrometeorological situation. The system operation can follow two main scenarios: standard or emergency operation. Standard operation is active when no disaster occurs, but the system still executes short-term prediction simulations and monitors the state of the river discharge and precipitation intensity. Emergency operation is active when an emergency situation is detected or predicted by the simulations. Resource allocation can be used either for decreasing power consumption and minimizing needed resources in standard operation, or for increasing precision and decreasing response times in emergency operation. This paper shows that it is possible to describe different optimal points at design time and use them to adapt to the current quality of service requirements at run-time.
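The two-scenario operation can be sketched as a simple mode selector. This is a hypothetical illustration: the threshold values, field names, and QoS knobs below are invented, not taken from the FLOREON system.

```python
# Hypothetical sketch of standard vs. emergency operation:
# all thresholds and QoS settings are illustrative assumptions.
WARNING_DISCHARGE = 100.0   # m^3/s, assumed alert threshold
WARNING_RAIN = 20.0         # mm/h, assumed alert threshold

def select_scenario(discharge, rain_intensity):
    """Pick an operating mode and QoS level from current measurements."""
    if discharge > WARNING_DISCHARGE or rain_intensity > WARNING_RAIN:
        # Emergency: maximize precision, minimize response time.
        return {"mode": "emergency", "resolution": "high", "ensemble": 16}
    # Standard: minimize power consumption and allocated resources.
    return {"mode": "standard", "resolution": "low", "ensemble": 4}

assert select_scenario(150.0, 5.0)["mode"] == "emergency"
assert select_scenario(20.0, 2.0)["mode"] == "standard"
```

The design-time "optimal points" of the paper correspond to the two returned configurations: each fixes a resource/precision trade-off that the run-time manager can switch between.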


Proceedings of the International Conference on Numerical Analysis and Applied Mathematics 2014 (ICNAAM-2014) | 2015

Automatic calibration of rainfall-runoff models and its parallelization strategies

Radim Vavrik; Matyáš Theuer; Martin Golasowski; Stepan Kuchar; Michal Podhoranyi; Vít Vondrák

For successful decision making in disaster management, it is necessary to have very accurate information about a disaster phenomenon and its potential development in time. Rainfall-runoff simulations are an integral part of flood warning and decision-making processes. To increase their accuracy, it is crucial to periodically update their parameters in a calibration process. Since calibration is a very time-consuming process, an HPC facility is a convenient tool for its speed-up. However, the required speed-up can be achieved only by avoiding any human-computer interaction, in so-called automatic calibration. In order to compare the possibilities and efficiency of automatic calibration, three different fully automatic parallel implementation strategies were created and tested with our in-house rainfall-runoff model.
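The core structure of automatic calibration can be sketched as a parallel parameter search. This is an illustrative toy, not the authors' in-house rainfall-runoff model or any of their three strategies: the model, the objective, and the parameter range are all invented for the sketch.

```python
import numpy as np

# Hypothetical sketch of automatic calibration as a map/reduce parameter
# search; the one-parameter "model" and the observations are toy data.
observed = np.array([0.0, 1.2, 2.8, 2.1, 1.0])   # assumed observed discharge

def simulate(k):
    """Toy stand-in for a rainfall-runoff model with one parameter k."""
    t = np.arange(len(observed))
    return k * t * np.exp(-0.5 * t)

def error(k):
    """Calibration objective: sum of squared errors vs. observations."""
    return float(np.sum((simulate(k) - observed) ** 2))

# Each candidate parameter set is an independent simulation, so the map
# step below is embarrassingly parallel and can be distributed across
# HPC nodes (e.g. one worker process or MPI rank per candidate).
candidates = np.linspace(0.1, 5.0, 50)
errors = [error(k) for k in candidates]          # map step (parallelizable)
best = candidates[int(np.argmin(errors))]        # reduce step
assert error(best) == min(errors)
```

Because simulations are independent, the strategies compared in the paper differ mainly in how this map step is scheduled, which is why removing human interaction is the prerequisite for any of them.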

Collaboration

Dive into Vít Vondrák's collaboration.

Top co-authors (all at Technical University of Ostrava):

- Zdeněk Dostál
- Radim Vavrik
- Tomáš Kozubek
- Antoni Portero
- Stepan Kuchar
- Martin Golasowski
- Alexandros Markopoulos