Vladimir Voevodin
Moscow State University
Publications
Featured research published by Vladimir Voevodin.
Biotechnology Journal | 2015
Dmitry Suplatov; Vladimir Voevodin; Vytas K. Švedas
The ability of proteins and enzymes to maintain a functionally active conformation under adverse environmental conditions is an important feature of biocatalysts, vaccines, and biopharmaceutical proteins. From an evolutionary perspective, robust stability of proteins improves their biological fitness and allows for further optimization. Viewed from an industrial perspective, enzyme stability is crucial for the practical application of enzymes under the required reaction conditions. In this review, we analyze bioinformatics-driven strategies that are used to predict structural changes that can be applied to wild-type proteins in order to produce more stable variants. The most commonly employed techniques can be classified into stochastic approaches, empirical or systematic rational design strategies, and design of chimeric proteins. We conclude that bioinformatic analysis can be efficiently used to study large protein superfamilies systematically as well as to predict particular structural changes which increase enzyme stability. Evolution has created a diversity of protein properties that are encoded in genomic sequences and structural data. Bioinformatics has the power to uncover this evolutionary code and provide a reproducible selection of hotspots: key residues to be mutated in order to produce more stable and functionally diverse proteins and enzymes. Further development of systematic bioinformatic procedures is needed to organize and analyze sequences and structures of proteins within large superfamilies and to link them to function, as well as to provide knowledge-based predictions for experimental evaluation.
IEEE International Conference on High Performance Computing, Data and Analytics | 2013
Bernd Mohr; Vladimir Voevodin; Judit Gimenez; Erik Hagersten; Andreas Knüpfer; Dmitry A. Nikitenko; Mats Nilsson; Harald Servat; Aamer Shah; Frank Winkler; Felix Wolf; Ilya Zhukov
To maximise the scientific output of a high-performance computing system, different stakeholders pursue different strategies. While individual application developers are trying to shorten the time to solution by optimising their codes, system administrators are tuning the configuration of the overall system to increase its throughput. Yet, the complexity of today’s machines with their strong interrelationship between application and system performance presents serious challenges to achieving these goals. The HOPSA project (HOlistic Performance System Analysis) therefore sets out to create an integrated diagnostic infrastructure for combined application and system-level tuning – with the former provided by the EU and the latter by the Russian project partners. Starting from system-wide basic performance screening of individual jobs, an automated workflow routes findings on potential bottlenecks either to application developers or system administrators with recommendations on how to identify their root cause using more powerful diagnostic tools. Developers can choose from a variety of mature performance-analysis tools developed by our consortium. Within this project, the tools will be further integrated and enhanced with respect to scalability, depth of analysis, and support for asynchronous tasking, a node-level paradigm playing an increasingly important role in hybrid programs on emerging hierarchical and heterogeneous systems.
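To make the routing step described above concrete, here is a toy sketch in Python; the metric names, the system-level categories, and the threshold are assumptions chosen for illustration and are not taken from the HOPSA specification.

```python
# A toy sketch of the routing idea (categories and metrics assumed for
# illustration): findings from system-wide screening are forwarded either
# to the application developer or to the system administrators.
def route_finding(metric, value, threshold):
    """Return who should investigate a screening finding, or None if no bottleneck."""
    system_level = {"filesystem_contention", "network_congestion", "node_failure"}
    if value <= threshold:
        return None                      # no bottleneck detected
    return "system administrators" if metric in system_level else "application developer"

print(route_finding("mpi_wait_fraction", 0.45, 0.25))      # -> application developer
print(route_finding("filesystem_contention", 0.60, 0.25))  # -> system administrators
```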
Advances in Bioinformatics | 2015
Igor V. Oferkin; Ekaterina V. Katkova; Alexey V. Sulimov; Danil C. Kutov; Sergey Sobolev; Vladimir Voevodin; Vladimir B. Sulimov
The adequate choice of the docking target function impacts the accuracy of ligand positioning as well as the accuracy of the protein-ligand binding energy calculation. To evaluate a docking target function, we compared the positions of its minima with the experimentally known pose of the ligand in the protein active site. We evaluated five docking target functions based on either the MMFF94 force field or the PM7 quantum-chemical method, with or without the implicit solvent models PCM, COSMO, and SGB. Each function was tested on the same set of 16 protein-ligand complexes. For the exhaustive search of low-energy minima, the novel MPI-parallelized docking program FLM and large supercomputer resources were used. Protein-ligand binding energies calculated from the low-energy minima were compared with experimental values. It was demonstrated that the docking target function based on the MMFF94 force field in vacuo can be used to discover native or near-native ligand positions by finding the low-energy local minima spectrum of the target function. The importance of solute-solvent interaction for correct ligand positioning is demonstrated. It is shown that docking accuracy can be improved by replacing the MMFF94 force field with the new semiempirical quantum-chemical PM7 method.
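The evaluation described above rests on two simple quantities: the deviation of a predicted pose from the experimental one and a binding-energy estimate from the target-function energies. The following is a minimal illustrative sketch (not the FLM code); the coordinates, energies, and the 2 Å near-native cutoff are assumptions for the example.

```python
# Hypothetical sketch: comparing a target-function minimum with the
# crystallographic ligand pose and forming a crude binding-energy estimate.
import numpy as np

def pose_rmsd(coords_predicted, coords_experimental):
    """Heavy-atom RMSD (angstroms) between a docked pose and the experimental pose."""
    diff = coords_predicted - coords_experimental
    return float(np.sqrt((diff ** 2).sum(axis=1).mean()))

def binding_energy(e_complex, e_protein, e_ligand):
    """Binding-energy estimate from the energies of the complex and the
    separated partners, all computed with the same target function."""
    return e_complex - (e_protein + e_ligand)

# A minimum is often counted as "near native" if its RMSD to the experimental
# pose is below ~2 angstroms (threshold assumed here, not from the paper).
predicted = np.random.rand(30, 3) * 10      # placeholder ligand coordinates
experimental = predicted + 0.5              # placeholder reference pose
print(pose_rmsd(predicted, experimental))
print(binding_energy(-1250.0, -1100.0, -120.0))
```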
Parallel, Distributed and Network-Based Processing | 2016
Alexander Antonov; Vadim Voevodin; Vladimir Voevodin; Alexey Teplov
The AlgoWiki open encyclopedia of parallel algorithmic features enables the entire computing community to work together to describe the properties of a multitude of mathematical algorithms and their implementations for various software and hardware platforms. As part of the AlgoWiki project, a structure has been suggested for providing universal descriptions of algorithm properties. Along with the first part of the description, dedicated to machine-independent properties of the algorithms, it is extremely important to study and describe the dynamic characteristics of their software implementations. By studying fundamental properties such as execution time, performance, data locality, efficiency and scalability, we can estimate the potential implementation quality for a given algorithm on a specific computer and lay the foundation for comparative analysis of various computing platforms with regard to the algorithms presented in AlgoWiki.
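Two of the dynamic characteristics mentioned above, speedup and parallel efficiency, follow directly from measured execution times. The sketch below illustrates the arithmetic; it is not AlgoWiki code, and the timings are hypothetical.

```python
# Illustrative sketch: deriving speedup and parallel efficiency from
# measured execution times of an implementation.

def speedup(t_serial, t_parallel):
    return t_serial / t_parallel

def efficiency(t_serial, t_parallel, n_processes):
    return speedup(t_serial, t_parallel) / n_processes

# Hypothetical timings (seconds) for 1, 8, 64 and 512 processes.
timings = {1: 1024.0, 8: 140.0, 64: 21.0, 512: 4.8}
for p, t in timings.items():
    print(f"p={p:4d}  speedup={speedup(timings[1], t):7.1f}  "
          f"efficiency={efficiency(timings[1], t, p):5.2f}")
```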
International Conference on Supercomputing | 2015
Vladimir Voevodin; Vadim Voevodin
Making efficient use of all the opportunities offered by modern computing systems is a global challenge. To address it, we need to move in two directions simultaneously. First, the higher education system must change, with parallel computing adopted as a core idea across all curricula and courses. Second, software tools and systems must be developed that can reveal the root causes of poor application performance and evaluate the efficiency of supercomputer centers over a large task flow. We try to combine both directions within the supercomputer center of Moscow State University. In this article we focus on the wide dissemination of supercomputing education as a prerequisite for the efficient use of supercomputer systems today and in the near future, and describe the results we have achieved so far in this area.
International Conference on Cluster Computing | 2013
Aamer Shah; Felix Wolf; Sergey Zhumatiy; Vladimir Voevodin
Cluster systems usually run several applications, often from different users, concurrently, with individual applications competing for access to shared resources such as the file system or the network. Low application performance is therefore not always the result of inefficient program design, but may instead be caused by interference from outside. However, knowing the difference is essential for an appropriate response. Unfortunately, traditional performance-analysis techniques always consider an application in isolation, without the ability to compare its performance to the overall performance conditions on the system when it was executed. In this paper, we present a novel approach for correlating the performance behavior of applications running side by side. To accomplish this, we divide the application runtime into fine-grained time slices whose boundaries are synchronized across the entire system. By mapping performance data related to shared resources onto these time slices, we are able to establish the simultaneity of their usage across jobs, which can be indicative of inter-application interference. Our experiments show that such interference effects, for which the developer is usually not to blame, can degrade application performance significantly.
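The core of the approach is the binning of shared-resource activity into globally synchronized time slices. The sketch below is a minimal illustration of that idea with hypothetical I/O events and an assumed slice width; it is not the authors' implementation.

```python
# Minimal sketch of the time-slice idea: I/O events from two concurrent jobs
# are binned into globally synchronized slices, and the per-slice series are
# compared to expose simultaneous use of the shared file system.
from collections import defaultdict

SLICE_SECONDS = 10  # width of one synchronized time slice (assumed)

def bin_into_slices(events):
    """events: iterable of (timestamp_seconds, io_bytes). Returns {slice_id: bytes}."""
    slices = defaultdict(float)
    for ts, io_bytes in events:
        slices[int(ts // SLICE_SECONDS)] += io_bytes
    return slices

def overlap_score(slices_a, slices_b):
    """Fraction of shared slices in which both jobs touch the shared resource."""
    common = set(slices_a) & set(slices_b)
    busy_both = [s for s in common if slices_a[s] > 0 and slices_b[s] > 0]
    return len(busy_both) / max(len(common), 1)

job_a = bin_into_slices([(3, 1e6), (12, 5e6), (47, 2e6)])
job_b = bin_into_slices([(5, 4e6), (14, 3e6), (90, 1e6)])
print(overlap_score(job_a, job_b))
```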
Bioinformatics | 2018
Dmitry Suplatov; Kirill Kopylov; Nina N. Popova; Vladimir Voevodin; Vytas K. Švedas
Motivation: Comparative analysis of homologous proteins in a functionally diverse superfamily is a valuable tool for studying structure-function relationships, but it represents a methodological challenge. Results: The Mustguseal web server can automatically build large structure-guided sequence alignments of functionally diverse protein families that include thousands of proteins, based on all available information about their structures and sequences in public databases. Superimposition of protein structures is used to compare evolutionarily distant relatives, whereas alignment of sequences is used to compare close homologues. The final alignment can be downloaded for local use or operated on-line with the built-in interactive tools, and further submitted to the integrated sister web servers of Mustguseal to analyze conserved, subfamily-specific and co-evolving residues when studying protein function and regulation, designing improved enzyme variants for practical applications, and designing selective ligands to modulate functional properties of proteins. Availability and implementation: Freely available on the web at https://biokinet.belozersky.msu.ru/mustguseal. Contact: [email protected]. Supplementary information: Supplementary data are available at Bioinformatics online.
Computing Frontiers | 2016
Dmitry A. Nikitenko; Vladimir Voevodin; Sergey Zhumatiy
Managing and administering large-scale HPC centers is a complicated problem. Using a number of independent tools to resolve its seemingly independent sub-problems can become a bottleneck as the scale of systems grows rapidly, along with the number of hardware and software components, the variety of user applications and license types, the number of users and workgroups, and so on. The developed toolkit is designed to help resolve routine problems in managing and administering any supercomputer center, from a stand-alone system up to top-rank supercomputer centers that include a number of entirely different HPC systems. The toolkit implements a flexibly configurable set of essential tools in a single interface. It also provides useful automation for typical multi-step administration and management procedures. Another important design and implementation feature is that the toolkit can be installed and used without any significant changes to existing administration tools and system software. The toolkit is not integrated with the system software of the target machines: it runs on a remote server and executes scripts on the HPC systems via SSH as a dedicated user with limited access permissions. This greatly reduces the possibility of security issues and addresses many fault-tolerance concerns, which are among the key challenges on the road to exascale. At the same time, administrators remain free to perform any operation with whatever tools suit the situation, whether ours or any other available tool. Testing of the developed system demonstrated its practicality in an HPC center with several petaflop-level supercomputers and thousands of active researchers from a variety of institutions, working within several hundred applied projects.
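The non-invasive design, running commands over SSH as a dedicated low-privilege account rather than installing agents on the target machines, can be illustrated with a short sketch. The host name, user name, and the Slurm query below are hypothetical examples, not details from the paper.

```python
# A minimal sketch of the non-invasive design: the management server installs
# nothing on the target machine; it only runs commands over SSH as a dedicated,
# low-privilege account and parses the output.
import subprocess

def run_remote(host, command, user="hpc-monitor", timeout=30):
    """Execute a single command on a target HPC system via the system ssh client."""
    result = subprocess.run(
        ["ssh", f"{user}@{host}", command],
        capture_output=True, text=True, timeout=timeout, check=True,
    )
    return result.stdout

# Example: query the batch system for the current queue length (assumed Slurm,
# hypothetical host name).
if __name__ == "__main__":
    print(run_remote("lomonosov.example.org", "squeue --noheader | wc -l"))
```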
Proceedings of the 2nd International Conference “Numerical Computations: Theory and Algorithms” (NUMTA–2016) | 2016
Vadim Voevodin; Vladimir Voevodin; Denis Shaikhislamov; Dmitry A. Nikitenko
The efficiency of most supercomputer applications is extremely low. At the same time, users rarely even suspect that their applications may be wasting computing resources. Software tools need to be developed to help detect inefficient applications and report them to the users. We suggest an algorithm for detecting anomalies in the supercomputer’s task flow, based on data mining methods. System monitoring is used to calculate integral characteristics for every executed job, and these data are used as input for our classification method based on the Random Forest algorithm. The proposed approach can currently classify an application into one of three classes: normal, suspicious, and definitely anomalous. The approach has been demonstrated on actual applications running on the “Lomonosov” supercomputer.
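The classification step can be pictured as follows: integral per-job characteristics from monitoring are fed to a Random Forest that assigns one of the three classes. The sketch below uses scikit-learn on synthetic data; the feature names and labels are assumptions, not the Lomonosov monitoring data.

```python
# Illustrative sketch of the classification step on synthetic data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

CLASSES = ["normal", "suspicious", "anomalous"]

rng = np.random.default_rng(0)
# Hypothetical integral characteristics: mean CPU load, memory throughput,
# network traffic, I/O rate -- one row per finished job.
X_train = rng.random((300, 4))
y_train = rng.integers(0, 3, size=300)       # class labels prepared beforehand

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

new_jobs = rng.random((5, 4))                # characteristics of newly finished jobs
for job, label in zip(new_jobs, model.predict(new_jobs)):
    print(CLASSES[label], job.round(2))
```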
European Conference on Parallel Processing | 2015
Vladimir Voevodin; Victor Gergel; Nina Popova
The scale of changes in the computing world dictates the need for comparably major changes in the education system. Knowledge and skills with a strong foundation in parallelism concepts are becoming key qualities for any modern specialist. The situation cannot be changed by introducing just one training course; progress needs to be achieved in many directions at once by developing a supercomputing education infrastructure. This article presents the work performed in Russia to develop education in parallel, high-performance and distributed computing, as well as the results obtained and the lessons learnt from the national “Supercomputing Education” project.