
Publication


Featured research published by Kalin Kanov.


Nature | 2013

Flux-freezing breakdown in high-conductivity magnetohydrodynamic turbulence

Gregory L. Eyink; Ethan T. Vishniac; Cristian Constantin Lalescu; Hussein Aluie; Kalin Kanov; Kai Bürger; Randal C. Burns; Charles Meneveau; Alexander S. Szalay

The idea of ‘frozen-in’ magnetic field lines for ideal plasmas is useful to explain diverse astrophysical phenomena, for example the shedding of excess angular momentum from protostars by twisting of field lines frozen into the interstellar medium. Frozen-in field lines, however, preclude the rapid changes in magnetic topology observed at high conductivities, as in solar flares. Microphysical plasma processes are a proposed explanation of the observed high rates, but it is an open question whether such processes can rapidly reconnect astrophysical flux structures much greater in extent than several thousand ion gyroradii. An alternative explanation is that turbulent Richardson advection brings field lines implosively together from distances far apart to separations of the order of gyroradii. Here we report an analysis of a simulation of magnetohydrodynamic turbulence at high conductivity that exhibits Richardson dispersion. This effect of advection in rough velocity fields, which appear non-differentiable in space, leads to line motions that are completely indeterministic or ‘spontaneously stochastic’, as predicted in analytical studies. The turbulent breakdown of standard flux freezing at scales greater than the ion gyroradius can explain fast reconnection of very large-scale flux structures, both observed (solar flares and coronal mass ejections) and predicted (the inner heliosheath, accretion disks, γ-ray bursts and so on). For laminar plasma flows with smooth velocity fields or for low turbulence intensity, stochastic flux freezing reduces to the usual frozen-in condition.


Journal of Turbulence | 2012

Studying Lagrangian dynamics of turbulence using on-demand fluid particle tracking in a public turbulence database

Huidan Yu; Kalin Kanov; Eric S. Perlman; Jason Graham; Edo Frederix; Randal C. Burns; Alexander S. Szalay; Gregory L. Eyink; Charles Meneveau

The study uses data from a pseudo-spectral direct numerical simulation (DNS) of forced isotropic turbulence stored in the Johns Hopkins public turbulence database. The flow's Taylor-scale Reynolds number is Reλ = 443, and the simulation output spans about one large-scale eddy turnover time. Besides the stored velocity and pressure fields, built-in 1st- and 2nd-order space differentiation as well as spatial and temporal interpolations are implemented on the database. The resulting 27 terabytes (TB) of data are public and can be accessed remotely through an interface based on a modern Web-services model. Users may write and execute analysis programs on their host computers, while the programs make subroutine-like calls (getFunctions) requesting desired variables (velocity and pressure and their gradients) over the network. The architecture of the database and the initial built-in functionalities are described in a previous JoT paper [2]. In the present paper, further developments of the database system are described, mainly the newly developed getPosition function. Given an initial position, an integration time-step, and initial and end times, the getPosition function tracks arrays of fluid particles and returns the particle locations at the end of the trajectory integration time. The getPosition function is tested by comparing with trajectories computed outside of the database. It is then applied to study Lagrangian velocity structure functions as well as tensor-based Lagrangian time correlation functions. The roles of the pressure Hessian and viscous terms in the evolution of the symmetric and antisymmetric parts of the velocity gradient tensor are explored by comparing the time correlations with and without these terms. Besides the getPosition function, several other updates to the database are described, such as a function to access the forcing term in the DNS, a new, more efficient interpolation algorithm based on partial sums, and a new Matlab interface.
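As context for the getPosition functionality, the kind of predictor-corrector trajectory integration it performs server-side can be sketched as follows. The synthetic velocity field below stands in for the database's getFunction calls, and all names are illustrative rather than taken from the database API:

    import numpy as np

    # Synthetic stand-in for a database velocity query (illustrative only).
    def velocity(points, t):
        x, y, z = points[:, 0], points[:, 1], points[:, 2]
        return np.stack([np.sin(y) * np.cos(t), np.sin(z), np.sin(x)], axis=1)

    # Second-order (predictor-corrector) advection of an array of particles,
    # returning their positions at the end time, as getPosition does.
    def track(points, t_start, t_end, dt):
        t, p = t_start, points.copy()
        while t < t_end:
            h = min(dt, t_end - t)
            u1 = velocity(p, t)               # predictor step
            u2 = velocity(p + h * u1, t + h)  # corrector step
            p += 0.5 * h * (u1 + u2)
            t += h
        return p

    p0 = np.random.default_rng(0).random((1000, 3)) * 2 * np.pi
    p_end = track(p0, t_start=0.0, t_end=1.0, dt=0.01)
    print(p_end[:3])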


Journal of Turbulence | 2016

A Web services accessible database of turbulent channel flow and its use for testing a new integral wall model for LES

Jason Graham; Kalin Kanov; Xiang Yang; Myoungkyu Lee; Nicholas Malaya; Cristian Constantin Lalescu; Randal C. Burns; Gregory L. Eyink; Alexander S. Szalay; Robert D. Moser; Charles Meneveau

The output from a direct numerical simulation (DNS) of turbulent channel flow at Reτ ≈ 1000 is used to construct a publicly and Web-services accessible, spatio-temporal database for this flow. The simulated channel has a size of 8πh × 2h × 3πh, where h is the channel half-height. Data are stored at 2048 × 512 × 1536 spatial grid points for a total of 4000 time samples, one every 5 time steps of the DNS. These cover an entire channel flow-through time, i.e. the time it takes to traverse the channel length 8πh at the mean velocity of the bulk flow. Users can access the database through an interface based on the Web-services model and perform numerical experiments on the slightly over 100 terabytes (TB) of DNS data from their remote platforms, such as laptops or local desktops. Additional technical details about the pressure calculation, database interpolation, and differentiation tools are provided in several appendices. As a sample application of the channel flow database, we use it to conduct an a priori test of a recently introduced integral wall model for large eddy simulation of wall-bounded turbulent flow. The results are compared with those of the equilibrium wall model, showing the strengths of the integral wall model relative to the equilibrium model.
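For orientation, the equilibrium wall model used as the baseline in this comparison estimates the wall stress by matching a single off-wall velocity sample to the logarithmic law of the wall. A minimal sketch of that idea, with illustrative constants (κ = 0.41, B = 5.2), a simple fixed-point solver, and sample numbers not taken from the paper:

    import math

    def utau_equilibrium(U, y, nu, kappa=0.41, B=5.2, iters=50):
        """Estimate friction velocity u_tau from one off-wall sample (velocity U at
        height y) by iterating the log law U/u_tau = (1/kappa)*ln(y*u_tau/nu) + B."""
        utau = math.sqrt(nu * U / y)  # initial guess from a linear (viscous) profile
        for _ in range(iters):
            utau = U / ((1.0 / kappa) * math.log(y * utau / nu) + B)
        return utau

    # Channel half-height units (h = 1): nu ~ 1e-3 gives Re_tau ~ 1000 when u_tau ~ 1;
    # sample the mean velocity at y = 0.1h (values are illustrative).
    u_tau = utau_equilibrium(U=18.0, y=0.1, nu=1.0e-3)
    print("estimated u_tau:", u_tau, " wall stress tau_w/rho:", u_tau**2)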


Network-Aware Data Management | 2011

An architecture for a data-intensive computer

Edward Givelberg; Alexander S. Szalay; Kalin Kanov; Randal C. Burns

Scientific instruments, as well as simulations, generate increasingly large datasets, changing the way we do science. We propose a system that we call the data-intensive computer for computing with petascale-sized datasets. The data-intensive computer consists of an HPC cluster, a massively parallel database and a set of computing servers running the data-intensive operating system, which turns the database into a layer in the memory hierarchy of the data-intensive computer. The data-intensive operating system is data-object-oriented: the abstract programming model of a sequential file, central to traditional computer operating systems, is replaced with system-level support for high-level data objects, such as multi-dimensional arrays, graphs and sparse arrays. User application programs will be compiled into code that is executed both on the HPC cluster and inside the database. The data-intensive operating system is, however, non-local, allowing remote applications to execute code inside the database. This model supports a collaborative environment in which a large dataset is typically created and processed by a large group of users. We are developing a software library, MPI-DB, which is a prototype of the data-intensive operating system. It is currently being used by the Turbulence group at JHU to store simulation output in the database and to perform simulations refining previously stored results.


IEEE International Conference on High Performance Computing, Data and Analytics | 2012

Data-intensive spatial filtering in large numerical simulation datasets

Kalin Kanov; Randal C. Burns; Gregory L. Eyink; Charles Meneveau; Alexander S. Szalay

We present a query processing framework for the efficient evaluation of spatial filters on large numerical simulation datasets stored in a data-intensive cluster. Previously, filtering of large numerical simulations stored in scientific databases has been impractical owing to the immense data requirements. Instead, filtering is done during the simulation or by loading snapshots into the aggregate memory of an HPC cluster. Our system performs filtering within the database and supports large filter widths. We present two complementary methods of execution: I/O streaming computes a batch filter query in a single sequential pass using incremental evaluation of decomposable kernels, while summed volumes generates an intermediate dataset and evaluates each filtered value by accessing only eight points in it. We dynamically choose between these methods depending upon workload characteristics. The system allows us to perform filters against large datasets with little overhead: query performance scales with the cluster's aggregate I/O throughput.
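The summed-volumes method referenced above is the 3D analogue of a summed-area table: after one pass of cumulative sums along each axis, any box-filter sum can be assembled from eight corner values by inclusion-exclusion. A minimal illustrative sketch (the function names and the small test field are not from the paper):

    import numpy as np

    def summed_volume_table(field):
        """Cumulative sums along all three axes: S[i,j,k] = sum of field[:i+1,:j+1,:k+1]."""
        return field.cumsum(0).cumsum(1).cumsum(2)

    def box_sum(S, lo, hi):
        """Sum of the field over the box lo..hi (inclusive) using 8 corner reads of the
        summed-volume table S, via 3D inclusion-exclusion."""
        i0, j0, k0 = lo[0] - 1, lo[1] - 1, lo[2] - 1
        i1, j1, k1 = hi

        def at(i, j, k):
            return 0.0 if (i < 0 or j < 0 or k < 0) else S[i, j, k]

        return (at(i1, j1, k1)
                - at(i0, j1, k1) - at(i1, j0, k1) - at(i1, j1, k0)
                + at(i0, j0, k1) + at(i0, j1, k0) + at(i1, j0, k0)
                - at(i0, j0, k0))

    # Check against a direct sum on a small random field.
    rng = np.random.default_rng(0)
    f = rng.standard_normal((16, 16, 16))
    S = summed_volume_table(f)
    lo, hi = (3, 5, 2), (10, 12, 9)
    direct = f[lo[0]:hi[0] + 1, lo[1]:hi[1] + 1, lo[2]:hi[2] + 1].sum()
    assert np.isclose(box_sum(S, lo, hi), direct)
    volume = (hi[0]-lo[0]+1) * (hi[1]-lo[1]+1) * (hi[2]-lo[2]+1)
    print("box-filtered mean:", box_sum(S, lo, hi) / volume)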


EuroMPI'11: Proceedings of the 18th European MPI Users' Group Conference on Recent Advances in the Message Passing Interface | 2011

MPI-DB, a parallel database services software library for scientific computing

Edward Givelberg; Alexander S. Szalay; Kalin Kanov; Randal C. Burns

Large-scale scientific simulations generate petascale data sets that are subsequently analyzed by groups of researchers, often in databases. We developed a software library, MPI-DB, to provide database services to scientific computing applications. As a bridge between CPU-intensive and data-intensive computations, MPI-DB exploits massive parallelism within large databases to provide scalable, fast service. It is built as a client-server framework, using MPI, with the MPI-DB server acting as an intermediary between the user application running an MPI-DB client and the database servers. MPI-DB provides high-level objects such as multi-dimensional arrays, acting as an abstraction layer that effectively hides the database from the end user.
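To illustrate the client/intermediary-server pattern described above (this is not the actual MPI-DB API), here is a minimal mpi4py sketch in which one rank plays the MPI-DB server and the remaining ranks send it blocks of a distributed array; the roles, message tags and in-memory store are purely illustrative:

    import numpy as np
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()
    SERVER = 0  # one rank stands in for the MPI-DB server in front of the database tier

    if rank == SERVER:
        store = {}  # stand-in for the parallel database behind the server
        for _ in range(size - 1):
            meta = comm.recv(source=MPI.ANY_SOURCE, tag=1)    # array name, shape, dtype
            block = np.empty(meta["shape"], dtype=meta["dtype"])
            comm.Recv(block, source=meta["rank"], tag=2)      # the array block itself
            store[(meta["name"], meta["rank"])] = block
        print("server stored", len(store), "blocks")
    else:
        block = np.random.rand(4, 4, 4)  # a local piece of a distributed 3D array
        comm.send({"name": "velocity", "rank": rank,
                   "shape": block.shape, "dtype": block.dtype.str}, dest=SERVER, tag=1)
        comm.Send(block, dest=SERVER, tag=2)

Run with, for example, mpirun -n 4; in MPI-DB the server side would forward the blocks to the database servers rather than holding them in memory.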


Proceedings of the 20th European MPI Users' Group Meeting | 2013

Run-time creation of the turbulent channel flow database by an HPC simulation using MPI-DB

Jason Graham; Edward Givelberg; Kalin Kanov

We demonstrate a method based on an MPI client-server implementation for storing the output of computations directly into the database. Our method automates the previously used, inefficient ingestion process, which required the development of special tools for each simulation. In large-scale channel flow simulation experiments we were able to ingest the output data sets in real time, without delaying the simulation process. This was accomplished by building a Fortran interface to the MPI-DB software library and using it within the simulation code. The channel flow simulation data set will be exposed for analysis to researchers through the JHU Public Turbulence Database [7].


IEEE International Conference on High Performance Computing, Data and Analytics | 2015

Particle tracking in open simulation laboratories

Kalin Kanov; Randal C. Burns

Particle tracking along streamlines and pathlines is a common scientific analysis technique with demanding data, computation and communication requirements. It has been studied in the context of high-performance computing owing to the difficulty of parallelizing it efficiently and its high communication and computational load. In this paper, we study efficient evaluation methods for particle tracking in open simulation laboratories. Simulation laboratories have a fundamentally different architecture from today's supercomputers and provide publicly available analysis functionality. We focus on the I/O demands of particle tracking for numerical simulation datasets hundreds of terabytes (TB) in size. We compare data-parallel and task-parallel approaches for the advection of particles and show scalability results on data-intensive workloads from a live production environment. We have developed particle tracking capabilities for the Johns Hopkins Turbulence Databases, which store computational fluid dynamics simulation data, including forced isotropic turbulence, magnetohydrodynamics, channel flow turbulence and homogeneous buoyancy-driven turbulence.
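A toy sketch of the two partitioning strategies compared here, data-parallel (partition the domain, advect whatever particles fall in your slab) versus task-parallel (partition the particles, fetch whatever data they touch); the slab decomposition and particle counts are illustrative, not the production implementation:

    import numpy as np

    # Toy domain split into P equal slabs along x; particles are points in [0, 1)^3.
    P = 4
    rng = np.random.default_rng(1)
    particles = rng.random((10_000, 3))

    # Task-parallel: split the particles evenly; each worker must fetch whatever
    # data blocks its own particles touch (good load balance, scattered I/O).
    task_parallel = np.array_split(np.arange(len(particles)), P)

    # Data-parallel: split the domain; each worker advects only the particles that
    # currently sit in its slab (localized I/O, load depends on where particles cluster).
    slab = np.minimum((particles[:, 0] * P).astype(int), P - 1)
    data_parallel = [np.where(slab == w)[0] for w in range(P)]

    print("task-parallel counts:", [len(ix) for ix in task_parallel])
    print("data-parallel counts:", [len(ix) for ix in data_parallel])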


Journal of Parallel and Distributed Computing | 2018

Remote visual analysis of large turbulence databases at multiple scales

Jesus J. Pulido; Daniel Livescu; Kalin Kanov; Randal C. Burns; Curtis Vincent Canada; James P. Ahrens; Bernd Hamann

The remote analysis and visualization of raw large turbulence datasets is challenging. Current accurate direct numerical simulations (DNS) of turbulent flows generate datasets with billions of points per time-step and several thousand time-steps per simulation. Until recently, the analysis and visualization of such datasets was restricted to scientists with access to large supercomputers. The public Johns Hopkins Turbulence Database simplifies access to multi-terabyte turbulence datasets and facilitates the computation of statistics and extraction of features through the use of commodity hardware. We present a framework designed around wavelet-based compression for high-speed visualization of large datasets and methods supporting multi-resolution analysis of turbulence. By integrating common technologies, this framework enables remote access to tools available on supercomputers and over 230 terabytes of DNS data over the Web. The database toolset is expanded by providing access to exploratory data analysis tools, such as wavelet decomposition capabilities and coherent feature extraction.
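To illustrate the threshold-and-reconstruct idea behind wavelet-based compression (this is not the paper's pipeline; it assumes the third-party PyWavelets package and a toy 1D signal): decompose, zero the smallest detail coefficients, reconstruct, and measure the error.

    import numpy as np
    import pywt

    # Toy 1D signal standing in for one line of a turbulence field.
    x = np.linspace(0, 2 * np.pi, 1024)
    noise = 0.05 * np.random.default_rng(2).standard_normal(x.size)
    signal = np.sin(3 * x) + 0.3 * np.sin(25 * x) + noise

    # Multi-level wavelet decomposition, then zero the smallest detail coefficients.
    coeffs = pywt.wavedec(signal, "db4", level=5)
    flat = np.concatenate(coeffs[1:])
    threshold = np.quantile(np.abs(flat), 0.90)  # keep roughly 10% of detail coefficients
    compressed = [coeffs[0]] + [pywt.threshold(c, threshold, mode="hard") for c in coeffs[1:]]
    reconstructed = pywt.waverec(compressed, "db4")[:signal.size]

    rel_error = np.linalg.norm(signal - reconstructed) / np.linalg.norm(signal)
    print("relative L2 error after dropping 90% of detail coefficients:", rel_error)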


IEEE International Conference on High Performance Computing, Data and Analytics | 2011

I/O streaming evaluation of batch queries for data-intensive computational turbulence

Kalin Kanov; Eric Perlman; Randal C. Burns; Yanif Ahmad; Alexander S. Szalay

Collaboration


Top co-authors of Kalin Kanov.

Jason Graham

Johns Hopkins University


Hussein Aluie

Los Alamos National Laboratory