Tomasz Danek
AGH University of Science and Technology
Publications
Featured research published by Tomasz Danek.
Computer, Information, and Systems Sciences, and Engineering | 2010
A. Kowal; Adam Piórkowski; Tomasz Danek; Anna Pięta
In this article, the efficiency of component-based software for seismic wave field modeling is examined. The most common component platforms (.NET, Java, and Mono) were analyzed for various combinations of operating system and hardware platform. The obtained results clearly indicate that the component approach can give satisfactory results in this kind of application, but overall efficiency can depend strongly on the operating system and hardware. The most important conclusion of this work is that, for this kind of computation and at the current stage of component technology development, there are almost no differences between commercial and free platforms.
Computer, Information, and Systems Sciences, and Engineering | 2010
Adam Piórkowski; Anna Pięta; A. Kowal; Tomasz Danek
An implementation and performance analysis of heat transfer modeling in the most popular component environments is the scope of this article. The computational problem is described, and the proposed domain decomposition for parallelization is shown. The implementation is prepared for MS .NET, Sun Java, and Mono, and tests are run for various combinations of operating system and hardware platform. The performance of the calculations is experimentally measured and analyzed. The most interesting issue is communication tuning in distributed component software: the proposed method can reduce computation time, but the final time also depends on network connection performance in the component environments. These results are presented and discussed.
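The decomposition pattern the abstract refers to can be illustrated with a minimal sketch (not the authors' code; the grid size, diffusion coefficient, and one-cell ghost exchange are illustrative assumptions): an explicit finite-difference heat step in which the grid is split into subdomains that exchange boundary ("ghost") cells each step, mimicking serially what the distributed component version does in parallel.

```python
def heat_step(u, alpha=0.1):
    """One explicit Euler step of u_t = alpha * u_xx (unit dx, dt folded into alpha)."""
    return [u[0]] + [
        u[i] + alpha * (u[i-1] - 2*u[i] + u[i+1]) for i in range(1, len(u) - 1)
    ] + [u[-1]]

def decomposed_step(subdomains, alpha=0.1):
    """Step each subdomain after exchanging one ghost cell with its neighbours;
    interior cells then receive exactly the whole-domain update."""
    padded = []
    for k, sub in enumerate(subdomains):
        left = subdomains[k-1][-1] if k > 0 else sub[0]
        right = subdomains[k+1][0] if k < len(subdomains) - 1 else sub[-1]
        padded.append([left] + sub + [right])
    return [heat_step(p, alpha)[1:-1] for p in padded]

# Whole-domain and decomposed runs agree while the heat stays in the interior.
u = [0.0]*8 + [100.0] + [0.0]*7           # hot spot in the middle of a 16-cell rod
parts = [u[0:4], u[4:8], u[8:12], u[12:16]]
for _ in range(5):
    u = heat_step(u)
    parts = decomposed_step(parts)
flat = [x for part in parts for x in part]
```

In the article's setting, each subdomain would live in its own component process, and the ghost-cell exchange is the communication whose tuning dominates the final run time.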
International Geoscience and Remote Sensing Symposium | 2009
Tomasz Danek
General-purpose computing on graphics processing units (GPGPU) is a fast-developing method of high performance computing (HPC). In some cases even a low-end video card can be several to dozens of times faster than a modern CPU core. Seismic wave field modeling is one problem of this kind. In some modern methods of seismic exploration or seismology, however, hundreds of thousands of forward modelings may be needed for the final solution of the computational problem. Using multi-GPU and hybrid CPU-GPU computations in distributed computing environments seems to be the natural next step in the development of this method of HPC.
International Conference on Computational Science | 2009
Tomasz Danek
GPGPU, general-purpose computing on graphics processing units, is a very effective and inexpensive way of dealing with time-consuming computations. In some cases even a low-end GPU can be dozens of times faster than a modern CPU. GPGPU technology can make a typical desktop computer powerful enough to perform the necessary computations in a fast, effective, and inexpensive way. Seismic wave field modeling is one problem of this kind. Sometimes a single modeled common shot-point gather or one wave field snapshot can reveal the nature of an analyzed wave phenomenon. On the other hand, such modelings are often part of complex and extremely time-consuming methods with almost unlimited demands on computational resources. This is always a problem for academic centers, especially now that the era of generous support from oil and gas companies has ended.
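To show the kind of kernel such work offloads to the GPU, here is a pure-Python reference of one time step of a generic second-order finite-difference scheme for the 2D acoustic wave equation (a textbook stencil, not the paper's code; grid size and the (c·dt/h)² value are illustrative). On a GPU each grid point would map to one thread performing exactly this arithmetic.

```python
def wave_step(p_prev, p_curr, c2dt2_h2):
    """Advance the pressure field one step; c2dt2_h2 = (c*dt/h)^2, stability needs <= 0.5 in 2D."""
    ny, nx = len(p_curr), len(p_curr[0])
    p_next = [[0.0] * nx for _ in range(ny)]
    for j in range(1, ny - 1):
        for i in range(1, nx - 1):
            lap = (p_curr[j][i-1] + p_curr[j][i+1] +
                   p_curr[j-1][i] + p_curr[j+1][i] - 4 * p_curr[j][i])
            p_next[j][i] = 2 * p_curr[j][i] - p_prev[j][i] + c2dt2_h2 * lap
    return p_next

# A point source in the middle of the grid, stepped a few times: the wavefront
# expands one cell per step and stays symmetric about the source.
n = 21
prev = [[0.0] * n for _ in range(n)]
curr = [[0.0] * n for _ in range(n)]
curr[n // 2][n // 2] = 1.0
for _ in range(5):
    prev, curr = curr, wave_step(prev, curr, 0.25)
```

The double loop is embarrassingly parallel, which is why even a low-end GPU outruns a CPU here: every p_next[j][i] depends only on the previous two time levels.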
SIAM Journal on Applied Mathematics | 2016
Tomasz Danek; Michael A. Slawinski
We examine the Backus averaging method---which, in general, allows one to represent a series of parallel layers by a transversely isotropic medium---using a repetitive shale-sandstone model. To examine this method in the context of experimental data, we perturb the model with random errors, in particular, the values of layer thicknesses and elasticity parameters. We analyze the effect of perturbations on the parameters of the transversely isotropic medium. Also, we analyze their effect on the relation between layer thicknesses and wavelengths. To gain insight into the strength of anisotropy of that medium, we invoke the Thomsen parameters.
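The averaging step can be sketched with the standard Backus (1962) formulas for isotropic layers (this is a generic implementation, not the authors' code; the Lamé parameters and thicknesses below are made-up illustrative values). The resulting five constants of the transversely isotropic medium then give the Thomsen parameters the paper uses to gauge anisotropy strength.

```python
def backus(layers):
    """Backus average of isotropic layers given as (thickness, lam, mu)."""
    H = sum(h for h, _, _ in layers)
    avg = lambda f: sum(h * f(lam, mu) for h, lam, mu in layers) / H
    c33 = 1.0 / avg(lambda lam, mu: 1.0 / (lam + 2 * mu))
    c44 = 1.0 / avg(lambda lam, mu: 1.0 / mu)
    c66 = avg(lambda lam, mu: mu)
    r = avg(lambda lam, mu: lam / (lam + 2 * mu))
    c13 = c33 * r
    c11 = avg(lambda lam, mu: 4 * mu * (lam + mu) / (lam + 2 * mu)) + c33 * r * r
    return c11, c13, c33, c44, c66

def thomsen(c11, c13, c33, c44, c66):
    """Thomsen (1986) anisotropy parameters of the equivalent TI medium."""
    eps = (c11 - c33) / (2 * c33)
    gam = (c66 - c44) / (2 * c44)
    dlt = ((c13 + c44) ** 2 - (c33 - c44) ** 2) / (2 * c33 * (c33 - c44))
    return eps, gam, dlt

# Repetitive two-layer stack in the spirit of the shale-sandstone model
# (thicknesses and Lame parameters, in GPa, are invented for illustration).
stack = [(1.0, 12.0, 6.0), (1.0, 8.0, 10.0)] * 10
eps, gam, dlt = thomsen(*backus(stack))
```

Perturbing the thicknesses and Lamé parameters with random errors before calling backus, as the paper does, then shows how measurement noise propagates into the TI constants and the Thomsen parameters.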
International Geoscience and Remote Sensing Symposium | 2011
Michal Rumanek; Tomasz Danek; Andrzej Lesniak
This paper presents preliminary results of studies on the possibilities of high performance processing of satellite images using graphics processing units. Although the numerical procedures used in this kind of computation are simple and fast, the size of a typical satellite scene makes them time consuming. This problem is especially troublesome when many scenes, sometimes hundreds, have to be processed in one computational task. At the present stage of the study, a distributed GPU-based computational infrastructure reduces the time of a typical computation by a factor of 5 to 6. It should be stressed that these results were obtained on a cluster of three old CUDA 2.1 NVIDIA Quadro 1600M cards, whose efficiency cannot be compared to that of modern GPUs.
Acta Geophysica | 2015
Tomasz Danek; Michael A. Slawinski
A generally anisotropic elasticity tensor can be related to its closest counterparts in various symmetry classes. We refer to these counterparts as effective tensors in these classes. In finding effective tensors, we do not assume a priori orientations of their symmetry planes and axes. Knowledge of orientations of Hookean solids allows us to infer properties of materials represented by these solids. Obtaining orientations and parameter values of effective tensors is a highly nonlinear process involving finding absolute minima for orthogonal projections under all three-dimensional rotations. Given the standard deviations of the components of a generally anisotropic tensor, we examine the influence of measurement errors on the properties of effective tensors. We use a global optimization method to generate thousands of realizations of a generally anisotropic tensor, subject to errors. Using this optimization, we perform a Monte Carlo analysis of distances between that tensor and its counterparts in different symmetry classes, as well as of their orientations and elasticity parameters.
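The Monte Carlo step alone can be sketched as follows (a schematic, not the authors' code: it perturbs the components of a 6x6 Voigt matrix by Gaussian errors with given standard deviations and looks at the distribution of Frobenius distances from the unperturbed tensor; the paper additionally finds, for every realization, the closest effective tensor in a chosen symmetry class over all 3D rotations, which requires a global optimizer and is omitted here). All numerical values are illustrative.

```python
import random

def perturb(c, sigma, rng):
    """One realization: add N(0, sigma[i][j]) noise, keeping the matrix symmetric."""
    n = len(c)
    out = [row[:] for row in c]
    for i in range(n):
        for j in range(i, n):
            e = rng.gauss(0.0, sigma[i][j])
            out[i][j] += e
            out[j][i] = out[i][j]
    return out

def frobenius(a, b):
    """Frobenius distance between two matrices of equal shape."""
    return sum((x - y) ** 2 for ra, rb in zip(a, b) for x, y in zip(ra, rb)) ** 0.5

rng = random.Random(1)
c = [[10.0 if i == j else 2.0 for j in range(6)] for i in range(6)]  # toy tensor
sigma = [[0.1] * 6 for _ in range(6)]                                # uniform errors
dists = [frobenius(perturb(c, sigma, rng), c) for _ in range(2000)]
mean_dist = sum(dists) / len(dists)
```

In the paper, each realization would be fed to the optimization that finds its effective tensors, so the histogram of distances (and of orientations and elasticity parameters) reflects the measurement errors.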
BDAS | 2018
Adrian Bogacz; Tomasz Danek; Katarzyna Miernik
In this paper, a mini expert platform for joint inversion is presented. A Pareto inversion scheme was applied to eliminate typical problems of this kind of inversion, such as arbitrarily chosen target-function weights and laborious interactivity. Particle Swarm Optimization was used as the main optimization engine. The presented solution is written entirely in JavaScript and provides easy access to core system functions, even for non-technical users. As an example, the geophysical problem of joint inversion of surface waves was chosen, but the solution is capable of inverting any kind of data as long as two or more target functions can be provided. All obtained results were compared, in terms of both results and efficiency, with software written by the authors in C.
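For orientation, the optimization engine the abstract names can be sketched as a minimal textbook PSO (written here in Python for brevity; the platform itself is JavaScript and wraps PSO in a Pareto scheme over several target functions rather than minimizing a single one as below). All parameter values are conventional defaults, not the platform's settings.

```python
import random

def pso(f, lo, hi, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimize f over [lo, hi]^dim with a basic global-best particle swarm."""
    rng = random.Random(seed)
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    P = [x[:] for x in X]                 # personal best positions
    pf = [f(x) for x in X]                # personal best values
    gf = min(pf)
    g = P[pf.index(gf)][:]                # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                V[i][d] = (w * V[i][d]
                           + c1 * rng.random() * (P[i][d] - X[i][d])
                           + c2 * rng.random() * (g[d] - X[i][d]))
                X[i][d] = min(hi, max(lo, X[i][d] + V[i][d]))
            fx = f(X[i])
            if fx < pf[i]:
                pf[i], P[i] = fx, X[i][:]
                if fx < gf:
                    gf, g = fx, X[i][:]
    return g, gf

best, best_val = pso(lambda x: sum((xi - 3.0) ** 2 for xi in x), -10.0, 10.0, dim=2)
```

In the Pareto variant, particle fitness is judged by non-dominance across all target functions instead of a single weighted value, which is what removes the arbitrary weights.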
Acta Geophysica | 2018
Mateusz Zaręba; Tomasz Danek
In this paper, we present an analysis of the borehole seismic data processing procedures required to obtain high-quality vertical stacks and polarization angles from walkaway VSP (vertical seismic profile) data gathered in challenging conditions. As polarization angles are necessary for more advanced procedures such as anisotropy parameter determination, their quality is critical for a proper description of the medium. Examination of the Wysin-1 VSP experiment data indicated that the best results are obtained when rotation is performed for each shot after de-noising and vertical stacking of the un-rotated data. Additionally, we propose a signal-matching procedure that can substantially increase data quality.
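The quantity at stake can be illustrated with a standard covariance-based estimate of the horizontal polarization angle from the two horizontal components of a record (a generic sketch on synthetic data, not the authors' processing flow; the source angle and wavelet are invented).

```python
import math

def polarization_azimuth(x, y):
    """Principal direction of the (x, y) particle motion, in radians:
    half the argument of the 2x2 covariance matrix's rotation angle."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return 0.5 * math.atan2(2 * sxy, sxx - syy)

# Synthetic linearly polarized arrival at 30 degrees azimuth.
t = [i / 50.0 for i in range(100)]
s = [math.sin(8 * math.pi * ti) * math.exp(-2 * ti) for ti in t]
theta = math.radians(30.0)
x = [math.cos(theta) * si for si in s]
y = [math.sin(theta) * si for si in s]
est = polarization_azimuth(x, y)
```

On noisy field data the estimate degrades, which is why the paper's finding on where rotation sits relative to de-noising and stacking matters for the recovered angle.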
Archive | 2013
Monika Chuchro; Kamil Szostek; Adam Piórkowski; Tomasz Danek
Analysis of the logs of remote network services is one of the most difficult and time-consuming tasks, and the amount and variety of log types are still growing. As the number of services grows, so does the amount of logs generated by computer programs, and their analysis becomes impossible for the common user. Yet this analysis is essential, because it provides a large amount of information necessary to keep the system in good shape and thus ensure the safety of its users. All methods of filtering relevant information, which reduce the logs for further analysis, require human expertise and a great deal of work. Nowadays, researchers take advantage of data mining techniques such as genetic algorithms, clustering algorithms, and neural networks to analyze system security logs in order to detect intrusions or suspicious activity. Some of these techniques achieve satisfactory results, yet require a very large number of attributes gathered from network traffic to extract useful information. To address this problem, we apply and evaluate several data mining techniques (Decision Trees, Correspondence Analysis, and Hierarchical Clustering) on a reduced number of attributes over log data sets acquired from a real network, in order to classify traffic logs as normal or suspicious. The obtained results allow independent interpretation and make it possible to determine which attributes were used to reach a decision. This approach reduces the number of logs the administrator is forced to view, contributes to improved efficiency, and helps identify new types and sources of attacks.
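The decision-tree idea on a reduced attribute set can be sketched with its simplest building block, a single split (decision stump) chosen by information gain over invented log attributes (the attribute names, data, and labels below are made up for illustration; the paper uses real network logs and full decision trees).

```python
import math

def entropy(labels):
    """Shannon entropy of a label list, in bits."""
    n = len(labels)
    out = 0.0
    for c in set(labels):
        p = labels.count(c) / n
        out -= p * math.log2(p)
    return out

def best_stump(rows, labels):
    """Pick the (attribute, threshold) split with the largest information gain."""
    base, best = entropy(labels), (None, None, 0.0)
    for a in range(len(rows[0])):
        for thr in sorted({r[a] for r in rows}):
            left = [l for r, l in zip(rows, labels) if r[a] <= thr]
            right = [l for r, l in zip(rows, labels) if r[a] > thr]
            if not left or not right:
                continue
            rem = (len(left) * entropy(left) + len(right) * entropy(right)) / len(rows)
            if base - rem > best[2]:
                best = (a, thr, base - rem)
    return best[0], best[1]

# Invented records: (failed_logins, bytes_out) per log entry; label 1 = suspicious.
rows = [(0, 120), (1, 80), (0, 200), (9, 40), (12, 35), (8, 500)]
labels = [0, 0, 0, 1, 1, 1]
attr, thr = best_stump(rows, labels)
```

A full decision tree applies this split recursively to each branch; the chosen attributes are exactly what makes the resulting classification interpretable to an administrator.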