Karoly Bosa
Johannes Kepler University of Linz
Publications
Featured research published by Karoly Bosa.
Journal of Symbolic Computation | 2003
Wolfgang Schreiner; Christian Mittermaier; Karoly Bosa
We describe the design and use of Distributed Maple, an environment for executing parallel computer algebra programs on multiprocessors and heterogeneous clusters. The system embeds kernels of the computer algebra system Maple as computational engines into a networked coordination layer implemented in the programming language Java. On the basis of a comparatively high-level programming model, one may write parallel Maple programs that show good speedups in medium-scaled environments. We report on the use of the system for the parallelization of various functions of the algebraic geometry library CASA and demonstrate how design decisions affect the dynamic behaviour and performance of a parallel application. Numerous experimental results allow comparison of Distributed Maple with other systems for parallel computer algebra.
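The task-based coordination model described above, in which parallel tasks are started remotely and their results are later retrieved via task handles, can be illustrated with a small Python sketch. This is a hypothetical analogy only: Distributed Maple's actual interface is Maple- and Java-based, and `poly_eval`/`parallel_eval` are invented names for illustration.

```python
from concurrent.futures import ThreadPoolExecutor

def poly_eval(coeffs, x):
    """Stand-in 'computational engine' task: evaluate a polynomial (Horner's rule)."""
    result = 0
    for c in coeffs:
        result = result * x + c
    return result

def parallel_eval(coeffs, points):
    # Submit one task per evaluation point; each future plays the role
    # of a task handle in the coordination layer.
    with ThreadPoolExecutor() as pool:
        handles = [pool.submit(poly_eval, coeffs, x) for x in points]
        # Waiting on a handle retrieves that task's result.
        return [h.result() for h in handles]

print(parallel_eval([1, 0, -2], [0, 1, 2, 3]))  # x**2 - 2 at each point → [-2, -1, 2, 7]
```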
Correct Software in Web Applications and Web Services | 2015
Karoly Bosa; Roxana-Maria Holom; Mircea Boris Vleju
In our former work, we showed that cloud computing still requires a great deal of fundamental research. Among the many open problems in cloud computing, we identified the lack of client orientation and the lack of formal foundations as serious deficiencies. In this chapter, we summarize our research and discuss the architectures as well as the formal models of some software solutions with which we address (a part of) these two problems in cloud computing.

The solution we propose is a novel and uniform client-cloud interaction approach by which cloud service owners, who may be different from the cloud providers, are able to fully control the usage of their services for each user subscription. In this context, any cloud service can be invoked by distinct devices; therefore, the content must be adapted to various channels and end devices, in particular with respect to needs arising from mobile clients. For a quick and seamless integration between the cloud provider’s identity management system and the system used by the client, we introduce the concept of a client-centric tool. An extension of the client-cloud interaction model enables client-to-client interaction (CTCI) in an almost direct way, so that the involvement of cloud services is transparent to the users.

In this chapter, we propose a formalization of this solution that incorporates the major advantages of abstract state machines (ASMs) and ambient calculus by specifying the algorithms of executable components (agents) in terms of ASMs and by describing their communication topology, locality, and mobility in terms of ambient calculus.
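An ASM specifies each agent by transition rules over an abstract state, with all updates of a step collected first and then applied simultaneously. The following minimal Python rendering of that step semantics is an illustrative sketch only; the state fields and the `client_rule` agent are invented for the example and are not part of the chapter's actual specification.

```python
def asm_step(state, rules):
    """One ASM step: collect all updates against the *current* state,
    then apply them simultaneously."""
    updates = {}
    for rule in rules:
        updates.update(rule(state))
    new_state = dict(state)
    new_state.update(updates)
    return new_state

# Toy agent rule (hypothetical): a client is served if the service is
# free, which in turn marks the service as busy.
def client_rule(state):
    if state["service"] == "busy":
        return {"client": "waiting"}
    return {"client": "served", "service": "busy"}

s = asm_step({"client": "idle", "service": "free"}, [client_rule])
```

Because every rule reads the same pre-state, the order in which rules fire within a step does not matter, which is the property that makes ASM specifications amenable to formal analysis.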
Archive | 2008
Karoly Bosa; Wolfgang Schreiner
In this paper, we compare two implementations of a grid-based software system on the grid middleware Globus Toolkit 4 and gLite, respectively. This system, called “Grid-Enabled SEE++”, is grid-based simulation software that supports the diagnosis and treatment of certain eye motility disorders (strabismus). First, we developed a parallel version of the software with the help of Globus 4. Since we encountered some limitations of Globus 4, we also designed and developed a version of SEE++ based on gLite. We focus on the differences between the initial Globus version and the gLite version of our software system and report on some comparative benchmark results.
Archive | 2009
Wolfgang Schreiner; Karoly Bosa; Andreas Langegger; Thomas Leitner; Bernhard Moser; Szilárd Páll; Volkmar Wieser; Wolfram Wöß
The core goal of parallel computing is to speed up computations by executing independent computational tasks concurrently (“in parallel”) on multiple units in a processor, on multiple processors in a computer, or on multiple networked computers which may even be spread across large geographical scales (distributed and grid computing); it is the dominant principle behind “supercomputing” or “high-performance computing”. For several decades, the density of transistors on a computer chip has doubled every 18–24 months (“Moore’s Law”); until recently, this rate could be directly transformed into a corresponding increase of a processor’s clock frequency and thus into an automatic performance gain for sequential programs. However, since a processor’s power consumption also increases with its clock frequency, this strategy of “frequency scaling” ultimately became unsustainable: since 2004, clock frequencies have remained essentially stable, and additional transistors have been primarily used to build multiple processors on a single chip (multi-core processors). Today, therefore, every kind of software (not only scientific software) must be written in a parallel style to profit from newer computer hardware.
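The practical consequence of this shift is that even a simple loop must first be decomposed into independent chunks before extra cores can help. A minimal sketch of such a decomposition, evenly splitting an index range across a given number of workers (the function name and interface are invented for illustration):

```python
import os

def chunk_range(n, workers):
    """Split the index range [0, n) into `workers` near-equal chunks,
    distributing the remainder over the first chunks."""
    base, extra = divmod(n, workers)
    chunks, start = [], 0
    for i in range(workers):
        size = base + (1 if i < extra else 0)
        chunks.append((start, start + size))
        start += size
    return chunks

# Each chunk can then be processed concurrently on its own core,
# e.g. one chunk per available CPU:
workers = os.cpu_count() or 4
print(chunk_range(10, 3))  # → [(0, 4), (4, 7), (7, 10)]
```

The chunks are disjoint and cover the range exactly once, so the per-chunk results can be combined without synchronization during the parallel phase.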
european conference on parallel processing | 2001
Wolfgang Schreiner; Gábor Kusper; Karoly Bosa
We have extended the parallel computer algebra system Distributed Maple with fault tolerance mechanisms such that computations are no longer limited by the mean time between failures. This is complicated by the fact that task arguments and results may embed task handles and that the system’s scheduling layer has only little information about the computing layer. Nevertheless, the mostly functional parallel programming model makes this possible by relatively simple means.
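The key property exploited here is that a task in a (mostly) functional model has no side effects, so after a node failure it can simply be re-executed elsewhere. A hypothetical sketch of restart-on-failure scheduling, not Distributed Maple's actual mechanism; `flaky_square` merely simulates a transient node crash:

```python
def run_with_retries(task, args, attempts=3):
    """Re-execute a side-effect-free task until it succeeds or attempts run out."""
    for _ in range(attempts):
        try:
            return task(*args)
        except ConnectionError:
            # A node failure only loses this attempt; since the task is
            # pure, it can safely be rescheduled on another node.
            continue
    raise RuntimeError("task failed on all attempts")

def flaky_square(x, _state=[True]):
    # Simulated transient failure: the first call "crashes", later calls succeed.
    if _state[0]:
        _state[0] = False
        raise ConnectionError("simulated node crash")
    return x * x
```

Because re-execution is always safe for pure tasks, no checkpointing of task-internal state is needed; only the mapping from handles to pending tasks must survive the failure.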
Archive | 2016
Abdelkader Hameurlain; Josef Küng; Roland Wagner; Klaus-Dieter Schewe; Karoly Bosa
Cloud computing is evolving as a new paradigm in service computing that reduces initial infrastructure investment and maintenance costs. Virtualization technology is used to create virtual infrastructure by sharing physical resources through virtual machines. By using these virtual machines, cloud computing enables the effective usage of resources with economic benefits for customers. Because of these advantages, the scientific community is also considering a shift from grid and cluster computing to cloud computing. However, virtualization technology comes with significant performance penalties. Moreover, scientific jobs differ from commercial workloads. In order to understand the reliability and feasibility of cloud computing for scientific workloads, we have to understand the technology and its performance. In this work, we have evaluated scientific jobs as well as standard benchmarks on private and public clouds to understand the exact performance penalties involved in the adoption of cloud computing. These jobs are categorized as CPU-, memory-, network-, and I/O-intensive. We also analyzed the results and compared the performance of private and public cloud virtual machines by considering execution time as well as price. The results show that cloud computing faces considerable performance overhead because of virtualization technology. Therefore, cloud computing technology needs improvement to execute scientific workloads.
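The time-and-price comparison described above boils down to measuring wall-clock execution time and weighting it by the instance's rate. A minimal sketch of that bookkeeping; the workload, the per-second billing model, and the $0.085/h rate are illustrative assumptions, not figures from the study:

```python
import time

def time_job(job, *args):
    """Measure wall-clock execution time of a benchmark job, in seconds."""
    start = time.perf_counter()
    job(*args)
    return time.perf_counter() - start

def cost(seconds, price_per_hour):
    """Price of one run, assuming per-second billing at an hourly rate."""
    return seconds / 3600.0 * price_per_hour

def cpu_intensive(n):
    # Stand-in CPU-bound workload: sum of squares.
    return sum(i * i for i in range(n))

elapsed = time_job(cpu_intensive, 100_000)
print(f"{elapsed:.4f}s, cost ≈ ${cost(elapsed, 0.085):.6f}")
```

Comparing private and public clouds then amounts to running the same categorized jobs on each and ranking them by this (time, cost) pair.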
Concurrency and Computation: Practice and Experience | 2016
Marc Frîncu; Karoly Bosa
The purpose of this special issue is to collect the best papers presented at the Distributed Computing (DC) track of the 16th International Symposium on Symbolic and Numeric Algorithms for Scientific Computing (SYNASC) held on September 22–25, 2014 in Timisoara, Romania. The multidisciplinary nature of the conference brings together people from various computer science areas, including symbolic computing, numerical analysis, multi-agent systems, nature-inspired processing, and distributed/parallel computing. These topics give the attendees the chance to uncover interdisciplinary research ideas and concrete use cases for their work. Among them, distributed computing, with an increasing interest in elastic distributed applications on platforms such as clouds and high-performance computing on clouds, is in a unique position: it allows researchers from various fields to extend their experiments and test their ideas on large-scale infrastructures. In this frame, the DC track is a great opportunity for scientists working on topics related to clouds, grids, P2P, the Internet of things, and parallel and distributed systems in general to present their work, discover new challenges, and find real use cases and applications for their research. With the global focus leaning towards (inter-)cloud and utility computing, researchers are starting to feel the benefit of these systems too. However, much remains to be done before existing applications from these various fields can be fully ported to these new platforms. SYNASC, through the DC track, is therefore a great opportunity for addressing the interdisciplinary applicability of distributed scalable systems. For this special issue, we have selected three top papers to be published as extended versions.
The papers are from distinct areas, with a focus on theoretical aspects related to semantics for a DNA-inspired language with applicability in parallel and distributed computing [2], queries on distributed databases [3], and model checking on MapReduce [1]. As such, the issue covers a broad spectrum ranging from concurrent languages and distributed databases to cloud applications. “Correct metric semantics for a language inspired by DNA computing” [2] depicts a novel way of writing concurrent programs using the concept of DNA computing. There are already successful experiments using DNA computing as a massively parallel computer, so formally defining a DNA computing algebra is the next logical step. If successful, this formalism could enable programmers to write inherently parallel algorithms. The topic is closely related to P systems and the Gamma formalism. “Incremental computations over strongly distributed databases” [3] deals with the systematic exploitation of logical reduction techniques for handling big distributed data. The particular applications are views and parallel updates over large-scale distributed databases as well as the handling of queries over different generations of databases. The solution makes it possible to reduce the overhead of running distributed queries by performing as much of each query as possible locally. Finally, “Distributed CTL model checking using MapReduce: theory and practice” [1] takes advantage of the cloud platforms’ ability to execute massively parallel jobs in order to run formal verification tools. These tools must undergo a deep technological transformation to take advantage of the cloud architecture. The authors address this issue by introducing a distributed approach to the verification of computation tree logic formulas on very large state spaces using a MapReduce approach.
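The MapReduce pattern underlying the third paper can be sketched in a few lines: a mapper emits key-value pairs, a shuffle groups values by key, and a reducer aggregates each group. The in-process skeleton and the toy transition-relation example below are illustrative assumptions, not the paper's actual algorithm:

```python
from collections import defaultdict

def map_reduce(records, mapper, reducer):
    """Minimal in-process MapReduce skeleton: map, shuffle by key, reduce."""
    groups = defaultdict(list)
    for record in records:
        for key, value in mapper(record):
            groups[key].append(value)
    return {key: reducer(key, values) for key, values in groups.items()}

# Toy use: count successors per state in a transition relation, the kind
# of per-state aggregation a distributed model checker must perform.
transitions = [("s0", "s1"), ("s0", "s2"), ("s1", "s2")]
fanout = map_reduce(
    transitions,
    mapper=lambda t: [(t[0], t[1])],
    reducer=lambda state, succs: len(succs),
)
print(fanout)  # → {'s0': 2, 's1': 1}
```

In a real deployment, the map and reduce phases run as independent jobs over partitions of the state space, which is what lets the verification scale across a cluster.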
parallel computing | 2012
Volkmar Wieser; Clemens Grelck; Holger Schöner; Peter Haslinger; Karoly Bosa; Bernhard Moser
This paper addresses the gap between envisioned hardware-virtualized techniques for GPU programming and a conventional approach from the point of view of an application engineer, taking into account software engineering aspects such as maintainability, understandability, and productivity, as well as the resulting gains in performance and scalability. This gap is discussed on the basis of use cases from the field of image processing, and illustrated by means of performance benchmarks as well as evaluations of software engineering productivity.
Scalable Computing: Practice and Experience | 2011
Klaus-Dieter Schewe; Karoly Bosa; Harald Lampesberger; Ji Ma; Mariam Rady; Boris Vleju
international symposium on parallel and distributed computing | 2007
Karoly Bosa; Wolfgang Schreiner; Michael Buchberger; Thomas Kaltofen