Sándor Ács
Hungarian Academy of Sciences
Publications
Featured research published by Sándor Ács.
IEEE International Conference on Cloud Computing Technology and Science | 2014
Stephen Winter; Christopher J. Reynolds; Tamas Kiss; Gabor Terstyanszky; Pamela Greenwell; Sharron McEldowney; Sándor Ács; Péter Kacsuk
Cloud technology has the potential for widening access to high-performance computational resources for e-science research, but barriers to engagement with the technology remain high for many scientists. Workflows help overcome these barriers by hiding details of the underlying computational infrastructure and are portable between various platforms, including clouds; they are also increasingly accepted within e-science research communities. Issues arising from the range of workflow systems available and the complexity of workflow development have been addressed by focusing on workflow interoperability and providing customised support for different science communities. However, deploying such environments can be challenging, even where user requirements are comparatively modest. RESWO (Reconfigurable Environment Service for Workflow Orchestration) is a virtual platform-as-a-service cloud model that allows leaner customised environments to be assembled and deployed within a cloud. Suitable distributed computation resources are not always easily affordable and can present a further barrier to engagement by scientists. Desktop grids that use the spare CPU cycles available within an organisation are an attractively inexpensive type of infrastructure for many, and have been effectively virtualised as a cloud-based resource. However, hosts in this environment are volatile, leading to the tail problem, where some tasks become randomly delayed, affecting overall performance. To solve this problem, new algorithms have been developed to implement a cloudbursting scheduler in which durable cloud-based CPU resources may execute replicas of jobs that have become delayed. This paper describes experiences in the development of a RESWO instance in which a desktop grid is buttressed with CPU resources in the cloud to support the aspirations of bioscience researchers.
A core component of the architecture, the cloudbursting scheduler, implements an algorithm to perform late job detection, cloud resource management and job monitoring. The experimental results obtained demonstrate significant performance improvements and benefits illustrated by use cases in bioscience research.
IEEE International Conference on Cloud Computing Technology and Science | 2011
Christopher J. Reynolds; Stephen Winter; Gabor Terstyanszky; Tamas Kiss; Pamela Greenwell; Sándor Ács; Péter Kacsuk
Scientific workflows are common in biomedical research, particularly for molecular docking simulations such as those used in drug discovery. Such workflows typically involve data distribution between computationally demanding stages which are usually mapped onto large-scale compute resources. Volunteer or Desktop Grid (DG) computing can provide such infrastructure but has limitations resulting from the heterogeneous nature of the compute nodes. These constraints mean that reducing the makespan of a given workflow stage submitted to a DG becomes problematic. Late jobs can significantly affect the makespan, often completing long after the bulk of the computation has finished. In this paper we present a system capable of significantly reducing the makespan of a scientific workflow. Our system comprises a DG which is dynamically augmented with an infrastructure as a service (IaaS) Cloud. Using this solution, the Cloud resources are used to process replicated late jobs. Our system comprises a core component termed the scheduler, which implements an algorithm to perform late job detection, Cloud resource management (instantiation and reuse), and job monitoring. We offer a formal definition of this algorithm, and we also provide an evaluation of our prototype using a production scientific workflow.
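The late-job detection idea described in the abstract can be illustrated with a minimal sketch. This is not the authors' formal algorithm; the threshold rule (a job is "late" when its runtime exceeds a multiple of the mean runtime of completed jobs), the job-record fields, and all names are assumptions made for illustration.

```python
import time

def find_late_jobs(jobs, now=None, factor=2.0):
    """Flag still-running jobs whose runtime exceeds `factor` times the
    mean runtime of the jobs that have already completed.

    Each job is a dict with `started_at` and `finished_at` timestamps;
    `finished_at` is None while the job is still running."""
    now = time.time() if now is None else now
    done = [j for j in jobs if j["finished_at"] is not None]
    if not done:
        return []  # no baseline yet: cannot classify anything as late
    mean_runtime = sum(j["finished_at"] - j["started_at"] for j in done) / len(done)
    return [
        j for j in jobs
        if j["finished_at"] is None
        and now - j["started_at"] > factor * mean_runtime
    ]

# Jobs flagged here would be replicated onto IaaS Cloud instances;
# whichever copy (DG original or Cloud replica) finishes first wins.
```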
International Conference on Wireless Mobile Communication and Healthcare | 2011
Miklos Kozlovszky; János Sicz-Mesziár; János Ferenczi; Judit Márton; Gergely Windisch; Viktor Kozlovszky; Péter Kotcauer; Anikó Boruzs; Pál Bogdanov; Zsolt Meixner; Krisztián Karóczkai; Sándor Ács
We have developed a combined Android-based mobile data acquisition (DAQ) and emergency management solution, which can collect information remotely from the patient and forward it to the medical data and dispatcher centre for further processing. The mobile device is capable of collecting information from various sensors via Bluetooth and USB connections, and is furthermore able to capture and forward manually initiated alarm signals in an emergency situation. Beside the alarm signal, the system collects and sends information about the patient's location, and it also automatically enables two-way audio communication between the central dispatcher and the patient. The developed software solution is suitable for users with different skill levels: its user interface is highly configurable to support elderly persons (high contrast, large characters, simple UI, etc.), and it also provides an advanced mode for "power" users. The developed system has become part of our testing program, which is carried out in our Hungarian Living Lab infrastructure. The combination of a mobile DAQ device and a mobile emergency alarm device within a single software solution enables caregivers to provide better and more effective services in elderly patient monitoring.
Symposium on Applied Computational Intelligence and Informatics | 2014
Sándor Ács; Miklos Kozlovszky; Péter Kacsuk
Companies (even SMEs), motivated by the success and potential of public clouds, are building their own private cloud infrastructures. Thus, they open the door to an easier and more flexible way of outsourcing their IT services than before. However, the currently available software solutions still do not provide seamless extensibility by cloud bursting; therefore, IaaS users have to prepare their images for every infrastructure. This paper presents the criteria for ideal cloud bursting and introduces a method that overcomes the current cloud bursting issues (e.g. different administration domains and networking policies). The proposed technique uses nested virtualization, which reduces the complexity of the cloud bursting procedure. Furthermore, we have evaluated the applicability of our design by performance tests. The evaluation showed that seamless extensibility incurs a 5-10% overhead in deployment time.
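The trade-off described above can be sketched as a simple decision plus cost estimate. The function names, the backlog-based bursting rule, and the 7.5% midpoint overhead figure are illustrative assumptions; the paper only reports a 5-10% deployment-time overhead range.

```python
def should_burst(pending_jobs, free_local_slots, backlog_threshold=5):
    """Burst to an external IaaS cloud only when the job backlog
    exceeds what the local private cloud can absorb (assumed rule)."""
    return pending_jobs - free_local_slots > backlog_threshold

def expected_deploy_time(base_deploy_s, nested_overhead=0.075):
    """Estimated deployment time when the bursted guest runs inside a
    nested VM; 7.5% is taken as the midpoint of the reported 5-10%."""
    return base_deploy_s * (1.0 + nested_overhead)
```

Under these assumptions, bursting a 100-second deployment into a nested VM costs roughly 107.5 seconds, the price paid for not having to prepare images separately for every infrastructure.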
Archive | 2014
Sándor Ács; Miklos Kozlovszky; Péter Kotcauer
Large-scale high performance systems have a significant amount of processing power. One example of such a system is the HP-SEE HPC and supercomputing infrastructure, which is geographically distributed and provides 24/7, high performance/high throughput computing services primarily for high-end research communities. Due to their direct impact on research and indirect impact on the economy, such systems can be categorized as critical infrastructure. System features (such as non-stop availability, geographic distribution and community-based usage) make such infrastructures vulnerable and valuable targets of malicious attacks. In order to decrease the threat, we designed the Advanced Vulnerability Assessment Tool (AVAT), suitable for HPC/supercomputing systems. Our solution can submit vulnerability assessment jobs into the HP-SEE infrastructure and run vulnerability assessments on the infrastructure components. It collects assessment information via the decentralized Security Monitor, archives the results received from the components, and visualizes them via a web interface for the local/regional administrators. In this paper we present our Advanced Vulnerability Assessment Tool, describe its functionality, and provide monitoring test results captured on real systems.
Parallel, Distributed and Network-Based Processing | 2013
Sándor Ács; Mark Gergely; Péter Kacsuk; Miklos Kozlovszky
Cloud computing is the dominating paradigm in distributed computing. The most popular open source cloud solutions support different types of storage subsystems because of the different needs of the deployed services (in terms of performance, flexibility and cost-effectiveness). In this paper, we investigate the supported standard and open source storage types and create a classification. We point out that block-level storage based on the Internet Small Computer System Interface (iSCSI) is currently used for I/O-intensive services. However, the ATA-over-Ethernet (AoE) protocol uses fewer layers and operates at a lower level, which makes it more lightweight and faster than iSCSI. Therefore, we propose an architecture for AoE-based storage support in the OpenNebula cloud. The novel storage solution was implemented, and the performance evaluation shows that the I/O throughput of the AoE-based storage is 32.5-61.5% higher than that of the prior iSCSI-based storage, while the new solution needs 41.37% less CPU time to provide the same services.
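The throughput comparison above is a relative-improvement calculation over benchmark measurements. The sketch below shows the arithmetic; the concrete MB/s figures are hypothetical and only the 32.5-61.5% range comes from the abstract.

```python
def relative_improvement(new, old):
    """Percentage improvement of `new` over `old`
    (e.g. I/O throughput in MB/s)."""
    return (new - old) / old * 100.0

# Hypothetical benchmark figures for one workload; the paper reports
# AoE throughput 32.5-61.5% above iSCSI depending on the workload.
iscsi_mb_s = 80.0
aoe_mb_s = 120.0
gain_pct = relative_improvement(aoe_mb_s, iscsi_mb_s)
```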
European Conference on Parallel Processing | 2014
Sándor Ács; Zsolt Németh; Mark Gergely
Infrastructure-as-a-Service (IaaS) clouds are widely used today; however, there are no standardized or commonly used performance evaluation methods and metrics that can be used to compare the services of different providers. Performance evaluation tools and benchmarks are able to grasp some aspects or details of performance but, for various reasons, are not capable of characterizing cloud performance as a whole. Our aim is to collect these elementary or primitive facets of performance and derive a high-level, aggregated, qualitative performance characterization semantically far above the output of tools and benchmarks. We designed and implemented a framework that collects low-level raw performance data (in terms of CPU, disk, memory and network) of cloud providers based on standard benchmark tools; these data are aggregated and evaluated using a hierarchical fuzzy system. In this process, performance characteristics are associated with symbolic values, and fuzzy inference is applied to produce normalized, comparable and readable qualitative performance metrics. In this paper, we discuss the issues of cloud performance analysis, present the concept and implementation of our method, and illustrate the proposed solution by comparing, in terms of performance, the general purpose medium instance type of the Amazon EC2 cloud (in Ireland) and the standard instance type of the OpenNebula installation at MTA SZTAKI.
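The mapping from raw benchmark scores to symbolic values can be sketched with triangular membership functions and a one-level aggregation step. This is a toy illustration, not the authors' hierarchical fuzzy system: the membership shapes, the "low/medium/high" labels, and the plain averaging across resources are all assumptions.

```python
def tri(x, a, b, c):
    """Triangular membership function that peaks at b and is zero
    outside the interval (a, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def qualify(score):
    """Map a normalized [0, 1] benchmark score to degrees of
    membership in three assumed symbolic labels."""
    return {
        "low":    tri(score, -0.5, 0.0, 0.5),
        "medium": tri(score,  0.0, 0.5, 1.0),
        "high":   tri(score,  0.5, 1.0, 1.5),
    }

def aggregate(scores):
    """One level of the hierarchy: combine per-resource scores
    (CPU, disk, memory, network), then qualify the result."""
    overall = sum(scores.values()) / len(scores)
    return overall, qualify(overall)

overall, labels = aggregate(
    {"cpu": 0.8, "disk": 0.6, "memory": 0.7, "network": 0.9}
)
```

A real hierarchical system would apply fuzzy inference rules at each level rather than a plain average, but the flow (raw scores in, symbolic qualitative labels out) is the same.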
Archive | 2011
Péter Kacsuk; Attila Csaba Marosi; Miklos Kozlovszky; Sándor Ács; Zoltan Farkas
This chapter introduces the existing connectivity and interoperability issues of Clouds, Grids, and Clusters and provides solutions to overcome these issues. It explains the principles of parameter sweep job execution by the P-GRADE portal and gives some details on the concept of parameter sweep job submission to various Grids by the 3G Bridge. It then proposes several solution variants for extending the parameter sweep job submission mechanism of P-GRADE and the 3G Bridge toward Cloud systems. Finally, it shows the results of performance measurements obtained for the proposed solution variants.
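The core of parameter sweep execution is generating one job per point in the Cartesian product of the parameter ranges. The sketch below illustrates just that generation step; the function name, parameters, and job format are illustrative assumptions, not the P-GRADE or 3G Bridge API.

```python
import itertools

def parameter_sweep(**param_ranges):
    """Yield one job description per point in the Cartesian product
    of the given parameter ranges."""
    names = list(param_ranges)
    for values in itertools.product(*param_ranges.values()):
        yield dict(zip(names, values))

# Two temperatures x three pressures => six independent jobs, each of
# which could be dispatched to a Grid or Cloud resource by a bridge.
jobs = list(parameter_sweep(temperature=[300, 310], pressure=[1.0, 2.0, 3.0]))
```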
International Conference on Cloud Computing | 2012
Sándor Ács; Péter Kacsuk; Miklos Kozlovszky
Archive | 2009
Sándor Ács; Miklos Kozlovszky; Zoltán Balaton