Joanna Kolodziej
University of Bielsko-Biała
Publications
Featured research published by Joanna Kolodziej.
Future Generation Computer Systems | 2013
Lizhe Wang; Samee Ullah Khan; Dan Chen; Joanna Kolodziej; Rajiv Ranjan; Cheng Zhong Xu; Albert Y. Zomaya
Reducing energy consumption in high-end computing brings various benefits such as lower operating costs, increased system reliability, and reduced environmental impact. This paper aims to develop scheduling heuristics and to present application experience for reducing the power consumption of parallel tasks in a cluster with the Dynamic Voltage Frequency Scaling (DVFS) technique. Formal models are presented for precedence-constrained parallel tasks, DVFS-enabled clusters, and energy consumption. The paper studies the slack time available to non-critical jobs, extending their execution time to reduce energy consumption without increasing the overall task execution time. Additionally, a Green Service Level Agreement is also considered. By increasing task execution time within an affordable limit, the paper develops scheduling heuristics to reduce the energy consumption of task execution and discusses the relationship between energy consumption and task execution time. The models and scheduling heuristics are examined in a simulation study, whose results justify the design and implementation of the proposed energy-aware scheduling heuristics.
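The core idea of reclaiming slack with DVFS can be illustrated with a small sketch. This is not the paper's heuristic: the function name, the linear V-with-f scaling, and the quadratic dynamic-energy model are assumptions used only to show why stretching a non-critical task into its slack saves energy.

```python
def dvfs_reclaim(exec_time, slack, f_max, cycles):
    """Stretch a non-critical task into its slack and estimate energy.

    Dynamic energy is modelled as E ~ cycles * f^2 (voltage scaled
    linearly with frequency f), a common simplification.
    """
    stretched = exec_time + slack
    f_new = f_max * exec_time / stretched     # lower frequency fills the slack
    energy_before = cycles * f_max ** 2
    energy_after = cycles * f_new ** 2
    return f_new, energy_before, energy_after

# A task with 10 s of work and 5 s of slack can run at 2/3 of f_max,
# cutting modelled dynamic energy to (10/15)^2 ~ 44% of the original.
f, e0, e1 = dvfs_reclaim(exec_time=10.0, slack=5.0, f_max=2.0, cycles=1e9)
```

Because the task finishes exactly at the end of its slack, the critical path (and hence the overall makespan) is unchanged.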
Future Generation Computer Systems | 2015
Mihaela-Andreea Vasile; Florin Pop; Radu-Ioan Tutueanu; Valentin Cristea; Joanna Kolodziej
Today, almost everyone is connected to the Internet and uses different Cloud solutions to store, deliver and process data. Cloud computing assembles large networks of virtualized services such as hardware and software resources. The new era in which ICT has penetrated almost all domains (healthcare, aged-care, social assistance, surveillance, education, etc.) creates the need for new multimedia content-driven applications. These applications generate huge amounts of data that must be gathered, processed and then aggregated in a fault-tolerant, reliable and secure heterogeneous distributed system created by a mixture of Cloud systems (public/private), mobile device networks, desktop-based clusters, etc. In this context, dynamic resource provisioning for Big Data application scheduling has become a challenge in modern systems. We propose a resource-aware hybrid scheduling algorithm for different types of applications: batch jobs and workflows. The proposed algorithm considers hierarchical clustering of the available resources into groups in the allocation phase. Task execution is performed in two phases: in the first, tasks are assigned to groups of resources, and in the second, a classical scheduling algorithm is used within each group. The proposed algorithm is suitable for Heterogeneous Distributed Computing, especially for modern High-Performance Computing (HPC) systems in which applications are modeled with various requirements (both I/O- and computation-intensive), with an emphasis on data from multimedia applications. We evaluate its performance in a realistic setting of the CloudSim tool with respect to load balancing, cost savings, dependency assurance for workflows and computational efficiency, and investigate how these performance metrics are computed at runtime.
Highlights:
- We propose a hybrid approach for task scheduling in Heterogeneous Distributed Computing.
- The proposed algorithm considers hierarchical clustering of the available resources into groups.
- We consider different scheduling strategies for independent tasks and for DAG scheduling.
- We analyze the performance of our proposed algorithm through simulation by using and extending CloudSim.
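The two-phase structure described above (assign a task to a group of resources, then schedule within the group) can be sketched minimally. The grouping key, data shapes, and the earliest-finish-time rule inside a group are assumptions for illustration, not the paper's algorithm.

```python
from collections import defaultdict

def hybrid_schedule(tasks, resources, group_of):
    """Phase 1: pick the least-loaded resource group for each task.
    Phase 2: earliest-finish-time assignment inside the chosen group."""
    groups = defaultdict(list)
    for r in resources:
        groups[group_of(r)].append(r)
    finish = {r["name"]: 0.0 for r in resources}   # accumulated load per resource
    plan = {}
    for t in tasks:
        # phase 1: group with the smallest total accumulated load
        g = min(groups, key=lambda g: sum(finish[r["name"]] for r in groups[g]))
        # phase 2: resource in that group finishing this task earliest
        r = min(groups[g], key=lambda r: finish[r["name"]] + t["len"] / r["speed"])
        finish[r["name"]] += t["len"] / r["speed"]
        plan[t["id"]] = r["name"]
    return plan, finish
```

A usage example: with two single-resource groups, the first task lands in the first group and the second task is steered to the now-less-loaded second group.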
Computing | 2016
Abdul Hameed; Alireza Khoshkbarforoushha; Rajiv Ranjan; Prem Prakash Jayaraman; Joanna Kolodziej; Pavan Balaji; Sherali Zeadally; Qutaibah M. Malluhi; Nikos Tziritas; Abhinav Vishnu; Samee Ullah Khan; Albert Y. Zomaya
In a cloud computing paradigm, energy efficient allocation of different virtualized ICT resources (servers, storage disks, networks, and the like) is a complex problem due to the presence of heterogeneous application workloads (e.g., content delivery networks, MapReduce, web applications, and the like) having competing allocation requirements in terms of ICT resource capacities (e.g., network bandwidth, processing speed, response time, etc.). Several recent papers have tried to address the issue of improving energy efficiency in allocating cloud resources to applications, with varying degrees of success. However, to the best of our knowledge there is no published literature on this subject that clearly articulates the research problem and provides a research taxonomy for succinct classification of existing techniques. Hence, the main aim of this paper is to identify open challenges associated with energy efficient resource allocation. In this regard, the study first outlines the problem and the existing hardware- and software-based techniques available for this purpose. Furthermore, techniques already presented in the literature are summarized based on the energy-efficient research dimension taxonomy. The advantages and disadvantages of the existing techniques are comprehensively analyzed against the proposed research dimension taxonomy, namely: resource adaption policy, objective function, allocation method, allocation operation, and interoperability.
Journal of Computer and System Sciences | 2014
Jiaqi Zhao; Lizhe Wang; Jie Tao; Jinjun Chen; Weiye Sun; Rajiv Ranjan; Joanna Kolodziej; Achim Streit; Dimitrios Georgakopoulos
MapReduce is regarded as an adequate programming model for large-scale data-intensive applications. The Hadoop framework is a well-known MapReduce implementation that runs MapReduce tasks on a cluster system. G-Hadoop is an extension of the Hadoop MapReduce framework with the functionality of allowing MapReduce tasks to run on multiple clusters. However, G-Hadoop simply reuses the user authentication and job submission mechanism of Hadoop, which is designed for a single cluster. This work proposes a new security model for G-Hadoop. The security model is based on several security solutions, such as public key cryptography and the SSL protocol, and is designed specifically for distributed environments. This security framework simplifies the user authentication and job submission process of the current G-Hadoop implementation with a single-sign-on approach. In addition, the designed security framework provides a number of different security mechanisms to protect the G-Hadoop system from traditional attacks.
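The sign-then-verify pattern underlying authenticated job submission can be shown in a toy form. The actual G-Hadoop design uses public-key cryptography and SSL; the stdlib HMAC below is only a stand-in for a signature scheme, and all names and fields are assumptions.

```python
import hashlib
import hmac
import json

def sign_job(job, key):
    """Serialize a job description deterministically and attach an HMAC tag."""
    payload = json.dumps(job, sort_keys=True).encode()
    tag = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return payload, tag

def verify_job(payload, tag, key):
    """Recompute the tag; compare in constant time to resist timing attacks."""
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

key = b"shared-secret"   # a signature key pair would replace this in practice
payload, tag = sign_job({"user": "alice", "jar": "wordcount.jar"}, key)
assert verify_job(payload, tag, key)            # untampered submission accepted
assert not verify_job(payload + b"x", tag, key) # any modification is rejected
```

With public-key signatures, each cluster would verify against the user's certificate instead of a shared key, which is what makes the single-sign-on flow possible across clusters.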
Parallel Computing | 2013
Hameed Hussain; Saif Ur Rehman Malik; Abdul Hameed; Samee Ullah Khan; Gage Bickler; Nasro Min-Allah; Muhammad Bilal Qureshi; Limin Zhang; Wang Yong-Ji; Nasir Ghani; Joanna Kolodziej; Albert Y. Zomaya; Cheng Zhong Xu; Pavan Balaji; Abhinav Vishnu; Fredric Pinel; Johnatan E. Pecero; Dzmitry Kliazovich; Pascal Bouvry; Hongxiang Li; Lizhe Wang; Dan Chen; Ammar Rayes
Highlights:
- A classification of high performance computing (HPC) systems is provided.
- Current HPC paradigms and industrial application suites are discussed.
- The state of the art in HPC resource allocation is reported.
- Hardware and software solutions are discussed for optimized HPC systems.

Efficient resource allocation is a fundamental requirement in high performance computing (HPC) systems. Many projects dedicated to large-scale distributed computing systems have designed and developed resource allocation mechanisms with a variety of architectures and services. In this study, a comprehensive survey describing resource allocation in various HPC systems is reported. The aim of the work is to aggregate the existing solutions for HPC under a joint framework and to provide a thorough analysis and characterization of resource management and allocation strategies. Resource allocation mechanisms and strategies play a vital role in the performance improvement of all HPC classes. Therefore, a comprehensive discussion of widely used resource allocation strategies deployed in HPC environments is required, which is one of the motivations of this survey. Moreover, we classify HPC systems into three broad categories, namely: (a) cluster, (b) grid, and (c) cloud systems, and define the characteristics of each class by extracting sets of common attributes. All of the aforementioned systems are cataloged into pure software and hybrid/hardware solutions. The system classification is used to identify the approaches followed by the implementation of existing resource allocation strategies widely presented in the literature.
Security and Communication Networks | 2013
Osman Khalid; Samee Ullah Khan; Sajjad Ahmad Madani; Khizar Hayat; Majid Iqbal Khan; Nasro Min-Allah; Joanna Kolodziej; Lizhe Wang; Sherali Zeadally; Dan Chen
Wireless sensor networks (WSNs) are emerging as a useful technology for information extraction from the surrounding environment by using numerous small-sized sensor nodes that are mostly deployed in sensitive, unattended, and (sometimes) hostile territories. Traditional cryptographic approaches are widely used to provide security in WSNs. However, because of unattended and insecure deployment, a sensor node may be physically captured by an adversary who may acquire the underlying secret keys, or a subset thereof, to access the critical data and/or other nodes present in the network. Moreover, a node may not operate properly because of insufficient resources or problems in the network link. In recent years, the basic ideas of trust and reputation have been applied to WSNs to monitor the changing behaviors of nodes in a network. Several trust and reputation monitoring (TRM) systems have been proposed to integrate the concepts of trust in networks as an additional security measure, and various surveys have been conducted on such systems. However, the existing surveys lack a comprehensive discussion of trust applications specific to WSNs. This survey attempts to provide a thorough understanding of trust and reputation as well as their applications in the context of WSNs. The survey discusses the components required to build a TRM and the trust computation phases, explained with a study of various security attacks. The study investigates the recent advances in TRMs and includes a concise comparison of various TRMs. Finally, a discussion on open issues and challenges in the implementation of trust-based systems is also presented.
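A common trust-computation scheme covered by such surveys is beta reputation: each node keeps counts of good and bad interactions with a neighbor and derives a trust score as the mean of the resulting beta distribution. The sketch below illustrates that general idea only; the function names and the uninformative (1, 1) prior are assumptions.

```python
def update(alpha, beta, cooperated):
    """Record one interaction: alpha counts good outcomes, beta bad ones."""
    if cooperated:
        alpha += 1
    else:
        beta += 1
    return alpha, beta

def trust(alpha, beta):
    """Mean of Beta(alpha, beta): expected probability of good behaviour."""
    return alpha / (alpha + beta)

a, b = 1, 1                              # uninformative prior: trust = 0.5
for outcome in [True, True, False, True]:
    a, b = update(a, b, outcome)
print(trust(a, b))                       # 4/6 ~ 0.667 after 3 good, 1 bad
```

A node whose trust score falls below a threshold can then be excluded from routing, which is how such schemes complement the cryptographic measures mentioned above.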
Knowledge Based Systems | 2015
Lizhe Wang; Hao Geng; Peng Liu; Ke Lu; Joanna Kolodziej; Rajiv Ranjan; Albert Y. Zomaya
Dictionary learning, which is based on sparse coding, has been frequently applied to many tasks related to remote sensing. Recently, many new non-analytic dictionary-learning algorithms have been proposed, some based on online learning. In online learning, data can be incorporated sequentially into the computation process, so these algorithms can train dictionaries on large-scale remote sensing images. However, their accuracy is decreased for two reasons: on one hand, all atoms are updated at once; on the other, the direction of optimization, such as the gradient, is not well estimated because of the complexity of the data and the model. In this paper, we propose a method of improved online dictionary learning based on Particle Swarm Optimization (PSO). In our iterations, we select particular atoms within the dictionary and then introduce PSO into the atom-updating stage of the dictionary-learning model. Furthermore, to guide the direction of the optimization, prior reference data are introduced into the PSO model. As a result, the movement dimension of the particles is reasonably limited, and the accuracy and effectiveness of the dictionary are improved without a heavy computational burden. Experiments confirm that our proposed algorithm improves performance on large-scale remote sensing images, and our method also has a better effect on noise suppression.
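To make the atom-updating idea concrete, here is a generic PSO loop applied to a single atom. The fitness function, swarm constants, and search range are assumptions for illustration; the paper's formulation (including the prior reference data guiding the search) is not reproduced.

```python
import random

def pso_update_atom(fitness, dim, n_particles=10, iters=100,
                    w=0.7, c1=1.4, c2=1.4):
    """Minimize `fitness` over one dictionary atom with standard PSO."""
    parts = [[random.uniform(-1, 1) for _ in range(dim)]
             for _ in range(n_particles)]
    vels = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in parts]                 # per-particle best positions
    gbest = min(pbest, key=fitness)[:]            # swarm-wide best position
    for _ in range(iters):
        for i, p in enumerate(parts):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vels[i][d] = (w * vels[i][d]
                              + c1 * r1 * (pbest[i][d] - p[d])
                              + c2 * r2 * (gbest[d] - p[d]))
                p[d] += vels[i][d]
            if fitness(p) < fitness(pbest[i]):
                pbest[i] = p[:]
                if fitness(p) < fitness(gbest):
                    gbest = p[:]
    return gbest

# Toy fitness: squared distance of the atom to a known target direction
target = [0.6, 0.8]
atom = pso_update_atom(lambda a: sum((x - t) ** 2
                                     for x, t in zip(a, target)), dim=2)
```

In the paper's setting, the fitness would instead measure the reconstruction error of the residual explained by the selected atom, with the prior reference data constraining the particles' movement.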
Future Generation Computer Systems | 2011
Joanna Kolodziej; Fatos Xhafa
Independent Job Scheduling is one of the most useful versions of scheduling in grid systems. It aims at computing an efficient and optimal mapping of jobs and/or applications submitted by independent users to the grid resources. Besides traditional restrictions, the mapping of jobs to resources should be computed under a high degree of resource heterogeneity, the large scale and the dynamics of the system. Because of the complexity of the problem, heuristic and meta-heuristic approaches are the most feasible scheduling methods in grids due to their ability to deliver high-quality solutions in reasonable computing time. One class of such meta-heuristics is the Hierarchic Genetic Strategy (HGS). It is defined as a variant of Genetic Algorithms (GAs) which differs from other genetic methods by its capability of concurrent search of the solution space. In this work, we present an implementation of HGS for Independent Job Scheduling in dynamic grid environments. We consider the bi-objective version of the problem in which makespan and flowtime are simultaneously optimized. Based on our previous work, we improve the HGS scheduling strategy by enhancing its main branching operations. The resulting HGS-based scheduler is evaluated under heterogeneity, large-scale and dynamic conditions using a grid simulator. The experimental study showed that the HGS implementation outperforms existing GA-based schedulers proposed in the literature.
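The two objectives optimized above are cheap to evaluate for any candidate schedule, which is what makes genetic search feasible here. The sketch below computes both; the data shapes (a job-to-machine map and an expected-time-to-compute table) are assumptions, and jobs on a machine are assumed to run in the listed order.

```python
def evaluate(schedule, etc):
    """schedule: job -> machine; etc[job][machine]: expected time to compute.

    Returns (makespan, flowtime): the latest machine finish time, and the
    sum of all jobs' individual finish times.
    """
    ready, flowtime = {}, 0.0
    for job, m in schedule.items():
        ready[m] = ready.get(m, 0.0) + etc[job][m]  # this job's finish time
        flowtime += ready[m]
    return max(ready.values()), flowtime

# Two jobs queued on m1 (2 s then 3 s) and one on m2 (4 s):
# finish times are 2, 5, 4 -> makespan 5.0, flowtime 11.0.
mk, fl = evaluate({"j1": "m1", "j2": "m1", "j3": "m2"},
                  {"j1": {"m1": 2.0}, "j2": {"m1": 3.0}, "j3": {"m2": 4.0}})
```

A GA-based scheduler would call such an evaluation once per chromosome per generation, so keeping it to a single pass over the schedule matters.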
2011 International Conference on P2P, Parallel, Grid, Cloud and Internet Computing | 2011
Joanna Kolodziej; Samee Ullah Khan; Fatos Xhafa
Because of their sheer size, Computational Grids (CGs) require advanced methodologies and strategies to efficiently schedule users' tasks and applications to resources. Scheduling becomes even more challenging when energy efficiency, the classical makespan criterion and user-perceived Quality of Service (QoS) are treated as first-class objectives in CG resource allocation methodologies. In this paper we approach independent batch scheduling in CGs as a bi-objective minimization problem with makespan and energy consumption as the scheduling criteria. We use the Dynamic Voltage Scaling (DVS) methodology to reduce the cumulative energy utilized by the system resources. We develop two Genetic Algorithms (GAs) with elitist and struggle replacement mechanisms as energy-aware schedulers. The proposed algorithms were experimentally evaluated for four CG size scenarios in static and dynamic modes. The simulation results showed that our proposed GA-based schedulers fairly reduce the energy usage to a level that is sufficient to maintain the desired quality level(s).
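The time/energy trade-off that DVS exposes to the scheduler can be sketched with a small level table. The specific (voltage, relative-speed) pairs and the dynamic-power model P ~ V²f are assumptions for illustration, not the paper's machine model.

```python
# Hypothetical DVS levels: (supply voltage V, relative speed s), s ~ f/f_max
LEVELS = [(1.5, 1.0), (1.2, 0.8), (1.0, 0.6)]

def cost(workload, level):
    """Time and dynamic energy to run `workload` cycles-worth of work
    at a given DVS level, with power modelled as P ~ V^2 * f."""
    v, s = LEVELS[level]
    time = workload / s            # a slower level stretches execution
    energy = v ** 2 * s * time     # note: this reduces to V^2 * workload
    return time, energy

# 60 units of work: full speed takes 60 s at energy 135; the lowest
# level takes 100 s at energy 60 - less energy, longer makespan.
t0, e0 = cost(60.0, 0)
t2, e2 = cost(60.0, 2)
```

Because energy per unit of work scales with V², lowering the voltage always saves energy but always lengthens execution, which is exactly why makespan and energy form a genuine bi-objective problem for the GA schedulers above.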
Information Sciences | 2012
Joanna Kolodziej; Samee Ullah Khan
Task scheduling and resource allocation are the key rationale behind the computational grid. Distributed resource clusters usually work in different autonomous domains with their own access and security policies, which have a great impact on successful task execution across domain boundaries. Heuristics and metaheuristics are effective techniques for scheduling in grids due to their ability to deliver high-quality solutions in reasonable time. In this paper, we develop a Hierarchic Genetic Scheduler (HGS-Sched) to improve the effectiveness of single-population genetic-based schedulers in the dynamic grid environment. HGS-Sched enables a concurrent exploration of the solution space by many small dependent populations. We consider a bi-objective independent batch job scheduling problem with makespan and flowtime minimized in hierarchical mode (makespan is the dominant criterion). The empirical results show the high effectiveness of the proposed method in comparison with mono-population and hybrid genetic-based schedulers.
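The hierarchical (lexicographic) treatment of the two criteria reduces to a simple comparison rule: makespan dominates, and flowtime only breaks makespan ties. A minimal sketch, where the tie tolerance `eps` is an assumption:

```python
def better(a, b, eps=1e-9):
    """a, b: (makespan, flowtime) pairs. True if schedule a is preferred
    under hierarchical mode with makespan as the dominant criterion."""
    ma, fa = a
    mb, fb = b
    if abs(ma - mb) > eps:
        return ma < mb     # dominant criterion decides first
    return fa < fb         # flowtime only breaks makespan ties

assert better((10.0, 50.0), (12.0, 30.0))  # smaller makespan wins outright
assert better((10.0, 40.0), (10.0, 50.0))  # makespan tie -> flowtime decides
```

Such a comparator slots directly into a genetic scheduler's selection and replacement steps, which is where the hierarchical mode takes effect.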