
Publication


Featured research published by Abdulrahman Azab.


International Conference on Computer Engineering and Systems | 2008

An adaptive decentralized scheduling mechanism for peer-to-peer Desktop Grids

Abdulrahman Azab; Hisham A. Kholidy

P2P desktop grids have recently become an attractive computing paradigm for high-throughput applications. Desktop grid computing is complicated by heterogeneous capabilities, failures, volatility, and lack of trust, because it is based on desktop computers. One of the important challenges of P2P desktop grid computing is the development of scheduling mechanisms that adapt to such a dynamic computing environment. This paper proposes an adaptive decentralized scheduling mechanism in which matchmaking is performed between the resource requirements of outstanding tasks and the resource capabilities of available workers. The matchmaking approach is based on fuzzy logic. Experimental results show that implementing the proposed fuzzy-matchmaking-based scheduling mechanism maximized the resource utilization of executing workers without exceeding the maximum execution time of the task.
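The core idea of fuzzy matchmaking can be illustrated with a small sketch. This is not the paper's implementation: the membership function shape, resource names, and thresholds below are illustrative assumptions. Each worker's free resources are mapped to membership degrees against the task's requirements, degrees are combined with a fuzzy AND (minimum), and the task goes to the worker with the highest suitability.

```python
def membership(value, low, high):
    """Degree (0..1) to which `value` satisfies a requirement ramping from low to high."""
    if value <= low:
        return 0.0
    if value >= high:
        return 1.0
    return (value - low) / (high - low)

def suitability(worker, task):
    """Combine per-resource membership degrees with min (fuzzy AND)."""
    cpu = membership(worker["free_cpu"], task["min_cpu"], task["pref_cpu"])
    mem = membership(worker["free_mem"], task["min_mem"], task["pref_mem"])
    return min(cpu, mem)

def matchmake(workers, task):
    """Return the most suitable worker, or None if no worker qualifies at all."""
    best = max(workers, key=lambda w: suitability(w, task))
    return best if suitability(best, task) > 0 else None

workers = [
    {"name": "w1", "free_cpu": 2.0, "free_mem": 4.0},
    {"name": "w2", "free_cpu": 6.0, "free_mem": 16.0},
]
task = {"min_cpu": 1.0, "pref_cpu": 4.0, "min_mem": 2.0, "pref_mem": 8.0}
print(matchmake(workers, task)["name"])  # w2 -- it dominates on both resources
```

The min-combination makes the match only as good as the scarcest resource, which is what prevents a task from being placed on a worker that satisfies one requirement but barely meets another.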


IEEE International Conference on Cloud Engineering | 2017

Enabling Docker Containers for High-Performance and Many-Task Computing

Abdulrahman Azab

Docker is the most popular and user-friendly platform for running and managing Linux containers. This is proven by the fact that the vast majority of containerized tools are packaged as Docker images. A demanded functionality is to enable running Docker containers inside HPC job scripts, so that researchers can make use of the flexibility offered by containers in their real-life computational and data-intensive jobs. The two main questions before implementing such functionality are: how to securely run Docker containers within cluster jobs, and how to limit the resource usage of a Docker job to the borders defined by the HPC queuing system? This paper presents Socker, a secure wrapper for running Docker containers on Slurm and similar queuing systems. Socker enforces the execution of containers within Slurm jobs as the submitting user instead of root, as well as enforcing the inclusion of containers in the cgroups assigned by the queuing system to the parent jobs. Unlike other Docker-supported containers-for-HPC platforms, Socker uses the underlying Docker engine instead of replacing it. To evaluate Socker, it has been tested for running MPI Docker jobs on Slurm, and for many-task computing (MTC) on interconnected clusters. Socker has proven to be secure, while introducing no additional overhead beyond that already introduced by the Docker engine.
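The two enforcement points the abstract describes can be sketched as follows. This is not Socker's actual code: it only shows the shape of the `docker run` invocation such a wrapper might build, using Docker's real `--user` and `--cgroup-parent` flags; the per-job cgroup path layout shown follows Slurm's cgroup-v1 convention and is an assumption here.

```python
def docker_cmd_for_job(image, command, uid, gid, job_id):
    """Build a docker run command that keeps a container inside a Slurm job's limits."""
    # Slurm's cgroup plugin places each job under a per-uid, per-job hierarchy.
    cgroup_parent = f"/slurm/uid_{uid}/job_{job_id}"
    return [
        "docker", "run", "--rm",
        "--user", f"{uid}:{gid}",            # run as the submitting user, not root
        "--cgroup-parent", cgroup_parent,    # inherit the job's resource limits
        image,
    ] + command

cmd = docker_cmd_for_job("ubuntu:20.04", ["hostname"], uid=1000, gid=1000, job_id=42)
print(" ".join(cmd))
```

Keeping the container under the job's cgroup is what lets the queuing system's memory and CPU limits apply transparently, instead of the Docker daemon's own defaults.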


GigaScience | 2017

GSuite HyperBrowser: Integrative analysis of dataset collections across the genome and epigenome

Boris Simovski; Daniel Vodák; Sveinung Gundersen; Diana Domanska; Abdulrahman Azab; Lars Holden; Marit Holden; Ivar Grytten; Knut Dagestad Rand; Finn Drabløs; Morten Johansen; Antonio Mora; Christin Lund-Andersen; Bastian Fromm; Ragnhild Eskeland; Odd S. Gabrielsen; Egil Ferkingstad; Sigve Nakken; Mads Bengtsen; Hildur Sif Thorarensen; Johannes Andreas Akse; Ingrid K. Glad; Eivind Hovig; Geir Kjetil Sandve

Abstract Background: Recent large-scale undertakings such as ENCODE and Roadmap Epigenomics have generated experimental data mapped to the human reference genome (as genomic tracks) representing a variety of functional elements across a large number of cell types. Despite the high potential value of these publicly available data for a broad variety of investigations, little attention has been given to the analytical methodology necessary for their widespread utilisation. Findings: We here present a first principled treatment of the analysis of collections of genomic tracks. We have developed novel computational and statistical methodology to permit comparative and confirmatory analyses across multiple and disparate data sources. We delineate a set of generic questions that are useful across a broad range of investigations and discuss the implications of choosing different statistical measures and null models. Examples include contrasting analyses across different tissues or diseases. The methodology has been implemented in a comprehensive open-source software system, the GSuite HyperBrowser. To make the functionality accessible to biologists, and to facilitate reproducible analysis, we have also developed a web-based interface providing an expertly guided and customizable way of utilizing the methodology. With this system, many novel biological questions can flexibly be posed and rapidly answered. Conclusions: Through a combination of streamlined data acquisition, interoperable representation of dataset collections, and customizable statistical analysis with guided setup and interpretation, the GSuite HyperBrowser represents a first comprehensive solution for integrative analysis of track collections across the genome and epigenome. The software is available at: https://hyperbrowser.uio.no.


IEEE International Conference on Dependable, Autonomic and Secure Computing | 2014

A Finite State Hidden Markov Model for Predicting Multistage Attacks in Cloud Systems

Hisham A. Kholidy; Abdelkarim Erradi; Sherif Abdelwahed; Abdulrahman Azab

Cloud computing has significantly increased security threats, because intruders can exploit the large amount of cloud resources for their attacks. However, most current security technologies do not provide early warnings about such attacks. This paper presents a finite-state Hidden Markov prediction model that uses an adaptive risk approach to predict multi-staged cloud attacks. The risk model measures the potential impact of a threat on assets given its occurrence probability. The attack prediction model was integrated with our autonomous cloud intrusion detection framework (ACIDF) to raise early warnings about attacks to the controller, so it can take proactive corrective actions before the attacks pose a serious security risk to the system. According to our experiments on the DARPA 2000 dataset, the proposed prediction model successfully fired early warning alerts 39.6 minutes before the launch of the LLDDoS1.0 attack. This gives the auto-response controller ample time to take preventive measures.
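The mechanics of stage prediction with a finite-state HMM can be shown in miniature. This is not the paper's model: the states, alert symbols, and probabilities below are invented for illustration. The forward algorithm accumulates the likelihood of being in each attack stage given the alerts observed so far, so a rising likelihood for a late stage can trigger an early warning.

```python
states = ["probe", "exploit", "ddos"]
obs_symbols = ["scan_alert", "login_alert", "flood_alert"]

start = [0.8, 0.15, 0.05]                      # initial state distribution
trans = [[0.6, 0.3, 0.1],                      # probe  -> probe/exploit/ddos
         [0.0, 0.5, 0.5],                      # exploit can only hold or escalate
         [0.0, 0.0, 1.0]]                      # ddos is absorbing
emit = [[0.7, 0.2, 0.1],                       # P(alert | probe)
        [0.2, 0.6, 0.2],                       # P(alert | exploit)
        [0.1, 0.1, 0.8]]                       # P(alert | ddos)

def forward(observations):
    """Forward algorithm: unnormalised P(current state, alerts seen so far)."""
    o0 = obs_symbols.index(observations[0])
    alpha = [start[s] * emit[s][o0] for s in range(len(states))]
    for obs in observations[1:]:
        o = obs_symbols.index(obs)
        alpha = [sum(alpha[p] * trans[p][s] for p in range(len(states))) * emit[s][o]
                 for s in range(len(states))]
    return alpha

alerts = ["scan_alert", "login_alert", "flood_alert"]
alpha = forward(alerts)
likely = states[alpha.index(max(alpha))]
print(likely)  # "ddos" -- the alert sequence has most likely reached the final stage
```

A real system would normalise the forward probabilities and weight them by the risk model's asset-impact scores before deciding whether to alert the controller.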


International Conference on Cloud Computing | 2009

Decentralized Service Allocation in a Broker Overlay Based Grid

Abdulrahman Azab; Hein Meling

Grid computing is based on coordinated resource sharing in a dynamic environment of multi-institutional virtual organizations. Data exchange and service allocation are challenging problems in the field of Grid computing, due to the decentralization of Grid systems. Building decentralized Grid systems with efficient resource management and software component mechanisms is necessary for achieving the required efficiency and usability of Grid systems. In this work, a decentralized Grid system model is presented in which the system is divided into virtual organizations, each controlled by a broker. An overlay network of brokers is responsible for global resource management and for managing the allocation of services. Experimental results show that the system achieves dependable performance under various service loads and broker failures.


European Modelling Symposium | 2013

Slick: A Coordinated Job Allocation Technique for Inter-Grid Architectures

Abdulrahman Azab; Hein Meling

Large-scale Grid computing systems are often organized as an inter-Grid architecture, where multiple Grid domains are interconnected through their local brokers. In this context, the main challenge is to devise appropriate job scheduling policies that can satisfy goals such as global load balancing while maintaining the local policies of the different Grids. This paper presents SLICK, a scalable resource discovery and job scheduling technique for broker-based interconnected Grid domains. In this technique we leave local scheduling policies untouched, while inter-Grid scheduling decisions are handled by a separate scheduler installed on local brokers. To make suitable scheduling decisions, brokers must collect information about current resource usage at other domains. To this end, brokers periodically exchange their local domain's resource usage information with their neighbors. For large-scale systems, this periodic exchange naturally leads to a significant amount of traffic. To avoid overloading the broker overlay, we introduce an aggregation technique to reduce and combine worker resource usage information. We have compared SLICK with three other techniques through simulation of a 50,000-node Grid divided into 512 domains, using synthetic job sequences with a total load of 80,000 jobs. Our results show that SLICK is better at maintaining overall throughput and load balancing than previous techniques.
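The aggregation step can be sketched as follows. This is a toy illustration, not SLICK's actual scheme; the field names are assumptions. The point is that a broker condenses its domain's per-worker records into one constant-size summary before gossiping it, so overlay traffic stays independent of domain size.

```python
def aggregate(workers):
    """Reduce per-worker usage records to a single constant-size domain summary."""
    n = len(workers)
    return {
        "workers": n,
        "free_cpus": sum(w["free_cpus"] for w in workers),
        "avg_load": sum(w["load"] for w in workers) / n,
        "max_free_cpus": max(w["free_cpus"] for w in workers),
    }

domain = [
    {"load": 0.9, "free_cpus": 1},
    {"load": 0.2, "free_cpus": 7},
    {"load": 0.5, "free_cpus": 4},
]
summary = aggregate(domain)
print(summary["free_cpus"], summary["max_free_cpus"])  # 12 7
```

Exchanging such summaries trades precision for scalability: a neighbor broker can still tell which domain has capacity, but can no longer pick a specific remote worker.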


Distributed Applications and Interoperable Systems | 2014

A Fuzzy-Logic Based Coordinated Scheduling Technique for Inter-grid Architectures

Abdulrahman Azab; Hein Meling; Reggie Davidrajuh

Inter-grid is a composition of small interconnected grid domains, each with its own local broker. The main challenge is to devise appropriate job scheduling policies that can satisfy goals such as global load balancing while maintaining the local policies of the different domains. Existing inter-grid methodologies are based on either centralised meta-scheduling or decentralised scheduling which is carried out by local brokers, but without proper coordination. Both are suitable for interconnecting grid domains, but break down when the number of domains becomes large. Earlier we proposed Slick, a scalable resource discovery and job scheduling technique for broker-based interconnected grid domains, where inter-grid scheduling decisions are handled by gateway schedulers installed on the local brokers. This paper presents a decentralised scheduling technique for the Slick architecture, where cross-grid scheduling decisions are made using a fuzzy-logic-based algorithm. The proposed technique is tested by simulating its implementation on 512 interconnected Condor pools. Compared to existing techniques, our results show that the proposed technique is better at maintaining overall throughput and load balancing with an increasing number of interconnected grids.


Distributed Applications and Interoperable Systems | 2012

Stroll: a universal filesystem-based interface for seamless task deployment in grid computing

Abdulrahman Azab; Hein Meling

Developing applications for solving compute-intensive problems is not trivial. Despite the availability of a range of Grid computing platforms, domain specialists and scientists only rarely take advantage of these computing facilities. One reason for this is the complexity of Grid computing, and the need to learn a new programming environment to interact with the Grid. Typically, only a few programming languages are supported, and often scientists use special-purpose languages that are not supported by most Grid platforms. Moreover, users cannot easily deploy their compute tasks to multiple Grid platforms without rewriting their program to use different task submission interfaces. In this paper we present Stroll, a universal filesystem-based interface for seamless task submission to one or more Grid facilities. Users interact with the Grid through simple read and write filesystem commands. Stroll allows all categories of users to submit and manage compute tasks both manually and from within their programs, which may be written in any language. Stroll has been implemented on Windows and Linux, and we demonstrate that we can submit the same compute tasks to both Condor and Unicore clusters. Our evaluation shows the overhead of Stroll to be negligible. Comparing the code complexity of a Stroll compute task with command-line clients and Grid APIs shows that Stroll can eliminate up to 95% of the complexity.
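The filesystem-as-API idea can be shown in miniature. The directory layout here is hypothetical, not Stroll's actual one: a task is submitted by writing its description into a watched directory, and its status is obtained by reading a file back, so any language with plain file I/O can drive the Grid, with no client library.

```python
import os
import tempfile

grid_root = tempfile.mkdtemp()          # stands in for the Stroll mount point
os.makedirs(os.path.join(grid_root, "submit"))
os.makedirs(os.path.join(grid_root, "status"))

def submit(task_id, command):
    """Submit a task: just write a file -- no Grid API calls."""
    with open(os.path.join(grid_root, "submit", task_id), "w") as f:
        f.write(command)

def status(task_id):
    """Poll a task: just read a file; absent means not yet scheduled."""
    path = os.path.join(grid_root, "status", task_id)
    if not os.path.exists(path):
        return "pending"
    with open(path) as f:
        return f.read().strip()

submit("t1", "run_simulation --steps 1000")
print(status("t1"))  # "pending" until the backend writes a status file
```

In the real system a filesystem driver intercepts these reads and writes and translates them into submissions to Condor or Unicore, which is what makes the same task description portable across backends.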


IEEE International Conference on High Performance Computing, Data and Analytics | 2010

Peer-to-Peer Desktop Grids Based on an Adaptive Decentralized Scheduling Mechanism

H. Arafat Ali; Ahmed I. Saleh; Amany Sarhan; Abdulrahman Azab

This article proposes an adaptive fuzzy-logic-based decentralized scheduling mechanism suitable for dynamic computing environments, in which matchmaking is achieved between the resource requirements of outstanding tasks and the resource capabilities of available workers. The feasibility of the proposed method is verified on a real-time system. Experimental results show that implementing the proposed fuzzy-matchmaking-based scheduling mechanism maximized the resource utilization of executing workers without exceeding the maximum execution time of the task. It is concluded that the efficiency of FMA-based decentralized scheduling, in the case of parallel execution, is reduced by increasing the number of subtasks.


Cluster Computing and the Grid | 2016

Software Provisioning Inside a Secure Environment as Docker Containers Using Stroll File-System

Abdulrahman Azab; Diana Domanska

TSD (Tjenester for Sensitive Data) is an isolated infrastructure for storing and processing sensitive research data, e.g., human patient genomics data. Due to the isolation of the TSD, it is not possible to install software in the traditional fashion. Docker is a platform implementing lightweight virtualization technology that applies the build-once-run-anywhere approach to software packaging and sharing. This paper describes our experience at USIT (the University Centre of Information Technology) at the University of Oslo with Docker containers as a solution for installing and running software packages that require downloading of dependencies and binaries during installation, inside a secure isolated infrastructure. Using Docker containers made it possible to package software as Docker images and run it smoothly inside our secure system, TSD. The paper describes Docker as a technology, its benefits and weaknesses in terms of security, demonstrates our experience with a use case of installing and running the Galaxy bioinformatics portal as a Docker container inside the TSD, and investigates the use of the Stroll file-system as a proxy between the Galaxy portal and the HPC cluster.

Collaboration


Dive into Abdulrahman Azab's collaborations.

Top Co-Authors

Hein Meling
University of Stavanger

Eivind Hovig
Oslo University Hospital

Daniel Vodák
Oslo University Hospital

Finn Drabløs
Norwegian University of Science and Technology