Publication


Featured research published by Nasro Min-Allah.


Parallel Computing | 2013

A survey on resource allocation in high performance distributed computing systems

Hameed Hussain; Saif Ur Rehman Malik; Abdul Hameed; Samee Ullah Khan; Gage Bickler; Nasro Min-Allah; Muhammad Bilal Qureshi; Limin Zhang; Wang Yong-Ji; Nasir Ghani; Joanna Kolodziej; Albert Y. Zomaya; Cheng Zhong Xu; Pavan Balaji; Abhinav Vishnu; Fredric Pinel; Johnatan E. Pecero; Dzmitry Kliazovich; Pascal Bouvry; Hongxiang Li; Lizhe Wang; Dan Chen; Ammar Rayes

Highlights: a classification of high performance computing (HPC) systems is provided; current HPC paradigms and industrial application suites are discussed; the state of the art in HPC resource allocation is reported; and hardware and software solutions for optimized HPC systems are examined. Efficient resource allocation is a fundamental requirement in high performance computing (HPC) systems. Many projects dedicated to large-scale distributed computing systems have designed and developed resource allocation mechanisms with a variety of architectures and services. This study reports a comprehensive survey describing resource allocation in various HPC systems. The aim of the work is to aggregate the existing HPC solutions under a joint framework and to provide a thorough analysis of the characteristics of resource management and allocation strategies. Resource allocation mechanisms and strategies play a vital role in the performance improvement of all HPC classes; a comprehensive discussion of widely used resource allocation strategies deployed in HPC environments is therefore needed, which is one of the motivations of this survey. Moreover, we classify HPC systems into three broad categories, namely (a) cluster, (b) grid, and (c) cloud systems, and define the characteristics of each class by extracting sets of common attributes. All of the aforementioned systems are cataloged into pure software and hybrid/hardware solutions. The system classification is used to identify the approaches followed by the implementation of existing resource allocation strategies that are widely presented in the literature.


Security and Communication Networks | 2013

Comparative study of trust and reputation systems for wireless sensor networks

Osman Khalid; Samee Ullah Khan; Sajjad Ahmad Madani; Khizar Hayat; Majid Iqbal Khan; Nasro Min-Allah; Joanna Kolodziej; Lizhe Wang; Sherali Zeadally; Dan Chen

Wireless sensor networks (WSNs) are emerging as a useful technology for extracting information from the surrounding environment by using numerous small-sized sensor nodes that are mostly deployed in sensitive, unattended, and (sometimes) hostile territories. Traditional cryptographic approaches are widely used to provide security in WSNs. However, because of unattended and insecure deployment, a sensor node may be physically captured by an adversary, who may acquire the underlying secret keys, or a subset thereof, to access critical data and/or other nodes in the network. Moreover, a node may not operate properly because of insufficient resources or problems in the network link. In recent years, the basic ideas of trust and reputation have been applied to WSNs to monitor the changing behaviors of nodes in a network. Several trust and reputation monitoring (TRM) systems have been proposed to integrate the concept of trust into networks as an additional security measure, and various surveys of the aforementioned systems have been conducted. However, the existing surveys lack a comprehensive discussion of trust applications specific to WSNs. This survey attempts to provide a thorough understanding of trust and reputation and their applications in the context of WSNs. The survey discusses the components required to build a TRM and the trust computation phases, explained alongside a study of various security attacks. It investigates recent advances in TRMs and includes a concise comparison of various TRMs. Finally, a discussion of open issues and challenges in the implementation of trust-based systems is presented.
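A building block many TRM systems share is the beta reputation model, where a node's trust is the expected value of a Beta distribution over observed cooperative (r) and uncooperative (s) interactions. The sketch below illustrates only this generic computation with assumed counts, not the formulation of any specific system surveyed:

```python
# Beta reputation sketch: trust is E[Beta(r + 1, s + 1)] = (r + 1) / (r + s + 2)
# for r positive and s negative observations of a neighboring node.

def beta_trust(r: int, s: int) -> float:
    """Expected trust value given r cooperative and s uncooperative interactions."""
    return (r + 1) / (r + s + 2)

# A node with no history is neutral; accumulating evidence shifts the estimate.
print(beta_trust(0, 0))   # 0.5: no evidence, neutral trust
print(beta_trust(8, 2))   # 0.75: mostly cooperative
print(beta_trust(1, 9))   # about 0.17: mostly misbehaving
```

The (r + 1, s + 1) priors mean a single bad interaction cannot push a well-behaved node's trust to zero, which gives the scheme some robustness to transient link failures.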


Concurrency and Computation: Practice and Experience | 2013

Quantitative comparisons of the state-of-the-art data center architectures

Kashif Bilal; Samee Ullah Khan; Limin Zhang; Hongxiang Li; Khizar Hayat; Sajjad Ahmad Madani; Nasro Min-Allah; Lizhe Wang; Dan Chen; Majid I. Iqbal; Cheng Zhong Xu; Albert Y. Zomaya

Data centers are experiencing remarkable growth in the number of interconnected servers. As one of the foremost data center design concerns, the network infrastructure plays a pivotal role in the initial capital investment and in determining the performance parameters of the data center. Legacy data center network (DCN) infrastructure lacks the inherent capability to meet data centers' growth trends and aggregate bandwidth demands. Deployment of even the highest-end enterprise network equipment only delivers around 50% of the aggregate bandwidth at the edge of the network. The vital challenges faced by legacy DCN architectures trigger the need for new DCN architectures to accommodate the growing demands of the 'cloud computing' paradigm. In this paper, we implement and simulate the state-of-the-art DCN models, namely (a) legacy DCN architectures, (b) switch-based, and (c) hybrid models, and compare their effectiveness by monitoring network (a) throughput and (b) average packet delay. The presented analysis may be perceived as a background benchmarking study for further research on the simulation and implementation of DCN-customized topologies and customized addressing protocols in large-scale data centers. We performed extensive simulations under various network traffic patterns to ascertain the strengths and inadequacies of the different DCN architectures. Moreover, we provide a firm foundation for further research and enhancement of DCN architectures.


Cluster Computing | 2013

A survey on Green communications using Adaptive Link Rate

Kashif Bilal; Samee Ullah Khan; Sajjad Ahmad Madani; Khizar Hayat; Majid Iqbal Khan; Nasro Min-Allah; Joanna Kolodziej; Lizhe Wang; Sherali Zeadally; Dan Chen

The Information and Communication Technology sector is considered a major consumer of energy and has become an active contributor to greenhouse gas emissions. Considerable effort has been devoted to making network infrastructure and network protocols power-aware and green. Among these efforts, Adaptive Link Rate (ALR) is one of the most widely discussed approaches. This survey highlights the most recent ALR approaches with a brief taxonomy and a review of the state of the art.
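A common ALR policy family uses dual utilization thresholds with hysteresis: step the link down when utilization falls below a low-water mark, step it up when it exceeds a high-water mark. The rates and thresholds below are assumed for illustration and are not taken from the survey:

```python
# Dual-threshold ALR policy sketch: the gap between LOW and HIGH provides
# hysteresis so the link rate does not oscillate under steady load.
RATES = [100, 1000, 10000]   # available link rates, Mb/s (assumed)
LOW, HIGH = 0.2, 0.8         # utilization thresholds (assumed)

def next_rate(level: int, offered_mbps: float) -> int:
    """Return the new rate index for the current offered load."""
    util = offered_mbps / RATES[level]
    if util > HIGH and level < len(RATES) - 1:
        return level + 1      # link congested: step the rate up
    if util < LOW and level > 0:
        lower_util = offered_mbps / RATES[level - 1]
        if lower_util <= HIGH:
            return level - 1  # link mostly idle and the lower rate suffices
    return level

level = 2                     # start at 10 Gb/s
for load in [400, 400, 50, 50, 900]:
    level = next_rate(level, load)
print(RATES[level])           # ends at 1000 Mb/s after the load spike
```

Checking that the lower rate can actually carry the load before stepping down avoids an immediate step back up, one of the stability concerns ALR work discusses.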


International Conference on Cloud Computing | 2012

XenPump: A New Method to Mitigate Timing Channel in Cloud Computing

Jingzheng Wu; Liping Ding; Yuqi Lin; Nasro Min-Allah; Yongji Wang

Cloud computing security has become a focus of information security, where much attention has been drawn to user privacy leakage. Although isolation and other security policies protect cloud computing, confidential information can still be stolen via timing channels without detection. In this paper, a new method named XenPump is presented that aims to mitigate the threat of timing channels by adding latency. XenPump is designed as a module located in the hypervisor that monitors the hypercalls used by timing channels and adds latencies to lower the threat to an acceptable level. A prototype of XenPump has been implemented on the Xen virtualization platform, and its performance is evaluated with a shared-memory-based timing channel. The experimental results show that XenPump can mitigate the threat of the timing channel by degrading both its capacity and its transmission accuracy. We believe that, with minor extensions, XenPump can mitigate future timing channels.
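The paper's mechanism is latency injection in the hypervisor; the toy model below only illustrates why injected latency degrades a timing channel. The delays, threshold, and pump distribution are all assumed, not XenPump's actual parameters:

```python
import random

random.seed(1)

# Toy timing channel: the sender encodes bit 0 as a 1 ms operation and
# bit 1 as a 5 ms operation; the receiver decodes with a 3 ms threshold.
def transmit(bits, pump_ms=0.0):
    """Return decoded bits; pump_ms adds uniform random latency per
    operation (the 'pump'), blurring the timing difference."""
    decoded = []
    for b in bits:
        delay = 1.0 if b == 0 else 5.0
        delay += random.uniform(0, pump_ms)   # injected latency
        decoded.append(0 if delay < 3.0 else 1)
    return decoded

bits = [random.randint(0, 1) for _ in range(1000)]

def accuracy(pump_ms):
    rx = transmit(bits, pump_ms)
    return sum(a == b for a, b in zip(bits, rx)) / len(bits)

print(accuracy(0.0))    # 1.0: without the pump the channel decodes perfectly
print(accuracy(10.0))   # well below 1.0: the pump corrupts decoding
```

With a pump amplitude much larger than the timing gap, zero-bits are misread most of the time, so both the accuracy and the usable capacity of the channel collapse.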


Cluster Computing | 2013

Hierarchical genetic-based grid scheduling with energy optimization

Joanna Kolodziej; Samee Ullah Khan; Lizhe Wang; Aleksander Byrski; Nasro Min-Allah; Sajjad Ahmad Madani

Optimization of power and energy consumption is an important concern in the design of modern and future computing and communication systems. Various techniques and high performance technologies have been investigated and developed for the efficient management of such systems. All of these technologies should provide good performance and cope with increased workload demand in dynamic environments such as Computational Grids (CGs), clusters, and clouds. In this paper, we approach independent batch scheduling in CGs as a bi-objective minimization problem with makespan and energy consumption as the scheduling criteria. We use the Dynamic Voltage Scaling (DVS) methodology to scale, and possibly reduce, the cumulative energy utilized by the system resources. We develop two implementations of a Hierarchical Genetic Strategy-based grid scheduler (Green-HGS-Sched) with elitist and struggle replacement mechanisms. The proposed algorithms were empirically evaluated against single-population Genetic Algorithms (GAs) and Island GA models for four CG size scenarios in static and dynamic modes. The simulation results show that the proposed scheduling methodologies considerably reduce energy usage and can be easily adapted to dynamically changing grid states and various scheduling scenarios.
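The DVS energy trade-off behind such schedulers can be sketched with the usual idealized model: with voltage scaled proportionally to frequency, dynamic power grows roughly as f cubed, so energy for a fixed cycle count grows as f squared while execution time shrinks as 1/f. The frequency levels and task numbers below are assumed:

```python
# DVS illustration: pick the lowest frequency that still meets the
# deadline, since energy = time * power = (cycles / f) * f**3 = cycles * f**2
# in normalized units under the idealized v-proportional-to-f model.
FREQS = [0.4, 0.6, 0.8, 1.0]   # normalized DVS levels (assumed)

def slowest_feasible(cycles: float, deadline: float) -> float:
    """Lowest frequency meeting the deadline (execution time = cycles / f)."""
    for f in FREQS:                      # ascending order
        if cycles / f <= deadline:
            return f
    raise ValueError("infeasible even at full speed")

def energy(cycles: float, f: float) -> float:
    """Normalized energy for a fixed workload at frequency f."""
    return cycles * f ** 2

f = slowest_feasible(cycles=0.5, deadline=1.0)
print(f, energy(0.5, f), energy(0.5, 1.0))
```

Running at the slowest feasible level rather than full speed cuts the normalized energy from 0.5 to 0.18 here, which is the kind of saving the scheduler's energy criterion rewards.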


Journal of Parallel and Distributed Computing | 2012

Power efficient rate monotonic scheduling for multi-core systems

Nasro Min-Allah; Hameed Hussain; Samee Ullah Khan; Albert Y. Zomaya

Current real-time systems offer more computational power to cope with CPU-intensive applications. However, this facility comes at the price of higher energy consumption and, eventually, greater heat dissipation. As a remedy, these issues are addressed by adjusting the system speed on the fly so that application deadlines are respected while overall system energy consumption is reduced. In addition, the current state of multi-core technology opens further research opportunities for energy reduction through power-efficient scheduling. However, the multi-core front remains relatively unexplored from the perspective of task scheduling: to the best of our knowledge, little work has yet integrated a power-efficiency component into real-time scheduling theory tailored for multi-core platforms. In this paper, we first propose a technique to find the lowest core speed at which individual tasks can be scheduled. The proposed technique is experimentally evaluated, and the results show that our test outperforms its existing counterparts. Following that, a lightest-task shifting policy is adopted to balance core utilization, which is then used to determine a uniform system speed for a given task set. The aforementioned guarantees that (i) all tasks meet their deadlines and (ii) overall system energy consumption is reduced.
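The paper's own test is not reproduced here, but the standard baseline it competes with, exact rate monotonic (RM) response-time analysis with WCETs scaled by the core speed, can be sketched directly. The task set and the discrete speed levels are assumed values:

```python
import math

# Exact RM schedulability via response-time analysis for a core at speed s
# (WCETs scale as C / s), then the lowest speed from a discrete set at
# which every task still meets its deadline (deadline = period here).
def schedulable(tasks, s):
    """tasks: list of (wcet, period) at full speed, sorted by period (RM order)."""
    for i, (ci, ti) in enumerate(tasks):
        r = ci / s
        while True:
            # demand: own WCET plus preemptions by higher-priority tasks
            nxt = ci / s + sum(math.ceil(r / tj) * (cj / s)
                               for cj, tj in tasks[:i])
            if nxt > ti:
                return False          # task i misses its deadline
            if nxt == r:
                break                 # fixed point: response time found
            r = nxt
    return True

def lowest_speed(tasks, speeds):
    """First (slowest) speed that passes the schedulability test."""
    return next(s for s in sorted(speeds) if schedulable(tasks, s))

task_set = [(3, 12), (3, 15), (6, 30)]    # (WCET, period), assumed values
print(lowest_speed(task_set, [0.5, 0.75, 1.0]))   # prints 0.75
```

At speed 0.5 the scaled utilization exceeds 1 and the third task's response time overshoots its period, while at 0.75 every response time converges below its deadline, so 0.75 is the lowest feasible speed for this set.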


IEEE Communications Letters | 2012

Network Survivability for Multiple Probabilistic Failures

Oscar Diaz; Feng Xu; Nasro Min-Allah; Mahmoud A. Khodeir; Min Peng; Samee Ullah Khan; Nasir Ghani

Network service recovery from multiple correlated failures is a major concern given the increased level of infrastructure vulnerability to natural disasters, massive power failures, and malicious attacks. To properly address this problem, a novel path protection solution is proposed to jointly incorporate traffic engineering and risk minimization objectives. The framework assumes probabilistic link failures and is evaluated against some existing multi-failure recovery schemes using network simulation.
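The risk-minimization side of such a framework can be illustrated with a standard reduction (not the paper's actual joint formulation): under independent probabilistic link failures, the most survivable path maximizes the product of link survival probabilities, which is found by running Dijkstra on -log survival weights. The topology and failure probabilities below are assumed:

```python
import heapq
import math

# Most-survivable path under independent probabilistic link failures:
# maximizing prod(1 - p_e) equals minimizing sum(-log(1 - p_e)).
def safest_path(links, src, dst):
    """links: {(u, v): failure_probability}; treated as an undirected graph."""
    adj = {}
    for (u, v), p in links.items():
        w = -math.log(1.0 - p)            # additive 'risk' weight
        adj.setdefault(u, []).append((v, w))
        adj.setdefault(v, []).append((u, w))
    dist, heap = {src: 0.0}, [(0.0, src, [src])]
    while heap:
        d, u, path = heapq.heappop(heap)
        if u == dst:
            return path, math.exp(-d)     # (path, survival probability)
        if d > dist.get(u, float("inf")):
            continue                       # stale heap entry
        for v, w in adj.get(u, []):
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v, path + [v]))
    return None, 0.0

links = {("A", "B"): 0.10, ("B", "D"): 0.10,   # riskier links
         ("A", "C"): 0.02, ("C", "D"): 0.02}   # safer links
path, p_ok = safest_path(links, "A", "D")
print(path, round(p_ok, 4))
```

Here the A-C-D path survives with probability 0.98 * 0.98 = 0.9604, beating A-B-D at 0.81; a joint formulation like the paper's would additionally weigh traffic engineering objectives against this risk term.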


Security and Communication Networks | 2014

C2Detector: a covert channel detection framework in cloud computing

Jingzheng Wu; Liping Ding; Yanjun Wu; Nasro Min-Allah; Samee Ullah Khan; Yongji Wang

Cloud computing is becoming increasingly popular because of the dynamic deployment of computing services. Another advantage of the cloud is that data confidentiality is protected by the cloud provider through virtualization technology. However, a covert channel can break the isolation of the virtualization platform and leak confidential information without being detected by the virtual machines. In this paper, the threat model of covert channels is analyzed. The channels are classified into three categories, and only the category that is new to cloud computing is considered: CPU load-based, cache-based, and shared memory-based covert channels. The covert channel scenario is modeled as an error-corrected four-state automaton, and two error-correction algorithms are designed. A new detection framework termed C2Detector is presented. C2Detector comprises a captor located in the hypervisor and a two-phase synthesis algorithm implemented as Markov and Bayesian detectors. A prototype of C2Detector is implemented on the Xen hypervisor, and its performance in detecting the covert channels is demonstrated. The experimental results show that C2Detector can detect the three types of covert channels with an acceptable false positive rate by using a pessimistic threshold. Moreover, C2Detector is a plug-in framework and can be easily extended; it is believed that new covert channels can be detected by C2Detector in the future.
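C2Detector's Markov detector is more elaborate than this, but the underlying idea of a first-order Markov anomaly detector can be sketched: learn transition probabilities of captured events from a benign trace, then flag windows whose transitions are improbable under that model. The traces, event alphabet, and threshold are all assumed:

```python
from collections import defaultdict

# First-order Markov anomaly detector sketch over a stream of events.
def train(trace):
    """Estimate transition probabilities from a benign event trace."""
    counts = defaultdict(lambda: defaultdict(int))
    for a, b in zip(trace, trace[1:]):
        counts[a][b] += 1
    return {a: {b: n / sum(nxt.values()) for b, n in nxt.items()}
            for a, nxt in counts.items()}

def score(model, window, floor=1e-6):
    """Geometric-mean transition probability of a window; unseen
    transitions get a small floor probability instead of zero."""
    p = 1.0
    for a, b in zip(window, window[1:]):
        p *= model.get(a, {}).get(b, floor)
    return p ** (1 / max(1, len(window) - 1))

benign = list("abcabcabcabcabc")             # regular benign behavior (toy)
model = train(benign)
print(score(model, list("abcabc")) > 0.5)    # True: benign-looking window
print(score(model, list("aaaaaa")) > 0.5)    # False: covert-channel-like pattern
```

A repetitive pattern the benign model never produced scores near the floor, which is the signal a threshold-based detector like this turns into an alarm.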


The Journal of Supercomputing | 2012

A goal programming based energy efficient resource allocation in data centers

Samee Ullah Khan; Nasro Min-Allah

We study the multi-objective problem of mapping independent tasks onto a set of data center machines that simultaneously minimizes the energy consumption and response time (makespan) subject to the constraints of deadlines and architectural requirements. We propose an algorithm based on goal programming that effectively converges to the compromised Pareto optimal solution. Compared to other traditional multi-objective optimization techniques that require identification of the Pareto frontier, goal programming directly converges to the compromised solution. Such a property makes goal programming a very efficient multi-objective optimization technique. Moreover, simulation results show that the proposed technique achieves superior performance compared to the greedy and linear relaxation heuristics, and competitive performance relative to the optimal solution implemented in Linear Interactive and Discrete Optimizer (LINDO) for small-scale problems.
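The key property claimed for goal programming, converging to a single compromise solution by penalizing deviations from per-objective goals rather than tracing the Pareto frontier, can be shown on a toy mapping instance. The task sizes, machine costs, goals, and weights below are assumed, and exhaustive search stands in for the paper's algorithm:

```python
from itertools import product

# Goal programming sketch: minimize the weighted sum of deviations above
# an energy goal and a makespan goal over all task-to-machine mappings.
tasks = [4, 3, 2, 2]                   # task lengths (assumed)
machines = [1.0, 1.5]                  # energy cost per unit of work (assumed)
ENERGY_GOAL, MAKESPAN_GOAL = 12.0, 6.0
W_E, W_M = 1.0, 1.0                    # goal weights (assumed equal)

def evaluate(assign):
    """Return (weighted deviation, energy, makespan) for a mapping."""
    load = [0.0] * len(machines)
    energy = 0.0
    for t, m in zip(tasks, assign):
        load[m] += t
        energy += t * machines[m]
    makespan = max(load)
    # only deviations above a goal are penalized
    dev = (W_E * max(0.0, energy - ENERGY_GOAL)
           + W_M * max(0.0, makespan - MAKESPAN_GOAL))
    return dev, energy, makespan

candidates = product(range(len(machines)), repeat=len(tasks))
best = min(candidates, key=lambda a: evaluate(a)[0])
dev, energy, makespan = evaluate(best)
print(best, energy, makespan, dev)
```

The single objective trades the two goals off directly: the chosen mapping meets the makespan goal exactly and pays a small energy overshoot, which is the compromise solution goal programming converges to without enumerating the frontier.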

Collaboration


Dive into Nasro Min-Allah's collaboration network.

Top Co-Authors

Samee Ullah Khan, North Dakota State University
Nasir Ghani, University of South Florida
Sajjad Ahmad Madani, COMSATS Institute of Information Technology
Lizhe Wang, China University of Geosciences
Joanna Kolodziej, University of Bielsko-Biała
Pascal Bouvry, University of Luxembourg
Wang Yong-Ji, Chinese Academy of Sciences
Kashif Bilal, COMSATS Institute of Information Technology