Sahalu B. Junaidu
King Fahd University of Petroleum and Minerals
Publications
Featured research published by Sahalu B. Junaidu.
Concurrency and Computation: Practice and Experience | 1999
Hans-Wolfgang Loidl; Philip W. Trinder; Kevin Hammond; Sahalu B. Junaidu; Richard G. Morgan; Simon L. Peyton Jones
We investigate the claim that functional languages offer low-cost parallelism in the context of symbolic programs on modest parallel architectures. In our investigation we present the first comparative study of the construction of large applications in a parallel functional language, in our case Glasgow Parallel Haskell (GPH). The applications cover a range of application areas, use several parallel programming paradigms, and are measured on two very different parallel architectures.

On the application level, the most significant result is that we are able to achieve modest wall-clock speedups (between factors of 2 and 10) over the optimised sequential versions for all but one of the programs. Speedups are obtained even for programs that were not written with the intention of being parallelised. These gains are achieved with relatively small programmer effort. One reason for the relative ease of parallelisation is the use of evaluation strategies, a new parallel programming technique that separates the algorithm from the coordination of parallel behaviour.

On the language level, we show that the combination of lazy and parallel evaluation is useful for achieving a high level of abstraction. In particular, we can describe top-level parallelism, and also preserve module abstraction by describing parallelism over the data structures provided at the module interface (‘data-oriented parallelism’). Furthermore, we find that the determinism of the language is helpful, as is the largely implicit nature of parallelism in GPH.
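Evaluation strategies, the technique the paper credits for much of the low parallelisation effort, survive in modern GHC as the Control.Parallel.Strategies library. The sketch below is a minimal illustration of the idea using today's library API rather than the 1999 GPH one: the algorithm is an ordinary map, and all parallel coordination lives in the strategy attached with `using`.

```haskell
import Control.Parallel.Strategies (parList, rdeepseq, using)

-- The algorithm: an ordinary sequential map over a list.
-- 'expensive' is a stand-in for real symbolic work.
expensive :: Int -> Int
expensive n = sum [1 .. n * n]

-- The coordination: 'parList rdeepseq' sparks the full evaluation
-- of each list element in parallel, without touching the algorithm.
parMapDemo :: [Int] -> [Int]
parMapDemo xs = map expensive xs `using` parList rdeepseq

main :: IO ()
main = print (sum (parMapDemo [1000 .. 1040]))
```

Compiled with GHC's -threaded flag and run with +RTS -N, the elements are evaluated across the available cores; deleting the `using` clause restores the sequential program unchanged, which is precisely the algorithm/coordination separation the paper describes.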
Technical Symposium on Computer Science Education | 2005
M. R. K. Krishna Rao; Sahalu B. Junaidu; Talal Maghrabi; Muhammad Shafique; M. Ahmed; Kanaan A. Faisal
Our department has recently revisited its computer science program in the light of the IEEE/ACM Computing Curricula 2001 (CC2001) recommendations, taking into consideration ABET's Criteria for Accrediting Computing Programs (CAC 04-05). The effort resulted in a revised curriculum. This paper presents the different decisions we made with regard to curriculum orientation, knowledge-unit coverage, transition management, and monitoring and assessment. The paper also sheds some light on the challenges faced. Tables provided in the paper show that the curriculum successfully implements the CC2001 recommendations while satisfying the CAC 04-05 criteria.
ACS/IEEE International Conference on Computer Systems and Applications | 2001
A. Badhusha; Seyed M. Buhari; Sahalu B. Junaidu; M. Saleem
The field of information technology is growing quickly, and software consequently requires ever more frequent updates. At the same time, computer viruses are on the increase, and attacks on computers now frequently originate from within the intranet itself. If we are able to keep the signature files of the anti-virus software up to date on the various computers used by members of the organisation, we can prevent virus problems to some extent. Today this is done either by system administrators sending reminders to the people concerned, or by configuring each computer to update itself every few days; both approaches have drawbacks. To avoid these hazards, we provide an option for active packet-oriented automatic signature file updating.
PLOS ONE | 2017
Hajara Idris; Absalom E. Ezugwu; Sahalu B. Junaidu; Aderemi Oluyinka Adewumi
The Grid scheduler schedules user jobs on the best available resource, in terms of resource characteristics, by optimizing job execution time. Resource failure in the Grid is no longer an exception but a regularly occurring event, as resources are increasingly used by the scientific community to solve computationally intensive problems that typically run for days or even months. It is therefore essential that these long-running applications are able to tolerate failures and avoid recomputation from scratch after a resource failure has occurred, to satisfy the user's Quality of Service (QoS) requirement. Job scheduling with fault tolerance in Grid computing using Ant Colony Optimization (ACO) is proposed to ensure that jobs are executed successfully even when resource failure has occurred. The technique employed in this paper uses the resource failure rate together with a checkpoint-based rollback recovery strategy. Checkpointing aims at reducing the amount of work lost upon failure of the system by periodically saving the state of the system. A comparison of the proposed approach with an existing ACO algorithm is discussed. The experimental results of the implemented fault-tolerant scheduling algorithm show an improvement in the user's QoS requirement over the existing ACO algorithm, which has no fault tolerance integrated into it. The performance of the two algorithms was measured in terms of three main scheduling metrics: makespan, throughput and average turnaround time.
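The abstract does not give the exact formula by which the failure rate enters the ACO selection rule, so the following Haskell fragment is only a sketch under assumptions: it uses the standard pheromone-times-heuristic form of ACO desirability and discounts each resource's heuristic value by its reliability (1 − failure rate). The record fields, the weights alpha and beta, and the greedy selection step are all illustrative, not taken from the paper.

```haskell
import Data.List (maximumBy)
import Data.Ord (comparing)

-- A grid resource as seen by the ant: pheromone level, a speed
-- heuristic (e.g. normalised MIPS), and an observed failure rate.
data Resource = Resource
  { rName       :: String
  , pheromone   :: Double
  , speed       :: Double
  , failureRate :: Double  -- in [0, 1)
  }

-- Standard ACO desirability, with the heuristic discounted by
-- reliability, so failure-prone resources are penalised.
desirability :: Double -> Double -> Resource -> Double
desirability alpha beta r =
  (pheromone r ** alpha) * ((speed r * (1 - failureRate r)) ** beta)

-- Greedy variant of the ant's selection step; a full ACO would
-- sample resources with probability proportional to desirability.
selectResource :: [Resource] -> Resource
selectResource = maximumBy (comparing (desirability 1.0 2.0))

main :: IO ()
main = putStrLn . rName $ selectResource
  [ Resource "fast-but-flaky" 1.0 2.0 0.50
  , Resource "steady"         1.0 1.2 0.05 ]
```

On these numbers the scheduler prefers the slower but reliable node, which is the behaviour the paper's failure-rate term is meant to induce; checkpointing then bounds the work lost when the chosen resource fails anyway.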
Journal of Intelligent Systems | 2017
Absalom E. Ezugwu; Nneoma A. Okoroafor; Seyed M. Buhari; Marc Frîncu; Sahalu B. Junaidu
The operational efficacy of a grid computing system depends mainly on the proper management of grid resources to carry out the various jobs that users submit to the grid. The paper explores an alternative way of efficiently searching, matching, and allocating distributed grid resources to jobs in such a way that the resource demand of each grid user job is met. A resource selection method based on a genetic algorithm (GA) with multiset-based populations is proposed. Furthermore, the paper presents a hybrid GA-based scheduling framework that efficiently searches for the best available resources for user jobs in a typical grid computing environment. For the proposed resource allocation method, additional mechanisms (multiset-based populations and adaptive matching) are introduced into the GA components to enhance their search capability in a large problem space. An empirical study is presented to demonstrate the importance of the operator improvements over the traditional GA. The preliminary performance results show that the proposed additional operator fine-tuning is efficient in both speed and accuracy and can keep up with high job arrival rates.
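The multiset idea can be made concrete: storing each distinct chromosome once, with a multiplicity, means fitness is evaluated per distinct individual rather than per copy. The Haskell sketch below shows only that representation, with a toy one-max fitness; everything beyond the multiset-as-Map idea is an assumption of this illustration, not the paper's code.

```haskell
import qualified Data.Map.Strict as M

-- A population as a multiset: each distinct chromosome is stored
-- once together with its multiplicity.
type Chromosome = [Bool]
type Population = M.Map Chromosome Int

-- Inserting a chromosome just bumps its multiplicity.
insertC :: Chromosome -> Population -> Population
insertC c = M.insertWith (+) c 1

-- Toy "one-max" fitness: count the set bits. Evaluated once per
-- distinct chromosome, which is the saving the multiset buys.
fitness :: Chromosome -> Int
fitness = length . filter id

-- Best chromosome in the population, ignoring multiplicities.
best :: Population -> Chromosome
best = fst . M.foldrWithKey pick ([], minBound)
  where
    pick c _ acc@(_, f)
      | fitness c > f = (c, fitness c)
      | otherwise     = acc

main :: IO ()
main = print . best $
  insertC [True, True, False] (insertC [True, False, False] M.empty)
```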
Multiagent and Grid Systems | 2016
Absalom E. Ezugwu; Marc Frîncu; Sahalu B. Junaidu
An important factor that needs to be considered by every Grid application end-user and system (such as schedulers or mediators) during Grid resource selection and mapping to applications is the performance capacity of the hardware resources attached to the Grid and made available through its Virtual Organizations. In this paper, we represent the performance of a computational Grid as a regression model that can be used to fine-tune the selection of suitable Grid resources. A study of the performance of distributed systems with respect to particular variations in parameters is presented. Our objective is to use a measurement-based evaluation technique to characterize the specific performance contribution of individual Grid resource configurations. In the process, we identify the key primary parameters (or factors) that should be considered when selecting and allocating a computational node for user application execution.
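The abstract does not reproduce the regression model itself, so the form below is only a generic illustration of such a measurement-based model; the choice of factors is hypothetical, and identifying which factors actually matter is exactly what the paper's study does.

```latex
T_{\mathrm{exec}} \;=\; \beta_0
  + \beta_1 \,\frac{1}{\text{CPU speed}}
  + \beta_2 \,\text{load}
  + \beta_3 \,\frac{1}{\text{memory}}
  + \varepsilon
```

The coefficients are fitted from measured runs on known node configurations, after which the model can rank candidate resources by predicted execution time.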
Proceedings of the 2009 conference on Information Science, Technology and Applications | 2009
Mohammad Tanvir Parvez; Syed Usama Idrees; Sahalu B. Junaidu; Abdul Rahim Naseer
With the advent of multi-core architectures, there arises a need for comparative evaluations of the performance of well-understood parallel programs. This is because it is necessary to gain insight into the potential advantages of the available platforms, namely multi-core and multi-processor, to decide which platform to use for a particular application. The need for this insight is due to the different nature and requirements (such as the division of work, inter-process communication, etc.) of different parallel algorithms. In this paper, we evaluate the performance of parallel implementations of three well-known algorithms on multi-computer, multi-core and hyper-threading architectures. Parallelization of the programs was done using MPICH2 and OpenMP. We provide a comparative evaluation of the run-time behaviour of the parallel programs using three performance metrics: average run-time, I/O overhead and communication overhead. The main experimental result demonstrates the superiority of the multi-core architecture over the multi-computer and hyper-threading architectures for running threaded applications with the same number of cores/processors. We also investigate the effect of parallel I/O on the performance of the programs on the multi-computer platform.
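The comparison rests on standard notions; writing T_seq for the sequential run-time and T_par(p) for the parallel run-time on p cores or processors, the usual definitions (assumed here as background, not quoted from the paper) are:

```latex
S(p) = \frac{T_{\mathrm{seq}}}{T_{\mathrm{par}}(p)}, \qquad
T_{\mathrm{par}}(p) \;\approx\; T_{\mathrm{compute}}(p) + T_{\mathrm{comm}}(p) + T_{\mathrm{I/O}}(p)
```

The decomposition makes the paper's headline result plausible: on a multi-core machine the communication term stays in shared memory, whereas on a multi-computer cluster it crosses the network, so for the same core count the multi-core platform wins whenever communication dominates.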
Information Sciences | 2002
Sahalu B. Junaidu; Philip W. Trinder
Naira is a compiler for Haskell, written in Glasgow Parallel Haskell. It exhibits modest, but irregular, parallelism that is determined by properties of the program being compiled, e.g. the complexity of the types and of the pattern matching. We report four experiments into Naira's parallel behaviour using a set of realistic inputs: namely the 18 Haskell modules of Naira itself. The issues investigated are:
• Does increasing input size improve sequential efficiency and speedup?
• To what extent do high communication latencies reduce average parallelism and speedup?
• Does migrating running threads between processors improve average parallelism and speedup at all latencies?
The Turkish Online Journal of Distance Education | 2008
Sahalu B. Junaidu
Archive | 1998
Sahalu B. Junaidu