Mohammad H. Alshayeji
Kuwait University
Publications
Featured research published by Mohammad H. Alshayeji.
International Conference on Advances in Computing, Control, and Telecommunication Technologies | 2010
Mohammad H. Alshayeji; Sam Rajesh M.D; Manal Alsarraf; Reem Alsuwaid
In today’s web-based IT environment, response time and availability are major concerns: a delay of a few seconds in response time, or the unavailability of a resource, can significantly impact customer satisfaction. Replication provides better performance and higher availability by maintaining multiple copies of data, called replicas, at various strategic locations. An important issue that must be addressed before replication is performed is where to place the replicas. Several replica placement algorithms have been proposed, each with its own advantages and disadvantages. In this paper, we study various replica placement algorithms with respect to content delivery networks. Our main contribution is to compare popular replica placement algorithms and to provide general scenarios for content delivery networks, with a suitable algorithm proposed for each scenario. These scenarios can serve as a reference point for system designers choosing an appropriate replica placement algorithm for content delivery networks.
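As a hedged illustration of the family of algorithms the survey compares (not any specific algorithm from the paper), a common baseline is the greedy heuristic: repeatedly place the next replica at the site that most reduces total client access cost.

```python
# Greedy replica placement sketch (illustrative only). Nodes are indexed
# 0..n-1 and cost[i][j] is the access cost from client node i to a
# replica placed at node j.

def greedy_placement(cost, k):
    """Pick k replica sites that greedily minimize total client access cost."""
    n = len(cost)
    chosen = []
    # Each client's current best cost to any chosen replica (infinite at start).
    best = [float("inf")] * n
    for _ in range(k):
        # Try every unchosen site and keep the one giving the lowest total cost.
        site, site_best = min(
            ((j, [min(best[i], cost[i][j]) for i in range(n)])
             for j in range(n) if j not in chosen),
            key=lambda t: sum(t[1]),
        )
        chosen.append(site)
        best = site_best
    return chosen, sum(best)
```

Greedy placement is not optimal in general, but it is a standard yardstick against which specialized CDN placement algorithms are measured.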
Applied Soft Computing | 2016
Sa'ed Abed; Suood Abdulaziz Al-Roomi; Mohammad H. Alshayeji
Highlights:
- Proposing a work that is competitive with state-of-the-art optic disc detection methods.
- Describing a novel pre-processing method that improves optic disc detection accuracy.
- The novel use of four swarm intelligence algorithms (artificial bee colony, particle swarm optimization, bat algorithm, and cuckoo search) for optic disc detection.
- Providing an accuracy, consistency and speed comparison between five swarm algorithms.
- Providing high-performance parameters for swarm intelligence algorithms for optic disc detection, with a study of each parameter and its effect on accuracy.

Diabetic retinopathy affects the vision of a significant fraction of the population worldwide. Retinal fundus images are used to detect the condition before vision loss develops, enabling medical intervention. Optic disc detection is an essential step in the automatic detection of the disease. Several techniques have been introduced in the literature to detect the optic disc, with different performance characteristics such as speed, accuracy and consistency. For optic disc detection, nature-inspired swarm intelligence algorithms have been shown to have clear superiority in speed and accuracy over traditional detection algorithms. We therefore further investigated and compared several swarm intelligence techniques. Our study focused on five popular swarm intelligence algorithms: artificial bee colony, particle swarm optimization, bat algorithm, cuckoo search and firefly algorithm. This work also features a novel pre-processing scheme that enhances the detection accuracy of the swarm techniques by making the optic disc region the highest grayscale value in the image. The pre-processing involves multiple stages of background subtraction, median filtering and mean filtering, and is named Background Subtraction-based Optic Disc Detection (BSODD).
The best result was obtained by combining our pre-processing technique, firefly algorithm and the parameters used for the algorithm. The obtained accuracy was superior to the other tested algorithms and published results in the literature. The accuracy of the firefly algorithm was 100%, 100%, 98.82% and 95% when using the DRIVE, DiaRetDB1, DMED and STARE databases, respectively.
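The BSODD pipeline is only described above at a high level; the following sketch illustrates the general idea (kernel sizes, stage ordering, and the clipping step are assumptions, not the paper's parameters):

```python
import numpy as np

def median3x3(img):
    """3x3 median filter (edge-padded); suppresses salt-and-pepper noise."""
    p = np.pad(img, 1, mode="edge")
    stack = np.stack([p[i:i + img.shape[0], j:j + img.shape[1]]
                      for i in range(3) for j in range(3)])
    return np.median(stack, axis=0)

def mean_filter(img, k):
    """k-by-k mean filter used to estimate the slowly varying background."""
    p = np.pad(img.astype(float), k // 2, mode="edge")
    stack = np.stack([p[i:i + img.shape[0], j:j + img.shape[1]]
                      for i in range(k) for j in range(k)])
    return stack.mean(axis=0)

def bsodd_preprocess(img, k=7):
    """Background subtraction plus median filtering, so that the brightest
    remaining region is (ideally) the optic disc."""
    foreground = img.astype(float) - mean_filter(img, k)
    return np.clip(median3x3(foreground), 0, None)
```

After such pre-processing, a swarm optimizer only needs to find the image's global intensity maximum, which is what makes the combination fast.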
International Journal of Embedded Systems | 2017
Mohammad Al-Rousan; Elham AL-Shara; Yaser Jararweh; Mohammad H. Alshayeji
In this paper, a cloudlet-based approach for a new ad hoc model of mobile-based cloud computing is proposed. The performance of the model is evaluated using the destination-sequenced distance-vector (DSDV) routing protocol and the random waypoint (RWP) mobility model. The key parameters used to evaluate the model are end-to-end (e2e) packet delay, system scalability, and mobility management. The performance of the model is studied for various workload sizes offloaded to cloudlets and for different node speeds. Variations in hand-off delay, as well as in workload size, have a significant impact on the e2e delay results. Even with the maximum hand-off delay, passing workloads through multiple cloudlets is still quicker than using an enterprise cloud, except for small workload sizes.
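The trade-off reported above can be sketched with a toy delay model (the formula and all numbers are illustrative assumptions, not the paper's simulation parameters):

```python
def e2e_delay(workload_mb, link_mbps, rtt_s, handoffs=0, handoff_s=0.0):
    """Toy end-to-end delay: transmission time plus round-trip latency
    plus mobility hand-off overhead. All figures are illustrative."""
    return workload_mb * 8 / link_mbps + rtt_s + handoffs * handoff_s

# Nearby cloudlets: fast link and low RTT, but mobility incurs hand-offs.
big_cloudlet = e2e_delay(50, 100, 0.01, handoffs=3, handoff_s=0.2)
# Enterprise cloud: no hand-offs, but a slower WAN link and higher RTT.
big_cloud = e2e_delay(50, 20, 0.12)
```

With a large workload the transmission term dominates and the cloudlet path wins despite hand-offs; for a small workload the fixed hand-off overhead dominates and the enterprise cloud becomes faster, matching the abstract's conclusion.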
Computers & Electrical Engineering | 2018
Mohammad H. Alshayeji; Mohammad Al-Rousan; Hanem Ellethy; Sa'ed Abed
In this work, an efficient multiple sclerosis (MS) segmentation technique is proposed to simplify pre-processing steps and reduce processing time using heterogeneous single-channel magnetic resonance imaging (MRI). Spatial-filtering image mapping, a histogram reference image, and histogram matching are applied to derive a local threshold per image from a global threshold algorithm. Feature extraction is performed using mathematical and morphological operations, and a multilayer feed-forward neural network (MLFFNN) is used to identify multiple sclerosis tissues. Fluid-attenuated inversion recovery (FLAIR) series are used to build a faster system while maintaining reliability and accuracy. A sagittal (SAG) FLAIR-based system is proposed for the first time in MS detection, which reduces the number of images required and decreases the processing time by nearly one-third. Our detection system achieved a recognition rate of up to 98.5%. Moreover, a relatively high Dice coefficient (DC) value (0.71 ± 0.18) was observed when testing new images.
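The histogram matching mentioned above is a standard operation: map each source intensity to the reference intensity at the same cumulative-distribution quantile. A minimal NumPy sketch (not the paper's implementation) is:

```python
import numpy as np

def histogram_match(source, reference):
    """Remap source grayscale values so the source histogram approximates
    the reference histogram. Illustrative sketch only."""
    s_vals, s_idx, s_counts = np.unique(source.ravel(),
                                        return_inverse=True,
                                        return_counts=True)
    r_vals, r_counts = np.unique(reference.ravel(), return_counts=True)
    s_cdf = np.cumsum(s_counts) / source.size
    r_cdf = np.cumsum(r_counts) / reference.size
    # For each source quantile, take the reference value at that quantile.
    matched = np.interp(s_cdf, r_cdf, r_vals)
    return matched[s_idx].reshape(source.shape)
```

Matching every scan against a common reference histogram is what allows a single global threshold to behave like a per-image local threshold.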
International Conference on Software and Computer Applications | 2017
Sa'ed Abed; Mohammad H. Alshayeji; Zahra'a Abdullah; Zainab AlSaeed
As the Network-on-Chip (NoC) industry evolves, the reliability and performance of these systems are becoming increasingly critical requirements. Fault tolerance is an essential factor with a direct impact on system reliability. Many techniques have been developed to boost the fault tolerance of NoCs, implemented either at the routing-algorithm level or at the architecture level. This paper analyzes previous work that enhances fault tolerance by modifying the router architecture. The Partial Virtual Sharing (PVS) architecture was modified to improve its fault tolerance. We propose a technique that implements fault tolerance at the input unit of the router architecture; additional enhancements implementing fault tolerance at the output unit were also proposed and implemented. The reliability of the proposed design was evaluated and compared using the Mean Time Between Failures (MTBF) metric. The proposed design showed a remarkable improvement of 263.2% over existing approaches.
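The MTBF metric used above combines component reliabilities. As a hedged illustration with made-up component figures (not the paper's data), the standard series-system identity shows how hardening one router unit raises the system MTBF:

```python
def mtbf_series(component_mtbfs):
    """MTBF of components in series: failure rates (1/MTBF) add, so the
    system MTBF is the reciprocal of the summed rates. A standard
    reliability identity, shown here only to illustrate the metric."""
    return 1.0 / sum(1.0 / m for m in component_mtbfs)

# Hypothetical router: input unit, crossbar, output unit (hours).
baseline = mtbf_series([5000, 20000, 5000])
# Hardening the input unit (e.g. adding fault tolerance there) raises
# its MTBF and therefore the whole router's.
hardened = mtbf_series([20000, 20000, 5000])
```

Because the weakest components dominate the summed failure rate, fault-tolerance added at the input and output units has an outsized effect on the router-level MTBF.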
Security and Communication Networks | 2016
Mohammad H. Alshayeji; Suood Abdulaziz Al-Roomi; Sa'ed Abed
In this paper, we propose a novel least-significant-bit embedding approach that capitalizes on the skewed distribution of letter and word frequencies to achieve higher image capacity, quality, and security. We first conduct a study of character frequencies using a data set of 14.245 billion characters. A Huffman code for each character is generated on the basis of its probability of occurrence. Furthermore, the 100,000 most frequent words are transformed into a smaller ciphertext with a lower embedding cost. Our work demonstrates that recognizing characters and words by their frequency patterns and prioritizing them accordingly has a greater prospect of reducing the overall cost of embedding. The proposed scheme significantly outperforms Lempel–Ziv–Welch compression, with an average of 45% fewer embedded bits. Moreover, the image quality is improved by a mean peak signal-to-noise ratio value of 6.9%. The proposed method also strengthens embedding security through a novel shuffling algorithm.
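The core idea, frequent characters get short Huffman codes and the resulting bit string is embedded in pixel LSBs, can be sketched as follows (an illustrative toy, not the paper's corpus-derived code table or its shuffling algorithm):

```python
import heapq
from collections import Counter

def huffman_codes(text):
    """Build Huffman codes from character frequencies in `text`:
    frequent characters receive shorter codes, so fewer bits are embedded."""
    counts = Counter(text)
    if len(counts) == 1:  # degenerate single-symbol case
        return {next(iter(counts)): "0"}
    # Heap items: (frequency, unique tiebreak, {char: code-so-far}).
    heap = [(f, i, {c: ""}) for i, (c, f) in enumerate(counts.items())]
    heapq.heapify(heap)
    tiebreak = len(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)
        f2, _, right = heapq.heappop(heap)
        merged = {c: "0" + code for c, code in left.items()}
        merged.update({c: "1" + code for c, code in right.items()})
        heapq.heappush(heap, (f1 + f2, tiebreak, merged))
        tiebreak += 1
    return heap[0][2]

def embed_lsb(pixels, bits):
    """Write the bit string into the least significant bit of each pixel."""
    return [(p & ~1) | int(b) for p, b in zip(pixels, bits)]
```

For the toy message "aaaabbc", the Huffman encoding needs 10 bits instead of 56 bits of raw ASCII, which is the capacity saving the scheme exploits at corpus scale.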
IET Computers & Digital Techniques | 2016
Sa'ed Abed; Mohammad H. Alshayeji; Sari Sultan; Nesreen Mohammad
An effective cache memory design is an important aspect of computer architecture for improving system performance. In this work, we study, both experimentally and analytically, the effect of reducing the number of cache comparisons needed to map a cache address. Cache miss penalties have a drastic impact on system performance. To address this, we propose a novel tag access scheme that uses a partial comparison unit, called an n-bit comparator, together with multiple search methods inside the data cache, to improve cache performance by reducing cache access time. Partial tag comparison (PTC) enables the cache to compare tags in multiple stages, starting with the least significant bits (LSBs). Thus, useless tag comparisons and the number of tag bits compared can be effectively reduced, so the requested tag is reached faster and the cache hit time is reduced. Simulation results show that the proposed approach outperforms conventional mapping techniques. The PTC technique improves the hit time in 2-bank and 4-bank fully associative caches by 70–96% and 67–88%, respectively, over a cache with full tag comparison. Moreover, the proposed technique provides the minimum hit time when using a hash search method rather than linear or binary search.
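A minimal software sketch of the partial-tag-comparison idea (the stage width n and the data layout are assumptions, not the paper's hardware design):

```python
def partial_tag_lookup(stored_tags, query_tag, n=4):
    """Two-stage tag match: first compare only the n least significant
    tag bits to rule out most mismatches cheaply, then run the full
    comparison on the survivors. n=4 is an illustrative choice."""
    mask = (1 << n) - 1
    # Stage 1: cheap n-bit comparison on the LSBs.
    candidates = [i for i, t in enumerate(stored_tags)
                  if (t & mask) == (query_tag & mask)]
    # Stage 2: full tag comparison only for stage-1 survivors.
    for i in candidates:
        if stored_tags[i] == query_tag:
            return i  # hit: index of the matching way
    return None  # miss
```

In hardware the two stages save energy and time because most ways differ already in their low-order tag bits and never reach the full-width comparator.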
Medical & Biological Engineering & Computing | 2017
Mohammad H. Alshayeji; Suood Abdulaziz Al-Roomi; Sa'ed Abed
Archive | 2015
Mohammad H. Alshayeji; Mohammad Al-Rousan; Dunya T. Hassoun
International Journal of Computer and Electrical Engineering | 2018
Mohammad H. Alshayeji; Mohammad Al-Rousan; Eman Yossef; Hanem Ellethy