Mwaffaq Otoom
Yarmouk University
Publications
Featured research published by Mwaffaq Otoom.
High Performance Embedded Architectures and Compilers | 2015
Mwaffaq Otoom; Pedro Trancoso; Hisham M. Almasaeid; Mohammad A. Alzubaidi
The drive for continued improvements in computer performance is increasingly limited by the exponential growth in power consumption. To improve the energy efficiency of multicore chips, we propose a novel global power management technique. The goal of the technique is to deliver the maximum performance at a fixed power budget, without significant overhead. To tackle the exponential complexity of power management for multiple cores, we apply a Reinforcement Learning technique, Q-learning, at the core level and then use a chip-level intelligent controller to optimize the power distribution among all cores. The power assignment adapts dynamically at runtime to the needs of the applications. The technique was evaluated using the PARSEC benchmark suite on a full-system simulator. The experimental results show that, on average, the proposed technique increases overall performance by 39% at a fixed power budget and improves the energy-delay product (EDP) by 28%, compared with a non-DVFS baseline implementation.
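As a rough illustration of the approach described above, the sketch below pairs a per-core Q-learning agent that picks a voltage/frequency level with a chip-level controller that scales requests back to a shared power budget. All class names, state and reward choices, V/F levels, and power figures are hypothetical placeholders, not the paper's implementation.

```python
# Minimal sketch of core-level Q-learning for DVFS under a shared chip power budget.
# Names (CoreAgent, PowerBudgetController), V/F levels, and power numbers are
# illustrative assumptions, not taken from the paper.
import random
from collections import defaultdict

V_F_LEVELS = [0.8, 1.0, 1.2, 1.4]                            # assumed frequency levels (GHz)
POWER_AT_LEVEL = {0.8: 2.0, 1.0: 3.5, 1.2: 5.5, 1.4: 8.0}    # illustrative watts per core

class CoreAgent:
    """Per-core Q-learning agent: state = coarse utilization bin, action = V/F level."""
    def __init__(self, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.q = defaultdict(float)
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def choose(self, state):
        # Epsilon-greedy action selection over the V/F levels.
        if random.random() < self.epsilon:
            return random.choice(V_F_LEVELS)
        return max(V_F_LEVELS, key=lambda a: self.q[(state, a)])

    def update(self, state, action, reward, next_state):
        # Standard Q-learning update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a)).
        best_next = max(self.q[(next_state, a)] for a in V_F_LEVELS)
        self.q[(state, action)] += self.alpha * (reward + self.gamma * best_next
                                                 - self.q[(state, action)])

class PowerBudgetController:
    """Chip-level controller: lowers requested V/F levels until the chip fits the budget."""
    def __init__(self, budget_watts):
        self.budget = budget_watts

    def allocate(self, requested_levels):
        levels = list(requested_levels)
        while sum(POWER_AT_LEVEL[l] for l in levels) > self.budget:
            i = max(range(len(levels)), key=lambda k: levels[k])   # most power-hungry core
            idx = V_F_LEVELS.index(levels[i])
            if idx == 0:
                break                                              # everyone already at minimum
            levels[i] = V_F_LEVELS[idx - 1]
        return levels
```

At runtime, each agent would observe its core's state, propose a level via `choose`, the controller would clip the proposals to the budget with `allocate`, and the resulting performance would feed back into each agent's `update` as the reward.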
International Conference on Hardware/Software Codesign and System Synthesis | 2008
Mwaffaq Otoom; JoAnn M. Paul
We utilize application trends analysis, focused on webpage content, to examine the design of mobile computers more holistically. We find that both Internet bandwidth and processing local to the computing device are wasted by the re-transmission of formatting data. By taking this broader view and separating Macromedia Flash content into raw data and its packaging, we show that performance can be increased by 84%, power consumption can be decreased by 71%, and communications bandwidth can be reduced by an order of magnitude.
International Journal of Parallel Programming | 2012
Mwaffaq Otoom; JoAnn M. Paul
With the potential for tens to hundreds of processing elements on future single-chip multicore designs also comes the potential to execute a wider variety of input streams, or workloads. At the same time, the trend is for single users to utilize an entire single-chip multicore computer. A central challenge for these computers is how to model and identify persistent changes in the input stream, or workload modes. Computer architects often model single-program phases as Markov chains. We define workload modes and analyze and evaluate two modeling techniques, a Workload Classification Model (WCM) and a Hidden Markov Model (HMM). We include experimentation on a cell phone example, illustrating how WCM is, on average, 34 times more time efficient and 83% more space efficient than HMM, while improving overall performance by an average of 191% and being, on average, 56% more energy efficient. We found that even sub-optimal use of WCM can outperform HMM, further supporting the need for design-time workload models. Our main contribution is to show that orienting the design of single-user multicore architectures to models of the workloads that arise from single-user usage patterns will be necessary as the complexity of applications and architectures grows. Thus, we advocate Workload Specific Processors as a new means of orienting single-user chip heterogeneous multiprocessors.
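A minimal sketch of the general idea of workload-mode detection follows: each observation interval is matched against a small table of mode signatures, and a mode change is reported only once it persists for several intervals. The mode table, feature vector, and hysteresis rule are invented for illustration and do not reproduce the paper's Workload Classification Model.

```python
# Minimal sketch of detecting persistent "workload modes" from a stream of
# per-interval activity vectors. Signatures and thresholds are illustrative only.
import numpy as np

# Hypothetical mode signatures: fraction of cycles in (audio, video, web, background).
MODE_SIGNATURES = {
    "call":      np.array([0.7, 0.0, 0.1, 0.2]),
    "streaming": np.array([0.2, 0.6, 0.1, 0.1]),
    "browsing":  np.array([0.1, 0.1, 0.6, 0.2]),
    "idle":      np.array([0.0, 0.0, 0.1, 0.9]),
}

def classify_interval(activity):
    """Nearest-signature classification of a single observation interval."""
    return min(MODE_SIGNATURES, key=lambda m: np.linalg.norm(activity - MODE_SIGNATURES[m]))

def detect_mode_changes(stream, persistence=3):
    """Report a mode change only after `persistence` consecutive intervals agree,
    filtering out transient fluctuations in the input stream."""
    current, candidate, count, changes = None, None, 0, []
    for t, activity in enumerate(stream):
        label = classify_interval(np.asarray(activity, dtype=float))
        if label == candidate:
            count += 1
        else:
            candidate, count = label, 1
        if count >= persistence and label != current:
            current = label
            changes.append((t, label))
    return changes

# Example: three browsing intervals followed by sustained video streaming.
trace = [[0.1, 0.1, 0.6, 0.2]] * 3 + [[0.2, 0.6, 0.1, 0.1]] * 5
print(detect_mode_changes(trace))   # [(2, 'browsing'), (5, 'streaming')]
```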
Advances in Computers | 2009
JoAnn M. Paul; Mwaffaq Otoom; Marc Somers; Sean M. Pieper; Michael J. Schulte
Since the dawn of computing, performance has been the dominant factor driving innovation. The underlying hypothesis is that there is always more computation to be done if the computer can be made faster at performing some application or set of applications. Latency and throughput are the two metrics commonly used to model performance. Lower latency for a given application means that the application will execute faster from beginning to end, while higher throughput for a set of applications means that the set will execute faster, again from beginning to end. Computer architects and designers focus on techniques that reduce latency and increase throughput at all levels of computer design, from the instruction level to the multi-application level. In this chapter we illustrate how the applications and architectures of emerging mobile, personal computing devices call this focus into question. A sea change is occurring in performance evaluation that requires re-evaluating computer performance from the perspective of the end user. We develop a taxonomy and include examples to motivate future directions for computer evaluation and design.
International Conference on Hardware/Software Codesign and System Synthesis | 2011
Mwaffaq Otoom; JoAnn M. Paul
Chip Heterogeneous Multiprocessors (CHMs) are increasingly used to execute multichannel, heterogeneous workloads, often in the service of single users. Multichannel inputs can be processed at different rates and in a variety of combinations. We show that performance evaluation of CHMs that process multichannel workloads requires a new performance metric, capacity, which we introduce in this paper. We show how capacity is a successor to throughput through an automobile production analogy. We include experimental results to illustrate the form and usefulness of the new metric, as well as to contrast it with Pareto optimization.
Computers in Biology and Medicine | 2016
Mohammad A. Alzubaidi; Mwaffaq Otoom; Abdel-Karim Al-Tamimi
The production and distribution of videos and animations on gaming and self-authoring websites are booming. However, given this rise in self-authoring, there is increased concern for the health and safety of people who suffer from a neurological disorder called photosensitivity or photosensitive epilepsy. These people can suffer seizures from viewing video with hazardous content. This paper presents a spatiotemporal pattern detection algorithm that can detect hazardous content in streaming video in real time. A tool is developed for producing test videos with hazardous content, which are then used to evaluate the proposed algorithm as well as an existing post-processing tool currently used for detecting such patterns. To perform the detection in real time, the proposed algorithm was implemented on a dual-core processor, using a pipelined/parallel software architecture. Results indicate that the proposed method provides better detection performance, allowing for the masking of seizure-inducing patterns in real time.
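The sketch below illustrates one simple way a frame-level flash detector might work: compute mean luminance per frame, count opposing luminance swings in a one-second window, and flag windows that exceed a flashes-per-second limit. The thresholds and the detection rule are illustrative stand-ins and are not the paper's spatiotemporal algorithm.

```python
# Minimal sketch of frame-level flash detection for photosensitivity hazards.
# The luminance-swing threshold and flashes-per-second limit are illustrative.
import numpy as np

def frame_luminance(frame_rgb):
    """Mean relative luminance of an RGB frame (pixel values in 0..255)."""
    r, g, b = frame_rgb[..., 0], frame_rgb[..., 1], frame_rgb[..., 2]
    return float(np.mean(0.2126 * r + 0.7152 * g + 0.0722 * b)) / 255.0

def count_flashes(frames, swing_threshold=0.1):
    """Count opposing luminance transitions (bright->dark->bright...) in a frame window."""
    lum = [frame_luminance(f) for f in frames]
    flashes, direction = 0, 0
    for prev, cur in zip(lum, lum[1:]):
        delta = cur - prev
        if abs(delta) >= swing_threshold:
            new_dir = 1 if delta > 0 else -1
            if new_dir != direction:      # direction reversal = one flash transition
                flashes += 1
                direction = new_dir
    return flashes

def is_hazardous(frames, fps, max_flashes_per_second=3):
    """Flag any one-second sliding window whose flash count exceeds the limit."""
    window = int(fps)
    for start in range(0, max(1, len(frames) - window + 1)):
        if count_flashes(frames[start:start + window]) > max_flashes_per_second:
            return True
    return False
```

In a streaming setting, the same window computation could be split across cores, with one stage decoding and computing luminance while another scans completed windows, which is the kind of pipelined/parallel split the abstract describes.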
Journal of Medical Systems | 2015
Mwaffaq Otoom; Hussam Alshraideh; Hisham M. Almasaeid; Diego López-de-Ipiña; José Bravo
Diabetes is a chronic disease that imposes substantial costs worldwide. One major challenge in the control of diabetes is the real-time determination of the proper insulin dose. In this paper, we develop a prototype for real-time blood sugar control, integrated with the cloud. Our system controls blood sugar by observing the blood sugar level and accordingly determining the appropriate insulin dose based on the patient's historical data, all in real time and automatically. To determine the appropriate insulin dose, we propose two statistical models for modeling blood sugar profiles, namely an ARIMA model and a Markov-based model. Our evaluation of the two models shows that the ARIMA model outperforms the Markov-based model in terms of prediction accuracy.
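A minimal sketch of the ARIMA side of such a predictor is shown below, using the statsmodels ARIMA implementation for a one-step-ahead glucose forecast followed by a simple correction-factor dose calculation. The model order, glucose readings, target, and correction factor are all illustrative assumptions, not values from the paper.

```python
# Minimal sketch of one-step-ahead blood glucose prediction with an ARIMA model.
# The (p, d, q) order and the synthetic readings are illustrative only.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

# Hypothetical glucose readings (mg/dL), e.g. sampled every 15 minutes.
history = np.array([110, 118, 126, 140, 152, 147, 138, 129, 121, 117, 115, 119], dtype=float)

model = ARIMA(history, order=(2, 1, 1))      # illustrative order, not tuned to real data
fit = model.fit()

next_reading = fit.forecast(steps=1)[0]
print(f"Predicted next glucose reading: {next_reading:.1f} mg/dL")

# A dosing step could map the prediction to an insulin dose via a clinician-supplied
# correction factor; both numbers below are hypothetical.
target, correction_factor = 120.0, 50.0      # mg/dL, mg/dL lowered per insulin unit
dose = max(0.0, (next_reading - target) / correction_factor)
print(f"Suggested correction dose: {dose:.2f} units (illustrative only)")
```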
IEEE Transactions on Computers | 2015
Mwaffaq Otoom; JoAnn M. Paul
We develop a new metric, Capacity, which evaluates the performance of chip heterogeneous multiprocessors (CHMs) that process multiple, variable heterogeneous workloads, which we refer to as demands. In contrast to single-valued metrics such as throughput, Capacity is a shape: a surface in n dimensions and a curve in two dimensions. We show how Capacity is a successor to throughput through an automobile production analogy, motivating why multiprocessors should be viewed as plants rather than as production pipelines. To analyze Capacity curve shapes, we propose a Demand Characterization Method (DCM) to be used in conjunction with the Capacity metric to identify optimal CHM designs for specific demands. Our experimental results show that Capacity is a better predictor of optimal designs than single-valued metrics.
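As a rough illustration of what a two-dimensional Capacity curve could look like, the sketch below sweeps the offered rate of one channel type and computes the largest rate of a second type that a fixed pool of cores can still sustain under a simple utilization bound. The service times and the scheduling model are hypothetical and do not reproduce the paper's formulation.

```python
# Minimal sketch of tracing a two-dimensional "capacity" curve for a chip serving
# two channel types on a fixed pool of cores. Numbers are illustrative only.
CORES = 8
SERVICE_TIME = {"video": 0.020, "audio": 0.004}   # hypothetical core-seconds per item

def max_rate_b(rate_a, type_a="video", type_b="audio"):
    """Largest sustainable rate of type_b (items/s) given a fixed rate of type_a,
    using a utilization bound: total core-seconds demanded per second <= CORES."""
    spare = CORES - rate_a * SERVICE_TIME[type_a]
    return max(0.0, spare / SERVICE_TIME[type_b])

# Sweep the video rate and record the audio rate the chip can still sustain;
# the resulting (video, audio) pairs trace one capacity curve for this design.
capacity_curve = [(rate_a, max_rate_b(rate_a)) for rate_a in range(0, 401, 50)]
for rate_a, rate_b in capacity_curve:
    print(f"video={rate_a:4d}/s  ->  max audio={rate_b:7.1f}/s")
```

Comparing the curves of two candidate designs over the same demand sweep, rather than a single throughput number, is the kind of shape-based comparison the metric is meant to support.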
International Journal of Technology Enhanced Learning | 2018
Mohammad A. Alzubaidi; Mwaffaq Otoom
Class participation plays a vital role in the learning process during classroom instruction. Deaf students often have difficulty participating in class discussions. Several studies have shown that deaf people are better able to interpret speech when they can view the lip movements of a speaker. This paper proposes an assistive device, called the Discussion-Facilitator, which aims to enable deaf students to better participate in classroom discussions. This is done by combining the speech-recognised text of the lecture with a live video stream that is zoomed in on the lecturer's face. The student is also able to write a text response and play it over loudspeakers. Nine deaf students participated in a usability test. The results show that viewing lip movements combined with the speech-recognised text of the lecturer contributed to the understanding of the lecturer's speech, and that our prototype makes the engagement of deaf students in classroom discussion more effective.
Assistive Technology | 2018
Mwaffaq Otoom; Mohammad A. Alzubaidi
Sign language can be used to facilitate communication with and between deaf or hard-of-hearing (Deaf/HH) people. With the advent of video streaming applications in smart TVs and mobile devices, it is now possible to use sign language to communicate over worldwide networks. In this article, we develop a prototype assistive device for real-time speech-to-sign translation. The proposed device aims at enabling Deaf/HH people to access and understand materials delivered in mobile streaming videos through pipelined and parallel processing for real-time translation, and through eye-tracking-based user-satisfaction detection that supports dynamic learning to improve speech-to-sign translation. We conduct two experiments to evaluate the performance and usability of the proposed assistive device. Nine deaf people participated in these experiments. Our real-time performance evaluation shows that adding the viewer's attention-based feedback reduced translation error rates by 16% (per the sign error rate [SER] metric) and increased translation accuracy by 5.4% (per the bilingual evaluation understudy [BLEU] metric) compared to a non-real-time baseline system without these features. The usability study results indicate that our assistive device was also pleasant and satisfying to deaf users, and it may contribute to greater engagement of deaf people in day-to-day activities.
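The sketch below illustrates the pipelined structure such a device might use: speech recognition, sign-gloss translation, and rendering run as separate stages connected by queues, so a new utterance can be processed while the previous one is still being signed. The stage bodies are placeholders; they are not the paper's recognizer, translator, or avatar renderer, and the eye-tracking feedback loop is omitted.

```python
# Minimal sketch of a three-stage speech-to-sign pipeline using threads and queues.
# Stage functions are placeholders standing in for real ASR, translation, and rendering.
import queue
import threading

def stage(worker, inbox, outbox):
    """Run `worker` on items from `inbox`, forwarding results to `outbox` until None arrives."""
    while True:
        item = inbox.get()
        if item is None:
            if outbox is not None:
                outbox.put(None)       # propagate end-of-stream to the next stage
            break
        result = worker(item)
        if outbox is not None:
            outbox.put(result)

recognize = lambda audio_chunk: f"text({audio_chunk})"     # placeholder speech recognition
translate = lambda text: f"gloss({text})"                  # placeholder sign-gloss translation
render    = lambda gloss: print("signing:", gloss)         # placeholder avatar rendering

audio_q, text_q, gloss_q = queue.Queue(), queue.Queue(), queue.Queue()
threads = [
    threading.Thread(target=stage, args=(recognize, audio_q, text_q)),
    threading.Thread(target=stage, args=(translate, text_q, gloss_q)),
    threading.Thread(target=stage, args=(render, gloss_q, None)),
]
for t in threads:
    t.start()

for chunk in ["chunk1", "chunk2", "chunk3"]:   # simulated audio chunks
    audio_q.put(chunk)
audio_q.put(None)                              # signal end of stream
for t in threads:
    t.join()
```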