Kenan M Matawie
University of Western Sydney
Publications
Featured research published by Kenan M Matawie.
Total Quality Management & Business Excellence | 2012
Stanislaus Roque Lobo; Kenan M Matawie; Premaratne Samaranayake
This paper provides an assessment of the quality management capabilities of manufacturing industries in the Western Sydney Region of New South Wales, Australia, based on the conceptual Quality Management Assessment Framework (QMAF) model, which incorporates Information Communication Technology as an enabler. Survey data collected from a range of small, medium and large manufacturing organisations were used in this assessment. The results established the general reliability of the QMAF model and provided a comprehensive profile of the quality management capabilities, highlighting deficiencies in industries in the Western Sydney Region. The main limitation of this research was the small sample size, even after follow-up mail-outs to the whole population of 1236 organisations. The analysis provides valuable guidelines for managers of the participating organisations aiming to improve their business processes in many ways, including benchmarking their performance against the best-case scores of the QMAF, a precursor to identifying insights that could be used to promote quality improvement programmes.
Applied Economics | 2010
Albert Assaf; Kenan M Matawie
This article analyses the efficiency of health care foodservice operations and its determinants using a Data Envelopment Analysis (DEA) bootstrapping approach. The purpose of using the bootstrapping approach is two-fold: first, to obtain bias-corrected estimates and confidence intervals of DEA efficiency scores; and second, to overcome the correlation problem of DEA efficiency scores and to provide consistent inference in explaining the determinants of health care foodservice efficiency. The approach was implemented on a sample of 89 health care foodservice operations. The results showed the presence of inefficiency in the sample, with an average efficiency level of 72.6%. Further, the results from analysing the determinants of health care foodservice operations provided policy implications regarding the factors that might improve the efficiency of these operations.
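To make the approach concrete, the following is a minimal sketch of input-oriented CCR DEA with a naive bootstrap for bias correction, assuming hypothetical data. The paper uses the smoothed Simar-Wilson bootstrap; this sketch omits the kernel smoothing and simply re-evaluates each unit against resampled reference frontiers.

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_input(X, Y, Xref=None, Yref=None):
    """Input-oriented CCR DEA efficiency scores.
    X: (n, m) input matrix, Y: (n, s) output matrix; optional reference set."""
    Xref = X if Xref is None else Xref
    Yref = Y if Yref is None else Yref
    n, m = X.shape
    nref, s = Xref.shape[0], Y.shape[1]
    scores = np.empty(n)
    for o in range(n):
        # variables: [theta, lambda_1 .. lambda_nref]; minimise theta
        c = np.zeros(nref + 1)
        c[0] = 1.0
        A_ub, b_ub = [], []
        for i in range(m):    # sum_j lambda_j * x_ji <= theta * x_oi
            A_ub.append(np.r_[-X[o, i], Xref[:, i]])
            b_ub.append(0.0)
        for r in range(s):    # sum_j lambda_j * y_jr >= y_or
            A_ub.append(np.r_[0.0, -Yref[:, r]])
            b_ub.append(-Y[o, r])
        res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                      bounds=[(0, None)] * (nref + 1), method="highs")
        scores[o] = res.x[0]
    return scores

# toy data: 20 units, 2 inputs, 1 output (hypothetical, not the paper's sample)
rng = np.random.default_rng(0)
X = rng.uniform(1.0, 10.0, (20, 2))
Y = rng.uniform(1.0, 10.0, (20, 1))
orig = dea_ccr_input(X, Y)

# naive bootstrap: re-evaluate each unit against resampled reference frontiers
boot = np.array([dea_ccr_input(X, Y, X[idx], Y[idx])
                 for idx in (rng.integers(0, 20, 20) for _ in range(100))])
bias_corrected = 2.0 * orig - boot.mean(axis=0)
```

A unit scoring 1.0 lies on the estimated frontier; the bootstrap spread around each score gives the confidence intervals the article relies on.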
Applied Economics Letters | 2010
Albert Assaf; Kenan M Matawie
The major aim of this article is to apply the bootstrapping methodology to the estimation of the metafrontier model. The article has two parts. The first part deals with the technical details of the metafrontier model, and the second presents an application of the model using cross-sectional input/output data on health care foodservice operations. The results obtained by bootstrapping the metafrontier model are presented and discussed.
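A metafrontier envelopes the group-specific frontiers, and the technology gap ratio (TGR) compares each unit's efficiency against both. The sketch below uses a deliberately simple single-input/single-output CRS frontier and a naive within-group bootstrap on hypothetical data; the paper's procedure is more involved, so treat this only as an illustration of the TGR mechanics.

```python
import numpy as np

def crs_eff(x, y, xref, yref):
    """Single-input/single-output CRS efficiency against a reference frontier."""
    return (y / x) / (yref / xref).max()

rng = np.random.default_rng(1)
# two hypothetical groups; group 1 operates under a more productive technology
x1 = rng.uniform(1.0, 5.0, 40); y1 = 0.9 * x1 * rng.uniform(0.5, 1.0, 40)
x2 = rng.uniform(1.0, 5.0, 40); y2 = 0.6 * x2 * rng.uniform(0.5, 1.0, 40)
xm, ym = np.r_[x1, x2], np.r_[y1, y2]     # pooled metafrontier sample

group_eff = crs_eff(x2, y2, x2, y2)       # group-2 units vs their own frontier
meta_eff = crs_eff(x2, y2, xm, ym)        # same units vs the metafrontier
tgr = meta_eff / group_eff                # technology gap ratio, always <= 1

# naive bootstrap CI for the mean TGR (resampling within each group)
boots = []
for _ in range(500):
    i, j = rng.integers(0, 40, 40), rng.integers(0, 40, 40)
    g = crs_eff(x2[j], y2[j], x2[j], y2[j])
    m = crs_eff(x2[j], y2[j], np.r_[x1[i], x2[j]], np.r_[y1[i], y2[j]])
    boots.append(np.mean(m / g))
ci = np.percentile(boots, [2.5, 97.5])
```

A TGR near 1 means the group's own frontier nearly coincides with the metafrontier; the bootstrap percentile interval quantifies the uncertainty in that gap.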
Journal of Applied Statistics | 2010
Kenan M Matawie; Albert Assaf
The significant impact of health foodservice operations on the total operational cost of the hospital sector has increased the need to improve the efficiency of these operations. Although important studies on the performance of foodservice operations have been published in various academic journals and industrial reports, the findings and implications remain simple and limited in scope and methodology. This paper investigates two popular methodologies in the efficiency literature: Bayesian “stochastic frontier analysis” (SFA) and “data envelopment analysis” (DEA). The paper discusses the statistical advantages of the Bayesian SFA and compares it with an extended DEA model. The results from a sample of 101 hospital foodservice operations show the existence of inefficiency in the sample, and indicate significant differences between the average efficiency generated by the Bayesian SFA and DEA models. The ranking of efficiency is, however, statistically independent of the methodologies.
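The paper's Bayesian SFA requires MCMC estimation, which is beyond a short sketch, but its deterministic cousin, corrected OLS (COLS), conveys the flavour of a parametric frontier: fit a production function by least squares, shift it up so it envelopes the data, and read inefficiency off the residuals. All data below are simulated for illustration (only the sample size of 101 echoes the paper).

```python
import numpy as np

rng = np.random.default_rng(2)
n = 101                                    # same size as the paper's sample
x = rng.uniform(1.0, 5.0, n)               # log input (e.g. labour hours)
u = rng.exponential(0.3, n)                # one-sided inefficiency term
y = 1.0 + 0.8 * x - u + rng.normal(0.0, 0.05, n)   # log output

# COLS: fit OLS, then shift the intercept so the line envelopes the data
slope, intercept = np.polyfit(x, y, 1)
resid = y - (intercept + slope * x)
efficiency = np.exp(resid - resid.max())   # technical efficiency in (0, 1]
```

SFA proper splits the residual into noise and inefficiency components rather than attributing everything to inefficiency, which is why the paper's Bayesian SFA and DEA averages can differ while preserving the efficiency ranking.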
Journal of Network and Computer Applications | 2017
Rekha Nachiappan; Bahman Javadi; Rodrigo N. Calheiros; Kenan M Matawie
Cloud storage systems are now mature enough to handle a massive volume of heterogeneous and rapidly changing data, known as Big Data. However, failures are inevitable in cloud storage systems as they are composed of large-scale hardware components. Improving fault tolerance in cloud storage systems for Big Data applications is a significant challenge. Replication and erasure coding are the most important data reliability techniques employed in cloud storage systems. Both techniques have their own trade-offs in various parameters such as durability, availability, storage overhead, network bandwidth and traffic, energy consumption and recovery performance. This survey explores the challenges involved in employing both techniques in cloud storage systems for Big Data applications with respect to the aforementioned parameters. In this paper, we also introduce a conceptual hybrid technique to further improve the reliability, latency, bandwidth usage, and storage efficiency of Big Data applications on cloud computing.
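The replication-versus-erasure-coding trade-off can be shown with the simplest possible erasure code, a single XOR parity block. Real systems use Reed-Solomon codes such as RS(6,3), but the recovery principle and the storage-overhead arithmetic are the same; the block sizes and counts below are arbitrary toy values.

```python
from functools import reduce

def xor_blocks(blocks):
    """XOR equal-length byte blocks together."""
    return bytes(reduce(lambda a, b: bytes(p ^ q for p, q in zip(a, b)), blocks))

# four data blocks plus one XOR parity block: a (4+1) single-parity code
data = [bytes([i] * 8) for i in range(1, 5)]
parity = xor_blocks(data)

# lose block 2; reconstruct it from the survivors and the parity block
survivors = [data[0], data[1], data[3], parity]
recovered = xor_blocks(survivors)
assert recovered == data[2]

# storage overhead: ratio of raw bytes stored to user bytes
replication_overhead = 3.0          # 3-way replication, tolerates 2 losses
parity_overhead = (4 + 1) / 4       # this toy code: 1.25x, tolerates 1 loss
```

The numbers capture the survey's core tension: erasure coding cuts storage overhead sharply, but recovery must read several surviving blocks over the network, whereas replication recovers by copying a single replica.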
International Journal of Contemporary Hospitality Management | 2008
Albert Assaf; Kenan M Matawie; Deborah Blackman
Purpose – The purpose of this paper is to overcome the problems surrounding the operational performance of health care foodservice systems and provide a comprehensive comparison and analysis of the performance of all the different types of foodservice systems. The paper seeks to show that research addressing the operational performance of health care foodservice systems is subjective and outdated.
Design/methodology/approach – Discussion with foodservice managers, coupled with a review of the literature, was undertaken to determine the variables of operational performance in the different types of foodservice systems. Statistical analysis was then used to determine the areas of difference between the systems based on a sample of 90 hospital foodservice operations.
Findings – Results showed significant differences between the systems with regard to critical variables such as labor, skill level of employees and size of the production area. However, no significant differences were found for other variables suc...
international performance computing and communications conference | 2013
Bahman Javadi; Kenan M Matawie; David P. Anderson
Volunteer computing systems are large-scale distributed systems with a large number of heterogeneous and unreliable Internet-connected hosts. Volunteer computing resources are suitable mainly for running High-Throughput Computing (HTC) applications because of their unavailability rate and frequent churn. Although they provide Peta-scale computing power for many scientific projects across the globe, efficient usage of this platform for different types of applications has not yet been investigated in depth. Characterizing, analyzing and modeling the availability of such resources in volunteer computing is therefore essential for efficient application scheduling. In this paper, we focus on statistical modeling of volunteer resources, which exhibit non-random patterns in their availability time. The proposed models take into account the autocorrelation structure in the subset of hosts whose availability has short/long-range dependence. We apply our methodology to real traces from the SETI@home project with more than 230,000 hosts. We show that a Markovian arrival process can model the availability and unavailability intervals of volunteer resources with a reasonable to excellent level of accuracy.
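The simplest special case of a Markovian arrival process for availability traces is a two-state on/off continuous-time Markov chain fitted from the mean interval lengths. The paper fits richer models that capture autocorrelation; the sketch below, on synthetic exponential intervals with made-up durations, only shows the fitting idea.

```python
import numpy as np

rng = np.random.default_rng(3)
# synthetic host trace: exponential availability/unavailability durations (hours)
up_intervals = rng.exponential(20.0, 500)
down_intervals = rng.exponential(5.0, 500)

mu_up, mu_down = up_intervals.mean(), down_intervals.mean()
availability = mu_up / (mu_up + mu_down)     # long-run fraction of time up

# generator matrix of the fitted two-state on/off chain (state 0 = up)
Q = np.array([[-1.0 / mu_up,    1.0 / mu_up],
              [ 1.0 / mu_down, -1.0 / mu_down]])
```

Real volunteer hosts show dependence between successive intervals, which is exactly why the paper moves beyond this memoryless baseline to Markovian arrival processes.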
bioinformatics and biomedicine | 2013
Arshad Muhammad Mehar; Kenan M Matawie; Anthony J. Maeder
Partitioning data into a finite number k of homogeneous and well-separated clusters (groups) without the use of prior knowledge is carried out by unsupervised partitioning algorithms such as the k-means clustering algorithm. To evaluate the resulting clusters and find the optimal number of clusters, properties such as cluster density, size, shape and separability are typically examined by cluster validation methods. The main aim of clustering analysis is to assess the overall compactness of the clustering solution; for example, variance within a cluster should be minimised and separation between clusters maximised. In this study, we developed a new method to find the optimal number of clusters k for k-means clustering, using the features and variables inherited from the datasets. The proposed method is based on comparing the movement of objects forward and back between the k and k+1 cluster solutions to find a joint probability, which distinguishes it from other methods and indexes based on distance. The performance of the method is also compared with some existing methods on two simulated datasets.
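The idea of tracking how objects move between the k and k+1 solutions can be illustrated with a simple pair-stability fraction: the share of co-clustered point pairs that stay together when k increases. This is an illustrative stand-in, not the authors' joint-probability index, and the two-blob dataset is simulated.

```python
import numpy as np

def kmeans(X, k, rng, n_iter=100):
    """Plain Lloyd's algorithm; returns cluster labels."""
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(n_iter):
        labels = np.argmin(((X[:, None, :] - centers) ** 2).sum(-1), axis=1)
        new = np.array([X[labels == j].mean(0) if np.any(labels == j)
                        else centers[j] for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return labels

rng = np.random.default_rng(4)
# two well-separated Gaussian blobs
X = np.vstack([rng.normal(0.0, 0.5, (100, 2)),
               rng.normal(5.0, 0.5, (100, 2))])

for k in (2, 3, 4):
    a, b = kmeans(X, k, rng), kmeans(X, k + 1, rng)
    # fraction of co-clustered point pairs that stay together from k to k+1
    same_a = a[:, None] == a[None, :]
    same_b = b[:, None] == b[None, :]
    stability = (same_a & same_b).sum() / same_a.sum()
    print(k, round(float(stability), 3))
```

When k matches the true structure, moving to k+1 mostly splits one cluster, so the drop in pair stability signals where to stop increasing k.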
ieee international conference on quality and reliability | 2011
Premaratne Samaranayake; Kenan M Matawie; Rajalingam Rajayogan
In recent times there has been increased research activity on developing appropriate models to assess road traffic safety through collision prediction. However, only a limited amount of this work has addressed safety at highway-railway grade crossings. The objective of this paper is to propose a conceptual framework for rail safety evaluation at railway-highway grade crossings using suitable safety risk scores (called the ‘Safety Risk Index’), based on a combination of accident frequency and consequences. The Safety Risk Index (SRI) is a simple composite index that can measure, compare and rank safety levels across different risk situations and locations, including the worst and most dangerous locations. This approach facilitates the assessment of safety risks at grade crossings, and identifies, ranks and prioritises the worst-performing or problematic crossings (black spots).
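A composite index of this kind normalises each component and combines them into a single score used for ranking. The paper does not publish its exact formula, so the crossing names, values and equal weights below are hypothetical; the sketch only shows the ranking mechanics.

```python
# hypothetical grade-crossing data: (predicted accidents/year, consequence score)
data = {"A": (0.9, 3.0), "B": (0.2, 8.0), "C": (0.5, 2.0)}

def normalise(vals):
    """Min-max scale a list of values onto [0, 1]."""
    lo, hi = min(vals), max(vals)
    return [(v - lo) / (hi - lo) for v in vals]

names = list(data)
freq = normalise([data[n][0] for n in names])
cons = normalise([data[n][1] for n in names])
# equal-weight composite; the paper's weighting scheme may differ
sri = {n: 0.5 * f + 0.5 * c for n, f, c in zip(names, freq, cons)}
ranked = sorted(sri, key=sri.get, reverse=True)   # worst crossing first
```

Sorting by the composite score is what lets the SRI surface black spots for prioritised treatment.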
Bellman Prize in Mathematical Biosciences | 1992
Jiao Zhaorong; Kenan M Matawie; C. A. McGilchrist
Tests for biotyping isolates give a result that is classified as either positive or negative, indicative of growth or nongrowth of bacteria. The reproducibility of such tests is measured by the number of discordances in replicates of the same measurement. In this analysis the probability distribution of the number of discordances is estimated for each of several tests in the presence of possible random between-laboratory effects as well as random laboratory-test interactions, which are also estimated.
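With a constant discordance probability p and no laboratory effects, the number of discordances in n replicate pairs is plain binomial. The paper's contribution is estimating this distribution when p is perturbed by random between-laboratory effects and laboratory-test interactions; the fixed-p binomial below is the baseline those effects modify, with n and p chosen arbitrarily.

```python
from math import comb

def discordance_pmf(n, p):
    """Binomial pmf for the number of discordant results in n replicate pairs,
    assuming a constant discordance probability p (no laboratory effects)."""
    return [comb(n, d) * p**d * (1 - p)**(n - d) for d in range(n + 1)]

pmf = discordance_pmf(5, 0.1)   # e.g. 5 replicates, 10% discordance rate
```

Random laboratory effects make p itself vary across labs, which inflates the variance of the discordance count relative to this binomial baseline.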