Mohamed Shoukri
University of Western Ontario
Publications
Featured research published by Mohamed Shoukri.
International Journal of Nursing Studies | 2011
Jan Kottner; Laurent Audige; Stig Brorson; Allan Donner; Byron J. Gajewski; Asbjørn Hróbjartsson; Chris Roberts; Mohamed Shoukri; David L. Streiner
OBJECTIVE Results of reliability and agreement studies are intended to provide information about the amount of error inherent in any diagnosis, score, or measurement. The level of reliability and agreement among users of scales, instruments, or classifications is widely unknown. Therefore, there is a need for rigorously conducted interrater and intrarater reliability and agreement studies. Information about sample selection, study design, and statistical analysis is often incomplete. Because of inadequate reporting, interpretation and synthesis of study results are often difficult. Widely accepted criteria, standards, or guidelines for reporting reliability and agreement in the health care and medical field are lacking. The objective was to develop guidelines for reporting reliability and agreement studies. STUDY DESIGN AND SETTING Eight experts in reliability and agreement investigation developed guidelines for reporting. RESULTS Fifteen issues that should be addressed when reliability and agreement are reported are proposed. The issues correspond to the headings usually used in publications. CONCLUSION The proposed guidelines intend to improve the quality of reporting.
Preventive Veterinary Medicine | 1992
S. Wayne Martin; Mohamed Shoukri; Meg A. Thorburn
The effects of test sensitivity and specificity, and the impact of true prevalence of disease, on test results at the individual level are well known. When individuals are tested to ascertain whether an aggregate of animals (e.g. a herd) is affected by a condition of interest, the number of animals tested and the critical number of reactors used to decide the health status of the herd become very important in influencing the herd-level sensitivity and specificity. If the test specificity is less than 100%, then as the number of animals tested increases, the probability of at least one false-positive animal increases, and thus the herd specificity decreases. The herd sensitivity, herd negative predictive value and herd apparent prevalence increase directly with the number of animals tested, but the herd positive predictive value decreases. Herd sensitivity can be increased by using a test that is less than 100% specific. These features should be borne in mind when interpreting the natural history of disease, as well as when conducting disease surveys or disease-control campaigns based on surrogate tests.
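The herd-level behaviour described above follows from simple probability when a herd is declared positive at a cutpoint of one reactor. The minimal Python sketch below is not taken from the paper; the test characteristics, within-herd prevalence, and single-reactor cutpoint are illustrative assumptions chosen only to make the trade-off visible: herd sensitivity rises with the number of animals tested while herd specificity falls.

def herd_level_performance(se, sp, within_herd_prev, n):
    """Herd-level sensitivity and specificity when a herd is called positive
    if at least one of n tested animals reacts (cutpoint = 1 reactor).

    se, sp           -- sensitivity/specificity of the individual-animal test
    within_herd_prev -- assumed true prevalence among animals in an infected herd
    n                -- number of animals tested per herd
    """
    # Probability that a single animal from an infected herd tests positive
    p_pos_infected = within_herd_prev * se + (1 - within_herd_prev) * (1 - sp)
    herd_se = 1 - (1 - p_pos_infected) ** n   # at least one reactor
    herd_sp = sp ** n                          # all n animals test negative
    return herd_se, herd_sp

# Illustration: herd specificity falls as more animals are tested
for n in (5, 10, 30, 60):
    hse, hsp = herd_level_performance(se=0.90, sp=0.98, within_herd_prev=0.20, n=n)
    print(f"n={n:3d}  herd Se={hse:.3f}  herd Sp={hsp:.3f}")

Under these assumed values, herd specificity drops from roughly 0.90 at n = 5 to about 0.30 at n = 60, while herd sensitivity climbs towards 1, which is the pattern the abstract describes.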
Statistical Methods in Medical Research | 2004
Mohamed Shoukri; M H Asyali; Allan Donner
The reliability of continuous or binary outcome measures is usually assessed by estimation of the intraclass correlation coefficient (ICC). A crucial step for this purpose is the determination of the required sample size. In this review, we discuss the contributions made in this regard and derive the optimal allocation for the number of subjects k and the number of repeated measurements n that minimize the variance of the estimated ICC. Cost constraints are discussed for both normally and non-normally distributed responses, with emphasis on the case of dichotomous assessments. Tables showing optimal choices of k and n are given along with the guidelines for the efficient design of reliability studies.
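For a one-way random-effects model, the variance being minimized is usually the large-sample (Fisher) approximation Var(ICC-hat) ≈ 2(1-ρ)²[1+(n-1)ρ]² / [k n (n-1)]. The Python sketch below works under that assumption; the budget and the per-subject and per-measurement costs are invented for illustration and are not from the paper. It searches over the number of replicates n, with the number of subjects k set to what the budget allows, for the design that minimizes this variance.

import math

def icc_variance(rho, k, n):
    """Large-sample variance of the one-way ANOVA estimator of the ICC
    (Fisher's approximation) for k subjects with n measurements each."""
    return 2 * (1 - rho) ** 2 * (1 + (n - 1) * rho) ** 2 / (k * n * (n - 1))

def best_design(rho, budget, cost_subject, cost_measurement, n_max=20):
    """Grid-search the number of replicates n (and the implied number of
    subjects k affordable within the budget) that minimizes Var(ICC-hat)."""
    best = None
    for n in range(2, n_max + 1):
        k = math.floor(budget / (cost_subject + n * cost_measurement))
        if k < 2:
            continue
        v = icc_variance(rho, k, n)
        if best is None or v < best[2]:
            best = (k, n, v)
    return best

k, n, v = best_design(rho=0.6, budget=1000, cost_subject=10, cost_measurement=5)
print(f"k={k} subjects, n={n} replicates, Var(ICC-hat) ~ {v:.5f}")

With these made-up costs the search favours only a handful of replicates per subject, illustrating the general message that beyond a few repeated measurements extra replicates buy little precision; actual recommended designs should be read from the paper's tables.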
Statistics in Medicine | 2000
Allan Donner; Mohamed Shoukri; Neil Klar; Emma Bartfay
Procedures are developed and compared for testing the equality of two dependent kappa statistics in the case of two raters and a dichotomous outcome variable. Such problems may arise when each subject in a sample is rated under two distinct settings, and it is of interest to compare the observed levels of inter-observer and intra-observer agreement. The procedures compared are extensions of previously developed procedures for comparing kappa statistics computed from independent samples. The results of a Monte Carlo simulation show that adjusting for the dependency between samples tends to be worthwhile only if the between-setting correlation is comparable in magnitude to the within-setting correlations. In this case, a goodness-of-fit procedure that takes into account the dependency between samples is recommended.
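For orientation, the quantity being compared is the usual chance-corrected kappa for two raters and a binary outcome. The Python sketch below uses hypothetical 2x2 counts and only computes the two kappas; it does not implement the paper's dependent-sample tests. Because both tables come from the same subjects, testing the equality of the two values requires the dependence-adjusted procedures the paper develops rather than independent-sample comparisons.

import numpy as np

def cohens_kappa(table):
    """Cohen's kappa for two raters and a dichotomous outcome.
    `table` is the 2x2 cross-classification of the two raters' calls."""
    t = np.asarray(table, dtype=float)
    n = t.sum()
    p_obs = np.trace(t) / n                                  # observed agreement
    p_exp = (t.sum(axis=0) * t.sum(axis=1)).sum() / n ** 2   # chance agreement
    return (p_obs - p_exp) / (1 - p_exp)

# Hypothetical agreement tables for the same 100 subjects rated in two settings
kappa_setting_1 = cohens_kappa([[40, 10], [8, 42]])   # e.g. inter-observer agreement
kappa_setting_2 = cohens_kappa([[45, 5], [6, 44]])    # e.g. intra-observer agreement
print(kappa_setting_1, kappa_setting_2)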
Epidemiology and Infection | 1997
Mutsuyo Kadohira; John J. McDermott; Mohamed Shoukri; M. N. Kyule
Variations in the sero-prevalence of antibody to Brucella infection by cow, farm and area factors were investigated for three contrasting districts in Kenya: Samburu, an arid and pastoral area; Kiambu, a tropical highland area; and Kilifi, a typical tropical coastal area. Cattle were selected by a two-stage cluster sampling procedure and visited once between August 1991 and 1992. Schall's algorithm, a statistical model suitable for multi-level analysis, was used. Using this model, older age, free grazing and large herd size (> or = 31) were associated with higher seroprevalence. Also, significant farm-to-farm, area-to-area and district-to-district variations were estimated. The patterns of high-risk districts and areas seen were consistent with known animal husbandry and movement risk factors, but the larger than expected farm-to-farm variation within high-risk areas and districts could not be explained. Thus, a multi-level method provided additional information beyond conventional analyses of sero-prevalence data.
Cancer | 1992
Walid A. Mourad; Bayzar Erkman-Balis; Sandra Livingston; Mohamed Shoukri; Charles E. Cox; Santo V. Nicosia; David T. Rowlands
Argyrophilic nucleolar organizer regions (AgNOR) have been correlated with proliferative activity of neoplasms. Increased AgNOR may reflect increased proliferative activity of cells or ploidy. To explore this hypothesis, 41 breast carcinomas were processed for AgNOR silver staining and DNA flow cytometry. AgNOR counts were expressed as mean AgNOR/nucleus and percentage of tumor cells with more than five AgNOR/nucleus. The first count was designated mean AgNOR or mAgNOR, and the second count was designated AgNOR proliferative index or pAgNOR. Using Mantel-Haenszel statistical analysis, carcinomas that exhibited mAgNOR of 2.4 or more had a high likelihood of aneuploidy (P < 0.0001), an S-phase fraction of more than 5.8% (P < 0.003), or a diameter greater than 2 cm (P < 0.007). In addition, tumors with pAgNOR of 8% or more showed a statistically significant correlation with aneuploidy (P < 0.004), tumor grade (P < 0.04), and a more significant one with high S-phase fraction (P < 0.0001). No significant correlation was obtained between pAgNOR and tumor size or lymph node status. These data indicate that AgNOR quantitation reflects changes in DNA ploidy and cell proliferation. They also suggest that the mean AgNOR counts correlate best with DNA mass or ploidy and that the frequency of cells with a higher AgNOR count best reflects proliferative activity or S-phase fraction.
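The Mantel-Haenszel analysis referred to above tests the association between a dichotomized marker and an outcome across strata. As a minimal Python sketch, the function below implements the standard continuity-corrected Mantel-Haenszel chi-square for a set of 2x2 tables; the strata and counts shown are invented for illustration and are not the study's data.

import numpy as np
from scipy.stats import chi2

def mantel_haenszel_chi2(tables):
    """Mantel-Haenszel chi-square (1 df, continuity-corrected) for a set of
    stratified 2x2 tables, each given as [[a, b], [c, d]]."""
    a_sum = e_sum = v_sum = 0.0
    for (a, b), (c, d) in tables:
        n = a + b + c + d
        a_sum += a
        e_sum += (a + b) * (a + c) / n
        v_sum += (a + b) * (c + d) * (a + c) * (b + d) / (n ** 2 * (n - 1))
    stat = (abs(a_sum - e_sum) - 0.5) ** 2 / v_sum
    return stat, chi2.sf(stat, df=1)

# Hypothetical strata: high vs. low mAgNOR cross-classified with ploidy,
# stratified by tumour size (counts are invented, not from the study)
tables = [
    [[12, 3], [4, 9]],   # tumours > 2 cm
    [[5, 4], [2, 2]],    # tumours <= 2 cm
]
stat, p = mantel_haenszel_chi2(tables)
print(f"MH chi-square = {stat:.2f}, p = {p:.4f}")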
Preventive Veterinary Medicine | 1997
J.J. McDermott; Mutsuyo Kadohira; C.J O'Callaghan; Mohamed Shoukri
The relative variability of the sero-prevalence of antibodies to infectious bovine rhinotracheitis (IBR) due to cow, farm, and agroecological area levels was investigated for three contrasting districts in Kenya: Samburu, an arid and pastoral area; Kiambu, a tropical highland area; and Kilifi, a typical tropical coastal area. Cattle were selected by two-stage cluster sampling and visited once between August 1991 and 1992. Data on animal, farm, and area factors were analyzed using Schall's algorithm and MLn (multi-level, n-level), two generalized mixed-model programs suitable for multi-level analysis. Most variation in IBR sero-prevalence was from farm to farm. This was reflected by the many farm-level fixed effects (farm size, disease control measures and type of breeding) that were significant in models both ignoring and accounting for single variance components (clustering) at the farm, area, and district levels. Area-to-area and district-to-district variations were noted, but the area and district variance components were one-third and one-fifth the size of the farm variance components for both methods. As farm-to-farm variation differed markedly by farm size and district, the models in MLn were extended to allow for multiple farm-level variance components by these categories. For each, the sero-prevalence of IBR increased with age and was significantly decreased on small zero-grazing farms. These models, particularly the model with different farm variance components by district, fit the data better and highlighted that there was considerable farm-to-farm variation (differing by district) and that the available farm-level fixed effects did not predict IBR sero-prevalence well.
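To see how such variance components are read, consider the latent-variable (logit-scale) formulation of a multi-level logistic model, in which the residual cow-level variance is fixed at pi^2/3. The Python sketch below uses invented variance components, chosen only so that the area and district components are roughly one-third and one-fifth of the farm component to echo the pattern described above; they are not estimates from the study. It shows how each level's share of the total variation is computed.

import math

# Hypothetical logit-scale variance components for a multi-level
# random-intercept model (cow within farm within area within district).
# The values are illustrative only, not estimates from the study.
var_farm, var_area, var_district = 1.5, 0.5, 0.3
var_cow = math.pi ** 2 / 3   # residual variance on the latent logistic scale

total = var_farm + var_area + var_district + var_cow
for level, v in [("farm", var_farm), ("area", var_area),
                 ("district", var_district), ("cow (residual)", var_cow)]:
    print(f"{level:15s} share of variation: {v / total:.1%}")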
Preventive Veterinary Medicine | 1997
S.W. Martin; John A. Eves; Leonard A. Dolan; Robert F. Hammond; John M. Griffin; J. D. Collins; Mohamed Shoukri
The proximity of farms to badger setts was compared between farms that had experienced a tuberculosis breakdown and those that had not, over the 6-year period from 1988 to 1993. The data were derived from a badger removal study conducted in East Offaly County in the Republic of Ireland. Badger removal began in 1989 and continued through 1993; by the end of 1990, approximately 80% of all badgers caught in the 6-year period had been removed. All badgers were examined grossly for evidence of tuberculosis. Tuberculosis status of the approximately 900 study herds was based on the results of the single intradermal comparative skin test and/or lesions of bovine tuberculosis. All herds were tested at least once annually. The number of herds experiencing bovine tuberculosis declined over the period, particularly in the years 1992 and 1993. The data on farm and badger sett location were stored and analysed, initially, in a geographical information system. Owing to the badger removal programme, the distance between the barnyard of a typical farm and the nearest occupied badger sett increased by about 300 m per year, and the distance to the closest infected sett by about 600 m per year. In bivariate analyses, in the years 1988 and 1989, the risk of tuberculosis declined with increasing distance to a badger sett containing one or more tuberculous badgers. In multivariable logistic regression analyses, year and the average number of cattle tested per farm per year were controlled. A second, identical analysis was conducted to control for the repeated observations on the same herds using generalised estimating equations. In both analyses, the risk of a multiple-reactor tuberculosis breakdown decreased for herds at least 1000 m away from an infected badger sett, and increased as the number of infected badgers per infected sett increased. Despite the significantly reduced risk of a breakdown with increasing distance to infected badger setts, the relationship was not strong (sensitivity and specificity of the model in the low 70% range) and explained only 9-19% of tuberculosis breakdowns.
Preventive Veterinary Medicine | 1997
A. Busato; L. Steiner; S.W. Martin; Mohamed Shoukri; C. Gaillard
In 1993, an observational study was initiated to provide general information on animal health on extensive beef farms, to estimate disease frequency and the economic impact of calf diseases, and to identify risk factors related to health and weight gain. The longitudinal study was conducted from fall 1993 until winter 1994/95 and included 100 farms in western Switzerland. The basic concept was to follow one generation of calves on these farms and record all events concerning animal health from birth to weaning. The study population included 1270 calves (most were Angus crossbreds). Farm-management data were collected with a questionnaire conducted on the farm. Birth and weaning weights were obtained from the beef cattle breeding association. Clinical diagnoses and treatment costs were provided by the farm veterinarians. Two-thirds of the dead calves were submitted to a complete postmortem examination. Fifty-three percent of the farms in the study were primary-income-type farms while 47% were secondary-income-type farms. Thirty-eight percent of the farms were situated in the lower areas of Switzerland, 14% in the prealpine foothills, and the remaining 48% were located in mountain areas. Preweaning calf mortality was 5%. The main causes of calf deaths were respiratory diseases and digestive disorders. Twenty-two percent of the calves were treated at least once by a veterinarian; 36% of the treatments administered by the veterinarian were given because of diarrhea and 27% because of respiratory diseases. Disease incidence was highest during the months of November, December and January. The association of disease and potential farm-level risk factors was analysed using chi-square statistics and multivariable regression methods, including generalized estimating equations to adjust for herd effects. Specific risk factors for disease were not identified. Treatment for disease was not associated with 250-day standardized weight gain.
BMC Medical Research Methodology | 2008
Mohamed Shoukri; Dilek Colak; Namik Kaya; Allan Donner
Background: The within-subject coefficient of variation and the intra-class correlation coefficient are commonly used to assess the reliability or reproducibility of interval-scale measurements. Comparing the reproducibility or reliability of measurement devices or methods on the same set of subjects comes down to comparing dependent reliability or reproducibility parameters. Methods: In this paper, we develop several procedures for testing the equality of two dependent within-subject coefficients of variation computed from the same sample of subjects, a problem which, to the best of our knowledge, has not yet been dealt with in the statistical literature. The Wald, likelihood ratio, and score tests are developed. A simple regression procedure based on results due to Pitman and Morgan is constructed. Furthermore, we evaluate the statistical properties of these methods via extensive Monte Carlo simulations. The methodologies are illustrated on two data sets: the first consists of microarray gene expressions measured on two platforms, Affymetrix and Amersham. Because microarray experiments produce expressions for a large number of genes, one would expect the statistical tests to be asymptotically equivalent. To explore the behaviour of the tests at small or moderate sample sizes, we also illustrate the methodologies on data from computer-aided tomographic scans of 50 patients. Results: It is shown that the relatively simple Wald test (WT) is as powerful as the likelihood ratio test (LRT) and that both have consistently greater power than the score test. The regression test holds its empirical levels and on some occasions is as powerful as the WT and the LRT. Conclusion: A comparison between the reproducibility of two measuring instruments using the same set of subjects leads naturally to a comparison of two correlated indices. The presented methodology overcomes the difficulty, noted by data analysts, that dependence between data sets would confound any inferences one could make about the differences in measures of reliability and reproducibility. The statistical tests presented in this paper have good properties in terms of statistical power.
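The Python sketch below is not the Wald, likelihood-ratio, or score test developed in the paper; it is a bootstrap stand-in meant only to illustrate the two ingredients of the problem: the within-subject coefficient of variation computed from duplicate measurements, and the fact that both estimates come from the same subjects, so any resampling must be done by subject to preserve the dependence. The devices, noise levels, and data are invented.

import numpy as np

rng = np.random.default_rng(0)

def wscv(x1, x2):
    """Within-subject coefficient of variation from duplicate measurements
    (x1 and x2 are the two replicates, one entry per subject)."""
    within_var = (x1 - x2) ** 2 / 2.0          # per-subject within-subject variance
    return np.sqrt(within_var.mean()) / np.concatenate([x1, x2]).mean()

def bootstrap_diff(a1, a2, b1, b2, n_boot=2000):
    """Bootstrap the difference of two dependent WSCVs by resampling
    subjects, which preserves the within-subject dependence."""
    n = len(a1)
    diffs = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, n)
        diffs[b] = wscv(a1[idx], a2[idx]) - wscv(b1[idx], b2[idx])
    return diffs

# Hypothetical duplicate readings of the same 50 subjects on two devices
truth = rng.normal(100, 15, 50)
dev_a = [truth + rng.normal(0, 5, 50) for _ in range(2)]   # noisier device
dev_b = [truth + rng.normal(0, 3, 50) for _ in range(2)]
d = bootstrap_diff(dev_a[0], dev_a[1], dev_b[0], dev_b[1])
lo, hi = np.percentile(d, [2.5, 97.5])
print(f"WSCV difference, 95% bootstrap CI: ({lo:.4f}, {hi:.4f})")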