Robert C. Lee
University of Calgary
Publications
Featured research published by Robert C. Lee.
Archive | 2004
Leanne Kmet; Robert C. Lee; Linda S. Cook
STANDARD QUALITY ASSESSMENT CRITERIA FOR EVALUATING PRIMARY RESEARCH PAPERS FROM A VARIETY OF FIELDS
Prepared by: Leanne M. Kmet, Robert C. Lee and Linda S. Cook

Quantitative Studies: Questions
1. Question / objective sufficiently described?
2. Study design evident and appropriate?
3. Method of subject/comparison group selection or source of information/input variables described and appropriate?
4. Subject (and comparison group, if applicable) characteristics sufficiently described?
5. If interventional and random allocation was possible, was it described?
6. If interventional and blinding of investigators was possible, was it reported?
7. If interventional and blinding of subjects was possible, was it reported?
8. Outcome and (if applicable) exposure measure(s) well defined and robust to measurement/misclassification bias? Means of assessment reported?
9. Sample size appropriate?
10. Analytic methods described/justified and appropriate?
11. Some estimate of variance is reported for the main results?
12. Controlled for confounding?
13. Results reported in sufficient detail?
14. Conclusions supported by the results?

Manual for Quality Scoring of Quantitative Studies: Definitions and Instructions for Quality Assessment Scoring

How to calculate the summary score (a short worked example follows this entry):
Total sum = (number of "yes" x 2) + (number of "partials" x 1)
Total possible sum = 28 - (number of "N/A" x 2)
Summary score = total sum / total possible sum

1. Question or objective sufficiently described?
Yes: Easily identified in the introductory section (or first paragraph of the methods section). Specifies (where applicable, depending on study design) all of the following: purpose, subjects/target population, and the specific intervention(s)/association(s)/descriptive parameter(s) under investigation. A study purpose that only becomes apparent after studying other parts of the paper is not considered sufficiently described.
Partial: Vaguely or incompletely reported (e.g., "describe the effect of", "examine the role of", "assess opinion on many issues", "explore the general attitudes"); or some information has to be gathered from parts of the paper other than the introduction/background/objective section.
No: Question or objective is not reported, or is incomprehensible.
N/A: Should not be checked for this question.

2. Design evident and appropriate to answer the study question? (Note: if the study question is not given, infer it from the conclusions.)
Yes: Design is easily identified and is appropriate to address the study question/objective.
Partial: Design and/or study question not clearly identified, but gross inappropriateness is not evident; or design is easily identified but only partially addresses the study question.
No: Design used does not answer the study question (e.g., a comparison group is required to answer the study question, but none was used); or design cannot be identified.
N/A: Should not be checked for this question.

3. Method of subject selection (and comparison group selection, if applicable) or source of information/input variables (e.g., for decision analysis) described and appropriate?
Yes: Described and appropriate. Selection strategy designed (i.e., considering sampling frame and strategy) to obtain an unbiased sample of the relevant target population or the entire target population of interest (e.g., consecutive patients for clinical trials, population-based random sample for case-control studies or surveys). Where applicable, inclusion/exclusion criteria are described and defined (e.g., "cancer": ICD code or equivalent should be provided). Studies of volunteers: methods and setting of recruitment reported. Surveys: sampling frame/strategy clearly described and appropriate.
Partial: Selection methods (and inclusion/exclusion criteria, where applicable) are not completely described, but no obvious inappropriateness; or selection strategy is not ideal (i.e., likely introduced bias) but did not likely seriously distort the results (e.g., a telephone survey sampled from listed phone numbers only; a hospital-based case-control study identified all cases admitted during the study period but recruited controls admitted during the day/evening only). Any study describing participants only as "volunteers" or "healthy volunteers". Surveys: target population mentioned but sampling strategy unclear.
No: No information provided; or obviously inappropriate selection procedures (e.g., inappropriate comparison group if an intervention in women is compared to an intervention in men); or presence of selection bias that likely seriously distorted the results (e.g., obvious selection on "exposure" in a case-control study).
N/A: Descriptive case series/reports.

4. Subject (and comparison group, if applicable) characteristics or input variables/information (e.g., for decision analyses) sufficiently described?
Yes: Sufficient relevant baseline/demographic information clearly characterizing the participants is provided (or reference to previously published baseline data is provided). Where applicable, reproducible criteria used to describe/categorize the participants are clearly defined (e.g., ever-smokers, depression scores, systolic blood pressure > 140). If "healthy volunteers" are used, age and sex must be reported (at minimum). Decision analyses: baseline estimates for input variables are clearly specified.
Partial: Poorly defined criteria (e.g., "hypertension", "healthy volunteers", "smoking"); or incomplete relevant baseline/demographic information (e.g., information on likely confounders not reported). Decision analyses: incomplete reporting of baseline estimates for input variables.
No: No baseline/demographic information provided. Decision analyses: baseline estimates of input variables not given.
N/A: Should not be checked for this question.

5. If random allocation to treatment group was possible, is it described?
Yes: True randomization done; requires a description of the method used (e.g., use of random numbers).
Partial: Randomization mentioned, but the method is not (i.e., it may have been possible that randomization was not true).
No: Random allocation not mentioned although it would have been feasible and appropriate (and was possibly done).
N/A: Observational analytic studies, uncontrolled experimental studies, surveys, descriptive case series/reports, decision analyses.

6. If interventional and blinding of investigators to intervention was possible, is it reported?
Yes: Blinding reported.
Partial: Blinding reported, but it is not clear who was blinded.
No: Blinding would have been possible (and was possibly done) but is not reported.
N/A: Observational analytic studies, uncontrolled experimental studies, surveys, descriptive case series/reports, decision analyses.

7. If interventional and blinding of subjects to intervention was possible, is it reported?
Yes: Blinding reported.
Partial: Blinding reported, but it is not clear who was blinded.
No: Blinding would have been possible (and was possibly done) but is not reported.
N/A: Observational studies, uncontrolled experimental studies, surveys, descriptive case series/reports.

8. Outcome and (if applicable) exposure measure(s) well defined and robust to measurement/misclassification bias? Means of assessment reported?
Yes: Defined (or reference to complete definitions is provided) and measured according to reproducible, "objective" criteria (e.g., death, test completion yes/no, clinical scores). Little or minimal potential for measurement/misclassification errors. Surveys: clear description (or reference to a clear description) of questionnaire/interview content and response options. Decision analyses: sources of uncertainty are defined for all input variables.
Partial: Definition of measures leaves room for subjectivity, or not sure (i.e., not reported in detail, but probably acceptable); or precise definition(s) are missing, but there are no evident problems in the paper that would lead one to assume major problems; or instrument/mode of assessment(s) not reported; or misclassification errors may have occurred, but they did not likely seriously distort the results (e.g., slight difficulty with recall of long-ago events; exposure is measured only at baseline in a long cohort study). Surveys: description of questionnaire/interview content incomplete; response options unclear. Decision analyses: sources of uncertainty are defined only for some input variables.
No: Measures not defined, or are inconsistent throughout the paper; or measures employ only ill-defined, subjective assessments (e.g., "anxiety" or "pain"); or obvious misclassification errors/measurement bias likely seriously distorted the results (e.g., a prospective cohort relies on self-reported outcomes among the "unexposed" but requires clinical assessment of the "exposed"). Surveys: no description of questionnaire/interview content or response options. Decision analyses: sources of uncertainty are not defined for input variables.
N/A: Descriptive case series/reports.

9. Sample size appropriate?
Yes: Seems reasonable with respect to the outcome under study and the study design. When statistically significant results are achieved for major outcomes, appropriate sample size can usually be assumed, unless large standard errors (SE > 1/2 effect size) and/or problems with multiple testing are evident. Decision analyses: size of modeled cohort/number of iterations specified and justified.
Partial: Insufficient data to assess sample size (e.g., the sample seems "small" and there is no mention of power/sample size/effect size of interest, and/or variance estimates aren't provided). Or some statistically significant re
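The scoring rule above is simple enough to show as a worked example. The Python sketch below uses a hypothetical set of item ratings (nothing here comes from the report itself) and computes the summary score exactly as defined: two points per "yes", one per "partial", divided by the maximum attainable after dropping "N/A" items.

```python
# Sketch of the summary-score calculation for quantitative studies.
# The example ratings below are illustrative, not from any real appraisal.

def kmet_summary_score(ratings):
    """Compute the summary score from a list of 14 item ratings.

    Each rating is one of: "yes", "partial", "no", "n/a".
    total sum    = (# yes * 2) + (# partial * 1)
    possible sum = 28 - (# n/a * 2)   # 14 items, max 2 points each
    summary      = total sum / possible sum
    """
    counts = {r: ratings.count(r) for r in ("yes", "partial", "no", "n/a")}
    total = counts["yes"] * 2 + counts["partial"]
    possible = 28 - counts["n/a"] * 2
    return total / possible

# Example: an observational study where items 5-7 (randomization, blinding) do not apply.
example = ["yes"] * 8 + ["partial"] * 2 + ["no"] * 1 + ["n/a"] * 3
print(round(kmet_summary_score(example), 2))  # 8*2 + 2*1 = 18; 28 - 6 = 22; -> 0.82
```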
Quality & Safety in Health Care | 2007
David L. Cooke; Peter Dunscombe; Robert C. Lee
Objectives: To motivate improvements in an organisational system by measuring staff perceptions of the organisation’s ability to learn from incidents and by analysing their personal experience of incidents. Methods: Respondents were questioned on the components of the incident learning system from both a personal and an organisational perspective. The respondents (n = 125) were radiotherapists, nurses, dosimetrists, doctors, and other staff at a major academic cancer centre. Responses were analysed in terms of per cent positive responses and response rate, differences between “frontline” and “support” staff, and the respondent’s experience with incidents. Results: Respondents were more familiar with and more positive about incident identification and reporting—the first two stages of incident learning. Their overall perception of incident learning was most influenced by the investigation and learning components of the system. Respondents in frontline positions were more positive than those in support positions about responding to, identifying and reporting incidents. Respondents reported experiencing a mean of three incidents per year, of which two were reported and, of those reported, two out of three were investigated; the median was two incidents experienced and reported, with none investigated. Most incidents experienced were not captured by the organisation’s existing incident reporting system. Conclusion: The survey tool was effective in measuring the ability of the organisation to learn from incidents. Implications of the survey results for improving organisational learning are discussed.
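As a rough illustration of the response summaries described above (per cent positive responses and response rates, split by staff group), here is a minimal Python sketch; the survey items, staff groups, and answers are invented and are not the study's data.

```python
# Tally per cent positive responses and response rate for each (item, staff group).
# All records below are hypothetical examples.
from collections import defaultdict

def summarise(responses):
    """responses: list of (item, staff_group, answer); answer is True/False or None if skipped."""
    tallies = defaultdict(lambda: {"positive": 0, "answered": 0, "asked": 0})
    for item, group, answer in responses:
        t = tallies[(item, group)]
        t["asked"] += 1
        if answer is not None:
            t["answered"] += 1
            t["positive"] += int(answer)
    return {
        key: {
            "pct_positive": 100 * t["positive"] / t["answered"] if t["answered"] else None,
            "response_rate": 100 * t["answered"] / t["asked"],
        }
        for key, t in tallies.items()
    }

survey = [
    ("incident reporting", "frontline", True),
    ("incident reporting", "frontline", False),
    ("incident investigation", "support", None),
    ("incident investigation", "support", True),
]
print(summarise(survey))
```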
Medical Decision Making | 2004
Scott B. Patten; Robert C. Lee
Background. Serial period prevalence estimates for recurrent diseases such as major depression are available more frequently than fully detailed longitudinal data, but it is difficult to estimate incidence and episode duration from such data. Incidence and episode duration are critical decision modeling parameters for recurrent diseases. Objectives. To reduce bias that would otherwise occur in national incidence and duration-of-episode estimates for major depressive episodes deriving from studies using serial period prevalence data and to illustrate a methodological approach for the estimation of incidence from such studies. Methods. Monte Carlo simulation was applied to a Markov process describing incidence and recovery from major depressive episodes. Results. The annual incidence and episode duration were found to be 3.1% and 17.1 weeks, respectively. These estimates are expected to be less subject to bias than those generated without modeling. Conclusions. These results highlight the usefulness of Markov models for analysis of longitudinal data. The methods described here may be useful for decision modeling and may be generalizable to other chronic diseases.
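For readers unfamiliar with the approach, a minimal two-state weekly Markov process simulated by Monte Carlo looks roughly like the sketch below. The weekly onset and recovery probabilities are assumptions chosen only so that the outputs land near the reported 3.1% annual incidence and roughly 17-week mean episode duration; this is not the authors' calibrated model.

```python
# Illustrative two-state (well / depressed) weekly Markov process, Monte Carlo style.
# P_ONSET and P_RECOVERY are assumed placeholder values, not the paper's estimates.
import random

P_ONSET = 0.0006      # assumed weekly probability of episode onset
P_RECOVERY = 0.058    # assumed weekly probability of recovery
WEEKS = 52
N = 50_000            # simulated individuals

onsets = 0
episode_weeks = []    # durations of episodes that resolve within the simulated year
for _ in range(N):
    depressed = False
    duration = 0
    for _ in range(WEEKS):
        if not depressed:
            if random.random() < P_ONSET:
                depressed, duration = True, 1
                onsets += 1
        else:
            if random.random() < P_RECOVERY:
                depressed = False
                episode_weeks.append(duration)
            else:
                duration += 1

print("annual incidence:", onsets / N)
print("mean completed episode duration (weeks):",
      sum(episode_weeks) / len(episode_weeks) if episode_weeks else float("nan"))
```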
American Journal of Obstetrics and Gynecology | 2010
Linda S. Cook; Heather K. Neilson; Diane L. Lorenzetti; Robert C. Lee
OBJECTIVE We assessed the evidence supporting a reduction in risk for ovarian cancer occurrence or mortality with greater vitamin D exposures. STUDY DESIGN This review followed standard guidelines for systematic literature reviews. The diverse study designs precluded a quantitative meta-analysis. Therefore, studies are summarized via tables and abstracted information. RESULTS Approximately half of the ecologic and case-control studies reported reductions in incidence or mortality with increasing geographic latitude, solar radiation levels, or dietary/supplement consumption of vitamin D, whereas the other half reported null associations. The cohort studies reported no overall risk reduction with increasing dietary/supplement consumption of vitamin D or with plasma levels of vitamin D prior to diagnosis, although vitamin D intakes were relatively low in all studies. CONCLUSION There is no consistent or strong evidence to support the claim made in numerous review articles that vitamin D exposures reduce the risk for ovarian cancer occurrence or mortality.
Medical Decision Making | 2006
Robert C. Lee; Edidiong Ekaette; Karie-Lynn Kelly; Peter Craighead; Chris Newcomb; Peter Dunscombe
Introduction. Radiation therapy (RT) for cancer is a critical medical procedure that occurs in a complex environment involving numerous health professionals, hardware, software, and equipment. Uncertainties and potential incidents can lead to inappropriate administration of radiation to patients, with sometimes catastrophic consequences such as premature death or appreciably impaired quality of life. The authors evaluate the impact of incorrectly staging (i.e., estimation of extent of cancer) breast cancer patients and resulting inappropriate treatment decisions. Methods. The authors employ analytic and simulation methods in an influence-diagram framework to estimate the probability of incorrect staging and treatment decisions. As inputs, they use a combination of literature information on the accuracy and precision of pathology and tests as well as expert judgment. Sensitivity and value-of-information analyses are conducted to identify important uncertainties. Results and conclusions. The authors find a small but nontrivial probability that breast cancer patients will be incorrectly staged and thus may be subjected to inappropriate treatment. Results are sensitive to a number of variables, and some routinely used tests for metastasis have very limited information value. This work has implications for the methods used in cancer staging, and the methods are generalizable for quantitative risk assessment of treatment errors.
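The flavour of the probabilistic part of such an analysis can be conveyed with a much-simplified Monte Carlo sketch: a single staging test with assumed sensitivity and specificity, an assumed prevalence of metastatic disease, and the resulting probability that the assigned stage (and hence the treatment decision) is wrong. The numbers and the single-test structure are illustrative assumptions, not the paper's influence-diagram model.

```python
# Hypothetical misstaging probability via Monte Carlo; all parameters are assumptions.
import random

P_METASTASIS = 0.05   # assumed prevalence of distant metastasis at presentation
SENSITIVITY = 0.85    # assumed probability the test detects true metastasis
SPECIFICITY = 0.95    # assumed probability the test is negative without metastasis
N = 200_000

misstaged = 0
for _ in range(N):
    truth = random.random() < P_METASTASIS
    if truth:
        test_positive = random.random() < SENSITIVITY
    else:
        test_positive = random.random() > SPECIFICITY
    if test_positive != truth:        # staging (and treatment) follows the test result
        misstaged += 1

print("P(incorrect staging) ~", misstaged / N)
# Analytic check: P_METASTASIS*(1-SENS) + (1-P_METASTASIS)*(1-SPEC) = 0.055
```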
Medical Care | 2003
Robert C. Lee; Cam Donaldson; Linda S. Cook
Statement of Problem. Many healthcare decisions are difficult because they are complex and have important consequences such as the impact on survival or quality-of-life of individuals and on allocation of limited resources. The present state-of-the-art in healthcare decision modeling is often inadequate to properly assess these decisions. Methods. Based on a literature search and the experience of the authors, typical methodologies used in healthcare decision analysis modeling are explored and compared with methods used in other practices. An example of hormonal therapy decisions is used. Results. Useful methods that have been developed in other fields are presented. These include methods targeted toward appropriate assessment and representation of the complexity of decisions, assessment of uncertainty, use of nonexpected value decision analysis, and use of multi-attribute decision criteria. Conclusion. The state-of-the-art in healthcare decision modeling can be improved through learning from other practices.
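One of the methods mentioned above, multi-attribute decision criteria, can be sketched as a simple weighted-sum comparison of options. The attributes, weights, and scores below are invented purely to show the mechanics and carry no clinical meaning; a real multi-attribute analysis would also require preference elicitation and sensitivity analysis.

```python
# Minimal weighted-sum multi-attribute comparison of two hypothetical options.
def weighted_score(scores, weights):
    """scores and weights are dicts keyed by attribute; weights should sum to 1."""
    return sum(weights[a] * scores[a] for a in weights)

weights = {"survival benefit": 0.5, "quality of life": 0.3, "cost": 0.2}
options = {
    "hormonal therapy": {"survival benefit": 0.7, "quality of life": 0.6, "cost": 0.4},
    "no therapy":       {"survival benefit": 0.3, "quality of life": 0.8, "cost": 1.0},
}

for name, scores in options.items():
    print(name, round(weighted_score(scores, weights), 2))
```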
International Journal of Radiation Oncology Biology Physics | 2008
Peter Dunscombe; Edidiong Ekaette; Robert C. Lee; David L. Cooke
Recent publications in both the scientific and the popular press have highlighted the risks to which patients expose themselves when entering a healthcare system. Patient safety issues are forcing us not only to acknowledge that incidents do occur, but also to actively develop the means for assessing and managing the risks of such incidents. To do this, we ideally need to know the probability of an incident's occurrence, the consequences or severity for the patient should it occur, and the basic causes of the incident. A structured approach to the description of failure modes is helpful in terms of communication, avoidance of ambiguity, and, ultimately, decision making for resource allocation. In this report, several classification schemes or taxonomies for use in risk assessment and management are discussed. In particular, a recently developed approach that reflects the activity domains through which the patient passes and that can be used as a basis for quantifying incident severity is described. The estimation of incident severity, which is based on the concept of the equivalent uniform dose, is presented in some detail. We conclude with a brief discussion on the use of a defined basic-causes table and how adding such a table to the reports of incidents can facilitate the allocation of resources.
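The severity estimation above builds on the equivalent uniform dose concept. As background, a commonly used form is the generalized EUD, gEUD = (sum_i v_i * D_i^a)^(1/a), sketched below; the dose distribution and the parameter a are illustrative, and the report's own severity metric may be defined differently.

```python
# Generalized equivalent uniform dose (gEUD) for a discretized dose distribution.
# Dose values and the tissue parameter "a" are illustrative assumptions.
def generalized_eud(doses_gy, volumes, a):
    """doses_gy: dose per region (Gy); volumes: fractional volumes summing to 1; a: tissue parameter."""
    assert abs(sum(volumes) - 1.0) < 1e-9
    return sum(v * d ** a for d, v in zip(doses_gy, volumes)) ** (1.0 / a)

# A uniform dose maps to itself; a heterogeneous distribution is weighted toward
# cold or hot regions depending on the sign and magnitude of a.
print(generalized_eud([60.0, 60.0], [0.5, 0.5], a=-10))   # 60.0
print(generalized_eud([50.0, 70.0], [0.5, 0.5], a=-10))   # pulled toward the cold spot
```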
Population Health Metrics | 2005
Scott B. Patten; Robert C. Lee
Background: Most epidemiological studies of major depression report period prevalence estimates. These are of limited utility in characterizing the longitudinal epidemiology of this condition. Markov models provide a methodological framework for increasing the utility of epidemiological data. Markov models relating incidence and recovery to major depression prevalence have been described in a series of prior papers. In this paper, the models are extended to describe the longitudinal course of the disorder. Methods: Data from three national surveys conducted by the Canadian national statistical agency (Statistics Canada) were used in this analysis. These data were integrated using a Markov model. Incidence, recurrence and recovery were represented as weekly transition probabilities. Model parameters were calibrated to the survey estimates. Results: The population was divided into three categories: low, moderate and high recurrence groups. The size of each category was approximated using lifetime data from a study using the WHO Mental Health Composite International Diagnostic Interview (WMH-CIDI). Consistent with previous work, transition probabilities reflecting recovery were high in the initial weeks of the episodes, and declined by a fixed proportion with each passing week. Conclusion: Markov models provide a framework for integrating psychiatric epidemiological data. Previous studies have illustrated the utility of Markov models for decomposing prevalence into its various determinants: incidence, recovery and mortality. This study extends the Markov approach by distinguishing several recurrence categories.
Epidemiologia E Psichiatria Sociale-an International Journal for Epidemiology and Psychiatric Sciences | 2004
Scott B. Patten; Robert C. Lee
AIMS The substantial impact of major depression on population health is widely acknowledged. To date, health system responses to this condition have been largely shaped by observational findings. In the future, health policy decisions will benefit from an increasingly integrated and dynamic understanding of the epidemiology of this condition. Policy decisions can also be supported by the development of decision-support tools that can simulate the impact of alternative policy decisions on population health. Markov models are useful both in epidemiological modelling and in decision analysis. METHODS In this project, a Markov model describing major depression epidemiology was developed. The model employed a Markov Tunnel in order to depict the dependence of recovery probabilities on episode duration. Transition probabilities, including incidence, recovery and mortality were estimated from Canadian national survey data. RESULTS Episode incidence was approximately 3% per year. Recovery rates declined exponentially over time. The model predicted point prevalence at slightly less than 1%, agreeing closely with observed prevalence data. CONCLUSIONS Epidemiological models describing the dynamic relationships between major depression incidence, prevalence, recovery and mortality can help to integrate available epidemiological data. Such models offer an attractive option for support of health policy decisions.
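A Markov tunnel can be illustrated with a small cohort calculation: the "depressed" state is expanded into duration-indexed states so the weekly recovery probability can fall as the episode lengthens. The first-week recovery probability and the weekly decline used below are assumptions for illustration, not the calibrated values from the Canadian survey data.

```python
# Cohort-style illustration of duration-dependent recovery (a Markov "tunnel").
# Parameter values are assumed, not the authors' calibrated estimates.
P_RECOVER_WEEK1 = 0.20   # assumed recovery probability in the first episode week
WEEKLY_DECLINE = 0.95    # assumed proportional decline per additional week
MAX_WEEKS = 104          # length of the tunnel

def recovery_prob(week):
    """Recovery probability for an episode that has lasted `week` weeks so far."""
    return P_RECOVER_WEEK1 * WEEKLY_DECLINE ** (week - 1)

still_depressed = 1.0    # fraction of an episode cohort still in the episode
for week in range(1, MAX_WEEKS + 1):
    still_depressed *= 1.0 - recovery_prob(week)
    if week in (4, 12, 26, 52):
        print(f"week {week:3d}: {still_depressed:.3f} still in episode")
```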
Journal of the Operational Research Society | 2007
Edidiong Ekaette; Robert C. Lee; K.-L. Kelly; Peter Dunscombe
Radiation treatment (RT) for cancer is a critical medical procedure that occurs in a complex environment that is subject to uncertainties and errors. We employed a simulation (a variant of Monte Carlo) model that followed a cohort of hypothetical breast cancer patients to estimate the probability of incorrect staging and treatment decisions. As inputs, we used a combination of literature information and expert judgement. Input variables were defined as probability distributions within the model. Uncertainties were propagated via simulation. Sensitivity and value-of-information analyses were then conducted to quantify the effect of variable uncertainty on the model outputs. We found a small but non-trivial probability that patients would be incorrectly staged and thus be subjected to inappropriate treatment. Some routinely used tests for staging and metastasis detection have very limited informational value. This work has implications for the methods used in cancer staging and subsequent risk assessment of treatment errors.
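Value-of-information analysis, mentioned above, can be sketched in a few lines: sample the uncertain input, compare the best decision made before versus after the uncertainty is resolved, and report the gap as the expected value of perfect information (EVPI). The decision, utilities, and distribution below are invented for illustration and are unrelated to the paper's model.

```python
# Toy EVPI calculation for a two-action decision under an uncertain probability.
# All utilities and the input distribution are hypothetical.
import random

random.seed(1)
N = 50_000

def utility(treat_aggressively, p_metastasis):
    # Invented utilities: aggressive treatment helps when metastasis is likely,
    # but carries a fixed toxicity cost.
    return (0.9 * p_metastasis - 0.1) if treat_aggressively else 0.0

samples = [random.betavariate(2, 18) for _ in range(N)]   # uncertain P(metastasis)

# Best expected value when the decision is made before resolving the uncertainty:
value_decide_now = max(
    sum(utility(a, p) for p in samples) / N for a in (True, False)
)
# Expected value when the decision is made with perfect information about p:
value_perfect_info = sum(max(utility(a, p) for a in (True, False)) for p in samples) / N

print("EVPI ~", value_perfect_info - value_decide_now)
```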