Michael L. Astion
University of Washington
Publications
Featured research published by Michael L. Astion.
American Journal of Clinical Pathology | 2003
Michael L. Astion; Kaveh G. Shojania; Timothy Hamill; Sara Kim; Valerie L. Ng
We developed a laboratory incident report classification system that can guide reduction of actual and potential adverse events. The system was applied retrospectively to 129 incident reports occurring during a 16-month period. Incidents were classified by type of adverse event (actual or potential), specific and potential patient impact, nature of laboratory involvement, testing phase, and preventability. Of 129 incidents, 95% were potential adverse events. The most common specific impact was delay in receiving test results (85%). The average potential impact was 2.9 (SD, 1.0; median, 3; scale, 1-5). The laboratory alone was responsible for 60% of the incidents; 21% were due solely to problems outside the laboratory's authority. The laboratory function most frequently implicated in incidents was specimen processing (31%). The preanalytic testing phase was involved in 71% of incidents, the analytic in 18%, and the postanalytic in 11%. The most common preanalytic problem was specimen transportation (16%). The average preventability score was 4.0 (range, 1-5; median, 4; scale, 1-5), and 94 incidents (73%) were preventable (score, 3 or more). Of the 94 preventable incidents, 30% involved cognitive errors, defined as incorrect choices caused by insufficient knowledge, and 73% involved noncognitive errors, defined as inadvertent or unconscious lapses in expected automatic behavior.
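For illustration only (this is not the authors' actual instrument), a classification scheme like the one described could be encoded as a simple record; the field names, scales, and enum values below are assumptions inferred from the abstract.

```python
from dataclasses import dataclass
from enum import Enum

class EventType(Enum):
    ACTUAL = "actual adverse event"
    POTENTIAL = "potential adverse event"

class Phase(Enum):
    PREANALYTIC = "preanalytic"
    ANALYTIC = "analytic"
    POSTANALYTIC = "postanalytic"

@dataclass
class IncidentReport:
    """One classified laboratory incident (hypothetical schema)."""
    event_type: EventType
    specific_impact: str        # e.g. "delay in receiving test results"
    potential_impact: int       # 1 (minimal) to 5 (severe)
    phase: Phase
    preventability: int         # 1 (not preventable) to 5 (clearly preventable)
    cognitive_error: bool       # incorrect choice from insufficient knowledge
    noncognitive_error: bool    # inadvertent lapse in expected automatic behavior

    @property
    def preventable(self) -> bool:
        # The study counted a preventability score of 3 or more as preventable.
        return self.preventability >= 3
```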
American Journal of Clinical Pathology | 2013
Min Xu; Xuan Qin; Michael L. Astion; Joe C. Rutledge; Joanne Simpson; Keith R. Jerome; Janet A. Englund; Danielle M. Zerr; Russell Migita; Shannon Rich; John C. Childs; Anne Cent; Mark A. Del Beccaro
The FilmArray respiratory virus panel detects 15 viral agents in respiratory specimens using polymerase chain reaction. We performed FilmArray respiratory viral testing in a core laboratory at a regional children’s hospital that provides service 24 hours a day, 7 days a week. The average and median turnaround times were 1.6 and 1.4 hours, respectively, in contrast to the 7 and 6.5 hours documented 1 year previously at an on-site reference laboratory using a direct fluorescence assay (DFA) that detected 8 viral agents. During the study period, rhinovirus was detected in 20% and coronavirus in 6% of samples using FilmArray; these viruses would not have been detected with DFA. We followed 97 patients with influenza A or influenza B who received care in the emergency department (ED). Overall, 79 patients (81%) were given oseltamivir in a timely manner, defined as receiving the drug in the ED, a prescription in the ED, or a prescription within 3 hours of ED discharge. Our results demonstrate that molecular technology can be successfully deployed in a nonspecialty, high-volume, multidisciplinary core laboratory.
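Turnaround time in studies like this is the elapsed time from specimen receipt to result. A minimal sketch of the average and median calculation, using invented timestamps (the data below are not from the study):

```python
from datetime import datetime
from statistics import mean, median

# Hypothetical (received, resulted) timestamp pairs for a few specimens.
runs = [
    (datetime(2024, 1, 5, 8, 0),   datetime(2024, 1, 5, 9, 30)),
    (datetime(2024, 1, 5, 13, 15), datetime(2024, 1, 5, 14, 30)),
    (datetime(2024, 1, 6, 2, 40),  datetime(2024, 1, 6, 4, 30)),
]

# Turnaround time in hours for each specimen.
tat_hours = [(done - received).total_seconds() / 3600 for received, done in runs]

print(f"average TAT: {mean(tat_hours):.1f} h, median TAT: {median(tat_hours):.1f} h")
```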
Archives of Pathology & Laboratory Medicine | 2014
Jane A. Dickerson; Bonnie Cole; Jessie H. Conta; Monica Wellner; Stephanie E. Wallace; Rhona M. Jack; Joe C. Rutledge; Michael L. Astion
CONTEXT: Tests that are performed outside of the ordering institution, send-out tests, represent an area of risk to patients because of the complexity associated with sending tests out. Risks related to send-out tests include an increased number of handoffs, ordering the wrong or unnecessary test, specimen delays, data entry errors, preventable delays in reporting and acknowledging results, and excess financial liability. Many of the most expensive and most misunderstood tests are send-out genetic tests.
OBJECTIVE: To design and develop an active utilization management program to reduce the risk to patients and improve the value of genetic send-out tests.
DESIGN: Send-out test requests that met defined criteria were reviewed by a rotating team of doctoral-level consultants and a genetic counselor at a pediatric tertiary care center.
RESULTS: Two hundred fifty-one cases were reviewed during an 8-month period. After review, nearly one-quarter of genetic test requests were modified in the downward direction, saving a total of 2% of the entire send-out bill and 19% of the bill for the test requests under management. Ultimately, these savings were passed on to patients.
CONCLUSIONS: Implementing an active utilization strategy for expensive send-out tests can be achieved with minimal technical resources and results in improved value of testing to patients.
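The review workflow described above amounts to a triage rule: orders meeting defined criteria are held for expert review before being sent out. A minimal sketch of such a rule; the cost threshold and category list here are invented for illustration, not the program's actual criteria.

```python
# Hypothetical triage rule for send-out test requests.
REVIEW_COST_THRESHOLD = 1000.00  # USD; invented threshold
ALWAYS_REVIEW = {"genetic panel", "whole exome sequencing"}  # invented categories

def needs_review(test_category: str, list_price: float) -> bool:
    """Return True if the order should be routed to the on-call reviewer."""
    return test_category in ALWAYS_REVIEW or list_price >= REVIEW_COST_THRESHOLD

# Example: an expensive single-gene test and a cheap routine send-out.
print(needs_review("single gene sequencing", 1850.00))  # True
print(needs_review("therapeutic drug level", 120.00))   # False
```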
Journal of Clinical Microbiology | 2005
Shan Yuan; Michael L. Astion; Jeff Schapiro; Ajit P. Limaye
We developed a strategy to determine the clinical impact associated with errors in clinical microbiology testing. Over a 9-month period, we used a sequential three-stage method to prospectively evaluate 480 consecutive corrected microbiology laboratory reports. The three stages were physician review of the corrected report, medical record review, and interview with the clinician(s) taking care of the patient. Of the 480 corrected reports, 301 (62.7%) were ruled out for significant clinical impact by physician review and an additional 25 cases (5.2%) were ruled out for clinical impact by medical record review. This left 154 cases (32.1%) that required clinician interview to determine clinical impact. The clinician interview revealed that 32 (6.7%) of the corrected reports were associated with adverse clinical impact. Of these 32 cases, 19 (59.4%) involved delayed therapy, 8 (25.0%) involved unnecessary therapy, 8 (25.0%) were associated with inappropriate therapy, and 4 (12.5%) were associated with an increased level of care. The laboratory was entirely responsible for the error in 28 (87.5%) of the 32 cases and partially responsible in the other 4 cases (12.5%). Twenty-six (81.3%) of the 32 cases involved potentially preventable analytic errors that were due to lack of knowledge (cognitive error). In summary, we used evaluation of corrected reports to identify laboratory errors with adverse clinical impact, and most of the errors were amenable to laboratory-based interventions. Our method has the potential to be implemented in other laboratory settings to identify and characterize errors that impact patient safety.
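The three-stage method is essentially a sequential filter: each stage rules out the cases it can, and only the remainder escalates to the next, more labor-intensive stage. A minimal sketch of that control flow, with placeholder stage functions (the decision logic inside each stage is not specified by the abstract):

```python
from typing import Callable, Optional

# Each stage returns True (adverse impact), False (ruled out), or None
# (undetermined; escalate to the next stage). Bodies are placeholders.
Stage = Callable[[dict], Optional[bool]]

def physician_review(report: dict) -> Optional[bool]: ...
def medical_record_review(report: dict) -> Optional[bool]: ...
def clinician_interview(report: dict) -> Optional[bool]: ...

def assess_impact(report: dict, stages: list[Stage]) -> bool:
    """Escalate a corrected report through the stages until one decides."""
    for stage in stages:
        verdict = stage(report)
        if verdict is not None:
            return verdict
    return False  # no stage found adverse impact

# Usage: assess_impact(report, [physician_review, medical_record_review,
#                               clinician_interview])
```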
Clinical Chemistry | 2003
Shan Yuan; Claudia Bien; Mark H. Wener; Peter A. Simkin; Petrie M. Rainey; Michael L. Astion
The crystal arthropathies, gout and calcium pyrophosphate dihydrate deposition disease, are caused by deposition of monosodium urate (MSU) or calcium pyrophosphate dihydrate (CPPD) crystals, respectively. A diagnosis of urate gout or calcium pyrophosphate dihydrate deposition disease is based on characteristic clinical findings and the microscopic identification of intracellular crystals in synovial fluid. Several studies have shown the lack of sensitivity of microscopic examination of synovial fluid for MSU or CPPD crystals [sensitivity, 78% (1) and 79% (2) for MSU and 12% (1) and 67% (2) for CPPD]. Not surprisingly, this leads to a lack of reproducibility of synovial fluid analyses (1)(2). The suboptimal sensitivity, frequently attributed to the low concentrations or the small sizes of the crystals, has been difficult to improve without resorting to clinically impractical methods such as crystal extraction from synovial fluid (3) or electron microscopy (4). Problems with sensitivity have led experts to caution that a negative examination by polarized light microscopy does not exclude the presence of small numbers of crystals (5). We have occasionally encountered synovial fluids from patients with gout that were negative for urate …
American Journal of Clinical Pathology | 2006
Nancy Goodyear; Sara Kim; Mary Reeves; Michael L. Astion
We used a computer-based competency assessment tool for Gram stain interpretation to assess the performance of 278 laboratory staff from 40 laboratories on 40 multiple-choice questions. We report test reliability, mean scores, median, item difficulty, discrimination, and analysis of the highest- and lowest-scoring questions. The questions were reliable (KR-20 coefficient, 0.80). The overall mean score was 88% (range, 63%-98%). When categorized by cell type, the means were: host cells, 93%; other cells (eg, yeast), 92%; gram-positive, 90%; and gram-negative, 88%. When categorized by type of interpretation, the means were: other (eg, underdecolorization), 92%; identify by structure (eg, bacterial morphologic features), 91%; and identify by name (eg, genus and species), 87%. Of the 6 highest-scoring questions (mean scores, ≥99%), 5 were identify by structure and 1 was identify by name. Of the 6 lowest-scoring questions (mean scores, <75%), 5 were gram-negative and 1 was host cells; by type of interpretation, 2 were identify by structure and 4 were identify by name. Computer-based Gram stain competency assessment examinations are reliable. Our analysis helps laboratories identify areas for continuing education in Gram stain interpretation and will direct future revisions of the tests.
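The KR-20 reliability coefficient cited above has a standard closed form: KR-20 = (k/(k-1)) * (1 - Σ p_i q_i / σ²), where k is the number of items, p_i is the proportion of examinees answering item i correctly, q_i = 1 - p_i, and σ² is the variance of total scores. A short sketch of the computation with invented response data (not the study's data):

```python
import numpy as np

def kr20(responses: np.ndarray) -> float:
    """Kuder-Richardson 20 for a binary (0/1) item-response matrix.

    responses: shape (n_examinees, k_items), 1 = correct, 0 = incorrect.
    """
    k = responses.shape[1]
    p = responses.mean(axis=0)                     # proportion correct per item
    q = 1.0 - p
    total_var = responses.sum(axis=1).var(ddof=1)  # variance of total scores
    return (k / (k - 1)) * (1.0 - (p * q).sum() / total_var)

# Invented data: 5 examinees, 4 items.
demo = np.array([
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [1, 1, 1, 1],
    [0, 0, 0, 1],
    [1, 1, 1, 0],
])
print(f"KR-20 = {kr20(demo):.2f}")
```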
American Journal of Clinical Pathology | 2002
Michael L. Astion; Sara Kim; Enrique Terrazas; Amanda Nelson
The University of Washington, Seattle, has developed educational software for clinical laboratories. We used a 32-question survey to study software implementation. Of the 106 clinical laboratories that purchased the software and completed the survey (response rate, 60%), the 89 laboratories (84%) that reported using the software formed the basis of the study. The most common software users were laboratory personnel, followed by medical technologist or medical laboratory technician students, residents, and medical students; the mean (SD) number of personnel categories using the software per laboratory was 1.8 (0.8). The most common reasons for use were initial instruction, cross-training, and competency assessment. The most frequent setting for software use was an area where laboratory testing occurred, followed by a dedicated training location, a location chosen by the employee, a classroom, and a distance-learning mode. On a scale of 1 (poor) to 5 (excellent), the average satisfaction rating as an instructional tool was 4.4 and as a competency assessment tool, 4.2. Compared with laboratories in hospitals with 400 beds or fewer, laboratories in hospitals with more than 400 beds used the software for more categories of users (P = .008), were more likely to use it for residents (P = .003), and were more likely to have dedicated training areas (P = .02).
American Journal of Clinical Pathology | 2016
Patrick C. Mathias; Jessie H. Conta; Eric Q. Konnick; Darci L. Sternen; Shannon Stasi; Bonnie Cole; Michael L. Astion; Jane A. Dickerson
OBJECTIVES: To characterize error rates for genetic test orders between medical specialties and in different settings by examining detailed order information.
METHODS: We performed a retrospective analysis of a detailed utilization management case database comprising 2.5 years of data and almost 1,400 genetic test orders. After review by multiple reviewers, we categorized order modifications and cancellations, quantified rates of positive results and order errors, and compared genetics with nongenetics providers and inpatient with outpatient orders.
RESULTS: High cost and problems with preauthorization were the most common reasons for modification and cancellation, respectively. The cancellation rate for nongenetics providers was three times that for geneticists, but abnormal result rates were similar between the two groups. The approval rate for inpatient orders was not significantly lower than that for outpatient orders, and abnormal result rates were likewise similar between these two groups. Order error rates were approximately 8% among tests recommended by genetics providers in the inpatient setting, and tests ordered or recommended by nongeneticists had error rates near 5% in both inpatient and outpatient settings.
CONCLUSIONS: Clinicians without specialty training in genetics make genetic test order errors at a significantly higher rate than geneticists. A laboratory utilization management program prevents these order errors from becoming diagnostic errors and reaching the patient.
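Group comparisons like the ones above (e.g., cancellation rates by provider type) are typically tested as two-proportion comparisons. A sketch with invented counts, assuming scipy is available; the numbers below are not from the study.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Invented counts for illustration: [cancelled, not cancelled] per group.
table = np.array([
    [30, 370],   # nongenetics providers
    [10, 390],   # geneticists
])

chi2, p_value, dof, _expected = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, p = {p_value:.4f}")
```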
Academic Medicine | 1996
Lynn P. Mandel; Douglas C. Schaad; Brad T. Cookson; Janet D. Curtis; Adam R. Orkand; Mark H. Wener; Carol N. LeCrone; Dawn E. DeWitt; Michael L. Astion
No abstract available.
British Journal of Educational Technology | 2000
Sara Kim; Michael L. Astion
This study examined how students interacted with a computer-based feature, Compare and Contrast, which facilitated image comparisons. Unlike previous research, which explored the use of images ordered and presented by investigators, this study examined the patterns of image comparison that emerged when students selected the presentation mode themselves. Three main image-viewing modes emerged, one successive and two simultaneous: (1) single-image viewing, in which only one image was viewed at a time; (2) paired viewing, in which different pairs of images were displayed; and (3) anchored viewing, in which a single image in one image panel served as an anchor against which multiple image comparisons were made using the second panel. Overall, anchored viewing was the predominant image-viewing mode used by the students (41%), compared with single viewing (22%) and paired viewing (11%). Students who viewed images in the anchored-viewing mode attained the highest scores on the post-test exam. Our study suggests that a computer instructional program with a user-controlled interactivity feature can provide insight into how learners form different types of visual comparison strategies. Future experimental studies of interface designs that explicitly support single, paired, and anchored viewing modes could confirm or challenge the results of our study and thereby contribute to ongoing research on effective modes of image presentation for visual concept acquisition.