
Publication


Featured research published by Ross W. Filice.


Journal of Digital Imaging | 2016

Comparison-Bot: an Automated Preliminary-Final Report Comparison System

Amit D. Kalaria; Ross W. Filice

Regular comparison of preliminary to final reports is a critical part of radiology resident and fellow education as prior research has documented substantial preliminary to final discrepancies. Unfortunately, there are many barriers to this comparison: high study volume; overnight rotations without an attending; the ability to finalize reports remotely; the subtle nature of many changes; and lack of easy access to the preliminary report after finalization. We developed a system that automatically compiles and emails a weekly summary of report differences for all residents and fellows. Trainees can also create a custom report using a date range of their choice and can view these data on a resident dashboard. Differences between preliminary and final reports are clearly highlighted with links to the associated study in Picture Archiving and Communication Systems (PACS) for efficient review and learning. Reports with more changes, particularly changes made in the impression, are highlighted to focus attention on those exams with substantive edits. Our system provides an easy way for trainees to review changes to preliminary reports with immediate access to the associated images, thereby improving their educational experience. Departmental surveys showed that our report difference summary is easy to understand and improves the educational experience of our trainees. Additionally, interesting descriptive statistics help us understand how reports are changed by trainee level, by attending, and by exam type. Finally, this system can be easily ported to other departments that have access to their Health Level 7 (HL7) data.
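The core of such a system is a word-level diff between preliminary and final report text. A minimal sketch of that idea using Python's standard difflib (an illustration only, not the authors' implementation; the [-...-]/{+...+} markup and the change-ratio heuristic are arbitrary choices):

```python
import difflib

def highlight_report_changes(preliminary: str, final: str) -> str:
    """Mark words removed from the preliminary report as [-...-] and
    words added in the final report as {+...+}."""
    prelim_words = preliminary.split()
    final_words = final.split()
    matcher = difflib.SequenceMatcher(a=prelim_words, b=final_words)
    out = []
    for op, i1, i2, j1, j2 in matcher.get_opcodes():
        if op == "equal":
            out.extend(prelim_words[i1:i2])
        if op in ("delete", "replace"):
            out.append("[-" + " ".join(prelim_words[i1:i2]) + "-]")
        if op in ("insert", "replace"):
            out.append("{+" + " ".join(final_words[j1:j2]) + "+}")
    return " ".join(out)

def change_ratio(preliminary: str, final: str) -> float:
    """Fraction of words changed; a rough proxy for how substantive
    the attending's edits were."""
    matcher = difflib.SequenceMatcher(a=preliminary.split(), b=final.split())
    return 1.0 - matcher.ratio()
```

A weekly summary job could sort each trainee's cases by change ratio so the most heavily edited reports, especially those with impression changes, surface first.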


Journal of Digital Imaging | 2017

PathBot: A Radiology-Pathology Correlation Dashboard

Linda C. Kelahan; Amit D. Kalaria; Ross W. Filice

Pathology is considered the “gold standard” of diagnostic medicine. The importance of radiology-pathology correlation is seen in interdepartmental patient conferences such as “tumor boards” and by the tradition of radiology resident immersion in a radiologic-pathology course at the American Institute of Radiologic Pathology. In practice, consistent pathology follow-up can be difficult due to time constraints and cumbersome electronic medical records. We present a radiology-pathology correlation dashboard that presents radiologists with pathology reports matched to their dictations, for both diagnostic imaging and image-guided procedures. In creating our dashboard, we utilized the RadLex ontology and National Center for Biomedical Ontology (NCBO) Annotator to identify anatomic concepts in pathology reports that could subsequently be mapped to relevant radiology reports, providing an automated method to match related radiology and pathology reports. Radiology-pathology matches are presented to the radiologist on a web-based dashboard. We found that our algorithm was highly specific in detecting matches. Our sensitivity was slightly lower than expected and could be attributed to missing anatomy concepts in the RadLex ontology, as well as limitations in our parent term hierarchical mapping and synonym recognition algorithms. By automating radiology-pathology correlation and presenting matches in a user-friendly dashboard format, we hope to encourage pathology follow-up in clinical radiology practice for purposes of self-education and to augment peer review. We also hope to provide a tool to facilitate the production of quality teaching files, lectures, and publications. Diagnostic images have a richer educational value when they are backed up by the gold standard of pathology.
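The matching step can be illustrated with a toy annotator: extract the anatomy concepts mentioned in each report and declare a match when the two reports share at least one. The small synonym lexicon below is hypothetical and stands in for the RadLex ontology and NCBO Annotator used in the actual system:

```python
# Hypothetical anatomy lexicon: concept -> synonyms that may appear in text.
# PathBot used RadLex and the NCBO Annotator; this is a stand-in for illustration.
ANATOMY_LEXICON = {
    "liver": {"liver", "hepatic"},
    "kidney": {"kidney", "renal"},
    "lung": {"lung", "pulmonary"},
}

def annotate(report: str) -> set:
    """Return the set of anatomy concepts whose synonyms appear in the text."""
    words = set(report.lower().replace(".", " ").split())
    return {concept for concept, synonyms in ANATOMY_LEXICON.items()
            if synonyms & words}

def match_reports(radiology_report: str, pathology_report: str) -> set:
    """Reports are considered a correlation candidate when they share
    at least one anatomy concept; returns the shared concepts."""
    return annotate(radiology_report) & annotate(pathology_report)
```

A real annotator must also handle hierarchical parent terms and richer synonymy, which the abstract identifies as the main sources of missed matches.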


Academic Radiology | 2018

The Impact of Interruptions on Chest Radiograph Interpretation: Effects on Reading Time and Accuracy

Rachel M. Wynn; Jessica L. Howe; Linda C. Kelahan; Allan Fong; Ross W. Filice; Raj M. Ratwani

RATIONALE AND OBJECTIVES: The objective of this study was to experimentally test the effect of interruptions on image interpretation by comparing reading time and response accuracy of interrupted case reads to uninterrupted case reads in resident and attending radiologists.
MATERIALS AND METHODS: Institutional review board approval was obtained before participant recruitment from an urban academic health-care system during January 2016-March 2016. Eleven resident and 12 attending radiologists examined 30 chest radiographs, rating their confidence regarding the presence or the absence of a pneumothorax. Ten cases were normal (ie, no pneumothorax present), 10 cases had an unsubtle pneumothorax (ie, readily perceivable by a nonexpert), and 10 cases had a subtle pneumothorax. During three reads of each case type, the participants were interrupted with 30 seconds of a secondary task. The total reading time and the accuracy of interrupted and uninterrupted cases were compared. A mixed-factors analysis of variance was run on reading time and accuracy with experience (resident vs attending) as a between-subjects factor and case type (normal, unsubtle, or subtle) and interruption (interruption vs no interruption) as within-subjects factors.
RESULTS: Interrupted cases had significantly longer reading times than uninterrupted cases (P = .032). During subtle cases, interruptions reduced accuracy (P = .034), but during normal cases, interruptions increased accuracy (P = .038).
CONCLUSIONS: Interruptions increased reading times and increased the tendency for a radiologist to conclude that a case is normal for both resident and attending radiologists, demonstrating that interruptions reduce efficiency and introduce patient safety concerns during reads of abnormal cases.


Proceedings of the International Symposium on Human Factors and Ergonomics in Health Care | 2018

Interruptions Increase Time and Decrease Accuracy of Chest Radiograph Interpretation

Rachel M. Wynn; Jessica L. Howe; Linda C. Kelahan; Allan Fong; Ross W. Filice; Raj M. Ratwani

Interruptions are frequent in healthcare and their detrimental effects have been documented from the emergency department, in which 18.5% of interrupted tasks are never returned to, to medication administration, where the occurrence and frequency of interruptions are related to errors. One area of health care in which the effect of interruptions has not been well characterized is radiology, which has a unique workflow with unique cognitive demands. In fact, interruptions may be particularly detrimental to radiology, which depends on an individual’s visual search capacities; cognitive psychology research has demonstrated that interruptions eradicate search memory.


Journal of Digital Imaging | 2018

Minimizing Barriers in Learning for On-Call Radiology Residents—End-to-End Web-Based Resident Feedback System

Hailey H Choi; Jennifer Clark; Ann K Jay; Ross W. Filice

Feedback is an essential part of medical training, where trainees are provided with information regarding their performance and further directions for improvement. In diagnostic radiology, feedback entails a detailed review of the differences between the residents’ preliminary interpretation and the attendings’ final interpretation of imaging studies. While the on-call experience of independently interpreting complex cases is important to resident education, the more traditional synchronous “read-out” or joint review is impossible due to multiple constraints. Without an efficient method to compare reports, grade discrepancies, convey salient teaching points, and view images, valuable lessons in image interpretation and report construction are lost. We developed a streamlined web-based system, including report comparison and image viewing, to minimize barriers in asynchronous communication between attending radiologists and on-call residents. Our system provides real-time, end-to-end delivery of case-specific and user-specific feedback in a streamlined, easy-to-view format. We assessed quality improvement subjectively through surveys and objectively through participation metrics. Our web-based feedback system improved user satisfaction for both attending and resident radiologists, and increased attending participation, particularly with regards to cases where substantive discrepancies were identified.


Journal of Digital Imaging | 2018

The Radiologist’s Gaze: Mapping Three-Dimensional Visual Search in Computed Tomography of the Abdomen and Pelvis

Linda C. Kelahan; Allan Fong; Joseph Blumenthal; Swaminathan Kandaswamy; Raj M. Ratwani; Ross W. Filice

A radiologist’s search pattern can directly influence patient management. A missed finding is a missed opportunity for intervention. Multiple studies have attempted to describe and quantify search patterns but have mainly focused on chest radiographs and chest CTs. Here, we describe and quantify the visual search patterns of 17 radiologists as they scroll through 6 CTs of the abdomen and pelvis. Search pattern tracings varied among individuals and remained relatively consistent per individual between cases. Attendings and trainees had similar eye metric statistics with respect to time to first fixation (TTFF), number of fixations in the region of interest (ROI), fixation duration in ROI, mean saccadic amplitude, and total number of fixations. Attendings had fewer fixations per second than trainees (p < 0.001), suggesting efficiency due to expertise. In those cases that were accurately interpreted, TTFF was shorter (p = 0.04), the number of fixations per second and number of fixations in ROI were higher (p = 0.04, p = 0.02, respectively), and fixation duration in ROI was increased (p = 0.02). We subsequently categorized radiologists as “scanners” or “drillers” by both qualitative and quantitative methods and found no differences in accuracy, with most radiologists being categorized as “drillers.” This study describes visual search patterns of radiologists in interpretation of CTs of the abdomen and pelvis to better approach future endeavors in determining the effects of manipulations such as fatigue, interruptions, and computer-aided detection.
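The eye metrics reported above can be computed from a list of fixation records. A simplified sketch (illustrative only; the Fixation fields, units, and ROI flag are assumptions, not the study's software):

```python
from dataclasses import dataclass

@dataclass
class Fixation:
    start_ms: float     # fixation onset relative to case open
    duration_ms: float  # how long the gaze held this point
    in_roi: bool        # whether the fixation fell inside the region of interest

def time_to_first_fixation(fixations):
    """Time to first fixation (TTFF): onset of the earliest fixation
    inside the ROI, or None if the ROI was never fixated."""
    roi_onsets = [f.start_ms for f in fixations if f.in_roi]
    return min(roi_onsets) if roi_onsets else None

def fixations_per_second(fixations, read_time_ms):
    """Overall fixation rate across the whole read."""
    return len(fixations) / (read_time_ms / 1000.0)

def roi_dwell_ms(fixations):
    """Total fixation duration inside the ROI."""
    return sum(f.duration_ms for f in fixations if f.in_roi)
```

Metrics like these, aggregated per reader and per case, are what allow comparisons such as attendings versus trainees or accurate versus inaccurate reads.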


Journal of Digital Imaging | 2017

Who You Gonna Call? Automatically Connecting Radiologists to the Right Clinician

Ross W. Filice

Contacting clinicians to convey critical results is an essential part of radiology workflow, but many obstacles prevent easy and timely communication. Integration of radiology applications and workflow with an EHR-based patient coverage database demonstrated subjective and objective improvement in radiologist workflow and satisfaction.


Journal of Digital Imaging | 2016

Notification System to Address PACS Filter Deficiencies and Ensure Timely Interpretation of Neonatal Exams.

Ross W. Filice

Filtered radiology worklists can result in exams that slip through the cracks and do not get interpreted. We discovered an error that caused neonatal exams to not be displayed on our worklists, and therefore, these exams were not interpreted in a timely fashion. Because of familiarity with our departmental data, we were able to rapidly build a notification tool to alert us to these exams. This tool resulted in clinically significant impact on interpretation turnaround time and care of neonates in our hospital.
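A safety-net check of this kind can be sketched as a periodic scan for completed exams that still lack a report after some threshold, independent of any worklist filter. This is a hypothetical illustration, not the deployed tool; the exam fields and the two-hour threshold are assumptions:

```python
from datetime import datetime, timedelta

def find_stale_exams(exams, now, max_wait=timedelta(hours=2)):
    """Return accession numbers of exams that were completed more than
    max_wait ago and still have no report, so they can trigger an alert.

    exams: iterable of dicts with 'accession' (str),
    'completed_at' (datetime), and 'has_report' (bool).
    """
    return [e["accession"] for e in exams
            if not e["has_report"] and now - e["completed_at"] > max_wait]
```

Running a query like this on a schedule against the department's exam feed catches anything a misconfigured worklist filter silently hides.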


Journal of Digital Imaging | 2015

Improving Radiology Report Quality by Rapidly Notifying Radiologist of Report Errors

Matthew J. Minn; Arash R. Zandieh; Ross W. Filice


Journal of The American College of Radiology | 2016

Call Case Dashboard: Tracking R1 Exposure to High-Acuity Cases Using Natural Language Processing

Linda C. Kelahan; Allan Fong; Raj M. Ratwani; Ross W. Filice

Collaboration


Dive into Ross W. Filice's collaborations.

Top Co-Authors

Linda C. Kelahan (MedStar Georgetown University Hospital)
Amit D. Kalaria (MedStar Georgetown University Hospital)
Alexander S. Somwaru (MedStar Georgetown University Hospital)
Ann K Jay (MedStar Georgetown University Hospital)
Arash R. Zandieh (MedStar Georgetown University Hospital)