Phillip B. Warner
University of Utah
Publications
Featured research published by Phillip B. Warner.
Journal of Biomedical Informatics | 2017
Polina V. Kukhareva; Catherine J. Staes; Kevin W. Noonan; Heather Mueller; Phillip B. Warner; David E Shields; Howard Weeks; Kensaku Kawamoto
OBJECTIVE Develop evidence-based recommendations for single-reviewer validation of electronic phenotyping results in operational settings. MATERIAL AND METHODS We conducted a randomized controlled study to evaluate whether electronic phenotyping results should be used to support manual chart review during single-reviewer electronic phenotyping validation (N=3104). We evaluated the accuracy, duration, and cost of manual chart review with and without the availability of electronic phenotyping results, including relevant patient-specific details. The cost of identifying an erroneous electronic phenotyping result was calculated based on the personnel time required for the initial chart review and subsequent adjudication of discrepancies between manual chart review results and electronic phenotype determinations. RESULTS Providing electronic phenotyping results (vs not providing those results) was associated with improved overall accuracy of manual chart review (98.90% vs 92.46%, p<0.001), decreased review duration per test case (62.43 vs 76.78s, p<0.001), and a non-significant reduction in the estimated marginal cost of identifying an erroneous electronic phenotyping result ($48.54 vs $63.56, p=0.16). The agreement between chart review and electronic phenotyping results was higher when the phenotyping results were provided (Cohen's kappa 0.98 vs 0.88, p<0.001). As a result, while accuracy improved when initial electronic phenotyping results were correct (99.74% vs 92.67%, N=3049, p<0.001), there was a trend towards decreased accuracy when initial electronic phenotyping results were erroneous (56.67% vs 80.00%, N=55, p=0.07). Electronic phenotyping results provided the greatest benefit for the accurate identification of rare exclusion criteria. DISCUSSION Single-reviewer chart review of electronic phenotyping can be conducted more accurately, quickly, and at lower cost when supported by electronic phenotyping results. However, human reviewers tend to agree with electronic phenotyping results even when those results are wrong. Thus, the value of providing electronic phenotyping results depends on the accuracy of the underlying electronic phenotyping algorithm. CONCLUSION We recommend using a mix of phenotyping validation strategies, with the balance of strategies based on the anticipated electronic phenotyping error rate, the tolerance for missed electronic phenotyping errors, and the expertise, cost, and availability of personnel involved in chart review and discrepancy adjudication.
IEEE International Conference on Healthcare Informatics | 2014
Phillip B. Warner; Peter Mo; N. Dustin Schultz; Ramkiran Gouripeddi; Jeffrey Duncan; Julio C. Facelli
American Medical Informatics Association Annual Symposium | 2014
Polina V. Kukhareva; Kensaku Kawamoto; David E Shields; Darryl T Barfuss; Anne M Halley; Tyler J. Tippetts; Phillip B. Warner; Bruce E. Bray; Catherine J. Staes
American Medical Informatics Association Annual Symposium | 2012
Ramkiran Gouripeddi; Phillip B. Warner; Peter Mo; James E. Levin; Rajendu Srivastava; Samir S. Shah; David de Regt; Eric S. Kirkendall; Jonathan Bickel; E. Kent Korgenski; Michelle Precourt; Richard L. Stepanek; Joyce A. Mitchell; Scott P. Narus; Ron Keren
We present the design, development, and testing of VIRGO (Virtual Identity Resolution on the Go), an open-source software system supporting on-the-fly identity resolution. The system implements the open-source ChoiceMaker algorithms and was developed using a service-oriented architecture (SOA) approach, which allows it to be used either as a standalone service or integrated into any SOA workflow. In our test case the system achieved the following figures of merit: accuracy 0.992, sensitivity 0.981, and specificity 0.992, and it scales linearly with the number of records considered. To demonstrate integration into an SOA framework, we show how to incorporate VIRGO into the OpenFurther framework to perform record linkage when federating health records from multiple sources to identify cohorts for clinical research.
AMIA | 2013
Ramkiran Gouripeddi; Julio C. Facelli; Richard L. Bradshaw; Dustin Schultz; Bernie LaSalle; Phillip B. Warner; Ryan Butcher; Randy Madsen; Peter Mo
AMIA | 2015
Polina V. Kukhareva; Catherine J. Staes; Tyler J. Tippetts; Phillip B. Warner; David Shields; Heather Mueller; Kevin W. Noonan; Kensaku Kawamoto
American Medical Informatics Association Annual Symposium | 2015
Tyler J. Tippetts; Phillip B. Warner; Polina V. Kukhareva; David E Shields; Catherine J. Staes; Kensaku Kawamoto
AMIA | 2015
Salvador Rodriguez-Loya; Emory Fry; Tadesse Sefer; Phillip B. Warner; Claude J. Nanjo; Jerry Goodnough; David Shields; Steven Elliott; Esteban Aliverti; Kensaku Kawamoto
Archive | 2014
Ram Gouripeddi; Ryan Butcher; Phillip B. Warner; Peter Mo
AMIA | 2014
Tyler J. Tippetts; Phillip B. Warner; David Shields; Salvador Rodriguez-Loya; Catherine J. Staes; Kensaku Kawamoto