Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Shaun J. Grannis is active.

Publication


Featured research published by Shaun J. Grannis.


Journal of the American Medical Informatics Association | 2003

Implementing Syndromic Surveillance: A Practical Guide Informed by the Early Experience

Kenneth D. Mandl; J. Marc Overhage; Michael M. Wagner; William B. Lober; Paola Sebastiani; Farzad Mostashari; Julie A. Pavlin; Per H. Gesteland; Tracee A. Treadwell; Eileen Koski; Lori Hutwagner; David L. Buckeridge; Raymond D. Aller; Shaun J. Grannis

Syndromic surveillance refers to methods relying on detection of individual and population health indicators that are discernible before confirmed diagnoses are made. In particular, prior to the laboratory confirmation of an infectious disease, ill persons may exhibit behavioral patterns, symptoms, signs, or laboratory findings that can be tracked through a variety of data sources. Syndromic surveillance systems are being developed locally, regionally, and nationally. The efforts have been largely directed at facilitating the early detection of a covert bioterrorist attack, but the technology may also be useful for general public health, clinical medicine, quality improvement, patient safety, and research. This report, authored by developers and methodologists involved in the design and deployment of the first wave of syndromic surveillance systems, is intended to serve as a guide for informaticians, public health managers, and practitioners who are currently planning deployment of such systems in their regions.


American Journal of Public Health | 2008

A Comparison of the Completeness and Timeliness of Automated Electronic Laboratory Reporting and Spontaneous Reporting of Notifiable Conditions

J. Marc Overhage; Shaun J. Grannis; Clement J. McDonald

OBJECTIVES We examined whether automated electronic laboratory reporting of notifiable diseases results in information being delivered to public health departments more completely and quickly than is the case with spontaneous, paper-based reporting. METHODS We used data from a local public health department, hospital infection control departments, and a community-wide health information exchange to identify all potential cases of notifiable conditions that occurred in Marion County, Indiana, during the first quarter of 2001. We compared traditional spontaneous reporting to the health department with automated electronic laboratory reporting through the health information exchange. RESULTS After reports obtained using the 2 methods had been matched, there were 4785 unique reports for 53 different conditions during the study period. Chlamydia was the most common condition, followed by hepatitis B, hepatitis C, and gonorrhea. Automated electronic laboratory reporting identified 4.4 times as many cases as traditional spontaneous, paper-based methods and identified those cases 7.9 days earlier than spontaneous reporting. CONCLUSIONS Automated electronic laboratory reporting improves the completeness and timeliness of disease surveillance, which will enhance public health awareness and reporting efficiency.


IEEE Transactions on Visualization and Computer Graphics | 2010

A Visual Analytics Approach to Understanding Spatiotemporal Hotspots

Ross Maciejewski; Stephen Rudolph; Ryan P. Hafen; Ahmad M. Abusalah; Mohamed Yakout; Mourad Ouzzani; William S. Cleveland; Shaun J. Grannis; David S. Ebert

As data sources become larger and more complex, the ability to effectively explore and analyze patterns among varying sources becomes a critical bottleneck in analytic reasoning. Incoming data contain multiple variables, a low signal-to-noise ratio, and a degree of uncertainty, all of which hinder exploration, hypothesis generation/exploration, and decision making. To facilitate the exploration of such data, advanced tool sets are needed that allow users to interact with their data in a visual environment that provides direct analytic capability for finding data aberrations or hotspots. In this paper, we present a suite of tools designed to facilitate the exploration of spatiotemporal data sets. Our system allows users to search for hotspots in both space and time, combining linked views and interactive filtering to provide contextual information about the data and to let users develop and explore their hypotheses. Statistical data models and alert detection algorithms are provided to help draw user attention to critical areas. Demographic filtering can then be further applied as the generated hypotheses become fine-tuned. This paper demonstrates the use of such tools on multiple geospatiotemporal data sets.
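
To make the alert-detection ingredient concrete, here is a minimal sketch of the kind of detector such a suite can layer on top of its visual views: a standard one-sided CUSUM over daily counts. The CUSUM is a stand-in, not the paper's own algorithm, and the reference parameters `k` and `h` are illustrative assumptions.

```python
# Hypothetical illustration: one-sided CUSUM over daily syndromic counts.
# Not the paper's detector; k (allowance) and h (decision limit) are
# conventional defaults, not values from the study.
import numpy as np

def cusum_alerts(counts, mean, std, k=0.5, h=4.0):
    """Flag days where the standardized upward CUSUM exceeds h."""
    s, alerts = 0.0, []
    for t, x in enumerate(counts):
        z = (x - mean) / std
        s = max(0.0, s + z - k)       # accumulate only upward drift
        if s > h:
            alerts.append(t)
            s = 0.0                   # reset after signaling
    return alerts

# simulated baseline of 60 quiet days followed by a 3-day surge
baseline = np.random.default_rng(2).poisson(20, 60).tolist()
print(cusum_alerts(baseline + [35, 38, 40], mean=20.0, std=np.sqrt(20)))
```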


Journal of the American Medical Informatics Association | 2006

A Context-sensitive Approach to Anonymizing Spatial Surveillance Data: Impact on Outbreak Detection

Christopher A. Cassa; Shaun J. Grannis; J. Marc Overhage; Kenneth D. Mandl

OBJECTIVE The use of spatially based methods and algorithms in epidemiology and surveillance presents privacy challenges for researchers and public health agencies. We describe a novel method for anonymizing individuals in public health data sets by transposing their spatial locations through a process informed by the underlying population density. Further, we measure the impact of the skew on detection of spatial clustering as measured by a spatial scanning statistic. DESIGN Cases were emergency department (ED) visits for respiratory illness. Baseline ED visit data were injected with artificially created clusters varying in magnitude, shape, and location. The geocoded locations were then transformed using a de-identification algorithm that accounts for the local underlying population density. MEASUREMENTS A total of 12,600 separate weeks of case data with artificially created clusters were combined with control data, and the impact on detection of spatial clustering identified by a spatial scan statistic was measured. RESULTS The anonymization algorithm produced an expected skew of cases that resulted in high values of data set k-anonymity. De-identification that moves points an average distance of 0.25 km lowers the spatial cluster detection sensitivity by less than 4% and lowers the detection specificity by less than 1%. CONCLUSION A population-density-based Gaussian spatial blurring markedly decreases the ability to identify individuals in a data set while only slightly decreasing the performance of a standard outbreak detection tool. These findings suggest new approaches to anonymizing data for spatial epidemiology and surveillance.
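
A minimal sketch of the core idea, population-density-informed Gaussian blurring, under an assumed parameterization: the noise scale is derived from the radius expected to contain about k people, so dense urban points move little while rural points move farther. The `blur` function and its inputs are hypothetical; the paper's exact density model is not reproduced here.

```python
# Assumed parameterization: sigma is the radius of a disc expected to
# hold ~k people at the local density, giving larger displacements
# where the population is sparse. Coordinates are in projected km;
# `density` is persons per square km around each point.
import numpy as np

def blur(points_km, density, k=5, rng=None):
    """Displace each (x, y) point by Gaussian noise whose standard
    deviation shrinks as local population density grows."""
    rng = rng or np.random.default_rng()
    sigma = np.sqrt(k / (np.pi * np.asarray(density)))   # km, per point
    return points_km + rng.normal(0.0, sigma[:, None], points_km.shape)

pts = np.array([[0.0, 0.0], [10.0, 5.0]])
dens = np.array([5000.0, 50.0])        # urban vs. rural neighborhoods
print(blur(pts, dens, k=5, rng=np.random.default_rng(1)))
```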


Journal of Public Health Management and Practice | 2004

The Indiana Network for Patient Care: An Integrated Clinical Information System Informed by Over Thirty Years of Experience

Paul G. Biondich; Shaun J. Grannis

Presented in this article is the Indiana Network for Patient Care, an integrated citywide medical record system that promotes health quality by enabling efficient access to clinical information. It begins with a description of the system's infrastructure, including an explanation of how the system accomplishes data integration. This is followed by descriptions of, and rationales behind, the many clinical applications that interface with these data. In doing so, we illustrate some of the factors that we feel contribute to the system's success.


Studies in Health Technology and Informatics | 2004

Real World Performance of Approximate String Comparators for Use in Patient Matching

Shaun J. Grannis; J. Marc Overhage; Clement J. McDonald

Medical record linkage is becoming increasingly important as clinical data are distributed across independent sources. To improve linkage accuracy, we studied different name comparison methods that establish agreement or disagreement between corresponding names. In addition to exact raw name matching and exact phonetic name matching, we tested three approximate string comparators: the modified Jaro-Winkler method, the longest common substring, and the Levenshtein edit distance. We also calculated the combined root-mean square of all three. We tested each name comparison method using a deterministic record linkage algorithm. Results were consistent across both hospitals. At a threshold comparator score of 0.8, the Jaro-Winkler comparator achieved the highest linkage sensitivities of 97.4% and 97.7%. The combined root-mean-square method achieved sensitivities higher than the Levenshtein edit distance or longest common substring while sustaining high linkage specificity. Approximate string comparators increase deterministic linkage sensitivity by up to 10% compared to exact match comparisons and represent an accurate method of linking to vital statistics data.
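
A sketch of the three comparators and the combined root-mean-square score, implemented from their textbook definitions. The abstract does not give the authors' exact normalizations, so the similarity scalings below are common choices, not necessarily the paper's.

```python
# Textbook implementations of the three approximate string comparators
# named above, each scaled to [0, 1], plus their root-mean-square.
import math
from difflib import SequenceMatcher

def jaro(s1, s2):
    if s1 == s2:
        return 1.0
    n1, n2 = len(s1), len(s2)
    if not n1 or not n2:
        return 0.0
    window = max(n1, n2) // 2 - 1
    m1, m2 = [False] * n1, [False] * n2
    matches = 0
    for i, c in enumerate(s1):
        for j in range(max(0, i - window), min(i + window + 1, n2)):
            if not m2[j] and s2[j] == c:
                m1[i] = m2[j] = True
                matches += 1
                break
    if not matches:
        return 0.0
    # count matched characters that are out of order (transpositions)
    t, k = 0, 0
    for i in range(n1):
        if m1[i]:
            while not m2[k]:
                k += 1
            t += s1[i] != s2[k]
            k += 1
    return (matches / n1 + matches / n2 + (matches - t // 2) / matches) / 3

def jaro_winkler(s1, s2, p=0.1):
    j = jaro(s1, s2)
    prefix = 0
    for a, b in zip(s1[:4], s2[:4]):
        if a != b:
            break
        prefix += 1
    return j + prefix * p * (1 - j)   # boost for a shared prefix

def levenshtein_sim(s1, s2):
    m, n = len(s1), len(s2)
    if not max(m, n):
        return 1.0
    prev = list(range(n + 1))
    for i in range(1, m + 1):
        curr = [i] + [0] * n
        for j in range(1, n + 1):
            curr[j] = min(prev[j] + 1, curr[j - 1] + 1,
                          prev[j - 1] + (s1[i - 1] != s2[j - 1]))
        prev = curr
    return 1 - prev[n] / max(m, n)    # edit distance scaled to [0, 1]

def lcs_sim(s1, s2):
    if not (s1 or s2):
        return 1.0
    size = SequenceMatcher(None, s1, s2).find_longest_match(
        0, len(s1), 0, len(s2)).size
    return 2 * size / (len(s1) + len(s2))

def combined_rms(s1, s2):
    scores = (jaro_winkler(s1, s2), lcs_sim(s1, s2), levenshtein_sim(s1, s2))
    return math.sqrt(sum(s * s for s in scores) / 3)

for a, b in [("grannis", "granis"), ("mcdonald", "macdonald")]:
    print(a, b, round(combined_rms(a, b), 3))  # agree if score >= 0.8
```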


Public Health Reports | 2011

Incorporating Geospatial Capacity within Clinical Data Systems to Address Social Determinants of Health

Karen Frederickson Comer; Shaun J. Grannis; Brian E. Dixon; David J. Bodenhamer; Sarah E. Wiehe

Linking electronic health record (EHR) systems with community information systems (CIS) holds great promise for addressing inequities in social determinants of health (SDH). While EHRs are rich in location-specific data that allow us to uncover geographic inequities in health outcomes, CIS are rich in data that allow us to describe community-level characteristics relating to health. When meaningfully integrated, these data systems enable clinicians, researchers, and public health professionals to actively address the social etiologies of health disparities. This article describes a process for exploring SDH by geocoding and integrating EHR data with a comprehensive CIS covering a large metropolitan area. Because the systems were initially designed for different purposes and had different teams of experts involved in their development, integrating them presents challenges that require multidisciplinary expertise in informatics, geography, public health, and medicine. We identify these challenges and the means of addressing them and discuss the significance of the project as a model for similar projects.
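
The linkage pattern described above can be sketched in a few lines, with hypothetical column names: EHR encounters geocoded to a census tract are joined to tract-level community indicators from the CIS.

```python
# Illustrative join of geocoded EHR encounters to tract-level CIS
# indicators; all column names and values are hypothetical.
import pandas as pd

ehr = pd.DataFrame({
    "patient_id": [1, 2, 3],
    "tract": ["18097350500", "18097360200", "18097350500"],  # from geocoding
    "dx": ["asthma", "diabetes", "asthma"],
})
cis = pd.DataFrame({
    "tract": ["18097350500", "18097360200"],
    "pct_poverty": [31.2, 8.4],
    "pct_no_vehicle": [22.5, 3.1],
})

# each encounter inherits its neighborhood's SDH indicators
linked = ehr.merge(cis, on="tract", how="left")
print(linked)
```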


Journal of the American Medical Informatics Association | 2009

An Empiric Modification to the Probabilistic Record Linkage Algorithm Using Frequency-Based Weight Scaling

Vivienne J. Zhu; J. Marc Overhage; James Egg; Stephen M. Downs; Shaun J. Grannis

OBJECTIVE To incorporate value-based weight scaling into the Fellegi-Sunter (F-S) maximum likelihood linkage algorithm and evaluate the performance of the modified algorithm. BACKGROUND Because healthcare data are fragmented across many healthcare systems, record linkage is a key component of fully functional health information exchanges. Probabilistic linkage methods produce more accurate, dynamic, and robust matching results than rule-based approaches, particularly when matching patient records that lack unique identifiers. Theoretically, the relative frequency of specific data elements can enhance the F-S method, including minimizing false-positive or false-negative matches. However, to our knowledge, no frequency-based weight scaling modification to the F-S method has been implemented and specifically evaluated using real-world clinical data. METHODS The authors implemented a value-based weight scaling modification using an information-theoretical model and formally evaluated the effectiveness of this modification by linking 51,361 records from Indiana statewide newborn screening data to 80,089 HL7 registration messages from the Indiana Network for Patient Care, an operational health information exchange. In addition to applying the weight scaling modification to all fields, we examined the effect of selectively scaling common or uncommon field-specific values. RESULTS The sensitivity, specificity, and positive predictive value for applying weight scaling to all field-specific values were 95.4%, 98.8%, and 99.9%, respectively. Compared with non-weight scaling, the modified F-S algorithm demonstrated a 10% increase in specificity with a 3% decrease in sensitivity. CONCLUSION By eliminating false-positive matches, the value-based weight modification can enhance the specificity of the F-S method with minimal decrease in sensitivity.
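
A rough sketch of the underlying idea. In the Fellegi-Sunter model each field contributes log-likelihood weights derived from its m- and u-probabilities; frequency-based scaling replaces the field-wide u-probability with a value-specific one, so rare values carry more evidence of a true match than common ones. This is the classic frequency adjustment, not the paper's exact information-theoretic formulation, and all names and probabilities below are illustrative.

```python
# Classic frequency-based adjustment to Fellegi-Sunter field weights.
# m_prob: P(field agrees | records are a true match);
# u_prob: P(field agrees | records are not a match).
import math
from collections import Counter

def fs_weights(m_prob, u_prob):
    """Baseline Fellegi-Sunter log-likelihood weights for one field."""
    agree = math.log2(m_prob / u_prob)
    disagree = math.log2((1 - m_prob) / (1 - u_prob))
    return agree, disagree

def scaled_agreement(value, freqs, m_prob, floor=1e-6):
    """Value-specific agreement weight: the chance two unrelated records
    agree on a specific value is roughly that value's relative frequency,
    so an uncommon surname scores higher than a common one."""
    u_value = min(max(freqs.get(value, floor), floor), 1 - 1e-9)
    return math.log2(m_prob / u_value)

# toy usage: a last-name field with an assumed m-probability of 0.95
names = ["smith"] * 60 + ["jones"] * 35 + ["zyskowski"] * 5
freqs = {k: v / len(names) for k, v in Counter(names).items()}
print(fs_weights(0.95, 0.05))                       # unscaled weights
print(scaled_agreement("smith", freqs, 0.95))       # common name: low weight
print(scaled_agreement("zyskowski", freqs, 0.95))   # rare name: high weight
```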


BMC Medical Informatics and Decision Making | 2009

Syndromic surveillance: STL for modeling, visualizing, and monitoring disease counts

Ryan P. Hafen; David Anderson; William S. Cleveland; Ross Maciejewski; David S. Ebert; Ahmad M. Abusalah; Mohamed Yakout; Mourad Ouzzani; Shaun J. Grannis

BACKGROUND Public health surveillance is the monitoring of data to detect and quantify unusual health events. Monitoring pre-diagnostic data, such as emergency department (ED) patient chief complaints, enables rapid detection of disease outbreaks. There are many sources of variation in such data; statistical methods need to accurately model them as a basis for timely and accurate disease outbreak detection methods. METHODS Our new methods for modeling daily chief complaint counts are based on a seasonal-trend decomposition procedure based on loess (STL) and were developed using data from the 76 EDs of the Indiana surveillance program from 2004 to 2008. Square root counts are decomposed into inter-annual, yearly-seasonal, day-of-the-week, and random-error components. Using this decomposition method, we develop a new synoptic-scale (days to weeks) outbreak detection method and carry out a simulation study to compare detection performance to four well-known methods for nine outbreak scenarios. RESULTS The components of the STL decomposition reveal insights into the variability of the Indiana ED data. Day-of-the-week components tend to peak Sunday or Monday, fall steadily to a minimum Thursday or Friday, and then rise to the peak. Yearly-seasonal components show seasonal influenza, some with bimodal peaks. Some inter-annual components increase slightly due to increasing patient populations. A new outbreak detection method based on the decomposition modeling performs well with 90 days or more of data. Control limits were set empirically so that all methods had a specificity of 97%. STL had the largest sensitivity in all nine outbreak scenarios. The STL method also exhibited a well-behaved false positive rate when run on the data with no outbreaks injected. CONCLUSION The STL decomposition method for chief complaint counts leads to a rapid and accurate detection method for disease outbreaks, and requires only 90 days of historical data to be put into operation. The visualization tools that accompany the decomposition and outbreak methods provide much insight into patterns in the data, which is useful for surveillance operations.
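
A minimal sketch of the decomposition step, using statsmodels' STL as a stand-in for the paper's multi-pass loess procedure: a single period-7 STL captures the day-of-the-week component, while its trend absorbs the slower yearly-seasonal and inter-annual structure. The series `ed_counts` is simulated, and the 3-sigma control limit is an illustrative choice, not the paper's empirically set limits.

```python
# Sketch only: statsmodels' single-seasonal STL approximates the paper's
# layered loess decomposition of square-root daily counts.
import numpy as np
import pandas as pd
from statsmodels.tsa.seasonal import STL

rng = np.random.default_rng(0)
idx = pd.date_range("2004-01-01", periods=730, freq="D")
ed_counts = pd.Series(rng.poisson(40, len(idx)), index=idx)

root = np.sqrt(ed_counts)                 # variance-stabilizing transform
res = STL(root, period=7, robust=True).fit()

# flag days whose remainder exceeds an empirical control limit
limit = res.resid.mean() + 3 * res.resid.std()
alerts = res.resid[res.resid > limit]
print(alerts.head())
```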


Journal of Biomedical Informatics | 2014

The long road to semantic interoperability in support of public health

Brian E. Dixon; Daniel J. Vreeman; Shaun J. Grannis

Proliferation of health information technologies creates opportunities to improve clinical and public health, including higher-quality, safer care and lower costs. To maximize such potential benefits, health information technologies must readily and reliably exchange information with other systems. However, evidence from public health surveillance programs in two states suggests that operational clinical information systems often fail to use available standards, a barrier to semantic interoperability. Furthermore, analysis of existing policies incentivizing semantic interoperability suggests they have limited impact and are fragmented. In this essay, we discuss three approaches for increasing semantic interoperability to support national goals for using health information technologies. A clear, comprehensive strategy requiring collaborative efforts by clinical and public health stakeholders is suggested as a guide for the long road towards better population health data and outcomes.

Collaboration


Dive into Shaun J. Grannis's collaborations.

Top Co-Authors

Debra Revere

University of Washington
