Publication


Featured research published by Vinod Kaggal.


Journal of the American Medical Informatics Association | 2013

Normalization and standardization of electronic health records for high-throughput phenotyping: the SHARPn consortium

Jyotishman Pathak; Kent R. Bailey; Calvin Beebe; Steven Bethard; David Carrell; Pei J. Chen; Dmitriy Dligach; Cory M. Endle; Lacey Hart; Peter J. Haug; Stanley M. Huff; Vinod Kaggal; Dingcheng Li; Hongfang D Liu; Kyle Marchant; James J. Masanz; Timothy A. Miller; Thomas A. Oniki; Martha Palmer; Kevin J. Peterson; Susan Rea; Guergana Savova; Craig Stancl; Sunghwan Sohn; Harold R. Solbrig; Dale Suesse; Cui Tao; David P. Taylor; Les Westberg; Stephen T. Wu

RESEARCH OBJECTIVE To develop scalable informatics infrastructure for normalization of both structured and unstructured electronic health record (EHR) data into a unified, concept-based model for high-throughput phenotype extraction.

MATERIALS AND METHODS Software tools and applications were developed to extract information from EHRs. Representative and convenience samples of both structured and unstructured data from two EHR systems, Mayo Clinic and Intermountain Healthcare, were used for development and validation. Extracted information was standardized and normalized to meaningful use (MU) conformant terminology and value set standards using Clinical Element Models (CEMs). These resources were used to demonstrate semi-automatic execution of MU clinical-quality measures modeled using the Quality Data Model (QDM) and an open-source rules engine.

RESULTS Using CEMs and open-source natural language processing and terminology services engines, namely Apache clinical Text Analysis and Knowledge Extraction System (cTAKES) and Common Terminology Services (CTS2), we developed a data-normalization platform that ensures data security, end-to-end connectivity, and reliable data flow within and across institutions. We demonstrated the applicability of this platform by executing a QDM-based MU quality measure that determines the percentage of patients between 18 and 75 years with diabetes whose most recent low-density lipoprotein cholesterol test result during the measurement year was <100 mg/dL, on a randomly selected cohort of 273 Mayo Clinic patients. The platform identified 21 and 18 patients for the denominator and numerator of the quality measure, respectively. Validation results indicate that all identified patients meet the QDM-based criteria.

CONCLUSIONS End-to-end automated systems for extracting clinical information from diverse EHR systems require extensive use of standardized vocabularies and terminologies, as well as robust information models for storing, discovering, and processing that information. This study demonstrates the application of modular and open-source resources for enabling secondary use of EHR data through normalization into standards-based, comparable, and consistent format for high-throughput phenotyping to identify patient cohorts.
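The diabetes/LDL quality measure described above can be sketched as a small cohort computation. This is a minimal illustration with hypothetical record fields; the actual platform evaluates QDM criteria over CEM-normalized data using a rules engine.

```python
from datetime import date

# Hypothetical patient records; field names are illustrative, not the CEM schema.
patients = [
    {"id": 1, "birth_date": date(1960, 5, 2), "diabetes": True,
     "ldl_results": [(date(2012, 3, 1), 95), (date(2012, 9, 10), 88)]},
    {"id": 2, "birth_date": date(1950, 7, 9), "diabetes": True,
     "ldl_results": [(date(2012, 4, 2), 120)]},
    {"id": 3, "birth_date": date(1990, 1, 1), "diabetes": False,
     "ldl_results": []},
]

def age_on(birth_date: date, as_of: date) -> int:
    """Whole years of age on the given date."""
    years = as_of.year - birth_date.year
    if (as_of.month, as_of.day) < (birth_date.month, birth_date.day):
        years -= 1
    return years

def ldl_measure(patients, year_end=date(2012, 12, 31)):
    """Return (denominator, numerator) for the measure: diabetics aged 18-75
    whose most recent LDL result in the measurement year was < 100 mg/dL."""
    denominator = numerator = 0
    for p in patients:
        if not p["diabetes"]:
            continue
        if not 18 <= age_on(p["birth_date"], year_end) <= 75:
            continue
        denominator += 1
        results = [r for r in p["ldl_results"] if r[0] <= year_end]
        if results and max(results)[1] < 100:  # latest (date, value) pair
            numerator += 1
    return denominator, numerator
```

With the toy records above, the first patient satisfies both criteria (most recent LDL of 88 mg/dL), the second only the denominator.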


Journal of Biomedical Semantics | 2013

A common type system for clinical natural language processing

Stephen T. Wu; Vinod Kaggal; Dmitriy Dligach; James J. Masanz; Pei Chen; Lee Becker; Wendy W. Chapman; Guergana Savova; Hongfang Liu; Christopher G. Chute

Background: One challenge in reusing clinical data stored in electronic medical records is that these data are heterogeneous. Clinical Natural Language Processing (NLP) plays an important role in transforming information in clinical text to a standard representation that is comparable and interoperable. Information may be processed and shared when a type system specifies the allowable data structures. Therefore, we aim to define a common type system for clinical NLP that enables interoperability between structured and unstructured data generated in different clinical settings.

Results: We describe a common type system for clinical NLP that has an end target of deep semantics based on Clinical Element Models (CEMs), thus interoperating with structured data and accommodating diverse NLP approaches. The type system has been implemented in UIMA (Unstructured Information Management Architecture) and is fully functional in a popular open-source clinical NLP system, cTAKES (clinical Text Analysis and Knowledge Extraction System) versions 2.0 and later.

Conclusions: We have created a type system that targets deep semantics, thereby allowing NLP systems to encapsulate knowledge from text and share it alongside heterogeneous clinical data sources. Rather than surface semantics that are typically the end product of NLP algorithms, CEM-based semantics explicitly build in deep clinical semantics as the point of interoperability with more structured data types.
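As a rough illustration of the layered design (shallow linguistic types feeding deep, CEM-oriented semantic types), the hierarchy can be mirrored in plain Python dataclasses. All names here are illustrative assumptions; the real type system is defined in UIMA XML descriptors shipped with cTAKES.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Annotation:
    """Base type: a span of the source text."""
    begin: int
    end: int

@dataclass
class TokenAnnotation(Annotation):
    """Shallow linguistic layer: a token with its part-of-speech tag."""
    pos: str = ""

@dataclass
class OntologyConcept:
    """A normalized code from a standard terminology (e.g. SNOMED CT)."""
    coding_scheme: str
    code: str

@dataclass
class DiseaseDisorderMention(Annotation):
    """Deep semantic layer: a disease mention normalized to terminology
    concepts, the point of interoperability with structured (CEM-style) data."""
    concepts: List[OntologyConcept] = field(default_factory=list)
    negated: bool = False

# A negated finding extracted from a toy note:
text = "Patient denies chest pain."
mention = DiseaseDisorderMention(
    begin=15, end=25,
    concepts=[OntologyConcept("SNOMED-CT", "29857009")],  # illustrative code
    negated=True,
)
```

The key design point the paper describes is that downstream consumers read the normalized `concepts` rather than the surface string, so output from different NLP components remains comparable.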


Biomedical Informatics Insights | 2016

Toward a Learning Health-care System – Knowledge Delivery at the Point of Care Empowered by Big Data and NLP

Vinod Kaggal; Ravikumar Komandur Elayavilli; Saeed Mehrabi; Joshua J. Pankratz; Sunghwan Sohn; Yanshan Wang; Dingcheng Li; Majid Mojarad Rastegar; Sean P. Murphy; Jason L. Ross; Rajeev Chaudhry; James D. Buntrock; Hongfang Liu

The concept of optimizing health care by understanding and generating knowledge from previous evidence, i.e., the Learning Health-care System (LHS), has gained momentum and now has national prominence. Meanwhile, the rapid adoption of electronic health records (EHRs) enables the data collection required to form the basis for facilitating the LHS. A prerequisite for using EHR data within the LHS is an infrastructure that enables access to EHR data longitudinally for health-care analytics and in real time for knowledge delivery. Additionally, significant clinical information is embedded in free text, making natural language processing (NLP) an essential component in implementing an LHS. Herein, we share our institutional implementation of a big data-empowered clinical NLP infrastructure, which not only enables health-care analytics but also has real-time NLP processing capability. The infrastructure has been utilized for multiple institutional projects including the MayoExpertAdvisor, an individualized care recommendation solution for clinical care. We compared the big data infrastructure against two other computing environments; it significantly outperformed both in computing speed, demonstrating its value in making the LHS a possibility in the near future.
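The dual-mode requirement described above (longitudinal analytics plus real-time delivery) can be sketched as one extraction function served two ways. This is a toy sketch with hypothetical names; the actual infrastructure runs cTAKES-style pipelines on a big data cluster.

```python
import queue
import threading

def extract_concepts(note_text: str) -> list:
    """Stand-in for the NLP engine; the real system runs full clinical
    NLP pipelines rather than keyword checks."""
    return [term for term in ("diabetes", "hypertension")
            if term in note_text.lower()]

def batch_mode(notes):
    """Batch analytics: process a stored corpus end to end."""
    return {i: extract_concepts(text) for i, text in enumerate(notes)}

def realtime_mode(inbox: queue.Queue, outbox: queue.Queue):
    """Real-time delivery: process notes as they arrive at the point of care."""
    while True:
        note = inbox.get()
        if note is None:          # sentinel to stop the worker
            break
        outbox.put(extract_concepts(note))

inbox, outbox = queue.Queue(), queue.Queue()
worker = threading.Thread(target=realtime_mode, args=(inbox, outbox))
worker.start()
inbox.put("Assessment: type 2 diabetes, well controlled.")
inbox.put(None)
worker.join()
```

The design choice this mirrors is reuse: the same extraction logic backs both the longitudinal analytics path and the low-latency point-of-care path, so the two modes cannot drift apart.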


IEEE International Conference on Healthcare Informatics, Imaging and Systems Biology | 2012

Clinical Decision Support for Colonoscopy Surveillance Using Natural Language Processing

Kavishwar B. Wagholikar; Sunghwan Sohn; Stephen T. Wu; Vinod Kaggal; Sheila Buehler; Robert A. Greenes; Tsung-Teh Wu; David W. Larson; Hongfang Liu; Rajeev Chaudhry; Lisa A. Boardman

Colorectal cancer is the second leading cause of cancer-related deaths in the United States. However, 41% of patients do not receive adequate screening, since the surveillance guidelines for colonoscopy are complex and are not easily recalled by health care providers. As a potential solution, we developed a guideline-based clinical decision support system (CDSS) that can interpret relevant free-text reports, including indications, pathology, and procedure notes. The CDSS was evaluated by comparing its recommendations with those of a gastroenterologist for a test set of 53 patients. The CDSS made the optimal recommendation in 48 cases, and helped the gastroenterologist revise the recommendation in 3 cases. We performed an error analysis for the 5 failure cases, and subsequently were able to modify the CDSS to output the correct recommendation for all the test cases. Results indicate that the system has a high potential for clinical deployment, but further evaluation and optimization are required. Limitations of our study are that it was conducted at a single institution and with a single expert, and the evaluation did not include rare decision scenarios. Overall, our work demonstrates the utility of natural language processing to enhance clinical decision support.
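The guideline-interpretation step of such a CDSS can be sketched as a rule table over NLP-extracted findings. The field names and interval values below are illustrative only; they are not clinical guidance and not the paper's actual rule set.

```python
# Hypothetical simplification of colonoscopy surveillance logic. Real
# guidelines (and the paper's CDSS) cover many more findings and qualifiers.

def surveillance_interval_years(findings: dict) -> int:
    """Map NLP-extracted colonoscopy findings to a follow-up interval (years)."""
    if findings.get("high_grade_dysplasia"):
        return 3  # shorter interval for advanced findings
    count = findings.get("adenoma_count", 0)
    largest_mm = findings.get("largest_polyp_mm", 0)
    if count >= 3 or largest_mm >= 10:
        return 3  # multiple or large adenomas
    if count >= 1:
        return 5  # one or two small adenomas
    return 10     # normal exam

# Example: one small adenoma extracted from a pathology note.
interval = surveillance_interval_years({"adenoma_count": 1, "largest_polyp_mm": 4})
```

The value of NLP here is upstream of this function: indications, pathology, and procedure notes must first be parsed into structured findings before any rule table can fire.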


American Journal of Public Health | 2013

Tracking Health Disparities Through Natural-Language Processing

Mark L. Wieland; Stephen T. Wu; Vinod Kaggal; Barbara P. Yawn

Health disparities and solutions are heterogeneous within and among racial and ethnic groups, yet existing administrative databases lack the granularity to reflect important sociocultural distinctions. We measured the efficacy of a natural-language-processing algorithm to identify a specific immigrant group. The algorithm demonstrated accuracy and precision in identifying Somali patients from the electronic medical records at a single institution. This technology holds promise to identify and track immigrants and refugees in the United States in local health care settings.
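At its core, such an identification algorithm maps note text to a cohort decision. Below is a minimal keyword-matching sketch with hypothetical patterns; the published algorithm's actual features, rules, and validation are not reproduced here.

```python
import re

# Illustrative patterns only; a deployed algorithm would use validated
# features and handle negation, templates, and false-positive contexts.
PATTERNS = [
    re.compile(r"\bsomali\b", re.IGNORECASE),
    re.compile(r"\bsomalia\b", re.IGNORECASE),
]

def mentions_group(note_text: str) -> bool:
    """True if any pattern matches the note text."""
    return any(p.search(note_text) for p in PATTERNS)

def identify_cohort(notes: dict) -> set:
    """Return patient IDs having at least one matching note.
    `notes` maps patient ID -> list of note texts."""
    return {pid for pid, texts in notes.items()
            if any(mentions_group(t) for t in texts)}
```

The paper's point is that free text carries sociocultural granularity (e.g., language or country-of-origin mentions) that structured race/ethnicity fields lack, which is why a text-based method can outperform administrative databases for this task.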


Conference on Information and Knowledge Management | 2011

Generality and reuse in a common type system for clinical natural language processing

Stephen T. Wu; Vinod Kaggal; Guergana Savova; Hongfang Liu; Jiaping Zheng; Wendy W. Chapman; Christopher G. Chute; Dmitriy Dligach

The aim of Area 4 of the Strategic Healthcare IT Advanced Research Project (SHARP 4) is to facilitate secondary use of data stored in Electronic Medical Records (EMRs) through high-throughput phenotyping. Clinical Natural Language Processing (NLP) plays an important role in transforming information in clinical text to a standard representation that is comparable and interoperable. To meet the NLP requirements of different secondary use cases of EMR data, accommodate different NLP approaches, and enable interoperability between structured and unstructured data generated in different clinical settings, we define a common type system for clinical NLP that integrates a comprehensive model of clinical semantics with language processing types for SHARP 4. The type system has been implemented in UIMA (Unstructured Information Management Architecture), which allows for flexible passing of input and output data types among NLP components, and is available at the SHARP 4 website.


Language Resources and Evaluation | 2008

System Evaluation on a Named Entity Corpus from Clinical Notes.

Karin Schuler; Vinod Kaggal; James J. Masanz; Philip V. Ogren; Guergana Savova


Journal of the American Medical Informatics Association | 2016

Clinical element models in the SHARPn consortium

Thomas A. Oniki; Ning Zhuo; Calvin Beebe; Hongfang Liu; Joseph F. Coyle; Craig G. Parker; Harold R. Solbrig; Kyle Marchant; Vinod Kaggal; Christopher G. Chute; Stanley M. Huff


Text Retrieval Conference | 2011

Empirical Ontologies for Cohort Identification.

Stephen T. Wu; Kavishwar B. Wagholikar; Sunghwan Sohn; Vinod Kaggal; Hongfang Liu


Journal of the American College of Cardiology | 2018

Automated Data Extraction from Electronic Health Records to Create Novel Prognostic Models for Peripheral Artery Disease

Adelaide M. Arruda-Olson; Naveed Afzal; Vishnu Priya Mallipeddi; Ahmad Said; Homam Moussa Pacha; Alisha P. Chaudhry; Christopher B. Scott; Kent R. Bailey; Thom W. Rooke; Paul Wennberg; Vinod Kaggal; Iftikhar J. Kullo; Rajeev Chaudhry; Hongfang Liu

Collaboration

Top co-authors of Vinod Kaggal:

Guergana Savova (Boston Children's Hospital)
Dmitriy Dligach (Loyola University Chicago)