Vojtech Huser
National Institutes of Health
Publications
Featured research published by Vojtech Huser.
Studies in health technology and informatics | 2015
George Hripcsak; Jon D. Duke; Nigam H. Shah; Christian G. Reich; Vojtech Huser; Martijn J. Schuemie; Marc A. Suchard; Rae Woong Park; Ian C. K. Wong; Peter R. Rijnbeek; Johan van der Lei; Nicole L. Pratt; G. Niklas Norén; Yu Chuan Li; Paul E. Stang; David Madigan; Patrick B. Ryan
The vision of creating accessible, reliable clinical evidence by accessing the clinical experience of hundreds of millions of patients across the globe is a reality. Observational Health Data Sciences and Informatics (OHDSI) has built on lessons learned from the Observational Medical Outcomes Partnership to turn methods research and insights into a suite of applications and exploration tools that move the field closer to the ultimate goal of generating evidence about all aspects of healthcare to serve the needs of patients, clinicians and all other decision-makers around the world.
Journal of the American Medical Informatics Association | 2013
Vojtech Huser; James J. Cimino
OBJECTIVE: To determine whether two specific criteria in the Uniform Requirements for Manuscripts (URM) created by the International Committee of Medical Journal Editors (ICMJE), namely inclusion of the trial registration ID within manuscripts and timely registration of trials, are being followed. MATERIALS AND METHODS: Observational study using computerized analysis of publicly available Medline article data and clinical trial registry data. We analyzed a purposive set of five ICMJE founding journals, looking at all trial articles published in those journals during 2010-2011, and data from the ClinicalTrials.gov (CTG) trial registry. We measured adherence to the trial ID inclusion policy as the percentage of trial journal articles that contained a valid trial ID within the article (journal-based sample). Adherence to timely registration was measured as the percentage of trials that were registered before enrollment of the first participant, within a 60-day grace period. We also examined timely registration rates by year for all phase II and higher interventional trials in CTG (registry-based sample). RESULTS: To determine trial ID inclusion, we analyzed 698 clinical trial articles in five journals. A total of 95.8% (661/690) of trial journal articles included the trial ID. In 88.3% of cases, the trial-article link was stored within a structured Medline field. To evaluate timely registration, we analyzed trials referenced by 451 articles from the selected five journals. A total of 60% (272/451) of articles were registered in a timely manner, with an improving trend for trials initiated in later years (eg, 89% of trials that began in 2008 were registered in a timely manner). In the registry-based sample, timely registration rates ranged from 56% for trials registered in 2006 to 72% for trials registered in 2011. DISCUSSION: Adherence to URM requirements for registration and trial ID inclusion increases the utility of PubMed and links it in an important way to clinical trial repositories.
This new integrated knowledge source can facilitate research prioritization, clinical guideline creation, and precision medicine. CONCLUSIONS: The five selected journals adhere well to the policy of mandatory trial registration and also outperform the registry in adherence to timely registration. The ICMJE's URM policy represents a unique international mandate that may provide a powerful incentive for sponsors and investigators to document clinical trials and trial result publications, and thus fulfill important obligations to trial participants and society.
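The timely-registration criterion described in the abstract above (registration no later than first-participant enrollment, with a 60-day grace period) reduces to a simple date comparison. The following sketch is illustrative only; the function name and inputs are assumptions, not the authors' actual analysis code.

```python
from datetime import date, timedelta

GRACE_PERIOD = timedelta(days=60)

def registered_timely(registration_date: date, enrollment_start: date,
                      grace: timedelta = GRACE_PERIOD) -> bool:
    """A trial counts as timely registered if its registry entry was
    created no more than 60 days after the first participant enrolled."""
    return registration_date <= enrollment_start + grace

# Registered 45 days after enrollment began: within the grace period
print(registered_timely(date(2010, 2, 14), date(2010, 1, 1)))  # True
```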
Proceedings of the National Academy of Sciences of the United States of America | 2016
George Hripcsak; Patrick B. Ryan; Jon D. Duke; Nigam H. Shah; Rae Woong Park; Vojtech Huser; Marc A. Suchard; Martijn J. Schuemie; Frank J. DeFalco; Adler J. Perotte; Juan M. Banda; Christian G. Reich; Lisa M. Schilling; Michael E. Matheny; Daniella Meeker; Nicole L. Pratt; David Madigan
Observational research promises to complement experimental research by providing large, diverse populations that would be infeasible for an experiment. Observational research can test its own clinical hypotheses, and observational studies also can contribute to the design of experiments and inform the generalizability of experimental research. Understanding the diversity of populations and the variance in care is one component. In this study, the Observational Health Data Sciences and Informatics (OHDSI) collaboration created an international data network with 11 data sources from four countries, including electronic health records and administrative claims data on 250 million patients. All data were mapped to common data standards, patient privacy was maintained by using a distributed model, and results were aggregated centrally. Treatment pathways were elucidated for type 2 diabetes mellitus, hypertension, and depression. The pathways revealed that the world is moving toward more consistent therapy over time across diseases and across locations, but significant heterogeneity remains among sources, pointing to challenges in generalizing clinical trial results. Diabetes favored a single first-line medication, metformin, to a much greater extent than hypertension or depression. About 10% of diabetes and depression patients and almost 25% of hypertension patients followed a treatment pathway that was unique within the cohort. Aside from factors such as sample size and underlying population (academic medical center versus general population), electronic health records data and administrative claims data revealed similar results. Large-scale international observational research is feasible.
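The treatment-pathway analysis summarized above amounts to ordering each patient's medication records by start date and tabulating the first-line choice. This is a minimal sketch of that idea; the record layout and drug names are hypothetical and do not reflect the OHDSI implementation.

```python
from collections import Counter

# Hypothetical records: (patient_id, drug, start_date as ISO string)
prescriptions = [
    (1, "metformin",    "2012-01-05"),
    (1, "sulfonylurea", "2013-03-10"),
    (2, "metformin",    "2012-06-01"),
    (3, "insulin",      "2011-11-20"),
]

def first_line_counts(records):
    """Count which drug each patient received first (the first step
    of that patient's treatment pathway)."""
    first = {}
    for pid, drug, start in sorted(records, key=lambda r: r[2]):
        first.setdefault(pid, drug)  # keep only the earliest drug per patient
    return Counter(first.values())

print(first_line_counts(prescriptions))
# Counter({'metformin': 2, 'insulin': 1})
```

The same grouping extends to full pathways by keeping the whole ordered drug sequence per patient instead of only the first element.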
Drug Safety | 2014
Richard D. Boyce; Patrick B. Ryan; G. Niklas Norén; Martijn J. Schuemie; Christian G. Reich; Jon D. Duke; Nicholas P. Tatonetti; Gianluca Trifirò; Rave Harpaz; J. Marc Overhage; Abraham G. Hartzema; Mark Khayter; Erica A. Voss; Christophe G. Lambert; Vojtech Huser; Michel Dumontier
The entire drug safety enterprise has a need to search, retrieve, evaluate, and synthesize scientific evidence more efficiently. This discovery and synthesis process would be greatly accelerated through access to a common framework that brings all relevant information sources together within a standardized structure. This presents an opportunity to establish an open-source community effort to develop a global knowledge base, one that brings together and standardizes all available information for all drugs and all health outcomes of interest (HOIs) from all electronic sources pertinent to drug safety. To make this vision a reality, we have established a workgroup within the Observational Health Data Sciences and Informatics (OHDSI, http://ohdsi.org) collaborative. The workgroup’s mission is to develop an open-source standardized knowledge base for the effects of medical products and an efficient procedure for maintaining and expanding it. The knowledge base will make it simpler for practitioners to access, retrieve, and synthesize evidence so that they can reach a rigorous and accurate assessment of causal relationships between a given drug and HOI. Development of the knowledge base will proceed with the measurable goal of supporting an efficient and thorough evidence-based assessment of the effects of 1,000 active ingredients across 100 HOIs. This non-trivial task will result in a high-quality and generally applicable drug safety knowledge base. It will also yield a reference standard of drug–HOI pairs that will enable more advanced methodological research that empirically evaluates the performance of drug safety analysis methods.
Journal of Medical Systems | 2014
Salvador Rodriguez Loya; Kensaku Kawamoto; Chris Chatwin; Vojtech Huser
The use of a service-oriented architecture (SOA) has been identified as a promising approach for improving health care by facilitating reliable clinical decision support (CDS). A review of the literature through October 2013 identified 44 articles on this topic. The review suggests that SOA-related technologies such as Business Process Model and Notation (BPMN) and Service Component Architecture (SCA) have not been generally adopted to impact health IT systems’ performance for better care solutions. Additionally, technologies such as Enterprise Service Bus (ESB) and architectural approaches like Service Choreography have not been generally exploited among researchers and developers. Based on the experience of other industries and our observation of the evolution of SOA, we found that greater use of these approaches has the potential to significantly impact SOA implementations for CDS.
Pharmacogenomics and Personalized Medicine | 2014
Vojtech Huser; Murat Sincan; James J. Cimino
Personalized medicine, the ability to tailor diagnostic and treatment decisions for individual patients, is seen as the evolution of modern medicine. We characterize here the informatics resources available today or envisioned in the near future that can support clinical interpretation of genomic test results. We assume a clinical sequencing scenario (germline whole-exome sequencing) in which a clinical specialist, such as an endocrinologist, needs to tailor patient management decisions within his or her specialty (targeted findings) but relies on a genetic counselor to interpret off-target incidental findings. We characterize the genomic input data and list various types of knowledge bases that provide genomic knowledge for generating clinical decision support. We highlight the need for patient-level databases with detailed lifelong phenotype content in addition to genotype data and provide a list of recommendations for personalized medicine knowledge bases and databases. We conclude that no single knowledge base can currently support all aspects of personalized recommendations and that consolidation of several current resources into larger, more dynamic and collaborative knowledge bases may offer a future path forward.
International Journal of Radiation Oncology Biology Physics | 2016
Stanley H. Benedict; Karen E. Hoffman; Mary K. Martel; Amy P. Abernethy; Anthony L. Asher; Jacek Capala; Ronald C. Chen; B.S. Chera; Jennifer Couch; James A. Deye; Jason A. Efstathiou; Eric C. Ford; Benedick A. Fraass; Peter Gabriel; Vojtech Huser; Brian D. Kavanagh; Deepak Khuntia; Lawrence B. Marks; Charles Mayo; T.R. McNutt; Robert S. Miller; K Moore; Fred W. Prior; Erik Roelofs; Barry S. Rosenstein; Jeff A. Sloan; Anna Theriault; Bhadrasain Vikram
Big data research refers to the collection and analysis of large sets of data elements and interrelationships that are difficult to process with traditional methods. It can be considered a subspecialty of the medical informatics domain under data science and analytics. This approach has been used in many areas of medicine to address topics such as clinical care and quality assessment (1–3). The need for informatics research in radiation oncology emerged as an important initiative during the 2013 National Institutes of Health (NIH)–National Cancer Institute (NCI), American Society for Radiation Oncology (ASTRO), and American Association of Physicists in Medicine (AAPM) workshop on the topic “Technology for Innovation in Radiation Oncology” (4). Our existing clinical practice generates discrete, quantitative, and structured patient-specific data (eg, images, doses, and volumes) that position us well to exploit and participate in big data initiatives. The well-established electronic infrastructure within radiation oncology should facilitate the retrieval and aggregation of much of the needed data. With additional efforts to integrate structured data collection of patient outcomes and assessments into the clinical workflow, the field of radiation oncology has a tremendous opportunity to generate large, comprehensive patient-specific data sets (5). However, there are major challenges to realizing this goal. For example, existing data are presently housed across different platforms at multiple institutions and are often not stored in a standardized manner or with common terminologies to enable pooling of data. In addition, many important data elements are not routinely discretely captured in clinical practice. There are cultural, structural, and logistical challenges (eg, computer compatibility and workflow demands) that will make the dream of big data research difficult. 
The big data research workshop provided a forum for leaders in cancer registries, incident report quality-assurance systems, radiogenomics, ontology of oncology, and a wide range of ongoing big data and cloud computing development projects to interact with peers in radiation oncology to develop strategies to harness data for research, quality assessment, and clinical care. The workshop provided a platform to discuss items such as data capture, data infrastructure, and protection of patient confidentiality and to improve awareness of the wide-ranging opportunities in radiation oncology, as well as to enhance the potential for research and collaboration opportunities with NIH on big data initiatives. The goals of the workshop were as follows: To discuss current and future sources of big data for use in radiation oncology research, To identify ways to improve our current data collection methods by adopting new strategies used in fields outside of radiation oncology, and To consider what new knowledge and solutions big data research can provide for clinical decision support for personalized medicine.
International Journal of Radiation Oncology Biology Physics | 2016
Vojtech Huser; James J. Cimino
Advances in data storage and data analysis have also materialized in health care data. In recent years, we have seen an emphasis on using the full potential (1, 2) of these data to answer questions such as: which patients received radiation therapy as primary treatment? Who among such patients experienced radiation therapy-related complications? Given everything you know about my case, what is the chance that if I choose radiation therapy, I will experience incontinence in the next year? Factors contributing to this trend include more rapid data querying technologies, cheaper data storage, the addition of genomic data to traditional clinical data sets, “meaningful use” incentives for increasing the adoption of electronic health records, and the recent emergence of precision medicine (3). In this perspective paper, we discuss several challenges ahead for big data, some that are being addressed now and others that will need to be addressed in the near future. The list of challenges presented here is not meant to be exhaustive but is rather driven by our big data experience. For each challenge, we comment on current approaches to addressing it.
Journal of Biomedical Semantics | 2017
Richard D. Boyce; Erica A. Voss; Vojtech Huser; Lee Evans; Christian G. Reich; Jon D. Duke; Nicholas P. Tatonetti; Tal Lorberbaum; Michel Dumontier; Manfred Hauben; Magnus Wallberg; Lili Peng; Sara Dempster; Yongqun He; Anthony G. Sena; Vassilis Koutkias; Pantelis Natsiavas; Patrick B. Ryan
Background: Integrating multiple sources of pharmacovigilance evidence has the potential to advance the science of safety signal detection and evaluation. In this regard, there is a need for more research on how to integrate multiple disparate evidence sources while making the evidence computable from a knowledge representation perspective (i.e., semantic enrichment). Existing frameworks suggest promising outcomes for such integration but employ a rather limited number of sources. In particular, none have been specifically designed to support both regulatory and clinical use cases, nor have any been designed to add new resources and use cases through an open architecture. This paper discusses the architecture and functionality of a system called Large-scale Adverse Effects Related to Treatment Evidence Standardization (LAERTES) that aims to address these shortcomings. Results: LAERTES provides a standardized, open, and scalable architecture for linking evidence sources relevant to the association of drugs with health outcomes of interest (HOIs). Standard terminologies are used to represent different entities; for example, drugs and HOIs are represented in RxNorm and Systematized Nomenclature of Medicine Clinical Terms, respectively. At the time of this writing, six evidence sources have been loaded into the LAERTES evidence base and are accessible through a prototype evidence exploration user interface and a set of Web application programming interface services. This system operates within the larger software stack provided by the Observational Health Data Sciences and Informatics clinical research framework, including the relational Common Data Model for observational patient data created by the Observational Medical Outcomes Partnership.
Elements of the Linked Data paradigm facilitate the systematic and scalable integration of relevant evidence sources. Conclusions: The prototype LAERTES system provides useful functionality while creating opportunities for further research. Future work will involve improving the method for normalizing drug and HOI concepts across the integrated sources, aggregating evidence at different levels of a hierarchy of HOI concepts, and developing a more advanced user interface for drug-HOI investigations.
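The core data structure the LAERTES abstract describes, evidence sources linked to drug-HOI pairs under standard terminologies, can be sketched as a simple grouping. Everything below is illustrative: the record layout is an assumption, and the codes shown are placeholders rather than verified terminology entries.

```python
from collections import defaultdict

# Hypothetical evidence records, already normalized to standard codes:
# (drug code, health-outcome-of-interest code, evidence source name).
# Codes are illustrative placeholders, not verified RxNorm/SNOMED CT entries.
evidence = [
    ("6809", "271737000", "literature"),
    ("6809", "271737000", "spontaneous_reports"),
    ("6809", "80394007",  "product_labels"),
]

def evidence_profile(records):
    """Group the evidence sources available for each (drug, HOI) pair,
    the kind of profile an evidence base like LAERTES might expose."""
    profile = defaultdict(set)
    for drug, hoi, source in records:
        profile[(drug, hoi)].add(source)
    return profile

p = evidence_profile(evidence)
print(sorted(p[("6809", "271737000")]))  # ['literature', 'spontaneous_reports']
```

A drug-HOI assessment would then weigh how many independent source types support a given pair, rather than treating each record in isolation.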
eGEMs (Generating Evidence & Methods to improve patient outcomes) | 2016
Vojtech Huser; Frank J. DeFalco; Martijn J. Schuemie; Patrick B. Ryan; Ning Shang; Mark Velez; Rae Woong Park; Richard D. Boyce; Jon D. Duke; Ritu Khare; Levon Utidjian; Charles Bailey
Introduction: Data quality and fitness for analysis are crucial if outputs of analyses of electronic health record data or administrative claims data are to be trusted by the public and the research community. Methods: We describe a data quality analysis tool (called Achilles Heel) developed by the Observational Health Data Sciences and Informatics (OHDSI) collaborative and compare outputs from this tool as it was applied to 24 large healthcare datasets across seven different organizations. Results: We highlight 12 data quality rules that identified issues in at least 10 of the 24 datasets and provide the full set of 71 rules identified in at least one dataset. Achilles Heel is freely available software that provides a useful starter set of data quality rules with the ability to add additional rules. We also present results of a structured email-based interview of all participating sites that collected qualitative comments about the value of Achilles Heel for data quality evaluation. Discussion: Our analysis represents the first comparison of outputs from a data quality tool that implements a fixed (but extensible) set of data quality rules. Thanks to a common data model, we were able to quickly compare multiple datasets originating from several countries in the Americas, Europe, and Asia.
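A data quality rule of the kind Achilles Heel applies can be thought of as a predicate evaluated over patient records, with violations reported per dataset. The sketch below is a made-up example of one such rule; the record layout, threshold, and reference year are assumptions, not part of the tool's actual rule set.

```python
# Hypothetical person records: (person_id, year_of_birth)
persons = [(1, 1950), (2, 2030), (3, 1880), (4, 1985)]

def implausible_birth_year(year, current_year=2016, max_age=125):
    """Flag birth years in the future or implying an age above max_age.
    The threshold of 125 years is an illustrative assumption."""
    return year > current_year or current_year - year > max_age

# Collect the IDs that violate the rule, as a quality report might
violations = [pid for pid, yob in persons if implausible_birth_year(yob)]
print(violations)  # [2, 3]
```

A fixed catalog of such predicates, run identically against every dataset in a common data model, is what makes the cross-site comparison described above possible.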