Publications


Featured research published by Meredith Nahm.


PLOS ONE | 2008

Quantifying data quality for clinical trials using electronic data capture.

Meredith Nahm; Carl F. Pieper; Maureen M. Cunningham

Background: Historically, only partial assessments of data quality have been performed in clinical trials, for which the most common method of measuring database error rates has been to compare the case report form (CRF) to database entries and count discrepancies. Importantly, errors arising from medical record abstraction and transcription are rarely evaluated as part of such quality assessments. Electronic Data Capture (EDC) technology has had a further impact, as paper CRFs typically leveraged for quality measurement are not used in EDC processes.

Methods and Principal Findings: The National Institute on Drug Abuse Treatment Clinical Trials Network has developed, implemented, and evaluated methodology for holistically assessing data quality on EDC trials. We characterize the average source-to-database error rate (14.3 errors per 10,000 fields) for the first year of use of the new evaluation method. This error rate was significantly lower than the average of published error rates for source-to-database audits, and was similar to CRF-to-database error rates reported in the published literature. We attribute this largely to an absence of medical record abstraction on the trials we examined, and to an outpatient setting characterized by less acute patient conditions.

Conclusions: Historically, medical record abstraction is the most significant source of error by an order of magnitude, and should be measured and managed during the course of clinical trials. Source-to-database error rates are highly dependent on the amount of structured data collection in the clinical setting and on the complexity of the medical record, dependencies that should be considered when developing data quality benchmarks.
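
The headline metric is easy to reproduce: errors per 10,000 fields is simply the discrepancy count scaled by the number of fields inspected. A minimal sketch in Python (the counts below are invented for illustration and merely chosen to land near the paper's 14.3 figure):

def errors_per_10k(error_count: int, fields_inspected: int) -> float:
    """Error rate expressed per 10,000 fields inspected."""
    if fields_inspected <= 0:
        raise ValueError("fields_inspected must be positive")
    return 10_000 * error_count / fields_inspected

# Hypothetical audit: 43 discrepancies found across 30,000 inspected fields
print(errors_per_10k(43, 30_000))  # -> 14.33... errors per 10,000 fields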


Academic Medicine | 2009

Synergies and Distinctions between Computational Disciplines in Biomedical Research: Perspective from the Clinical and Translational Science Award Programs

Elmer V. Bernstam; William R. Hersh; Stephen B. Johnson; Christopher G. Chute; Hien H. Nguyen; Ida Sim; Meredith Nahm; Mark G. Weiner; Perry L. Miller; Robert P. DiLaura; Marc Overcash; Harold P. Lehmann; David Eichmann; Brian D. Athey; Richard H. Scheuermann; Nicholas R. Anderson; Justin Starren; Paul A. Harris; Jack W. Smith; Ed Barbour; Jonathan C. Silverstein; David A. Krusch; Rakesh Nagarajan; Michael J. Becich

Clinical and translational research increasingly requires computation. Projects may involve multiple computationally oriented groups including information technology (IT) professionals, computer scientists, and biomedical informaticians. However, many biomedical researchers are not aware of the distinctions among these complementary groups, leading to confusion, delays, and suboptimal results. Although written from the perspective of Clinical and Translational Science Award (CTSA) programs within academic medical centers, this article addresses issues that extend beyond clinical and translational research. The authors describe the complementary but distinct roles of operational IT, research IT, computer science, and biomedical informatics using a clinical data warehouse as a running example. In general, IT professionals focus on technology. The authors distinguish between two types of IT groups within academic medical centers: central or administrative IT (supporting the administrative computing needs of large organizations) and research IT (supporting the computing needs of researchers). Computer scientists focus on general issues of computation such as designing faster computers or more efficient algorithms, rather than specific applications. In contrast, informaticians are concerned with data, information, and knowledge. Biomedical informaticians draw on a variety of tools, including but not limited to computers, to solve information problems in health care and biomedicine. The paper concludes with recommendations regarding administrative structures that can help to maximize the benefit of computation to biomedical research within academic health centers.


Clinical Trials | 2009

What can we learn from a decade of database audits? The Duke Clinical Research Institute experience, 1997-2006.

Reza Rostami; Meredith Nahm; Carl F. Pieper

Background: Despite a pressing and well-documented need for better sharing of information on clinical trials data quality assurance methods, many research organizations remain reluctant to publish descriptions of and results from their internal auditing and quality assessment methods.

Purpose: We present findings from a review of a decade of internal data quality audits performed at the Duke Clinical Research Institute (DCRI), a large academic research organization that conducts data management for a diverse array of clinical studies, both academic and industry-sponsored. In so doing, we hope to stimulate discussions that could benefit the wider clinical research enterprise by providing insight into methods of optimizing data collection and cleaning, ultimately helping patients and furthering essential research.

Methods: We present our audit methodologies, including sampling methods, audit logistics, sample sizes, counting rules used for error rate calculations, and characteristics of audited trials. We also present database error rates as computed according to two analytical methods, which we address in detail, and discuss the advantages and drawbacks of two auditing methods used during this 10-year period.

Results: Our review of the DCRI audit program indicates that higher data quality may be achieved from a series of small audits throughout the trial rather than through a single large database audit at database lock. We found that error rates trended upward from year to year in the period characterized by traditional audits performed at database lock (1997-2000), but consistently trended downward after periodic statistical process control type audits were instituted (2001-2006). These increases in data quality were also associated with cost savings in auditing, estimated at 1,000 hours per year, or the efforts of one-half of a full-time equivalent (FTE).

Limitations: Our findings are drawn from retrospective analyses and are not the result of controlled experiments, and may therefore be subject to unanticipated confounding. In addition, the scope and type of audits we examine here are specific to our institution, and our results may not be broadly generalizable.

Conclusions: Use of statistical process control methodologies may afford advantages over more traditional auditing methods, and further research will be necessary to confirm the reliability and usability of such techniques. We believe that open and candid discussion of data quality assurance issues among academic and clinical research organizations will ultimately benefit the entire research community in the coming era of increased data sharing and re-use.

Clinical Trials 2009; 6: 141-150. http://ctj.sagepub.com
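
A statistical process control audit treats each periodic audit sample as one point on a control chart, flagging samples whose error proportion falls outside computed limits. A minimal sketch of three-sigma p-chart limits, with hypothetical counts that are not taken from the paper:

from math import sqrt

def p_chart_limits(total_errors: int, total_fields: int, sample_size: int):
    """Center line and three-sigma limits for a p-chart tracking the
    proportion of fields in error in each periodic audit sample."""
    p_bar = total_errors / total_fields              # historical error proportion
    sigma = sqrt(p_bar * (1 - p_bar) / sample_size)  # binomial standard error
    lower = max(0.0, p_bar - 3 * sigma)
    upper = p_bar + 3 * sigma
    return lower, p_bar, upper

# Hypothetical history: 120 errors in 80,000 fields; each audit samples 2,000 fields
lower, center, upper = p_chart_limits(120, 80_000, 2_000)
# An audit sample whose error proportion exceeds `upper` signals a process
# shift worth investigating during the trial, rather than at database lock.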


Drug Information Journal | 2007

Data Standards: At the Intersection of Sites, Clinical Research Networks, and Standards Development Initiatives

Brian McCourt; Robert A. Harrington; Kathleen Fox; Carol D. Hamilton; Kimberly Booher; William E. Hammond; Anita Walden; Meredith Nahm

Interactions between the health care and clinical research communities are currently inefficient. The present environment forces unnecessary redundancy, from the capture of patient data in the clinician-patient encounter to multiple uses of that data. Clinical research operations must become more integrated with health care processes to improve efficiencies in both patient care and research. Achieving a single instance of data capture to serve the combined needs of both environments should facilitate translation of knowledge from research into better patient care. A critical first step in achieving true interoperability is to develop formal data standards that are then adopted by the larger health care and research communities. The rewards of interoperability include streamlined subject screening and enrollment procedures, improved reporting, merging and subsequent analysis of clinical data sets, and expansion of knowledge made possible by leveraging research data and results from other domains in the health care community, all of which would increase the quality of patient care. The aggregation of data across multiple sites and the subsequent reuse of that data do face challenges, particularly in ensuring patient privacy; however, these can be overcome by technological innovation and consensus-building among stakeholders.


Archive | 2012

Data Quality in Clinical Research

Meredith Nahm

Every scientist knows that research results are only as good as the data upon which the conclusions were formed. However, most scientists receive no training in methods for achieving, assessing, or controlling the quality of research data: topics central to clinical research informatics. This chapter covers the basics of collecting and processing research data given the available data sources, systems, and people. Data quality dimensions specific to the clinical research context are used, and a framework for data quality practice and planning is developed. Available research is summarized, providing estimates of data quality capability for common clinical research data collection and processing methods. This chapter provides researchers, informaticists, and clinical research data managers with basic tools to plan, achieve, and control the quality of research data.


Journal of Biomedical Informatics | 2009

Operationalization of the UFuRT methodology for usability analysis in the clinical research data management domain

Meredith Nahm; Jiajie Zhang

Data management software applications specifically designed for the clinical research environment are increasingly available from commercial vendors and open-source communities; however, general-purpose spreadsheets remain widely employed in clinical research data management (CRDM). The suitability of spreadsheets for this use is controversial, and no formal comparative usability evaluations have been performed. We report on an application of the UFuRT (user, function, representation, and task analyses) methodology to create a domain-specific process for usability evaluation. We demonstrate this process in an evaluation of differences in usability between a spreadsheet program (Microsoft Excel) and a commercially available clinical research data management system (Phase Forward Clintrial). Through this domain-specific operationalization of the UFuRT methodology, we successfully identified usability differences and quantified task and cost differences, while differentiating these from socio-technical aspects. UFuRT can be generalized to other domains.
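
The task and cost quantification rests on enumerating the steps each tool requires for the same CRDM task and pricing them by volume. A purely hypothetical sketch of that arithmetic (step counts, step time, and task volume are all invented for illustration; the paper's actual inventories come from the UFuRT analyses):

# Hypothetical step counts for the same task, e.g. logging one data correction
steps_per_task = {"spreadsheet": 12, "CDMS": 7}
SECONDS_PER_STEP = 4.0   # assumed average time per step
TASKS_PER_MONTH = 500    # assumed task volume

for tool, steps in steps_per_task.items():
    hours = steps * SECONDS_PER_STEP * TASKS_PER_MONTH / 3600
    print(f"{tool}: {hours:.1f} hours/month")  # cost difference between tools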


Journal of Medical Systems | 2011

Impact of the Patient-Reported Outcomes Management Information System (PROMIS) upon the Design and Operation of Multi-center Clinical Trials: a Qualitative Research Study

Eric L. Eisenstein; Lawrence W. Diener; Meredith Nahm; Kevin P. Weinfurt

New technologies may be required to integrate the National Institutes of Health's Patient-Reported Outcomes Measurement Information System (PROMIS) into multi-center clinical trials. To better understand this need, we identified likely PROMIS reporting formats, developed a multi-center clinical trial process model, and identified gaps between current capabilities and those necessary for PROMIS. These results were evaluated by key trial constituencies. Issues reported by principal investigators fell into two categories: acceptance by key regulators and the scientific community, and usability for researchers and clinicians. Issues reported by the coordinating center, participating sites, and study subjects were those faced when integrating new technologies into existing clinical trial systems. We then defined elements of a PROMIS Tool Kit required for integrating PROMIS into a multi-center clinical trial environment. The requirements identified in this study serve as a framework for future investigators in the design, development, implementation, and operation of PROMIS Tool Kit technologies.


International Journal of Functional Informatics and Personalised Medicine | 2010

Standardising clinical data elements

Meredith Nahm; Anita Walden; Brian McCourt; Karen S. Pieper; Emily Honeycutt; Carol D. Hamilton; Robert A. Harrington; Jane Diefenbach; Bron Kisler; Mead Walker; W. Ed Hammond

We report the development and implementation of a methodology for standardising clinical data elements. The methodology, piloted using the Tuberculosis (TB) and Acute Coronary Syndromes (ACS) domains, relies on clinicians for natural language definitions and on informaticists for computable specifications. Data elements are represented using the ISO 11179 standard and UML class and activity diagrams. Over 2,000 candidate data elements were compiled for each domain. Initial sets of 21 data elements for ACS and 139 for TB, plus 300 valid values, were standardised and made publicly available. The methodology is now used in HL7 for data element definition in other clinical areas.
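
In ISO 11179 terms, each standardized data element pairs a concept with a value domain of permissible values. A minimal sketch of such a record (field names are simplified from the standard, and the example element is invented rather than one of the published ACS/TB elements):

from dataclasses import dataclass, field

@dataclass
class DataElement:
    """Simplified ISO 11179-style data element."""
    name: str                   # designation, e.g. "Smoking Status"
    definition: str             # clinician-authored natural-language definition
    concept: str                # the data element concept being measured
    permissible_values: list[str] = field(default_factory=list)  # value domain

smoking_status = DataElement(
    name="Smoking Status",
    definition="Subject's self-reported current tobacco smoking behavior.",
    concept="Tobacco use",
    permissible_values=["Current", "Former", "Never", "Unknown"],
)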


Clinical Trials | 2011

Design and implementation of an institutional case report form library

Meredith Nahm; John Shepherd; Ann Buzenberg; Reza Rostami; Andrew Corcoran; Jonathan McCall; Ricardo Pietrobon

Background: Case report forms (CRFs) are used to collect data in clinical research. Case report form development represents a significant part of the clinical trial process and can affect study success. Libraries of CRFs can preserve the organizational knowledge and expertise invested in CRF development and expedite the sharing of such knowledge. Although CRF libraries have been advocated, there have been no published accounts reporting institutional experiences with creating and using them.

Purpose: We sought to enhance an existing institutional CRF library by improving information indexing and accessibility. We describe this CRF library and discuss challenges encountered in its development and implementation, as well as future directions for continued work in this area.

Methods: We transformed an existing but underused and poorly accessible CRF library into a resource capable of supporting and expediting clinical and translational investigation at our institution by (1) expanding access to the entire institution; (2) adding more form attributes for improved information retrieval; and (3) creating a formal information curation and maintenance process. An open-source content management system, Plone (Plone.org), served as the platform for our CRF library.

Results: We report results from these three processes. Over the course of this project, the size of the CRF library increased from 160 CRFs comprising an estimated total of 17,000 pages, to 177 CRFs totaling 1.5 gigabytes. Eighty-two of these CRFs are now available to researchers across our institution; 95 CRFs remain within a contractual confidentiality window (usually 5 years from database lock) and are not available to users outside of the Duke Clinical Research Institute (DCRI). Conservative estimates suggest that the library supports an average of 37 investigators per month. The resources needed to curate and maintain the CRF library require less than 10% of the effort of one full-time equivalent employee.

Limitations: Although we succeeded in expanding use of the CRF library, creating awareness of such institutional resources among investigators and research teams remains challenging and requires additional efforts to overcome. Institutions that have not achieved a critical mass of attractive research resources or effective dissemination mechanisms may encounter persistent difficulty attracting researchers to use institutional resources. Further, a useful CRF library requires both an initial investment of resources for development, as well as ongoing maintenance once it is established.

Conclusions: CRF libraries can be established and made broadly available to institutional researchers. Curation – that is, indexing newly added forms – is required. Such a resource provides knowledge management capacity for institutions until standards and software are available to support widespread exchange of data and form definitions.
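
The "more form attributes" improvement amounts to faceted indexing: each CRF is tagged with retrieval attributes, and an inverted index is kept per attribute. A minimal sketch (attribute names and values are invented; the production library was built on the Plone content management system, not on code like this):

from collections import defaultdict

# Invented example records; each CRF carries retrieval attributes.
crfs = [
    {"id": "CRF-001", "therapeutic_area": "cardiology", "phase": "III"},
    {"id": "CRF-002", "therapeutic_area": "infectious disease", "phase": "II"},
    {"id": "CRF-003", "therapeutic_area": "cardiology", "phase": "II"},
]

# One inverted index per attribute supports faceted lookup.
index = defaultdict(lambda: defaultdict(list))
for crf in crfs:
    for attr in ("therapeutic_area", "phase"):
        index[attr][crf[attr]].append(crf["id"])

print(index["therapeutic_area"]["cardiology"])  # -> ['CRF-001', 'CRF-003']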


Clinical Trials | 2009

A centralized informatics infrastructure for the National Institute on Drug Abuse Clinical Trials Network.

Jeng-Jong Pan; Meredith Nahm; Paul Wakim; Carol Cushing; Lori Poole; Betty Tai; Carl F. Pieper

Background: Clinical trial networks (CTNs) were created to provide a sustaining infrastructure for the conduct of multisite clinical trials. As such, they must withstand changes in membership. Centralization of infrastructure in clinical research networks, including knowledge management, portfolio management, information management, process automation, and work policies and procedures, facilitates consistency and ultimately research.

Purpose: In 2005, the National Institute on Drug Abuse (NIDA) CTN transitioned from a distributed data management model to a centralized informatics infrastructure to support the network's trial activities and administration. We describe the centralized informatics infrastructure and discuss our challenges to inform others considering such an endeavor.

Methods: During the migration of a clinical trial network from a decentralized to a centralized data center model, descriptive data were captured and are presented here to assess the impact of centralization.

Results: We present the framework for the informatics infrastructure and evaluative metrics. The network has decreased the time from last patient-last visit to database lock from an average of 7.6 months to 2.8 months. The average database error rate decreased from 0.8% to 0.2%, with a corresponding decrease in the interquartile range from 0.04%-1.0% before centralization to 0.01%-0.27% after centralization. Centralization has provided the CTN with integrated trial status reporting and the first standards-based public data share. A preliminary cost-benefit analysis showed a 50% reduction in data management cost per study participant over the life of a trial.

Limitations: A single clinical trial network comprising addiction researchers and community treatment programs was assessed. The findings may not be applicable to other research settings.

Conclusions: The identified informatics components provide the information and infrastructure needed for our clinical trial network. Post-centralization, data management operations are more efficient and less costly, with higher data quality.

Clinical Trials 2009; 6: 67-75. http://ctj.sagepub.com
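
The summary statistics reported above (a mean error rate with an interquartile range across studies) are simple to compute from per-study rates. A minimal sketch with invented values, not the NIDA CTN data:

from statistics import mean, quantiles

# Hypothetical per-study database error rates (% of fields in error)
rates = [0.02, 0.05, 0.11, 0.20, 0.27, 0.40]

q1, _median, q3 = quantiles(rates, n=4)  # quartile cut points; IQR spans q1..q3
print(f"mean = {mean(rates):.2f}%, IQR = {q1:.2f}%-{q3:.2f}%")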

Collaboration


Dive into Meredith Nahm's collaboration.

Top Co-Authors


Ida Sim

University of California


Jiajie Zhang

University of Texas Health Science Center at Houston
