Network

External collaborations at the country level.

Hotspot

Research topics in which Mildred K. Cho is active.

Publications

Featured research published by Mildred K. Cho.


Journal of Law, Medicine & Ethics | 2008

Managing Incidental Findings in Human Subjects Research: Analysis and Recommendations

Susan M. Wolf; Frances Lawrenz; Charles A. Nelson; Jeffrey P. Kahn; Mildred K. Cho; Ellen Wright Clayton; Joel G. Fletcher; Michael K. Georgieff; Dale E. Hammerschmidt; Kathy Hudson; Judy Illes; Vivek Kapur; Moira A. Keane; Barbara A. Koenig; Bonnie S. LeRoy; Elizabeth G. McFarland; Jordan Paradise; Lisa S. Parker; Sharon F. Terry; Brian Van Ness; Benjamin S. Wilfond

No consensus yet exists on how to handle incidental findings (IFs) in human subjects research. Yet empirical studies document IFs in a wide range of research studies, where IFs are findings beyond the aims of the study that are of potential health or reproductive importance to the individual research participant. This paper reports recommendations of a two-year project group funded by NIH to study how to manage IFs in genetic and genomic research, as well as imaging research. We conclude that researchers have an obligation to address the possibility of discovering IFs in their protocol and communications with the IRB, and in their consent forms and communications with research participants. Researchers should establish a pathway for handling IFs and communicate that to the IRB and research participants. We recommend a pathway and categorize IFs into those that must be disclosed to research participants, those that may be disclosed, and those that should not be disclosed.


Pharmacogenomics Journal | 2001

Integrating genotype and phenotype information: an overview of the PharmGKB project

Teri E. Klein; Jeffrey T. Chang; Mildred K. Cho; K L Easton; R Fergerson; Michael Hewett; Zhen Lin; Yueyi Liu; Shuo Liu; Diane E. Oliver; Daniel L. Rubin; F Shafa; Joshua M. Stuart; Russ B. Altman

Pharmacogenetics seeks to explain how people respond in different ways to the same drug treatment. A classic example of the importance of pharmacogenomics is the variation in individual responses to the anti-leukemia drug, 6-mercaptopurine. Most people metabolize the drug quickly. Some individuals, with a genetic variation for the enzyme thiopurine methyltransferase (TPMT) [1], do not. Consequently, they need lower doses of 6-mercaptopurine for effective treatment, as normal doses can be lethal. One of the many promises of the human genome project is an ability to pharmacologically treat individuals in a more personalized rather than statistical manner.


Genetics in Medicine | 2012

Managing Incidental Findings and Research Results in Genomic Research Involving Biobanks and Archived Data Sets

Susan M. Wolf; Brittney Crock; Brian Van Ness; Frances Lawrenz; Jeffrey P. Kahn; Laura M. Beskow; Mildred K. Cho; Michael F. Christman; Robert C. Green; Ralph Hall; Judy Illes; Moira A. Keane; Bartha Maria Knoppers; Barbara A. Koenig; Isaac S. Kohane; Bonnie S. LeRoy; Karen J. Maschke; William McGeveran; Pilar N. Ossorio; Lisa S. Parker; Gloria M. Petersen; Henry S. Richardson; Joan Scott; Sharon F. Terry; Benjamin S. Wilfond; Wendy A. Wolf

Biobanks and archived data sets collecting samples and data have become crucial engines of genetic and genomic research. Unresolved, however, is what responsibilities biobanks should shoulder to manage incidental findings and individual research results of potential health, reproductive, or personal importance to individual contributors (using “biobank” here to refer both to collections of samples and collections of data). This article reports recommendations from a 2-year project funded by the National Institutes of Health. We analyze the responsibilities involved in managing the return of incidental findings and individual research results in a biobank research system (primary research or collection sites, the biobank itself, and secondary research sites). We suggest that biobanks shoulder significant responsibility for seeing that the biobank research system addresses the return question explicitly. When reidentification of individual contributors is possible, the biobank should work to enable the biobank research system to discharge four core responsibilities: (1) clarify the criteria for evaluating findings and the roster of returnable findings, (2) analyze a particular finding in relation to this, (3) reidentify the individual contributor, and (4) recontact the contributor to offer the finding. We suggest that findings that are analytically valid, reveal an established and substantial risk of a serious health condition, and are clinically actionable should generally be offered to consenting contributors. This article specifies 10 concrete recommendations, addressing new biobanks as well as those already in existence. (Genet Med 2012;14(4):361–384)


Annals of Internal Medicine | 1996

The Quality of Drug Studies Published in Symposium Proceedings

Mildred K. Cho; Lisa Bero

For physicians, pharmacists, pharmacologists, and others, the medical literature is a key source of information about prescription drugs [1, 2]. The medical literature on drugs includes articles from peer-reviewed journals, non-peer-reviewed (controlled circulation or throwaway) journals, and the published proceedings of symposia [3, 4]. Symposia are a rapidly growing and potentially major means of disseminating information about drugs. In the clinical journals with the highest circulation rates, the number of symposia published increased from 83 during 1972-1977 to 307 during 1984-1989. Approximately half of these symposia were on pharmaceutical topics [4]. Symposia can be valuable sources of information about drugs, but evidence suggests that they can also be used to market drugs and other interventions, especially if they are industry sponsored. Approximately 70% of symposia on pharmaceutical topics are sponsored by drug companies [3, 4]. Among symposia, sponsorship by a single drug company is associated with promotional characteristics that include a focus on a single drug, misleading titles, use of brand names, and lack of peer review [4]. Other studies indicate that clinical trials, including those published in symposia, are more likely to favor a new drug therapy if they are funded by the pharmaceutical industry than if they are not [5, 6]. Although physicians often report that the peer-reviewed literature is one of their main sources of drug information, industry sources of information can sometimes have a stronger influence on prescribing behavior [2]. Thus, if symposia sponsored by drug companies are a growing source of information about drugs for pharmacists and physicians, assessing the quality of the articles in these symposia is important. We compared the methodologic quality and relevance of drug studies published in symposia sponsored by single drug companies with those of studies that were published in symposia that had other sponsors or in the peer-reviewed parent journals. We also assessed whether a methods section was present, because such a section is necessary for evaluating quality. Finally, we tested whether drug industry support of research was associated with study outcome.

Methods

A symposium is a collection of papers published as a separate issue or as a special section in a regular issue of a medical journal [4]. We defined original clinical drug articles as articles that 1) appeared to present original data from studies done in humans [that is, articles that had at least one table or figure that was not acknowledged to have been reprinted from another source] and 2) did not specifically state that they were reviews [4].

Selection of Articles

We identified original clinical drug articles that had a section describing the study methods, because such a section is needed to assess the quality of an article. Using a computer-generated list of random numbers from 1 to 625, we randomly selected symposia from 625 symposia that had been identified for a previous study [4]. We had data on the type of sponsorship of publication for each symposium. From each selected symposium, we randomly selected one original clinical drug article that had a methods section. We continued selecting symposia until we had enough articles (n = 127) according to the sample size estimates described below. We also calculated the proportion of articles in the selected symposia, overall and by type of sponsorship, that had methods sections.
Quality Assessment

We compared the quality of original clinical drug articles published in symposia sponsored by single drug companies with that of similar articles published in symposia that had other sponsors and in the peer-reviewed parent journals.

Sample Size Estimates

We estimated the sample size needed to test the association between the independent variable (type of sponsorship of publication) and the main outcome measure (methodologic quality score). For a three-group comparison, a minimum sample of 108 symposium articles was needed to detect a minimum effect size of 0.10 (on a scale of 0 to 1), with an α value of 0.05, a power of 0.80, and a standard deviation of quality scores of 0.18 based on previous results [7]. To compare articles from symposia sponsored by single pharmaceutical companies with articles from the peer-reviewed parent journals, we estimated that we would need 45 symposium articles and 45 journal articles; this estimate was the result of sample size calculations done using the variables described above. Because date of publication, journal, and therapeutic class of drug could have confounded the association between source of publication and quality [8-10], we matched each symposium article to an article from the parent journal by using these characteristics, as described previously [7]. Our sample of symposium articles contained 50 articles sponsored by single drug companies, but 5 articles published in Transplantation Proceedings were excluded from this analysis because no parent journal is associated with that publication.

Instruments

We used previously developed instruments to measure the methodologic quality of articles (defined as the minimization of systematic bias and the consistency of conclusions with results) and nonmethodologic indices of quality, such as clinical relevance and generalizability. Both instruments were valid and reliable and have been published elsewhere [7]. Four reviewers independently assessed each article: two used the methodologic quality instrument, and two used the clinical relevance instrument. We derived methodologic quality and clinical relevance scores for each article by using a previously described scoring system [7]. Each score was between 0 (lowest quality) and 1 (highest quality) and was the average of the scores of the two reviewers. Two clinical pharmacologists with extensive research experience in the health sciences did the methodologic quality assessment. For the clinical relevance instrument, three pairs of reviewers with clinical experience in general internal medicine and research experience in the health sciences each assessed one third of the articles. Each pair of reviewers reviewed the articles in the same randomized order. For both instruments, reviewers were trained as described previously [7]. For the quality assessments, each reviewer worked independently, was blinded as to whether an article had been published in a symposium, and was given photocopies of articles from which author names, institution names, journal names, dates, and all other reference information had been obliterated. Reviewers were unaware of our hypotheses and the purpose of their reviewing, and they were paid for their work. None of the reviewers were known to us or knew of our previous work before the study.
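A calculation of the kind described under Sample Size Estimates can be approximated in a few lines of Python. This is only a minimal sketch, not the authors' code: it assumes statsmodels' ANOVA power solver and a crude conversion of the 0.10 minimum detectable difference (SD 0.18) into Cohen's f, so the number it returns need not match the 108 articles reported above.

```python
# Minimal sketch (not from the paper) of a three-group ANOVA sample-size
# calculation. The conversion of the 0.10 minimum detectable difference
# (SD 0.18) into Cohen's f is an assumption made for illustration.
from statsmodels.stats.power import FTestAnovaPower

min_diff = 0.10          # smallest quality-score difference worth detecting
sd = 0.18                # SD of quality scores, taken from prior work
cohens_f = (min_diff / sd) / 2   # rough f if two extreme groups differ by min_diff

n_total = FTestAnovaPower().solve_power(
    effect_size=cohens_f,   # Cohen's f
    k_groups=3,             # three sponsorship categories
    alpha=0.05,
    power=0.80,
)
print(f"approximate total articles needed: {n_total:.0f}")
```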
We assessed the inter-rater reliability of quality scores by using the Kendall coefficient of concordance (W) with adjustment for tied ranks [11] and the intraclass correlation (R; treating both reviewers and articles as random effects [12]). Inter-rater reliability of quality scores was high (for methodologic quality scores: W = 0.85, R = 0.74 [95% CI, 0.67 to 0.80]; for clinical relevance scores: W = 0.77, R = 0.56 [CI, 0.44 to 0.65]).

Drug Company Support and Study Outcome

For each article, one of us determined whether a drug company had supported the research and whether the article 1) reported an outcome favorable to the drug of interest, 2) did not report an outcome favorable to the drug of interest, or 3) did not test a hypothesis. The drug of interest (as defined from the perspective of the authors, according to Gotzsche [13]) was the newest drug if two or more drugs were studied. We defined research as having had drug company support if the article that reported the research acknowledged either that a drug company had provided funding or drugs or that any of the authors were employed by a drug company. We determined drug company support solely on the basis of information in the paper. If an article did not test a hypothesis, it was excluded from this analysis. We classified the remaining articles as favorable or unfavorable using Gotzsche's definitions [13]. An article was favorable if the drug that seemed to be of primary interest to the authors had the same effect as the comparison drug or drugs but with less pronounced side effects, had a better effect without more pronounced side effects, or was preferred more often by patients when the effect and side-effect evaluations were combined. All other articles were considered not favorable. The conclusions of the authors were taken at face value, even if they conflicted with the study results. To test inter-rater reliability, the other author independently assessed a subset of the articles (n = 90). Agreement in classifying articles as favorable or not favorable was 85%.

Statistical Analyses

Because methodologic quality and relevance scores were distributed normally (Shapiro-Wilk test), we analyzed differences between groups (type of sponsorship of publication) by using parametric one-way analysis of variance followed by the Tukey test for multiple comparisons or two-way analysis of variance (total error rate, 0.05). We compared matched groups (symposium articles and peer-reviewed parent journal articles) by using the paired t-test (two-tailed α = 0.05). To analyze categorical data on the outcome of studies, we tested for differences in proportions between groups by using the chi-square statistic. For tests of significance, we used an α value of 0.05. All hypothesis tests were two-sided.

Results

Presence of a Methods Section

To obtain 127 original clinical drug articles for quality assessment, we had to select 213 symposia containing a total of 5041 articles. The proportions of articles that reported original data but contained no methods sections were 4% overall (195 of 5041), 10% (108 of 1064) in the symposia sponsored by single drug companies,
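The two reliability statistics quoted above, Kendall's W with a tie adjustment and an intraclass correlation treating reviewers and articles as random effects, can be computed on any raters-by-items score matrix. The sketch below is illustrative only: it assumes the pingouin package for the ICC, uses the standard tie-corrected formula for W, and the reviewer scores are invented, not data from the study.

```python
# Illustrative sketch of the two inter-rater reliability statistics named
# above; the scores below are invented, not data from the study.
import numpy as np
import pandas as pd
import pingouin as pg
from scipy.stats import rankdata

def kendalls_w(scores: np.ndarray) -> float:
    """Kendall's coefficient of concordance with correction for tied ranks.

    scores: array of shape (n_raters, n_items).
    """
    scores = np.asarray(scores, dtype=float)
    m, n = scores.shape
    ranks = np.apply_along_axis(rankdata, 1, scores)  # rank items within each rater
    rank_sums = ranks.sum(axis=0)
    s = ((rank_sums - rank_sums.mean()) ** 2).sum()
    ties = 0.0
    for row in ranks:                                 # tie correction term
        _, counts = np.unique(row, return_counts=True)
        ties += (counts ** 3 - counts).sum()
    return 12 * s / (m ** 2 * (n ** 3 - n) - m * ties)

# Two reviewers scoring five articles on the 0-1 quality scale.
scores = np.array([[0.55, 0.30, 0.80, 0.45, 0.60],
                   [0.60, 0.35, 0.70, 0.65, 0.50]])
print("Kendall's W:", round(kendalls_w(scores), 2))

# An ICC with both reviewers and articles treated as random effects
# corresponds to ICC2 ("Single random raters") in pingouin's output.
long = pd.DataFrame({
    "article": np.repeat(np.arange(5), 2),
    "reviewer": ["A", "B"] * 5,
    "score": scores.T.ravel(),
})
icc = pg.intraclass_corr(data=long, targets="article",
                         raters="reviewer", ratings="score")
print(icc.set_index("Type").loc["ICC2", ["ICC", "CI95%"]])
```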


PLOS Biology | 2008

Research Ethics Recommendations for Whole-Genome Research: Consensus Statement

Timothy Caulfield; Amy L. McGuire; Mildred K. Cho; Janet A. Buchanan; Michael M. Burgess; Ursula Danilczyk; Christina M. Diaz; Kelly Fryer-Edwards; Shane K. Green; Marc A. Hodosh; Eric T. Juengst; Jane Kaye; Laurence H. Kedes; Bartha Maria Knoppers; Trudo Lemmens; Eric M. Meslin; Juli Murphy; Robert L. Nussbaum; Margaret Otlowski; Daryl Pullman; Peter N. Ray; Jeremy Sugarman; Michael Timmons

Interest in whole-genome research has grown substantially over the past few months. This article explores the challenging ethics issues associated with this work.


Nature | 2002

Diagnostic testing fails the test

Jon F. Merz; Antigone G. Kriss; Debra G. B. Leonard; Mildred K. Cho

The pitfalls of patents are illustrated by the case of haemochromatosis.


Prenatal Diagnosis | 2013

Commercial landscape of noninvasive prenatal testing in the United States

Ashwin Agarwal; Lauren C. Sayres; Mildred K. Cho; Robert Cook-Deegan; Subhashini Chandrasekharan

Cell‐free fetal DNA‐based noninvasive prenatal testing (NIPT) could significantly change the paradigm of prenatal testing and screening. Intellectual property (IP) and commercialization promise to be important components of the emerging debate about clinical implementation of these technologies. We have assembled information about types of testing, prices, turnaround times, and reimbursement of recently launched commercial tests in the United States from the trade press, news articles, and scientific, legal, and business publications. We also describe the patenting and licensing landscape of technologies underlying these tests and ongoing patent litigation in the United States. Finally, we discuss how IP issues may affect clinical translation of NIPT and their potential implications for stakeholders. Fetal medicine professionals (clinicians and researchers), genetic counselors, insurers, regulators, test developers, and patients may be able to use this information to make informed decisions about clinical implementation of current and emerging noninvasive prenatal tests.


Neurology | 2008

Practical approaches to incidental findings in brain imaging research.

Judy Illes; Matthew P. Kirschen; Emmeline Edwards; Peter A. Bandettini; Mildred K. Cho; Paul J. Ford; Gary H. Glover; Jennifer Kulynych; Ruth Macklin; Daniel B. Michael; Susan M. Wolf; Thomas J. Grabowski; B. Seto

A decade of empirical work in brain imaging, genomics, and other areas of research has yielded new knowledge about the frequency of incidental findings, investigator responsibility, and risks and benefits of disclosure. Straightforward guidance for handling such findings of possible clinical significance, however, has been elusive. In early work focusing on imaging studies of the brain, we suggested that investigators and institutional review boards must anticipate and articulate plans for handling incidental findings. Here we provide a detailed analysis of different approaches to the problem and evaluate their merits in the context of the goals and setting of the research and the involvement of neurologists, radiologists, and other physicians. Protecting subject welfare and privacy, as well as ensuring scientific integrity, are the highest priorities in making choices about how to handle incidental findings. Forethought and clarity will enable these goals without overburdening research conducted within or outside the medical setting.


American Journal of Bioethics | 2008

Strangers at the benchside: research ethics consultation.

Mildred K. Cho; Sara L. Tobin; Henry T. Greely; Jennifer B. McCormick; Angie Boyce; David Magnus

Institutional ethics consultation services for biomedical scientists have begun to proliferate, especially for clinical researchers. We discuss several models of ethics consultation and describe a team-based approach used at Stanford University in the context of these models. As research ethics consultation services expand, there are many unresolved questions that need to be addressed, including what the scope, composition, and purpose of such services should be, whether core competencies for consultants can and should be defined, and how conflicts of interest should be mitigated. We make preliminary recommendations for the structure and process of research ethics consultation, based on our initial experiences in a pilot program.


Prenatal Diagnosis | 2011

Cell‐free fetal DNA testing: a pilot study of obstetric healthcare provider attitudes toward clinical implementation

Lauren C. Sayres; Megan Allyse; Mary E. Norton; Mildred K. Cho

To provide a preliminary assessment of obstetric healthcare provider opinions surrounding implementation of cell‐free fetal DNA testing.

Collaboration


Mildred K. Cho's top co-authors and their affiliations.

Top Co-Authors

Jon F. Merz
University of Pennsylvania

Pamela Sankar
University of Pennsylvania

Alexander Morgan Capron
University of Southern California