Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Thea Norman is active.

Publication


Featured research published by Thea Norman.


Nature Biotechnology | 2003

Integrating transcriptional and metabolite profiles to direct the engineering of lovastatin-producing fungal strains.

Manor Askenazi; Edward M. Driggers; Douglas Holtzman; Thea Norman; Sara Iverson; Daniel P. Zimmer; Mary-Ellen Boers; Paul Blomquist; Eduardo J. Martinez; Alex W. Monreal; Toby P. Feibelman; Maria Mayorga; Mary Maxon; Kristie Sykes; Jenny Tobin; Etchell A. Cordero; Sofie R. Salama; Joshua Trueheart; John C. Royer; Kevin T. Madden

We describe a method to decipher the complex inter-relationships between metabolite production trends and gene expression events, and show how information gleaned from such studies can be applied to yield improved production strains. Genomic fragment microarrays were constructed for the Aspergillus terreus genome, and transcriptional profiles were generated from strains engineered to produce varying amounts of the medically significant natural product lovastatin. Metabolite detection methods were employed to quantify the polyketide-derived secondary metabolites lovastatin and (+)-geodin in broths from fermentations of the same strains. Association analysis of the resulting transcriptional and metabolic data sets provides mechanistic insight into the genetic and physiological control of lovastatin and (+)-geodin biosynthesis, and identifies novel components involved in the production of (+)-geodin, as well as other secondary metabolites. Furthermore, this analysis identifies specific tools, including promoters for reporter-based selection systems, that we employed to improve lovastatin production by A. terreus.
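One simple form of the association analysis described above can be sketched as correlating each gene's expression across strains with the measured lovastatin titer and ranking genes by the strength of the correlation. This is a hedged illustration only, not the paper's actual pipeline; the gene names and data below are invented.

```python
# Illustrative association analysis: rank genes by the absolute Pearson
# correlation between their expression profile and lovastatin titer
# across a panel of strains. Data and gene names are hypothetical.

import math

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Expression of two hypothetical genes across four strains, plus the
# lovastatin titer measured in each strain's fermentation broth.
expression = {
    "geneA": [1.0, 2.1, 2.9, 4.2],   # tracks titer closely
    "geneB": [3.0, 1.0, 2.5, 1.5],   # weak association
}
lovastatin_titer = [10.0, 21.0, 30.0, 41.0]

ranked = sorted(expression,
                key=lambda g: abs(pearson(expression[g], lovastatin_titer)),
                reverse=True)
print(ranked[0])  # the gene most strongly associated with titer
```

Genes surfacing at the top of such a ranking would be candidates for follow-up, e.g. as promoters for the reporter-based selection systems the abstract mentions.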


Nature Methods | 2015

Combining tumor genome simulation with crowdsourcing to benchmark somatic single-nucleotide-variant detection

Adam D. Ewing; Kathleen E. Houlahan; Yin Hu; Kyle Ellrott; Cristian Caloian; Takafumi N. Yamaguchi; J Christopher Bare; Christine P'ng; Daryl Waggott; Veronica Y. Sabelnykova; Michael R. Kellen; Thea Norman; David Haussler; Stephen H. Friend; Gustavo Stolovitzky; Adam A. Margolin; Joshua M. Stuart; Paul C. Boutros

The detection of somatic mutations from cancer genome sequences is key to understanding the genetic basis of disease progression, patient survival and response to therapy. Benchmarking is needed for tool assessment and improvement but is complicated by a lack of gold standards, by extensive resource requirements and by difficulties in sharing personal genomic information. To resolve these issues, we launched the ICGC-TCGA DREAM Somatic Mutation Calling Challenge, a crowdsourced benchmark of somatic mutation detection algorithms. Here we report the BAMSurgeon tool for simulating cancer genomes and the results of 248 analyses of three in silico tumors created with it. Different algorithms exhibit characteristic error profiles, and, intriguingly, false positives show a trinucleotide profile very similar to one found in human tumors. Although the three simulated tumors differ in sequence contamination (deviation from normal cell sequence) and in subclonality, an ensemble of pipelines outperforms the best individual pipeline in all cases. BAMSurgeon is available at https://github.com/adamewing/bamsurgeon/.
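The ensemble result above can be illustrated with one common aggregation scheme: keep a variant only if a majority of pipelines report it. This is an assumption about a typical voting ensemble, not the paper's actual method; caller names and variants are invented.

```python
# Illustrative majority-vote ensemble over somatic variant call sets.
# Each pipeline's output is modeled as a set of (chrom, pos, ref, alt)
# tuples; a variant survives if enough pipelines agree on it.

from collections import Counter

def majority_vote(call_sets, min_votes=None):
    """call_sets: list of sets of variant tuples, one set per pipeline.
    min_votes: votes needed to keep a variant (default: strict majority)."""
    if min_votes is None:
        min_votes = len(call_sets) // 2 + 1
    counts = Counter(v for calls in call_sets for v in calls)
    return {v for v, c in counts.items() if c >= min_votes}

caller_a = {("chr1", 100, "A", "T"), ("chr2", 200, "G", "C")}
caller_b = {("chr1", 100, "A", "T")}
caller_c = {("chr1", 100, "A", "T"), ("chr3", 300, "C", "G")}

consensus = majority_vote([caller_a, caller_b, caller_c])
print(consensus)  # only the chr1 variant has the required 2 of 3 votes
```

Voting suppresses each caller's idiosyncratic false positives, which is one intuition for why ensembles outperformed the best individual pipeline in the Challenge.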


Science Translational Medicine | 2013

Systematic Analysis of Challenge-Driven Improvements in Molecular Prognostic Models for Breast Cancer

Adam A. Margolin; Erhan Bilal; Erich Huang; Thea Norman; Lars Ottestad; Brigham Mecham; Ben Sauerwine; Michael R. Kellen; Lara M. Mangravite; Matthew D. Furia; Hans Kristian Moen Vollan; Oscar M. Rueda; Justin Guinney; Nicole A. Deflaux; Bruce Hoff; Xavier Schildwachter; Hege G. Russnes; Daehoon Park; Veronica O. Vang; Tyler Pirtle; Lamia Youseff; Craig Citro; Christina Curtis; Vessela N. Kristensen; Joseph L. Hellerstein; Stephen H. Friend; Gustavo Stolovitzky; Samuel Aparicio; Carlos Caldas; Anne Lise Børresen-Dale

An open challenge to model breast cancer prognosis revealed that collaboration and transparency enhanced the power of prognostic models. DREAMing of Biomedicine’s Future Although they no longer live in the lab, scientific editors still enjoy doing experiments. The simultaneous publication of two unusual papers offered Science Translational Medicine’s editors the chance to conduct an investigation into peer-review processes for competition-based crowdsourcing studies designed to address problems in biomedicine. In a Report by Margolin et al. (which was peer-reviewed in the traditional way), organizers of the Sage Bionetworks/DREAM Breast Cancer Prognosis Challenge (BCC) describe the contest’s conception, execution, and insights derived from its outcome. In the companion Research Article, Cheng et al. outline the development of the prognostic computational model that won the Challenge. In this experiment in scientific publishing, the rigor of the Challenge design and scoring process formed the basis for a new style of publication peer review. DREAM—Dialogue on Reverse Engineering Assessment and Methods—conducts a variety of computational Challenges with the goal of catalyzing the “interaction between theory and experiment, specifically in the area of cellular network inference and quantitative model building in systems biology.” Previous Challenges involved, for example, modeling of protein-protein interactions for binding domains and peptides and the specificity of transcription factor binding. In the BCC—which was a step in the translational direction—participants competed to create an algorithm that could predict, more accurately than current benchmarks, the prognosis of breast cancer patients from clinical information (age, tumor size, histological grade), genome-scale tumor mRNA expression data, and DNA copy number data.
Participants were given Web access to such data for 1981 women diagnosed with breast cancer and used it to train computational models that were then submitted to a common, open-access computational platform as re-runnable source code. The predictive value of each model was assessed in real time by calculating a concordance index (CI) of predicted death risks compared with overall survival in a held-out data set, and CIs were posted on a public leaderboard. The winner of the Challenge was ultimately determined when a select group of top models was validated in a new breast cancer data set. The winning model, described by Cheng et al., was based on sets of genes (signatures)—called attractor metagenes—that the same research group had previously shown to be associated, in various ways, with multiple cancer types. Starting with these gene sets and some other clinical and molecular features, the team modeled various feature combinations, selecting ones that improved performance of their prognostic model until they ultimately fashioned the winning algorithm. Before the BCC was initiated, Challenge organizers approached Science Translational Medicine about the possibility of publishing a Research Article that described the winning model. The Challenge prize would be a scholarly publication—a form of “academic currency.” The editors pondered whether winning the Challenge, with its built-in transparency and check on model reproducibility, would be sufficient evidence in support of the model’s validity to substitute for traditional peer review. Because the specific conditions of a Challenge are critical in determining the meaningfulness of the outcome, the editors felt it was not. Thus, they arranged for editor-selected peer reviewers to be embedded within the Challenge process as members of the organizing team—a so-called Challenge-assisted review.
The editors also helped to develop criteria for determining the winning model, and if the criteria were not met, there would have been no winner—and no publication. Last, the manuscript was subjected to advisory peer review after it was submitted to the journal. So what new knowledge was gained about reviewing an article in which the result is an active piece of software? Reviewing such a model required that referees have access to the data and platform used for the Challenge and have the ability to re-run each participant’s code; in the context of the BCC, this requirement was easily achievable, because Challenge-partner Sage Bionetworks had created a platform (Synapse) with this goal in mind. In fact, both the training and validation data sets for the BCC are available to readers via links into Synapse (for a six-month period). In general, this requirement should not be an obstacle, as there are code-hosting sites such as GitHub and TopCoder.com that can accommodate data sharing. Mechanisms for confidentiality would need to be built into any computational platform to be used for peer review. Finally, because different conventions are used in divergent scientific fields, communicating the science to an interdisciplinary audience is not a trivial endeavor. The architecture of the Challenge itself is critical in determining the real-world importance of the result. The question to be investigated must be framed so as to capture a significant outcome. In the BCC, participants’ models had to score better than a set of 60 different prognostic models developed by a team of expert programmers during a Challenge precompetition as well as a previously described first-generation 70-gene risk predictor. Thus, the result may or may not be superior to existing gene expression profiling tests used in clinical practice. This remains to be tested. It also remains to be seen whether prize-based crowdsourcing contests can make varied and practical contributions in the clinic.
Indeed, DREAM and Sage Bionetworks have immediate plans to collaborate on new clinically relevant Challenges. But there is no doubt that the approach has value in solving big-data problems. For example, in a recent contest, non-immunologists generated a method for annotating the complex genome sequence of the antibody repertoire when the contest organizers translated the problem into generic language. In the BCC, the Challenge winners used a mathematical approach to identify biological modules that might, with continued investigation, teach us something about cancer biology. These examples support the notion that harnessing the expertise of contestants outside of traditional biological disciplines may be a powerful way to accelerate the translation of biomedical science to the clinic. Although molecular prognostics in breast cancer are among the most successful examples of translating genomic analysis to clinical applications, optimal approaches to breast cancer clinical risk prediction remain controversial. The Sage Bionetworks–DREAM Breast Cancer Prognosis Challenge (BCC) is a crowdsourced research study for breast cancer prognostic modeling using genome-scale data. The BCC provided a community of data analysts with a common platform for data access and blinded evaluation of model accuracy in predicting breast cancer survival on the basis of gene expression data, copy number data, and clinical covariates. This approach offered the opportunity to assess whether a crowdsourced community Challenge would generate models of breast cancer prognosis commensurate with or exceeding current best-in-class approaches. The BCC comprised multiple rounds of blinded evaluations on held-out portions of data on 1981 patients, resulting in more than 1400 models submitted as open source code. Participants then retrained their models on the full data set of 1981 samples and submitted up to five models for validation in a newly generated data set of 184 breast cancer patients. 
Analysis of the BCC results suggests that the best-performing modeling strategy outperformed previously reported methods in blinded evaluations; model performance was consistent across several independent evaluations; and aggregating community-developed models achieved performance on par with the best-performing individual models.
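The concordance index used to score BCC submissions can be sketched as follows: for every comparable pair of patients, check whether the predicted risk ordering agrees with the observed survival ordering. This is an illustrative implementation of Harrell's C-index for right-censored data, not the Challenge's actual scoring code; variable names are assumptions.

```python
# Sketch of concordance-index (CI) scoring: a pair (i, j) is comparable
# when patient i's death was observed before patient j's last follow-up;
# the pair is concordant when the model assigned i the higher risk.

def concordance_index(risk, time, event):
    """risk: predicted death risk per patient (higher = worse prognosis).
    time: observed survival or censoring time.
    event: 1 if death was observed, 0 if the patient was censored."""
    concordant = 0.0
    comparable = 0
    n = len(risk)
    for i in range(n):
        for j in range(n):
            # Comparable only if the earlier time is an observed death.
            if event[i] == 1 and time[i] < time[j]:
                comparable += 1
                if risk[i] > risk[j]:
                    concordant += 1.0   # correct risk ordering
                elif risk[i] == risk[j]:
                    concordant += 0.5   # tie in predicted risk
    return concordant / comparable

# A toy model that orders all three patients correctly scores 1.0;
# a random predictor would score about 0.5.
print(concordance_index([0.9, 0.5, 0.1], [2.0, 5.0, 8.0], [1, 1, 0]))
```

Because the CI depends only on pairwise orderings, models on the leaderboard could be compared without calibrating their risk scores to a common scale.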


Nature Methods | 2016

Inferring causal molecular networks: empirical assessment through a community-based effort

Steven M. Hill; Laura M. Heiser; Thomas Cokelaer; Michael Unger; Nicole K. Nesser; Daniel E. Carlin; Yang Zhang; Artem Sokolov; Evan O. Paull; Christopher K. Wong; Kiley Graim; Adrian Bivol; Haizhou Wang; Fan Zhu; Bahman Afsari; Ludmila Danilova; Alexander V. Favorov; Wai Shing Lee; Dane Taylor; Chenyue W. Hu; Byron L. Long; David P. Noren; Alexander J Bisberg; Gordon B. Mills; Joe W. Gray; Michael R. Kellen; Thea Norman; Stephen H. Friend; Amina A. Qutub; Elana J. Fertig

It remains unclear whether causal, rather than merely correlational, relationships in molecular networks can be inferred in complex biological settings. Here we describe the HPN-DREAM network inference challenge, which focused on learning causal influences in signaling networks. We used phosphoprotein data from cancer cell lines as well as in silico data from a nonlinear dynamical model. Using the phosphoprotein data, we scored more than 2,000 networks submitted by challenge participants. The networks spanned 32 biological contexts and were scored in terms of causal validity with respect to unseen interventional data. A number of approaches were effective, and incorporating known biology was generally advantageous. Additional sub-challenges considered time-course prediction and visualization. Our results suggest that learning causal relationships may be feasible in complex settings such as disease states. Furthermore, our scoring approach provides a practical way to empirically assess inferred molecular networks in a causal sense.


Science Translational Medicine | 2011

Leveraging Crowdsourcing to Facilitate the Discovery of New Medicines

Thea Norman; C. Bountra; A. Edwards; Keith R. Yamamoto; Stephen H. Friend

Empowering the collective brain trust to perform drug discovery in an open-access environment may de-risk an inherently tricky endeavor. Visions and Revisions Like the risk-averse namesake in the poem The Love Song of J. Alfred Prufrock, the pharmaceutical industry may be poised to see “the moment of [its] greatness flicker” if it doesn’t “dare disturb the universe.” This wake-up call has spurred scientists, policy makers, foundations, and funders to devise innovative models of drug discovery. The nascent public-private partnership (PPP) Arch2POCM aims to advance drug development through the validation of pioneer therapeutic targets for human diseases in an open-access environment devoid of intellectual property (IP). At Arch2POCM’s recent meeting in April 2011, participants experienced a “eureka” moment—that crowdsourcing of their IP-free findings and reagents has the potential to provide clinical information about the pioneer targets in many indications and thereby mitigate some of the risk associated with therapeutics discovery and development. Here, the authors relate how Arch2POCM hopes to harness progressive minds worldwide to reinvent the drug discovery process—and eventually transform clinical medicine. Gloomy predictions about the future of pharma have forced the industry to investigate alternative models of drug discovery. Public-private partnerships (PPPs) have the potential to revitalize the discovery and development of first-in-class therapeutics. The new PPP Arch2POCM hopes to foster biomedical innovation through precompetitive validation of pioneer therapeutic targets for human diseases. In this meeting report, we capture insights garnered from the April 2011 Arch2POCM conference.


Nature Reviews Genetics | 2016

Crowdsourcing biomedical research: leveraging communities as innovation engines

Julio Saez-Rodriguez; James C. Costello; Stephen H. Friend; Michael R. Kellen; Lara M. Mangravite; Pablo Meyer; Thea Norman; Gustavo Stolovitzky

The generation of large-scale biomedical data is creating unprecedented opportunities for basic and translational science. Typically, the data producers perform initial analyses, but it is very likely that the most informative methods may reside with other groups. Crowdsourcing the analysis of complex and massive data has emerged as a framework to find robust methodologies. When the crowdsourcing is done in the form of collaborative scientific competitions, known as Challenges, the validation of the methods is inherently addressed. Challenges also encourage open innovation, create collaborative communities to solve diverse and important biomedical problems, and foster the creation and dissemination of well-curated data repositories.


Alzheimer's & Dementia | 2016

Crowdsourced estimation of cognitive decline and resilience in Alzheimer's disease

Genevera I. Allen; Nicola Amoroso; Catalina V Anghel; Venkat K. Balagurusamy; Christopher Bare; Derek Beaton; Roberto Bellotti; David A. Bennett; Kevin L. Boehme; Paul C. Boutros; Laura Caberlotto; Cristian Caloian; Frederick Campbell; Elias Chaibub Neto; Yu Chuan Chang; Beibei Chen; Chien Yu Chen; Ting Ying Chien; Timothy W.I. Clark; Sudeshna Das; Christos Davatzikos; Jieyao Deng; Donna N. Dillenberger; Richard Dobson; Qilin Dong; Jimit Doshi; Denise Duma; Rosangela Errico; Guray Erus; Evan Everett

Identifying accurate biomarkers of cognitive decline is essential for advancing early diagnosis and prevention therapies in Alzheimer's disease. The Alzheimer's disease DREAM Challenge was designed as a computational crowdsourced project to benchmark the current state-of-the-art in predicting cognitive outcomes in Alzheimer's disease based on high-dimensional, publicly available genetic and structural imaging data. This meta-analysis failed to identify a meaningful predictor developed from either data modality, suggesting that alternate approaches should be considered for prediction of cognitive performance.


Nature Genetics | 2014

Global optimization of somatic variant identification in cancer genomes with a global community challenge

Paul C. Boutros; Adam D. Ewing; Kyle Ellrott; Thea Norman; Kristen Dang; Yin Hu; Michael R. Kellen; Christine Suver; J Christopher Bare; Lincoln Stein; Paul T. Spellman; Gustavo Stolovitzky; Stephen H. Friend; Adam A. Margolin; Joshua M. Stuart



Nature Biotechnology | 2013

Metcalfe's law and the biology information commons

Stephen H. Friend; Thea Norman

Open collaboration on biomedical discoveries requires a fundamental shift in the traditional roles and rewards for both investigators and participants in research.


Science Translational Medicine | 2011

The Precompetitive Space: Time to Move the Yardsticks

Thea Norman; A. Edwards; C. Bountra; Stephen H. Friend

A recent meeting of minds set into motion an open-access initiative designed to achieve proof of clinical mechanism for selected disease targets. Industry, government, patient advocacy groups, public funders, and academic thought leaders met in Toronto, Canada, to set into motion an initiative that addresses some of the scientific and organizational challenges of modern therapeutics discovery. What emerged from the meeting was a public-private partnership that seeks to establish proof of clinical mechanism (POCM) for selected “pioneer” disease targets using lead compounds—all accomplished in the precompetitive space. The group will reconvene in April 2011 to create a business plan that specifies the generation of two positive POCM results per year.

Collaboration


Dive into Thea Norman's collaborations.

Top Co-Authors

Mark G. Currie
Ironwood Pharmaceuticals

G. Todd Milne
Ironwood Pharmaceuticals

Tao Wang
University of Texas Southwestern Medical Center

Paul C. Boutros
Ontario Institute for Cancer Research