Publication


Featured research published by William K. Barnett.


Alcohol | 2010

Collaborative initiative on fetal alcohol spectrum disorders: methodology of clinical projects

Sarah N. Mattson; Tatiana Foroud; Elizabeth R. Sowell; Kenneth Lyons Jones; Claire D. Coles; Åse Fagerlund; Ilona Autti-Rämö; Philip A. May; Colleen M. Adnams; Valentina Konovalova; Leah Wetherill; Andrew Arenson; William K. Barnett; Edward P. Riley

The Collaborative Initiative on Fetal Alcohol Spectrum Disorders (CIFASD) was created in 2003 to further understanding of fetal alcohol spectrum disorders. Clinical and basic science projects collect data across multiple sites using standardized methodology. This article describes the methodology being used by the clinical projects that pertain to assessment of children and adolescents. Domains being addressed are dysmorphology, neurobehavior, 3-D facial imaging, and brain imaging.


Journal of the American Medical Informatics Association | 2011

Direct2Experts: A pilot national network to demonstrate interoperability among research-networking platforms

Griffin M. Weber; William K. Barnett; Michael Conlon; David Eichmann; Warren A. Kibbe; Holly J. Falk-Krzesinski; Michael Halaas; Layne M. Johnson; Eric Meeks; Donald M. Mitchell; Titus Schleyer; Sarah Stallings; Michael Warden; Maninder Kahlon

Research-networking tools use data-mining and social networking to enable expertise discovery, matchmaking and collaboration, which are important facets of team science and translational research. Several commercial and academic platforms have been built, and many institutions have deployed these products to help their investigators find local collaborators. Recent studies, though, have shown the growing importance of multiuniversity teams in science. Unfortunately, the lack of a standard data-exchange model and resistance of universities to share information about their faculty have presented barriers to forming an institutionally supported national network. This case report describes an initiative, which, in only 6 months, achieved interoperability among seven major research-networking products at 28 universities by taking an approach that focused on addressing institutional concerns and encouraging their participation. With this necessary groundwork in place, the second phase of this effort can begin, which will expand the network's functionality and focus on the end users.


Alcohol | 2010

Implementation of a shared data repository and common data dictionary for fetal alcohol spectrum disorders research.

Andrew Arenson; Ludmila N. Bakhireva; Christina D. Chambers; Christina Deximo; Tatiana Foroud; Joseph L. Jacobson; Sandra W. Jacobson; Kenneth Lyons Jones; Sarah N. Mattson; Philip A. May; Elizabeth S. Moore; Kimberly Ogle; Edward P. Riley; Luther K. Robinson; Jeffrey Rogers; Ann P. Streissguth; Michel Tavares; Joseph Urbanski; Yelena Yezerets; Radha Surya; Craig A. Stewart; William K. Barnett

Many previous attempts by fetal alcohol spectrum disorders researchers to compare data across multiple prospective and retrospective human studies have failed because of both structural differences in the collected data and difficulty in coming to agreement on the precise meaning of the terminology used to describe the collected data. Although some groups of researchers have an established track record of successfully integrating data, attempts to integrate data more broadly among different groups of researchers have generally faltered. Lack of tools to help researchers share and integrate data has also hampered data analysis. This situation has delayed improvements in diagnosis, intervention, and treatment before and after birth. We worked with various researchers and research programs in the Collaborative Initiative on Fetal Alcohol Spectrum Disorders (CI-FASD) to develop a set of common data dictionaries describing the data to be collected, including definitions of terms and specification of allowable values. The resulting data dictionaries were the basis for creating a central data repository (CI-FASD Central Repository) and software tools to input and query data. Data entry restrictions ensure that only data that conform to the data dictionaries reach the CI-FASD Central Repository. The result is an effective system for centralized and unified management of the data collected and analyzed by the initiative, including a secure, long-term data repository. CI-FASD researchers are able to integrate and analyze data of different types, collected from multiple populations, using multiple methods, and the data are retained for future reuse in a secure, robust repository.
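The entry restriction described in this abstract, admitting only records that conform to a shared data dictionary, can be sketched in a few lines. The field names and allowable values below are purely illustrative assumptions, not the actual CI-FASD schema:

```python
# Minimal sketch of dictionary-driven validation, in the spirit of the
# CI-FASD Central Repository's entry restrictions. Field names and
# allowable values here are hypothetical, not the real CI-FASD schema.

DATA_DICTIONARY = {
    "site_id":   {"type": str,   "allowed": {"SD", "ATL", "FIN", "SA"}},
    "age_years": {"type": float, "min": 0.0, "max": 18.0},
    "diagnosis": {"type": str,   "allowed": {"FAS", "pFAS", "ARND", "none"}},
}

def validate(record: dict) -> list:
    """Return a list of violations; an empty list means the record conforms."""
    errors = []
    for field, spec in DATA_DICTIONARY.items():
        if field not in record:
            errors.append(f"missing field: {field}")
            continue
        value = record[field]
        if not isinstance(value, spec["type"]):
            errors.append(f"{field}: expected {spec['type'].__name__}")
            continue
        if "allowed" in spec and value not in spec["allowed"]:
            errors.append(f"{field}: {value!r} not an allowable value")
        if "min" in spec and value < spec["min"]:
            errors.append(f"{field}: below minimum")
        if "max" in spec and value > spec["max"]:
            errors.append(f"{field}: above maximum")
    return errors

# Only conforming records would reach the central repository.
ok = validate({"site_id": "SD", "age_years": 9.5, "diagnosis": "pFAS"})
bad = validate({"site_id": "XX", "age_years": 9.5, "diagnosis": "pFAS"})
```

Centralizing the rules in one dictionary is what makes the approach work across sites: every clinical project validates against the same definitions, so structural differences cannot re-enter the repository.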


Journal of the American Medical Informatics Association | 2014

Leveraging the national cyberinfrastructure for biomedical research.

Richard D. LeDuc; Matthew W. Vaughn; John M. Fonner; Michael Sullivan; James G. Williams; Philip D. Blood; James Taylor; William K. Barnett

In the USA, the national cyberinfrastructure refers to a system of research supercomputers and other IT facilities and the high-speed networks that connect them. These resources have been heavily leveraged by scientists in disciplines such as high energy physics, astronomy, and climatology, but until recently they have been little used by biomedical researchers. We suggest that many of the ‘Big Data’ challenges facing the medical informatics community can be efficiently handled using national-scale cyberinfrastructure. Resources such as the Extreme Science and Engineering Discovery Environment, the Open Science Grid, and Internet2 provide economical and proven infrastructures for Big Data challenges, but these resources can be difficult to approach. Specialized web portals, support centers, and virtual organizations can be constructed on these resources to meet defined computational challenges, specifically for genomics. We provide examples of how this has been done in basic biology as an illustration for the biomedical informatics community.


Human Genomics | 2012

Collaborative software for traditional and translational research

Ari E. Berman; William K. Barnett; Sean D. Mooney

Biomedical research has entered a period of renewed vigor with the introduction and rapid development of genomic technologies and next-generation sequencing methods. This research paradigm produces extremely large datasets that are both difficult to store and challenging to mine for relevant data. Additionally, the thorough exploration of such datasets requires more resources, personnel, and multidisciplinary expertise to properly analyze and interpret the data. As a result, modern biomedical research practices are increasingly designed to include multi-laboratory collaborations that effectively distribute the scientific workload and expand the pool of expertise within a project. The scope of biomedical research is further complicated by increased efforts in translational research, which mandates the translation of basic laboratory research results into the human medical application space, adding to the complexity of potential collaborations. This increase in multidisciplinary, multi-laboratory, and biomedical translational research identifies a specific need for formalized collaboration practices and software applications that support such efforts. Here, we describe formal technological requirements for such efforts and we review several software solutions that can effectively improve the organization, communication, and formalization of collaborations in biomedical research today.


Journal of the American Medical Informatics Association | 2016

The Medical Science DMZ

Sean Peisert; William K. Barnett; Eli Dart; James Cuff; Robert L. Grossman; Edward Balas; Ari E. Berman; Anurag Shankar; Brian Tierney

Objective We describe use cases and an institutional reference architecture for maintaining high-capacity, data-intensive network flows (e.g., 10, 40, 100 Gbps+) in a scientific, medical context while still adhering to security and privacy laws and regulations. Materials and Methods High-end networking, packet filter firewalls, network intrusion detection systems. Results We describe a “Medical Science DMZ” concept as an option for secure, high-volume transport of large, sensitive data sets between research institutions over national research networks. Discussion The exponentially increasing amounts of “omics” data, the rapid increase of high-quality imaging, and other rapidly growing clinical data sets have resulted in the rise of biomedical research “big data.” The storage, analysis, and network resources required to process these data and integrate them into patient diagnoses and treatments have grown to scales that strain the capabilities of academic health centers. Some data are not generated locally and cannot be sustained locally, and shared data repositories such as those provided by the National Library of Medicine, the National Cancer Institute, and international partners such as the European Bioinformatics Institute are rapidly growing. The ability to store and compute using these data must therefore be addressed by a combination of local, national, and industry resources that exchange large data sets. Maintaining data-intensive flows that comply with HIPAA and other regulations presents a new challenge for biomedical research. Recognizing this, we describe a strategy that marries performance and security by borrowing from and redefining the concept of a “Science DMZ”—a framework that is used in physical sciences and engineering research to manage high-capacity data flows. 
Conclusion By implementing a Medical Science DMZ architecture, biomedical researchers can leverage the scale provided by high-performance computer and cloud storage facilities and national high-speed research networks while preserving privacy and meeting regulatory requirements.


Journal of the American Medical Informatics Association | 2018

The medical science DMZ: a network design pattern for data-intensive medical science

Sean Peisert; Eli Dart; William K. Barnett; Edward Balas; James Cuff; Robert L. Grossman; Ari E. Berman; Anurag Shankar; Brian Tierney

Abstract Objective We describe a detailed solution for maintaining high-capacity, data-intensive network flows (eg, 10, 40, 100 Gbps+) in a scientific, medical context while still adhering to security and privacy laws and regulations. Materials and Methods High-end networking, packet-filter firewalls, network intrusion-detection systems. Results We describe a “Medical Science DMZ” concept as an option for secure, high-volume transport of large, sensitive datasets between research institutions over national research networks, and give 3 detailed descriptions of implemented Medical Science DMZs. Discussion The exponentially increasing amounts of “omics” data, high-quality imaging, and other rapidly growing clinical datasets have resulted in the rise of biomedical research “Big Data.” The storage, analysis, and network resources required to process these data and integrate them into patient diagnoses and treatments have grown to scales that strain the capabilities of academic health centers. Some data are not generated locally and cannot be sustained locally, and shared data repositories such as those provided by the National Library of Medicine, the National Cancer Institute, and international partners such as the European Bioinformatics Institute are rapidly growing. The ability to store and compute using these data must therefore be addressed by a combination of local, national, and industry resources that exchange large datasets. Maintaining data-intensive flows that comply with the Health Insurance Portability and Accountability Act (HIPAA) and other regulations presents a new challenge for biomedical research. We describe a strategy that marries performance and security by borrowing from and redefining the concept of a Science DMZ, a framework that is used in physical sciences and engineering research to manage high-capacity data flows. 
Conclusion By implementing a Medical Science DMZ architecture, biomedical researchers can leverage the scale provided by high-performance computer and cloud storage facilities and national high-speed research networks while preserving privacy and meeting regulatory requirements.
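The core of the Science DMZ pattern these two papers adapt is that bulk science flows bypass the general-purpose enterprise firewall and instead pass through a short, explicit access-control list at dedicated data transfer nodes. A minimal sketch of that default-deny filtering idea, with collaborator networks and the service port invented for illustration:

```python
# Illustrative sketch of Science DMZ-style filtering: rather than deep
# inspection by an enterprise firewall, the DMZ applies a narrow, explicit
# access-control list. The networks (RFC 5737 documentation ranges) and
# the destination port below are assumptions for illustration only.

from ipaddress import ip_address, ip_network

# Narrow allow-list: only known collaborator networks, only the
# data-transfer service port; everything else is denied by default.
ACL = [
    {"src": ip_network("192.0.2.0/24"),    "dst_port": 2811, "action": "allow"},
    {"src": ip_network("198.51.100.0/24"), "dst_port": 2811, "action": "allow"},
]

def filter_packet(src_ip: str, dst_port: int) -> str:
    """Return 'allow' or 'deny' for a flow; unmatched flows are denied."""
    src = ip_address(src_ip)
    for rule in ACL:
        if src in rule["src"] and dst_port == rule["dst_port"]:
            return rule["action"]
    return "deny"
```

Because the rule set is tiny and stateless, it can keep up with multi-gigabit flows that would overwhelm a stateful, general-purpose firewall, which is the performance half of the performance-and-security marriage the abstract describes.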


Standards in Genomic Sciences | 2017

Report from the 2016 CrossConnects workshop: improving data mobility & management for bioinformatics

Kathryn Petersen Mace; Dan Jacobson; Brooklin Gore; Lauren Rotman; Jennifer Schopf; Mary Hester; Predrag Radulovic; William K. Barnett

Due to significant declines in the price of genome sequencing technology, the bioinformatics sciences are experiencing a massive upswing in data generation, resulting in an increasing need for data distribution and access. The sheer number of biological areas of study, many of which benefit from the scientific breakthroughs of one another, is adding to the increase in shared data usage. The need for effective data management, analysis, and access is becoming more critical. While there are commonalities facing both precision medicine and metagenomics, each area has its own unique challenges and needs. A workshop was held in April 2016 at Lawrence Berkeley National Laboratory that brought together scientists from both fields, along with experts in computing and networking. Presenters and attendees discussed current research and pressing data issues facing the bioinformatics field today and in the near future.


Proceedings of the 2015 XSEDE Conference on Scientific Advancements Enabled by Enhanced Cyberinfrastructure | 2015

Cyberinfrastructure resources enabling creation of the loblolly pine reference transcriptome

Le-Shin Wu; Carrie L. Ganote; Thomas G. Doak; William K. Barnett; Keithanne Mockaitis; Craig A. Stewart

Today's genomics technologies generate more sequence data than ever before, and at substantially lower costs, serving researchers across biological disciplines in transformative ways. Building transcriptome assemblies from RNA sequencing reads is one application of next-generation sequencing (NGS) that has held a central role in biological discovery in both model and non-model organisms, with and without whole genome sequence references. A major limitation in effective building of transcriptome references is no longer the sequencing data generation itself, but the computing infrastructure and expertise needed to assemble, analyze and manage the data. Here we describe a currently available resource dedicated to achieving such goals, and its use for extensive RNA assembly of up to 1.3 billion reads representing the massive transcriptome of loblolly pine, using four major assembly software installations. The Mason cluster, an XSEDE second tier resource at Indiana University, provides the necessary fast CPU cycles, large memory, and high I/O throughput for conducting large-scale genomics research. The National Center for Genome Analysis Support, or NCGAS, provides technical support in using HPC systems, bioinformatic support for determining the appropriate method to analyze a given dataset, and practical assistance in running computations. We demonstrate that a sufficient supercomputing resource and good workflow design are essential to large eukaryotic genomics and transcriptomics projects such as the complex transcriptome of loblolly pine, whose gene expression data inform annotation and functional interpretation of the largest genome sequence reference to date.


Proceedings of the 1st Workshop on The Science of Cyberinfrastructure | 2015

Sustained Software for Cyberinfrastructure: Analyses of Successful Efforts with a Focus on NSF-funded Software

Craig A. Stewart; William K. Barnett; Eric A. Wernert; Julie Wernert; Von Welch; Richard Knepper

Reliable software that provides needed functionality is clearly essential for comprehensive, balanced, and flexible distributed cyberinfrastructure (CI). Effective distributed CI, in turn, supports science and engineering applications. The purpose of this study was to understand what factors lead to software projects being well sustained over the long run, focusing on software created with funding from the US National Science Foundation (NSF) and/or used by researchers funded by the NSF. We surveyed NSF-funded researchers and performed in-depth studies of software projects that have been sustained over many years. Successful projects generally used open-source software licenses and employed good software engineering and test practices. However, many projects that have not been well sustained over time also met these criteria. The features that stood out about successful projects included deeply committed leadership and some sort of user forum or conference held at least annually. In some cases, software project leaders have employed multiple financial strategies over the course of a decades-old software project. Such well-sustained software is used in major distributed CI projects that support thousands of users, and this software is critical to the operation of major distributed CI facilities in the US. The findings of our study identify some characteristics of software that is relevant to the NSF-supported research community and that has been sustained over many years.

Collaboration

William K. Barnett's top co-authors:

- Craig A. Stewart (Indiana University Bloomington)
- Thomas G. Doak (Indiana University Bloomington)
- Von Welch (Indiana University Bloomington)
- Le-Shin Wu (Indiana University Bloomington)
- Matthew R. Link (Indiana University Bloomington)
- Anurag Shankar (Indiana University Bloomington)
- Ari E. Berman (Buck Institute for Research on Aging)