Christopher W. Belter
National Institutes of Health
Publications
Featured research published by Christopher W. Belter.
PLOS ONE | 2014
Christopher W. Belter
Evaluation of scientific research is becoming increasingly reliant on publication-based bibliometric indicators, which may result in the devaluation of other scientific activities, such as data curation, that do not necessarily result in the production of scientific publications. This issue may undermine the movement to openly share and cite data sets in scientific publications, because researchers are unlikely to devote the effort necessary to curate their research data if they are unlikely to receive credit for doing so. This analysis attempts to demonstrate the bibliometric impact of properly curated and openly accessible data sets by generating citation counts for three data sets archived at the National Oceanographic Data Center. My findings suggest that all three data sets are highly cited, with estimated citation counts in most cases higher than those of 99% of all journal articles published in oceanography during the same years. I also find that methods of citing and referring to these data sets in scientific publications are highly inconsistent, despite the fact that a formal citation format is suggested for each data set. These findings have important implications for developing a data citation format, encouraging researchers to properly curate their research data, and evaluating the bibliometric impact of individuals and institutions.
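As an illustration of the percentile comparison described above, here is a minimal sketch in Python of ranking a citation count against a baseline distribution of article citation counts; the function name and all counts are hypothetical, not drawn from the study.

```python
# Percentile rank of a data set's citation count against a baseline
# distribution of journal-article citation counts (hypothetical numbers).

def percentile_rank(count: int, baseline: list[int]) -> float:
    """Percentage of baseline articles cited no more often than `count`."""
    if not baseline:
        raise ValueError("baseline must be non-empty")
    below_or_equal = sum(1 for c in baseline if c <= count)
    return 100.0 * below_or_equal / len(baseline)

# Hypothetical example: a data set cited 120 times, compared against
# citation counts for articles published in the same years.
article_citations = [0, 1, 2, 3, 5, 8, 12, 20, 35, 60]
print(percentile_rank(120, article_citations))  # -> 100.0
```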
Journal of The Medical Library Association | 2015
Christopher W. Belter
Evaluating scientific research has always been difficult. The peer-review process, which has been the mainstay of science evaluation for nearly a century, takes time, expertise, and no small amount of resources to do properly. But several trends in scientific research have made this process even more challenging. The sheer number of scientific publications produced per year has been growing at an exponential rate for over fifty years [1, 2] and has shown no sign of slowing down anytime soon. These publications are also growing increasingly technical and specialized, making qualified reviewers more and more difficult to find. Finally, the glut of researchers in the biomedical pipeline, combined with the recent recession, has resulted in a larger number of researchers competing for a shrinking pool of available research funds [3]. Evaluating scientific research in this context is becoming not only increasingly difficult, but also increasingly important to ensure that the right researchers receive promotions and funding to continue their work.

In this environment, a number of review boards, institutions, and even countries are turning to bibliometrics to facilitate the review process. Bibliometrics is the quantitative analysis of publications. It essentially extracts data from publications and analyzes that data in various ways to answer questions about the research that those publications represent. It is a method of studying the producers, processes, and evolution of research using research publications as a proxy for research. As such, the field of bibliometrics encompasses a wide variety of approaches and methods, but it has become best known for its attempts to measure the impact of scientific research through the use of various bibliometric indicators like the impact factor [4] and the H-index [5].

Because these indicators are perceived to be more objective than peer review, because they can be calculated with far less time and effort than peer review, and because there is some evidence that these indicators agree with peer judgment, reviewers and policy makers are increasingly using these indicators in addition to, and in some cases in place of, peer review to assess research impact. Although the use of bibliometric indicators can provide a valuable supplement to the peer-review process, these indicators are all too often taken out of context and applied without a full understanding of the bibliometric research on which they are based. As a result, they are frequently used to measure things that they were not intended to measure or to make comparisons they are not actually capable of making. This article provides a short introduction to the basic ideas behind these indicators and discusses ways that they can be used responsibly to minimize the biases of peer review. For more extensive and technical overviews of this topic, see Haustein [6] and Mingers [7].
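To make one of the indicators mentioned above concrete, here is a minimal sketch of the H-index [5]: a publication record has index h if h of its papers have each been cited at least h times. The citation counts below are hypothetical.

```python
def h_index(citations: list[int]) -> int:
    """H-index: the largest h such that h papers have >= h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(ranked, start=1):
        if count >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical publication record: six papers with these citation counts.
print(h_index([25, 8, 5, 3, 3, 1]))  # -> 3 (three papers cited >= 3 times each)
```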
Scientometrics | 2013
Christopher W. Belter
Bibliometric analysis techniques are increasingly being used to analyze and evaluate scientific research produced by institutions and grant-funding agencies. This article uses bibliometric methods to analyze journal articles funded by NOAA's Office of Ocean Exploration and Research (OER), an extramural grant-funding agency focused on the scientific exploration of the world's oceans. OER-supported articles in this analysis were identified through grant reports, personal communication, and acknowledgement of OER support or grant numbers. The articles identified were analyzed to determine the number of publications and citations received per year, subject, and institution. The productivity and citation impact of US institutions receiving OER grant funding were mapped geographically. Word co-occurrence and bibliographic coupling networks were created and visualized to identify the research topics of OER-supported articles. Finally, article citation counts were evaluated by means of percentile ranks. This article demonstrates that bibliometric analysis can be useful for summarizing and evaluating the research performance of a grant-funding agency.
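As an illustration of the bibliographic coupling networks mentioned above, which link articles by the number of cited references they share, here is a minimal sketch of computing pairwise coupling strength; the paper IDs and reference lists are hypothetical.

```python
from itertools import combinations

# Hypothetical reference lists, keyed by paper ID.
references = {
    "paperA": {"ref1", "ref2", "ref3"},
    "paperB": {"ref2", "ref3", "ref4"},
    "paperC": {"ref5"},
}

# Bibliographic coupling strength = number of shared references per pair.
coupling = {
    (a, b): len(references[a] & references[b])
    for a, b in combinations(sorted(references), 2)
}
for pair, strength in coupling.items():
    if strength > 0:
        print(pair, strength)  # ('paperA', 'paperB') 2
```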
Journal of the Association for Information Science and Technology | 2016
Christopher W. Belter
Systematic reviews are essential for evaluating biomedical treatment options, but the growing size and complexity of the available biomedical literature, combined with the rigor of the systematic review method, mean that systematic reviews are extremely difficult and labor-intensive to perform. In this article, I propose a method of searching the literature by systematically mining the various types of citation relationships between articles. I then test the method by comparing its precision and recall to that of 14 published systematic reviews. The method successfully retrieved 74% of the studies included in these reviews and 90% of the studies it could reasonably be expected to retrieve. The method also retrieved fewer than half of the total number of publications retrieved by these reviews and can be performed in substantially less time. This suggests that the proposed method offers a promising complement to traditional text-based methods of literature identification and retrieval for systematic reviews.
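A minimal sketch of the precision and recall comparison used to test the method, assuming the retrieved and included studies are available as sets of identifiers; the sets below are hypothetical.

```python
def precision_recall(retrieved: set[str], included: set[str]) -> tuple[float, float]:
    """Precision and recall of a retrieved set against a review's included studies."""
    hits = retrieved & included
    precision = len(hits) / len(retrieved) if retrieved else 0.0
    recall = len(hits) / len(included) if included else 0.0
    return precision, recall

# Hypothetical: citation mining retrieved 4 papers; the review included 3.
retrieved = {"p1", "p2", "p3", "p4"}
included = {"p2", "p3", "p5"}
print(precision_recall(retrieved, included))  # -> (0.5, 0.666...)
```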
Information Services & Use | 2016
James King; Christopher W. Belter; Bridget Burns; MaShana Davis
The National Institutes of Health (NIH) Library meets the needs of the diverse NIH research community through a range of innovative services, resources, and knowledge. Based upon an understanding of the information industry and the mission and goals of NIH, the NIH Library offers a number of services that exploit data of various types to support assessment and create value. Understanding our users’ engagement with content (e.g., citations, behavioral data, research funding) allows us to provide personal and customized services including bibliometrics, collection assessment, and custom information solutions.
Scientometrics | 2017
Christopher W. Belter
A growing number of researchers are exploring the use of citation relationships such as direct citation, bibliographic coupling, and co-citation for information retrieval in scientific databases and digital libraries. In this paper, I propose a method of ranking the relevance of citation-based search results to a set of key, or seed, papers by measuring the number of citation relationships they share with those key papers. I tested the method against 23 published systematic reviews and found that the method retrieved 87% of the studies included in these reviews. The relevance ranking approach identified a subset of the citation search results that comprised 27% of the total documents retrieved by the method, and 7% of the documents retrieved by these reviews, but that contained 75% of the studies included in these reviews. Additional testing suggested that the method may be less appropriate for reviews that combine literature in ways that are not reflected in the literature itself. These results suggest that this ranking method could be useful in a range of information retrieval contexts.
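A minimal sketch of the ranking idea described above: score each retrieved document by the number of citation relationships it shares with the seed papers, then sort by score. The relationship data is hypothetical and flattened to simple (candidate, seed) pairs rather than distinguishing direct citation, bibliographic coupling, and co-citation.

```python
from collections import Counter

# Hypothetical citation relationships as (candidate, seed) pairs, one pair
# per shared relationship of any type.
relationships = [
    ("cand1", "seedA"), ("cand1", "seedB"), ("cand1", "seedC"),
    ("cand2", "seedA"),
    ("cand3", "seedB"), ("cand3", "seedC"),
]
seeds = {"seedA", "seedB", "seedC"}

# Rank candidates by the number of relationships shared with seed papers.
scores = Counter(cand for cand, seed in relationships if seed in seeds)
for cand, score in scores.most_common():
    print(cand, score)  # cand1 3, cand3 2, cand2 1
```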
Obesity | 2016
Holly L. Nicastro; Christopher W. Belter; Michael S. Lauer; Sean Coady; Lawrence J. Fine; Catherine M. Loria
To describe and elucidate the time trends of the academic productivity of NHLBI's obesity-related research funding via bibliometric analysis of 30 years of NHLBI-supported obesity-related publications.
PLOS ONE | 2018
Lisa Federer; Christopher W. Belter; Douglas J. Joubert; Alicia A. Livinski; Ya-Ling Lu; Lissa N. Snyders; Holly Thompson
A number of publishers and funders, including PLOS, have recently adopted policies requiring researchers to share the data underlying their results and publications. Such policies help increase the reproducibility of the published literature, as well as make a larger body of data available for reuse and re-analysis. In this study, we evaluate the extent to which authors have complied with this policy by analyzing Data Availability Statements from 47,593 papers published in PLOS ONE between March 2014 (when the policy went into effect) and May 2016. Our analysis shows that compliance with the policy has increased, with a significant decline over time in papers that did not include a Data Availability Statement. However, only about 20% of statements indicate that data are deposited in a repository, which the PLOS policy states is the preferred method. More commonly, authors state that their data are in the paper itself or in the supplemental information, though it is unclear whether these data meet the level of sharing required in the PLOS policy. These findings suggest that additional review of Data Availability Statements or more stringent policies may be needed to increase data sharing.
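One way such an analysis can be approached is by keyword-matching the extracted statements into categories; the sketch below illustrates that idea, and its categories and patterns are assumptions rather than the study's actual coding scheme.

```python
# Illustrative keyword-based classifier for Data Availability Statements;
# the categories and patterns are assumptions, not the study's coding scheme.

RULES = [
    ("repository", ("dryad", "figshare", "genbank", "zenodo", "doi.org")),
    ("in_paper_or_si", ("within the paper", "supporting information", "supplemental")),
    ("on_request", ("upon request", "on request")),
]

def classify(statement: str) -> str:
    text = statement.lower()
    for label, keywords in RULES:
        if any(kw in text for kw in keywords):
            return label
    return "other"

print(classify("All data are available from the Dryad repository."))  # repository
print(classify("All relevant data are within the paper."))            # in_paper_or_si
```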
Archive | 2018
Christopher W. Belter
Bibliometrics are increasingly being used to evaluate scientific research and demonstrate the impact of scientific research programs, but they are too often used without a full understanding of how they should be generated or interpreted. This presents an opportunity for librarians and informationists. With their preexisting skills and unique position, informationists are perfectly suited to provide accurate and informed bibliometric services to their customers. In this chapter, I provide a brief introduction to the science of bibliometrics, make the case for informationist involvement in bibliometric analyses, provide a case study of the bibliometric services program at the National Institutes of Health (NIH) Library, and offer advice to informationists interested in offering bibliometric services to their customers.
Journal of the National Cancer Institute | 2018
Nicole L. Stout; Catherine M. Alfano; Christopher W. Belter; Ralph Nitkin; Alison N. Cernich; Karen Lohmann Siegel; Leighton Chan
Cancer rehabilitation research has accelerated as greater attention has focused on improving survivorship care. Recent expert consensus has attempted to prioritize research needs and suggests a greater focus on studying the physical functioning of survivors. However, no analysis of the publication landscape has substantiated these proposed needs. This manuscript provides an analysis of PubMed-indexed articles related to cancer rehabilitation published between 1992 and 2017. A total of 22,171 publications were analyzed using machine learning and text analysis to assess publication metrics, topic areas of emphasis, and their interrelationships through topic similarity networks. Publications have increased at a rate of 136 articles per year. Approximately 10% of publications were funded by National Institutes of Health institutes and centers, with the National Cancer Institute being the most prominent funder. The greatest volume and rate of publication increase were in the topics of Cognitive and Behavioral Therapies and Psychological Interventions, followed by Depression and Exercise Therapy. Four research topic similarity networks were identified and provide insight into areas of robust publication and notable deficits. Findings suggest that publication emphasis has strongly supported cognitive, behavioral, and psychological therapies; however, studies of functional morbidity and physical rehabilitation research are lacking. Three areas of publication deficit are noted: research on populations outside of breast, prostate, and lung cancers; methods for integrating physical rehabilitation services with cancer care, specifically regarding functional screening and assessment; and physical rehabilitation interventions. These deficits align with the needs identified by expert consensus and support the supposition that future research should emphasize physical rehabilitation.
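As an illustration of the topic similarity networks mentioned above, here is a minimal sketch that links topics by cosine similarity between term-frequency vectors; the topics, term counts, and threshold are hypothetical, not the study's actual model.

```python
import math
from itertools import combinations

# Hypothetical term-frequency vectors for three topics.
topics = {
    "exercise_therapy": {"exercise": 40, "fatigue": 15, "aerobic": 10},
    "psychological": {"depression": 30, "anxiety": 20, "fatigue": 10},
    "screening": {"assessment": 25, "function": 20, "screening": 15},
}

def cosine(u: dict[str, int], v: dict[str, int]) -> float:
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(u[t] * v[t] for t in u.keys() & v.keys())
    norm_u = math.sqrt(sum(x * x for x in u.values()))
    norm_v = math.sqrt(sum(x * x for x in v.values()))
    return dot / (norm_u * norm_v) if norm_u and norm_v else 0.0

# Connect topics whose similarity exceeds a (hypothetical) threshold.
edges = [
    (a, b, round(cosine(topics[a], topics[b]), 3))
    for a, b in combinations(topics, 2)
    if cosine(topics[a], topics[b]) > 0.05
]
print(edges)  # e.g. [('exercise_therapy', 'psychological', 0.091)]
```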