Publication


Featured research published by Richard Gershon.


Medical Care | 2007

The Patient-Reported Outcomes Measurement Information System (PROMIS): progress of an NIH Roadmap cooperative group during its first two years.

David Cella; Susan Yount; Nan Rothrock; Richard Gershon; Karon F. Cook; Bryce B. Reeve; Deborah N. Ader; James F. Fries; Bonnie Bruce; Mattias Rose

Background: The National Institutes of Health (NIH) Patient-Reported Outcomes Measurement Information System (PROMIS) Roadmap initiative (www.nihpromis.org) is a 5-year cooperative group program of research designed to develop, validate, and standardize item banks to measure patient-reported outcomes (PROs) relevant across common medical conditions. In this article, we summarize the organization and scientific activity of the PROMIS network during its first 2 years. Design: The network consists of 6 primary research sites (PRSs), a statistical coordinating center (SCC), and NIH research scientists. Governed by a steering committee, the network is organized into functional subcommittees and working groups. In the first year, we created an item library and activated 3 interacting protocols: Domain Mapping, Archival Data Analysis, and Qualitative Item Review (QIR). In the second year, we developed and initiated testing of item banks covering 5 broad domains of self-reported health. Results: The domain mapping process is built on the World Health Organization (WHO) framework of physical, mental, and social health. From this framework, pain, fatigue, emotional distress, physical functioning, social role participation, and global health perceptions were selected for the first wave of testing. Item response theory (IRT)-based analysis of 11 large datasets supplemented and informed item-level qualitative review of nearly 7000 items from available PRO measures in the item library. Items were selected for rewriting or creation, with further detailed review before the first round of testing in the general population and target patient populations. Conclusions: The NIH PROMIS network derived a consensus-based framework for self-reported health and systematically reviewed available instruments and datasets that address the initial PROMIS domains. Qualitative item research led to the first wave of network testing, which began in the second year.


Medical Care | 2007

Psychometric evaluation and calibration of health-related quality of life item banks: Plans for the Patient-Reported Outcomes Measurement Information System (PROMIS)

Bryce B. Reeve; Ron D. Hays; Jakob B. Bjorner; Karon F. Cook; Paul K. Crane; Jeanne A. Teresi; David Thissen; Dennis A. Revicki; David J. Weiss; Ronald K. Hambleton; Honghu Liu; Richard Gershon; Steven P. Reise; Jin Shei Lai; David Cella

Background: The construction and evaluation of item banks to measure unidimensional constructs of health-related quality of life (HRQOL) is a fundamental objective of the Patient-Reported Outcomes Measurement Information System (PROMIS) project. Objectives: Item banks will be used as the foundation for developing short-form instruments and enabling computerized adaptive testing. The PROMIS Steering Committee selected 5 HRQOL domains for initial focus: physical functioning, fatigue, pain, emotional distress, and social role participation. This report provides an overview of the methods used in the PROMIS item analyses and proposed calibration of item banks. Analyses: Analyses include evaluation of data quality (eg, logic and range checking, spread of response distribution within an item), descriptive statistics (eg, frequencies, means), item response theory model assumptions (unidimensionality, local independence, monotonicity), model fit, differential item functioning, and item calibration for banking. Recommendations: Summarized are key analytic issues; recommendations are provided for future evaluations of item banks in HRQOL assessment.


Quality of Life Research | 2007

The future of outcomes measurement: item banking, tailored short-forms, and computerized adaptive assessment

David Cella; Richard Gershon; Jin Shei Lai; Seung W. Choi

The use of item banks and computerized adaptive testing (CAT) begins with clear definitions of important outcomes, and references those definitions to specific questions gathered into large and well-studied pools, or “banks” of items. Items can be selected from the bank to form customized short scales, or can be administered in a sequence and length determined by a computer programmed for precision and clinical relevance. Although far from perfect, such item banks can form a common definition and understanding of human symptoms and functional problems such as fatigue, pain, depression, mobility, social function, sensory function, and many other health concepts that we can only measure by asking people directly. The support of the National Institutes of Health (NIH), as witnessed by its cooperative agreement with measurement experts through the NIH Roadmap Initiative known as PROMIS (www.nihpromis.org), is a big step in that direction. Our approach to item banking and CAT is practical; as focused on application as it is on science or theory. From a practical perspective, we frequently must decide whether to re-write and retest an item, add more items to fill gaps (often at the ceiling of the measure), re-test a bank after some modifications, or split up a bank into units that are more unidimensional, yet less clinically relevant or complete. These decisions are not easy, and yet they are rarely unforgiving. We encourage people to build practical tools that are capable of producing multiple short form measures and CAT administrations from common banks, and to further our understanding of these banks with various clinical populations and ages, so that with time the scores that emerge from these many activities begin to have not only a common metric and range, but a shared meaning and understanding across users. 
In this paper, we provide an overview of item banking and CAT, discuss our approach to item banking and its byproducts, describe testing options, discuss an example of CAT for fatigue, and discuss models for long term sustainability of an entity such as PROMIS. Some barriers to success include limitations in the methods themselves, controversies and disagreements across approaches, and end-user reluctance to move away from the familiar.
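The core CAT loop the authors describe (selecting each next item from a calibrated bank to maximize measurement precision at the respondent's current trait estimate) can be sketched in a few lines. The following is a minimal illustration, not PROMIS code: it assumes a two-parameter logistic (2PL) IRT model, a toy item bank with invented discrimination/difficulty parameters, maximum-information item selection, and a grid-based EAP trait estimate.

```python
import math

# Illustrative 2PL item bank as (discrimination a, difficulty b) pairs.
# These parameters are invented, not PROMIS calibrations.
BANK = [(1.2, -1.0), (0.8, 0.0), (1.5, 0.5), (1.0, 1.2), (2.0, -0.3)]

def prob(theta, a, b):
    """2PL probability of endorsing an item at trait level theta."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def info(theta, a, b):
    """Fisher information of a 2PL item at theta; CAT administers the
    unadministered item that maximizes this."""
    p = prob(theta, a, b)
    return a * a * p * (1.0 - p)

def next_item(theta, administered):
    """Index of the most informative item not yet administered."""
    pool = [i for i in range(len(BANK)) if i not in administered]
    return max(pool, key=lambda i: info(theta, *BANK[i]))

def eap_theta(responses):
    """Expected a posteriori trait estimate from (item, 0/1 response)
    pairs, using a standard-normal prior on a coarse grid."""
    grid = [x / 10.0 for x in range(-40, 41)]
    post = []
    for t in grid:
        w = math.exp(-0.5 * t * t)          # unnormalized N(0, 1) prior
        for i, u in responses:
            p = prob(t, *BANK[i])
            w *= p if u else (1.0 - p)
        post.append(w)
    return sum(t * w for t, w in zip(grid, post)) / sum(post)
```

Administration alternates `next_item` and `eap_theta` until a precision or length stopping rule is met; production CAT engines add exposure control and content balancing, which are omitted here.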


Neurology | 2013

Cognition assessment using the NIH Toolbox

Sandra Weintraub; Sureyya Dikmen; Robert K. Heaton; David S. Tulsky; Philip David Zelazo; Patricia J. Bauer; Noelle E. Carlozzi; Jerry Slotkin; David L. Blitz; Kathleen Wallner-Allen; Nathan A. Fox; Jennifer L. Beaumont; Dan Mungas; Cindy J. Nowinski; Jennifer Richler; Joanne Deocampo; Jacob E. Anderson; Jennifer J. Manly; Beth G. Borosh; Richard Havlik; Kevin P. Conway; Emmeline Edwards; Lisa Freund; Jonathan W. King; Claudia S. Moy; Ellen Witt; Richard Gershon

Vision is a sensation that is created from complex processes and provides us with a representation of the world around us. There are many important aspects of vision, but visual acuity was judged to be the most appropriate vision assessment for the NIH Toolbox for Assessment of Neurological and Behavioral Function, both because of its central role in visual health and because acuity testing is common and relatively inexpensive to implement broadly. The impact of visual impairments on health-related quality of life also was viewed as important to assess, in order to gain a broad view of ones visual function. To test visual acuity, an easy-to-use software program was developed, based on the protocol used by the E-ETDRS. Children younger than 7 years were administered a version with only the letters H, O, T, and V. Reliability and validity of the Toolbox visual acuity test were very good. A 53-item vision-targeted, health-related quality of life survey was also developed.


Pain | 2006

Developing patient-reported outcome measures for pain clinical trials: IMMPACT recommendations

Dennis C. Turk; Robert H. Dworkin; Laurie B. Burke; Richard Gershon; Margaret Rothman; Jane Scott; Robert R. Allen; J. Hampton Atkinson; Julie Chandler; Charles Cleeland; Penny Cowan; Rozalina Dimitrova; Raymond Dionne; John T. Farrar; Jennifer A. Haythornthwaite; Sharon Hertz; Alejandro R. Jadad; Mark P. Jensen; David Kellstein; Robert D. Kerns; Donald C. Manning; Susan Martin; Mitchell B. Max; Michael P. McDermott; Patrick McGrath; Dwight E. Moulin; Turo Nurmikko; Steve Quessy; Srinivasa N. Raja; Bob A. Rappaport

Author affiliations: University of Washington, Seattle, WA 98195, USA; University of Rochester School of Medicine and Dentistry, Rochester, NY, USA; United States Food and Drug Administration, Rockville, MD, USA; Northwestern University, Chicago, IL, USA; Johnson and Johnson, Raritan, NY, USA; AstraZeneca, Wilmington, DE, USA; University of California San Diego, La Jolla, CA, USA; Merck and Company, Blue Bell, PA, USA; University of Texas M.D. Anderson Cancer Center, USA; American Chronic Pain Association, Rocklin, CA, USA; Allergan, Inc., Irvine, CA, USA; National Institute of Dental and Craniofacial Research, Bethesda, MD, USA; University of Pennsylvania, Philadelphia, PA, USA; Johns Hopkins University, Baltimore, MD, USA; University Health Network and University of Toronto, Toronto, Canada; Novartis Pharmaceuticals, East Hanover, NJ, USA; VA Connecticut Healthcare System, West Haven, CT, USA; Yale University, New Haven, CT, USA; Celgene Corporation, Warren, NJ, USA; Pfizer Global Research and Development, Ann Arbor, MI, USA; Dalhousie University, Halifax, Nova Scotia, Canada; London Regional Cancer Centre, London, Ont., Canada


Journal of Clinical Epidemiology | 2010

Representativeness of the Patient-Reported Outcomes Measurement Information System Internet panel

Honghu Liu; David Cella; Richard Gershon; Jie Shen; Leo S. Morales; William T. Riley; Ron D. Hays

OBJECTIVES To evaluate the Patient-Reported Outcomes Measurement Information System (PROMIS), which collected data from an Internet polling panel, and to compare PROMIS with national norms. STUDY DESIGN AND SETTING We compared demographics and self-rated health of the PROMIS general Internet sample (N=11,796) and one of its subsamples (n=2,196) selected to approximate the joint distribution of demographics from the 2000 U.S. Census, with three national surveys and U.S. Census data. The comparisons were conducted using equivalence testing with weights created for PROMIS by raking. RESULTS The weighted PROMIS population and subsample had similar demographics compared with the 2000 U.S. Census, except that the subsample had a higher percentage of people with education beyond high school. Equivalence testing shows similarity between the PROMIS general population and national norms with regard to body mass index, EQ-5D health index (EuroQol group defined descriptive system of health-related quality of life states consisting of five dimensions including mobility, self-care, usual activities, pain/discomfort, anxiety/depression), and self-rating of general health. CONCLUSION Self-rated health of the PROMIS general population is similar to that of existing samples from the general U.S. population. The weighted PROMIS general population is more comparable to national norms than the unweighted population with regard to subject characteristics. The findings suggest that the representativeness of the Internet data is comparable to that of probability-based general population samples.
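The "weights created for PROMIS by raking" refer to iterative proportional fitting: case weights are rescaled, one demographic margin at a time, until the weighted sample matches population targets on every margin. A minimal sketch with invented data (not the PROMIS weighting code):

```python
def rake(weights, categories, targets, iters=50):
    """Raking (iterative proportional fitting): repeatedly rescale case
    weights so the weighted sample matches target proportions on each
    demographic margin.  categories[var][k] is case k's category for
    variable var; targets[var][c] is the population proportion of c."""
    w = list(weights)
    for _ in range(iters):
        for var, cats in categories.items():
            # current weighted total of each category for this variable
            total_by_cat = {}
            for wi, c in zip(w, cats):
                total_by_cat[c] = total_by_cat.get(c, 0.0) + wi
            grand = sum(total_by_cat.values())
            for k, c in enumerate(cats):
                # multiply by (target share) / (current weighted share)
                w[k] *= targets[var][c] / (total_by_cat[c] / grand)
    return w
```

Each pass matches one margin exactly and disturbs the others slightly; iterating converges when the targets are mutually consistent.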


Neurology | 2012

Neuro-QOL: Brief measures of health-related quality of life for clinical research in neurology

David Cella; Jin Shei Lai; Cindy J. Nowinski; David Victorson; Amy H. Peterman; Deborah Miller; Francois Bethoux; Allen W. Heinemann; S. Rubin; Jose E. Cavazos; Anthony T. Reder; Robert Sufit; Tanya Simuni; Gregory L. Holmes; Andrew Siderowf; Valerie Wojna; Rita K. Bode; Natalie McKinney; Tracy Podrabsky; Katy Wortman; Seung W. Choi; Richard Gershon; Nan Rothrock; Claudia S. Moy

Objective: To address the need for brief, reliable, valid, and standardized quality of life (QOL) assessment applicable across neurologic conditions. Methods: Drawing from larger calibrated item banks, we developed short measures (8–9 items each) of 13 different QOL domains across physical, mental, and social health and evaluated their validity and reliability. Three samples were utilized during short form development: general population (Internet-based, n = 2,113); clinical panel (Internet-based, n = 553); and clinical outpatient (clinic-based, n = 581). All short forms are expressed as T scores with a mean of 50 and SD of 10. Results: Internal consistency (Cronbach α) of the 13 short forms ranged from 0.85 to 0.97. Correlations between short form and full-length item bank scores ranged from 0.88 to 0.99 (0.82–0.96 after removing common items from banks). Online respondents were asked whether they had any of 19 different chronic health conditions, and whether or not those reported conditions interfered with ability to function normally. All short forms, across physical, mental, and social health, were able to separate people who reported no health condition from those who reported 1–2 or 3 or more. In addition, scores on all 13 domains were worse for people who acknowledged being limited by the health conditions they reported, compared to those who reported conditions but were not limited by them. Conclusion: These 13 brief measures of self-reported QOL are reliable and show preliminary evidence of concurrent validity inasmuch as they differentiate people based upon number of reported health conditions and whether those reported conditions impede normal function.
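Two quantities in this abstract are easy to make concrete: the T-score metric (mean 50, SD 10) is a linear rescaling of a standard-normal IRT trait estimate, and internal consistency is Cronbach's alpha. A small illustrative sketch (the formulas are standard; any data passed in below would be invented, not Neuro-QOL data):

```python
def t_score(theta):
    """Linear conversion from a standard-normal trait estimate to the
    T-score metric used here: mean 50, SD 10."""
    return 50.0 + 10.0 * theta

def cronbach_alpha(item_scores):
    """Cronbach's alpha: item_scores[i][j] is respondent j's score on
    item i.  alpha = k/(k-1) * (1 - sum(item variances) / var(totals))."""
    k = len(item_scores)
    n = len(item_scores[0])

    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    # each respondent's total score across the k items
    totals = [sum(item_scores[i][j] for i in range(k)) for j in range(n)]
    return k / (k - 1) * (1.0 - sum(var(col) for col in item_scores) / var(totals))
```

On this metric a respondent one SD above the general-population mean scores 60, which is why group differences in the paper are readable directly in T-score points.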


Neurology | 2013

NIH Toolbox for Assessment of Neurological and Behavioral Function

Richard Gershon; Molly V. Wagster; Hugh C. Hendrie; Nathan A. Fox; Karon F. Cook; Cindy J. Nowinski

At present, many studies collect information on aspects of neurologic and behavioral function (cognition, sensation, movement, emotion), but with little uniformity among the measures used to capture these constructs. Further, available measures are generally expensive, are normed on homogeneous, nondiverse populations, are not easily administered, do not cover the lifespan (or lack easily linked pediatric and adult counterparts for longitudinal comparison), and are not based on current thinking in the neuroscience community. There is also a paucity of measurement tools for assessing normally developing children in the motor and sensation domains, and many of these measures rely heavily on proxy reporting. Investigators have expressed the need for brief assessment tools that could address these issues and be used as a form of “common currency” across diverse study designs and populations. This ability to assess function along a common metric and “crosswalk” across measures is essential to pooling data, which is often necessary when a large and diverse sample is needed. When individual studies employ unique assessment batteries, comparing studies and combining data across them can be problematic. The contract for the NIH Toolbox for the Assessment of Neurological and Behavioral Function (www.nihtoolbox.org) was initiated by the NIH Blueprint for Neuroscience Research (www.neuroscienceblueprint.nih.gov) to develop a set of state-of-the-art measurement tools to enhance data collection in large cohort studies and to advance the biomedical research enterprise.


Archives of Physical Medicine and Rehabilitation | 2011

How item banks and their application can influence measurement practice in rehabilitation medicine: a PROMIS fatigue item bank example.

Jin Shei Lai; David Cella; Seung W. Choi; Doerte U. Junghaenel; Christopher Christodoulou; Richard Gershon; Arthur A. Stone

OBJECTIVE To illustrate how measurement practices can be advanced by using as an example the fatigue item bank (FIB) and its applications (short forms and computerized adaptive testing [CAT]) that were developed through the National Institutes of Health Patient Reported Outcomes Measurement Information System (PROMIS) Cooperative Group. DESIGN Psychometric analysis of data collected by an Internet survey company using item response theory-related techniques. SETTING A U.S. general population representative sample collected through the Internet. PARTICIPANTS Respondents used for dimensionality evaluation of the PROMIS FIB (N=603) and item calibrations (N=14,931). INTERVENTIONS Not applicable. MAIN OUTCOME MEASURES Fatigue items (112) developed by the PROMIS fatigue domain working group, 13-item Functional Assessment of Chronic Illness Therapy-Fatigue, and 4-item Medical Outcomes Study 36-Item Short Form Health Survey Vitality scale. RESULTS The PROMIS FIB version 1, which consists of 95 items, showed acceptable psychometric properties. CAT showed consistently better precision than short forms. However, all 3 short forms showed good precision for most participants in that more than 95% of the sample could be measured precisely with reliability greater than 0.9. CONCLUSIONS Measurement practice can be advanced by using a psychometrically sound measurement tool and its applications. This example shows that CAT and short forms derived from the PROMIS FIB can reliably estimate fatigue reported by the U.S. general population. Evaluation in clinical populations is warranted before the item bank can be used for clinical trials.
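The "reliability greater than 0.9" criterion in this abstract maps directly onto the standard error of an IRT score: with trait variance normalized to 1, marginal reliability is one minus the error variance. A one-line sketch of that relationship (illustrative, not the paper's analysis code):

```python
def reliability(se, sd=1.0):
    """IRT-style marginal reliability: 1 - (error variance / trait variance)
    for a score with standard error se and trait SD sd."""
    return 1.0 - (se / sd) ** 2

# The 0.9 criterion implies a standard error of at most sqrt(0.1) ~= 0.316
# on a standard-normal theta metric (about 3.16 points on a T-score metric).
```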


Quality of Life Research | 2010

The development of a clinical outcomes survey research application: Assessment Center℠

Richard Gershon; Nan Rothrock; Rachel T. Hanrahan; Liz Jansky; Mark Harniss; William T. Riley

Introduction: The National Institutes of Health sponsored Patient-Reported Outcomes Measurement Information System (PROMIS) aimed to create item banks and computerized adaptive tests (CATs) across multiple domains for individuals with a range of chronic diseases. Purpose: Web-based software was created to enable a researcher to create study-specific Websites that could administer PROMIS CATs and other instruments to research participants or clinical samples. This paper outlines the process used to develop a user-friendly, free, Web-based resource (Assessment Center℠) for storage, retrieval, organization, sharing, and administration of patient-reported outcome (PRO) instruments. Methods: Joint Application Design (JAD) sessions were conducted with representatives from numerous institutions in order to supply a general wish list of features. Use Cases were then written to ensure that end user expectations matched programmer specifications. Program development included daily programmer “scrum” sessions, weekly Usability Acceptability Testing (UAT), and continuous Quality Assurance (QA) activities pre- and post-release. Results: Assessment Center includes features that promote instrument development, including item histories, data management, and storage of statistical analysis results. Conclusions: This case study of software development highlights the collection and incorporation of user input throughout the development process. Potential future applications of Assessment Center in clinical research are discussed.

Collaboration

Top co-authors of Richard Gershon:

David Cella (Northwestern University)
David S. Tulsky (University of Medicine and Dentistry of New Jersey)
Jerry Slotkin (NorthShore University HealthSystem)
Jin Shei Lai (Northwestern University)
Dan Mungas (University of California)