Publication


Featured research published by Matthew Hunsinger.


Proceedings of the National Academy of Sciences of the United States of America | 2015

Female peers in small work groups enhance women's motivation, verbal participation, and career aspirations in engineering

Nilanjana Dasgupta; Melissa McManus Scircle; Matthew Hunsinger

Significance: Advances in science, technology, engineering, and mathematics are critical to the American economy and require a robust workforce. The scarcity of women in this workforce is a well-recognized problem, but data-driven solutions to this problem are less common. We provide experimental evidence showing that gender composition of small groups in engineering has a substantial impact on undergraduate women’s persistence. Women participate more actively in engineering groups when members are mostly female vs. mostly male or in equal gender proportions. Women feel less anxious in female-majority groups vs. minority groups, especially as first-year students. Gender-parity groups are less effective than female-majority groups in promoting verbal participation. Female peers protect women’s confidence and engineering career aspirations despite masculine stereotypes about engineering.

Abstract: For years, public discourse in science education, technology, and policy-making has focused on the “leaky pipeline” problem: the observation that fewer women than men enter science, technology, engineering, and mathematics fields and more women than men leave. Less attention has focused on experimentally testing solutions to this problem. We report an experiment investigating one solution: we created “microenvironments” (small groups) in engineering with varying proportions of women to identify which environment increases motivation and participation, and whether outcomes depend on students’ academic stage. Female engineering students were randomly assigned to one of three engineering groups of varying sex composition: 75% women, 50% women, or 25% women. For first-years, group composition had a large effect: women in female-majority and sex-parity groups felt less anxious than women in female-minority groups. However, among advanced students, sex composition had no effect on anxiety. Importantly, group composition significantly affected verbal participation, regardless of women’s academic seniority: women participated more in female-majority groups than sex-parity or female-minority groups. Additionally, when assigned to female-minority groups, women who harbored implicit masculine stereotypes about engineering reported less confidence and engineering career aspirations. However, in sex-parity and female-majority groups, confidence and career aspirations remained high regardless of implicit stereotypes. These data suggest that creating small groups with high proportions of women in otherwise male-dominated fields is one way to keep women engaged and aspiring toward engineering careers. Although sex parity works sometimes, it is insufficient to boost women’s verbal participation in group work, which often affects learning and mastery.


Pain | 2014

Reporting of missing data and methods used to accommodate them in recent analgesic clinical trials: ACTTION systematic review and recommendations

Jennifer S. Gewandter; Michael P. McDermott; Andrew McKeown; Shannon M. Smith; Mark R. Williams; Matthew Hunsinger; John T. Farrar; Dennis C. Turk; Robert H. Dworkin

Summary: This article reports deficiencies in reporting of missing data and methods to accommodate them, reviews methods to accommodate missing data that were recommended by statisticians and regulators, and provides recommendations for authors, reviewers, and editors pertaining to reporting of these important statistical details.

Abstract: Missing data in clinical trials can bias estimates of treatment effects. Statisticians and government agencies recommend making every effort to minimize missing data. Although statistical methods are available to accommodate missing data, their validity depends on often untestable assumptions about why the data are missing. The objective of this study was to assess the frequency with which randomized clinical trials published in 3 major pain journals (ie, European Journal of Pain, Journal of Pain, and Pain) reported strategies to prevent missing data, the number of participants who completed the study (ie, completers), and statistical methods to accommodate missing data. A total of 161 randomized clinical trials investigating treatments for pain, published between 2006 and 2012, were included. Approximately two-thirds of the trials reported at least 1 method that could potentially minimize missing data, the most common being allowance of concomitant medications. Only 61% of the articles explicitly reported the number of patients who were randomized and completed the trial. Although only 14 articles reported that all randomized participants completed the study, fewer than 50% of the articles reported a statistical method to accommodate missing data. Last observation carried forward imputation was used most commonly (42%). Thirteen articles reported more than 1 method to accommodate missing data; however, the majority of methods, including last observation carried forward, were not methods currently recommended by statisticians. Authors, reviewers, and editors should prioritize proper reporting of missing data and appropriate use of methods to accommodate them so as to improve the deficiencies identified in this systematic review.
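
Illustrative aside (not from the article): a minimal Python/pandas sketch of how last observation carried forward imputation, the method most often reported in the reviewed trials, behaves on a small invented longitudinal dataset. The column names and pain scores are hypothetical.

```python
# Minimal sketch of last observation carried forward (LOCF) imputation.
# Column names ("subject", "week", "pain") and values are invented.
import pandas as pd
import numpy as np

data = pd.DataFrame({
    "subject": [1, 1, 1, 1, 2, 2, 2, 2],
    "week":    [0, 1, 2, 3, 0, 1, 2, 3],
    # NaN marks visits the participant missed (subject 2 drops out after week 1)
    "pain":    [7.0, 6.0, np.nan, 4.0, 8.0, 7.5, np.nan, np.nan],
})

# LOCF: within each subject, carry the last observed score forward in time.
data = data.sort_values(["subject", "week"])
data["pain_locf"] = data.groupby("subject")["pain"].ffill()

print(data)
# Note: LOCF assumes pain stays constant after dropout, an often untestable
# (and frequently implausible) assumption, which is one reason statisticians
# no longer recommend it as a primary approach.
```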


Pain | 2014

Reporting of primary analyses and multiplicity adjustment in recent analgesic clinical trials: ACTTION systematic review and recommendations

Jennifer S. Gewandter; Shannon M. Smith; Andrew McKeown; Laurie B. Burke; Sharon Hertz; Matthew Hunsinger; Nathaniel P. Katz; Allison H. Lin; Michael P. McDermott; Bob A. Rappaport; Mark R. Williams; Dennis C. Turk; Robert H. Dworkin

Summary: Deficiencies in reporting of primary analyses and multiplicity adjustment methods are summarized, and recommendations are provided for authors, reviewers, and editors pertaining to reporting of these important statistical details.

Abstract: Performing multiple analyses in clinical trials can inflate the probability of a type I error, or the chance of falsely concluding a significant effect of the treatment. Strategies to minimize type I error probability include prespecification of primary analyses and statistical adjustment for multiple comparisons, when applicable. The objective of this study was to assess the quality of primary analysis reporting and frequency of multiplicity adjustment in 3 major pain journals (ie, European Journal of Pain, Journal of Pain, and PAIN®). A total of 161 randomized controlled trials investigating noninvasive pharmacological treatments or interventional treatments for pain, published between 2006 and 2012, were included. Only 52% of trials identified a primary analysis, and only 10% of trials reported prespecification of that analysis. Among the 33 articles that identified a primary analysis with multiple testing, 15 (45%) adjusted for multiplicity; of those 15, only 2 (13%) reported prespecification of the adjustment methodology. Trials in clinical pain conditions and industry-sponsored trials identified a primary analysis more often than trials in experimental pain models and non-industry-sponsored trials, respectively. The results of this systematic review demonstrate deficiencies in the reporting and possibly the execution of primary analyses in published analgesic trials. These deficiencies can be rectified by changes in, or better enforcement of, journal policies pertaining to requirements for the reporting of analyses of clinical trial data.
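
Illustrative aside (not from the article): a minimal Python sketch, using invented numbers, of how the family-wise type I error rate grows as more unadjusted analyses are performed, and how a simple Bonferroni correction controls it.

```python
# Minimal sketch of type I error inflation under multiple testing and a
# Bonferroni adjustment. All numbers are invented for illustration.
import numpy as np

alpha = 0.05
k = 10  # number of independent analyses, chosen for illustration

# Family-wise error rate if each test is run at the unadjusted 0.05 level:
fwer_unadjusted = 1 - (1 - alpha) ** k
print(f"Chance of at least one false positive across {k} tests: {fwer_unadjusted:.2f}")  # ~0.40

# Bonferroni adjustment: test each hypothesis at alpha / (number of tests).
p_values = np.array([0.003, 0.012, 0.04, 0.20, 0.47])  # hypothetical p-values
reject_unadjusted = p_values < alpha
reject_bonferroni = p_values < alpha / len(p_values)
print("Significant without adjustment:", reject_unadjusted)
print("Significant with Bonferroni:   ", reject_bonferroni)
```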


The Journal of Pain | 2015

Quality of Pain Intensity Assessment Reporting: ACTTION Systematic Review and Recommendations

Shannon M. Smith; Matthew Hunsinger; Andrew McKeown; Melissa Parkhurst; Robert R. Allen; Stephen Kopko; Yun Lu; Hilary D. Wilson; Laurie B. Burke; Paul J. Desjardins; Michael P. McDermott; Bob A. Rappaport; Dennis C. Turk; Robert H. Dworkin

Pain intensity assessments are used widely in human pain research, and their transparent reporting is crucial to interpreting study results. In this systematic review, we examined reporting of human pain intensity assessments and related elements (eg, administration frequency, time period assessed, type of pain) in all empirical pain studies with adult participants in 3 major pain journals (ie, European Journal of Pain, Journal of Pain, and Pain) between January 2011 and July 2012. Of the 262 articles identified, close to one-quarter (24%) ambiguously reported the pain intensity assessment. Elements related to the pain intensity assessment were frequently not reported: 31% did not identify the time period participants were asked to rate, 43% failed to report the type of pain intensity rated, and 58% did not report the specific location or pain condition rated. No differences were observed between randomized clinical trials and experimental (eg, studies involving experimental manipulation without random group assignment and blinding) and observational studies in reporting quality. The ability to understand study results, and to compare results between studies, is compromised when pain intensity assessments are not fully reported. Recommendations are presented regarding key details for investigators to consider when conducting and reporting pain intensity assessments in human adults.

Perspective: This systematic review demonstrates that publications of pain research often incompletely report pain intensity assessments and their details (eg, administration frequency, type of pain). Failure to fully report details of pain intensity assessments creates ambiguity in interpreting research results. Recommendations are proposed to increase transparent reporting.


Pain | 2016

Pain intensity rating training: results from an exploratory study of the ACTTION PROTECCT system.

Shannon M. Smith; Dagmar Amtmann; Robert L. Askew; Jennifer S. Gewandter; Matthew Hunsinger; Mark P. Jensen; Michael P. McDermott; Kushang V. Patel; Mark R. Williams; E. D. Bacci; Laurie B. Burke; C. T. Chambers; Stephen A. Cooper; Penny Cowan; Paul J. Desjardins; Mila Etropolski; John T. Farrar; Ian Gilron; I. Z. Huang; M. Katz; Robert D. Kerns; Ernest A. Kopecky; Bob A. Rappaport; Malca Resnick; Geertrui F. Vanhove; C. Veasley; Mark Versavel; Ajay D. Wasan; Dennis C. Turk; Robert H. Dworkin

Abstract: Clinical trial participants often require additional instruction to prevent idiosyncratic interpretations regarding completion of patient-reported outcomes. The Analgesic, Anesthetic, and Addiction Clinical Trial Translations, Innovations, Opportunities, and Networks (ACTTION) public–private partnership developed a training system with specific, standardized guidance regarding daily average pain intensity ratings. A 3-week exploratory study among participants with low-back pain, osteoarthritis of the knee or hip, and painful diabetic peripheral neuropathy was conducted, randomly assigning participants to 1 of 3 groups: training with human pain assessment (T+); training with automated pain assessment (T); or no training with automated pain assessment (C). Although most measures of validity and reliability did not reveal significant differences between groups, some benefit was observed in discriminant validity, amount of missing data, and ranking order of least, worst, and average pain intensity ratings for participants in Group T+ compared with the other groups. The prediction of greater reliability in average pain intensity ratings in Group T+ compared with the other groups was not supported, which might indicate that training produces ratings that reflect the reality of temporal pain fluctuations. Results of this novel study suggest the need to test the training system in a prospective analgesic treatment trial.


Journal of Evidence-Based Complementary & Alternative Medicine | 2013

Mindfulness-Based Stress Reduction and Change in Health-Related Behaviors

Elena Salmoirago-Blotcher; Matthew Hunsinger; Lucas Morgan; Daniel Fischer; James Carmody

How best to support change in health-related behaviors is an important public health challenge. The role of mindfulness training in this process has received limited attention. We sought to explore whether mindfulness training is associated with changes in health-related behaviors. The Health Behaviors Questionnaire was used to obtain self-reported data on dietary behaviors, drinking, smoking, physical activity, and sleep quality before and after attendance at an 8-week Mindfulness-Based Stress Reduction program. T-tests for paired data and χ2 tests were used to compare pre–post intervention means and proportions of relevant variables, with P = .05 as the level of significance. Participants (n = 174; mean age 47 years, range 19-68; 61% female) reported significant improvements in dietary behaviors and sleep quality. Partial changes were seen in physical activity, but no changes were seen in smoking and drinking habits. In conclusion, mindfulness training promotes favorable changes in selected health-related behaviors that deserve further study in randomized controlled trials.
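
Illustrative aside (not from the article): a minimal Python sketch of a paired t-test on pre/post means and a χ2 test on proportions using scipy.stats, mirroring the analyses named above. The values are invented and are not the study's data.

```python
# Minimal sketch of the pre/post analyses described above: a paired t-test
# for a continuous outcome and a chi-square test for proportions.
# All numbers are invented for illustration.
import numpy as np
from scipy import stats

# Continuous outcome (e.g., a dietary-behavior score) before and after the program.
pre = np.array([3.1, 2.8, 3.5, 2.9, 3.0, 3.3, 2.7, 3.2])
post = np.array([3.4, 3.0, 3.6, 3.3, 3.1, 3.6, 3.0, 3.3])
t_stat, p_paired = stats.ttest_rel(post, pre)
print(f"Paired t-test: t = {t_stat:.2f}, p = {p_paired:.3f}")

# Categorical outcome (e.g., smoker vs. non-smoker) before vs. after,
# compared with a chi-square test on the 2x2 table of counts.
table = np.array([[20, 154],   # pre:  smokers, non-smokers
                  [18, 156]])  # post: smokers, non-smokers
chi2, p_chi2, dof, expected = stats.chi2_contingency(table)
print(f"Chi-square test: chi2 = {chi2:.2f}, p = {p_chi2:.3f}")
```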


American Journal of Lifestyle Medicine | 2018

A Brief Mindfulness-Based Intervention for Primary Care Physicians: A Pilot Randomized Controlled Trial

David A. Schroeder; Elizabeth Stephens; Dharmakaya Colgan; Matthew Hunsinger; Dan Rubin; Michael S. Christopher

Primary care physicians experience high rates of burnout, which results in diminished quality of life, poorer quality of care, and workforce attrition. In this randomized controlled trial, our primary aim was to examine the impact of a brief mindfulness-based intervention (MBI) on burnout, stress, mindfulness, compassion, and resilience among physicians. A total of 33 physicians completed the baseline assessment and were randomized to the Mindful Medicine Curriculum (MMC; n = 17) or waitlist control group (n = 16). Participants completed self-report measures at baseline, post-MBI, and 3-month follow-up. We also analyzed satisfaction with doctor communication (DCC) and overall doctor rating (ODR) data from patients of the physicians in our sample. Participants in the MMC group reported significant improvements in stress (P < .001), mindfulness (P = .05), emotional exhaustion (P = .004), and depersonalization (P = .01) whereas in the control group, there were no improvements on these outcomes. Although the MMC had no impact on patient-reported DCC or ODR, among the entire sample at baseline, DCC and ODR were significantly correlated with several physician outcomes, including resilience and personal achievement. Overall, these findings suggest that a brief MBI can have a positive impact on physician well-being and potentially enhance patient care.


Pain | 2014

Adverse event reporting in nonpharmacologic, noninterventional pain clinical trials: ACTTION systematic review

Matthew Hunsinger; Shannon M. Smith; Daniel Rothstein; Andrew McKeown; Melissa Parkhurst; Sharon Hertz; Nathaniel P. Katz; Allison H. Lin; Michael P. McDermott; Bob A. Rappaport; Dennis C. Turk; Robert H. Dworkin

Summary: The results of this systematic review suggest that adverse event reporting in trials examining nonpharmacologic, noninterventional pain treatments needs to improve.

Abstract: Assessment of treatment safety is one of the primary goals of clinical trials. Organizations and working groups have created reporting guidelines for adverse events (AEs). Previous research examining AE reporting for pharmacologic clinical trials of analgesics in major pain journals found many reporting inadequacies, suggesting that analgesic trials are not adhering to existing AE reporting guidelines. The present systematic review documented AE reporting in 3 main pain journals for nonpharmacologic, noninterventional (NP/NI) trials examining pain treatments. To broaden our pool of nonpharmacologic trials, we also included trials examining acupuncture, leech therapy, and noninvasive stimulation techniques (eg, transcutaneous electrical nerve stimulation). We documented AE reporting at 2 levels of specificity using coding manuals based on the Consolidated Standards of Reporting Trials (CONSORT) harms reporting standards and the Analgesic, Anesthetic, and Addiction Clinical Trial Translations, Innovations, Opportunities, and Networks (ACTTION) AE reporting checklist. We identified a number of inadequacies in AE reporting across the 3 journals. For example, using the ACTTION coding manual, we found that less than one-half of the trials reported specific AE assessment methods; approximately one-third of the trials reported withdrawals due to AEs for each study arm; and about one-fourth of the trials reported all specific AEs. We also examined differences in AE reporting across several trial characteristics, finding that AE reporting was generally more detailed in trials with patients versus those using healthy volunteers undergoing experimentally evoked pain. These results suggest that investigators conducting and reporting NP/NI clinical trials are not adequately describing the assessment and occurrence of AEs.


Journal of Clinical Epidemiology | 2016

Deficiencies in reporting of statistical methodology in recent randomized trials of nonpharmacologic pain treatments: ACTTION systematic review

Jordan D. Dworkin; Andrew McKeown; John T. Farrar; Ian Gilron; Matthew Hunsinger; Robert D. Kerns; Michael P. McDermott; Bob A. Rappaport; Dennis C. Turk; Robert H. Dworkin; Jennifer S. Gewandter

Objective: The goal of this study was to assess the quality of reporting of statistical methods in randomized clinical trials (RCTs), including identification of primary analyses, missing data accommodation, and multiplicity adjustment, in studies of nonpharmacologic, noninterventional pain treatments (e.g., physical therapy, cognitive behavioral therapy, acupuncture, and massage). Study design: Systematic review of 101 articles reporting RCTs of pain treatments that were published between January 2006 and June 2013 in the European Journal of Pain, the Journal of Pain, and Pain. Results: Sixty-two percent of studies identified a primary outcome variable, 46% identified a primary analysis, and of those with multiple primary analyses, only 21% adjusted for multiplicity. Slightly over half (55%) of studies reported using at least one method to accommodate missing data. Only four studies reported prespecifying at least one of these study methods. Conclusion: This review identified deficiencies in the reporting of primary analyses and methods to adjust for multiplicity and accommodate missing data in articles disseminating results of nonpharmacologic, noninterventional trials. Investigators should be encouraged to indicate whether their analyses were prespecified and to clearly and completely report statistical methods in clinical trial publications to maximize the interpretability of trial results.


Pain | 2014

Disclosure of authorship contributions in analgesic clinical trials and related publications: ACTTION systematic review and recommendations.

Matthew Hunsinger; Shannon M. Smith; Andrew McKeown; Melissa Parkhurst; Robert A. Gross; Allison H. Lin; Michael P. McDermott; Bob A. Rappaport; Dennis C. Turk; Robert H. Dworkin


Collaboration


Dive into Matthew Hunsinger's collaboration.

Top Co-Authors

Dennis C. Turk, University of Washington
Bob A. Rappaport, Food and Drug Administration
Allison H. Lin, Food and Drug Administration