Cory R. Schaffhausen
University of Minnesota
Publications
Featured research published by Cory R. Schaffhausen.
Journal of Mechanical Design | 2015
Cory R. Schaffhausen; Timothy M. Kowalewski
Understanding user needs and preferences is increasingly recognized as a critical component of early-stage product development. The large-scale needfinding methods in this series of studies attempt to overcome shortcomings of existing methods, particularly in environments with limited user access. The three studies evaluated three specific types of stimuli to help users describe higher quantities of needs. Users were trained on need statements and then asked to enter as many need statements and optional background stories as possible. One or more stimulus types were presented, including prompts (a type of thought exercise), shared needs, and shared context images. Topics were general household areas, including cooking, cleaning, and trip planning. The results show that users can articulate a large number of needs unaided and that users consistently increased need quantity after viewing a stimulus. A final study collected 1735 need statements and 1246 stories from 402 individuals in 24 hours. Shared needs and shared images increased need quantity significantly more than the other stimulus types. User experience (and not expertise) was a significant factor for increasing quantity but may not warrant exclusive use of high-experience users in practice.
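To illustrate the kind of within-person comparison this abstract describes (need counts before vs. after viewing a stimulus), the following is a minimal sketch, not the authors' actual analysis. The file name and columns (user_id, needs_before, needs_after, stimulus_type) are assumptions for illustration.

```python
# Hypothetical sketch: paired comparison of need counts before vs. after a stimulus.
# Data layout is assumed, not taken from the study.
import pandas as pd
from scipy import stats

df = pd.read_csv("need_counts.csv")  # hypothetical per-user counts

# Wilcoxon signed-rank test: did users enter more needs after viewing a stimulus?
stat, p = stats.wilcoxon(df["needs_after"], df["needs_before"], alternative="greater")
print(f"Wilcoxon W={stat:.1f}, p={p:.4f}")

# Median per-user gain by stimulus type (prompts, shared needs, shared images)
gain = (df["needs_after"] - df["needs_before"]).groupby(df["stimulus_type"]).median()
print(gain)
```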
Clinical Transplantation | 2017
Cory R. Schaffhausen; Marilyn J. Bruin; Daryl Chesley; Maureen McBride; Jon J. Snyder; Bertram L. Kasiske; Ajay K. Israni
Transplant patients often seek specific data and statistics to inform medical decision making; however, for many relevant measures, patient‐friendly information is not available. Development of patient‐centered resources should be informed by patient needs. This study used qualitative document research methods to review 678 detailed Scientific Registry of Transplant Recipients (SRTR) entries and summary counts of 55 362 United Network for Organ Sharing (UNOS) entries to provide a better understanding of what was asked and what requests were most common. Incoming call and email logs maintained by SRTR and UNOS were reviewed for 2010‐2015. Patients sought a wide range of information about outcomes, waiting times, program volumes, and willingness to perform transplants in candidates with specific diseases or demographics. Patients and members of their support networks requested explanation of complex information, such as actual‐vs‐expected outcomes, and of general transplant processes, such as registering on the waiting list or becoming a living donor. They sought transplant program data from SRTR and UNOS, but encountered gaps in the information they wanted and occasionally struggled to interpret some data. These findings were used to identify potential gaps in providing program‐specific data and to enhance the SRTR website (www.srtr.org) with more patient‐friendly information.
American Journal of Transplantation | 2018
Andrew Wey; Nicholas Salkowski; Bertram L. Kasiske; Melissa Skeans; Cory R. Schaffhausen; Sally Gustafson; Ajay K. Israni; Jon J. Snyder
To improve accessibility of program‐specific reports to patients, the Scientific Registry of Transplant Recipients released a 5‐tier system for categorizing 1‐year posttransplant program evaluations. Whether this system predicts subsequent posttransplant outcomes at the time patients are waitlisted has been questioned. We investigated the association of tier at listing and the corresponding continuous score used for tier assignment, which ranges from 0 (poor outcomes) to 1 (good outcomes), with eventual 1‐year posttransplant graft survival for candidates listed between July 12, 2011, and June 16, 2014, who underwent transplant before December 31, 2016. One additional tier at listing was associated with better 1‐year posttransplant outcomes in liver (hazard ratio [HR], 0.93; 95% confidence interval [CI], 0.89–0.97) and lung transplant (HR, 0.90; 95% CI, 0.84–0.97) but not kidney (HR, 0.96; 95% CI, 0.92–1.01) or heart transplant (HR, 1.02; 95% CI, 0.93–1.10). In liver and lung transplant, longer time between listing and transplant was associated with stronger protective effects for high‐tier programs. In kidney, liver, and lung transplant, posttransplant evaluations at listing had nonlinear associations with eventual posttransplant outcomes: relatively flat for 5‐tier scores <0.5 and decreasing for scores >0.5. After adjustment for measured recipient and donor risk factors, posttransplant evaluations at listing predicted differences in eventual outcomes in liver and lung transplant, providing useful information to patients.
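The hazard ratios reported here come from time-to-event models of 1-year posttransplant graft survival. The following is a minimal sketch of that style of analysis using the lifelines library; the file and column names (time_to_event, graft_failure, tier_at_listing, and the covariates) are assumptions for illustration, not the SRTR data layout or the authors' exact model.

```python
# Hedged sketch: Cox proportional hazards model relating tier at listing to
# 1-year posttransplant graft survival. Column names are hypothetical.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("listing_cohort.csv")  # hypothetical cohort extract

cph = CoxPHFitter()
cph.fit(
    df[["time_to_event", "graft_failure", "tier_at_listing", "recipient_age", "donor_age"]],
    duration_col="time_to_event",
    event_col="graft_failure",
)

# Hazard ratio per one additional tier; values below 1 indicate better outcomes,
# as with the liver (HR 0.93) and lung (HR 0.90) estimates in the abstract.
hr = np.exp(cph.params_["tier_at_listing"])
print(f"HR per additional tier: {hr:.2f}")
cph.print_summary()
```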
Journal of Mechanical Design | 2015
Cory R. Schaffhausen; Timothy M. Kowalewski
Collecting data on user needs often results in a surfeit of candidate need statements. Additional analysis is necessary to prioritize a small subset for further consideration. Previous analytic methods have been used for small quantities (often fewer than 75 statements). This study presents a simplified quality metric and online interface appropriate for initially screening and prioritizing lists exceeding 500 statements for a single topic or product area. Over 20,000 ratings for 1697 need statements across three common product areas were collected in 6 days. A series of hypotheses was tested: (1) increasing the quantity of participants submitting needs increases the number of high-quality needs as judged by users; (2) increasing the quantity of needs contributed per person increases the number of high-quality needs as judged by users; and (3) increasing levels of self-rated user expertise will not significantly increase the number of high-quality needs per person. The results provided important quantitative evidence of fundamental relationships between the quantity and quality of need statements. Higher quantities of total needs submitted correlated with higher quantities of high-quality need statements, both through increasing group size and through increasing counts per person using novel content-rich methods to help users articulate needs. Based on a multivariate analysis, a user's topic-specific expertise (self-rated) and experience level (self-rated hours per week) were not significantly associated with increasing quantities of high-quality needs.
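The screening step described here aggregates many crowd ratings into a per-statement quality score and then relates per-person quantity to the yield of high-quality statements. A minimal sketch under assumed column names and an assumed quality cutoff, not the paper's actual pipeline:

```python
# Hypothetical sketch: screen need statements by mean crowd rating, then test
# whether contributors who submit more needs also yield more high-quality ones.
import pandas as pd
from scipy import stats

ratings = pd.read_csv("need_ratings.csv")  # assumed columns: statement_id, contributor_id, rating

# Mean rating per statement; flag "high quality" above an assumed cutoff
per_statement = ratings.groupby(["contributor_id", "statement_id"])["rating"].mean().reset_index()
per_statement["high_quality"] = per_statement["rating"] >= 4.0  # cutoff is an assumption

per_person = per_statement.groupby("contributor_id").agg(
    total_needs=("statement_id", "nunique"),
    high_quality_needs=("high_quality", "sum"),
)

# Does submitting more needs per person correlate with more high-quality needs?
r, p = stats.pearsonr(per_person["total_needs"], per_person["high_quality_needs"])
print(f"Pearson r={r:.2f}, p={p:.4f}")
```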
ASME 2015 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference, IDETC/CIE 2015 | 2015
Cory R. Schaffhausen; Timothy M. Kowalewski
Open innovation often yields large quantities of submitted content, yet the need to process such quantities effectively impedes the widespread use of open innovation in practice. This article presents an exploration of needs-based open innovation using state-of-the-art natural language processing (NLP) algorithms to address existing limitations in exploiting large amounts of incoming data. Semantic Textual Similarity (STS) algorithms were specifically developed to compare sentence-length text passages and were used to rate the semantic similarity of pairs of text sentences submitted by users of a custom open innovation platform. A total of 341 unique users submitted 1735 textual problem statements or unmet needs relating to multiple topics: cooking, cleaning, and travel. Equivalence scores generated by a consensus of ten human evaluators for a subset of the needs provided a benchmark for similarity comparison. The semantic analysis allowed for rapid (1 day per topic), automated screening of redundancy to facilitate identification of quality submissions. In addition, a series of permutation analyses provided critical crowd characteristics for the rates of redundant entries as crowd size increases. The results identify top modern STS algorithms for needfinding. These algorithms predicted similarity with Pearson correlations of up to 0.85 when trained using need-based training data and up to 0.83 when trained using generalized data. Rates of duplication varied with crowd size and may be approximately linear or asymptotic depending on the degree of similarity used as a cutoff. Semantic algorithm performance has shown rapid improvements in recent years. Potential applications to screen duplicates and also to screen highly unique sentences for rapid exploration of a space are discussed.
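The core operation described here is scoring semantic equivalence of sentence pairs and validating the scores against human consensus judgments. The following is a minimal sketch using the sentence-transformers library as a modern stand-in for the specific STS systems evaluated in the paper; the file, column names, and duplicate cutoff are assumptions for illustration.

```python
# Hedged sketch: score semantic similarity between pairs of need statements and
# compare against a human consensus benchmark. sentence-transformers is a
# stand-in, not the STS algorithms used in the study.
import pandas as pd
from scipy.stats import pearsonr
from sentence_transformers import SentenceTransformer, util

pairs = pd.read_csv("need_pairs.csv")  # assumed columns: need_a, need_b, human_score

model = SentenceTransformer("all-MiniLM-L6-v2")
emb_a = model.encode(pairs["need_a"].tolist(), convert_to_tensor=True)
emb_b = model.encode(pairs["need_b"].tolist(), convert_to_tensor=True)

# Cosine similarity for each pair (diagonal of the pairwise similarity matrix)
sims = util.cos_sim(emb_a, emb_b).diagonal().cpu().numpy()

# Agreement with the human consensus, analogous to the 0.83-0.85 correlations reported
r, _ = pearsonr(sims, pairs["human_score"])
print(f"Pearson correlation with human ratings: {r:.2f}")

# Flag likely duplicates above an assumed similarity cutoff
pairs["likely_duplicate"] = sims >= 0.8
```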
American Journal of Transplantation | 2018
Bertram L. Kasiske; Andrew Wey; Nicholas Salkowski; David Zaun; Cory R. Schaffhausen; Ajay K. Israni; Jon J. Snyder
The Scientific Registry of Transplant Recipients (SRTR) is mandated by the National Organ Transplant Act, the Final Rule, and the SRTR contract with the Health Resources and Services Administration to report program‐specific information on the performance of transplant programs. Following a consensus conference in 2012, SRTR developed a new version of the public website to improve public reporting of often complex metrics, including changing from a 3‐tier to a 5‐tier summary metric for first‐year posttransplant survival. After its release in December 2016, the new presentation was moved to a “beta” website to allow collection of additional feedback. SRTR made further improvements and released a new beta website in May 2018. In response to feedback, SRTR added 5‐tier summaries for standardized waitlist mortality and deceased donor transplant rate ratios, along with an indicator of which metric most affects survival after listing. Presentation of results was made more understandable with input from patients and families from surveys and focus groups. Room for improvement remains, including continuing to make the data more useful to patients, deciding what additional data elements should be collected to improve risk adjustment, and developing new metrics that better reflect outcomes most relevant to patients.
Clinical Transplantation | 2018
Cory R. Schaffhausen; Marilyn J. Bruin; Sauman Chu; Andrew Wey; Jon J. Snyder; Bertram L. Kasiske; Ajay K. Israni
The Scientific Registry of Transplant Recipients (SRTR) provides federally mandated program‐specific transplant data to the public. Currently, there is little understanding of how patients prioritize different program measures when selecting a program for transplantation. This study recruited 479 transplant advocacy group members from the mailing lists and social media of the National Kidney Foundation (NKF), Transplant Families (TF), and Transplant Recipients International Organization (TRIO). Survey participants identified how many different programs would be reasonable to consider, viewed four measures recently displayed on SRTR public search result pages and six measures not recently displayed, and rated the importance of each on a 5‐point scale. Four hundred two participants completed the survey (TF = 26; TRIO = 34; NKF = 342). Seventy‐eight percent indicated that considering more than one program would be reasonable. Linear mixed models were adjusted for organization, education, and gender. Likert scores for the pretransplant (transplant rate) and transplant volume measures were similar and were very or extremely important to over 80% of participants. The posttransplant measure (survival after transplant) was rated 0.52 points higher (confidence interval, 0.41‐0.64). Results indicate that many patient advocacy group members find a choice between two or more programs reasonable and value multiple measures when assessing programs where they may want to undergo transplantation.
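The importance ratings were compared across measures with linear mixed models that account for repeated ratings by the same participant. A minimal sketch with statsmodels; the column names and coding are assumptions for illustration, not the study's exact model specification.

```python
# Hedged sketch: linear mixed model of Likert importance ratings with a random
# intercept per participant and fixed adjustments for organization, education,
# and gender. Column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

ratings = pd.read_csv("survey_ratings.csv")
# assumed columns: participant_id, measure, likert, organization, education, gender

model = smf.mixedlm(
    "likert ~ C(measure) + C(organization) + C(education) + C(gender)",
    data=ratings,
    groups=ratings["participant_id"],
)
result = model.fit()
# The measure coefficients give between-measure differences, such as the
# 0.52-point gap reported for the posttransplant survival measure.
print(result.summary())
```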
American Journal of Transplantation | 2018
Andrew Wey; Sally Gustafson; Nicholas Salkowski; Bertram L. Kasiske; Melissa Skeans; Cory R. Schaffhausen; Ajay K. Israni; Jon J. Snyder
The Scientific Registry of Transplant Recipients (SRTR) is responsible for understandable reporting of program metrics, including transplant rate, waitlist mortality, and posttransplant outcomes. SRTR developed five‐tier systems for each metric to improve accessibility for the public. We investigated the associations of the five‐tier assignments at listing with all‐cause candidate mortality after listing, for candidates listed July 12, 2011‐June 16, 2014. Transplant rate evaluations with one additional tier were associated with lower mortality after listing in kidney (hazard ratio [HR], 0.95; 95% CI, 0.93‐0.97), liver (HR, 0.90; 95% CI, 0.87‐0.92), and heart (HR, 0.96; 95% CI, 0.92‐1.00) transplantation. For lung transplant patients, mortality after listing was highest at programs with above‐ and below‐average transplant rates and lowest at programs with average transplant rates, suggesting that aggressive acceptance behavior may not always provide a survival benefit. Waitlist mortality evaluations with one additional tier were associated with lower mortality after listing in kidney transplantation (HR, 0.96; 95% CI, 0.94‐0.99), and posttransplant graft survival evaluations with one additional tier were associated with lower mortality after listing in lung transplantation (HR, 0.94; 95% CI, 0.90‐0.98). Transplant rate typically had the strongest association with mortality after listing, but the strength of associations differed by organ.
Journal of Medical Devices-transactions of The Asme | 2016
Cory R. Schaffhausen; Timothy M. Kowalewski; Robert M. Sweet
Successfully developing new medical devices, including minimally invasive technologies, is heavily dependent on addressing an appropriate clinical need: “Get [the clinical need] right and you have a chance, get it wrong and all further effort is likely to be wasted” [1, p. 3]. While formalized methods, such as ethnographic research, can be effective when applied to medical technology development, less formal and potentially less effective methods are reported as commonly used [2,3] due to constraints on user accessibility and other factors [2,4]. These informal methods include processes such as “informal expert review,” where input on clinical needs is primarily generated through the involvement of a small number of experts [3]. However, studies of unmet needs in non-medical applications have demonstrated that users with varying levels of expertise are equally likely to submit a need statement rated as high quality and that increasing group size consistently leads to a larger number of high-quality need statements [5]. Similarly, a need statement submitted first may be equally likely to be rated high quality as one submitted after a prolonged period of time [6]. Combined, these results suggest that high-quality unmet needs can be generated quickly by relying on large crowds, and using an inclusive crowd with diverse expertise levels can be beneficial by increasing the size of the user population. Previous methods for generating need statements from non-medical user groups were adapted and streamlined for use at a conference on minimally invasive surgery (MIS). These new methods were used in a preliminary feasibility study to determine whether crowds of clinician conference attendees can be a source of unmet clinical needs in MIS technology.
Journal of Medical Devices-transactions of The Asme | 2014
Cory R. Schaffhausen; Timothy M. Kowalewski
Public and private institutions invest over