
Publication


Featured research published by Jia-Wen Guo.


Academic Medicine | 2015

The Development and Validation of the Interprofessional Attitudes Scale: Assessing the Interprofessional Attitudes of Students in the Health Professions.

Jeffrey Norris; Joan G. Carpenter; Jacqueline Eaton; Jia-Wen Guo; Madeline Lassche; Marjorie A. Pett; Donald K. Blumenthal

Purpose No validated tools assess all four competency domains described in the 2011 report Core Competencies for Interprofessional Collaborative Practice (IPEC Report). The purpose of this study was to develop and validate a tool based on the IPEC Report core competency domains that assesses the interprofessional attitudes of students in the health professions. Method In 2012, an interprofessional team of students and two of the authors developed and administered a survey to students from four colleges and schools at the University of Utah Health Sciences Center (Health, Medicine, Nursing, and Pharmacy). The authors randomly split the responses with complete data into two independent subsets: one for exploratory factor analysis (EFA), the other for confirmatory factor analysis (CFA). They performed these analyses to validate the tool, eliminate redundant questions, and identify subscales. Their analyses focused on aligning tool subscales with the IPEC Report core competencies and demonstrating good construct validity and internal consistency reliability. Results Of 1,549 students invited, 701 (45.3%) responded. The EFA produced a 27-item scale, with five subscales: teamwork, roles, and responsibilities; patient-centeredness; interprofessional biases; diversity and ethics; and community-centeredness (Cronbach alpha coefficients: 0.62 to 0.92). The CFA indicated that the content of the five subscales was consistent with the EFA model. Conclusions The Interprofessional Attitudes Scale (IPAS) is a novel tool that, compared with previous assessment instruments, better reflects current thinking about interprofessional competencies. IPAS should prove useful to health sciences institutions committed to training students to work collaboratively in interprofessional teams.
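
The split-sample validation workflow this abstract describes (random split, exploratory factor analysis on one half, confirmatory analysis on the other) can be sketched briefly in code. The example below is not from the study; the file name, column layout, and the use of scikit-learn and the factor_analyzer package are assumptions for illustration only. The held-out half would then be fit with a dedicated SEM tool for the confirmatory step.

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from factor_analyzer import FactorAnalyzer

responses = pd.read_csv("ipas_responses.csv")   # hypothetical file of item-level responses
complete = responses.dropna()                   # analyses used responses with complete data only

# Randomly split the complete responses into two independent subsets:
# one for exploratory factor analysis (EFA), one for confirmatory factor analysis (CFA).
efa_half, cfa_half = train_test_split(complete, test_size=0.5, random_state=42)

# EFA with an oblique (oblimin) rotation, retaining five factors.
efa = FactorAnalyzer(n_factors=5, rotation="oblimin")
efa.fit(efa_half)

loadings = pd.DataFrame(efa.loadings_, index=efa_half.columns)
print(loadings.round(2))   # inspect loadings to assign items to subscales
```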


The Journal of Pain | 2010

Initial Psychometric Properties of the Pain Care Quality Survey (PainCQ).

Susan L. Beck; Gail L. Towsley; Marjorie A. Pett; Ellen M. Lavoie Smith; Jeannine M. Brant; Jia-Wen Guo

This study examined the psychometric properties of the Pain Care Quality (PainCQ) survey, a new instrument to measure the quality of nursing and interdisciplinary care related to pain management. Hospitalized medical/surgical oncology patients with pain from 3 states completed the 44-item version of the PainCQ survey following completion of a nursing shift. Interdisciplinary items were evaluated over the entire hospital stay; nursing care was evaluated during the previous shift. The sample included 109 patients ranging in age from 20 to 84 (mean = 53.09). The sample was 58.7% female and 88% non-Hispanic white. Principal Axis Factoring with an oblimin rotation was used because the factors were correlated. Two scales resulted. The PainCQ-Interdisciplinary scale included 11 items representing 2 constructs and explaining 47.1% of shared item variance: partnership with the health care team (k = 6 items; α = .85) and comprehensive interdisciplinary pain care (k = 5 items; α = .76). The PainCQ-Nursing scale measured 3 constructs and explained 60.8% of shared item variance: being treated right (k = 15 items; α = .95), comprehensive nursing pain care (k = 3 items; α = .77), and efficacy of pain management (k = 4 items; α = .87). Results supported the internal consistency reliability and structural validity of the PainCQ survey with 33 items. PERSPECTIVE This article presents the psychometric properties of a new tool to measure interdisciplinary and nursing care quality related to pain management from the patient's perspective. This tool can be used for research and as a clinical performance measure to monitor and improve quality of care and patient outcomes.
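
The internal-consistency coefficients (α) reported for each subscale follow the standard Cronbach's alpha formula: α = (k / (k − 1)) × (1 − Σ item variances / variance of the total score). A minimal helper illustrating that computation is shown below; the DataFrame and column names are hypothetical, not the published item set.

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: (k / (k - 1)) * (1 - sum of item variances / variance of total score)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Hypothetical usage for a 6-item "partnership with the health care team" subscale:
# partnership_items = survey_data[["pc1", "pc2", "pc3", "pc4", "pc5", "pc6"]]
# print(round(cronbach_alpha(partnership_items), 2))   # the abstract reports α = .85
```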


Cancer Nursing | 2014

Reporting quality for abstracts of randomized controlled trials in cancer nursing research.

Jia-Wen Guo; Sarah Iribarren

Background: Abstracts are often used to screen journal articles. Little is known about the reporting quality for abstracts of randomized controlled trials (RCTs) in cancer nursing. Objective: This study evaluated the quality of abstracts reporting published RCTs in cancer nursing and examined factors contributing to better reporting quality. Methods: This is a literature review study. Searches were conducted in PubMed and the Cumulative Index to Nursing and Allied Health Literature for English-language RCTs involving cancer nursing. Quality of abstract reporting was assessed and scored based on the Consolidated Standards of Reporting Trials statement for abstracts (CONSORT for Abstracts). Descriptive statistics and univariate and multivariate analyses were used to identify predictors of better abstract quality. Results: A total of 227 eligible articles published between 1984 and 2010 in 68 journals were identified. On average, 46% of the items in the CONSORT for Abstracts were reported. More than 80% of the studies addressed only 6 of the 17 items from the CONSORT for Abstracts. Items concerning randomization, blinding, and intent-to-treat analysis were reported by fewer than 30% of the studies. Publication year, word count, impact factor, number of institutes, corresponding author's country, and funding accounted for 31.6% to 33.2% of the variance in abstract quality in a multiple regression model. Conclusions: The reporting quality of cancer nursing RCT abstracts was suboptimal. Implications for Practice: Strategies to improve abstract reporting quality are needed. To ensure that essential RCT information can be reported in the abstract, journal editors may need to reassess word count limits.
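
The multivariate step described above, regressing an abstract's CONSORT-based quality score on article-level characteristics, could look roughly like the sketch below. The file, variable names, and use of statsmodels are assumptions; the categorical predictor (corresponding author's country) is omitted for brevity.

```python
import pandas as pd
import statsmodels.api as sm

articles = pd.read_csv("abstract_quality.csv")   # hypothetical: one row per RCT abstract

# A subset of the predictors named in the abstract (hypothetical column names).
X = articles[["publication_year", "word_count", "impact_factor",
              "n_institutes", "funded"]]
X = sm.add_constant(X)                           # add an intercept term
y = articles["consort_abstract_score"]           # CONSORT for Abstracts-based quality score

model = sm.OLS(y, X).fit()
print(model.summary())                           # coefficients, p-values, R-squared
```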


Western Journal of Nursing Research | 2015

Evaluation of a BCMA’s Electronic Medication Administration Record

Nancy Staggers; Sarah Iribarren; Jia-Wen Guo; Charlene R. Weir

Barcode medication administration (BCMA) systems can reduce medication errors, but sociotechnical issues are quite common. Although electronic medication administration record (eMAR) screens are crucial to nurses' work, few usability evaluations of them are available. The purpose of this research was to identify current usability problems in the Veterans Administration's (VA) eMAR/BCMA system and explore how these might affect nurses' situation awareness (SA). Three expert evaluators examined 10 tasks/elements using heuristic evaluation techniques and explored potential impacts from an SA perspective. The results yielded 99 usability problems categorized into 440 heuristic violations, with the largest volume in the category of Match With the Real World. Fifteen usability issues were rated as catastrophic, with the Administer/Chart medications task having the most. Situation awareness was affected at all levels, especially at Level 2, Comprehension. These usability problems point to important areas for improvement because they have the potential to affect nurses' SA, “at a glance” information, nurse productivity, and patient safety.
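
Findings from a heuristic evaluation like this are usually logged as one record per violation and then tallied by heuristic category, task, and severity. The sketch below shows that bookkeeping under assumed column names and a small invented sample; none of the rows are data from the study.

```python
import pandas as pd

# Hypothetical findings log: one row per heuristic violation identified by the evaluators.
findings = pd.DataFrame({
    "heuristic": ["Match With the Real World", "Visibility of System Status",
                  "Match With the Real World", "Error Prevention"],
    "task":      ["Administer/Chart medications", "Review orders",
                  "Administer/Chart medications", "Administer/Chart medications"],
    "severity":  [4, 2, 3, 4],   # 4 = catastrophic on a Nielsen-style 0-4 scale
})

violations_per_heuristic = findings.groupby("heuristic").size().sort_values(ascending=False)
catastrophic_by_task = findings[findings["severity"] == 4].groupby("task").size()

print(violations_per_heuristic)   # which heuristics are violated most often
print(catastrophic_by_task)       # where the catastrophic issues concentrate
```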


Nursing Research | 2014

Quality of reporting randomized controlled trials in cancer nursing research.

Jia-Wen Guo; Katherine A. Sward; Susan L. Beck; Nancy Staggers

Background: Results of randomized controlled trials (RCTs) provide high-level evidence for evidence-based practice (EBP). The quality of RCTs has a substantial influence on providing reliable knowledge for EBP. Little is known about the quality of RCT reporting in cancer nursing. Objective: The aim of this study was to assess the quality of reporting in published cancer nursing RCTs from 1984 to 2010. Methods: A total of 227 RCTs in cancer nursing published in English-language journals and indexed in PubMed or the Cumulative Index to Nursing and Allied Health Literature were reviewed using the Jadad scale, key methodologic index (KMI), and the Consolidated Standards of Reporting Trials (CONSORT) checklist to assess the quality of reporting methodological aspects of research and the overall quality of reporting RCTs. Results: Adherence to reporting metrics was relatively low, based on the Jadad score (M = 1.94 out of 5, SD = 1.01), KMI scores (M = 0.84 out of 3, SD = .87), and adherence to CONSORT checklist items (M = 16.92 out of 37, SD = 4.03). Only 11 of 37 items in the CONSORT checklist were reported in 80% or more of the studies reviewed. The quality of reporting showed some improvement over time. Discussion: Adherence to reporting metrics for cancer nursing RCTs was suboptimal, and further efforts are needed to improve both methodology reporting and overall reporting. Journals are encouraged to adopt the CONSORT checklist to influence the quality of RCT reports.
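
Checklist-based scoring of this kind reduces to a binary item matrix: one row per trial report, one column per CONSORT item, coded 1 if reported. The sketch below shows how per-article scores and per-item reporting rates would be computed from such a matrix; the file and coding layout are assumptions, not the study's materials.

```python
import pandas as pd

# Hypothetical 227 x 37 matrix: one row per RCT report, one column per CONSORT item,
# coded 1 if the item was reported and 0 if it was not.
checklist = pd.read_csv("consort_coding.csv")

per_article_score = checklist.sum(axis=1)   # items reported per article (max 37)
per_item_rate = checklist.mean(axis=0)      # share of articles reporting each item

print(f"Mean CONSORT score: {per_article_score.mean():.2f} of {checklist.shape[1]}")
print("Items reported by at least 80% of studies:")
print(per_item_rate[per_item_rate >= 0.80].sort_values(ascending=False))
```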


Health Services Research | 2013

Confirmatory Factor Analysis of the Pain Care Quality Surveys (PainCQ©)

Marjorie A. Pett; Susan L. Beck; Jia-Wen Guo; Gail L. Towsley; Jeannine M. Brant; Ellen M. Lavoie Smith; Gary W. Donaldson

OBJECTIVE To examine the reliability and validity of the Pain Care Quality (PainCQ©) Surveys and to reduce the number of items in the battery. DATA SOURCES/STUDY SETTING Patient-reported data were collected prospectively from 337 hospitalized adult patients with pain on medical/surgical oncology units in four hospitals in three states. STUDY DESIGN This methodological study used a cross-sectional survey design. Each consenting patient completed two PainCQ© Surveys, the Brief Pain Inventory-Short Form, and demographic questions. Clinical data were extracted from the medical record. DATA COLLECTION/EXTRACTION METHODS All data were double entered into a Microsoft Access database, cleaned, and then extracted into SPSS, AMOS, and Mplus for analysis. PRINCIPAL FINDINGS Confirmatory factor analysis using structural equation modeling supported the initial factor structure. Modification indices guided decisions that resulted in a superior, parsimonious model for the PainCQ-Interdisciplinary Care Survey (six items, two subscales) and the PainCQ-Nursing Care Survey (14 items, three subscales). Cronbach's alpha coefficients all exceeded .80. CONCLUSIONS Cumulative evidence supports the reliability and validity of the companion PainCQ© Surveys in hospitalized patients with pain in the oncology setting. The tools may be relevant in both clinical research and quality improvement. Future research is recommended in other populations and settings and with more diverse groups.
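
A confirmatory factor analysis like the one described here specifies a measurement model and checks its fit against the item covariances. The study used SPSS, AMOS, and Mplus; the sketch below substitutes the open-source semopy package purely for illustration, and the factor and item names are hypothetical stand-ins rather than the published PainCQ items.

```python
import pandas as pd
import semopy

# Lavaan-style measurement model with hypothetical factor and item names.
model_desc = """
Partnership =~ p1 + p2 + p3
Interdisciplinary =~ i1 + i2 + i3
"""

data = pd.read_csv("paincq_items.csv")   # hypothetical item-level responses
model = semopy.Model(model_desc)
model.fit(data)

print(model.inspect())            # factor loadings and error variances
print(semopy.calc_stats(model))   # fit indices such as CFI and RMSEA
```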


Nursing Research | 2013

Linking Clinical Research Data to Population Databases

Linda S. Edelman; Jia-Wen Guo; Alison Fraser; Susan L. Beck

Background: Most clinical nursing research is limited to funded study periods. However, if clinical research data can be linked to population databases, researchers can study relationships between study measures and poststudy long-term outcomes. Objectives: The objective was to describe the feasibility of linking research participant data to data from population databases in order to study long-term poststudy outcomes. As an exemplar, participants from a completed oncology nursing research trial were linked to outcomes data in two state population databases. Methods: Participant data from a previously completed symptom management study were linked to the Utah Population Database and the Utah Emergency Department Database. The final data set contained demographic, cancer diagnosis and treatment, and baseline data from the oncology study linked to poststudy long-term outcomes from the population databases. Results: One hundred twenty-nine of 144 (89.6%) study participants were linked to their individual data in the population databases. Of those, 73% were linked to hospitalization records, 60% were linked to emergency department visit records, and 28% were identified as having died. Discussion: Study participant data were successfully linked to population database data to describe poststudy emergency department visits, hospitalizations, and mortality. The results suggest that data linkage success can be improved if researchers include data linkage and related human subjects protection plans in the initial study design.
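
At its core, the linkage step joins participant records to statewide outcome records on a shared identifier. The sketch below shows that join in pandas under assumed file, key, and column names; real linkages of this kind are typically mediated by the database custodian or an honest broker rather than performed as a direct merge by the study team.

```python
import pandas as pd

participants = pd.read_csv("study_participants.csv")   # hypothetical trial data
ed_visits = pd.read_csv("state_ed_visits.csv")         # hypothetical statewide ED records

# Left-join outcome records onto participants using a shared linkage identifier.
linked = participants.merge(ed_visits, on="linkage_id", how="left", indicator=True)

# A participant is linked if any of their rows matched a record in both sources.
matched = linked.groupby("linkage_id")["_merge"].apply(lambda m: (m == "both").any())
print(f"Linked to at least one ED record: {matched.mean():.1%}")
```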


CIN: Computers, Informatics, Nursing | 2015

Leaders in Nursing Informatics Education and Research: The University of Utah Celebrates 25 Years.

Mollie R. Cummins; Katherine A. Sward; Jia-Wen Guo

The University of Utah celebrates 25 years of specialty education in nursing informatics (NI) this year and remains one of the most highly ranked NI programs in the United States. The program has made major contributions to the field of NI by educating NI leaders and incubating cutting-edge NI research. Of particular note, the University of Utah College of Nursing pioneered interprofessional informatics education in partnership with the School of Medicine's Department of Biomedical Informatics. As we celebrate this remarkable milestone, we pay tribute to the pioneers and innovators of NI and look to the future of NI education and research.


Simulation in Healthcare: Journal of the Society for Simulation in Healthcare | 2013

Board 391 - Research Abstract Development and Construct Validation of the Interprofessional Attitudes Scale (IPAS) for Assessing the Impact of Interprofessional Simulations (Submission #1233)

Jeffrey Norris; Joan G. Carpenter; Jacqueline Eaton; Jia-Wen Guo; Madeline Lassche; Marjorie A. Pett; Donald K. Blumenthal

Introduction/Background Development of interprofessional competencies is essential in the training of practice-ready health professionals. Many interprofessional educational (IPE) programs use simulations to teach collaborative team skills. These IPE simulations should facilitate students meeting the recently published IPEC Core Competencies.1 However, determining whether this occurs is difficult since no validated tools currently exist to assess all four IPEC Core Competency domains. The goals of this study were to develop, validate, and test a novel scale to assess interprofessional attitudes of students in the health professions. This scale was designed to incorporate the core competency domains defined in the IPEC Report and to be used in assessing the impact of IPE simulations on health professional student attitudes towards interprofessionalism. Methods An online survey containing 42 questions based in part on the Readiness for Interprofessional Learning Scale (RIPLS)2-4 and new questions based on the IPEC Core Competencies was developed and administered in 2012 to a diverse group of 1549 health professional students from the University of Utah Health Sciences (UUHS). The UUHS is an academic health center composed of four schools and colleges (Health, Medicine, Nursing, and Pharmacy), including nursing, medical, pharmacy, medical laboratory science, nutrition, occupational therapy, public health, and physician assistant students. Analyses were performed to validate the assessment tool, eliminate redundant questions, and cluster questions into subscales. Results The responses from the 42-item online survey tool were evaluated to assess construct validity and internal consistency reliability. A survey response rate of 45% (n=701) was obtained. After removing incomplete survey responses, a dataset consisting of 678 responses was randomly split into two datasets, which were independently analyzed using exploratory factor analysis (EFA, n=342) and confirmatory factor analysis (CFA, n=336). The result of the EFA was a 27-item scale that we named the Interprofessional Attitudes Scale (IPAS). The EFA identified five subscales with Cronbach's alpha coefficients ranging from 0.62 to 0.92 (Table 1). The CFA indicated the content of the five subscales was consistent with the EFA model. The collection of validated survey questions is being incorporated into surveys administered to students before and after IPE simulations to evaluate the impact of the simulations on interprofessional attitudes. These IPE simulations are designed to train students to work collaboratively in interprofessional teams and include students from all four schools and colleges at the UUHS at various levels of training. The results of these ongoing assessments using the IPAS will be described in more detail during the presentation. Conclusion We have created and validated an assessment tool, the IPAS, which contains items that reflect current interprofessional competencies. The IPAS is being used to assess the impact of IPE simulations on students' attitudes towards working collaboratively in interprofessional teams. References 1. Interprofessional Education Collaborative Expert Panel. Core competencies for interprofessional collaborative practice: Report of an expert panel. Washington, DC; 2011. http://www.aacn.nche.edu/education-resources/IPECReport.pdf. 2. Parsell G, Bligh J. The development of a questionnaire to assess the readiness of health care students for interprofessional learning (RIPLS). Med Educ. 1999; 33:95-100.
3. Reid R, Bruce D, Allstaff K, McLernan D. Validating the readiness for interprofessional learning scale (RIPLS) in the postgraduate context. Med Educ. 2006; 40: 415-422. 4. Williams B, Brown T, Boyle M. Construct validation of the readiness for interprofessional learning scale: A Rasch and factor analysis. J Interprof Care. 2012; 26: 326-332. Disclosures: CDC Experience in 2009-2010: a year fellowship in epidemiology at the CDC, funds provided to the CDC Foundation by Pfizer; Elsevier Publishing Co.


International Conference on Human Centered Design, held as part of HCI International | 2009

Clinical System Design Considerations for Critical Handoffs

Nancy Staggers; Jia-Wen Guo; Jacquelyn W. Blaz; Bonnie Mowinski Jennings

Change of shift report (CoSR) is a nurse-to-nurse communication event (handoff) that can result in missed or incomplete information, time inefficiencies, and patient errors. Although technology is touted as well suited to this process, researchers have not yet evaluated how CoSR might be supported through computerization. This paper summarizes past research on this critical transition, describes the results of a qualitative study of shift report content on medical and surgical units in the U.S., and then outlines requirements for computerized support of the process. Three potential CoSR designs are provided and discussed: a patient summary screen, a personally tailored design for nurses, and a problem-oriented design. Benefits and disadvantages of each are proposed.
