Vanessa Emille Carvalho de Sousa
University of Illinois at Chicago
Publications
Featured research published by Vanessa Emille Carvalho de Sousa.
Journal of the American Medical Informatics Association | 2016
Karen Dunn Lopez; Sheila M. Gephart; Rebecca Raszewski; Vanessa Emille Carvalho de Sousa; Lauren E Shehorn; Joanna Abraham
Objective: To report on the state of the science of clinical decision support (CDS) for hospital bedside nurses. Materials and Methods: We performed an integrative review of qualitative and quantitative peer-reviewed original research studies using a structured search of PubMed, Embase, Cumulative Index to Nursing and Allied Health Literature (CINAHL), Scopus, Web of Science, and IEEE Xplore (Institute of Electrical and Electronics Engineers Xplore Digital Library). We included articles that reported on CDS targeting bedside nurses and excluded articles in stages based on rules applied to titles, abstracts, and full articles. We extracted research design and methods, CDS purpose, electronic health record integration, usability, and process and patient outcomes. Results: Our search yielded 3157 articles. After removing duplicates and applying exclusion rules, 28 articles met the inclusion criteria. The majority of studies were single-site, descriptive or qualitative (43%) or quasi-experimental (36%). There was only 1 randomized controlled trial. The purpose of most CDS was to support diagnostic decision-making (36%), guideline adherence (32%), medication management (29%), and situational awareness (25%). All the studies that included process outcomes (7) and usability outcomes (4) and also had analytic procedures to detect changes in outcomes demonstrated statistically significant improvements. Three of 4 studies that included patient outcomes and also had analytic procedures to detect change showed statistically significant improvements. No negative effects of CDS were found on process, usability, or patient outcomes. Discussion and Conclusions: Clinical decision support systems targeting bedside nurses have positive effects on outcomes and hold promise for improving care quality; however, this research lags behind studies of CDS targeting medical decision-making in both volume and level of evidence.
Journal of Nursing Care Quality | 2016
Karen Dunn Lopez; Diana J. Wilkie; Yingwei Yao; Vanessa Emille Carvalho de Sousa; Alessandro Febretti; Janet Stifter; Andrew E. Johnson; Gail M. Keenan
We present findings of a comparative study of numeracy and graph literacy in a representative group of 60 practicing nurses. This article focuses on a fundamental concern related to the effectiveness of numeric information displayed in various features of the electronic health record during clinical workflow. Our findings suggest the need to consider numeracy and graph literacy when presenting numerical information, as well as the potential for tailoring numeric display types to an individual's cognitive strengths.
CIN: Computers, Informatics, Nursing | 2015
Vanessa Emille Carvalho de Sousa; Karen Dunn Lopez; Alessandro Febretti; Janet Stifter; Yingwei Yao; Andrew E. Johnson; Diana J. Wilkie; Gail M. Keenan
Our long-term goal was to ensure nurse clinical decision support works as intended before full deployment in clinical practice. As part of a broader effort, this pilot project explored factors influencing acceptance/nonacceptance of eight clinical decision support suggestions displayed in an electronic health record–based nursing plan of care software prototype. A diverse sample of 21 nurses participated in this high-fidelity clinical simulation experience and completed a questionnaire to assess reasons for accepting/not accepting the clinical decision support suggestions. Of 168 total suggestions displayed during the experiment (eight for each of the 21 nurses), 123 (73.2%) were accepted, and 45 (26.8%) were not accepted. The mode number of acceptances by nurses was seven of eight, with only two of 21 nurses accepting all. The main reason for clinical decision support acceptance was the nurse’s belief that the suggestions were good for the patient (100%), with other features providing secondary reinforcement. Reasons for nonacceptance were less clear, with fewer than half of the subjects indicating low confidence in the evidence. This study provides preliminary evidence that high-quality simulation and targeted questionnaires about specific clinical decision support selections offer a cost-effective means for testing before full deployment in clinical practice.
Applied Clinical Informatics | 2017
Vanessa Emille Carvalho de Sousa; K. Dunn Lopez
Background: The use of e-health can lead to several positive outcomes. However, the potential for e-health to improve healthcare is partially dependent on its ease of use. In order to determine the usability of any technology, rigorously developed and appropriate measures must be chosen. Objectives: To identify psychometrically tested questionnaires that measure the usability of e-health tools, and to appraise their generalizability, attribute coverage, and quality. Methods: We conducted a systematic review of studies that measured the usability of e-health tools using four databases (Scopus, PubMed, CINAHL, and HAPI). Non-primary research, studies that did not report measures, studies with children or people with cognitive limitations, and studies about assistive devices or medical equipment were systematically excluded. Two authors independently extracted information including questionnaire name, number of questions, scoring method, item generation, and psychometrics, using a data extraction tool with pre-established categories and a quality appraisal scoring table. Results: Using a broad search strategy, 5,558 potentially relevant papers were identified. After removing duplicates and applying exclusion criteria, 35 articles remained that used 15 unique questionnaires. Of the 15 questionnaires, only 5 were general enough to be used across studies. Usability attributes covered by the questionnaires were learnability (15), efficiency (12), and satisfaction (11); memorability (1) was the least covered attribute. Quality appraisal showed that face/content (14) and construct (7) validity were the most frequent types of validity assessed. All questionnaires reported reliability measurement. Some questionnaires scored low in the quality appraisal for the following reasons: limited validity testing (7), small sample size (3), and no reporting of user-centeredness (9) or of feasibility estimates of time, effort, and expense (7). Conclusions: Existing questionnaires provide a foundation for research on e-health usability. However, future research is needed to broaden the coverage of the usability attributes and the psychometric properties of the available questionnaires.
Western Journal of Nursing Research | 2017
Vanessa Emille Carvalho de Sousa; Jeffrey Matson; Karen Dunn Lopez
Questionnaire development involves rigorous testing to ensure reliability and validity. Due to time and cost constraints of developing new questionnaires, researchers often adapt existing questionnaires to better fit the purpose of their study. However, the effect of such adaptations is unclear. We conducted cognitive interviews as a method to evaluate the understanding of original and adapted questionnaire items to be applied in a future study. The findings revealed that all subjects (a) comprehended the original and adapted items differently, (b) changed their scores after comparing the original to the adapted items, and (c) were unanimous in stating that the adapted items were easier to understand. Cognitive interviewing allowed us to assess the interpretation of adapted items in a useful and efficient manner before use in data collection.
International Journal of Medical Informatics | 2018
Andrew D. Boyd; Karen Dunn Lopez; Camillo Lugaresi; Tamara Gonçalves Rezende Macieira; Vanessa Emille Carvalho de Sousa; Sabita Acharya; Abhinaya Balasubramanian; Khawllah Roussi; Gail M. Keenan; Yves A. Lussier; Jianrong “John” Li; Michel Burton; Barbara Di Eugenio
Background: Physicians and nurses have worked together for generations; however, their language and training are vastly different, and this difference makes it hard to compare and contrast their work and their joint impact on patient outcomes. At the same time, the EHR includes only the physician perspective, via the physician-authored discharge summary, and not the nurse documentation. Prior research in this area has focused on collaboration and the usage of similar terminology. Objective: The objective of the study is to gain insight into interprofessional care by developing a computational metric to identify similarities, related concepts, and differences in physician and nurse work. Methods: Fifty-eight physician discharge summaries and the corresponding nurse plans of care were transformed into Unified Medical Language System (UMLS) Concept Unique Identifiers (CUIs). MedLEE, a natural language processing (NLP) program, extracted "physician terms" from the free-text physician summaries. The nursing plans of care were constructed using the HANDS© nursing documentation software, which utilizes the structured terminologies NANDA-I (nursing diagnoses), NOC (outcomes), and NIC (interventions) to create "nursing terms". The physician and nurse terms were then overlaid and compared for relatedness using the UMLS network. Our overarching goal is to provide insight into care by applying graph algorithms to the UMLS network, revealing patient-level relationships between the care provided by each profession. Results: We found that only 26% of patients had synonyms (identical UMLS CUIs) between the two professions' documentation. On average, physicians' discharge summaries contained 27 terms and nurses' documentation 18. Traversing the UMLS network, we found an average of 4 related terms (distance less than 2) between the professions, leaving most concepts unrelated between nurse and physician care. Conclusion: Our hypothesis that physicians' and nurses' practice domains are markedly different is supported by the preliminary quantitative evidence we found. Leveraging the UMLS network and graph traversal algorithms allows us to compare and contrast nursing and physician care for a single patient, enabling a more complete picture of patient care. We can differentiate each profession's contributions to patient outcomes and the related and divergent concepts documented by each profession.
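To make the relatedness metric concrete, the sketch below shows one way the pairwise term comparison described above could be computed, assuming the UMLS relations have already been loaded into an undirected networkx graph whose nodes are CUIs. The graph construction, the placeholder CUI values, and the function name are illustrative assumptions, not the authors' implementation.

import networkx as nx

def compare_term_sets(umls_graph, physician_cuis, nurse_cuis, max_distance=2):
    # Classify each physician/nurse CUI pair: identical CUIs are synonyms,
    # pairs whose shortest path in the relation graph is shorter than
    # max_distance are related, and everything else is unrelated.
    results = {"synonym": [], "related": [], "unrelated": []}
    for p in physician_cuis:
        for n in nurse_cuis:
            if p == n:
                results["synonym"].append((p, n))
            elif (umls_graph.has_node(p) and umls_graph.has_node(n)
                  and nx.has_path(umls_graph, p, n)
                  and nx.shortest_path_length(umls_graph, p, n) < max_distance):
                results["related"].append((p, n))
            else:
                results["unrelated"].append((p, n))
    return results

# Toy example with placeholder CUIs; a real analysis would load the UMLS relation data.
g = nx.Graph()
g.add_edges_from([("C0000001", "C0000002"), ("C0000002", "C0000003")])
print(compare_term_sets(g, ["C0000001", "C0000003"], ["C0000002"]))

With this classification in hand, the per-patient counts reported in the abstract (synonyms, related terms, and unrelated terms) follow directly from the sizes of the three lists.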
Applied Nursing Research | 2016
Lívia Maia Pascoal; Jéssica Pereira Alves de Carvalho; Vanessa Emille Carvalho de Sousa; Francisco Dimitre Rodrigo Pereira Santos; Pedro Martins Lima Neto; Simony Fabíola Lopes Nunes; Marcos Venícios de Oliveira Lopes
Aim: The aim of this study is to analyze the accuracy of the defining characteristics of ineffective airway clearance (IAC) in patients after thoracic and upper abdominal surgery. Background: Although numerous studies have described the most prevalent respiratory NANDA-I diagnoses, only a few have investigated the precision of nursing assessments. Methods: A cross-sectional study was conducted with 192 patients in a surgical clinic. Accuracy measures were obtained by the latent class analysis method. Results: IAC was present in 46.73% of the sample. The defining characteristics with the best predictive capacity were changes in respiratory rate and changes in respiratory rhythm. However, other defining characteristics also had high specificity, such as restlessness, cyanosis, excessive sputum, wide-eyed, orthopnea, adventitious breath sounds, ineffective cough, and difficulty vocalizing. Conclusion: These results can contribute to the improvement of nursing assessments by providing information about the key clinical indicators of IAC.
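As a point of reference for the accuracy measures named above, the short sketch below computes the sensitivity and specificity of a single defining characteristic under the simplifying assumption that each patient's true IAC status is known. The study itself estimates these quantities with latent class analysis, which does not require such a gold standard, so this illustrates the metrics only; the field and function names are hypothetical.

def accuracy_measures(records, characteristic):
    # records: list of dicts such as {"has_iac": True, "ineffective_cough": True, ...}
    # Sensitivity: proportion of patients with IAC who exhibit the characteristic.
    # Specificity: proportion of patients without IAC who do not exhibit it.
    tp = sum(1 for r in records if r["has_iac"] and r[characteristic])
    fn = sum(1 for r in records if r["has_iac"] and not r[characteristic])
    tn = sum(1 for r in records if not r["has_iac"] and not r[characteristic])
    fp = sum(1 for r in records if not r["has_iac"] and r[characteristic])
    sensitivity = tp / (tp + fn) if (tp + fn) else float("nan")
    specificity = tn / (tn + fp) if (tn + fp) else float("nan")
    return sensitivity, specificity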
International Journal of Nursing Knowledge | 2018
Janet Stifter; Vanessa Emille Carvalho de Sousa; Alessandro Febretti; Karen Dunn Lopez; Andrew E. Johnson; Yingwei Yao; Gail M. Keenan; Diana J. Wilkie
Purpose: To determine the acceptability, usefulness, and ease of use of four nursing clinical decision support interface prototypes. Methods: In a simulated hospital environment, 60 registered nurses (48 female; mean age = 33.7 ± 10.8; mean years of experience = 8.1 ± 9.7) participated in a randomized study with four study groups. Measures included acceptability, usefulness, and ease of use scales. Findings: Mean scores were high for acceptability, usefulness, and ease of use in all four groups. Inexperienced participants (<1 year) reported higher perceived ease of use (p = .05) and perceived usefulness (p = .01) than those with 1 year or more of experience. Conclusions: Participants completed the protocol and reported that all four interfaces, including the control (HANDS), were acceptable, easy to use, and useful. Implications for Nursing Knowledge: Further study is warranted before clinical implementation within the electronic health record.
International Journal of Nursing Knowledge | 2018
Vanessa Emille Carvalho de Sousa; Marcos Venícios de Oliveira Lopes; Gail M. Keenan; Karen Dunn Lopez
Purpose: To design and test educational software to improve nursing students' diagnostic reasoning through NANDA-I-based clinical scenarios. Methods: A mixed methods approach was used that included content validation by a panel of 13 experts and prototype testing by a sample of 56 students. Findings: Experts' suggestions included writing adjustments, new response options, and replacement of clinical information in the scenarios. The percentages of students' correct answers were 65.7%, 62.2%, and 60.5% for related factors, defining characteristics, and nursing diagnoses, respectively. Conclusion: Full development of this software shows strong potential for enhancing students' diagnostic reasoning. Implications for Nursing Practice: New graduates may be able to apply diagnostic reasoning more rapidly by exercising their diagnostic skills within this software.
International Journal of Nursing Knowledge | 2018
Gail M. Keenan; Karen Dunn Lopez; Vanessa Emille Carvalho de Sousa; Janet Stifter; Tamara Gonçalves Rezende Macieira; Andrew D. Boyd; Yingwei Yao; T. Heather Herdman; Sue Moorhead; Anna M. McDaniel; Diana J. Wilkie
Purpose: To critically evaluate the 2014 American Academy of Nursing (AAN) call-to-action plan for generating interoperable nursing data. Data Sources: Healthcare literature. Data Synthesis: The AAN's plan will not generate the nursing data needed to participate in big data science initiatives in the short term, because Logical Observation Identifiers Names and Codes (LOINC) and Systematized Nomenclature of Medicine Clinical Terms (SNOMED CT) are not yet ripe for generating interoperable data. Well-tested, viable alternatives exist. Conclusions: The authors present recommendations for revisions to the AAN's plan and an evidence-based alternative for generating interoperable nursing data in the near term. These revisions can ultimately lead to the proposed terminology goals of the AAN's plan in the long term.