Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Nicholas J. Napoli is active.

Publication


Featured research published by Nicholas J. Napoli.


IEEE EMBS International Conference on Biomedical and Health Informatics | 2017

Addressing bias from non-random missing attributes in health data

Nicholas J. Napoli; Madeline E. Kotoriy; William Barnhardt; Jeffrey S. Young; Laura E. Barnes

This paper aims to improve health outcomes research and data management practices. Typically, health care records are very large and cumbersome to manage, and the quality of the data is often overlooked because the volume is thought to be large enough to overcome issues arising from missing data. However, simply removing observations with missing data is problematic because the distribution of missing information is non-random; thus, the sample used for analysis becomes biased. We propose a method for evaluating and addressing bias in the data cleaning process. Specifically, we identify where bias exists within the data and address the bias by sub-sampling or discarding data. We present a case study analyzing data from a Level 1 trauma center to establish how bias in health registries exists and how this bias can have downstream implications for evaluating hospital performance. Our method utilizes a two-tailed z-test to compare subgroups in the data set, which demonstrates how missing data in these subgroups can lead to bias. We demonstrate how to localize the bias in particular subgroups and provide corrective actions to handle the bias. We also exhibit how failure to account for bias can distort performance, illustrating the importance of the proposed method.
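
The subgroup comparison described above can be illustrated with a short sketch of a standard two-tailed, two-proportion z-test. The counts, subgroup definition, and helper name below are illustrative assumptions, not the paper's data or exact procedure.

```python
import math

def two_proportion_z_test(x1, n1, x2, n2):
    """Two-tailed z-test comparing the missing-data rate of one subgroup
    against the rest of the registry (pooled-proportion form)."""
    p1, p2 = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)                       # pooled proportion
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))  # two-tailed
    return z, p_value

# Illustrative numbers only: 420 of 1,800 transferred patients are missing an
# attribute, versus 510 of 6,400 directly admitted patients.
z, p = two_proportion_z_test(420, 1800, 510, 6400)
print(f"z = {z:.2f}, p = {p:.4f}")   # a small p suggests missingness is non-random
```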


IISE Transactions on Healthcare Systems Engineering | 2017

Relative mortality analysis: A new tool to evaluate clinical performance in trauma centers

Nicholas J. Napoli; William Barnhardt; Madeline E. Kotoriy; Jeffrey S. Young; Laura E. Barnes

Improving trauma performance relies on outcome measures to target groups of patients with suboptimal outcomes. However, this is difficult when examining trauma data sets dominated by patients with a high probability of survival (POS). The W-Score, a standard metric for evaluating trauma performance, disproportionately weights these populations and inaccurately represents effectiveness across the entire patient spectrum. We introduce the Relative Mortality Performance Trend (RMPT) and the Relative Mortality Metric (RMM), which provide valuable insight into trauma center performance at all levels of acuity and establish a more reliable metric for evaluating performance. We validate this method using data from a Level 1 trauma center over a 20-year period, where 89.39% of the patient population has a POS > 0.90. The RMPT groups patient populations by acuity levels, which allows us to identify changes in performance and isolate areas for improvement. We significantly outperformed the anticipated mortality with 95% confidence intervals across all POS ranges, except for the (0.799–0.901) and (0.967–0.970) ranges, which are targeted for improvement. The most significant improvements occurred for patients with POS < 0.569 between the 1994–1999 and 2003–2008 cohorts. Monte Carlo simulations demonstrated that the RMM is consistently a more accurate metric than the W-Score when utilizing low sample sizes.
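
The W-Score referenced above is conventionally defined as excess survivors per 100 patients relative to the TRISS-predicted number. The sketch below computes that conventional W-Score and a simple acuity-binned observed-versus-expected mortality table on synthetic data; the binning is a simplified stand-in for the idea of the RMPT, not the paper's RMM formulation, and all numbers are assumptions.

```python
import numpy as np

def w_score(pos, survived):
    """Conventional TRISS W-Score: excess survivors per 100 patients
    (observed survivors minus expected survivors, scaled by N/100)."""
    pos = np.asarray(pos, dtype=float)
    survived = np.asarray(survived, dtype=bool)
    return (survived.sum() - pos.sum()) / (len(pos) / 100.0)

def binned_mortality(pos, survived, edges):
    """Group patients into POS (acuity) bins and compare observed vs.
    expected mortality in each bin -- a simplified stand-in for an RMPT-style view."""
    pos = np.asarray(pos, dtype=float)
    died = ~np.asarray(survived, dtype=bool)
    rows = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (pos >= lo) & (pos < hi)
        if mask.sum() == 0:
            continue
        rows.append((lo, hi, int(mask.sum()),
                     died[mask].mean(),            # observed mortality
                     1.0 - pos[mask].mean()))      # expected mortality
    return rows

# Synthetic example only (not study data)
rng = np.random.default_rng(0)
pos = rng.beta(8, 1, size=5000)                    # population skewed toward high POS
survived = rng.random(5000) < pos
print(f"W-Score: {w_score(pos, survived):.2f}")
for lo, hi, n, obs, exp in binned_mortality(pos, survived, [0.0, 0.25, 0.5, 0.75, 0.9, 1.0]):
    print(f"POS [{lo:.2f}, {hi:.2f}): n={n:4d}  observed={obs:.3f}  expected={exp:.3f}")
```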


International Conference on Big Data and Smart Computing | 2016

A MapReduce framework to improve template matching uncertainty

Nicholas J. Napoli; Kevin Leach; Laura E. Barnes; Westley Weimer

Normalized cross-correlation template matching is used as a detection method in many scientific domains. To be practical, template matching must scale to large datasets while handling ambiguity, uncertainty, and noisy data. We propose a novel approach based on Dempster-Shafer (DS) Theory and MapReduce parallelism. DS Theory addresses conflicts between data sources, noisy data, and uncertainty, but is traditionally serial. However, we use the commutative and associative nature of Dempster's Combination Rule to perform a parallel computation of DS masses and a logarithmic hierarchical fusion of these DS masses. This parallelism is particularly important because additional data sources allow DS-based template matching to maintain accuracy and refine uncertainty in the face of noisy data. We validate the parallelism, accuracy, and uncertainty of our implementation as a function of the size and noise of the input dataset, finding that it scales linearly and can retain accuracy and improve uncertainty in the face of noise for large datasets.
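
Dempster's Combination Rule and the hierarchical (logarithmic) fusion it enables can be sketched in a few lines. This is a minimal serial illustration, not the paper's MapReduce implementation; the toy frame of discernment and mass values are assumptions, and how DS masses are built from correlation scores is not reproduced.

```python
from functools import reduce

def dempster_combine(m1, m2):
    """Dempster's rule of combination for two mass functions over the same
    frame of discernment; focal elements are frozensets."""
    combined, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb
    if conflict >= 1.0:
        raise ValueError("Total conflict: sources cannot be combined")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

def hierarchical_fuse(masses):
    """Pairwise (tree) fusion: because the rule is commutative and associative,
    the result matches a serial fold but needs only O(log n) sequential steps."""
    while len(masses) > 1:
        nxt = [dempster_combine(masses[i], masses[i + 1])
               for i in range(0, len(masses) - 1, 2)]
        if len(masses) % 2:            # carry the odd source forward
            nxt.append(masses[-1])
        masses = nxt
    return masses[0]

# Toy frame {match, no_match}; each source is one piece of template-matching evidence
M, N = frozenset({"match"}), frozenset({"no_match"})
U = M | N                              # full frame ("unsure")
sources = [{M: 0.6, U: 0.4}, {M: 0.5, N: 0.2, U: 0.3}, {N: 0.1, U: 0.9}]
print(hierarchical_fuse(sources))
print(reduce(dempster_combine, sources))   # serial fold gives the same masses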


Human Movement Science | 2015

An EMG comparative analysis of quadriceps during isoinertial strength training using nonlinear scaled wavelets

Nicholas J. Napoli; Anthony R. Mixco; Jorge Bohorquez; Joseph F. Signorile

High-speed resistance training is used to increase power; however, momentum can reduce the effectiveness of high-speed (HS) training when using weight-stack (WS) machines. This study used a non-linear scaled wavelet analysis to assess differences between pneumatic (P) and WS machines during seven HS or controlled-speed (CS) repetitions. Vastus medialis (VM), vastus lateralis (VL), and rectus femoris (RF) EMG data were collected during leg extension exercises performed by five regular weight-trainers (mean age ± SD, 23.2 ± 2.9 years). Data were analyzed using continuous wavelet analysis to assess the temporal Intensity distribution across eight frequency bands. Significant differences occurred due to speed for all muscles (p < .0001). P produced higher Intensity than WS for all muscles during HS (p < .0001), and for VM and RF during CS (p < .001). The CON phase produced higher Intensity than ECC for the vasti muscles during CS (p < .0003), and for VM and RF during HS (p < .0001). Intensity increased across repetitions, plateauing earlier for the vasti than for RF during CS. Regardless of the machine, Intensity levels peaked within the 25-53 Hz and 46-82 Hz bands (2nd and 3rd wavelets). The results indicate that when the objective is increasing power through isoinertial training, P machines at HS appear to be the most effective alternative.
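
Band-limited wavelet intensity of the kind described here can be sketched with an off-the-shelf Morlet continuous wavelet transform via PyWavelets. The paper's non-linear wavelet scaling, EMG preprocessing, and recording parameters are not reproduced; the surrogate signal, sampling rate, and band edges below are placeholders.

```python
import numpy as np
import pywt

fs = 1000.0                                        # assumed EMG sampling rate (Hz)
t = np.arange(0, 2.0, 1.0 / fs)
emg = np.random.default_rng(1).normal(size=t.size)  # surrogate EMG signal

# Morlet CWT; pywt.cwt also returns the centre frequency (Hz) of each scale
scales = np.arange(2, 64)
coef, freqs = pywt.cwt(emg, scales, "morl", sampling_period=1.0 / fs)
intensity = np.abs(coef) ** 2                      # time-frequency intensity

# Mean intensity inside two illustrative bands (roughly the 2nd/3rd bands cited)
for lo, hi in [(25, 53), (46, 82)]:
    band = (freqs >= lo) & (freqs <= hi)
    print(f"{lo}-{hi} Hz mean intensity: {intensity[band].mean():.3f}")
```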


Systems and Information Engineering Design Symposium | 2014

Analysis of communication patterns in critical care environments

Lan-Vy Ngo; Katherine Walker; Anna Holowinsky; Grace Knox; Robert Shimberg; Nicholas J. Napoli; Jacob R. Gillen; Jeffrey S. Young; Laura E. Barnes

Efficient and effective communication processes are critical for patient health and safety. Communication processes for patient admission within the UVa Surgical and Trauma Intensive Care Unit (SICU) vary depending on situational factors. Trauma personnel in the SICU are also involved in patient admission to the Operating Room (OR). In order to better understand communication patterns and identify inefficiencies and variation, an analysis of the patient admission processes from the Emergency Department (ED) to the OR and from the ED to the SICU was performed. To identify potential inefficiencies within patient admission, key sub-processes with high variability were identified. Process models were constructed to depict the current practices at each step and the associated communication patterns between stakeholders. Interviews, surveys, and focus groups were also conducted with trauma personnel to gauge opinions on the patient admission processes and to collect estimated waiting times at each step in the processes. Based on findings from the interviews, process completion times were compiled and analyzed using the Hurwicz Alpha Criterion. Results of the study indicate that communication processes with consults and the bed center, both located outside of the SICU, are variable and inconsistent. This variability in communication is a result of a lack of situational awareness and procedures external to the SICU.
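
The Hurwicz Alpha Criterion applied to the interviewed time estimates blends the most optimistic and most pessimistic completion times with a weight alpha. A minimal sketch, using illustrative step names and time bounds rather than the study's data:

```python
def hurwicz(times, alpha=0.5):
    """Hurwicz alpha criterion for an interval of estimated completion times:
    a weighted blend of the optimistic (min) and pessimistic (max) estimates."""
    return alpha * min(times) + (1.0 - alpha) * max(times)

# Illustrative step estimates (minutes) from clinician interviews -- not study data
steps = {
    "consult response":  [10, 45],
    "bed assignment":    [15, 90],
    "patient transport": [5, 25],
}
for step, bounds in steps.items():
    print(f"{step:18s} Hurwicz(0.5) = {hurwicz(bounds):5.1f} min")
```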


Proceedings of the International Symposium on Human Factors and Ergonomics in Health Care | 2014

Analysis and Evaluation of Clinical Communication in the Surgical Intensive Care Unit

Nicholas J. Napoli; William F. Barnhardt; J. A. Young; Laura E. Barnes

For decades, emergency departments have become increasingly unable to meet escalating patient demands. In order to allow emergency departments to operate beyond their designed capacity, major efforts have gone into improving their workflow, efficiency, and quality of care. Many studies have shown that inefficiencies and errors in a clinical setting are linked to communication. As a means to improve the quality of care for trauma patients, this study examines the present communication systems for different admission pathways to the University of Virginia's Surgical Intensive Care Unit (SICU) and recommends a new design for critical operations. The evaluation and analysis of this new design will identify critical factors, including potential communication modalities, implementation details, potential key clinician roles, and the processes in which the new system will be useful. Our objective is to design a real-time communication infrastructure for operations within a critical care unit through an informed design process.


Prehospital Emergency Care | 2018

Relative Mortality Analysis Of The “Golden Hour”: A Comprehensive Acuity Stratification Approach To Address Disagreement In Current Literature

Philip H. Schroeder; Nicholas J. Napoli; William F. Barnhardt; Laura E. Barnes; Jeffrey S. Young

Objective: This study sought to address the disagreement in the literature regarding the “golden hour” in trauma by using the Relative Mortality Analysis to overcome previous studies’ limitations in accounting for acuity when evaluating the impact of prehospital time on mortality. Methods: The previous studies that failed to support the “golden hour” suffered from limitations in their efforts to account for the confounding effects of patient acuity on the relationship between prehospital time and mortality in their trauma populations. The Relative Mortality Analysis was designed to directly address these limitations using a novel acuity stratification approach based on patients’ probability of survival (PoS), a comprehensive triage metric calculated using Trauma and Injury Severity Score methodology. For this analysis, the population selection and analysis methods of these previous studies were compared to the Relative Mortality Analysis on how they capture the relationship between prehospital time and mortality in the University of Virginia (UVA) Trauma Center population. Results: The methods of the previous studies that failed to support the “golden hour” also failed to do so when applied to the UVA Trauma Center population. However, when applied to the same population, the Relative Mortality Analysis identified a subgroup, 9.9% (with a PoS of 23%–91%) of the 5,063-patient population, with significantly lower mortality when transported to the hospital within 1 hour, supporting the “golden hour.” Conclusion: These results suggest that previous studies failed to support the “golden hour” not due to a lack of patients significantly impacted by prehospital time within their trauma populations, but instead due to limitations in their efforts to account for patient acuity. As a result, these studies inappropriately rejected the “golden hour,” leading to the current disagreement in the literature regarding the relationship between prehospital time and trauma patient mortality. The Relative Mortality Analysis was shown to overcome the limitations of these studies and demonstrated that the “golden hour” was significant for patients who were not low acuity (PoS > 91%) or severely high acuity (PoS < 23%).
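
The acuity-stratified comparison can be sketched as binning patients by PoS and contrasting mortality for transports within versus beyond 60 minutes. The bin edges echo the reported 23%/91% cut points, but the data, function name, and analysis details below are assumptions for illustration, not the paper's Relative Mortality Analysis.

```python
import numpy as np

def golden_hour_by_acuity(pos, minutes, died,
                          bins=((0.0, 0.23), (0.23, 0.91), (0.91, 1.0))):
    """Stratify patients by probability of survival (PoS) and compare mortality
    for prehospital time <= 60 min vs. > 60 min within each stratum."""
    pos, minutes, died = map(np.asarray, (pos, minutes, died))
    fast = minutes <= 60
    out = []
    for lo, hi in bins:
        stratum = (pos >= lo) & (pos < hi)
        for label, mask in (("<=60 min", stratum & fast), (">60 min", stratum & ~fast)):
            if mask.any():
                out.append((f"PoS {lo:.2f}-{hi:.2f}", label, int(mask.sum()),
                            float(died[mask].mean())))
    return out

# Synthetic example only
rng = np.random.default_rng(2)
pos = rng.beta(6, 1, 5063)
minutes = rng.gamma(2.0, 30.0, 5063)
died = rng.random(5063) > pos * np.where(minutes <= 60, 1.0, 0.98)
for stratum, label, n, mort in golden_hour_by_acuity(pos, minutes, died):
    print(f"{stratum}  {label:>8s}  n={n:4d}  mortality={mort:.3f}")
```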


Computers in Biology and Medicine | 2018

Uncertainty in Heart Rate Complexity Metrics caused by R-peak Perturbations

Nicholas J. Napoli; Matthew W. Demas; Sanjana Mendu; Chad L. Stephens; Kellie D. Kennedy; Angela R. Harrivel; Randall E. Bailey; Laura E. Barnes

Heart rate complexity (HRC) is a proven metric for gaining insight into human stress and physiological deterioration. To calculate HRC, the detection of the exact instance of when the heart beats, the R-peak, is necessary. Electrocardiogram (ECG) signals can often be corrupted by environmental noise (e.g., from electromagnetic interference, movement artifacts), which can potentially alter the HRC measurement, producing erroneous inputs which feed into decision support models. Current literature has only investigated how HRC is affected by noise when R-peak detection errors occur (false positives and false negatives). However, the numerical methods used to calculate HRC are also sensitive to the specific location of the fiducial point of the R-peak. This raises many questions regarding how this fiducial point is altered by noise, the resulting impact on the measured HRC, and how we can account for noisy HRC measures as inputs into our decision models. This work uses Monte Carlo simulations to systematically add white and pink noise at different permutations of signal-to-noise ratios (SNRs), time segments, sampling rates, and HRC measurements to characterize the influence of noise on the HRC measure by altering the fiducial point of the R-peak. Using the generated information from these simulations provides improved decision processes for system design which address key concerns such as permutation entropy being a more precise, reliable, less biased, and more sensitive measurement for HRC than sample and approximate entropy.
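
Two ingredients of such simulations, permutation entropy and noise injection at a target SNR, can be sketched as follows. This is a generic Bandt-Pompe permutation entropy applied to a surrogate RR-interval series, not the paper's full HRC pipeline (no R-peak detection, pink noise, or sampling-rate sweep); all parameters are assumptions.

```python
import itertools
import math
import numpy as np

def permutation_entropy(x, order=3, delay=1):
    """Normalized permutation entropy of a 1-D series (Bandt-Pompe ordinal patterns)."""
    x = np.asarray(x, dtype=float)
    counts = dict.fromkeys(itertools.permutations(range(order)), 0)
    for i in range(len(x) - (order - 1) * delay):
        window = x[i:i + order * delay:delay]
        counts[tuple(np.argsort(window))] += 1      # ordinal pattern of this window
    p = np.array([c for c in counts.values() if c > 0], dtype=float)
    p /= p.sum()
    return float(-(p * np.log2(p)).sum() / math.log2(math.factorial(order)))

def add_white_noise(signal, snr_db, rng):
    """Add white Gaussian noise at a target SNR (dB)."""
    p_signal = np.mean(signal ** 2)
    p_noise = p_signal / (10 ** (snr_db / 10.0))
    return signal + rng.normal(scale=np.sqrt(p_noise), size=signal.shape)

rng = np.random.default_rng(3)
rr = 0.8 + 0.05 * np.sin(np.linspace(0, 20, 300)) + 0.02 * rng.normal(size=300)  # surrogate RR series
for snr in (30, 10, 0):
    noisy = add_white_noise(rr, snr, rng)
    print(f"SNR {snr:2d} dB -> permutation entropy = {permutation_entropy(noisy):.3f}")
```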


Systems and Information Engineering Design Symposium | 2015

A design framework for evaluating changes in clinical communication systems

Arvind Harinder; Luke Yesbeck; Jonathan Farag; Kevin Mosquera; Nicholas J. Napoli; William F. Barnhardt; Jeffrey S. Young; Jose Valdez; Laura E. Barnes

Poor communication in critical care settings has the potential to adversely impact patient outcomes and clinical team performance. Hence, improvements to clinical communication systems are continually sought. However, evaluating these modifications and assessing the risk of degraded performance are complex issues, which are typically addressed qualitatively. These approaches typically examine individual perceptions and can fail to capture the actual performance of the modified processes in the communication system. In order to avoid these issues and effectively analyze the changes in a given system, a Monte Carlo simulation methodology is proposed using data extracted from Electronic Medical Records (EMR) compiled by the Huron Consulting Group to precisely measure modified system performance. We couple a risk analysis, using control charts, with a temporal distribution analysis to determine whether to halt the use of a new system due to failures or whether more observations are needed to make an adequate assessment. Specifically, we use EMR data to apply this methodology and show the tradeoff between sample size and the likelihood that the data converge to the current state of communication. This methodology will enable researchers to expedite the assessment of the impact of a technology, which is critical in a complex system such as a hospital.
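
A rough sketch of the resampling-plus-control-chart idea: draw simulated runs from an observed completion-time distribution and compute individuals-chart style limits from moving ranges. The surrogate EMR data, the specific limit rule, and the run sizes below are assumptions for illustration only, not the paper's methodology.

```python
import numpy as np

def control_limits(samples, sigma=3.0):
    """Individuals-chart style limits from moving ranges (a common Shewhart rule):
    centre +/- sigma * (mean moving range) / 1.128."""
    samples = np.asarray(samples, dtype=float)
    mr = np.abs(np.diff(samples)).mean()
    centre = samples.mean()
    half_width = sigma * mr / 1.128
    return centre - half_width, centre, centre + half_width

def monte_carlo_runs(baseline_times, n_runs=1000, n_obs=30, rng=None):
    """Resample a completion-time distribution to see how run averages behave
    as the number of observations per run grows."""
    rng = rng or np.random.default_rng(4)
    draws = rng.choice(baseline_times, size=(n_runs, n_obs), replace=True)
    return draws.mean(axis=1)

baseline = np.random.default_rng(5).lognormal(mean=3.0, sigma=0.5, size=500)  # surrogate times (min)
for n_obs in (10, 30, 100):
    means = monte_carlo_runs(baseline, n_obs=n_obs)
    lcl, centre, ucl = control_limits(means)
    print(f"n_obs={n_obs:3d}: centre={centre:6.1f}  LCL={lcl:6.1f}  UCL={ucl:6.1f}")
```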


AIAA Information Systems-AIAA Infotech @ Aerospace | 2017

Prediction of Cognitive States During Flight Simulation Using Multimodal Psychophysiological Sensing

Angela R. Harrivel; Chad L. Stephens; Robert J. Milletich; Christina M. Heinich; Nicholas J. Napoli; Nijo Abraham; Lawrence J. Prinzel; Mark A. Motter; Alan T. Pope

Collaboration


Dive into Nicholas J. Napoli's collaborations.

Top Co-Authors

William Barnhardt
University of Virginia Health System

Alan T. Pope
Langley Research Center