Jeffrey Fried
Santa Barbara Cottage Hospital
Publications
Featured research published by Jeffrey Fried.
workshop on applications of computer vision | 2016
Carlos Torres; Victor Fragoso; Scott D. Hammond; Jeffrey Fried; B. S. Manjunath
Manual analysis of body poses of bed-ridden patients requires staff to continuously track and record patient poses. Two limitations in the dissemination of pose-related therapies are scarce human resources and unreliable automated systems. This work addresses these issues by introducing a new method and a new system for robust automated classification of sleep poses in an Intensive Care Unit (ICU) environment. The new method, coupled-constrained Least-Squares (cc-LS), uses multimodal and multiview (MM) data and finds the set of modality trust values that minimizes the difference between expected and estimated labels. The new system, Eye-CU, is an affordable multi-sensor modular system for unobtrusive data collection and analysis in healthcare. Experimental results indicate that the performance of cc-LS matches the performance of existing methods in ideal scenarios. This method outperforms the latest techniques in challenging scenarios by 13% for those with poor illumination and by 70% for those with both poor illumination and occlusions. Results also show that a reduced Eye-CU configuration can classify poses without pressure information with only a slight drop in its performance.
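For intuition, a minimal sketch of the trust-weighting step follows, assuming per-modality classifiers that already output class probabilities; scipy's non-negative least squares stands in for the paper's coupled-constrained solver, and all function and variable names are illustrative, not taken from the paper.

```python
# Minimal sketch of the cc-LS idea under assumptions: per-modality classifiers
# produce class-probability scores, and non-negative least squares stands in
# for the paper's coupled-constrained solver. All names are illustrative.
import numpy as np
from scipy.optimize import nnls

def fit_modality_trusts(per_modality_scores, true_labels):
    """per_modality_scores: list of (n_samples, n_classes) probability arrays,
    one per modality (e.g., RGB, depth, pressure); true_labels: (n_samples,)."""
    n = true_labels.shape[0]
    # Column m holds modality m's score for each sample's true class.
    A = np.column_stack([s[np.arange(n), true_labels] for s in per_modality_scores])
    b = np.ones(n)                      # the expected score for the true class
    w, _ = nnls(A, b)                   # non-negative least-squares fit
    return w / w.sum() if w.sum() > 0 else np.full(A.shape[1], 1.0 / A.shape[1])

def fuse_poses(per_modality_scores, trusts):
    """Trust-weighted sum of modality scores, then argmax over pose classes."""
    combined = sum(t * s for t, s in zip(trusts, per_modality_scores))
    return combined.argmax(axis=1)
```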
international conference on computer vision systems | 2015
Carlos Torres; Scott D. Hammond; Jeffrey Fried; B. S. Manjunath
Clinical evidence suggests that sleep pose analysis can shed light on patient recovery rates and responses to therapies. In this work, we introduce a formulation that combines features from multimodal data to classify human sleep poses in an Intensive Care Unit (ICU) environment. As opposed to current methods that combine data from multiple sensors to generate a single feature, we extract features independently. We then use these features to estimate candidate labels and infer a pose. Our method uses modality trusts (each modality's classification ability) to handle variable scene conditions and to deal with sensor malfunctions. Specifically, we exploit shape and appearance features extracted from three sensor modalities: RGB, depth, and pressure. Classification results indicate that our method achieves 100% accuracy, outperforming previous techniques by 6% in bright and clear (ideal) scenes, by 70% in poorly illuminated scenes, and by 90% in occluded ones.
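As a rough illustration of extracting features independently per modality rather than fusing raw sensor data first, the sketch below computes one descriptor per sensor; HOG is only a generic stand-in for the paper's shape and appearance features, and the function and parameter choices are hypothetical.

```python
# Illustrative sketch of independent per-modality feature extraction (as
# opposed to fusing raw sensor data into a single feature). HOG is a generic
# shape/appearance descriptor used here only as a stand-in.
from skimage.color import rgb2gray
from skimage.feature import hog

def extract_per_modality_features(rgb_img, depth_map, pressure_map):
    """Each argument is one sensor's 2-D array (HxWx3 for RGB); the returned
    features stay separate so each modality can be classified on its own."""
    hog_params = dict(orientations=9, pixels_per_cell=(16, 16),
                      cells_per_block=(2, 2), feature_vector=True)
    return {
        "rgb":      hog(rgb2gray(rgb_img), **hog_params),
        "depth":    hog(depth_map, **hog_params),
        "pressure": hog(pressure_map, **hog_params),
    }
```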
IEEE Transactions on Multimedia | 2018
Carlos Torres; Jeffrey Fried; Kenneth Rose; B. S. Manjunath
Clinical observations indicate that during critical care at the hospital, a patient's sleep positioning and motion have a significant effect on recovery rate. Unfortunately, there is no formal medical protocol to record, quantify, and analyze patient motion. Very few clinical studies use manual analysis of sleep poses and motion recordings to support the medical benefits of patient positioning and motion monitoring. Manual processes do not scale, are prone to human error, and put strain on an already taxed healthcare workforce. This study introduces multimodal, multiview motion analysis and summarization for healthcare (MASH). MASH is an autonomous system that addresses these issues by monitoring healthcare environments and enabling the recording and analysis of patient sleep-pose patterns. MASH uses three RGB-D cameras to monitor patients in a medical intensive care unit (ICU) room. The proposed algorithms estimate pose direction at different temporal resolutions and use keyframes to efficiently represent pose transition dynamics. MASH combines deep features computed from the data with a modified hidden Markov model (HMM) to flexibly model pose duration and summarize patient motion. Performance is evaluated in ideal (BC: bright and clear/occlusion-free) and natural (DO: dark and occluded) scenarios, at two motion resolutions, and in two environments: a mock-up and a medical ICU. The use of deep features is evaluated and their performance compared with engineered features. Experimental results using deep features in DO scenes increase performance from 86.7% to 93.6%, while matching the classification performance of engineered features in BC scenes. The performance of MASH is compared with HMM and C3D. The overall over-time tracing and summarization error rate across all methods increased when transitioning from the mock-up to the medical ICU data. The proposed keyframe estimation helps achieve a 78% transition classification accuracy.
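A short sketch of the keyframe idea follows, under simplifying assumptions: distance-threshold keyframe selection and run-length counts rather than MASH's modified HMM. All names and the threshold value are hypothetical.

```python
# Simplified stand-in for MASH's keyframe estimation and duration modeling:
# keep a frame when its feature embedding drifts past a threshold from the
# last keyframe, then run-length encode the pose labels at those keyframes.
import numpy as np

def select_keyframes(frame_features, threshold=0.5):
    """frame_features: (n_frames, d) per-frame embeddings (e.g., CNN features);
    returns indices of the kept keyframes."""
    keep = [0]
    for i in range(1, len(frame_features)):
        if np.linalg.norm(frame_features[i] - frame_features[keep[-1]]) > threshold:
            keep.append(i)
    return keep

def summarize_transitions(pose_labels, keyframes):
    """Collapse per-keyframe pose labels into (pose, run length) pairs, a coarse
    substitute for the modified HMM's explicit pose-duration modeling."""
    summary = []
    for i in keyframes:
        if summary and summary[-1][0] == pose_labels[i]:
            summary[-1] = (pose_labels[i], summary[-1][1] + 1)
        else:
            summary.append((pose_labels[i], 1))
    return summary
```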
EBioMedicine | 2018
Lucien Barnes; Douglas M. Heithoff; Scott P. Mahan; Gary N. Fox; Andrea Zambrano; Jane Choe; Lynn N. Fitzgibbons; Jamey D. Marth; Jeffrey Fried; H. Tom Soh; Michael J. Mahan
BACKGROUND There is an urgent need for rapid, sensitive, and affordable diagnostics for microbial infections at the point of care. Although a number of innovative systems have been reported that transform mobile phones into potential diagnostic tools, the translational challenge to clinical diagnostics remains a significant hurdle to overcome. METHODS A smartphone-based real-time loop-mediated isothermal amplification (smaRT-LAMP) system was developed for pathogen ID in urinary sepsis patients. The free, custom-built mobile phone app allows the phone to serve as a stand-alone device for quantitative diagnostics, allowing the determination of genome copy number of bacterial pathogens in real time. FINDINGS A head-to-head comparative bacterial analysis of urine from sepsis patients revealed that the performance of smaRT-LAMP matched that of clinical diagnostics at the admitting hospital in a fraction of the time (~1 h vs. 18-28 h). Among patients with bacteremic complications of their urinary sepsis, pathogen ID from the urine matched that from the blood, potentially allowing pathogen diagnosis shortly after hospital admission. Additionally, smaRT-LAMP did not exhibit false positives in sepsis patients with clinically negative urine cultures. INTERPRETATION The smaRT-LAMP system is effective against diverse Gram-negative and Gram-positive pathogens and biological specimens, costs less than
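As a hedged illustration of how a real-time LAMP readout can be turned into a genome copy-number estimate, the sketch below fits and inverts a time-to-threshold standard curve; it is a generic quantitative-LAMP calculation under assumed, made-up calibration values, not the smaRT-LAMP app's actual method.

```python
# Generic quantitative-LAMP arithmetic, not the smaRT-LAMP app's code:
# fit a standard curve of time-to-threshold versus log10(copy number) from
# known standards, then invert it for an unknown sample. Calibration numbers
# below are made up for illustration only.
import numpy as np

def fit_standard_curve(known_copies, threshold_times_min):
    """Linear fit of threshold time (minutes) against log10(genome copies)."""
    slope, intercept = np.polyfit(np.log10(known_copies), threshold_times_min, 1)
    return slope, intercept

def estimate_copies(threshold_time_min, slope, intercept):
    """Invert the standard curve to recover an estimated genome copy number."""
    return 10 ** ((threshold_time_min - intercept) / slope)

slope, intercept = fit_standard_curve([1e2, 1e3, 1e4, 1e5, 1e6],
                                      [34.0, 29.5, 25.0, 20.5, 16.0])
print(round(estimate_copies(27.0, slope, intercept)))   # ~3.6e3 copies
```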
international conference on distributed smart cameras | 2017
Carlos Torres; Archith J. Bency; Jeffrey Fried; B. S. Manjunath
european conference on computer vision | 2016
Carlos Torres; Jeffrey Fried; Kenneth Rose; B. S. Manjunath
Chest | 2011
Jeffrey Fried; Maggie Cote; Emily Atkins; Denise McDonald; Paula Gallucci; Alexa Calfee; Jonathan Grotts; Nathan Sigler
Chest | 2006
Jeffrey Fried; Priti Gagneja; Muhammad S. Haq
IEEE Journal of Translational Engineering in Health and Medicine | 2018
Carlos Torres; Jeffrey Fried; B. S. Manjunath
Chest | 2018
Yuri Matusov; Natalie Achamallah; Jeffrey Fried