Eliran Dafna
Ben-Gurion University of the Negev
Publications
Featured research published by Eliran Dafna.
PLOS ONE | 2013
Eliran Dafna; Ariel Tarasiuk; Yaniv Zigel
Objective: Although awareness of sleep disorders is increasing, limited information is available on whole-night detection of snoring. Our study aimed to develop and validate a robust, high-performance, and sensitive whole-night snore detector based on non-contact technology. Design: Sounds during polysomnography (PSG) were recorded using a directional condenser microphone placed 1 m above the bed. An AdaBoost classifier was trained and validated on manually labeled snoring and non-snoring acoustic events. Patients: Sixty-seven subjects (age 52.5±13.5 years, BMI 30.8±4.7 kg/m², m/f 40/27) referred for PSG for obstructive sleep apnea diagnosis were prospectively and consecutively recruited. Twenty-five subjects were used for the design study; the validation study was blindly performed on the remaining forty-two subjects. Measurements and Results: To train the proposed sound detector, >76,600 acoustic episodes collected in the design study were manually classified by three scorers into snore and non-snore episodes (e.g., bedding noise, coughing, environmental noise). A feature selection process was applied to select the most discriminative features extracted from the time and spectral domains. The average snore/non-snore detection rate (accuracy) for the design group was 98.4% based on a ten-fold cross-validation technique. When tested on the validation group, the average detection rate was 98.2%, with sensitivity of 98.0% (snore detected as snore) and specificity of 98.3% (noise detected as noise). Conclusions: Audio-based features extracted from the time and spectral domains can accurately discriminate between snore and non-snore acoustic events. This audio analysis approach enables detection and analysis of snoring sounds from a full night in order to produce quantified measures for objective follow-up of patients.
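The pipeline above (hand-crafted time- and spectral-domain features fed to an AdaBoost classifier) can be sketched as follows. The features and the synthetic "snore"/"noise" events here are illustrative stand-ins, not the paper's actual feature set or data.

```python
# Sketch: AdaBoost snore/non-snore classifier on simple time- and
# spectral-domain features (illustrative features, not the paper's set).
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

def extract_features(event, sr=16000):
    """Illustrative time/spectral features for one acoustic event."""
    energy = np.mean(event ** 2)                        # time domain
    zcr = np.mean(np.abs(np.diff(np.sign(event)))) / 2  # zero-crossing rate
    spectrum = np.abs(np.fft.rfft(event))
    freqs = np.fft.rfftfreq(len(event), d=1.0 / sr)
    centroid = np.sum(freqs * spectrum) / (np.sum(spectrum) + 1e-12)
    return np.array([energy, zcr, centroid])

rng = np.random.default_rng(0)
# Synthetic stand-ins: "snores" as low-frequency tones, "noise" as white noise.
t = np.arange(16000) / 16000.0
snores = [np.sin(2 * np.pi * f * t) for f in rng.uniform(80, 300, 40)]
noises = [rng.standard_normal(16000) * 0.5 for _ in range(40)]

X = np.array([extract_features(e) for e in snores + noises])
y = np.array([1] * 40 + [0] * 40)   # 1 = snore, 0 = non-snore

clf = AdaBoostClassifier(n_estimators=50, random_state=0).fit(X, y)
train_acc = clf.score(X, y)
```

On such cleanly separable synthetic data the classifier is near perfect; the reported 98% figures came from real, manually labeled whole-night recordings.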
PLOS ONE | 2015
Eliran Dafna; Ariel Tarasiuk; Yaniv Zigel
Study Objectives: To develop and validate a novel non-contact system for whole-night sleep evaluation using breathing sounds analysis (BSA). Design: Whole-night breathing sounds (using an ambient microphone) and polysomnography (PSG) were simultaneously collected at a sleep laboratory (mean recording time 7.1 hours). A set of acoustic features quantifying breathing pattern was developed to distinguish between sleep and wake epochs (30-sec segments). Epochs (n = 59,108 design study and n = 68,560 validation study) were classified using an AdaBoost classifier and validated epoch-by-epoch for sensitivity, specificity, positive and negative predictive values, accuracy, and Cohen's kappa. Sleep quality parameters were calculated based on the sleep/wake classifications and compared with PSG for validity. Setting: University-affiliated sleep-wake disorder center and biomedical signal processing laboratory. Patients: One hundred and fifty patients (age 54.0±14.8 years, BMI 31.6±5.5 kg/m², m/f 97/53) referred for PSG were prospectively and consecutively recruited. The system was trained (design study) on 80 subjects; the validation study was blindly performed on the additional 70 subjects. Measurements and Results: The epoch-by-epoch accuracy rate for the validation study was 83.3%, with sensitivity of 92.2% (sleep detected as sleep), specificity of 56.6% (wake detected as wake), and Cohen's kappa of 0.508. Comparing sleep quality parameters of BSA and PSG demonstrated average errors for sleep latency, total sleep time, and wake after sleep onset of 16.6 min, 35.8 min, and 29.6 min, respectively, and for sleep efficiency of 8%. Conclusions: This study provides evidence that sleep-wake activity and sleep quality parameters can be reliably estimated solely using breathing sound analysis. This study highlights the potential of this innovative approach to measure sleep in research and clinical circumstances.
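The epoch-by-epoch validation described above reduces to standard agreement statistics over 30-sec epochs. A minimal sketch, using a toy 10-epoch example rather than the study's data:

```python
# Sketch: epoch-by-epoch agreement metrics (sensitivity, specificity,
# PPV, NPV, accuracy, Cohen's kappa) between predicted sleep/wake
# labels and the PSG reference.
import numpy as np
from sklearn.metrics import cohen_kappa_score, confusion_matrix

def epoch_metrics(psg, pred):
    """psg/pred: arrays of 0 (wake) / 1 (sleep), one entry per 30-sec epoch."""
    tn, fp, fn, tp = confusion_matrix(psg, pred, labels=[0, 1]).ravel()
    return {
        "sensitivity": tp / (tp + fn),   # sleep detected as sleep
        "specificity": tn / (tn + fp),   # wake detected as wake
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "kappa": cohen_kappa_score(psg, pred),
    }

# Toy example: 10 epochs of PSG reference vs. predictions.
psg  = np.array([1, 1, 1, 1, 1, 1, 0, 0, 0, 0])
pred = np.array([1, 1, 1, 1, 1, 0, 0, 0, 1, 0])
m = epoch_metrics(psg, pred)
```

Because sleep epochs heavily outnumber wake epochs over a night, accuracy alone can look strong while wake detection is weak, which is why the study reports specificity and kappa alongside accuracy.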
International Conference of the IEEE Engineering in Medicine and Biology Society | 2012
Eliran Dafna; Ariel Tarasiuk; Yaniv Zigel
In this work, a novel system for sleep quality analysis is proposed. Its purpose is to provide an alternative non-contact method for detecting and diagnosing sleep-related disorders based on acoustic signal processing. Audio signals of 145 patients with obstructive sleep apnea were recorded (more than 1,000 hours) in a sleep laboratory and analyzed. The method is based on the assumption that during sleep the respiratory efforts are more periodically patterned and consistent relative to the waking state; furthermore, the sound intensity of those efforts is higher, making the pattern more noticeable relative to the background noise level. The system was trained on 50 subjects and validated on 95 subjects. The system's accuracy for detecting the sleep/wake state is 82.1% (epoch by epoch), resulting in a 3.9% error (difference) in detecting sleep latency, an 11.4% error in estimating total sleep time, and an 11.4% error in estimating sleep efficiency.
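The core assumption above (breathing during sleep is more periodic than during wake) can be captured by an autocorrelation peak of the audio energy envelope in a plausible breathing-period band. The 2-10 s band, the 10 Hz envelope rate, and the synthetic envelopes below are assumptions for illustration:

```python
# Sketch: periodicity score of an audio energy envelope as a simple
# sleep/wake feature (band and sampling rate are assumed values).
import numpy as np

def periodicity_score(envelope, fs_env=10.0, min_period=2.0, max_period=10.0):
    """envelope: audio energy envelope sampled at fs_env Hz.
    Returns the normalized autocorrelation peak in the breathing-period band."""
    x = envelope - envelope.mean()
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]
    ac = ac / (ac[0] + 1e-12)                # normalize so lag 0 == 1
    lo, hi = int(min_period * fs_env), int(max_period * fs_env)
    return float(ac[lo:hi].max())

rng = np.random.default_rng(1)
t = np.arange(0, 120, 0.1)                   # 2-min envelope at 10 Hz
sleep_env = 1 + np.sin(2 * np.pi * t / 4.0)  # regular ~4-sec breathing cycle
wake_env = rng.random(len(t))                # irregular, noise-like activity

sleep_score = periodicity_score(sleep_env)
wake_score = periodicity_score(wake_env)
```

A regular breathing envelope scores close to 1, while an irregular waking envelope scores near 0, so thresholding such a score per epoch yields a crude sleep/wake decision.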
International Conference of the IEEE Engineering in Medicine and Biology Society | 2013
Eliran Dafna; Ariel Tarasiuk; Yaniv Zigel
In this paper, an audio-based system for severity estimation of obstructive sleep apnea (OSA) is proposed. The system estimates the apnea-hypopnea index (AHI), which is the average number of apneic events per hour of sleep. The system is based on a Gaussian mixture regression algorithm that was trained and validated on full-night audio recordings. A feature selection process using a genetic algorithm was applied to select the best features extracted from the time and spectral domains. A total of 155 subjects, referred for an in-laboratory polysomnography (PSG) study, were recruited. Using the PSG's AHI score as a gold standard, the performance of the proposed system was evaluated using Pearson correlation, AHI error, and diagnostic agreement. A correlation of R=0.89, an AHI error of 7.35 events/hr, and a diagnostic agreement of 77.3% were achieved, demonstrating encouraging performance and a reliable non-contact alternative method for OSA severity estimation.
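The three evaluation measures named above can be sketched directly. Diagnostic agreement is taken here as matching conventional AHI severity categories (<5 normal, 5-15 mild, 15-30 moderate, ≥30 severe); the exact cut-offs used by the paper are an assumption, as is the toy data:

```python
# Sketch: Pearson correlation, mean absolute AHI error, and
# severity-category diagnostic agreement between estimated and PSG AHI.
import numpy as np

def severity(ahi):
    return 0 if ahi < 5 else 1 if ahi < 15 else 2 if ahi < 30 else 3

def evaluate(ahi_psg, ahi_est):
    ahi_psg, ahi_est = np.asarray(ahi_psg, float), np.asarray(ahi_est, float)
    r = float(np.corrcoef(ahi_psg, ahi_est)[0, 1])    # Pearson correlation
    err = float(np.mean(np.abs(ahi_psg - ahi_est)))   # AHI error (events/hr)
    agree = float(np.mean([severity(a) == severity(b)
                           for a, b in zip(ahi_psg, ahi_est)]))
    return r, err, agree

# Toy example with 6 subjects.
psg = [2.0, 8.0, 12.0, 20.0, 35.0, 60.0]
est = [4.0, 10.0, 17.0, 18.0, 40.0, 55.0]
r, err, agree = evaluate(psg, est)
```

Note that a small AHI error near a category boundary (12 vs. 17 above) still breaks diagnostic agreement, which is why correlation, absolute error, and agreement are reported together.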
Journal of Clinical Sleep Medicine (JCSM) | 2016
Asaf Levartovsky; Eliran Dafna; Yaniv Zigel; Ariel Tarasiuk
STUDY OBJECTIVES A sound level meter is the gold-standard approach for snoring evaluation. Using this approach, it was established that snoring intensity (in dB) is higher for men and is associated with an increased apnea-hypopnea index (AHI). In this study, we performed a systematic analysis of breathing and snoring sound characteristics using an algorithm designed to detect and analyze breathing and snoring sounds. The effects of sex, sleep stages, and AHI on snoring characteristics were explored. METHODS We consecutively recruited 121 subjects referred for diagnosis of obstructive sleep apnea. A whole-night audio signal was recorded using a noncontact ambient microphone during polysomnography. A large number (> 290,000) of breathing and snoring (> 50 dB) events were analyzed. Breathing sound events were detected using a signal-processing algorithm that discriminates between breathing and nonbreathing (noise) sounds. RESULTS The snoring index (events/h, SI) was 23% higher for men (p = 0.04), and in both sexes SI gradually declined by 50% across sleep time (p < 0.01), independent of AHI. SI was higher in slow wave sleep (p < 0.03) compared to S2 and rapid eye movement sleep; men had higher SI in all sleep stages than women (p < 0.05). Snoring intensity was similar in both sexes in all sleep stages and independent of AHI. For both sexes, no correlation was found between AHI and snoring intensity (r = 0.1, p = 0.291). CONCLUSIONS This audio analysis approach enables systematic detection and analysis of breathing and snoring sounds from a full-night recording. Snoring intensity is similar in both sexes and was not affected by AHI.
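The snoring index above is simply detected snore events per hour of sleep, and the reported decline across the night comes from an hour-by-hour breakdown. A minimal sketch on a synthetic night (event counts are invented for illustration):

```python
# Sketch: snoring index (SI, events/h) from detected snore-event
# timestamps, plus a per-hour breakdown across sleep time.
import numpy as np

def snoring_index(event_times_sec, total_sleep_hours):
    """Overall SI in events per hour of sleep."""
    return len(event_times_sec) / total_sleep_hours

def si_per_hour(event_times_sec, total_sleep_hours):
    """Snore-event count in each successive hour of sleep."""
    edges = np.arange(0, total_sleep_hours + 1) * 3600.0
    counts, _ = np.histogram(event_times_sec, bins=edges)
    return counts

# Toy night: 6 h of sleep, snore events thinning out over the night.
rng = np.random.default_rng(2)
events = np.sort(np.concatenate([
    rng.uniform(h * 3600, (h + 1) * 3600, size=n)
    for h, n in enumerate([120, 100, 80, 70, 60, 50])
]))
si = snoring_index(events, 6)
hourly = si_per_hour(events, 6)
```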
International Conference of the IEEE Engineering in Medicine and Biology Society | 2016
Eliran Dafna; M. Halevi; D. Ben Or; Ariel Tarasiuk; Yaniv Zigel
During a routine sleep diagnostic procedure, sleep is broadly divided into three states: rapid eye movement (REM), non-REM (NREM), and wake, frequently named macro-sleep stages (MSS). In this study, we present a pioneering attempt at MSS detection using full-night audio analysis. Our working hypothesis is that there might be differences in sound properties within each MSS due to breathing efforts (or snores) and body movements in bed. Audio signals of 35 patients referred to a sleep laboratory were recorded and analyzed. An additional 178 subjects were used to train a probabilistic time-series model for MSS staging across the night. The audio-based system was validated on 20 out of the 35 subjects. System accuracy for estimating epoch-by-epoch wake/REM/NREM states for a given subject is 74% (69% for wake, 54% for REM, and 79% for NREM). Mean error (absolute difference) was 36±34 min for total sleep time, 17±21 min for sleep latency, 5±5% for sleep efficiency, and 7±5% for REM percentage. These encouraging results indicate that audio-based analysis can provide a simple and comfortable alternative method for ambulatory evaluation of sleep and its disorders.
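A probabilistic time-series model over macro-sleep stages can be illustrated with a first-order Markov chain decoded by Viterbi over noisy per-epoch state likelihoods. The transition matrix, initial distribution, and likelihoods below are invented for illustration; the paper's actual model and parameters are not specified here.

```python
# Sketch: Viterbi decoding of wake/NREM/REM over 30-sec epochs, where
# stage persistence in the transition matrix smooths noisy epoch evidence.
import numpy as np

STATES = ["wake", "NREM", "REM"]
# Assumed transition probabilities: stages tend to persist across epochs.
A = np.array([[0.90, 0.09, 0.01],
              [0.05, 0.90, 0.05],
              [0.05, 0.10, 0.85]])
PI = np.array([0.8, 0.15, 0.05])   # nights typically start awake

def viterbi(log_lik, log_A, log_pi):
    """log_lik: (T, S) per-epoch log-likelihoods; returns best state path."""
    T, S = log_lik.shape
    delta = log_pi + log_lik[0]
    back = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + log_A          # (from, to)
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + log_lik[t]
    path = [int(delta.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# Noisy epoch likelihoods for the true sequence, with one corrupted epoch.
truth = [0, 0, 1, 1, 1, 1, 2, 2, 1, 1]        # 0=wake, 1=NREM, 2=REM
lik = np.full((10, 3), 0.05)
for t, s in enumerate(truth):
    lik[t, s] = 0.9
lik[4] = [0.6, 0.3, 0.1]           # spurious "wake" evidence mid-NREM
path = viterbi(np.log(lik), np.log(A), np.log(PI))
```

The transition penalties suppress the isolated spurious wake epoch, which is the point of using a time-series model rather than classifying each epoch independently.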
International Conference of the IEEE Engineering in Medicine and Biology Society | 2015
T. Rosenwein; Eliran Dafna; Ariel Tarasiuk; Yaniv Zigel
Obstructive sleep apnea (OSA) is a prevalent sleep disorder characterized by recurrent episodes of upper airway obstruction during sleep. We hypothesize that breath-by-breath audio analysis of the respiratory cycle (i.e., the inspiration and expiration phases) during sleep can reliably estimate the apnea-hypopnea index (AHI), a measure of OSA severity. The AHI is calculated as the average number of apnea (A)/hypopnea (H) events per hour of sleep. Audio recordings of 186 adults referred for OSA diagnosis were acquired under in-laboratory and at-home conditions during polysomnography and WatchPAT studies, respectively. A/H events were automatically segmented and classified using a binary random forest classifier. A total accuracy rate of 86.3% and an agreement of κ=42.98% were achieved in A/H event detection. A correlation of r=0.87 (r=0.74), a diagnostic agreement of 76% (81.7%), and an average absolute AHI error of 7.4 (7.8) events/hour were achieved under in-laboratory (at-home) conditions, respectively. Here we provide evidence that A/H events can be reliably detected at their exact time locations during sleep using a non-contact audio approach. This study highlights the potential of this approach to reliably evaluate the AHI in at-home conditions.
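Detecting events "at their exact time locations" means turning per-frame classifier decisions into discrete timed events. A minimal sketch: group consecutive positive frames into runs and keep runs of at least 10 s (the standard minimum apnea duration); the 1-s frame hop is an assumed value:

```python
# Sketch: segmenting per-frame A/H decisions into timed events by
# grouping consecutive positive frames (hop and threshold are assumptions).
import numpy as np

def frames_to_events(flags, hop_sec=1.0, min_dur_sec=10.0):
    """flags: binary per-frame A/H decisions. Returns (start, end) times in
    seconds for each run of positive frames lasting >= min_dur_sec."""
    flags = np.asarray(flags).astype(int)
    edges = np.diff(np.concatenate(([0], flags, [0])))
    starts = np.flatnonzero(edges == 1)
    ends = np.flatnonzero(edges == -1)
    return [(s * hop_sec, e * hop_sec) for s, e in zip(starts, ends)
            if (e - s) * hop_sec >= min_dur_sec]

# Toy decision sequence: two candidate events, the first too short to count.
flags = [0]*5 + [1]*5 + [0]*10 + [1]*15 + [0]*5
events = frames_to_events(flags)
```

Dividing the kept event count by hours of sleep then yields the AHI estimate.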
International Conference of the IEEE Engineering in Medicine and Biology Society | 2014
T. Rosenwein; Eliran Dafna; Ariel Tarasiuk; Yaniv Zigel
Evaluation of respiratory activity during sleep is essential in order to reliably diagnose sleep-disordered breathing (SDB), a condition associated with serious cardiovascular morbidity and mortality. In the current study, we developed and validated a robust automatic breathing-sounds (i.e., inspiratory and expiratory sounds) detection system for audio signals acquired during sleep. A random forest classifier was trained and tested using inspiratory/expiratory/noise events (episodes) acquired from 84 subjects consecutively and prospectively referred for SDB diagnosis, in a sleep laboratory and in an at-home environment. More than 560,000 events were analyzed, covering a variety of recording devices and different environments. The system's overall accuracy rate is 88.8%, with accuracy rates of 91.2% and 83.6% in the in-laboratory and at-home environments, respectively, when classifying between the inspiratory, expiratory, and noise classes. Here, we provide evidence that breathing sounds can be reliably detected using non-contact audio technology in an at-home environment. The proposed approach may improve our understanding of respiratory activity during sleep. This, in turn, will improve early SDB diagnosis and treatment.
International Conference of the IEEE Engineering in Medicine and Biology Society | 2016
D. Ben Or; Eliran Dafna; Ariel Tarasiuk; Yaniv Zigel
Obstructive sleep apnea (OSA) is a common sleep-related breathing disorder. Previous studies associated OSA with anatomical abnormalities of the upper respiratory tract that may be reflected in the acoustic characteristics of speech. We tested the hypothesis that the speech signal carries essential information that can assist in early assessment of OSA severity by estimating the apnea-hypopnea index (AHI). A total of 198 men referred for routine polysomnography (PSG) were recorded shortly prior to sleep onset while reading a one-minute speech protocol. The different parts of the speech recordings, i.e., sustained vowels, short-time frames of fluent speech, and the speech recording as a whole, underwent separate analyses using sustained-vowel features, short-term features, and long-term features, respectively. Applying support vector regression and regression trees, these features were used to estimate the AHI. The fusion of the outputs of the three subsystems resulted in a diagnostic agreement of 67.3% between the speech-estimated AHI and the PSG-determined AHI, and an absolute error of 10.8 events/hr. Speech signal analysis may assist in the estimation of the AHI, thus allowing the development of a noninvasive tool for OSA screening.
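Late fusion of per-subsystem AHI estimates can be sketched as follows. The three synthetic feature sets stand in for the vowel / short-term / long-term features, all three regressors are SVRs, and the fusion rule is simple averaging; all of these are assumptions for illustration, not the paper's exact subsystems or fusion method.

```python
# Sketch: three regressors each estimate AHI from a different feature set;
# their outputs are fused by averaging.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(3)
n = 120
ahi = rng.uniform(0, 60, n)                    # synthetic "true" AHI
# Three synthetic feature sets, each noisily related to AHI.
feature_sets = [ahi[:, None] + rng.standard_normal((n, 1)) * s
                for s in (5.0, 8.0, 10.0)]

train, test = np.arange(0, 100), np.arange(100, n)
models = [SVR(C=100).fit(F[train], ahi[train]) for F in feature_sets]
per_system = np.array([m.predict(F[test]) for m, F in zip(models, feature_sets)])
fused = per_system.mean(axis=0)                # late fusion by averaging

fused_err = float(np.mean(np.abs(fused - ahi[test])))
single_errs = [float(np.mean(np.abs(p - ahi[test]))) for p in per_system]
```

Averaging estimates with roughly independent errors tends to reduce the absolute AHI error relative to the noisier individual subsystems, which motivates fusing the three outputs.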
International Conference of the IEEE Engineering in Medicine and Biology Society | 2016
Matan Halevi; Eliran Dafna; Ariel Tarasiuk; Yaniv Zigel
Obstructive sleep apnea (OSA) affects up to 14% of the population. OSA is characterized by recurrent apneas and hypopneas during sleep. The apnea-hypopnea index (AHI) is frequently used as a measure of OSA severity. In the current study, we explored the acoustic characteristics of hypopnea in order to distinguish it from apnea. We hypothesize that audio-based features can discriminate between apnea, hypopnea, and normal breathing events. Whole-night audio recordings were performed using a non-contact microphone on 44 subjects, simultaneously with the polysomnography (PSG) study. Recordings were segmented into 2,015 apnea, hypopnea, and normal breath events and were divided into design and validation groups. A classification system was built using a 3-class cubic-kernel support vector machine (SVM) classifier. Its input is a 36-dimensional audio-based feature vector extracted from each event. The three-class accuracy rate using the hold-out method was 84.7%. A two-class model separating apneic events (apneas and hypopneas) from normal breaths exhibited an accuracy rate of 94.7%. Here we show that it is possible to detect apneas and hypopneas from whole-night audio signals. This might provide more insight into a patient's level of upper airway obstruction during sleep. This approach may be used for OSA severity screening and AHI estimation.
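The 3-class cubic-kernel SVM and its collapsed 2-class variant can be sketched on synthetic 2-D stand-ins for the 36-dimensional feature vectors (the clusters below are invented for illustration):

```python
# Sketch: 3-class SVM with a cubic polynomial kernel, plus the 2-class
# model that merges apneas and hypopneas into one "apneic" class.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(4)
def cluster(center, n=60):
    return np.asarray(center) + rng.standard_normal((n, 2)) * 0.4

# 0 = apnea, 1 = hypopnea, 2 = normal breath
X = np.vstack([cluster([0.0, 0.0]), cluster([2.0, 0.0]), cluster([4.0, 2.0])])
y = np.repeat([0, 1, 2], 60)

clf = SVC(kernel="poly", degree=3, coef0=1.0).fit(X, y)   # cubic kernel
acc = clf.score(X, y)

# Collapsing apnea + hypopnea into one class gives the 2-class model.
y2 = (y == 2).astype(int)       # 1 = normal breath, 0 = apneic event
clf2 = SVC(kernel="poly", degree=3, coef0=1.0).fit(X, y2)
acc2 = clf2.score(X, y2)
```

Consistent with the paper's numbers, the 2-class apneic-vs-normal split is the easier problem; separating hypopnea from apnea is what drags the 3-class accuracy down.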