Network


Latest external collaborations at the country level. Dive into details by clicking on the dots.

Hotspot


Dive into the research topics where Azadeh Yadollahi is active.

Publication


Featured research published by Azadeh Yadollahi.


IEEE Transactions on Biomedical Engineering | 2006

A robust method for heart sounds localization using lung sounds entropy

Azadeh Yadollahi; Zahra Moussavi

Heart sounds are the main unavoidable interference in lung sound recording and analysis. Hence, several techniques have been developed to reduce or cancel heart sounds (HS) from lung sound records. The first step in most HS cancellation techniques is to detect the segments that include HS. This paper proposes a novel method for HS localization using the entropy of the lung sounds. We investigated both Shannon and Renyi entropies, and the results using Shannon entropy were superior. Another HS localization method, based on the multiresolution product of lung sound wavelet coefficients and adopted from the literature, was also implemented for comparison. The methods were tested on data from 6 healthy subjects recorded at low (7.5 ml/s/kg) and medium (15 ml/s/kg) flow rates. The error of the entropy-based method using Shannon entropy was found to be 0.1 ± 0.4% and 1.0 ± 0.7% at low and medium flow rates, respectively, which is significantly lower than that of the multiresolution product method and those of other methods reported in previous studies. The proposed method is fully automated and detects HS-included segments in a completely unsupervised manner.
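As a sketch of the idea (not the authors' exact implementation: the window length, histogram bin count and flagging rule below are illustrative assumptions), the windowed Shannon entropy can be computed and scanned for segments that deviate from the rest of the record:

```python
import numpy as np

def shannon_entropy(segment, n_bins=32):
    """Shannon entropy (bits) of a segment's amplitude histogram."""
    hist, _ = np.histogram(segment, bins=n_bins)
    p = hist[hist > 0] / hist.sum()
    return -np.sum(p * np.log2(p))

def entropy_track(signal, win=512, hop=256):
    """Sliding-window entropy of a lung-sound record."""
    return np.array([shannon_entropy(signal[i:i + win])
                     for i in range(0, len(signal) - win + 1, hop)])

# Synthetic record: background lung-sound noise with one structured burst
# standing in for a heart sound.
rng = np.random.default_rng(0)
x = rng.normal(0.0, 0.1, 8000)
x[3500:4000] += np.sin(2 * np.pi * 40 * np.arange(500) / 4000)

track = entropy_track(x)
# Flag windows whose entropy deviates strongly from the record's median.
dev = np.abs(track - np.median(track))
hs_flags = dev > dev.mean() + 2 * dev.std()
```

The real method would tune the window size to the HS duration and validate the flags against reference annotations.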


IEEE Transactions on Biomedical Engineering | 2006

A robust method for estimating respiratory flow using tracheal sounds entropy

Azadeh Yadollahi; Zahra Moussavi

The relationship between respiratory sounds and flow is of great interest to researchers and physicians due to its diagnostic potential. Because most flow measurement techniques are cumbersome and inaccurate, several researchers have attempted to estimate flow from respiratory sounds. However, all of the proposed methods depend heavily on the availability of different flow rates for calibrating the model, which greatly limits their use. In this paper, a robust and novel method for estimating flow using the entropy of the band-pass filtered tracheal sounds is proposed. The method is novel in being independent of the flow rate chosen for calibration; it requires only one breath for calibration and can estimate any flow rate, even outside the range of the calibration flow. After removing the effects of heart sounds (which distort the low-frequency components of tracheal sounds) on the calculated entropy, the performance of the method at different frequency ranges was investigated. The performance was also tested using 6 different segment sizes for entropy calculation, and the best segment sizes during inspiration and expiration were found. The method was tested on data from 10 healthy subjects at five different flow rates. The overall estimation error was found to be 8.3 ± 2.8% and 9.6 ± 2.8% for the inspiration and expiration phases, respectively.
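The calibration step can be illustrated with synthetic numbers. The linear entropy-flow relation and its coefficients below are invented for the sketch; only the workflow (fit once on one calibration breath, then apply to unseen flows, including flows outside the calibration range) follows the description above:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical calibration breath: entropy of the band-passed tracheal
# sound, assumed (for illustration only) linear in flow with small noise.
flow_cal = np.linspace(5.0, 20.0, 30)                 # ml/s/kg
ent_cal = 0.12 * flow_cal + 3.0 + rng.normal(0, 0.02, 30)

a, b = np.polyfit(ent_cal, flow_cal, 1)               # calibrate once

# Estimate new flows from entropy alone; 25 ml/s/kg lies outside
# the calibration range.
ent_new = 0.12 * np.array([8.0, 25.0]) + 3.0
flow_est = a * ent_new + b
```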


International Conference of the IEEE Engineering in Medicine and Biology Society | 2009

Acoustic obstructive sleep apnea detection

Azadeh Yadollahi; Zahra Moussavi

Obstructive sleep apnea (OSA) is a common respiratory disorder during sleep, in which the airways collapse and impair respiration. Apnea is a cessation of airflow to the lungs lasting at least 10 s. The current gold-standard method for OSA assessment is full-night polysomnography (PSG); however, its high cost, inconvenience for patients and immobility have persuaded researchers to seek simple and portable devices to detect OSA. In this paper, we report on developing a new system for OSA detection and monitoring that only requires two data channels: tracheal breathing sounds and the blood oxygen saturation level (SaO2). A fully automated method was developed that uses the energy of the breathing sound signals to segment the signals into sound and silent segments. Then, the sound segments are classified into breath, snore (if present) and noise segments. The SaO2 signal is analyzed to find its rises and drops. Finally, a fuzzy algorithm was developed to use this information to detect apnea and hypopnea events. The method was evaluated on data from 40 patients recorded simultaneously with a full-night PSG study, and the results were compared with those of the PSG. The results show high correlation (96%) between our system and PSG. Also, the method has been found to have sensitivity and specificity values of more than 90% in differentiating simple snorers from OSA patients.
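The first stage, segmenting the recording into sound and silent parts by signal energy, can be sketched as follows. The window length and the 10% energy threshold are illustrative assumptions, not the system's actual parameters:

```python
import numpy as np

def segment_by_energy(x, win, hop, ratio=0.1):
    """Label each window as sound (True) or silence (False) by its energy
    relative to the loudest window; the 10% ratio is an illustrative choice."""
    energy = np.array([np.sum(x[i:i + win] ** 2)
                       for i in range(0, len(x) - win + 1, hop)])
    return energy > ratio * energy.max(), energy

# Synthetic record: near-silence with one breathing/snoring burst.
rng = np.random.default_rng(2)
x = rng.normal(0.0, 0.01, 6000)
x[2000:4000] += rng.normal(0.0, 1.0, 2000)

labels, energy = segment_by_energy(x, win=200, hop=200)
```

The later stages (breath/snore/noise classification and the fuzzy SaO2 fusion) would then operate only on the windows labeled as sound.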


Medical Engineering & Physics | 2010

Automatic breath and snore sounds classification from tracheal and ambient sounds recordings

Azadeh Yadollahi; Zahra Moussavi

In this study, respiratory sound signals were recorded from 23 patients suspected of obstructive sleep apnea, who were referred for a full-night sleep lab study. The sounds were recorded with two microphones simultaneously: one placed over the trachea and one hung in the air in the vicinity of the patient. While recording the sound signals, the patients' polysomnography (PSG) data were also recorded simultaneously. An automatic method was developed to classify breath and snore sound segments based on the energy, zero crossing rate and formants of the sound signals. For every sound segment, the number of zero crossings, the logarithm of the signal's energy and the first formant were calculated. A Fisher linear discriminant was implemented to transform the 3-dimensional (3D) feature set into a 1-dimensional (1D) space, and a Bayesian threshold was applied to the transformed features to classify the sound segments into either the snore or the breath class. Three sets of experiments were run to investigate the method's performance for training and test data sets extracted from different neck positions. The overall accuracy across all experiments for tracheal recordings was found to be more than 90% in classifying breath and snore sound segments regardless of neck position. This implies the method's accuracy is insensitive to the patient's position, simplifying data analysis for an entire night's recording. The classification was also performed on sound signals recorded simultaneously with an ambient microphone, and the results were compared with those of the tracheal recording.
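The classification chain (3-D features, a Fisher discriminant projection to 1-D, then a threshold) can be sketched with synthetic features. The class statistics below are invented, and a simple midpoint threshold stands in for the Bayesian threshold:

```python
import numpy as np

def fisher_direction(X0, X1):
    """Fisher linear discriminant direction w = Sw^-1 (mu1 - mu0)."""
    Sw = np.cov(X0.T) * (len(X0) - 1) + np.cov(X1.T) * (len(X1) - 1)
    return np.linalg.solve(Sw, X1.mean(0) - X0.mean(0))

# Hypothetical per-segment features [zero crossings, log energy, first
# formant]; means and spreads are invented for illustration.
rng = np.random.default_rng(3)
breath = rng.normal([120.0, -2.0, 600.0], [10.0, 0.3, 50.0], (200, 3))
snore = rng.normal([60.0, 0.5, 300.0], [10.0, 0.3, 50.0], (200, 3))

w = fisher_direction(breath, snore)
z_b, z_s = breath @ w, snore @ w
# Midpoint stand-in for the Bayesian threshold on the 1-D projection.
t = (z_b.mean() + z_s.mean()) / 2
acc = (np.mean(z_s > t) + np.mean(z_b < t)) / 2
```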


International Conference of the IEEE Engineering in Medicine and Biology Society | 2007

Feature selection for swallowing sounds classification

Azadeh Yadollahi; Zahra Moussavi

In recent years, swallowing sounds analysis has received great attention for observing abnormalities in swallowing mechanisms. In this paper, a comprehensive set of features was extracted from the time- and frequency-domain characteristics of the signals. Features were obtained from different parts of the swallowing sounds, including the initial discrete sounds (IDS), the bolus transmission sounds (BTS) and the entire swallowing sound signal (WHL). Reducing the number of features and selecting the most important ones is a crucial step in characterizing the signals and observing their variations in classification problems. Therefore, in this study the features were examined thoroughly and ranked by maximizing the Mahalanobis distance between the normal and dysphagic classes. The results indicate that low- and high-frequency components represent the main characteristics of the signals for the IDS segment of the swallowing sound, while medium-frequency components play the principal role for the BTS segment. Different feature subsets with varying numbers of features were investigated for classifying normal and dysphagic swallowing sound signals. It was found that the overall performance of the feature subset extracted from WHL was superior to that of the feature subsets extracted from IDS or BTS individually.
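The ranking step can be sketched with synthetic data. A per-feature, 1-D Mahalanobis-style distance (squared class-mean difference over the pooled variance) stands in here for the full multivariate criterion; the feature values are invented for illustration:

```python
import numpy as np

# Hypothetical 4-feature swallowing-sound data; feature 2 is constructed
# to be the most discriminative one.
rng = np.random.default_rng(4)
normal = rng.normal([0.0, 0.0, 0.0, 0.0], 1.0, (300, 4))
dysphagic = rng.normal([0.2, 0.1, 2.0, 0.3], 1.0, (300, 4))

# Score each feature by its between-class separation.
scores = np.array([
    (dysphagic[:, j].mean() - normal[:, j].mean()) ** 2
    / (0.5 * (normal[:, j].var() + dysphagic[:, j].var()))
    for j in range(4)
])
ranking = np.argsort(scores)[::-1]   # most discriminative feature first
```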


IEEE Transactions on Biomedical Engineering | 2011

The Effect of Anthropometric Variations on Acoustical Flow Estimation: Proposing a Novel Approach for Flow Estimation Without the Need for Individual Calibration

Azadeh Yadollahi; Zahra Moussavi

Tracheal sound average power is directly related to the breathing flow rate, and it has recently attracted considerable attention for acoustical flow estimation. However, the flow-sound relationship is highly variable among people, and it also changes for the same person at different flow rates. Hence, a robust model capable of estimating flow from tracheal sounds at different flow rates in a large group of individuals does not exist. In this paper, a model is proposed to estimate respiratory flow from tracheal sounds. The proposed model eliminates the dependence of previous methods on calibrating the model for every individual and at different flow rates. To validate the model, it was applied to the respiratory sound and flow data of 93 healthy individuals. We investigated the statistical correlation between the model parameters and the anthropometric features of the subjects. The results show that gender, height, and smoking are the most significant factors affecting the model parameters. Hence, we grouped nonsmoker subjects into four groups based on their gender and height. The average of the model parameters in each group was defined as the group-calibrated model parameters. These models were applied to estimate flow from the data of subjects within the same group and in the other groups. The results show that the flow estimation error based on the group-calibrated model is less than 10%. The low estimation errors confirm the possibility of defining a general flow estimation model for subjects with similar anthropometric features, with no need to calibrate the model parameters for every individual. This technique simplifies acoustical flow estimation in general applications, including sleep studies and patient screening in health care facilities.
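The group-calibration idea can be sketched with synthetic subjects: fit a linear flow-sound model per subject, average the parameters within a group, and use the group model on a new subject. All numbers below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(5)

def fit_subject(a_true, b_true, n=40):
    """Fit a per-subject linear sound-power-vs-flow model (synthetic data)."""
    flow = rng.uniform(5.0, 25.0, n)
    power = a_true * flow + b_true + rng.normal(0, 0.1, n)
    return np.polyfit(flow, power, 1)

# Ten hypothetical subjects from one gender/height group with similar true
# parameters; the group model is the average of the fitted parameters.
fits = [fit_subject(rng.uniform(0.9, 1.1), rng.uniform(0.4, 0.6))
        for _ in range(10)]
a_grp, b_grp = np.mean(fits, axis=0)

# Estimate flow for a new subject in the group without individual calibration.
true_flow = 12.0
power_obs = 1.0 * true_flow + 0.5
flow_est = (power_obs - b_grp) / a_grp
rel_err = abs(flow_est - true_flow) / true_flow
```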


International Conference of the IEEE Engineering in Medicine and Biology Society | 2009

Formant analysis of breath and snore sounds

Azadeh Yadollahi; Zahra Moussavi

Formant frequencies of snore and breath sounds represent resonances in the upper airways; hence, they change with the upper airway anatomy. Therefore, formant frequencies and their variations can be examined to distinguish between snore and breath sounds. In this paper, the formant frequencies of snore and breath sounds are investigated and automatically grouped into 7 clusters using K-means clustering. First, the formant clusters of the breath and snore sounds of all subjects were investigated together, and their union was calculated as the most probable ranges of the formants. The ranges of the first four formants, which span the main frequency components of breath and snore sounds, were found to be 20–400 Hz, 270–840 Hz, 500–1380 Hz and 910–1920 Hz. These ranges were then used as a priori information to recalculate the formants of snore and breath sounds separately. A statistical t-test showed the 1st and 3rd formants to be the most characteristic features for distinguishing breath and snore sounds from each other.
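The clustering step can be sketched with a plain 1-D K-means (here with k=4 for brevity rather than the 7 clusters used in the paper), run on synthetic formant values drawn from the four reported ranges; real inputs would come from formant analysis (e.g. LPC) of the recorded sounds:

```python
import numpy as np

def kmeans_1d(values, k, iters=50, seed=0):
    """Plain 1-D K-means; centers returned in ascending order."""
    rng = np.random.default_rng(seed)
    centers = rng.choice(values, k, replace=False)
    for _ in range(iters):
        # Assign each value to its nearest center, then update the centers.
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = values[labels == j].mean()
    return np.sort(centers)

# Synthetic formant pool drawn from the four ranges reported above.
rng = np.random.default_rng(6)
formants = np.concatenate([
    rng.uniform(20, 400, 200),
    rng.uniform(270, 840, 200),
    rng.uniform(500, 1380, 200),
    rng.uniform(910, 1920, 200),
])
centers = kmeans_1d(formants, k=4)
```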


International Conference of the IEEE Engineering in Medicine and Biology Society | 2006

Apnea Detection by Acoustical Means

Azadeh Yadollahi; Zahra Moussavi

In this paper a new non-invasive method for apnea detection is proposed. Eight healthy subjects participated in this study. They were instructed to breathe very shallowly, with different periods of breath hold to simulate sleep apnea. Following our previous study on the successful use of entropy for flow estimation, in this study the Otsu threshold was used to classify the calculated entropy into two classes: breathing and apnea. The results show that the method is capable of detecting the apnea periods even when the subjects breathe at very shallow flow rates. The overall lag and duration errors between the estimated and actual apnea periods were found to be 0.207 ± 0.062 s and 0.289 ± 0.258 s, respectively. The results are encouraging for the use of the proposed method as a fast, easy and promising tool for apnea detection.
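The thresholding step can be sketched as follows. Otsu's method picks the cut that maximizes the between-class variance of a bimodal distribution; the synthetic entropy values (low during apnea, higher while breathing) are illustrative assumptions:

```python
import numpy as np

def otsu_threshold(values, n_bins=64):
    """Otsu's method: threshold maximizing the between-class variance."""
    hist, edges = np.histogram(values, bins=n_bins)
    mids = (edges[:-1] + edges[1:]) / 2
    best_t, best_var = mids[0], -1.0
    for i in range(1, n_bins):
        w0, w1 = hist[:i].sum(), hist[i:].sum()
        if w0 == 0 or w1 == 0:
            continue
        m0 = (hist[:i] * mids[:i]).sum() / w0
        m1 = (hist[i:] * mids[i:]).sum() / w1
        var = w0 * w1 * (m0 - m1) ** 2
        if var > best_var:
            best_var, best_t = var, mids[i]
    return best_t

# Hypothetical windowed entropy: low during apnea, higher while breathing.
rng = np.random.default_rng(7)
entropy = np.concatenate([rng.normal(1.0, 0.1, 100),   # apnea windows
                          rng.normal(4.0, 0.3, 300)])  # breathing windows
t = otsu_threshold(entropy)
apnea = entropy < t
```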


International Symposium on Signal Processing and Information Technology | 2007

Breath Analysis of Respiratory Flow using Tracheal Sounds

Saiful Huq; Azadeh Yadollahi; Zahra Moussavi

While lung sound intensity differs significantly between the inspiratory and expiratory phases, no such difference is audible between the two respiratory phases when listening to tracheal breath sounds. In this study we investigated whether any difference exists between the average power and log-variance of the band-pass filtered tracheal breath sound across the respiratory phases. We used data from 9 healthy subjects without any pulmonary disease at 4 different flow rates (low, medium, high and very high) and compared the two features at six different frequency ranges from 70 to 1200 Hz. The most pronounced differences between the two respiratory phases were found in the 300–450 Hz band for the average power and the 800–1000 Hz band for the log-variance.
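The two features can be computed per band as sketched below. The FFT-masking band-pass, the choice of the 300–450 Hz band and the synthetic inspiratory/expiratory signals are illustrative assumptions:

```python
import numpy as np

def band_features(x, fs, lo, hi):
    """Average power and log-variance of x restricted to [lo, hi] Hz
    (band-pass implemented here by zeroing FFT bins outside the band)."""
    X = np.fft.rfft(x)
    f = np.fft.rfftfreq(len(x), 1 / fs)
    X[(f < lo) | (f > hi)] = 0
    y = np.fft.irfft(X, n=len(x))
    return np.mean(y ** 2), np.log(np.var(y))

fs = 5000
rng = np.random.default_rng(8)
insp = rng.normal(0.0, 1.0, fs)   # hypothetical inspiratory tracheal sound
exp_ = rng.normal(0.0, 0.6, fs)   # hypothetical, quieter expiratory sound

p_i, lv_i = band_features(insp, fs, 300, 450)
p_e, lv_e = band_features(exp_, fs, 300, 450)
```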


International Conference of the IEEE Engineering in Medicine and Biology Society | 2008

Comparison of flow-sound relationship for different features of tracheal sound

Azadeh Yadollahi; Zahra Moussavi

In recent years, respiratory flow estimation using tracheal sounds has received considerable attention. In this paper, four different features of the tracheal sound are investigated, and their relationships with flow at different target flow rates are examined during the inspiration and expiration phases. The features are the average power (AvgPwr), the logarithm of the variance (LogVar), the logarithm of the range (LogRng) and the logarithm of the envelope (LogEnv) of the tracheal sound. For each feature, a linear model is fitted to the flow and the feature. The results show that LogVar is the best feature for describing the flow-sound relationship with a linear model, while the slope of the linear model using AvgPwr shows the largest deviation from a line as the target flow rate changes. Also, the distance from the origin of the linear model using any feature changes linearly with variations in target flow.
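The comparison can be sketched with synthetic data constructed to mirror the reported finding: if the log-variance is nearly linear in flow, the raw variance (and hence AvgPwr) grows exponentially and fits a straight line less well. All coefficients are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(9)
flow = np.linspace(5.0, 25.0, 50)

# Construct LogVar as linear in flow plus small noise; AvgPwr is then
# proportional to its exponential.
log_var = 0.1 * flow - 3.0 + rng.normal(0, 0.02, 50)
features = {"AvgPwr": np.exp(log_var), "LogVar": log_var}

def r_squared(x, y):
    """Goodness of a straight-line fit of y against x."""
    a, b = np.polyfit(x, y, 1)
    return 1.0 - np.var(y - (a * x + b)) / np.var(y)

r2 = {name: r_squared(flow, f) for name, f in features.items()}
```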

Collaboration


Dive into Azadeh Yadollahi's collaborations.

Top Co-Authors


Saiful Huq

University of Manitoba
