Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Eric Castelli is active.

Publication


Featured research published by Eric Castelli.


International Conference of the IEEE Engineering in Medicine and Biology Society | 2006

Information extraction from sound for medical telemonitoring

Dan Istrate; Eric Castelli; Michel Vacher; Laurent Besacier; Jean-François Serignat

Today, the growth of the aging population in Europe requires an increasing number of health care professionals and facilities for aged persons. Medical telemonitoring at home (and, more generally, telemedicine) improves patients' comfort and reduces hospitalization costs. Using sound surveillance as an alternative to video telemonitoring, this paper deals with the detection and classification of alarming sounds in a noisy environment. The proposed sound analysis system can detect distress or everyday sounds anywhere in the monitored apartment, and is connected to classical medical telemonitoring sensors through a data fusion process. The sound analysis system is divided into two stages: sound detection and classification. The first stage (sound detection) must extract significant sounds from a continuous signal flow. A new detection algorithm based on the discrete wavelet transform is proposed, which leads to accurate results when applied to nonstationary signals (such as impulsive sounds). The algorithm was evaluated in a noisy environment and compares favorably with state-of-the-art algorithms in the field. The second stage of the system is sound classification, which uses a statistical approach to identify unknown sounds. A statistical study was carried out to find the most discriminant acoustical parameters at the input of the classification module. New wavelet-based parameters, better adapted to noise, are proposed. The telemonitoring system validation is presented through various real and simulated test sets. The global sound-based system leads to a 3% missed-alarm rate and could be fused with other medical sensors to improve performance.
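
The detection stage is only summarized in the abstract; the sketch below illustrates the general idea of flagging frames whose wavelet detail energy exceeds an adaptive threshold. The wavelet (db4), decomposition level, frame length and threshold rule are illustrative assumptions, not the algorithm actually proposed in the paper.

```python
import numpy as np
import pywt  # PyWavelets

def detect_sound_events(signal, frame_len=2048, wavelet="db4", level=4, k=3.0):
    """Flag frames whose wavelet detail energy exceeds an adaptive threshold.

    Illustrative sketch only: the wavelet, level and threshold factor k are
    assumptions, not the parameters used in the paper.
    """
    signal = np.asarray(signal, dtype=float)
    energies = []
    for start in range(0, len(signal) - frame_len + 1, frame_len):
        frame = signal[start:start + frame_len]
        coeffs = pywt.wavedec(frame, wavelet, level=level)
        # Energy of the detail coefficients; coeffs[0] is the approximation.
        energies.append(sum(float(np.sum(c ** 2)) for c in coeffs[1:]))
    energies = np.asarray(energies)
    # Adaptive threshold: median energy plus k median absolute deviations.
    med = np.median(energies)
    mad = np.median(np.abs(energies - med)) + 1e-12
    return np.flatnonzero(energies > med + k * mad)  # indices of detected frames
```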


Journal of the Acoustical Society of America | 1996

Some acoustic features of nasal and nasalized vowels: A target for vowel nasalization

Gang Feng; Eric Castelli

In order to characterize the acoustic properties of nasal and nasalized vowels, these sounds are considered as a dynamic trend from an oral configuration toward an [n]-like configuration. The latter can be viewed as a target for vowel nasalization. This target corresponds to the pharyngonasal tract and can be modeled, with some simplifications, by a single tract without any parallel paths. The first two resonance frequencies (at about 300 and 1000 Hz) therefore characterize this target well. A series of measurements was carried out to describe the acoustic characteristics of the target. Measured transfer functions confirm the resonator nature of the low-frequency peak. Introducing such a target allows nasal vowels to be conceived as a trend that begins with a simple configuration and terminates in the same manner, so that the complex nasal phenomena are bounded. A complete study of pole-zero evolutions for the nasalization of the 11 French vowels is presented. It allows a common strategy to be proposed for the nasalization of all vowels, so that a true nasal vowel can be placed in this nasalization frame. The measured transfer functions for several French nasal vowels are also given.
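
As a rough, back-of-the-envelope illustration not taken from the paper: the two target resonances quoted above are of the order expected for a single uniform tube closed at the glottis and open at the nostrils, whose resonances are the odd quarter-wavelength frequencies

```latex
F_n = \frac{(2n-1)\,c}{4L}, \qquad n = 1, 2, \dots
```

With a speed of sound c of about 350 m/s and an assumed effective pharyngonasal length L of about 29 cm, this gives F1 of roughly 300 Hz and F2 of roughly 900 Hz, of the same order as the target resonances reported above.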


International Symposium on Biomedical Imaging | 2008

Estimation of respiratory waveform using an accelerometer

P.D. Hung; S. Bonnet; R. Guillemaud; Eric Castelli; P. T. N. Yen

The cardiorespiratory signal is a fundamental vital sign for assessing a person's health. It also gives a great deal of information to healthcare providers wishing to monitor healthy individuals. This paper proposes a method to detect the respiratory waveform from an accelerometer strapped onto the chest. A system was designed and several experiments were conducted on volunteers. The acquisition was performed under different conditions (normal breathing, apnea, deep breathing) and in different postures: vertical (sitting, standing) or horizontal (lying down). The method could therefore be suitable for automatic identification of some respiratory malfunctions, for example obstructive apnea.
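
The abstract does not detail the signal processing chain; a common way to recover a respiratory waveform from a chest-worn accelerometer is to low-pass filter the axis most sensitive to chest-wall motion in the respiratory band. The sketch below illustrates that idea; the cut-off frequency, filter order and single-axis choice are assumptions, not the method used in the paper.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def respiratory_waveform(acc_axis, fs, cutoff_hz=0.5, order=4):
    """Low-pass filter one accelerometer axis to keep the slow chest-wall
    motion associated with breathing. Illustrative sketch: the 0.5 Hz cut-off
    (about 30 breaths/min) is an assumption, not the paper's value."""
    acc_axis = np.asarray(acc_axis, dtype=float)
    b, a = butter(order, cutoff_hz / (fs / 2.0), btype="low")
    centred = acc_axis - np.mean(acc_axis)   # remove the gravity/DC component
    return filtfilt(b, a, centred)           # zero-phase filtering
```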


International Conference of the IEEE Engineering in Medicine and Biology Society | 2008

Estimation of respiratory waveform and heart rate using an accelerometer

D. H. Phan; S. Bonnet; R. Guillemaud; Eric Castelli; N. Y. Pham Thi

In this paper, the use of an accelerometer to measure cardio-respiratory activity is presented. Movement of the chest was recorded by an accelerometer attached to a belt around the chest. The acquisition was performed under different conditions (normal breathing, apnea, deep breathing, after exhaustion) and in different postures: vertical (sitting, standing) or horizontal (lying down). The resulting signal was compared with reference measurements. The results of the experimental evaluation indicate that a chest-worn accelerometer can correctly detect the respiratory waveform and the heart rate (HR) signal. The method is therefore suitable for automatic identification of some conditions, for example arrhythmia or sleep apnea.
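
For the heart-rate part, one plausible approach, again a sketch under assumptions rather than the method reported in the paper, is to band-pass the acceleration signal around the cardiac vibration band and count peaks of its envelope:

```python
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def estimate_heart_rate(acc_axis, fs, band=(5.0, 20.0)):
    """Rough heart-rate estimate (beats per minute) from chest accelerations.

    The 5-20 Hz band and the 0.4 s minimum beat spacing are illustrative
    assumptions, not values taken from the paper.
    """
    acc_axis = np.asarray(acc_axis, dtype=float)
    low, high = band
    b, a = butter(3, [low / (fs / 2.0), high / (fs / 2.0)], btype="band")
    cardiac = filtfilt(b, a, acc_axis - np.mean(acc_axis))
    envelope = np.abs(cardiac)
    peaks, _ = find_peaks(envelope, distance=int(0.4 * fs))  # beats >= 0.4 s apart
    duration_min = len(acc_axis) / fs / 60.0
    return len(peaks) / duration_min
```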


Multimedia Signal Processing | 2001

The effect of speech and audio compression on speech recognition performance

Laurent Besacier; Carole Bergamini; Dominique Vaufreydaz; Eric Castelli

This paper presents an in-depth look at the influence of different speech and audio codecs on the performance of our continuous speech recognition engine. GSM full rate, G711, G723.1 and MPEG coders are investigated. It is shown that MPEG transcoding degrades speech recognition performance at low bitrates, whereas performance remains acceptable for specialized speech coders such as GSM or G711. A new strategy is proposed to cope with the degradation due to low-bitrate coding: the acoustic models of the speech recognition system are trained on transcoded speech (one acoustic model for each speech/audio codec). First results show that this strategy allows acceptable performance to be recovered.
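
The proposed strategy amounts to passing the training corpus through the same encode/decode chain as the test data before training the acoustic models. Below is a minimal sketch using ffmpeg and an MP3 round-trip; the tool, bitrate and sample rate are assumptions chosen for illustration (the paper investigates the GSM, G711, G723.1 and MPEG coders listed above).

```python
import subprocess
from pathlib import Path

def transcode_corpus(wav_dir, out_dir, bitrate="16k", sample_rate=16000):
    """Encode each training file to MP3 and decode it back to WAV so that
    acoustic models are trained on codec-degraded speech. Illustrative sketch:
    ffmpeg, the bitrate and the sample rate are assumptions."""
    out_dir = Path(out_dir)
    out_dir.mkdir(parents=True, exist_ok=True)
    for wav in Path(wav_dir).glob("*.wav"):
        mp3 = out_dir / (wav.stem + ".mp3")
        decoded = out_dir / wav.name
        # Encode to low-bitrate MP3, then decode back to PCM WAV.
        subprocess.run(["ffmpeg", "-y", "-i", str(wav),
                        "-c:a", "libmp3lame", "-b:a", bitrate, str(mp3)], check=True)
        subprocess.run(["ffmpeg", "-y", "-i", str(mp3),
                        "-ar", str(sample_rate), str(decoded)], check=True)
```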


Sensors | 2017

The Smartphone-Based Offline Indoor Location Competition at IPIN 2016: Analysis and Future Work

Joaquín Torres-Sospedra; Antonio Jiménez; Stefan Knauth; Adriano Moreira; Yair Beer; Toni Fetzer; Viet-Cuong Ta; Raúl Montoliu; Fernando Seco; Germán M. Mendoza-Silva; Oscar Belmonte; Athanasios Koukofikis; Maria João Nicolau; António Costa; Filipe Meneses; Frank Ebner; Frank Deinzer; Dominique Vaufreydaz; Trung-Kien Dao; Eric Castelli

This paper presents the analysis and discussion of the off-site localization competition track, which took place during the Seventh International Conference on Indoor Positioning and Indoor Navigation (IPIN 2016). Five international teams proposed different strategies for smartphone-based indoor positioning using the same reference data. The competitors were provided with several smartphone-collected signal datasets, some used for training (known trajectories) and others for evaluation (unknown trajectories). The competition permits a coherent evaluation of the competitors' estimates, since no inside information is offered for fine-tuning their systems, and thus provides, in our opinion, a good starting point for a fair comparison between the smartphone-based systems found in the literature. The methodology, experience, feedback from competitors and future lines of work are described.


International Symposium on Computers and Communications | 2004

Recognizing emotions for the audio-visual document indexing

Xuan Hung Le; G. Quenot; Eric Castelli

In this paper, we propose using MFCC (mel-frequency cepstral coefficient) features and a simple but efficient classification method, vector quantization (VQ), to perform speaker-dependent emotion recognition. Many other features (energy, pitch, zero crossing, phonetic rate, LPC...) and their derivatives are also tested and combined with the MFCC coefficients in order to find the best combination. Other models, GMM and HMM (discrete and continuous hidden Markov models), are studied as well, in the hope that continuous distributions and the temporal evolution of this feature set will improve the quality of emotion recognition. The accuracy in recognizing five different emotions exceeds 80% using only MFCC coefficients with the VQ model. This is a simple but efficient approach, and the result is even better than that obtained on the same database by human evaluators listening and judging without being able to replay or compare sentences (Inger Samso Engberg and Anya Varnich Hansen, 2001).
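
A minimal sketch of the MFCC + vector quantization pipeline described above: one codebook is trained per emotion, and a test utterance is assigned to the emotion whose codebook yields the lowest average quantization distortion. The libraries (librosa, scikit-learn) and parameters (13 MFCCs, 64-entry codebooks) are assumptions, not the paper's setup.

```python
import numpy as np
import librosa
from sklearn.cluster import KMeans

def mfcc_frames(path, n_mfcc=13):
    """MFCC feature vectors (frames x coefficients) for one utterance."""
    y, sr = librosa.load(path, sr=None)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).T

def train_codebooks(train_files_by_emotion, codebook_size=64):
    """One K-means codebook per emotion (VQ training)."""
    return {
        emotion: KMeans(n_clusters=codebook_size, n_init=10).fit(
            np.vstack([mfcc_frames(f) for f in files]))
        for emotion, files in train_files_by_emotion.items()
    }

def classify(path, codebooks):
    """Pick the emotion whose codebook quantizes the utterance with the
    smallest average distortion."""
    feats = mfcc_frames(path)
    def distortion(km):
        d = km.transform(feats)          # distances to every codeword
        return float(np.mean(d.min(axis=1)))
    return min(codebooks, key=lambda e: distortion(codebooks[e]))
```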


The Scientific World Journal | 2014

User Localization in Complex Environments by Multimodal Combination of GPS, WiFi, RFID, and Pedometer Technologies

Trung-Kien Dao; Hung-Long Nguyen; Thanh-Thuy Pham; Eric Castelli; Viet-Tung Nguyen; Dinh-Van Nguyen

Many user localization technologies and methods have been proposed for either indoor or outdoor environments. However, each technology has its own drawbacks. Recently, many studies and designs have proposed systems that combine multiple localization technologies to provide higher-precision results and overcome the limitations of each individual technology. In this paper, a conceptual design of a general localization platform based on a combination of multiple localization technologies is introduced. The combination is realized by dividing the space into grid points. To demonstrate this platform, a system with GPS, RFID, WiFi, and pedometer technologies is established. Experimental results show that accuracy and availability are improved in comparison with each technology used individually.
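
A minimal sketch of the grid-point combination idea: each technology contributes a score over a common grid (modelled here, as an assumption, as a Gaussian centred on that technology's own position estimate), the scores are combined, and the best grid point is returned. This is not the paper's exact fusion scheme.

```python
import numpy as np

def fuse_on_grid(estimates, grid_step=1.0, extent=((0, 50), (0, 50))):
    """Combine several (x, y, sigma) position estimates on a common grid.

    'estimates' maps a technology name (e.g. 'gps', 'wifi', 'rfid') to its
    own position estimate and uncertainty in metres; the Gaussian weighting
    and the grid extent are illustrative assumptions.
    """
    (xmin, xmax), (ymin, ymax) = extent
    xs = np.arange(xmin, xmax + grid_step, grid_step)
    ys = np.arange(ymin, ymax + grid_step, grid_step)
    gx, gy = np.meshgrid(xs, ys)
    score = np.ones_like(gx)
    for x, y, sigma in estimates.values():
        score *= np.exp(-((gx - x) ** 2 + (gy - y) ** 2) / (2.0 * sigma ** 2))
    i, j = np.unravel_index(np.argmax(score), score.shape)
    return float(gx[i, j]), float(gy[i, j])   # fused grid-point position

# Example: three technologies with different uncertainties (metres).
print(fuse_on_grid({"gps": (12.0, 30.0, 8.0),
                    "wifi": (10.0, 27.0, 3.0),
                    "rfid": (9.0, 26.0, 1.5)}))
```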


International Conference on Communications | 2008

Tone recognition of Vietnamese continuous speech using hidden Markov model

Hong Quang Nguyen; Pascal Nocera; Eric Castelli; T. Van Loan

This paper presents our study on context-independent tone recognition of Vietnamese continuous speech. Each of the six Vietnamese tones is represented by a hidden Markov model (HMM), and we used VNSPEECHCORPUS to learn these models in terms of fundamental frequency (F0) and short-time energy. We focus on evaluating the influence of different factors on tone recognition. The experimental results show that the best way to model F0 and energy is to apply a logarithmic transformation and then normalize with the mean and mean deviation. In addition, we show that using 8 tone forms and discriminating between male and female speakers increases the accuracy of the Vietnamese tone recognition system.
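
The preprocessing that worked best according to the abstract (logarithmic transformation of F0, then normalization with mean and mean deviation) can be sketched as follows; computing the statistics per utterance is an assumption made here for illustration, as the paper may compute them per speaker.

```python
import numpy as np

def normalize_f0(f0_hz):
    """Log-transform F0 and normalize by mean and mean absolute deviation.

    Unvoiced frames (F0 == 0) are left at zero; the per-utterance statistics
    are an illustrative assumption, not the paper's exact procedure.
    """
    f0_hz = np.asarray(f0_hz, dtype=float)
    voiced = f0_hz > 0
    log_f0 = np.log(f0_hz[voiced])
    mean = log_f0.mean()
    mean_dev = np.mean(np.abs(log_f0 - mean))
    out = np.zeros_like(f0_hz)
    out[voiced] = (log_f0 - mean) / mean_dev
    return out
```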


Workshop on Statistical Machine Translation | 2009

Mining a Comparable Text Corpus for a Vietnamese-French Statistical Machine Translation System

Thi Ngoc Diep Do; Viet Bac Le; Brigitte Bigi; Laurent Besacier; Eric Castelli

This paper presents our first attempt at constructing a Vietnamese-French statistical machine translation system. Since Vietnamese is an under-resourced language, we concentrate on building a large Vietnamese-French parallel corpus. A document alignment method based on publication date, special words, and sentence alignment results is proposed. The paper also presents an application of the obtained parallel corpus to the construction of a Vietnamese-French statistical machine translation system, where the use of different units for Vietnamese (syllables, words, or their combinations) is discussed.
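
A minimal sketch of the document-alignment step described above, pairing documents that share a publication date and enough "special words" (numbers and proper names used as language-independent anchors). The document format, the regular expression and the overlap threshold are illustrative assumptions, not the paper's exact criteria.

```python
import re

def special_words(text):
    """Numbers and capitalized tokens, used as language-independent anchors."""
    return set(re.findall(r"\b(?:\d+(?:[.,]\d+)?|[A-Z][\w-]+)\b", text))

def align_documents(vi_docs, fr_docs, min_overlap=0.3):
    """Pair documents that share a publication date and enough special words.

    Each doc is a dict with 'date' and 'text' keys (a format assumed here for
    illustration). Returns candidate (vi_index, fr_index) pairs that would
    then be passed to sentence alignment.
    """
    pairs = []
    for i, vi in enumerate(vi_docs):
        vi_words = special_words(vi["text"])
        for j, fr in enumerate(fr_docs):
            if vi["date"] != fr["date"] or not vi_words:
                continue
            overlap = len(vi_words & special_words(fr["text"])) / len(vi_words)
            if overlap >= min_overlap:
                pairs.append((i, j))
    return pairs
```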

Collaboration


Dive into Eric Castelli's collaborations.

Top Co-Authors

Laurent Besacier (Centre national de la recherche scientifique)

Dan Istrate (École Normale Supérieure)

Jean-François Serignat (Centre national de la recherche scientifique)

Trung-Kien Dao (Hanoi University of Science and Technology)

Brigitte Bigi (Aix-Marseille University)

Michel Vacher (Centre national de la recherche scientifique)

Ngoc Yen Pham (Centre national de la recherche scientifique)

Do Dat Tran (Centre national de la recherche scientifique)

Van Loan Trinh (Centre national de la recherche scientifique)