
Publication


Featured research published by Jean-François Serignat.


International Conference of the IEEE Engineering in Medicine and Biology Society | 2006

Information extraction from sound for medical telemonitoring

Dan Istrate; Eric Castelli; Michel Vacher; Laurent Besacier; Jean-François Serignat

Today, the growth of the aging population in Europe calls for an increasing number of health care professionals and facilities for aged persons. Medical telemonitoring at home (and, more generally, telemedicine) improves patients' comfort and reduces hospitalization costs. Using sound surveillance as an alternative to video telemonitoring, this paper deals with the detection and classification of alarming sounds in a noisy environment. The proposed sound analysis system can detect distress or everyday sounds anywhere in the monitored apartment, and is connected to classical medical telemonitoring sensors through a data fusion process. The sound analysis system is divided into two stages: sound detection and classification. The first stage (sound detection) must extract significant sounds from a continuous signal flow. A new detection algorithm based on the discrete wavelet transform is proposed, which leads to accurate results when applied to non-stationary signals (such as impulsive sounds). The algorithm was evaluated in a noisy environment and compares favorably with state-of-the-art algorithms in the field. The second stage of the system is sound classification, which uses a statistical approach to identify unknown sounds. A statistical study was carried out to find the most discriminant acoustical parameters at the input of the classification module. New wavelet-based parameters, better adapted to noise, are proposed. The telemonitoring system's validation is presented through various real and simulated test sets. The global sound-based system leads to a 3% missed-alarm rate and could be fused with other medical sensors to improve performance.
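The detection stage described in this abstract thresholds discrete-wavelet-transform coefficients to catch impulsive, non-stationary events. As an illustrative sketch only (the paper's actual wavelet family, frame size, and thresholds are not reproduced here), a minimal detector might flag frames whose Haar detail-coefficient energy rises well above the noise floor:

```python
import numpy as np

def haar_dwt_detail(x):
    """One-level Haar DWT: detail coefficients of signal x."""
    x = np.asarray(x, dtype=float)
    if len(x) % 2:
        x = x[:-1]
    return (x[0::2] - x[1::2]) / np.sqrt(2.0)

def detect_events(signal, frame=256, threshold=3.0):
    """Flag frames whose detail-coefficient energy exceeds
    `threshold` times the median frame energy (noise floor)."""
    energies = []
    for start in range(0, len(signal) - frame + 1, frame):
        d = haar_dwt_detail(signal[start:start + frame])
        energies.append(np.sum(d * d))
    energies = np.array(energies)
    floor = np.median(energies) + 1e-12
    return np.nonzero(energies > threshold * floor)[0]

# Example: background noise with one impulsive burst in the middle.
rng = np.random.default_rng(0)
x = rng.normal(0.0, 0.01, 4096)
x[2048:2080] += rng.normal(0.0, 1.0, 32)   # simulated impulsive sound
print(detect_events(x))                     # frame 8 contains the burst
```

Detail coefficients emphasize rapid sample-to-sample changes, which is why a thresholded detail energy separates impulsive sounds from a stationary noise background.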


IEEE Automatic Speech Recognition and Understanding Workshop | 2003

Audio packet loss over IP and speech recognition

Pedro Mayorga; Laurent Besacier; Richard Lamy; Jean-François Serignat

This paper deals with the effects of packet loss on speech recognition over IP connections. The performance of our continuous French speech recognition system is evaluated for different transmission scenarios. A packet loss simulation model is first proposed in order to simulate different channel degradation conditions. The packet loss problem is also investigated in real transmissions over IP. Because the impact of packet loss may differ according to the speech coder used to transmit the data, different transmission conditions with different audio codecs are also investigated. Several reconstruction strategies to recover lost information are then proposed and tested. Another solution for dialogue applications is also suggested, in which the relative weight of the language and acoustic models is changed according to the packet loss rate. The results show that speech recognition performance can be improved by the solutions presented here.
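The abstract does not specify the packet loss simulation model; a common choice for bursty IP loss is a two-state Gilbert channel, sketched below under that assumption (the parameters `p` and `r` are illustrative, not taken from the paper):

```python
import random

def gilbert_losses(n_packets, p=0.05, r=0.5, seed=1):
    """Two-state Gilbert channel: p = P(good -> bad), r = P(bad -> good).
    A packet sent while the channel is in the bad state is lost, which
    produces the bursty loss patterns typical of IP links."""
    rng = random.Random(seed)
    state_bad = False
    lost = []
    for _ in range(n_packets):
        state_bad = (rng.random() < p) if not state_bad else (rng.random() >= r)
        lost.append(state_bad)
    return lost

losses = gilbert_losses(10000)
rate = sum(losses) / len(losses)
# theoretical steady-state loss rate is p / (p + r) ~ 0.091
print(f"simulated loss rate: {rate:.3f}")
```

Because losses arrive in bursts of mean length 1/r rather than independently, such a model degrades consecutive speech frames, which is exactly the condition under which the reconstruction strategies of the paper matter most.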


Archive | 2010

Complete Sound and Speech Recognition System for Health Smart Homes: Application to the Recognition of Activities of Daily Living

Michel Vacher; Anthony Fleury; François Portet; Jean-François Serignat; Norbert Noury

This chapter presents the AUDITHIS system, which performs real-time sound analysis from eight microphone channels in a Health Smart Home, associated with the autonomous speech analyzer RAPHAEL. The evaluation of AUDITHIS and RAPHAEL in different settings showed that the audio modality is very promising for acquiring information that is not available through other classical sensors. Audio processing is also the most natural way for a human to interact with the environment. Thus, this approach particularly fits Health Smart Homes that include home automation (e.g., voice command) or other high-level interactions (e.g., dialogue). The originality of the work is also to include sounds of daily living as indicators to distinguish distress from normal situations. The first development gave acceptable results for sound recognition (72% correct classification), and we are working on reducing the missed-alarm rate to improve performance in the near future. Although the current system suffers from a number of limitations and we have raised numerous challenges that need to be addressed, the pair AUDITHIS and RAPHAEL is, to the best of our knowledge, one of the first serious attempts to build a real-time system that considers sound and speech analysis for ambient assisted living. This work also includes several evaluations on data acquired from volunteers in real health smart home conditions. Further work will include refinement of the acoustic models to adapt the speech recognition to the aged population, as well as connection to home automation systems.


Proceedings of the 5th Conference on Speech Technology and Human-Computer Dialogue | 2009

Speech recognition in a smart home: Some experiments for telemonitoring

Michel Vacher; Noe Guirand; Jean-François Serignat; Anthony Fleury; Norbert Noury

Because of the aging of the population, low-cost solutions are required to help people with loss of autonomy stay at home rather than in public health centers. One solution is to assist human operators with smart information systems. In this context, position and physiological sensors already provide important information, but there are few studies on the utility of sound in the patient's habitation. However, sound classification and speech recognition may greatly increase the versatility of such a system: distress situations can be characterized by detecting short sentences or words uttered by the patient. Moreover, analysis and classification of the sounds emitted in the patient's habitation may be useful for monitoring the patient's activity. In this paper, we present a global speech and sound recognition system that can be set up in a flat. Eight microphones were placed in the Health Smart Home of Grenoble (named HIS, a real living flat of 47 m²) to automatically analyze and classify different sounds and speech utterances (e.g., normal or distress French sentences). Sounds are clustered into eight classes, but this aspect is not discussed in this paper. For speech signals, an input utterance is recognized and a subsequent process classifies it as normal or distress by analyzing the presence of distress keywords. An experimental protocol was defined and the system was evaluated in uncontrolled conditions in which heterogeneous speakers were asked to utter predetermined sentences in the HIS. The results of this experiment, in which ten subjects were involved, are presented; the global error rate was 15.6%. Moreover, noise suppression techniques were incorporated into the speech and sound recognition system in order to suppress the noise emitted by known sources such as TV or radio. An experimental protocol was defined and tested by four speakers in real conditions inside a room. Finally, we discuss the results of this experiment as a function of the noise source: speech or music.
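The normal/distress decision described here reduces to keyword spotting over the recognizer's output. A minimal sketch of that final step is shown below; the keyword list is hypothetical (the system actually used French sentences, and its vocabulary is not given in the abstract):

```python
# Hypothetical distress vocabulary; the paper's actual (French) keyword
# list is not reproduced here.
DISTRESS_KEYWORDS = {"help", "doctor", "fall", "emergency"}

def classify_utterance(recognized_words):
    """Label a recognized utterance as 'distress' if it contains any
    distress keyword, otherwise 'normal'."""
    words = {w.lower() for w in recognized_words}
    return "distress" if words & DISTRESS_KEYWORDS else "normal"

print(classify_utterance(["please", "call", "a", "doctor"]))   # distress
print(classify_utterance(["the", "kettle", "is", "boiling"]))  # normal
```

A set intersection keeps the check O(1) per word, which matters when the classifier runs on every recognized utterance in real time.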


International Conference of the IEEE Engineering in Medicine and Biology Society | 2003

First steps in data fusion between a multichannel audio acquisition and an information system for home healthcare

G. Virone; Dan Istrate; Michel Vacher; Norbert Noury; Jean-François Serignat; J. Demongeot


2nd Conference on Biomedical Engineering | 2004

Sound Detection and Classification for Medical Telesurvey

Monique Vacher; Dan Istrate; Laurent Besacier; Jean-François Serignat; Eric Castelli


European Signal Processing Conference | 2004

Sound detection and classification through transient models using wavelet coefficient trees

Michel Vacher; Dan Istrate; Jean-François Serignat


Language Resources and Evaluation | 2004

Spoken and Written Language Resources for Vietnamese.

Viet Bac Le; Do Dat Tran; Eric Castelli; Laurent Besacier; Jean-François Serignat


Smart Object Conference (SOC'2003) | 2003

Smart Audio Sensor for Telemedicine

Michel Vacher; Dan Istrate; Laurent Besacier; Eric Castelli; Jean-François Serignat


Conference of the International Speech Communication Association | 2008

Preliminary evaluation of speech/sound recognition for telemedicine application in a real environment.

Michel Vacher; Anthony Fleury; Jean-François Serignat; Norbert Noury; Hubert Glasson

Collaboration


Dive into Jean-François Serignat's collaboration.

Top Co-Authors

Michel Vacher (Centre national de la recherche scientifique)
Dan Istrate (École Normale Supérieure)
Eric Castelli (Centre national de la recherche scientifique)
Laurent Besacier (Centre national de la recherche scientifique)
Do Dat Tran (Centre national de la recherche scientifique)
François Portet (Centre national de la recherche scientifique)
Monique Vacher (Centre national de la recherche scientifique)
Denis Beautemps (Grenoble Institute of Technology)