Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Bruce Denby is active.

Publication


Featured research published by Bruce Denby.


Clinical Linguistics & Phonetics | 2016

Robust contour tracking in ultrasound tongue image sequences

Kele Xu; Yin Yang; Maureen Stone; Aurore Jaumard-Hakoun; Clémence Leboullenger; Gérard Dreyfus; Pierre Roussel; Bruce Denby

A new contour-tracking algorithm is presented for ultrasound tongue image sequences, which can follow the motion of tongue contours over long durations with good robustness. To cope with missing segments caused by noise, or by the tongue midsagittal surface being parallel to the direction of ultrasound wave propagation, active contours with a contour-similarity constraint are introduced, which can be used to provide ‘prior’ shape information. Also, in order to address accumulation of tracking errors over long sequences, we present an automatic re-initialization technique, based on the complex wavelet image similarity index. Experiments on synthetic data and on real 60 frames per second (fps) data from different subjects demonstrate that the proposed method gives good contour tracking for ultrasound image sequences even over durations of minutes, which can be useful in applications such as speech recognition where very long sequences must be analyzed in their entirety.
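
The re-initialization idea described above can be sketched compactly: track the contour frame to frame with an active contour, and restart from a reference whenever an image-similarity score drops. The sketch below is only illustrative and assumes scikit-image; it uses plain SSIM in place of the complex wavelet similarity index from the paper, and the snake parameters and threshold are arbitrary assumptions.

```python
# Minimal sketch of frame-to-frame contour tracking with similarity-triggered
# re-initialization (illustrative only; the paper uses a contour-similarity
# constraint and a complex wavelet image similarity index, here replaced by
# plain SSIM from scikit-image).
import numpy as np
from skimage.segmentation import active_contour
from skimage.metrics import structural_similarity as ssim

def track_sequence(frames, init_contour, reinit_threshold=0.5):
    """Track a tongue contour across an ultrasound sequence.

    frames           : list of 2-D grayscale images (float arrays)
    init_contour     : (N, 2) array of (row, col) points for the first frame
    reinit_threshold : SSIM value below which tracking is re-initialized
    """
    contours = []
    contour = init_contour
    ref_frame = frames[0]
    for frame in frames:
        # Snake from the previous frame's contour (temporal continuity).
        contour = active_contour(frame, contour, alpha=0.015, beta=10.0, gamma=0.001)
        # If the frame has drifted too far from the reference, restart from
        # the initial contour to limit error accumulation.
        if ssim(ref_frame, frame, data_range=frame.max() - frame.min()) < reinit_threshold:
            contour = init_contour
            ref_frame = frame
        contours.append(contour)
    return contours
```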


International Symposium on Chinese Spoken Language Processing | 2016

Comparison of DCT and autoencoder-based features for DNN-HMM multimodal silent speech recognition

Licheng Liu; Yan Ji; Hongcui Wang; Bruce Denby

Hidden Markov Model and Deep Neural Network-Hidden Markov Model speech recognition performance for a portable ultrasound + video multimodal silent speech interface is investigated using Discrete Cosine Transform and Deep Auto Encoder-based features with a range of dimensionalities. Experimental results show that the two types of features achieve similar Word Error Rate, but that the autoencoder features maintain good performance even for very low-dimension feature vectors, demonstrating potential as a very compact representation of the information in multimodal silent speech data. It is also shown for the first time that the Deep Network/Markov approach, which has been demonstrated to be beneficial for acoustic speech recognition and for articulatory sensor-based silent speech, improves the silent speech recognition performance for video-based silent speech recognition as well.
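
As a rough illustration of the two feature types compared here, the sketch below computes low-order 2-D DCT coefficients for an ultrasound frame and defines a small fully connected autoencoder whose bottleneck activations serve as the compact features. The layer sizes, feature dimensionality, and use of SciPy/PyTorch are assumptions for illustration, not the paper's configuration.

```python
# Sketch of the two feature types compared in the paper: 2-D DCT coefficients
# versus a deep autoencoder bottleneck. The network size and feature
# dimensionality below are illustrative assumptions, not the paper's settings.
import numpy as np
import torch.nn as nn
from scipy.fftpack import dct

def dct_features(frame, n_coeff=30):
    """Keep the lowest-frequency 2-D DCT coefficients of an image frame."""
    coeffs = dct(dct(frame, axis=0, norm='ortho'), axis=1, norm='ortho')
    # Take the top-left (low-frequency) block and flatten it.
    k = int(np.ceil(np.sqrt(n_coeff)))
    return coeffs[:k, :k].ravel()[:n_coeff]

class FrameAutoencoder(nn.Module):
    """Fully connected autoencoder; the bottleneck activations are the features."""
    def __init__(self, n_pixels, bottleneck=30):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_pixels, 512), nn.ReLU(),
            nn.Linear(512, bottleneck))
        self.decoder = nn.Sequential(
            nn.Linear(bottleneck, 512), nn.ReLU(),
            nn.Linear(512, n_pixels))

    def forward(self, x):
        z = self.encoder(x)          # compact feature vector
        return self.decoder(z), z    # reconstruction + features
```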


Speech Communication | 2018

Updating the Silent Speech Challenge benchmark with deep learning

Yan Ji; Licheng Liu; Hongcui Wang; Zhilei Liu; Zhibin Niu; Bruce Denby

The 2010 Silent Speech Challenge benchmark is updated with new results obtained in a Deep Learning strategy, using the same input features and decoding strategy as in the original article. A Word Error Rate of 6.4% is obtained, compared to the published value of 17.4%. Additional results comparing new auto-encoder-based features with the original features at reduced dimensionality, as well as decoding scenarios on two different language models, are also presented. The Silent Speech Challenge archive has been updated to contain both the original and the new auto-encoder features, in addition to the original raw data.
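
For reference, the Word Error Rates quoted above follow the standard definition: the word-level edit distance between hypothesis and reference, divided by the reference length. A minimal, generic implementation (not the benchmark's scoring tool) is sketched below.

```python
# Standard Word Error Rate: Levenshtein distance between reference and
# hypothesis word sequences, divided by the reference length.
def word_error_rate(reference, hypothesis):
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between ref[:i] and hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(ref)][len(hyp)] / max(len(ref), 1)
```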


Mixed Reality and Gamification for Cultural Heritage | 2017

Intangible Cultural Heritage and New Technologies: Challenges and Opportunities for Cultural Preservation and Development

Marilena Alivizatou-Barakou; Alexandros Kitsikidis; Filareti Tsalakanidou; Kosmas Dimitropoulos; Chantas Giannis; Spiros Nikolopoulos; Samer Al Kork; Bruce Denby; Lise Crevier Buchman; Martine Adda-Decker; Claire Pillot-Loiseau; Joëlle Tillmane; Stéphane Dupont; Benjamin Picart; Francesca Pozzi; Michela Ott; Yilmaz Erdal; Vasileios Charisis; Stelios Hadjidimitriou; Marius Cotescu; Christina Volioti; Athanasios Manitsaris; Sotiris Manitsaris; Nikos Grammalidis

Intangible cultural heritage (ICH) is a relatively recent term coined to represent living cultural expressions and practices, which are recognised by communities as distinct aspects of identity. The safeguarding of ICH has become a topic of international concern primarily through the work of United Nations Educational, Scientific and Cultural Organization (UNESCO). However, little research has been done on the role of new technologies in the preservation and transmission of intangible heritage. This chapter examines resources, projects and technologies providing access to ICH and identifies gaps and constraints. It draws on research conducted within the scope of the collaborative research project, i-Treasures. In doing so, it covers the state of the art in technologies that could be employed for access, capture and analysis of ICH in order to highlight how specific new technologies can contribute to the transmission and safeguarding of ICH.


International Conference on Acoustics, Speech, and Signal Processing | 2016

Contour-based 3D tongue motion visualization using ultrasound image sequences

Kele Xu; Yin Yang; Clémence Leboullenger; Pierre Roussel; Bruce Denby

This article describes a contour-based 3D tongue deformation visualization framework using B-mode ultrasound image sequences. A robust, automatic tracking algorithm characterizes tongue motion via a contour, which is then used to drive a generic 3D Finite Element Model (FEM). A novel contour-based 3D dynamic modeling method is presented. Modal reduction and modal warping techniques are applied to model the deformation of the tongue physically and efficiently. This work can be helpful in a variety of fields, such as speech production, silent speech recognition, articulation training, speech disorder study, etc.
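
The modal reduction step mentioned in the abstract can be illustrated with a minimal linear sketch: solve the generalized eigenproblem of the FEM stiffness and mass matrices, keep the lowest-frequency modes, and work in that small modal subspace. The code below assumes precomputed dense K and M matrices and omits modal warping; it illustrates the general technique, not the authors' implementation.

```python
# Minimal sketch of linear modal reduction for a FEM deformable model:
# project the full system onto its lowest-frequency vibration modes so the
# simulation runs in a small modal subspace. K and M are assumed to be
# precomputed stiffness and mass matrices; modal warping (used in the paper
# to handle rotations) is not shown here.
from scipy.linalg import eigh

def modal_basis(K, M, n_modes=10):
    """Return the n_modes lowest-frequency mode shapes of (K, M)."""
    # Generalized eigenproblem  K phi = lambda M phi, eigenvalues ascending.
    eigvals, eigvecs = eigh(K, M, subset_by_index=[0, n_modes - 1])
    return eigvals, eigvecs        # squared frequencies and modal basis Phi

def reduced_displacement(Phi, q):
    """Recover full nodal displacements u = Phi q from modal coordinates q."""
    return Phi @ q
```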


Asia-Pacific Signal and Information Processing Association Annual Summit and Conference | 2015

Automatic tongue contour tracking in ultrasound sequences without manual initialization

Hongcui Wang; Siyu Wang; Bruce Denby; Jianwu Dang

Tracking the movement of the tongue is important for understanding how tongue shape change contributes to speech production and control. Ultrasound imaging is widely used to record real-time information on the tongue surface; however, noise, artefacts, and the presence of spurious edges render automatic detection of tongue contours without manual initialization difficult. In this paper, we propose a method to extract the ultrasound tongue surface contour in a fully automatic way using a three-step procedure: 1) noise reduction using a non-local means filter; 2) use of a quadratic function to roughly fit the surface contour based on points obtained with a Roberts cross operator; and 3) an automatic refinement based on gradient shift and the relative distance of candidate points to the initial rough contour point. Experiments are conducted on isolated vowels and on a continuous utterance of a vowel sequence. The Mean Sum of Distances criterion shows that the proposed method provides results on a par with the popular EdgeTrak algorithm on these two data sets, as compared to hand-scanned contours, but without any manual initialization.
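
A rough sketch of this three-step procedure, using scikit-image equivalents of the filters named in the abstract, is given below. The thresholds, window sizes, and the simplified gradient-based refinement are assumptions for illustration and do not reproduce the authors' exact method.

```python
# Sketch of the three-step pipeline from the abstract, using scikit-image
# equivalents; thresholds and window sizes are illustrative assumptions,
# and the gradient-based refinement is simplified.
import numpy as np
from skimage.restoration import denoise_nl_means
from skimage.filters import roberts

def rough_tongue_contour(frame, edge_quantile=0.99):
    # 1) Noise reduction with a non-local means filter.
    denoised = denoise_nl_means(frame, h=0.05, fast_mode=True)

    # 2) Roberts cross edges, keep the strongest responses, then fit a
    #    quadratic y = a*x^2 + b*x + c through them as a rough contour.
    edges = roberts(denoised)
    rows, cols = np.nonzero(edges > np.quantile(edges, edge_quantile))
    a, b, c = np.polyfit(cols, rows, deg=2)
    xs = np.arange(frame.shape[1])
    rough = a * xs ** 2 + b * xs + c

    # 3) Refinement: for each column, snap the rough point to the strongest
    #    vertical gradient within a small search window.
    grad = np.abs(np.gradient(denoised, axis=0))
    refined = rough.copy()
    for x in xs:
        y0 = int(np.clip(rough[x], 10, frame.shape[0] - 11))
        window = grad[y0 - 10:y0 + 10, x]
        refined[x] = y0 - 10 + np.argmax(window)
    return xs, refined
```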


International Journal of Heritage in the Digital Era | 2015

Acoustic Data Analysis from Multi-Sensor Capture in Rare Singing: Cantu in Paghjella Case Study

Lise Crevier-Buchman; Angelique Amelot; Samer Al Kork; Martine Adda-Decker; Nicolas Audibert; Patrick Chawah; Bruce Denby; Thibault Fux; Aurore Jaumard-Hakoun; Pierre Roussel; Maureen Stone; Jacqueline Vaissière; Kele Xu; Claire Pillot-Loiseau

This paper deals with new capturing technologies to safeguard and transmit endangered intangible cultural heritage, including the Corsican multipart singing technique. The described work, part of the European FP7 i-Treasures project, aims at increasing our knowledge of rare singing techniques. This paper includes (i) a presentation of our light hyper-helmet with 5 non-invasive sensors (microphone, camera, ultrasound sensor, piezoelectric sensor, electroglottograph), (ii) the data acquisition process and software modules for visualization and data analysis, and (iii) a case study on acoustic analysis of voice quality for the UNESCO-labelled traditional Cantu in Paghjella. We have identified specific features for this singing style, such as changes in vocal quality, especially concerning the energy in the speaking and singing formant frequency region, a nasal vibration that seems to occur during singing, as well as laryngeal mechanism characteristics. These capturing and analysis technologies will contribute to defi...
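
One of the quantities discussed above, energy in the speaking/singing formant frequency region, can be approximated from a short audio frame with a simple FFT-based band-energy ratio. The sketch below is generic; the 2-4 kHz band and the Hann window are assumptions for illustration, not the study's analysis settings.

```python
# Sketch of measuring relative spectral energy in a formant band from a short
# audio frame; the 2-4 kHz band for the "singing formant" region is an
# assumption for illustration, not the paper's exact analysis settings.
import numpy as np

def band_energy_ratio(frame, sample_rate, band=(2000.0, 4000.0)):
    """Fraction of frame energy falling inside the given frequency band."""
    windowed = frame * np.hanning(len(frame))
    spectrum = np.abs(np.fft.rfft(windowed)) ** 2
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sample_rate)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    return spectrum[in_band].sum() / spectrum.sum()
```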


Conference of the International Speech Communication Association | 2014

An educational platform to capture, visualize and analyze rare singing

Patrick Chawah; Thibaut Fux; Martine Adda-Decker; Angelique Amelot; Nicolas Audibert; Bruce Denby; Gérard Dreyfus; Aurore Jaumard-Hakoun; Claire Pillot-Loiseau; Pierre Roussel; Maureen Stone; Kele Xu; Lise Crevier Buchman


Special Session on Multimodal Capture, Modeling and Semantic Interpretation for Event Analysis, Retrieval and 3D Visualization | 2015

A Novel Human Interaction Game-like Application to Learn, Perform and Evaluate Modern Contemporary Singing - "Human Beat Box"

S. K. Al Kork; Deniz Ugurca; C. Şahin; Patrick Chawah; Lise Crevier Buchman; Martine Adda-Decker; Kele Xu; Bruce Denby; Pierre Roussel; Benjamin Picart; Stéphane Dupont; Filareti Tsalakanidou; Alexandros Kitsikidis; Francesca Maria Dagnino; Michela Ott; Francesca Pozzi; Maureen Stone; Erdal Yilmaz


Special Session on Multimodal Capture, Modeling and Semantic Interpretation for Event Analysis, Retrieval and 3D Visualization | 2018

Novel 3D Game-like Applications Driven by Body Interactions for Learning Specific Forms of Intangible Cultural Heritage

Erdal Yilmaz; Deniz Ugurca; C. Şahin; Francesca Maria Dagnino; Michela Ott; Francesca Pozzi; Kosmas Dimitropoulos; Filareti Tsalakanidou; Alexandros Kitsikidis; S. K. Al Kork; Kele Xu; Bruce Denby; Pierre Roussel; Patrick Chawah; Lise Crevier Buchman; Martine Adda-Decker; Stéphane Dupont; Benjamin Picart; Joëlle Tilmanne; Marilena Alivizatou; Vassilios S. Charisis; Alina Glushkova; Christina Volioti; Athanasios Manitsaris; Edgar Hemery; Fabien Moutarde; Nikos Grammalidis

Collaboration


Dive into Bruce Denby's collaborations.

Top Co-Authors

Filareti Tsalakanidou, Aristotle University of Thessaloniki
Francesca Pozzi, National Research Council
Michela Ott, National Research Council