Publication


Featured research published by Yossi Adi.


International Conference on Acoustics, Speech, and Signal Processing (ICASSP) | 2017

Sequence segmentation using joint RNN and structured prediction models

Yossi Adi; Joseph Keshet; Emily Cibelli; Matthew Goldrick

We describe and analyze a simple and effective algorithm for sequence segmentation applied to speech processing tasks. We propose a neural architecture composed of two modules trained jointly: a recurrent neural network (RNN) and a structured prediction model. The RNN outputs serve as feature functions for the structured model. The overall model is trained with a structured loss function that can be tailored to the given segmentation task. We demonstrate the effectiveness of our method by applying it to two simple tasks commonly used in phonetic studies: word segmentation and voice onset time segmentation. Results suggest the proposed model is superior to previous methods, obtaining state-of-the-art results on the tested datasets.
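
The joint training described in the abstract can be illustrated with a minimal sketch: a bidirectional RNN provides per-frame feature functions, a candidate segment is scored as the sum of its frame scores, and the whole model is trained with a cost-augmented structured hinge loss. The feature dimensions, the single-segment search space, and the boundary-deviation cost below are assumptions made for this example, not the authors' exact formulation.

```python
# Illustrative sketch only: joint RNN + structured prediction for segmenting a
# single span. Sizes, the hinge-style loss, and the brute-force boundary search
# are assumptions, not the paper's exact setup.
import torch
import torch.nn as nn

class SegmentationScorer(nn.Module):
    """Bidirectional RNN whose per-frame outputs act as feature functions
    for a structured scorer over candidate segmentations."""
    def __init__(self, feat_dim=39, hidden=64):
        super().__init__()
        self.rnn = nn.LSTM(feat_dim, hidden, batch_first=True, bidirectional=True)
        self.w = nn.Linear(2 * hidden, 1)          # per-frame score from RNN features

    def frame_scores(self, frames):                # frames: (T, feat_dim)
        h, _ = self.rnn(frames.unsqueeze(0))       # (1, T, 2 * hidden)
        return self.w(h).squeeze(-1).squeeze(0)    # (T,)

def structured_hinge(phi, gold):
    """Cost-augmented hinge: the gold segment should outscore every competing
    segment by a margin that grows with the boundary error (the task cost)."""
    T = phi.size(0)
    cum = torch.cat([torch.zeros(1), phi.cumsum(0)])   # prefix sums of frame scores
    seg = lambda on, off: cum[off] - cum[on]           # score of segment [on, off)
    gold_score = seg(*gold)
    margins = []
    for on in range(T):
        for off in range(on + 1, T + 1):
            cost = abs(on - gold[0]) + abs(off - gold[1])
            margins.append(seg(on, off) + cost - gold_score)
    return torch.stack(margins).max().clamp(min=0)

model = SegmentationScorer()
frames = torch.randn(50, 39)                 # e.g. 50 frames of 39-dim MFCC features
loss = structured_hinge(model.frame_scores(frames), gold=(10, 30))
loss.backward()                              # gradients flow back into the RNN
```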


International Workshop on Machine Learning for Signal Processing (MLSP) | 2015

Vowel duration measurement using deep neural networks

Yossi Adi; Joseph Keshet; Matthew Goldrick

Vowel durations are most often utilized in studies addressing specific issues in phonetics. Thus far such studies have been hampered by a reliance on subjective, labor-intensive manual annotation. Our goal is to build an algorithm for automatic, accurate measurement of vowel duration, where the input to the algorithm is a speech segment containing one vowel preceded and followed by consonants (CVC). Our algorithm is based on a deep neural network trained at the frame level on manually annotated data from a phonetic study. Specifically, we try two deep-network architectures, a convolutional neural network (CNN) and a deep belief network (DBN), and compare their accuracy to an HMM-based forced aligner. Results suggest that the CNN is better than the DBN, and that the CNN and the HMM-based forced aligner are comparable in accuracy, but neither yielded the same predictions as models fit to manually annotated data.
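
As a rough illustration of the frame-level approach, the sketch below uses a small 1-D convolutional network to predict per-frame vowel probabilities and derives a duration from the longest run of vowel frames. The filter sizes, the 10 ms frame shift, and the threshold-based decoding are assumptions for the example, not the architecture or decoder evaluated in the paper.

```python
# Illustrative sketch only: frame-level CNN classifier plus a simple decode that
# turns per-frame vowel probabilities into a duration estimate.
import torch
import torch.nn as nn

class FrameCNN(nn.Module):
    """1-D CNN over the frame axis, predicting P(frame is inside the vowel)
    for each frame of a CVC segment."""
    def __init__(self, feat_dim=40):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(feat_dim, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(32, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(32, 1, kernel_size=1),
        )

    def forward(self, feats):                        # feats: (T, feat_dim)
        x = feats.t().unsqueeze(0)                   # (1, feat_dim, T)
        return torch.sigmoid(self.net(x)).squeeze()  # (T,) vowel probabilities

def vowel_duration_ms(probs, frame_shift_ms=10.0, thresh=0.5):
    """Decode the longest contiguous run of vowel frames into a duration."""
    best = cur = 0
    for p in probs.tolist():
        cur = cur + 1 if p > thresh else 0
        best = max(best, cur)
    return best * frame_shift_ms

model = FrameCNN()
feats = torch.randn(120, 40)        # e.g. 120 frames of 40-dim filterbank features
duration = vowel_duration_ms(model(feats))
```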


Conference of the International Speech Communication Association (Interspeech) | 2016

Automatic measurement of voice onset time and prevoicing using recurrent neural networks

Yossi Adi; Joseph Keshet; Olga Dmitrieva; Matthew Goldrick

Voice onset time (VOT) is defined as the time difference between the onset of the burst and the onset of voicing. When voicing begins before the burst, the stop is called prevoiced and the VOT is negative; when voicing begins after the burst, the VOT is positive. While most work on automatic measurement of VOT has focused on the positive VOT typical of American English, in many languages the VOT can be negative. We propose an algorithm that estimates whether the stop is prevoiced and measures the corresponding positive or negative VOT. More specifically, the input to the algorithm is a speech segment of arbitrary length containing a single stop consonant, and the output is the time of the burst onset, the duration of the burst, and the time of the prevoicing onset, along with a confidence score. Manually labeled data is used to train a recurrent neural network that models the dynamic temporal behavior of the input signal and outputs the events' onsets and durations. Results suggest that the proposed algorithm is superior to the current state of the art both in terms of the VOT measurement and in terms of prevoicing detection.
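
A minimal sketch of this kind of pipeline is shown below: a bidirectional GRU emits per-frame burst and voicing probabilities, and a simple decode converts their onsets into a signed VOT (negative when prevoiced). The feature dimension, the two-channel output head, and the threshold-based decoding are assumptions for illustration; the paper's actual outputs and decoding differ.

```python
# Illustrative sketch only: recurrent event detector plus a signed-VOT decode.
import torch
import torch.nn as nn

class VOTNet(nn.Module):
    """Bidirectional GRU producing P(burst) and P(voicing) per frame."""
    def __init__(self, feat_dim=63, hidden=64):
        super().__init__()
        self.rnn = nn.GRU(feat_dim, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, 2)           # channel 0: burst, 1: voicing

    def forward(self, feats):                          # feats: (T, feat_dim)
        h, _ = self.rnn(feats.unsqueeze(0))
        return torch.sigmoid(self.head(h)).squeeze(0)  # (T, 2)

def decode_vot_ms(probs, frame_shift_ms=1.0, thresh=0.5):
    """The first frame above threshold in each channel is treated as that event's
    onset; VOT is voicing onset minus burst onset, negative when prevoiced."""
    def onset(channel):
        idx = (probs[:, channel] > thresh).nonzero(as_tuple=True)[0]
        return int(idx[0]) if idx.numel() else None
    burst, voicing = onset(0), onset(1)
    if burst is None or voicing is None:
        return None                                    # event not detected
    return (voicing - burst) * frame_shift_ms          # < 0 indicates prevoicing

model = VOTNet()
feats = torch.randn(200, 63)          # e.g. 200 frames at an assumed 1 ms frame shift
vot = decode_vot_ms(model(feats))
```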


Journal of Experimental Psychology: Learning, Memory, and Cognition | 2018

The influence of lexical selection disruptions on articulation

Matthew Goldrick; Rhonda McClain; Emily Cibelli; Yossi Adi; Erin Gustafson; Cornelia Moers; Joseph Keshet

Interactive models of language production predict that it should be possible to observe long-distance interactions; effects that arise at one level of processing influence multiple subsequent stages of representation and processing. We examine the hypothesis that disruptions arising in nonform-based levels of planning—specifically, lexical selection—should modulate articulatory processing. A novel automatic phonetic analysis method was used to examine productions in a paradigm yielding both general disruptions to formulation processes and, more specifically, overt errors during lexical selection. This analysis method allowed us to examine articulatory disruptions at multiple levels of analysis, from whole words to individual segments. Baseline performance by young adults was contrasted with young speakers’ performance under time pressure (which previous work has argued increases interaction between planning and articulation) and performance by older adults (who may have difficulties inhibiting nontarget representations, leading to heightened interactive effects). The results revealed the presence of interactive effects. Our new analysis techniques revealed these effects were strongest in initial portions of responses, suggesting that speech is initiated as soon as the first segment has been planned. Interactive effects did not increase under response pressure, suggesting interaction between planning and articulation is relatively fixed. Unexpectedly, lexical selection disruptions appeared to yield some degree of facilitation in articulatory processing (possibly reflecting semantic facilitation of target retrieval) and older adults showed weaker, not stronger interactive effects (possibly reflecting weakened connections between lexical and form-level representations).


International Conference on Learning Representations (ICLR) | 2017

Fine-grained Analysis of Sentence Embeddings Using Auxiliary Prediction Tasks

Yossi Adi; Einat Kermany; Yonatan Belinkov; Ofer Lavi; Yoav Goldberg


arXiv: Machine Learning | 2017

Houdini: Fooling Deep Structured Prediction Models

Moustapha Cisse; Yossi Adi; Natalia Neverova; Joseph Keshet


Neural Information Processing Systems (NIPS) | 2017

Houdini: Fooling Deep Structured Visual and Speech Recognition Models with Adversarial Examples

Moustapha Cisse; Yossi Adi; Natalia Neverova; Joseph Keshet


Conference of the International Speech Communication Association (Interspeech) | 2017

Learning Similarity Functions for Pronunciation Variations

Einat Naaman; Yossi Adi; Joseph Keshet


Journal of Machine Learning Research | 2016

StructED: risk minimization in structured prediction

Yossi Adi; Joseph Keshet


USENIX Security Symposium | 2018

Turning Your Weakness Into a Strength: Watermarking Deep Neural Networks by Backdooring

Yossi Adi; Carsten Baum; Moustapha Cisse; Benny Pinkas; Joseph Keshet

Collaboration


Top co-authors of Yossi Adi.


Emily Cibelli

University of California


Yonatan Belinkov

Massachusetts Institute of Technology
