Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Igor Zlokarnik is active.

Publication


Featured research published by Igor Zlokarnik.


Journal of the Acoustical Society of America | 1996

Accurate recovery of articulator positions from acoustics: New conclusions based on human data

John Hogden; Anders Löfqvist; Vince Gracco; Igor Zlokarnik; Philip E. Rubin; Elliot Saltzman

Vocal tract models are often used to study the problem of mapping from the acoustic transfer function to the vocal tract area function (inverse mapping). Unfortunately, results based on vocal tract models are strongly affected by the assumptions underlying the models. In this study, the mapping from acoustics (digitized speech samples) to articulation (measurements of the positions of receiver coils placed on the tongue, jaw, and lips) is examined using human data from a single speaker: simultaneous acoustic and articulator measurements were made for vowel-to-vowel transitions, /g/ closures, and transitions into and out of /g/ closures. Articulator positions were measured using an EMMA system to track coils placed on the lips, jaw, and tongue. Using these data, look-up tables were created that allow articulator positions to be estimated from acoustic signals. On a data set not used for making look-up tables, correlations between estimated and actual coil positions of around 94% and root-mean-squared errors around 2 mm are common for coils on the tongue. An error source evaluation shows that estimating articulator positions from quantized acoustics gives root-mean-squared errors that are typically less than 1 mm greater than the errors that would be obtained from quantizing the articulator positions themselves. This study agrees with and extends previous studies of human data by showing that, for the data studied, speech acoustics can be used to accurately recover articulator positions.
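The look-up-table idea described in the abstract can be sketched as vector quantization: cluster the acoustic frames into a codebook and store, for each acoustic code, the mean articulator coil positions seen with that code during training. The following is a minimal illustration only; the clustering method (plain k-means), codebook size, and function names are assumptions, not the paper's exact procedure.

```python
import numpy as np

def build_lookup_table(acoustic_train, artic_train, n_codes=256, n_iter=20, seed=0):
    """Quantize acoustic frames with a simple k-means codebook and record the
    mean articulator position observed for each acoustic code."""
    rng = np.random.default_rng(seed)
    codes = acoustic_train[rng.choice(len(acoustic_train), n_codes, replace=False)]
    for _ in range(n_iter):
        # assign each acoustic frame to its nearest codebook entry
        d = np.linalg.norm(acoustic_train[:, None] - codes[None], axis=-1)
        assign = d.argmin(axis=1)
        for k in range(n_codes):
            if np.any(assign == k):
                codes[k] = acoustic_train[assign == k].mean(axis=0)
    # table entry k = mean articulator position of frames mapped to code k
    table = np.zeros((n_codes, artic_train.shape[1]))
    for k in range(n_codes):
        sel = assign == k
        table[k] = artic_train[sel].mean(axis=0) if sel.any() else artic_train.mean(axis=0)
    return codes, table

def estimate_articulators(acoustic_test, codes, table):
    """Map each test frame to its nearest acoustic code and return the
    stored articulator position for that code."""
    d = np.linalg.norm(acoustic_test[:, None] - codes[None], axis=-1)
    return table[d.argmin(axis=1)]
```

The quantization-error comparison in the abstract then falls out naturally: the residual error of such a table is bounded below by how finely the acoustic (and articulatory) spaces are quantized.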


Journal of the Acoustical Society of America | 2004

Voice-activated control for electrical device

Igor Zlokarnik; Daniel L. Roth

An apparatus for voice-activated control of an electrical device comprises a receiving arrangement for receiving audio data generated by a user. A voice recognition arrangement is provided for determining whether the received audio data is a command word for controlling the electrical device. The voice recognition arrangement includes a microprocessor for comparing the received audio data with voice recognition data previously stored in the voice recognition arrangement. The voice recognition arrangement generates at least one control signal based on the comparison when the comparison reaches a predetermined threshold value. A power control controls power delivered to the electrical device. The power control is responsive to the at least one control signal generated by the voice recognition arrangement for operating the electrical device in response to the at least one audio command generated by the user. An arrangement for adjusting the predetermined threshold value is provided to cause a control signal to be generated by the voice recognition arrangement when the audio data generated by the user varies from the previously stored voice recognition data.
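The control flow the patent abstract describes, comparing incoming audio features against stored command templates and issuing a control signal only when the match clears an adjustable threshold, can be sketched as follows. Everything here (class name, cosine similarity as the comparison, the feature vectors) is illustrative, not the patent's actual implementation.

```python
import math

class VoiceActivatedSwitch:
    """Toy sketch: match audio features against stored command templates and
    toggle device power when the best match reaches the threshold."""

    def __init__(self, templates, threshold=0.8):
        self.templates = templates   # {command word: stored feature vector}
        self.threshold = threshold   # adjustable predetermined threshold
        self.power_on = False

    def _score(self, a, b):
        # cosine similarity as a stand-in for the patent's comparison step
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(x * x for x in b))
        return dot / (na * nb) if na and nb else 0.0

    def process(self, features):
        """Return the recognized command (the 'control signal'), or None
        if no template comparison reaches the threshold."""
        best_cmd, best = None, 0.0
        for cmd, tmpl in self.templates.items():
            s = self._score(features, tmpl)
            if s > best:
                best_cmd, best = cmd, s
        if best >= self.threshold:
            if best_cmd == "on":
                self.power_on = True     # power control responds to the signal
            elif best_cmd == "off":
                self.power_on = False
            return best_cmd
        return None
```

Lowering `threshold` corresponds to the patent's adjustment arrangement: it lets a control signal fire even when the user's audio varies somewhat from the stored recognition data.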


Journal of the Acoustical Society of America | 1995

Adding articulatory features to acoustic features for automatic speech recognition

Igor Zlokarnik

A hidden‐Markov‐model (HMM) based speech recognition system was evaluated that makes use of simultaneously recorded acoustic and articulatory data. The articulatory measurements were gathered by means of electromagnetic articulography and describe the movement of small coils fixed to the speakers’ tongue and jaw during the production of German V1CV2 sequences [P. Hoole and S. Gfoerer, J. Acoust. Soc. Am. Suppl. 1 87, S123 (1990)]. Using the coordinates of the coil positions as an articulatory representation, acoustic and articulatory features were combined to make up an acoustic–articulatory feature vector. The discriminant power of this combined representation was evaluated for two subjects on a speaker‐dependent isolated word recognition task. When the articulatory measurements were used both for training and testing the HMMs, the articulatory representation was capable of reducing the error rate of comparable acoustic‐based HMMs by a relative percentage of more than 60%. In a separate experiment, the a...
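The combined acoustic-articulatory representation amounts to frame-wise concatenation of the acoustic feature vector with the articulatory coil coordinates before HMM training. A trivial sketch, with dimensions and names chosen for illustration only:

```python
import numpy as np

def combine_features(acoustic, articulatory):
    """Concatenate per-frame acoustic features (e.g. cepstra) with the
    articulatory coil coordinates into one acoustic-articulatory vector.
    Both inputs are (n_frames, n_dims) arrays sampled at the same rate."""
    assert acoustic.shape[0] == articulatory.shape[0], "frame counts must match"
    return np.concatenate([acoustic, articulatory], axis=1)
```

In a real system the two streams would first have to be time-aligned, since acoustic frames and EMA samples are typically recorded at different rates.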


Journal of the Acoustical Society of America | 1995

Articulatory kinematics from the standpoint of automatic speech recognition

Igor Zlokarnik

The discriminant power of articulatory movements was evaluated for six subjects on a speaker‐dependent continuous speech recognition task using a hidden‐Markov‐model‐based speech recognition system. The articulatory measurements were gathered by means of electromagnetic articulography and describe the movement of small coils fixed to the speakers’ tongue, jaw, and lower lip during the production of 108 German sentences. Four different articulatory representations were evaluated: coil displacements and their first three time derivatives (coil velocities, accelerations, and jerks). From these four representations, the coil accelerations performed by far the best in terms of recognition performance, both with and without acoustic features. The superior performance of acceleration features is surprising from the viewpoint of automatic speech recognition based on acoustics, since in the acoustic domain, acceleration features perform worse than static features on speaker‐dependent tasks. From the viewpoint of a...
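The four representations compared in this abstract are the coil displacements and their first three time derivatives. These derivatives can be approximated from the sampled trajectories by finite differences; a sketch follows, in which the 10 ms frame step (`dt`) and the use of `np.gradient` are assumptions rather than details from the paper.

```python
import numpy as np

def articulatory_representations(coil_positions, dt=0.01):
    """From coil displacement trajectories (n_frames, n_coords), derive the
    first three time derivatives: velocity, acceleration, and jerk,
    approximated with central finite differences."""
    velocity = np.gradient(coil_positions, dt, axis=0)
    acceleration = np.gradient(velocity, dt, axis=0)
    jerk = np.gradient(acceleration, dt, axis=0)
    return {
        "displacement": coil_positions,
        "velocity": velocity,
        "acceleration": acceleration,
        "jerk": jerk,
    }
```

Each representation has the same shape as the input, so any of the four can be dropped into the same HMM front end for the comparison the abstract describes.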


Journal of the Acoustical Society of America | 1996

Two cross-linguistic factors underlying tongue shapes for vowels

David Nix; George Papcun; John Hogden; Igor Zlokarnik


Archive | 2005

Automatic speech recognition channel normalization

Igor Zlokarnik; Laurence S. Gillick; Jordan Cohen


Archive | 2005

Speech recognition channel normalization utilizing measured energy values from speech utterance

Igor Zlokarnik; Laurence S. Gillick; Jordan Cohen


Archive | 2011

Automatic speech recognition channel normalization method and system

Igor Zlokarnik; Jordan Cohen; Laurence S. Gillick


Archive | 2005

Automatic speech recognition channel normalization based on measured statistics from initial portions of speech utterances

Igor Zlokarnik; Laurence S. Gillick; Jordan Cohen


Archive | 2005

Normalization of cepstral features for speech recognition

Igor Zlokarnik; Laurence S. Gillick; Jordan Cohen

Collaboration


Dive into Igor Zlokarnik's collaborations.

Top Co-Authors

John Hogden

Los Alamos National Laboratory

David Nix

Los Alamos National Laboratory

George Papcun

Los Alamos National Laboratory
