Publication


Featured research published by Kevin J. Lang.


IEEE Transactions on Acoustics, Speech, and Signal Processing | 1989

Phoneme recognition using time-delay neural networks

Alex Waibel; Toshiyuki Hanazawa; Geoffrey E. Hinton; Kiyohiro Shikano; Kevin J. Lang

The authors present a time-delay neural network (TDNN) approach to phoneme recognition which is characterized by two important properties: (1) using a three-layer arrangement of simple computing units, a hierarchy can be constructed that allows for the formation of arbitrary nonlinear decision surfaces, which the TDNN learns automatically using error backpropagation; and (2) the time-delay arrangement enables the network to discover acoustic-phonetic features and the temporal relationships between them independently of position in time and therefore not blurred by temporal shifts in the input. As a recognition task, the speaker-dependent recognition of the phonemes B, D, and G in varying phonetic contexts was chosen. For comparison, several discrete hidden Markov models (HMM) were trained to perform the same task. Performance evaluation over 1946 testing tokens from three speakers showed that the TDNN achieves a recognition rate of 98.5% correct while the rate obtained by the best of the HMMs was only 93.7%.
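For a concrete picture of the time-delay idea, the sketch below implements a time-delay layer as a shared-weight window sliding over the time axis, with the evidence from all positions summed at the output. This is an illustrative NumPy sketch, not the authors' code; the layer sizes only loosely echo the B/D/G network described in the paper.

    import numpy as np

    def time_delay_layer(x, weights, bias):
        """Apply one time-delay (1-D convolution) layer.

        x:       (T, F) array, T time frames with F features each.
        weights: (K, F, H) array, mapping a window of K consecutive frames
                 to H hidden units; the same weights are reused at every
                 time position, which is what makes the layer shift-invariant.
        bias:    (H,) array.
        Returns an array of shape (T - K + 1, H).
        """
        T, F = x.shape
        K, _, H = weights.shape
        out = np.empty((T - K + 1, H))
        for t in range(T - K + 1):
            window = x[t:t + K]   # K x F slice of the input
            out[t] = np.tanh(np.tensordot(window, weights, axes=([0, 1], [0, 1])) + bias)
        return out

    # Illustrative sizes only: 15 frames of 16 spectral coefficients,
    # a 3-frame window onto 8 hidden units, then a 5-frame window onto
    # 3 phoneme units whose evidence is integrated over time.
    rng = np.random.default_rng(0)
    x = rng.standard_normal((15, 16))
    h1 = time_delay_layer(x, rng.standard_normal((3, 16, 8)) * 0.1, np.zeros(8))
    h2 = time_delay_layer(h1, rng.standard_normal((5, 8, 3)) * 0.1, np.zeros(3))
    scores = h2.sum(axis=0)    # integrate evidence over time: one score per phoneme
    print(scores.argmax())     # predicted class index (B, D or G in the paper's task)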


Neural Networks | 1990

A time-delay neural network architecture for isolated word recognition

Kevin J. Lang; Alex Waibel; Geoffrey E. Hinton

A translation-invariant back-propagation network is described that performs better than a sophisticated continuous acoustic parameter hidden Markov model on a noisy, 100-speaker confusable vocabulary isolated word recognition task. The network's replicated architecture permits it to extract precise information from unaligned training patterns selected by a naive segmentation rule.
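The translation invariance claimed above can be seen in a toy setting: because the same detector weights are replicated at every time step and the responses are summed, shifting a pattern inside the analysis window leaves the score unchanged. The one-dimensional, single-detector demo below is a deliberately simplified illustration, not the paper's network.

    import numpy as np

    rng = np.random.default_rng(1)
    w = rng.standard_normal(5)          # one shared-weight detector, 5-frame window

    def word_score(signal):
        # Slide the detector over every position and integrate the evidence.
        return sum(np.tanh(w @ signal[t:t + 5]) for t in range(len(signal) - 4))

    word = rng.standard_normal(20)      # a fake 20-frame "word"
    early = np.concatenate([np.zeros(10), word, np.zeros(30)])
    late  = np.concatenate([np.zeros(30), word, np.zeros(10)])
    # The summed score is the same wherever the word sits in the window.
    print(np.isclose(word_score(early), word_score(late)))   # True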


International Colloquium on Grammatical Inference | 1998

Results of the Abbadingo One DFA Learning Competition and a New Evidence-Driven State Merging Algorithm

Kevin J. Lang; Barak A. Pearlmutter; Rodney A. Price

This paper first describes the structure and results of the Abbadingo One DFA Learning Competition. The competition was designed to encourage work on algorithms that scale well—both to larger DFAs and to sparser training data. We then describe and discuss the winning algorithm of Rodney Price, which orders state merges according to the amount of evidence in their favor. A second winning algorithm, of Hugues Juille, will be described in a separate paper.
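A highly simplified sketch of the evidence-driven scoring idea follows. It assumes the hypothesis machine is still a prefix-tree acceptor whose states carry optional accept/reject labels; the competition-winning algorithm works on a partially merged DFA and restricts which state pairs are considered, so treat this only as a flavour of how merges are ordered by evidence.

    from math import inf

    # A state is a dict: {"label": None, 0 or 1, "children": {symbol: state}}.
    # The hypothesis automaton starts as a prefix-tree acceptor built from
    # the labelled training strings (construction omitted here).

    def merge_score(a, b):
        """Evidence for merging states a and b: recursively fold b into a
        and count the pairs of labelled states that agree; return -inf on
        any accept/reject conflict, meaning the merge must be rejected."""
        if a["label"] is not None and b["label"] is not None:
            if a["label"] != b["label"]:
                return -inf            # label conflict: illegal merge
            score = 1                  # two labelled states agree
        else:
            score = 0
        for sym, b_child in b["children"].items():
            a_child = a["children"].get(sym)
            if a_child is not None:    # both states move on sym, so the
                sub = merge_score(a_child, b_child)   # target states merge too
                if sub == -inf:
                    return -inf
                score += sub
        return score

    def best_merge(candidate_pairs):
        """Pick the candidate merge with the most evidence in its favour
        (a caller should still reject it if even the best score is -inf)."""
        return max(candidate_pairs, key=lambda pair: merge_score(*pair))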


Conference on Object-Oriented Programming Systems, Languages and Applications | 1986

Oaklisp: an object-oriented Scheme with first class types

Kevin J. Lang; Barak A. Pearlmutter

The Scheme papers demonstrated that Lisp could be made simpler and more expressive by elevating functions to the level of first-class objects. Oaklisp shows that a message-based language can derive similar benefits from having first-class types.


Higher-Order and Symbolic Computation / Lisp and Symbolic Computation | 1988

Oaklisp: An object-oriented dialect of scheme

Kevin J. Lang; Barak A. Pearlmutter

This paper contains a description of Oaklisp, a dialect of Lisp incorporating lexical scoping, multiple inheritance, and first-class types. This description is followed by a revisionist history of the Oaklisp design, in which a crude map of the space of object-oriented Lisps is drawn and some advantages of first-class types are explored. Scoping issues are discussed, with a particular emphasis on instance variables and top-level namespaces. The question of which should come first, the lambda or the object, is addressed, with Oaklisp providing support for the latter approach.
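The following Python analogy (Oaklisp itself is a Lisp dialect, so this is not Oaklisp code) loosely illustrates two of the ideas discussed: types as first-class values, and the object-first answer to the lambda-versus-object question, in which a procedure is just an object that answers a call message.

    class Procedure:
        """Object-first view of lambda: calling is simply another message."""
        def __init__(self, body):
            self.body = body

        def __call__(self, *args):
            return self.body(*args)

    def make_adder(n):
        # Lexical scoping: the returned procedure object captures n.
        return Procedure(lambda x: x + n)

    add3 = make_adder(3)
    print(add3(4))                      # 7

    # First-class types: a type is an ordinary value that can be passed
    # around, stored, and used to make instances.
    def instantiate(some_type, *args):
        return some_type(*args)

    add5 = instantiate(Procedure, lambda x: x + 5)
    print(add5(1))                      # 6
    print(type(Procedure))              # types themselves have types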


Conference on Learning Theory | 1994

Playing the matching-shoulders lob-pass game with logarithmic regret

Joe Kilian; Kevin J. Lang; Barak A. Pearlmutter

The best previous algorithm for the matching shoulders lob-pass game, ARTHUR (Abe and Takeuchi 1993), suffered O(t^(1/2)) regret. We prove that this is the best possible performance for any algorithm that works by accurately estimating the opponent's payoff lines. Then we describe an algorithm which beats that bound and meets the information-theoretic lower bound of O(log t) regret by converging to the best lob rate without accurately estimating the payoff lines. The noise-tolerant binary search procedure that we develop is of independent interest.
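To convey the flavour of a noise-tolerant binary search, the sketch below repeats each comparison query and takes a majority vote so that the search succeeds with high probability even when individual answers lie. This repeat-and-vote scheme is a generic illustration under stated assumptions, not the specific procedure developed in the paper.

    import random
    from math import ceil, log

    def noisy_binary_search(is_above_target, lo, hi, noise=0.1, fail_prob=1e-3):
        """Find the hidden integer t with lo <= t < hi, given a comparison
        oracle is_above_target(x) that should report whether x > t but lies
        independently with probability noise < 1/2."""
        # Enough repetitions that one majority vote is wrong with probability
        # at most fail_prob / (hi - lo) (a Hoeffding-style bound); a union
        # bound over the ~log2(hi - lo) queries then gives overall reliability.
        rounds = ceil(log((hi - lo) / fail_prob) / (2 * (0.5 - noise) ** 2))
        while hi - lo > 1:
            mid = (lo + hi) // 2
            votes = sum(is_above_target(mid) for _ in range(rounds))
            if 2 * votes > rounds:      # majority says mid is above the target
                hi = mid
            else:
                lo = mid
        return lo

    # Hypothetical noisy oracle with a hidden threshold of 6137.
    def oracle(x, target=6137, noise=0.1):
        truth = x > target
        return (not truth) if random.random() < noise else truth

    print(noisy_binary_search(oracle, 0, 10_000))   # prints 6137 with high probability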


International Joint Conference on Artificial Intelligence | 1985

Shape recognition and illusory conjunctions

Geoffrey E. Hinton; Kevin J. Lang


Neural Information Processing Systems | 1989

Dimensionality Reduction and Prior Knowledge in E-Set Recognition

Kevin J. Lang; Geoffrey E. Hinton


Archive | 1997

A Scheme for Secure Pass-Fail Tests

Joe Kilian; Kevin J. Lang


Archive | 1991

The implementation of Oaklisp

Barak A. Pearlmutter; Kevin J. Lang

Collaboration


Dive into Kevin J. Lang's collaboration.

Top Co-Authors

Alex Waibel

Karlsruhe Institute of Technology


Satish Rao

University of California


Kiyohiro Shikano

Nara Institute of Science and Technology
