Ajay N. Jain
Carnegie Mellon University
Publications
Featured research published by Ajay N. Jain.
international conference on acoustics, speech, and signal processing | 1991
Alex Waibel; Ajay N. Jain; Arthur E. McNair; Hiroaki Saito; Alexander G. Hauptmann; Joe Tebelskis
The authors present JANUS, a speech-to-speech translation system that combines diverse processing strategies, including dynamic programming, stochastic techniques, connectionist learning, and traditional AI knowledge representation. JANUS translates continuously spoken English utterances into Japanese and German speech. Overall system performance on a corpus of conference registration conversations is 87%. Two versions of JANUS are compared: one using an LR parser (JANUS 1) and one using a connectionist parser (JANUS 2). Performance results were mixed, with JANUS 1 deriving benefit from a tighter language model and JANUS 2 benefiting from greater flexibility.
Neural Computation | 1991
Ajay N. Jain
A modular, recurrent connectionist network is taught to incrementally parse complex sentences. From input presented one word at a time, the network learns to do semantic role assignment, noun phrase attachment, and clause structure recognition, for sentences with both active and passive constructions and center-embedded clauses. The network makes syntactic and semantic predictions at every step. Previous predictions are revised as expectations are confirmed or violated with the arrival of new information. The network induces its own grammar rules for dynamically transforming an input sequence of words into a syntactic/semantic interpretation. The network generalizes well and is tolerant of ill-formed inputs.
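The core idea of the abstract above, a recurrent network that reads one word at a time and emits a revisable prediction at every step, can be sketched in a few lines. This is an illustrative toy (an Elman-style recurrent unit with random, untrained weights), not the paper's actual network; the vocabulary, role labels, and layer sizes are assumptions for the example.

```python
import numpy as np

# Toy sketch of incremental parsing with a recurrent network: the hidden
# state carries context forward, and a role-label distribution is emitted
# after every word, so earlier guesses can be revised as new words arrive.
rng = np.random.default_rng(0)

VOCAB = {"the": 0, "dog": 1, "chased": 2, "cat": 3}  # assumed toy vocabulary
ROLES = ["agent", "action", "patient"]               # assumed toy label set

HIDDEN = 8
W_xh = rng.normal(scale=0.1, size=(len(VOCAB), HIDDEN))  # input -> hidden
W_hh = rng.normal(scale=0.1, size=(HIDDEN, HIDDEN))      # hidden -> hidden
W_hy = rng.normal(scale=0.1, size=(HIDDEN, len(ROLES)))  # hidden -> roles

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def incremental_parse(words):
    """Return one role distribution per word, produced as each word arrives."""
    h = np.zeros(HIDDEN)
    outputs = []
    for w in words:
        x = np.zeros(len(VOCAB))
        x[VOCAB[w]] = 1.0
        h = np.tanh(x @ W_xh + h @ W_hh)   # recurrent state update
        outputs.append(softmax(h @ W_hy))  # prediction at this time step
    return outputs

preds = incremental_parse(["the", "dog", "chased", "cat"])
```

Because a prediction exists after every prefix of the sentence, the network never has to wait for the end of the utterance, which is what makes the approach suited to incremental, spoken-language input.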
international conference on acoustics, speech, and signal processing | 1990
Ajay N. Jain; Alex Waibel
A modular, recurrent connectionist network architecture which learns to robustly perform incremental parsing of complex sentences is presented. From sequential input, one word at a time, the networks learn to do semantic role assignment, noun phrase attachment, and clause structure recognition for sentences with passive constructions and center-embedded clauses. The networks make syntactic and semantic predictions at every point in time, and previous predictions are revised as expectations are affirmed or violated with the arrival of new information. The networks induce their own grammar rules for dynamically transforming an input sequence of words into a syntactic/semantic interpretation. These networks generalize and display tolerance to input which has been corrupted in ways common in spoken language.
international conference on acoustics, speech, and signal processing | 1992
Ajay N. Jain; Alex Waibel; David S. Touretzky
The authors present PARSEC, a system for generating connectionist parsing networks from example parses. PARSEC is not based on formal grammar systems and is geared toward spoken language tasks. PARSEC networks exhibit three strengths important for application to speech processing: they learn to parse, and generalize well compared to hand-coded grammars; they tolerate several types of noise; and they can learn to use multimodal input. The authors also present the PARSEC architecture, its training algorithms, and performance analyses along several dimensions that demonstrate PARSEC's features. They compare PARSEC's performance to that of traditional grammar-based parsing systems.
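The distinguishing idea above, learning parsing behavior from example parses rather than hand-coded grammar rules, can be illustrated with a deliberately simple stand-in. This is not PARSEC's actual training algorithm (which trains connectionist networks); it is a minimal frequency-based learner, with all data and labels invented for the example, showing how labeled example parses replace explicit rules.

```python
from collections import Counter, defaultdict

def train(example_parses):
    """Induce a word -> role mapping from (words, roles) example parses."""
    counts = defaultdict(Counter)
    for words, roles in example_parses:
        for w, r in zip(words, roles):
            counts[w][r] += 1
    # Most frequent role per word stands in for an induced "rule".
    return {w: c.most_common(1)[0][0] for w, c in counts.items()}

def parse(model, words):
    # Unseen words fall back to a default label rather than failing outright,
    # a crude analogue of tolerating noisy spoken-language input.
    return [model.get(w, "unknown") for w in words]

# Assumed toy training data: labeled example parses, no grammar written down.
examples = [
    (["the", "dog", "ran"], ["det", "agent", "action"]),
    (["a", "cat", "slept"], ["det", "agent", "action"]),
]
model = train(examples)
result = parse(model, ["the", "cat", "ran"])  # a sentence never seen verbatim
```

The learner generalizes to the unseen sentence because each word's behavior was induced from the examples; PARSEC applies the same example-driven principle with far richer, context-sensitive network machinery.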
Archive | 1991
Ajay N. Jain; Alex Waibel
Traditional methods employed in parsing natural language have focused on developing powerful formalisms to represent syntactic and semantic structure along with rules for transforming language into these formalisms. The builders of such systems must accurately anticipate and model all of the language constructs that their systems will encounter. In loosely structured domains such as spoken language the task becomes very difficult. Connectionist networks that learn to transform input word sequences into meaningful target representations may be useful in such cases.
Archive | 1994
David Chapman; Roger Critchlow; Ajay N. Jain; Richard H. Lathrop; Tomás Lozano-Pérez; Thomas G. Dietterich
neural information processing systems | 1989
Ajay N. Jain; Alex Waibel
neural information processing systems | 1993
Thomas G. Dietterich; Ajay N. Jain; Richard H. Lathrop; Tomás Lozano-Pérez
neural information processing systems | 1991
Alex Waibel; Ajay N. Jain; Arthur E. McNair; Joe Tebelskis; Louise Osterholtz; Hiroaki Saito; Otto Schmidbauer; Tilo Sloboda; Monika Woszczyna
Archive | 1989
Ajay N. Jain