Steven L. Lytinen
University of Michigan
Publications
Featured research published by Steven L. Lytinen.
Computers & Mathematics With Applications | 1992
Steven L. Lytinen
This paper surveys representation and processing theories arising out of conceptual dependency (CD) theory. One of the primary characteristics of conceptual dependency was the notion of a canonical form, built out of a small number of primitive representations. Although the notion of primitives has largely been lost in subsequent work, many of the other basic notions of CD have remained. In particular, the idea of building representations around inferential capabilities has prevailed in this family of research. The result is a set of representational structures, all of which are highly knowledge-intensive. The use of these structures in various processing theories has led to knowledge-based theories of understanding, planning, reasoning, and other tasks, which have contrasted sharply with the traditional search-oriented approaches used in other systems.
Computers & Mathematics With Applications | 1992
Steven L. Lytinen
Abstract This paper presents a natural language processing (NLP) system called LINK. LINK is unification-based, and incorporates and extends many features which have been emerging from other NLP research in recent years. In particular, the notions of autonomous syntax and compositional semantics, long staples of NLP systems, have been replaced by a grammar which is much more complex, semantics-oriented, and more reliant on idiomatic constructions, and a semantics which is noncompositional. Processing, too, has been changed from the traditional syntax-driven approach to an approach which relies much more heavily on semantics and domain knowledge, represented in a semantic net. As a result, LINK is able to efficiently process ungrammatical sentences, as well as nonliteral constructions such as metaphor and metonymy. These tasks have been difficult for more traditional NLP systems.
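The single-step syntactic and semantic analysis described above rests on unifying feature structures that carry both kinds of information at once. Below is a minimal sketch of feature-structure unification over nested dictionaries; the feature names (`cat`, `sem`) and values are invented for illustration and are not taken from the LINK implementation.

```python
def unify(a, b):
    """Recursively unify two feature structures (nested dicts).
    Returns the merged structure, or None if the values conflict."""
    if not isinstance(a, dict) or not isinstance(b, dict):
        return a if a == b else None
    result = dict(a)
    for key, value in b.items():
        if key in result:
            merged = unify(result[key], value)
            if merged is None:
                return None  # conflicting values: unification fails
            result[key] = merged
        else:
            result[key] = value
    return result

# Syntax and semantics live in one structure, so a single unification
# step enforces both kinds of constraint at once.
verb = {"cat": "V", "sem": {"action": "ingest"}}
slot = {"cat": "V", "sem": {"actor": "human"}}
assert unify(verb, slot) == {"cat": "V",
                             "sem": {"action": "ingest", "actor": "human"}}
assert unify({"cat": "V"}, {"cat": "N"}) is None  # category clash
```

Because failure surfaces as `None` at any depth, a single failed constraint anywhere in the structure rejects the whole analysis, which is what lets syntactic and semantic checks happen in one step.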
computational intelligence | 1992
Steven L. Lytinen; Robert R. Burridge; Jeffrey D. Kirtner
Based on psychological studies which show that metaphors and other non‐literal constructions are comprehended in the same amount of time as comparable literal constructions, some researchers have concluded that literal meaning is not computed during comprehension of non‐literal constructions. In this paper, we suggest that the empirical evidence does not rule out the possibility that literal meaning is constructed. We present a computational model of comprehension of non‐literal expressions which is consistent with the data, but in which literal meaning is computed. This model has been implemented as part of a unification‐based natural language processing system, called LINK.
Lexical Ambiguity Resolution: Perspectives from Psycholinguistics, Neuropsychology and Artificial Intelligence | 1988
Steven L. Lytinen
Publisher Summary A difficult problem facing natural language understanding systems that use a frame-based representation scheme is the frame selection problem, i.e., the task of selecting the appropriate frame(s) to represent the meaning of a given input. In many previous natural language understanding systems, frame selection has been viewed as a lexical ambiguity problem. In this approach, words that can refer to more than one possible frame are treated as ambiguous; the system's knowledge contains a list of the possible frames to which each such word could refer. Frame selection rules take the form of disambiguation rules, responsible for choosing the sense of an ambiguous word used in a given context. This chapter argues that treating frame selection as lexical disambiguation is a mistake for a large class of words, known as vague words. It proposes an approach in which frame selection for these words is performed by a concept refinement process. In this approach, the representation of a vague word is initially vague as well, and is refined, or specialized, as more information about the context is filled in by the parser. The chapter also presents a general set of rules, including the script activation rule, the expected event specialization rule, and the slot-filler specialization rule, that can be used to select frames for vague words.
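The slot-filler specialization rule mentioned above can be sketched as a table-driven refinement step: a vague frame is replaced by a more specific one when a slot filler licenses it. The frames, slots, and fillers below are invented for illustration and do not come from the chapter.

```python
# Hypothetical specialization table:
# parent frame -> {(slot, filler): more specific frame}
SPECIALIZATIONS = {
    "TRANSFER": {
        ("object", "money"): "PAY",
        ("object", "medicine"): "ADMINISTER",
    },
}

def refine(frame, slot, filler):
    """Slot-filler specialization: return a more specific frame if a
    rule applies, otherwise keep the current (vague) frame."""
    return SPECIALIZATIONS.get(frame, {}).get((slot, filler), frame)

# A vague word like "give" starts with a general frame; filling its
# object slot refines the representation.
frame = "TRANSFER"
frame = refine(frame, "object", "medicine")
assert frame == "ADMINISTER"
```

The key property of the approach survives even in this toy form: no rule firing leaves the representation vague rather than forcing a premature sense choice.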
international conference on computational linguistics | 1992
Horng-Jyh P. Wu; Steven L. Lytinen
Previous approaches have used a reasoning mechanism called belief percolation to determine the actual speech intent of the speaker (e.g., Wilks and Bien 1979). In this paper, a similar mechanism, called attitude emergence, is proposed for inferring a speaker's attitude toward the propositions in a persuasive discourse. It is shown that in order to adequately interpret the statements in advertisements, associations of relevant semantic information, formed through bridging inferences, must be percolated up through attitude model contexts to enhance and calibrate the interpretation of statements. A system called BUYER is being implemented to recognize speech intents through attitude emergence in the domain of food advertisements taken from Reader's Digest. An example of BUYER's processing is also presented in the paper.
Integrated Computer-aided Engineering | 1994
Steven L. Lytinen; Peter M. Hastings
We present an incremental approach to the task of learning word definitions, including both syntactic and semantic information, from the use of unknown words in context. It is implemented in a unification-based natural language system called LINK. The approach fits naturally into LINK's normal language processing. Unification operations, applied routinely during processing, provide the learning algorithm with both syntactic and semantic information, which it uses to formulate hypotheses about word definitions. We describe the LINK system and the learning algorithm, and present the results of an empirical test, in which the algorithm was used on a limited-domain application corpus.
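The idea of deriving a word definition from the constraints a successful parse imposes can be sketched as follows. The constraint format and the intersection heuristic for repeated sightings are assumptions for illustration, not the paper's actual formulation.

```python
def hypothesize(unknown_word, slot_constraints, lexicon):
    """When a parse succeeds with the unknown word filling some slot,
    take the slot's syntactic and semantic constraints as the word's
    hypothesized definition; on later sightings, keep only the
    constraints consistent across every context seen so far."""
    prior = lexicon.get(unknown_word)
    if prior is None:
        lexicon[unknown_word] = dict(slot_constraints)
    else:
        lexicon[unknown_word] = {k: v for k, v in prior.items()
                                 if slot_constraints.get(k) == v}
    return lexicon[unknown_word]

# Two sightings of a nonsense word in different contexts: the category
# survives, the conflicting semantic guess is retracted.
lexicon = {}
hypothesize("glorp", {"cat": "N", "sem": "tool"}, lexicon)
hypothesize("glorp", {"cat": "N", "sem": "device"}, lexicon)
assert lexicon["glorp"] == {"cat": "N"}
```

This mirrors the incremental character described in the abstract: each new context can only narrow, never contradict, the accumulated hypothesis.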
MUC4 '92 Proceedings of the 4th conference on Message understanding | 1992
Steven L. Lytinen; Sayan Bhattacharyya; Robert R. Burridge; Peter M. Hastings; Christian R. Huyck; Karen A. Lipinsky; Eric S. McDaniel; Karenann K. Terrell
The University of Michigan's natural language processing system, called LINK, was used in the Fourth Message Understanding System Evaluation (MUC-4). LINK's performance on MUC-4's two test corpora is summarized in figure 1.
MUC4 '92 Proceedings of the 4th conference on Message understanding | 1992
Steven L. Lytinen; Sayan Bhattacharyya; Robert R. Burridge; Peter M. Hastings; Christian R. Huyck; Karen A. Lipinsky; Eric S. McDaniel; Karenann K. Terrell
Over the past five years, we have developed a natural language processing (NLP) system called LINK. LINK is a unification-based system, in which all syntactic and semantic analysis is performed in a single step. Syntactic and semantic information are both represented in the grammar in a uniform manner, similar to HPSG (Pollard and Sag, 1987).
Archive | 1991
Peter M. Hastings; Steven L. Lytinen; Robert K. Lindsay
We present an incremental approach to the task of learning words from context. The approach relies on a hierarchical organization of semantic information. The search through the hierarchy for the meaning of an undefined word is guided by semantic role assignments inferred during processing that involve the undefined word. Our approach has been implemented in a natural language processing system called LINK. We present an empirical test of our learning method, and discuss the test results.
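The hierarchical search described above can be sketched as follows. The concept hierarchy, role annotations, and "most general consistent concept" heuristic below are invented for illustration and are not taken from the paper.

```python
# Hypothetical concept hierarchy and the semantic roles each concept
# can fill; deeper concepts can fill more roles.
HIERARCHY = {
    "THING": ["PHYSICAL-OBJECT", "ABSTRACT"],
    "PHYSICAL-OBJECT": ["FOOD", "TOOL"],
    "FOOD": [], "TOOL": [], "ABSTRACT": [],
}
ROLES = {
    "THING": set(), "ABSTRACT": set(),
    "PHYSICAL-OBJECT": {"object"},
    "FOOD": {"object", "ingested"},
    "TOOL": {"object", "instrument"},
}

def parents(concept):
    return {p for p, kids in HIERARCHY.items() if concept in kids}

def locate(observed_roles):
    """Return the most general concepts consistent with every role the
    undefined word has been observed to fill (conservative: never
    commit deeper in the hierarchy than the evidence warrants)."""
    consistent = [c for c in ROLES if observed_roles <= ROLES[c]]
    return [c for c in consistent if not parents(c) & set(consistent)]

# A role only FOOD can satisfy pins the word down; a generic role
# leaves it at the more general concept.
assert locate({"object", "ingested"}) == ["FOOD"]
assert locate({"object"}) == ["PHYSICAL-OBJECT"]
```

The sketch shows how role assignments inferred during processing act as the search's guidance: each observed role prunes away branches of the hierarchy whose concepts cannot fill it.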
Archive | 1994
Peter M. Hastings; Steven L. Lytinen