Brian M. Slator
North Dakota State University
Publications
Featured research published by Brian M. Slator.
Machine Translation | 1990
Yorick Wilks; Dan Fass; Cheng-ming Guo; James E. McDonald; Tony Plate; Brian M. Slator
Machine readable dictionaries (MRDs) contain knowledge about language and the world essential for tasks in natural language processing (NLP). However, this knowledge, collected and recorded by lexicographers for human readers, is not presented in a form that allows MRDs to be used directly for NLP tasks. What is badly needed are machine tractable dictionaries (MTDs): MRDs transformed into a format usable for NLP. This paper discusses three different but related large-scale computational methods to transform MRDs into MTDs. The MRD used is the Longman Dictionary of Contemporary English (LDOCE). The three methods differ in the amount of knowledge they start with and the kinds of knowledge they provide. All require some handcoding of initial information but are largely automatic. Method I, a statistical approach, uses the least handcoding. It generates “relatedness” networks for words in LDOCE and presents a method for partial word sense disambiguation. Method II employs the most handcoding because it develops and builds lexical entries for a very carefully controlled defining vocabulary of 2,000 word senses (1,000 words). The payoff is that the method provides an MTD containing highly structured semantic information. Method III requires the handcoding of a grammar and the semantic patterns used by its parser, but not the handcoding of any lexical material, because the method builds up lexical material from sources wholly within LDOCE. The information extracted is a set of individually weak sources of evidence that can be combined to give a strong and determinate linguistic database.
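As a rough illustration of the Method I idea, the sketch below builds a co-occurrence-based “relatedness” measure over a toy set of dictionary definitions and uses it to score competing senses of an ambiguous word against context words. The toy entries and sense keys are hypothetical stand-ins for the licensed LDOCE data, and the scoring is a simplification, not the paper's actual procedure.

```python
# Minimal sketch of a "relatedness" network over definition texts,
# assuming a toy dictionary in place of the real LDOCE data.
from collections import defaultdict
from itertools import combinations

toy_dictionary = {
    "bank_1": "an organization that keeps and lends money",
    "bank_2": "the land along the side of a river",
    "money_1": "coins or notes used to buy things",
    "river_1": "a wide flow of water across land",
}

# Build a symmetric co-occurrence network over the definition vocabulary.
cooccur = defaultdict(int)
for definition in toy_dictionary.values():
    words = set(definition.split())
    for a, b in combinations(sorted(words), 2):
        cooccur[(a, b)] += 1

def relatedness(w1, w2):
    """Co-occurrence count of two words across all definitions."""
    a, b = sorted((w1, w2))
    return cooccur.get((a, b), 0)

def score_sense(sense_key, context_words):
    """Partial disambiguation: sum relatedness of definition words to context."""
    definition_words = set(toy_dictionary[sense_key].split())
    return sum(relatedness(d, c) for d in definition_words for c in context_words)

if __name__ == "__main__":
    context = ["water", "land", "flow"]
    for sense in ("bank_1", "bank_2"):
        print(sense, score_sense(sense, context))  # bank_2 (river bank) should win
```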
international conference on computational linguistics | 1990
Louise Guthrie; Brian M. Slator; Yorick Wilks; Rebecca F. Bruce
We describe a technique for automatically constructing a taxonomy of word senses from a machine readable dictionary. Previous taxonomies developed from dictionaries have two properties in common. First, they are based on a somewhat loosely defined notion of the IS-A relation. Second, they require human intervention to identify the sense of the genus term being used. We believe that for taxonomies of this type to serve a useful role in subsequent natural language processing tasks, the taxonomy must be based on a consistent use of the IS-A relation which allows inheritance and transitivity. We show that hierarchies of this type can be automatically constructed by using the semantic category codes and the subject codes of the Longman Dictionary of Contemporary English (LDOCE) to disambiguate the genus terms in noun definitions. In addition, we discuss how certain genus terms give rise to other semantic relations between definitions.
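The sketch below illustrates the general genus-based construction under heavily simplified assumptions: a made-up "category" code stands in for the LDOCE box and subject codes, the genus term is taken naively as the first non-article word of each definition, and disambiguation simply prefers the genus sense whose code matches the entry being defined. It is an illustration of the idea, not the authors' algorithm.

```python
# Toy entries: (word, sense number) -> hypothetical category code and definition.
ENTRIES = {
    ("spaniel", 1): {"category": "animal", "definition": "a dog with long ears"},
    ("dog", 1):     {"category": "animal", "definition": "an animal kept as a pet"},
    ("dog", 2):     {"category": "object", "definition": "a device used for gripping and holding"},
    ("animal", 1):  {"category": "animal", "definition": "a living creature"},
}

STOP = {"a", "an", "the"}

def genus_term(definition):
    """Naive genus extraction: first non-article word of the definition."""
    for word in definition.split():
        if word not in STOP:
            return word
    return None

def disambiguate_genus(word, category):
    """Pick the sense of the genus word whose category code matches the entry's."""
    candidates = [key for key in ENTRIES if key[0] == word]
    for key in candidates:
        if ENTRIES[key]["category"] == category:
            return key
    return candidates[0] if candidates else None

def build_isa_links():
    """One IS-A link per entry whose genus term is itself a dictionary entry."""
    links = {}
    for key, entry in ENTRIES.items():
        genus = genus_term(entry["definition"])
        parent = disambiguate_genus(genus, entry["category"])
        if parent and parent != key:
            links[key] = parent  # key IS-A parent
    return links

if __name__ == "__main__":
    for child, parent in build_isa_links().items():
        print(child, "IS-A", parent)
```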
international conference on computational linguistics | 1988
Yorick Wilks; Dan Fass; Cheng-ming Guo; James E. McDonald; Tony Plate; Brian M. Slator
This paper discusses three different but related large-scale computational methods for the transformation of machine readable dictionaries (MRDs) into machine tractable dictionaries, i.e., MRDs converted into a format usable for natural language processing tasks. The MRD used is The Longman Dictionary of Contemporary English.
Communications of The ACM | 1986
Brian M. Slator; Matthew P. Anderson; Walt Conley
Appropriate mnemonic feedback built into a natural-language interface can act as a teacher to help users acquire formal-language skills as they work, without a large initial investment of effort in a learning period.
Journal of Network and Computer Applications | 1999
Brian M. Slator; Paul Juell; Philip E. McClean; Bernhardt Saini-Eidukat; Donald P. Schwert; Alan R. White; Curt Hill
WWWIC, the NDSU World Wide Web Instructional Committee, is engaged in developing a range of virtual environments for education. These projects span disciplines from earth science to anthropology and from business to biology. However, all of them share a strategy, a set of assumptions, an approach to assessment, and an emerging tool set, which allows each to build on the insights and advances of the others.
international conference on information technology: new generations | 2010
Omar El Ariss; Dianxiang Xu; Santosh Dandey; Bradley Vender; Philip E. McClean; Brian M. Slator
In this paper we propose a testing strategy that targets Java applications with complex GUI structures and event interactions. We present a capture and replay testing technique that can be employed for different testing purposes: GUI convergence, functional testing, and regression testing. The proposed strategy improves substantially, in several respects, on standard capture and replay tools. This is done by combining a model-based testing approach with the capture and replay approach and by implementing several automated test oracles. We first model the behavior of the system from the functional specifications or from a trusted version of the system. Tests are then derived from this model to exercise the system, in order to ensure correct functional behavior and to cover goal-oriented interactions. The case study applies the test strategy to a role-based, multi-user computer game to demonstrate the usefulness and importance of this approach.
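A minimal sketch of the combined idea, assuming hypothetical event and state names rather than the authors' tool: captured GUI events are replayed against a small state-machine model derived from the specification, and the model serves as an automated oracle for the states the application actually reaches.

```python
# Model of expected behaviour: (state, event) -> next state.
MODEL = {
    ("login_screen", "submit_credentials"): "lobby",
    ("lobby", "join_game"): "game_room",
    ("game_room", "leave_game"): "lobby",
}

def capture_session():
    """Stand-in for the capture phase; a real tool would record live GUI events."""
    return [
        ("submit_credentials", "lobby"),  # (event, state observed after the event)
        ("join_game", "game_room"),
        ("leave_game", "lobby"),
    ]

def replay_and_check(captured, start_state="login_screen"):
    """Replay captured events through the model and flag any divergence."""
    state = start_state
    failures = []
    for event, observed_state in captured:
        expected = MODEL.get((state, event))
        if expected is None:
            failures.append(f"model has no transition for {event!r} in state {state!r}")
            break
        if observed_state != expected:
            failures.append(f"after {event!r}: expected {expected!r}, observed {observed_state!r}")
        state = expected
    return failures

if __name__ == "__main__":
    problems = replay_and_check(capture_session())
    print("PASS" if not problems else "\n".join(problems))
```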
intelligent tutoring systems | 1996
Brian M. Slator; Harold Cliff Chaput
People will invest extraordinary time and effort into learning how to play and win a game. Virtual role-playing environments can be a powerful mechanism of instruction, provided they are constructed such that learning how to play and win the game contributes to a player's understanding of real-world concepts and procedures. This paper describes a pedagogical architecture and an implemented application where students assume a role in a simulated multi-media environment and learn about the real world by competing with other players. The game, which teaches principles of micro-economics, is an implementation of a networked, multiplayer, simulation-based, interactive multi-media educational environment that illustrates the principle of learning by learning roles.
Virtual Reality | 2005
Brian M. Slator; Harold Cliff Chaput; Robert Cosmano; Ben Dischinger; Christopher Imdieke; Bradley Vender
Virtual role-playing environments can be a powerful mechanism of instruction, provided they are constructed such that learning how to play and win the game contributes to a player's understanding of real-world concepts and procedures. North Dakota State University (NDSU) provides students with environments to enhance their understanding of geology (Planet Oit), cellular biology (Virtual Cell), programming languages (ProgrammingLand), retailing (Dollar Bay), and history (Blackwood). These systems present a number of opportunities and an equal number of challenges. Players are afforded a role-based, multi-user, 'learn-by-doing' experience, with software agents acting as both environmental effects and tutors, and with the possibility of multi-user cooperation and collaboration. However, technical issues and one important cultural issue present a range of difficulties. The Dollar Bay environment, its particular challenges, and the solutions to these are presented.
Knowledge Acquisition | 1989
Brian M. Slator
Knowledge acquisition from text is identified as a desirable goal and an incremental, bootstrapping approach to this is outlined. As a first step, a subsystem that produces text-specific lexicons from selected machine-readable dictionary definitions has been developed. The input to this subsystem is unconstrained text; the database is the Longman Dictionary of Contemporary English (LDOCE, a machine-readable dictionary which is, itself, simply a special purpose text); the output is a collection of lexical semantic objects, one for every sense of every word in the text. Each lexical semantic object in this lexicon is in a general purpose predicate and frame representation. A relative contextual score is computed for selected objects; these scores provide a simple metric for comparing competing word senses to address the problem of lexical ambiguity. Further, the texts of selected dictionary definitions are analysed to enrich the resulting representation. The result of this processing is a knowledge base suitable for a larger system of Preference Semantics and knowledge-based parsing, the next step in the bootstrapping schedule.
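The sketch below gives a schematic picture of what a text-specific lexicon entry with a relative contextual score might look like. The toy dictionary, the frame slots, and the scoring by overlap between definition words and text words are illustrative assumptions for this sketch, not the paper's actual predicate and frame representation.

```python
# Toy LDOCE-style source: word -> list of sense records.
TOY_LDOCE = {
    "rock": [
        {"sense": 1, "definition": "a large piece of stone"},
        {"sense": 2, "definition": "a kind of loud popular music"},
    ],
}

def build_frame(word, sense_entry, text_words):
    """Build a simple frame for one word sense and score it against the text."""
    definition_words = sense_entry["definition"].split()
    overlap = set(definition_words) & set(text_words)
    return {
        "word": word,
        "sense": sense_entry["sense"],
        "genus": definition_words[-1],   # crude stand-in: last word of the definition
        "context_score": len(overlap),   # relative score for ranking competing senses
    }

def text_specific_lexicon(text):
    """One frame per sense of each text word found in the toy dictionary."""
    words = text.lower().split()
    lexicon = []
    for word in words:
        for sense_entry in TOY_LDOCE.get(word, []):
            lexicon.append(build_frame(word, sense_entry, words))
    return lexicon

if __name__ == "__main__":
    for frame in text_specific_lexicon("the band played loud rock music"):
        print(frame)  # the music sense of "rock" gets the higher contextual score
```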
international workshop/conference on parsing technologies | 1991
Brian M. Slator; Yorick Wilks
In recent years there has been a renewed emphasis on scale in computational linguistics, and a certain disillusionment with the so-called “toy systems” of linguistic and AI models. Recent years have also seen rising interest in the computational analysis of machine-readable dictionaries as a lexical resource for parsing. These two trends are neither accidental nor independent of each other; an obvious place to look for large-scale linguistic information is in existing dictionaries.