Lauren M. Stuart
Purdue University
Publications
Featured research published by Lauren M. Stuart.
Systems, Man and Cybernetics | 2013
Lauren M. Stuart; Saltanat Tazhibayeva; Amy R. Wagoner; Julia M. Taylor
Stylometry is the quantified (often statistical) analysis of author style as a set of (usually morphosyntactic) features expressed in several documents by the author. The focus of this paper is a task to which stylometry is often applied: authorship attribution, the question of identifying or confirming the author of a text based on the known body of work. We analyze a feature set previously introduced in the field, using a tool and corpus already available. Decomposing the set, we identify the features that seem to have contributed the most to accurate performance. In re-composing the set under different objectives - first, for English-only document sets, and then for possible multi-language use - we identify smaller sets of feature combinations that work well together in accurate performance. We then outline our continuing work based on the results we obtain.
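The decompose-and-recompose approach described above can be illustrated with a minimal sketch of feature-based authorship attribution. The features and the nearest-profile comparison below are illustrative assumptions, not the paper's actual feature set or method:

```python
from collections import Counter

# A few illustrative stylometric features; the paper's actual
# (morphosyntactic) feature set is not reproduced here.
FUNCTION_WORDS = ["the", "of", "and", "to", "in"]

def features(text):
    """Map a text to a small stylometric feature vector."""
    words = text.lower().split()
    counts = Counter(words)
    n = len(words)
    vec = [counts[w] / n for w in FUNCTION_WORDS]    # function-word rates
    vec.append(sum(len(w) for w in words) / n)       # mean word length
    vec.append(len(set(words)) / n)                  # type-token ratio
    return vec

def attribute(unknown, candidates):
    """Nearest-profile attribution: pick the author whose known text
    yields the feature vector closest (Euclidean) to the unknown's."""
    u = features(unknown)
    def dist(author):
        v = features(candidates[author])
        return sum((a - b) ** 2 for a, b in zip(u, v)) ** 0.5
    return min(candidates, key=dist)
```

Decomposing a set like this means scoring attribution accuracy with individual features removed or re-grouped, which is the spirit of the analysis the abstract describes.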
Proceedings of the 2013 IEEE/WIC/ACM International Joint Conferences on Web Intelligence (WI) and Intelligent Agent Technologies (IAT) | 2013
Lauren M. Stuart; Saltanat Tazhibayeva; Amy R. Wagoner; Julia M. Taylor
Stylometry is the measurement of certain expressible features of writing style, and its uses include the characterization of authors for recognition in cases of text whose authorship is disputed or unknown. This work builds upon previous investigation into the success of a particular feature set on a particular corpus. We explore the creation and testing of a small corpus spanning multiple languages and character sets, and the building of a feature set for use in author attribution problems over that corpus. In-depth analysis of results is used to motivate further work.
2013 IEEE International Conference on Cybernetics (CYBCO) | 2013
Victor Raskin; Julia M. Taylor; Lauren M. Stuart
The paper starts out with an observation that, in the domain of fuzzy logic, fuzzy sets, computing with words, etc., the charges from the outside that fuzziness equals probability are routinely and calmly rebuffed, but confusing fuzziness with vagueness has not been ultimately dealt with even inside the community. We leave completely aside the category of vagueness that is an artifact of approaches, both in logic and philosophy as well as trends in linguistics, such as formal semantics, that attempt to apply predicate logic of various flavors and complexity to a limited selection of language phenomena, such as quantifiers and scalars that lend themselves to such a treatment. Instead, using a computational semantic approach based on a language-independent ontology and language-specific lexicons, where each entry is anchored in and defined with the help of ontological properties and concepts, the paper claims that, unlike fuzziness, vagueness is not an inherent feature of certain words, phrases, or sentences. In fact, it is suggested that vagueness does not really exist for a human hearer and thus is just a temporary function of discourse, in which the speaker's grain-size level is coarser than that of the hearer. Since hearers handle it routinely by asking for more details, the paper outlines the computational procedure emulating this ability.
Joint IFSA World Congress and NAFIPS Annual Meeting | 2013
Lauren M. Stuart; Julia M. Taylor; Victor Raskin
Prepositional phrase attachment has been explored as a source of both ambiguity and (not unrelated) processing errors. To date, the approaches to resolve this problem in syntactic parsing have been crisp and/or probabilistic, though some approaches look promising for the integration of fuzzy processing. Prepositions, as function words, can be described in syntactic terms, but their interactions with fuzzier content words open them up to fuzziness in use. In order to describe this fuzziness, such that we can compute with it, a set of fuzzy sets and membership functions is presented. The proposed sets and functions describe considerations in prepositional phrase attachment ambiguity, and are discussed in terms of their potential use in a computational parsing system.
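A membership function of the kind the abstract proposes could be sketched as follows. The trapezoidal shape and the "near attachment preference" fuzzy set are illustrative assumptions for this sketch, not the paper's own definitions:

```python
def trapezoid(x, a, b, c, d):
    """Trapezoidal membership function: rises on [a, b], is 1 on
    [b, c], falls on [c, d], and is 0 outside (a, d)."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    if x < b:
        return (x - a) / (b - a)
    return (d - x) / (d - c)

# Illustrative fuzzy set: the degree to which a prepositional phrase
# "prefers" attachment to the nearest candidate head, as a function
# of the distance (in words) between preposition and head.
def near_attachment_preference(distance):
    return trapezoid(distance, -1, 0, 1, 6)
```

Membership degrees from several such sets (distance, preposition class, head animacy, and so on) could then be combined by a fuzzy inference step to rank attachment sites, which is the kind of use the abstract discusses.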
Systems, Man and Cybernetics | 2014
Gilchan Park; Lauren M. Stuart; Julia M. Taylor; Victor Raskin
This paper compares the results of computer and human efforts to determine whether an email is legitimate or a phishing attempt. For this purpose, we have run two series of experiments, one for the computer and the other for human subjects. Both experiments addressed the same corpora, one of phishing emails, and the other of legitimate ones. Both the computer and human subjects were asked to detect which emails were phishing and which were legitimate. The results are interesting, both separately and in comparison. Even at this limited, non-semantic state of computation, they indicate that human and computer competences should complement each other, and that, of course, will lead to the integration of human-accessible semantics into computation.
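A minimal, non-semantic classifier of the sort the computer side of such an experiment might use can be sketched as a surface-cue counter. The cue list and threshold below are illustrative assumptions, not the study's actual features:

```python
# Illustrative surface cues; a real study would use a vetted feature set.
PHISHING_CUES = ["verify your account", "click here", "urgent",
                 "suspended", "confirm your password"]

def phishing_score(email_text):
    """Count how many surface cues appear in the message body."""
    text = email_text.lower()
    return sum(cue in text for cue in PHISHING_CUES)

def is_phishing(email_text, threshold=2):
    """Flag the email when enough cues co-occur."""
    return phishing_score(email_text) >= threshold
```

The abstract's point is precisely that such non-semantic cues and human judgment fail in different ways, which is why it argues the two competences should complement each other.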
Systems, Man and Cybernetics | 2012
Julia M. Taylor; Victor Raskin; Lauren M. Stuart
This paper revises the classic Ontological Semantics theory with regard to the output of the analyzer. We argue that it is not enough to produce a semantic interpretation of the text: syntactic trees should serve not only as clues for semantic processing but also as output in their own right. We show that it is useful to combine the results of syntactic and semantic processing in a single output, while maintaining the unique contributions of each, on a testbed of several sentences containing complex nominals, and we compare these results to the interpretations of human subjects.
Systems, Man and Cybernetics | 2015
Tatiana R. Ringenberg; Lauren M. Stuart; Julia M. Taylor; Victor Raskin
The paper addresses the phenomenon of direct object defaults in text as part of exploring the meaning of the unsaid and making it accessible for computer understanding. It describes a large but reasonably simple computer experiment on the basis of one hypothesis about defaults, namely, that a true default may appear as a direct object only if modified, e.g., Bob ate fresh food but ?Bob ate food. The algorithm reduced over 24,000 occurrences, in Wikipedia for Schools, of the 200 most frequent verbs with and without direct objects, and those direct objects with and without modifiers, to over 200,000 default candidates. The discussion deals with ways of restricting this list further to actual defaults.
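The counting step of such an experiment can be sketched roughly as follows; the input format and the modified-only criterion below are a simplified illustration of the stated hypothesis, not the paper's actual pipeline over Wikipedia for Schools:

```python
from collections import defaultdict

def default_candidates(observations):
    """observations: iterable of (verb, noun, modified) tuples, where
    `modified` is True when the direct object carried a modifier.
    Under the hypothesis above, a (verb, noun) pair is a default
    candidate when the noun occurs as that verb's object only in
    modified form ('ate fresh food' attested, bare 'ate food' not)."""
    seen = defaultdict(lambda: {"modified": 0, "bare": 0})
    for verb, noun, modified in observations:
        seen[(verb, noun)]["modified" if modified else "bare"] += 1
    return sorted(pair for pair, c in seen.items()
                  if c["modified"] > 0 and c["bare"] == 0)
```

Restricting such a candidate list to actual defaults, as the abstract notes, takes further filtering, e.g. frequency thresholds or semantic checks, beyond this counting step.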
Joint IFSA World Congress and NAFIPS Annual Meeting | 2013
Julia M. Taylor; Victor Raskin; Lauren M. Stuart
Following Computing With Words, in its wider sense, this paper examines the treatment of prepositions with fuzzy logic. In particular, the paper looks at the treatment of two prepositions, in and on, from both a theoretical perspective and that of a text-sample analysis. We have shown that the fuzzy analysis of prepositional meaning is helpful for its formal and computational representation, while skirting the thorny issue of the cognitivist metaphorization of abstract prepositional meanings.
Systems, Man and Cybernetics | 2012
Lauren M. Stuart; Julia M. Taylor; Victor Raskin
Natural language understanding systems are increasingly needed for intuitive, efficient interaction with large information stores. The object-centered nature of these stores - that they encode states, attributes, and relationships of objects - may not be best served by the current verb-driven syntactic paradigm. We develop a highly-parallelizable noun-driven syntax in response, and evaluate its performance against that of other syntactic paradigms through a series of search queries rooted in basic processing operations.
Journal of Innovation in Digital Ecosystems | 2016
Courtney Falk; Lauren M. Stuart
This paper presents meaning-based machine learning: the use of semantically meaningful input data in machine learning systems in order to produce output that is meaningful to a human user, where the semantic input comes from the Ontological Semantics Technology theory of natural language processing. We describe how to bridge from knowledge-based natural language processing architectures to traditional machine learning systems, including high-level descriptions of the steps taken. These meaning-based machine learning systems are then applied to unsolved problems in information assurance and security that feature large amounts of natural language text.
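The bridging step can be sketched as flattening a semantic representation into a feature vector a conventional learner can consume. The frame layout and concept names below are illustrative assumptions for this sketch, not Ontological Semantics Technology's actual representation:

```python
def frames_to_vector(frames, concept_vocab):
    """Flatten a list of semantic frames (dicts mapping roles to
    ontological concept labels) into a bag-of-concepts count vector,
    the kind of fixed-width input a standard ML model expects."""
    counts = {c: 0 for c in concept_vocab}
    for frame in frames:
        for value in frame.values():
            if value in counts:
                counts[value] += 1
    return [counts[c] for c in concept_vocab]
```

The point of such a bridge is that the learner's features are ontological concepts rather than raw tokens, so its decisions remain interpretable to a human user, which is the property the abstract emphasizes.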