
Publication


Featured research published by Melanie J. Martin.


Computational Linguistics | 2004

Learning Subjective Language

Janyce Wiebe; Theresa Wilson; Rebecca F. Bruce; Matthew Bell; Melanie J. Martin

Subjectivity in natural language refers to aspects of language used to express opinions, evaluations, and speculations. There are numerous natural language processing applications for which subjectivity analysis is relevant, including information extraction and text categorization. The goal of this work is to learn subjective language from corpora. Clues of subjectivity are generated and tested, including low-frequency words, collocations, and adjectives and verbs identified using distributional similarity. The features are also examined working in concert. The features, generated from different data sets using different procedures, exhibit consistent performance in that they all do better and worse on the same data sets. In addition, this article shows that the density of subjectivity clues in the surrounding context strongly affects how likely it is that a word is subjective, and it provides the results of an annotation study assessing the subjectivity of sentences with high-density features. Finally, the clues are used to perform opinion piece recognition (a type of text categorization and genre detection), demonstrating the utility of the knowledge acquired in this article.
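The context-density finding lends itself to a compact illustration. Below is a minimal sketch, not the authors' implementation, assuming a precompiled set of clue words and a hypothetical window size: for each token, count what fraction of its neighbors are subjectivity clues.

```python
# Illustrative sketch (not the article's code): density of subjectivity
# clues in a fixed window around each token. Clue list and window size
# are hypothetical.
from typing import List, Set

def clue_density(tokens: List[str], clues: Set[str], window: int = 5) -> List[float]:
    """For each token, return the fraction of surrounding tokens
    (within +/- `window`) that are subjectivity clues."""
    densities = []
    for i in range(len(tokens)):
        lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
        context = tokens[lo:i] + tokens[i + 1:hi]
        hits = sum(1 for t in context if t in clues)
        densities.append(hits / len(context) if context else 0.0)
    return densities

tokens = "the movie was an absolute stunning triumph of bizarre imagination".split()
clues = {"absolute", "stunning", "triumph", "bizarre"}
print(clue_density(tokens, clues))
```

Under the article's finding, tokens embedded in high-density contexts are the ones most likely to be subjective instances.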


Annual Meeting of the Special Interest Group on Discourse and Dialogue | 2001

A corpus study of evaluative and speculative language

Janyce Wiebe; Rebecca F. Bruce; Matthew Bell; Melanie J. Martin; Theresa Wilson

This paper presents a corpus study of evaluative and speculative language. Knowledge of such language would be useful in many applications, such as text categorization and summarization. Analyses of annotator agreement and of characteristics of subjective language are performed. This study yields knowledge needed to design effective machine learning systems for identifying subjective language.


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2002

Some Promising Results of Communication-Based Automatic Measures of Team Cognition

Preston A. Kiekel; Nancy J. Cooke; Peter W. Foltz; Jamie C. Gorman; Melanie J. Martin

Some have argued that the most appropriate measure of team cognition is a holistic measure directed at the entire team. In particular, communication data are useful for measuring team cognition because of the holistic nature of the data and because of the connection between communication and declarative cognition. To circumvent the logistical difficulties of working with communication data, the present paper proposes several relatively automatic methods of analysis. Four data types are identified, with low-level physical data vs. content data being one dimension and sequential vs. static data being the other. Methods addressing three of these four data types are proposed; static physical data are not addressed. Latent Semantic Analysis is an automatic method used to assess content, either statically or sequentially. PRONET is useful for addressing either physical or content-based sequential data, and we propose CHUMS to address sequential physical data. The usefulness of each method for predicting team performance data is assessed.
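For readers unfamiliar with the content-analysis side, here is a minimal sketch of an LSA-style pipeline, assuming scikit-learn and toy utterances rather than the study's data: build an utterance-by-term matrix, reduce it with truncated SVD, and compare utterances by cosine similarity in the reduced space.

```python
# Minimal LSA-style sketch; utterances and dimensionality are illustrative,
# not taken from the paper.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import TruncatedSVD

utterances = [
    "turn camera to target area",
    "adjust altitude and airspeed",
    "camera locked on target",
    "climbing to new altitude now",
]
X = CountVectorizer().fit_transform(utterances)     # utterance-term counts
lsa = TruncatedSVD(n_components=2, random_state=0)  # low-rank semantic space
vectors = lsa.fit_transform(X)                      # one vector per utterance

# Semantic similarity between utterances = cosine in the reduced space.
def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cos(vectors[0], vectors[2]))  # camera/target utterances should be closer
```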


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2003

Evaluation of Latent Semantic Analysis-Based Measures of Team Communications Content

Jamie C. Gorman; Peter W. Foltz; Preston A. Kiekel; Melanie J. Martin; Nancy J. Cooke

Team process is thought to mediate team member inputs and team performance. Among the team behaviors identified as process variables, team communications have been widely studied. We view team communications both as a team behavior and as team information processing, or team cognition. Within the context of a Predator Uninhabited Air Vehicle (UAV) synthetic task, we have developed several methods of communications content assessment based on Latent Semantic Analysis (LSA). These methods include: Communications Density (CD), the average task relevance of a team's communications; Lag Coherence (LC), which measures task-relevant topic shifting over UAV missions; and Automatic Tagging (AT), which categorizes team communications. Each method is described in detail. CD and LC are related to UAV team performance, and AT-human agreement is comparable to human-human agreement on content coding. The results are promising for the assessment of teams based on LSA applied to communication content.
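The paper defines CD precisely; as a rough illustration only, the sketch below approximates a Communications-Density-style score as the average cosine similarity between each utterance and a task-relevant reference text. The reference text, TF-IDF vectorization, and toy utterances are assumptions, not the published formula, which operates in a trained LSA space.

```python
# Hedged sketch of a CD-style measure: mean similarity of utterances to a
# task-relevant reference. All text and vectorization choices are invented.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

task_reference = ["navigate the uav to the target and photograph it"]
team_utterances = [
    "heading to the target now",
    "what did you have for lunch",
    "camera ready to photograph",
]

vec = TfidfVectorizer().fit(task_reference + team_utterances)
ref = vec.transform(task_reference).toarray()[0]
utts = vec.transform(team_utterances).toarray()

def cos(a, b):
    na, nb = np.linalg.norm(a), np.linalg.norm(b)
    return float(a @ b / (na * nb)) if na and nb else 0.0

cd = np.mean([cos(u, ref) for u in utts])  # higher = more task-relevant talk
print(round(cd, 3))
```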


Human Factors | 2016

Cross-Level Effects Between Neurophysiology and Communication During Team Training

Jamie C. Gorman; Melanie J. Martin; Terri A. Dunbar; Ronald H. Stevens; Trysha Galloway; Polemnia G. Amazeen; Aaron D. Likens

Objective: We investigated cross-level effects, which are concurrent changes across neural and cognitive-behavioral levels of analysis as teams interact, between neurophysiology and team communication variables under variations in team training. Background: When people work together as a team, they develop neural, cognitive, and behavioral patterns that they would not develop individually. It is currently unknown whether these patterns are associated with each other in the form of cross-level effects. Method: Team-level neurophysiology and latent semantic analysis communication data were collected from submarine teams in a training simulation. We analyzed whether (a) both neural and communication variables change together in response to changes in training segments (briefing, scenario, or debriefing), (b) neural and communication variables mutually discriminate teams of different experience levels, and (c) peak cross-correlations between neural and communication variables identify how the levels are linked. Results: Changes in training segment led to changes in both neural and communication variables, neural and communication variables mutually discriminated between teams of different experience levels, and peak cross-correlations indicated that changes in communication precede changes in neural patterns in more experienced teams. Conclusion: Cross-level effects suggest that teamwork is not reducible to a fundamental level of analysis and that training effects are spread out across neural and cognitive-behavioral levels of analysis. Cross-level effects are important to consider for theories of team performance and practical aspects of team training. Application: Cross-level effects suggest that measurements could be taken at one level (e.g., neural) to assess team experience (or skill) on another level (e.g., cognitive-behavioral).
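The peak cross-correlation analysis in the Method section can be illustrated with synthetic series. The sketch below, assuming two standardized time series and a simple Pearson lagged correlation, finds the lag at which the correlation peaks; a positive lag here corresponds to communication changes preceding neural changes, as reported for the more experienced teams.

```python
# Synthetic lead-lag sketch: find the lag of peak cross-correlation between
# a communication series and a neural series. The data here are invented,
# with the neural series constructed to lag communication by 3 steps.
import numpy as np

rng = np.random.default_rng(0)
comm = rng.standard_normal(200)
neuro = np.roll(comm, 3) + 0.5 * rng.standard_normal(200)

def lagged_corr(x, y, lag):
    """Pearson correlation of x[t] with y[t + lag]."""
    if lag >= 0:
        a, b = x[:len(x) - lag], y[lag:]
    else:
        a, b = x[-lag:], y[:len(y) + lag]
    return float(np.corrcoef(a, b)[0, 1])

lags = range(-10, 11)
corrs = [lagged_corr(comm, neuro, k) for k in lags]
peak = max(zip(lags, corrs), key=lambda t: t[1])
print(peak)  # a positive peak lag: communication changes precede neural ones
```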


North American Chapter of the Association for Computational Linguistics | 2004

Automated team discourse annotation and performance prediction using LSA

Melanie J. Martin; Peter W. Foltz

We describe two approaches to analyzing and tagging team discourse using Latent Semantic Analysis (LSA) to predict team performance. The first approach automatically categorizes the contents of each statement made by each of the three team members using an established set of tags; automatic tag prediction performance was 15% below human agreement. These tagged statements are then used to predict team performance. The second approach measures the semantic content of the dialogue of the team as a whole and accurately predicts the team's performance on a simulated military mission.
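As a rough analogue of the first approach, the sketch below tags new statements by nearest neighbor in a vector space over labeled statements. The tag set, example utterances, and TF-IDF-plus-kNN setup are illustrative assumptions; the paper's own method is LSA-based.

```python
# Illustrative nearest-neighbor tagger: each new statement gets the tag of
# the most similar labeled statement. Tags and examples are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import KNeighborsClassifier

labeled = [
    ("turn left heading two seven zero", "ACTION"),
    ("what is our current altitude", "QUESTION"),
    ("roger that", "ACKNOWLEDGEMENT"),
    ("climb to five thousand feet", "ACTION"),
]
texts, tags = zip(*labeled)

vec = TfidfVectorizer().fit(texts)
clf = KNeighborsClassifier(n_neighbors=1, metric="cosine").fit(
    vec.transform(texts), tags)

print(clf.predict(vec.transform(["descend to three thousand feet"])))
```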


International Conference on Augmented Cognition | 2013

Analysis of Semantic Content and Its Relation to Team Neurophysiology during Submarine Crew Training

Jamie C. Gorman; Melanie J. Martin; Terri A. Dunbar; Ronald H. Stevens; Trysha Galloway

A multi-level framework for analyzing team cognition based on team communication content and team neurophysiology is described. The semantic content of team communication in submarine training crews is quantified using Latent Semantic Analysis (LSA), and their team neurophysiology is quantified using the previously described neurophysiologic synchrony method. In the current study, we validate the LSA communication metrics by demonstrating their sensitivity to variations in training segment and by showing that less experienced (novice) crews can be differentiated from more experienced crews based on the semantic relatedness of their communications. Cross-correlations between an LSA metric and a team neurophysiology metric are explored to examine fluctuations in the lead-lag relationship between team communication and team neurophysiology as a function of training segment and level of team experience. Finally, the implications of this research for team training and assessment are considered.


North American Chapter of the Association for Computational Linguistics | 2010

Reliability and Type of Consumer Health Documents on the World Wide Web: An Annotation Study

Melanie J. Martin

Background: In this paper we present a detailed scheme for annotating medical web pages designed for health care consumers. The annotation is along two axes: first, by reliability (the extent to which the medical information on the page can be trusted), and second, by the type of page (patient leaflet, commercial, link, medical article, testimonial, or support). Results: We analyze inter-rater agreement among three judges for each axis. Inter-rater agreement was moderate (0.77 accuracy, 0.62 F-measure, 0.49 Kappa) on the page reliability axis and good (0.81 accuracy, 0.72 F-measure, 0.73 Kappa) along the page type axis. Conclusions: This study shows promising results: appropriate classes of pages can be defined and used by human annotators to annotate web pages with moderate to good agreement.
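The agreement statistics reported above (accuracy, F-measure, Kappa) can be computed for any pair of annotators with standard tooling. A minimal sketch, assuming scikit-learn and invented labels for two of the three judges:

```python
# Pairwise agreement sketch with invented annotations; the study compares
# three judges, this shows one pair.
from sklearn.metrics import accuracy_score, cohen_kappa_score, f1_score

annotator_a = ["reliable", "unreliable", "reliable", "reliable", "unreliable"]
annotator_b = ["reliable", "reliable", "reliable", "reliable", "unreliable"]

print(accuracy_score(annotator_a, annotator_b))     # raw agreement
print(cohen_kappa_score(annotator_a, annotator_b))  # chance-corrected agreement
print(f1_score(annotator_a, annotator_b, pos_label="reliable"))
```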


International ACM SIGIR Conference on Research and Development in Information Retrieval | 2004

Reliability and verification of natural language text on the world wide web (abstract only)

Melanie J. Martin

The hypothesis that information on the Web can be verified automatically, with minimal user interaction, will be tested by building and evaluating an interactive system. In this paper, verification is defined as a reasonable determination of the truth or correctness of a statement by examination, research, or comparison with similar text. The system will contain modules for reliability ranking, query processing, document retrieval, and document clustering based on agreement. The query processing and document retrieval components will use standard IR techniques. The reliability module will estimate the likelihood that a statement on the Web can be trusted using standards developed by information scientists, as well as linguistic aspects of the page and the link structure of associated web pages. The clustering module will cluster relevant documents based on whether or not they agree or disagree with the statement to be verified. Relevant references are discussed.
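Since the system is proposed rather than built, only its flow can be sketched. Below is a skeletal illustration, assuming TF-IDF retrieval as a stand-in for the standard IR components; the corpus, threshold, and statement are invented, and real agreement clustering would require stance detection rather than lexical similarity.

```python
# Skeletal sketch of the proposed flow; every component here is a
# hypothetical stand-in, since the paper proposes rather than implements
# the system.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

corpus = [
    "studies show vitamin c does not prevent the common cold",
    "vitamin c reliably prevents colds according to this site",
    "our clinic sells cold remedies online",
]
statement = "vitamin c prevents the common cold"

vec = TfidfVectorizer().fit(corpus + [statement])
doc_m = vec.transform(corpus)
stmt_v = vec.transform([statement])

# Retrieval stand-in: keep documents sufficiently similar to the statement.
sims = cosine_similarity(doc_m, stmt_v).ravel()
relevant = [d for d, s in zip(corpus, sims) if s > 0.1]

# Agreement clustering would go here; it needs negation handling and stance
# classification, not just lexical similarity.
print(relevant)
```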


Proceedings of the Annual Meeting of the Cognitive Science Society | 2006

Automated Team Discourse Modeling: Test of Performance and Generalization

Ahmed Abdelali; Peter W. Foltz; Melanie J. Martin; Rob Oberbreckling; Mark Rosenstein

Collaboration


Dive into Melanie J. Martin's collaborations.

Top Co-Authors

Jamie C. Gorman (Georgia Institute of Technology)
Peter W. Foltz (University of Colorado Boulder)
Nancy J. Cooke (Arizona State University)
Preston A. Kiekel (New Mexico State University)
Janyce Wiebe (University of Pittsburgh)
Matthew Bell (University of Pittsburgh)
Rebecca F. Bruce (University of North Carolina at Asheville)
Terri A. Dunbar (Georgia Institute of Technology)
Theresa Wilson (University of Pittsburgh)