Publication


Featured research published by Mark Rosenstein.


Journal of Neurolinguistics | 2010

An automated method to analyze language use in patients with schizophrenia and their first-degree relatives

Brita Elvevåg; Peter W. Foltz; Mark Rosenstein; Lynn E. DeLisi

Communication disturbances are prevalent in schizophrenia, and since it is a heritable illness, these are likely present (albeit in a muted form) in the relatives of patients. Given the time-consuming and often subjective nature of discourse analysis, these deviances are frequently not assayed in large-scale studies. Recent work in computational linguistics and statistically based semantic analysis has shown the potential and power of automated analysis of communication. We present an automated and objective approach to modeling discourse that detects very subtle deviations between probands, their first-degree relatives, and unrelated healthy controls. Although these findings should be regarded as preliminary due to the limitations of the data at our disposal, we present a brief analysis of the models that best differentiate these groups in order to illustrate the utility of the method for future explorations of how language components are differentially affected by familial and illness-related issues.
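The abstract does not describe the modeling pipeline in detail. As a rough illustration of the kind of LSA-style measure this line of work builds on, here is a minimal Python sketch that scores semantic similarity between consecutive utterances; the toy utterances, component count, and use of scikit-learn are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch of an LSA-style coherence feature for discourse analysis.
# Everything here (toy data, 2 components, TF-IDF weighting) is illustrative,
# not the method used in the cited paper.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

utterances = [
    "the weather was cold last winter",
    "we wore heavy coats and scarves all winter",
    "my favorite number is seven",
]

# Project utterances into a small latent semantic space (real work would
# train the space on a large background corpus, not three sentences).
tfidf = TfidfVectorizer().fit_transform(utterances)
lsa = TruncatedSVD(n_components=2, random_state=0).fit_transform(tfidf)

# Coherence proxy: cosine similarity between consecutive utterances;
# abrupt topic shifts show up as low similarity.
for i in range(len(utterances) - 1):
    sim = cosine_similarity(lsa[i:i + 1], lsa[i + 1:i + 2])[0, 0]
    print(f"utterance {i} -> {i + 1}: similarity {sim:.2f}")
```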


Scientific Studies of Reading | 2014

Models of Vocabulary Acquisition: Direct Tests and Text-Derived Simulations of Vocabulary Growth

Andrew Biemiller; Mark Rosenstein; Randall Sparks; Thomas K. Landauer; Peter W. Foltz

Determining word meanings that ought to be taught or introduced is important for educators. A sequence for vocabulary growth can be inferred from many sources, including testing children's knowledge of word meanings at various ages, predicting from print frequency, or using adult-recalled Age of Acquisition. A new approach, Word Maturity, is based on applying Latent Semantic Analysis to patterns of word occurrences in texts used with children. This article reports substantial correlations, in the .67 to .74 range, between Word Maturity estimates and the ages of acquiring word meanings from two studies of children's knowledge of word meanings, controlling for homographs. The agreement among these markedly different methods for determining when word meanings are understood opens up new research avenues. In addition, we found that print frequency is associated with both Word Maturity and tested knowledge of word meanings, and that understanding concrete meanings required less print-frequency exposure than understanding verbally defined meanings.
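To make the Word Maturity idea concrete, here is a schematic Python sketch: build LSA spaces from a "child" slice of a corpus and from the full "adult" corpus, then ask how adult-like a word's meaning already is in the child slice. The toy corpus, anchor words, and second-order-similarity shortcut are all assumptions; the published method is considerably more elaborate.

```python
# Schematic Word Maturity sketch. The corpora, anchors, and the
# anchor-profile comparison are illustrative assumptions, not the
# published procedure.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

def term_vectors(docs, k=20):
    """LSA term vectors from a doc-term SVD: one dense vector per word."""
    vec = TfidfVectorizer()
    X = vec.fit_transform(docs)
    k = min(k, X.shape[0] - 1, X.shape[1] - 1)
    svd = TruncatedSVD(n_components=k, random_state=0).fit(X)
    return {w: svd.components_[:, i] for w, i in vec.vocabulary_.items()}

def anchor_profile(vecs, word, anchors):
    """Word's similarity to fixed anchor words, computed within one space."""
    v = vecs[word].reshape(1, -1)
    return np.array([cosine_similarity(v, vecs[a].reshape(1, -1))[0, 0]
                     for a in anchors])

child_slice = [  # texts a child might have seen by some grade (toy data)
    "the dog can run fast",
    "we run to the park with the dog",
    "she saved money in her piggy bank",
]
adult_corpus = child_slice + [  # the full collection (toy data)
    "the bank approved the loan and transferred the money",
    "interest rates at the bank rose sharply",
]

child_vecs = term_vectors(child_slice)
adult_vecs = term_vectors(adult_corpus)

# Vectors from separately trained spaces are not directly comparable;
# similarity profiles against shared anchor words give a crude common frame.
anchors = ["dog", "run", "money"]
maturity = np.corrcoef(anchor_profile(child_vecs, "bank", anchors),
                       anchor_profile(adult_vecs, "bank", anchors))[0, 1]
print(f"Word Maturity proxy for 'bank': {maturity:.2f}")
```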


Schizophrenia Research | 2015

Language as a biomarker in those at high-risk for psychosis

Mark Rosenstein; Peter W. Foltz; Lynn E. DeLisi; Brita Elvevåg

Letter to the Editor. Fig. 1 (caption): Superimposed frequency distributions, for the control and high-risk (HR) groups, of the predicted probability of HR group membership, colored by actual group membership (probability based on the average across all TAT cards). Five misclassified high-risk participants are in the bar labeled 0.4; four misclassified controls are in the bar labeled 0.6 and one in the bar labeled 0.8. Note that most misclassified participants lie near the 0.5 probability cutoff, and that a control with missing data is the most extreme misclassified control.


Schizophrenia Bulletin | 2017

Thoughts About Disordered Thinking: Measuring and Quantifying the Laws of Order and Disorder

Brita Elvevåg; Peter W. Foltz; Mark Rosenstein; Ramon Ferrer-i-Cancho; Simon De Deyne; Eduardo Mizraji; Alex S. Cohen

1Department of Clinical Medicine, University of Tromsø—The Arctic University of Norway, Tromsø, Norway; 2Norwegian Centre for eHealth Research, University Hospital of North Norway, Tromsø, Norway; 3Institute of Cognitive Science, University of Colorado, Boulder, CO; 4Advanced Computing and Data Science Laboratory, Pearson, Boulder, CO; 5Complexity and Quantitative Linguistics Lab, Departament de Ciències de la Computació, Universitat Politècnica de Catalunya, Barcelona, Spain; 6Computational Cognitive Science Lab, School of Psychology, University of Adelaide, Adelaide, Australia; 7Group of Cognitive Systems Modeling, Biophysics Section, Facultad de Ciencias, Universidad de la República, Montevideo, Uruguay; 8Department of Psychology, Louisiana State University, Baton Rouge, LA


npj Schizophrenia | 2016

Detecting clinically significant events through automated language analysis: Quo imus?

Peter W. Foltz; Mark Rosenstein; Brita Elvevåg

We found the recent paper by Bedi et al. [1] simultaneously exciting, heartening and, sadly, a bit discouraging. It shows that modern statistical natural language processing (NLP) and machine-learning (ML) techniques can potentially be useful as a component of diagnosis, here predicting who among those at risk will eventually transition to full-blown psychosis. This result closely follows our own and others' observations of the value of these techniques in, for example, discriminating patients with schizophrenia from controls [2], discriminating schizophrenia probands, first-degree relatives and unrelated healthy controls [3], differentiating those at high risk of psychosis from unrelated, putatively healthy participants [4], and in a candidate gene study linking language in general to underlying neurobiology [5], all quite encouraging outcomes.

Our disappointment is not with the Bedi et al. [1] paper itself, but that we as a field are, after this long proving period, still at the 'promising' stage. This inertia arises from two primary factors. The first is the use of small, often second-hand data sets produced for other studies, which severely constrains the NLP techniques that can be applied and the generality of the obtained results. The second is that the methodologies applied must become sufficiently assimilated into the field to be used effectively in analyses, so as to provide valid, reliable measures of the constructs of interest. This understanding permits better linking of the appropriate features of language to the underlying etiologies of interest.

To realize the potential of the transformative next steps, we must routinely and systematically strive to obtain larger data sets containing multiple language samples from participants collected over time. This will allow quantifying the joint time course of the disease(s) and changes in language. Increased sample size further improves the methodologies, permitting a move beyond less-reliable cross-validation to the gold standard for validating ML results: a 'hold-out' data set. In such an approach, all modeling is conducted blind to the hold-out set; when modeling is completed, the model is run on the held-out set to measure expected performance in the larger population, thereby ensuring generalization while lowering the risk of overfitting. At least as importantly, realistically sized data sets allow the application of larger combinations of more sophisticated NLP/ML techniques that move beyond the often-used simple word-count features. This permits deeper characterization of more important aspects of language, such as semantic structures and discourse organization, as well as acoustic characteristics [6].

From our perspective, Figure 3 from Bedi et al. [1] is a beautiful, low-dimensional, small, incremental step toward our vision: a truly high-dimensional language-feature space with the potential to align with the aspirational goals of the NIMH Research Domain Criteria by employing language to locate those with severe mental illness at coordinates within this space. Once localized, the features that define the resulting hypothesized clusters can potentially be calibrated for use in early detection, in continuous evaluation of treatment, and in providing links to the biology underlying these diseases, simultaneously superseding our existing diagnostic categories.

But this vision is only achievable with purpose-designed studies containing sufficiently large populations, with a mix of healthy participants and individuals sampled across multiple diagnostic categories. Our field must become versed in more powerful applications of NLP/ML techniques and offer more reproducible methodologies. These results, taken with others, are sufficiently encouraging that it is now time for us to move beyond 'promising'.
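The hold-out procedure described above is standard practice in applied ML; the following minimal Python sketch (synthetic data, scikit-learn, an arbitrary 80/20 split, all assumptions rather than the letter's prescription for any specific study) shows the shape of the workflow.

```python
# Minimal hold-out validation sketch. Data and features are synthetic;
# with random labels the hold-out AUC will hover near chance (0.5).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))        # stand-in for language-derived features
y = rng.integers(0, 2, size=500)      # stand-in for group labels

# Split off the hold-out set FIRST; all modeling stays blind to it.
X_dev, X_hold, y_dev, y_hold = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y)

# Model selection and tuning happen only on the development split
# (cross-validation can still be nested inside this split).
model = LogisticRegression(max_iter=1000).fit(X_dev, y_dev)

# One final evaluation on the hold-out set estimates performance in the
# larger population and guards against overfitting.
auc = roc_auc_score(y_hold, model.predict_proba(X_hold)[:, 1])
print(f"hold-out AUC: {auc:.2f}")
```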


Learning at Scale | 2015

Analysis of a Large-Scale Formative Writing Assessment System with Automated Feedback

Peter W. Foltz; Mark Rosenstein

Formative writing systems with automated scoring give students opportunities to write, receive feedback, and then revise essays in a timely, iterative cycle. This paper describes ongoing investigations of a formative writing tool through mining student data, in order to understand how the system performs and to measure improvement in student writing. The sampled data included over 1.3M student essays written in response to approximately 200 pre-defined prompts, as well as a log of all student actions and computer-generated feedback. Analyses measured and modeled changes in student performance over revisions, the effects of system responses, and the amount of time students spent working on assignments. Implications are discussed for employing large-scale data analytics to improve educational outcomes, to understand the role of feedback in writing, to drive improvements in formative technology, and to aid in designing better kinds of feedback and scaffolding to support students in the writing process.
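As a toy illustration of the kind of revision-level mining described, the following Python/pandas sketch computes score trajectories across revisions; the log schema (column names, scores) is invented here, since the paper's actual data model is not given in the abstract.

```python
# Toy sketch of revision-over-revision analysis of a formative writing log.
# The schema and values are invented for illustration.
import pandas as pd

log = pd.DataFrame({
    "student":  ["s1", "s1", "s1", "s2", "s2"],
    "prompt":   ["p1", "p1", "p1", "p1", "p1"],
    "revision": [1, 2, 3, 1, 2],
    "score":    [2.0, 2.5, 3.0, 3.0, 3.5],
})

# Mean automated score at each revision number: do essays improve as
# students revise in response to feedback?
print(log.groupby("revision")["score"].mean())

# Per-student gain from first to last revision on each prompt.
gain = (log.sort_values("revision")
           .groupby(["student", "prompt"])["score"]
           .agg(lambda s: s.iloc[-1] - s.iloc[0]))
print(gain)
```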


International Journal of Testing | 2018

The Influence of Rater Effects in Training Sets on the Psychometric Quality of Automated Scoring for Writing Assessments

Stefanie A. Wind; Edward W. Wolfe; George Engelhard; Peter W. Foltz; Mark Rosenstein

Automated essay scoring engines (AESEs) are becoming increasingly popular as an efficient method for performance assessments in writing, including many language assessments that are used worldwide. Before they can be used operationally, AESEs must be “trained” using machine-learning techniques that incorporate human ratings. However, the quality of the human ratings used to train the AESEs is rarely examined. As a result, the impact of various rater effects (e.g., severity and centrality) on the quality of AESE-assigned scores is not known. In this study, we use data from a large-scale rater-mediated writing assessment to examine the impact of rater effects on the quality of AESE-assigned scores. Overall, the results suggest that if rater effects are present in the ratings used to train an AESE, the AESE scores may replicate these effects. Implications are discussed in terms of research and practice related to automated scoring.
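The paper's central finding, that an AESE can replicate rater effects present in its training ratings, is easy to demonstrate in miniature. The simulation below is entirely synthetic (invented features, coefficients, and a -0.5 severity shift); it shows only the mechanism, not the study's data or engine.

```python
# Synthetic demonstration: a severity effect in training ratings propagates
# into the trained scoring model. All numbers here are invented.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 5))                         # essay features
true_score = X @ np.array([1.0, 0.5, 0.3, 0.2, 0.1])   # 'fair' scores

# A severe rater systematically scores 0.5 points too low.
severe_ratings = true_score - 0.5 + rng.normal(scale=0.2, size=1000)

# Train the engine on the severe rater's scores.
model = LinearRegression().fit(X, severe_ratings)
pred = model.predict(X)

# The engine reproduces the rater's severity (about -0.5 on average).
print(f"mean bias replicated by the engine: {(pred - true_score).mean():+.2f}")
```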


Cortex | 2014

Category fluency, latent semantic analysis and schizophrenia: a candidate gene approach.

Brita Elvevåg; Peter W. Foltz; Mark Rosenstein; Catherine M. Diaz-Asper; Daniel R. Weinberger


Proceedings of the Annual Meeting of the Cognitive Science Society | 2006

Automated Team Discourse Modeling: Test of Performance and Generalization

Ahmed Abdelali; Peter W. Foltz; Melanie J. Martin; Rob Oberbreckling; Mark Rosenstein


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2007

Predicting Situation Awareness from Team Communications

Cheryl A. Bolstad; Peter W. Foltz; Marita Franzke; Haydee M. Cuevas; Mark Rosenstein; Anthony M. Costello

Collaboration


Dive into Mark Rosenstein's collaborations.

Top Co-Authors

Peter W. Foltz

University of Colorado Boulder

Ahmed Abdelali

New Mexico State University

Alex S. Cohen

Louisiana State University
