Danielle S. McNamara
Arizona State University
Publications
Featured research published by Danielle S. McNamara.
Behavior Research Methods Instruments & Computers | 2004
Arthur C. Graesser; Danielle S. McNamara; Max M. Louwerse; Zhiqiang Cai
Advances in computational linguistics and discourse processing have made it possible to automate many language- and text-processing mechanisms. We have developed a computer tool called Coh-Metrix, which analyzes texts on over 200 measures of cohesion, language, and readability. Its modules use lexicons, part-of-speech classifiers, syntactic parsers, templates, corpora, latent semantic analysis, and other components that are widely used in computational linguistics. After the user enters an English text, Coh-Metrix returns measures requested by the user. In addition, a facility allows the user to store the results of these analyses in data files (such as Text, Excel, and SPSS). Standard text readability formulas scale texts on difficulty by relying on word length and sentence length, whereas Coh-Metrix is sensitive to cohesion relations, world knowledge, and language and discourse characteristics.
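To make the contrast concrete: the "standard text readability formulas" mentioned above depend only on surface features such as word length and sentence length. Below is a minimal sketch of one such formula, the Flesch-Kincaid grade level, with a rough vowel-group syllable counter. This is an illustration of the surface-only approach Coh-Metrix goes beyond, not part of Coh-Metrix itself; the helper names are ours.

```python
import re

def count_syllables(word: str) -> int:
    """Rough syllable estimate: count groups of consecutive vowels (y included)."""
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def flesch_kincaid_grade(text: str) -> float:
    """Flesch-Kincaid grade level. Note it uses only word and sentence
    length -- it is blind to cohesion, world knowledge, and discourse
    structure, which is the limitation the abstract points out."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * len(words) / len(sentences)
            + 11.8 * syllables / len(words)
            - 15.59)
```

Longer words and sentences raise the score regardless of how coherently the sentences connect, which is precisely why cohesion-sensitive measures add value.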
Discourse Processes | 1996
Danielle S. McNamara; Walter Kintsch
Two experiments, theoretically motivated by the construction‐integration model of comprehension (W. Kintsch, 1988), investigated effects of prior knowledge on learning from high‐ and low‐coherence history texts. In Experiment 1, participants’ comprehension was examined through free recall, multiple‐choice questions, and a keyword sorting task. An advantage was found for the high‐coherence text on recall and multiple‐choice questions. However, high‐knowledge readers performed better on the sorting task after reading the low‐coherence text. In Experiment 2, participants’ comprehension was examined through open‐ended questions and the sorting task both immediately and after a 1‐week delay. Little effect of delay was found, and the previous sorting task results failed to replicate. As predicted, high‐knowledge readers performed better on the open‐ended questions after reading the low‐coherence text. Reading times from both experiments indicated that the low‐coherence text requires more inference processes.
Educational Researcher | 2011
Arthur C. Graesser; Danielle S. McNamara; Jonna M. Kulikowich
Computer analyses of text characteristics are often used by reading teachers, researchers, and policy makers when selecting texts for students. The authors of this article identify components of language, discourse, and cognition that underlie traditional automated metrics of text difficulty and their new Coh-Metrix system. Coh-Metrix analyzes texts on multiple measures of language and discourse that are aligned with multilevel theoretical frameworks of comprehension. The authors discuss five major factors that account for most of the variance in texts across grade levels and text categories: word concreteness, syntactic simplicity, referential cohesion, causal cohesion, and narrativity. They consider the importance of both quantitative and qualitative characteristics of texts for assigning the right text to the right student at the right time.
Memory & Cognition | 2001
Danielle S. McNamara; Jennifer L. Scott
In this study, we examine the role of strategy use in working memory (WM) tasks by providing short-term memory (STM) task strategy training to participants. In Experiment 1, the participants received four sessions of training to use a story-formation (i.e., chaining) strategy. There were substantial improvements from pretest to posttest (after training) in terms of both STM and WM task performance. Experiment 2 demonstrated that WM task improvement did not occur for control participants, who were given the same amount of practice but were not provided with strategy instructions. An assessment of participants’ strategy use on the STM task before training indicated that more strategic participants displayed better WM task performance and better verbal skills. These results support our hypothesis that strategy use influences performance on WM tasks.
Written Communication | 2010
Danielle S. McNamara; Scott A. Crossley; Philip M. McCarthy
In this study, a corpus of expert-graded essays, based on a standardized scoring rubric, is computationally evaluated so as to distinguish the differences between those essays that were rated as high and those rated as low. The automated tool, Coh-Metrix, is used to examine the degree to which high- and low-proficiency essays can be predicted by linguistic indices of cohesion (i.e., coreference and connectives), syntactic complexity (e.g., number of words before the main verb, sentence structure overlap), the diversity of words used by the writer, and characteristics of words (e.g., frequency, concreteness, imageability). The three most predictive indices of essay quality in this study were syntactic complexity (as measured by number of words before the main verb), lexical diversity (as measured by the Measure of Textual Lexical Diversity), and word frequency (as measured by Celex, logarithm for all words). Of the 26 validated indices of cohesion from Coh-Metrix, none showed differences between high- and low-proficiency essays, and none correlated with essay ratings. These results indicate that the textual features that characterize good student writing are not aligned with those features that facilitate reading comprehension. Rather, essays judged to be of higher quality were more likely to contain linguistic features associated with text difficulty and sophisticated language.
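One of the predictive indices above, the Measure of Textual Lexical Diversity (MTLD), has a simple core idea: count how many "factors", stretches of text over which the type-token ratio (TTR) stays above a threshold (0.72 in the published measure), the text divides into. A single forward pass can be sketched as below; note the full MTLD averages a forward and a backward pass, so this is a simplification, and the function name is ours.

```python
def mtld_forward(tokens, threshold=0.72):
    """One forward pass of MTLD: count factors, i.e. stretches over which
    the running type-token ratio stays above `threshold`."""
    factors = 0.0
    types, count = set(), 0
    for tok in tokens:
        count += 1
        types.add(tok.lower())
        if len(types) / count <= threshold:
            factors += 1          # a full factor is complete; reset
            types.clear()
            count = 0
    if count > 0:                 # credit the leftover as a partial factor
        ttr = len(types) / count
        factors += (1 - ttr) / (1 - threshold)
    return len(tokens) / factors if factors else float("inf")
```

Highly repetitive text completes factors quickly and scores low; lexically diverse text sustains a high TTR and scores high, which is why MTLD tracks the "diversity of words used by the writer".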
Educational Psychologist | 2005
Arthur C. Graesser; Danielle S. McNamara; Kurt VanLehn
It is well-documented that most students do not have adequate proficiencies in inquiry and metacognition, particularly at deeper levels of comprehension that require explanatory reasoning. These proficiencies are not routinely fostered by teachers or typical tutors, so it is worthwhile to turn to computer-based learning environments. This article describes some of our recent computer systems that were designed to facilitate explanation-centered learning through strategies of inquiry and metacognition while students learn science and technology content. Point&Query augments hypertext, hypermedia, and other learning environments with question-answer facilities that are under the learner's control. AutoTutor and iSTART use animated conversational agents to scaffold strategies of inquiry, metacognition, and explanation construction. AutoTutor coaches students in generating answers to questions that require explanations (e.g., why, what-if, how) by holding a mixed-initiative dialogue in natural language. iSTART models and coaches students in constructing self-explanations and in applying other metacomprehension strategies while reading text. These systems have shown promising results in tests of learning gains and learning strategies.
Psychology of Learning and Motivation | 2009
Danielle S. McNamara; Joe Magliano
The goal of this chapter is to provide the foundation toward developing a more comprehensive model of reading comprehension. To this end, seven prominent comprehension models (Construction–Integration, Structure-Building, Resonance, Event-Indexing, Causal Network, Constructionist, and Landscape) are described, evaluated, and compared. We describe what comprehension models have offered thus far, differences and similarities between them, and what comprehension processes are not included within any of the models, and thus, what should be included in a comprehensive model. Our primary conclusion from the review of this literature is that current models of comprehension are not necessarily contradictory, but rather cover different spectrums of comprehension processes. Further, no one model adequately accounts for a wide variety of reading situations that have been observed and the range of comprehension considered thus far in comprehension models is too limited.
Behavior Research Methods Instruments & Computers | 2004
Danielle S. McNamara; Irwin B. Levinstein; Chutima Boonthum
Interactive Strategy Training for Active Reading and Thinking (iSTART) is a Web-based application that provides young adolescent to college-age students with high-level reading strategy training to improve comprehension of science texts. iSTART is modeled after an effective, human-delivered intervention called self-explanation reading training (SERT), which trains readers to use active reading strategies to self-explain difficult texts more effectively. To make the training more widely available, the Web-based trainer has been developed. Transforming the training from a human-delivered application to a computer-based one has resulted in a highly interactive trainer that adapts its methods to the performance of the students. The iSTART trainer introduces the strategies in a simulated classroom setting with interaction between three animated characters—an instructor character and two student characters—and the human trainee. Thereafter, the trainee identifies the strategies in the explanations of a student character who is guided by an instructor character. Finally, the trainee practices self-explanation under the guidance of an instructor character. We describe this system and discuss how appropriate feedback is generated.
Discourse Processes | 2010
Danielle S. McNamara; Max M. Louwerse; Philip M. McCarthy; Arthur C. Graesser
This study addresses the need in discourse psychology for computational techniques that analyze text on multiple levels of cohesion and text difficulty. Discourse psychologists often investigate phenomena related to discourse processing using lengthy texts containing multiple paragraphs, as opposed to single word and sentence stimuli. Characterizing such texts in terms of cohesion and coherence is challenging. Some computational tools are available, but they are either fragmented over different databases or they assess single, specific features of text. Coh-Metrix is a computational linguistic tool that measures text cohesion and text difficulty on a range of word, sentence, paragraph, and discourse dimensions. This study investigated the validity of Coh-Metrix as a measure of cohesion in text using stimuli from published discourse psychology studies as a benchmark. Results showed that Coh-Metrix indexes of cohesion (individually and combined) significantly distinguished the high- versus low-cohesion versions of these texts. The results also showed that commonly used readability indexes (e.g., Flesch–Kincaid) inappropriately distinguished between low- and high-cohesion texts. These results provide a validation of Coh-Metrix, thereby paving the way for its use by researchers in cognitive science, discourse processes, and education, as well as for textbook writers, professionals in instructional design, and instructors.
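The distinction validated above, cohesion versus surface readability, can be illustrated with a crude analogue of a referential-cohesion index: the proportion of adjacent sentence pairs that share at least one content word. This sketch is not a Coh-Metrix index (which uses part-of-speech tagging and richer overlap definitions); the stopword list and function names are our simplifications.

```python
import re

# Minimal stopword list for illustration only.
STOPWORDS = {"the", "a", "an", "of", "to", "and", "in", "is", "was", "it", "that"}

def content_words(sentence):
    """Lowercased words in a sentence, minus stopwords."""
    return {w for w in re.findall(r"[a-z']+", sentence.lower()) if w not in STOPWORDS}

def adjacent_overlap(text):
    """Proportion of adjacent sentence pairs sharing a content word:
    a rough analogue of a referential-cohesion index."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    pairs = list(zip(sentences, sentences[1:]))
    if not pairs:
        return 0.0
    shared = sum(1 for a, b in pairs if content_words(a) & content_words(b))
    return shared / len(pairs)
```

A high-cohesion revision of a text repeats referents across sentence boundaries and scores near 1.0, while word- and sentence-length formulas such as Flesch-Kincaid are unaffected by such repetition, consistent with the finding that readability indexes distinguished cohesion versions inappropriately.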
Topics in Cognitive Science | 2011
Arthur C. Graesser; Danielle S. McNamara
The proposed multilevel framework of discourse comprehension includes the surface code, the textbase, the situation model, the genre and rhetorical structure, and the pragmatic communication level. We describe these five levels when comprehension succeeds and also when there are communication misalignments and comprehension breakdowns. A computer tool has been developed, called Coh-Metrix, that scales discourse (oral or print) on dozens of measures associated with the first four discourse levels. The measurement of these levels with an automated tool helps researchers track and better understand multilevel discourse comprehension. Two sets of analyses illustrate the utility of Coh-Metrix in discourse theory and educational practice. First, Coh-Metrix was used to measure the cohesion of the textbase and situation model, as well as potential extraneous variables, in a sample of published studies that manipulated text cohesion. This analysis helped us better understand what was precisely manipulated in these studies and the implications for discourse comprehension mechanisms. Second, Coh-Metrix analyses are reported for samples of narrative and science texts in order to advance the argument that traditional text difficulty measures are limited because they fail to accommodate most of the levels of the multilevel discourse comprehension framework.