Publication


Featured research published by Whitney Tabor.


Journal of Learning Disabilities | 2007

Speaking Up for Vocabulary: Reading Skill Differences in Young Adults

David Braze; Whitney Tabor; Donald Shankweiler; W. Einar Mencl

This study is part of a broader project aimed at developing cognitive and neurocognitive profiles of adolescent and young adult readers whose educational and occupational prospects are constrained by their limited literacy skills. We explore the relationships among reading-related abilities in participants ages 16 to 24 years spanning a wide range of reading ability. Two specific questions are addressed: (a) Does the simple view of reading capture all nonrandom variation in reading comprehension? (b) Does orally assessed vocabulary knowledge account for variance in reading comprehension, as predicted by the lexical quality hypothesis? A comprehensive battery of cognitive and educational tests was employed to assess phonological awareness, decoding, verbal working memory, listening comprehension, reading comprehension, word knowledge, and experience with print. In this heterogeneous sample, decoding ability clearly played an important role in reading comprehension. The simple view of reading gave a reasonable fit to the data, although it did not capture all of the reliable variance in reading comprehension as predicted. Orally assessed vocabulary knowledge captured unique variance in reading comprehension even after listening comprehension and decoding skill were accounted for. We explore how a specific connectionist model of lexical representation and lexical access can account for these findings.
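
The two research questions amount to a hierarchical regression: first fit reading comprehension from the simple-view predictors (decoding and listening comprehension, including their product), then ask whether orally assessed vocabulary adds unique variance. The sketch below illustrates that logic on synthetic data; the variable names, effect sizes, and plain least-squares fit are invented for illustration and are not the study's analysis.

import numpy as np

# Synthetic data standing in for the battery scores; nothing here is real.
rng = np.random.default_rng(0)
n = 200
decoding = rng.normal(size=n)
listening = rng.normal(size=n)
vocabulary = 0.6 * listening + rng.normal(scale=0.8, size=n)
reading = 0.5 * decoding * listening + 0.3 * vocabulary + rng.normal(scale=0.5, size=n)

def r_squared(predictors, y):
    # Ordinary least squares R^2 with an intercept column.
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

# Step 1: the simple view of reading (decoding, listening, and their product).
r2_simple = r_squared([decoding, listening, decoding * listening], reading)
# Step 2: add orally assessed vocabulary; any increase in R^2 is variance
# that vocabulary captures beyond the simple-view predictors.
r2_full = r_squared([decoding, listening, decoding * listening, vocabulary], reading)
print(f"simple view R^2 = {r2_simple:.3f}, with vocabulary R^2 = {r2_full:.3f}")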


Language and Cognitive Processes | 1997

Parsing in a Dynamical System: An Attractor-based Account of the Interaction of Lexical and Structural Constraints in Sentence Processing

Whitney Tabor; Cornell Juliano; Michael K. Tanenhaus

A dynamical systems approach to parsing is proposed in which syntactic hypotheses are associated with attractors in a metric space. These attractors have many of the properties of traditional syntactic categories, while at the same time encoding context-dependent, lexically specific distinctions. Hypotheses motivated by the dynamical systems theory were tested in four reading time experiments examining the interaction of simple lexical frequencies, frequencies that are contingent on an environment defined by syntactic categories, and frequencies contingent on verb argument structure. The experiments documented a variety of contingent frequency effects that cut across traditional linguistic grains, each of which was predicted by the dynamical systems model. These effects were simulated in an implementation of the theory, employing a recurrent network trained from a corpus to construct metric representations and an algorithm implementing a gravitational dynamical system to model reading time as time to gravitate to an attractor.
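
As a rough picture of the "time to gravitate to an attractor" idea, the sketch below moves a state under a gravity-like pull toward point attractors standing for competing syntactic hypotheses and counts the steps until capture. The attractor positions, masses, step size, and update rule are all invented here; this is not the published implementation, whose representations came from a corpus-trained recurrent network.

import numpy as np

def time_to_attractor(state, attractors, masses, step=0.05, tol=0.05, max_iter=10000):
    # Move `state` under a gravity-like pull toward point attractors and
    # return the number of updates needed to come within `tol` of one.
    state = np.asarray(state, dtype=float)
    for t in range(max_iter):
        diffs = attractors - state                     # vectors to each attractor
        dists = np.linalg.norm(diffs, axis=1)
        if dists.min() < tol:
            return t                                   # step count stands in for reading time
        pull = (masses / dists ** 2)[:, None] * (diffs / dists[:, None])
        move = step * pull.sum(axis=0)
        norm = np.linalg.norm(move)
        if norm > tol:                                 # cap the step so the state cannot jump past
            move *= tol / norm
        state = state + move
    return max_iter

# Two competing syntactic hypotheses; the first is heavier (higher frequency).
attractors = np.array([[1.0, 0.0], [0.0, 1.0]])
masses = np.array([3.0, 1.0])
# A state starting near the preferred analysis is captured in fewer steps
# (shorter simulated reading time) than one starting from an ambiguous point.
print(time_to_attractor([0.8, 0.2], attractors, masses),
      time_to_attractor([0.6, 0.6], attractors, masses))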


Cognitive Science | 1999

Dynamical Models of Sentence Processing

Whitney Tabor; Michael K. Tanenhaus

We suggest that the theory of dynamical systems provides a revealing general framework for modeling the representations and mechanisms underlying syntactic processing. We show how a particular dynamical model, the Visitation Set Gravitation model of Tabor, Juliano, and Tanenhaus (1997), develops syntactic representations and models a set of contingent frequency effects in parsing that are problematic for other models. We also present new simulations showing how the model accounts for semantic effects in parsing, and propose a new account of the distinction between syntactic and semantic incongruity. The results show how symbolic structures useful in parsing arise as emergent properties of connectionist dynamical systems.


Expert Systems | 2000

Fractal encoding of context‐free grammars in connectionist networks

Whitney Tabor

Connectionist network learning of context-free languages has so far been applied only to very simple cases and has often made use of an external stack. Learning complex context-free languages with a homogeneous neural mechanism looks like a much harder problem. The current paper takes a step toward solving this problem by analyzing context-free grammar computation (without addressing learning) in a class of analog computers called dynamical automata, which are naturally implemented in connectionist networks. The result is a widely applicable method of using fractal sets to organize infinite-state computations in a bounded state space. An appealing consequence is the development of parameter-space maps, which locate various complex computers in spatial relationships to one another. An example suggests that such a global perspective on the organization of the parameter space may be helpful for solving the hard problem of getting connectionist networks to learn complex grammars from examples.
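
The flavor of the construction can be conveyed with a toy case: below, a pushdown-style computation is carried out entirely by affine maps on the unit interval, so an unboundedly deep stack is packed into ever-smaller regions of a bounded state space. The particular maps and the a^n b^n language are chosen here for illustration and are not lifted from the paper; with a larger stack alphabet, the same trick places stack contents on a fractal set, which is the organization the paper analyzes.

# Illustrative sketch: a pushdown-style "dynamical automaton" that recognizes
# a^n b^n by running affine maps on the unit interval. Pushing halves the
# state, so deeper stacks occupy ever-smaller neighborhoods of 0; popping
# doubles it back out toward the empty-stack state at 1.
def recognizes_anbn(string, eps=1e-12):
    x = 1.0                       # empty-stack state
    seen_b = False
    for ch in string:
        if ch == "a":
            if seen_b:            # no 'a' allowed after 'b'
                return False
            x = x / 2.0           # push: contract toward 0
        elif ch == "b":
            seen_b = True
            x = x * 2.0           # pop: expand back toward 1
            if x > 1.0 + eps:     # popped an empty stack
                return False
        else:
            return False
    return abs(x - 1.0) < eps     # accepted iff the stack is empty again

for s in ["", "ab", "aabb", "aaabbb", "aab", "abb", "ba"]:
    print(s or "(empty)", recognizes_anbn(s))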


Neuropsychologia | 2005

Behavioral and neurobiological effects of printed word repetition in lexical decision and naming.

Leonard Katz; Chang H. Lee; Whitney Tabor; Stephen J. Frost; W. Einar Mencl; Rebecca Sandak; Jay G. Rueckl; Kenneth R. Pugh

A series of experiments studied the effects of repetition of printed words on (1) lexical decision (LD) and naming (NAM) behavior and (2) concomitant brain activation. It was hypothesized that subword phonological analysis (assembly) would decrease with increasing word familiarity and the greater decrease would occur in LD, a task that is believed to be less dependent on assembly than naming. As a behavioral marker of assembly, we utilized the regularity effect (the difference in response latency between words with regular versus irregular spelling-sound correspondences). In addition to repetition, stimulus familiarity was manipulated by word frequency and case alternation. Both experiments revealed an initial latency disadvantage for low frequency irregular words suggesting that assembly is the dominant process in both tasks when items are unfamiliar. As items become more familiar with repetition, the regularity effect disappeared in LD but persisted in NAM. Brain activation patterns for repeated words that were observed in fMRI paralleled the behavioral studies in showing greater reductions in activity under lexical decision than naming for regions previously identified as involved in assembly.
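
For concreteness, the regularity effect is simply a latency difference computed within each familiarity condition; the numbers below are invented for illustration and are not the study's data.

# Invented mean latencies (ms): the regularity effect is the irregular-minus-
# regular latency difference within each repetition condition.
latency = {
    ("regular", "first"): 620, ("irregular", "first"): 665,
    ("regular", "repeated"): 575, ("irregular", "repeated"): 580,
}
for block in ("first", "repeated"):
    effect = latency[("irregular", block)] - latency[("regular", block)]
    print(f"{block} presentation: regularity effect = {effect} ms")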


Developmental Neuropsychology | 2008

Reading Differences and Brain: Cortical Integration of Speech and Print in Sentence Processing Varies With Reader Skill

Donald Shankweiler; W. Einar Mencl; David Braze; Whitney Tabor; Kenneth R. Pugh; Robert K. Fulbright

Functional magnetic resonance imaging (fMRI) was used to investigate the impact of literacy skills in young adults on the distribution of cerebral activity during comprehension of sentences in spoken and printed form. The aim was to discover where speech and print streams merge, and whether their convergence is affected by the level of reading skill. The results from different analyses all point to the conclusion that neural integration of sentence processing across speech and print varies positively with the reader's skill. Further, they identify the inferior frontal region as the principal site of speech–print integration and a major focus of reading comprehension differences. The findings provide new evidence of the role of the inferior frontal region in supporting supramodal systems of linguistic representation.


Cognitive Science | 2011

Impulse processing: A dynamical systems model of incremental eye movements in the visual world paradigm

Anuenue Kukona; Whitney Tabor

The Visual World Paradigm (VWP) presents listeners with a challenging problem: They must integrate two disparate signals, the spoken language and the visual context, in support of action (e.g., complex movements of the eyes across a scene). We present Impulse Processing, a dynamical systems approach to incremental eye movements in the visual world that suggests a framework for integrating language, vision, and action generally. Our approach assumes that impulses driven by the language and the visual context impinge minutely on a dynamical landscape of attractors corresponding to the potential eye-movement behaviors of the system. We test three unique predictions of our approach in an empirical study in the VWP, and describe an implementation in an artificial neural network. We discuss the Impulse Processing framework in relation to other models of the VWP.
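
A minimal sketch of the core intuition, not the authors' implementation: each object in the display gets an attractor, and small per-step impulses from the unfolding speech nudge the state until one attractor captures it. The display items, impulse schedule, and update rule below are all invented for illustration.

import numpy as np

objects = ["beaker", "beetle", "speaker", "carriage"]   # a typical VWP display
attractors = np.eye(len(objects))                       # one attractor per object

def fixation(impulse_schedule, pull=0.15, threshold=0.9, rng=np.random.default_rng(1)):
    # Run the state under attractor pull plus per-step impulses and return
    # the object whose attractor captures the state.
    state = np.full(len(objects), 1.0 / len(objects))   # start unbiased
    for impulses in impulse_schedule:
        nearest = attractors[np.argmin(np.linalg.norm(attractors - state, axis=1))]
        state = state + pull * (nearest - state) + impulses + rng.normal(scale=0.01, size=len(objects))
        state = np.clip(state, 0.0, 1.0)
        if state.max() > threshold:
            return objects[int(np.argmax(state))]
    return objects[int(np.argmax(state))]

# As "bea..." unfolds, both cohort members (beaker, beetle) receive impulses;
# later input favors "beaker" and the state settles into its attractor.
early = [np.array([0.05, 0.05, 0.0, 0.0])] * 5
late = [np.array([0.10, 0.0, 0.0, 0.0])] * 20
print(fixation(early + late))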


Ecological Psychology | 2002

The Value of Symbolic Computation

Whitney Tabor

Standard generative linguistic theory, which uses discrete symbolic models of cognition, has some strengths and weaknesses. It is strong on providing a network of outposts that make scientific travel in the jungles of natural language feasible. It is weak in that it currently depends on the elaborate and unformalized use of intuition to develop critical supporting assumptions about each data point. In this regard, it is not in a position to characterize natural language systems in the lawful terms that ecological psychologists strive for. Connectionist learning models offer some help: They define lawful relations between linguistic environments and language systems. But our understanding of them is currently weak, especially when it comes to natural language syntax. Fortunately, symbolic linguistic analysis can help connectionism if the two meet via dynamical systems theory. I discuss a case in point: Insights from linguistic explorations of natural language syntax appear to have identified information structures that are particularly relevant to understanding ecologically appealing but analytically mysterious connectionist learning models.


Cognitive Neurodynamics | 2009

A dynamical systems perspective on the relationship between symbolic and non-symbolic computation

Whitney Tabor

It has been claimed that connectionist (artificial neural network) models of language processing, which do not appear to employ “rules”, are doing something different in kind from classical symbol processing models, which treat “rules” as atoms (e.g., McClelland and Patterson in Trends Cogn Sci 6(11):465–472, 2002). This claim is hard to assess in the absence of careful, formal comparisons between the two approaches. This paper formally investigates the symbol-processing properties of simple dynamical systems called affine dynamical automata, which are close relatives of several recurrent connectionist models of language processing (e.g., Elman in Cogn Sci 14:179–211, 1990). In line with related work (Moore in Theor Comput Sci 201:99–136, 1998; Siegelmann in Neural networks and analog computation: beyond the Turing limit. Birkhäuser, Boston, 1999), the analysis shows that affine dynamical automata exhibit a range of symbol processing behaviors, some of which can be mirrored by various Turing machine devices, and others of which cannot be. On the assumption that the Turing machine framework is a good way to formalize the “computation” part of our understanding of classical symbol processing, this finding supports the view that there is a fundamental “incompatibility” between connectionist and classical models (see Fodor and Pylyshyn 1988; Smolensky in Behav Brain Sci 11(1):1–74, 1988; beim Graben in Mind Matter 2(2):29–51, 2004b). Given the empirical successes of connectionist models, the more general, super-Turing framework is a preferable vantage point from which to consider cognitive phenomena. This vantage may give us insight into ill-formed as well as well-formed language behavior and shed light on important structural properties of learning processes.
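
To make "affine dynamical automaton" concrete: the sketch below updates a single numerical state with one affine map per input symbol and reads off a symbolic itinerary by partitioning the interval. The particular maps, partition, and starting point are made-up parameters, not ones analyzed in the paper.

# Illustrative sketch: symbol processing emerges from purely numerical
# dynamics by partitioning the state space into regions and recording
# which region the state visits after each affine update.
def run_affine_automaton(string, maps, partition, x0=0.5):
    x, itinerary = x0, []
    for symbol in string:
        a, b = maps[symbol]          # the map x -> a*x + b chosen by this symbol
        x = a * x + b
        itinerary.append(partition(x))
    return itinerary

# Two contracting maps and a two-cell partition; hypothetical parameters.
maps = {"0": (0.5, 0.0), "1": (0.5, 0.5)}
partition = lambda x: "L" if x < 0.5 else "R"

print(run_affine_automaton("0110", maps, partition))   # prints ['L', 'R', 'R', 'L']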


IEEE Transactions on Neural Networks | 2003

Learning exponential state-growth languages by hill climbing

Whitney Tabor

Training recurrent neural networks on infinite-state languages has been successful with languages in which the minimal number of machine states grows linearly with the length of the sentence, but has fared poorly with exponential state-growth languages. A new architecture presented here learns several exponential state-growth languages in near-perfect fashion by hill climbing.
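
Hill climbing, in its generic form, just means perturbing the weights at random and keeping a perturbation only when the evaluation improves. The sketch below shows that loop with a toy quadratic objective standing in for the paper's real measure (the network's next-symbol prediction performance on the training language); the architecture itself is not reproduced here.

import numpy as np

def hill_climb(score, n_weights, steps=2000, sigma=0.1, rng=np.random.default_rng(0)):
    # Random-restart-free hill climbing in weight space: propose a random
    # perturbation and accept it only if the score improves.
    weights = rng.normal(size=n_weights)
    best = score(weights)
    for _ in range(steps):
        candidate = weights + rng.normal(scale=sigma, size=n_weights)
        s = score(candidate)
        if s > best:
            weights, best = candidate, s
    return weights, best

# Toy stand-in objective: climb toward a known target weight vector.
target = np.array([0.3, -1.2, 0.8])
score = lambda w: -np.sum((w - target) ** 2)
w, s = hill_climb(score, n_weights=3)
print(np.round(w, 2), round(float(s), 4))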

Collaboration


Dive into Whitney Tabor's collaborations.

Top Co-Authors

Kenneth R. Pugh

University of Connecticut

Pyeong Whan Cho

University of Connecticut
