
Publications


Featured research published by Shravan Vasishth.


Trends in Cognitive Sciences | 2006

Computational principles of working memory in sentence comprehension

Richard L. Lewis; Shravan Vasishth; Julie A. Van Dyke

Understanding a sentence requires a working memory of the partial products of comprehension, so that linguistic relations between temporally distal parts of the sentence can be rapidly computed. We describe an emerging theoretical framework for this working memory system that incorporates several independently motivated principles of memory: a sharply limited attentional focus, rapid retrieval of item (but not order) information subject to interference from similar items, and activation decay (forgetting over time). A computational model embodying these principles provides an explanation of the functional capacities and severe limitations of human processing, as well as accounts of reading times. The broad implication is that the detailed nature of cross-linguistic sentence processing emerges from the interaction of general principles of human memory with the specialized task of language comprehension.
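The memory principles described here (activation decay over time, retrieval speed tied to activation) can be sketched with ACT-R-style equations of the kind this framework builds on. The function names and parameter values below (decay d = 0.5, latency factor F = 0.2) are conventional illustrative defaults, not the paper's fitted model:

```python
import math

def base_level_activation(retrieval_times, now, d=0.5):
    """ACT-R-style base-level activation: B = ln(sum_j (now - t_j)^-d).

    retrieval_times: past moments (seconds) the item was created or retrieved.
    d: decay rate; 0.5 is the conventional ACT-R default.
    """
    return math.log(sum((now - t) ** -d for t in retrieval_times))

def retrieval_latency(activation, F=0.2):
    """Map activation to retrieval latency: T = F * exp(-A)."""
    return F * math.exp(-activation)

# An item used recently is more active, so it is retrieved faster;
# as time passes, activation decays and retrieval slows (forgetting).
recent = base_level_activation([1.0, 2.0], now=3.0)
distal = base_level_activation([1.0, 2.0], now=10.0)
assert recent > distal
assert retrieval_latency(recent) < retrieval_latency(distal)
```

This captures the decay component only; interference from similar items would additionally lower the activation of the retrieval target.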


Journal of Memory and Language | 2017

Balancing Type I error and power in linear mixed models

Hannes Matuschek; Reinhold Kliegl; Shravan Vasishth; Harald Baayen; Douglas M. Bates

Linear mixed-effects models have increasingly replaced mixed-model analyses of variance for statistical inference in factorial psycholinguistic experiments. Although LMMs have many advantages over ANOVA, they, like ANOVA, require some care in setting up the analysis. One simple option, when numerically possible, is to fit the full variance-covariance structure of random effects (the maximal model; Barr et al. 2013), presumably to keep Type I error down to the nominal alpha in the presence of random effects. Although it is true that fitting a model with only random intercepts may lead to higher Type I error, fitting a maximal model also has a cost: it can lead to a significant loss of power. We demonstrate this with simulations and suggest that for typical psychological and psycholinguistic data, higher power is achieved without inflating the Type I error rate if a model selection criterion is used to select a random effect structure that is supported by the data.
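The quantities the paper estimates by simulation can be illustrated in miniature: repeatedly simulate experiments under the null hypothesis to estimate Type I error, and under a true effect to estimate power. The sketch below uses a simple one-sample z-test as a stand-in for the paper's LMM analyses; the sample size, effect size, and simulation count are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def reject_rate(effect, n=40, sims=4000, crit=1.96):
    """Fraction of simulated experiments whose z statistic exceeds crit.

    With effect == 0 this estimates Type I error; with effect > 0, power.
    """
    rejections = 0
    for _ in range(sims):
        x = rng.normal(effect, 1.0, n)              # one simulated experiment
        z = x.mean() / (x.std(ddof=1) / np.sqrt(n))
        rejections += abs(z) > crit
    return rejections / sims

alpha = reject_rate(effect=0.0)   # should land near the nominal 0.05
power = reject_rate(effect=0.5)   # probability of detecting a true effect
assert 0.03 < alpha < 0.08
assert power > 0.7
```

The paper's point is that the analogous simulation with maximal versus parsimonious random-effect structures shows the maximal model keeping alpha near nominal while sacrificing power.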


PLOS ONE | 2013

Processing Chinese relative clauses: evidence for the subject-relative advantage.

Shravan Vasishth; Zhong Chen; Qiang Li; Gueilan Guo

A general fact about language is that subject relative clauses are easier to process than object relative clauses. Recently, several self-paced reading studies have presented surprising evidence that object relatives in Chinese are easier to process than subject relatives. We carried out three self-paced reading experiments that attempted to replicate these results. Two of our three studies found a subject-relative preference, and the third study found an object-relative advantage. Using a random-effects Bayesian meta-analysis of fifteen studies (including our own), we show that the overall current evidence for the subject-relative advantage is quite strong (approximate posterior probability of a subject-relative advantage given the data: 78–80%). We argue that retrieval/integration based accounts would have difficulty explaining all three experimental results. These findings are important because they narrow the theoretical space by limiting the role of an important class of explanation—retrieval/integration cost—at least for relative clause processing in Chinese.
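As a rough sketch of what a random-effects meta-analysis computes, the code below pools hypothetical study-level effects with the DerSimonian-Laird estimator and reads off P(effect > 0) from a normal approximation. This is a frequentist stand-in for the paper's Bayesian model, and the numbers are invented for illustration, not the fifteen studies actually analysed:

```python
import numpy as np
from math import erf, sqrt

def random_effects_meta(effects, ses):
    """DerSimonian-Laird random-effects pooling of study-level estimates."""
    effects, ses = np.asarray(effects, float), np.asarray(ses, float)
    w = 1 / ses**2                                    # fixed-effect weights
    theta_fe = np.sum(w * effects) / np.sum(w)
    q = np.sum(w * (effects - theta_fe) ** 2)         # Cochran's Q
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)     # between-study variance
    w_re = 1 / (ses**2 + tau2)                        # random-effects weights
    pooled = np.sum(w_re * effects) / np.sum(w_re)
    pooled_se = np.sqrt(1 / np.sum(w_re))
    return pooled, pooled_se

# Hypothetical study effects (ms): positive = subject-relative advantage.
effects = [35, -20, 50, 10, 25, -5, 40]
ses = [20, 25, 30, 15, 20, 25, 30]
pooled, se = random_effects_meta(effects, ses)
# P(effect > 0) under a normal approximation:
p_positive = 0.5 * (1 + erf(pooled / (se * sqrt(2))))
assert pooled > 0
assert 0.5 < p_positive < 1
```

A fully Bayesian version would instead place priors on the overall effect and the between-study variance and report the posterior probability directly, as the paper does.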


Journal of Experimental Psychology: Learning, Memory and Cognition | 2011

In Search of On-Line Locality Effects in Sentence Comprehension

Brian Bartek; Richard L. Lewis; Shravan Vasishth; Mason R. Smith

Many comprehension theories assert that increasing the distance between elements participating in a linguistic relation (e.g., a verb and a noun phrase argument) increases the difficulty of establishing that relation during on-line comprehension. Such locality effects are expected to increase reading times and are thought to reveal properties and limitations of the short-term memory system that supports comprehension. Despite their theoretical importance and putative ubiquity, however, evidence for on-line locality effects is quite narrow linguistically and methodologically: It is restricted almost exclusively to self-paced reading of complex structures involving a particular class of syntactic relation. We present 4 experiments (2 self-paced reading and 2 eyetracking experiments) that demonstrate locality effects in the course of establishing subject-verb dependencies; locality effects are seen even in materials that can be read quickly and easily. These locality effects are observable in the earliest possible eye-movement measures and are of much shorter duration than previously reported effects. To account for the observed empirical patterns, we outline a processing model of the adaptive control of button pressing and eye movements. This model makes progress toward the goal of eliminating linking assumptions between memory constructs and empirical measures in favor of explicit theories of the coordinated control of motor responses and parsing.


Cognitive Neurodynamics | 2008

Towards dynamical system models of language-related brain potentials

Peter beim Graben; Sabrina Gerth; Shravan Vasishth

Event-related brain potentials (ERP) are important neural correlates of cognitive processes. In the domain of language processing, the N400 and P600 reflect lexical-semantic integration and syntactic processing problems, respectively. We suggest an interpretation of these markers in terms of dynamical system theory and present two nonlinear dynamical models for syntactic computations where different processing strategies correspond to functionally different regions in the system’s phase space.
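The idea that different processing strategies correspond to functionally different regions of phase space can be illustrated with a toy bistable system (not one of the paper's models): trajectories starting in different basins of attraction settle into different fixed points.

```python
def settle(x0, steps=5000, dt=0.01):
    """Euler integration of the bistable system dx/dt = x - x**3.

    The system has two attractors (x = +1 and x = -1); which one a
    trajectory reaches depends only on its starting basin, a toy
    analogue of strategies as regions of phase space.
    """
    x = x0
    for _ in range(steps):
        x += dt * (x - x**3)
    return x

assert abs(settle(0.3) - 1.0) < 1e-3    # starts in the positive basin
assert abs(settle(-0.3) + 1.0) < 1e-3   # starts in the negative basin
```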


arXiv: Methodology | 2016

Bayesian linear mixed models using Stan: A tutorial for psychologists, linguists, and cognitive scientists

Tanner Sorensen; Sven Hohenstein; Shravan Vasishth

With the arrival of the R packages nlme and lme4, linear mixed models (LMMs) have come to be widely used in experimentally-driven areas like psychology, linguistics, and cognitive science. This tutorial provides a practical introduction to fitting LMMs in a Bayesian framework using the probabilistic programming language Stan. We choose Stan (rather than WinBUGS or JAGS) because it provides an elegant and scalable framework for fitting models in most of the standard applications of LMMs. We ease the reader into fitting increasingly complex LMMs, first using a two-condition repeated measures self-paced reading study, followed by a more complex 2×2 repeated measures factorial design that can be generalized to much more complex designs.
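A minimal sketch of the Bayesian machinery such a tutorial introduces, using a conjugate normal model for the mean condition difference rather than the Stan programs the tutorial actually develops; the data and prior values here are invented for illustration:

```python
import numpy as np

def posterior_normal(diffs, prior_mean=0.0, prior_sd=100.0, sigma=None):
    """Conjugate normal posterior for the mean condition difference.

    A by-hand stand-in for the simplest two-condition model: the residual
    sd is plugged in from the data, and the prior is deliberately vague.
    """
    diffs = np.asarray(diffs, float)
    n = len(diffs)
    sigma = diffs.std(ddof=1) if sigma is None else sigma
    prec = 1 / prior_sd**2 + n / sigma**2          # posterior precision
    post_var = 1 / prec
    post_mean = post_var * (prior_mean / prior_sd**2 + diffs.sum() / sigma**2)
    return post_mean, np.sqrt(post_var)

# Hypothetical per-subject reading-time differences (ms), condition A - B:
rng = np.random.default_rng(7)
diffs = rng.normal(30, 60, 40)
m, s = posterior_normal(diffs)
# With a vague prior, the posterior mean sits close to the sample mean:
assert abs(m - diffs.mean()) < 5
```

Stan generalizes this idea: the same prior-plus-likelihood specification is written declaratively, and sampling handles models (like LMMs with full variance-covariance structures) that have no closed-form posterior.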


Language and Cognitive Processes | 2010

Short-term forgetting in sentence comprehension: Crosslinguistic evidence from verb-final structures

Shravan Vasishth; Katja Suckow; Richard L. Lewis; Sabine Kern

Seven experiments using self-paced reading and eyetracking suggest that omitting the middle verb in a double centre embedding leads to easier processing in English but leads to greater difficulty in German. One commonly accepted explanation for the English pattern—based on data from offline acceptability ratings and due to Gibson and Thomas (1999)—is that working-memory overload leads the comprehender to forget the prediction of the upcoming verb phrase (VP), which reduces working-memory load. We show that this VP-forgetting hypothesis does an excellent job of explaining the English data, but cannot account for the German results. We argue that the English and German results can be explained by the parser's adaptation to the grammatical properties of the languages; in contrast to English, German subordinate clauses always have the verb in clause-final position, and this property of German may lead the German parser to maintain predictions of upcoming VPs more robustly compared to English. The evidence thus argues against language-independent forgetting effects in online sentence processing; working-memory constraints can be conditioned by countervailing influences deriving from grammatical properties of the language under study.


Language and Linguistics Compass | 2016

Statistical methods for linguistic research: Foundational Ideas—Part II

Bruno Nicenboim; Shravan Vasishth

We provide an introductory review of Bayesian data analytical methods, with a focus on applications for linguistics, psychology, psycholinguistics, and cognitive science. The empirically oriented researcher will benefit from making Bayesian methods part of their statistical toolkit due to the many advantages of this framework, among them easier interpretation of results relative to research hypotheses, and flexible model specification. We present an informal introduction to the foundational ideas behind Bayesian data analysis, using, as an example, a linear mixed models analysis of data from a typical psycholinguistics experiment. We discuss hypothesis testing using the Bayes factor, and model selection using cross-validation. We close with some examples illustrating the flexibility of model specification in the Bayesian framework. Suggestions for further reading are also provided.


Language and Cognitive Processes | 2013

Scanpaths reveal syntactic underspecification and reanalysis strategies

Titus von der Malsburg; Shravan Vasishth


Cognitive Science | 2016

Cross‐Linguistic Differences in Processing Double‐Embedded Relative Clauses: Working‐Memory Constraints or Language Statistics?

Stefan L. Frank; Thijs Trompenaars; Shravan Vasishth
