Ivan Vankov
New Bulgarian University
Publications
Featured research published by Ivan Vankov.
Quarterly Journal of Experimental Psychology | 2014
Ivan Vankov; Jeffrey S. Bowers; Marcus R. Munafò
Cohen’s classic study on statistical power (Cohen, 1962) showed that studies in the 1960 volume of the Journal of Abnormal and Social Psychology lacked sufficient power to detect anything other than large effects (r ~ 0.60). Sedlmeier and Gigerenzer (1989) conducted a similar analysis on studies in the 1984 volume and found that, if anything, the situation had worsened. Recently, Button and colleagues showed that the average power of neuroscience studies is probably around 20% (Button et al., 2013b). Clearly, repeated exhortations that researchers should “pay attention to the power of their tests rather than … focus exclusively on the level of significance” (Sedlmeier & Gigerenzer, 1989) have failed. Here we consider why this might be so.

One reason might be a lack of appreciation of the importance of statistical power within a null hypothesis significance testing (NHST) framework. NHST grew out of the distinct statistical theories of Fisher (Fisher, 1955) and of Neyman and Pearson (Rucci & Tweney, 1980). From Fisher we take the concept of null hypothesis testing, and from Neyman-Pearson the concepts of Type I (α) and Type II (β) error. Power is a concept arising from Neyman-Pearson theory, and reflects the likelihood of correctly rejecting the null hypothesis (i.e., 1 − β). However, the hybrid statistical theory typically used leans most heavily on Fisher’s concept of null hypothesis testing. Sedlmeier and Gigerenzer (1989) argued that a lack of understanding of these distinctions partly explained the lack of consideration of statistical power: while we (nominally, at least) adhere to a 5% Type I error rate, we pay little attention to the Type II error rate, despite the need to consider both when evaluating whether a research finding is likely to be true (Button et al., 2013b).

Another reason might be the incentive structures within which scientists operate. Scientists are human and therefore will respond (consciously or unconsciously) to incentives; when personal success (e.g., promotion) is associated with the quality and (critically) the quantity of publications produced, it makes more sense to use finite resources to generate as many publications as possible. A single transformative study in a highly regarded journal might confer the most prestige, but this is a high-risk strategy – the experiment may not produce the desired (i.e., publishable) results, or the journal may not accept it for publication (Sekercioglu, 2013). A safer strategy might be to “salami-slice” one’s resources to generate more studies which, with sufficient analytical flexibility (Simmons, Nelson, & Simonsohn, 2011), will almost certainly produce a number of publishable studies (Sullivan, 2007).

There is some support for the second reason. Studies published in some countries may over-estimate true effects more than those published in other countries (Fanelli & Ioannidis, 2013; Munafò, Attwood, & Flint, 2008). This may be because, in certain countries, publication in even medium-rank journals confers substantial direct financial rewards on the authors (Shao & Shen, 2011), which may in turn be related to over-estimates of true effects (Pan, Trikalinos, Kavvoura, Lau, & Ioannidis, 2005). Authors may therefore (consciously or unconsciously) conduct a larger number of smaller studies, which are still likely to generate publishable findings, rather than risk investing their limited resources in a smaller number of larger studies.
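As an aside, the power calculus discussed above is easy to make concrete. Below is a minimal sketch (our illustration, not code from the paper) using the standard normal approximation for a two-sided, two-sample test, power ≈ Φ(d·√(n/2) − z₁₋α/₂); the helper name approx_power is ours.

```python
# Minimal power sketch (our illustration, not from the paper):
# power ≈ Φ(d * sqrt(n/2) - z_{1-alpha/2}) for a two-sided two-sample test.
from scipy.stats import norm

def approx_power(d, n_per_group, alpha=0.05):
    """Approximate power (1 - beta) for standardized effect size d."""
    z_crit = norm.ppf(1 - alpha / 2)
    return norm.cdf(d * (n_per_group / 2) ** 0.5 - z_crit)

for d in (0.2, 0.5, 0.8):  # Cohen's small, medium, large effects
    print(f"d = {d}: power ~ {approx_power(d, n_per_group=20):.2f}")
# With 20 participants per group, only a large effect comes close to the
# conventional 80% power target.
```

Run with n = 20 per group, only d = 0.8 approaches conventional levels of power, which echoes Cohen’s observation that small studies can reliably detect only large effects.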
However, to the best of our knowledge, the first possible reason has not been systematically explored. We therefore surveyed studies published recently in a high-ranking psychology journal and contacted authors to establish the rationale used for deciding sample size (see Supplementary Material). This indicated that approximately one third held beliefs that would serve, on average, to reduce statistical power (see Table 1). In particular, they used accepted norms within their area of research to decide on sample size, in the belief that this would be sufficient to replicate previous results (and therefore, presumably, to identify new findings). Given empirical evidence for a high prevalence of findings close to the p = 0.05 threshold (Masicampo & Lalande, 2012), this belief is likely to be unwarranted. If an experiment finds an effect with p ~ 0.05, and we assume the effect size observed is accurate, then if we repeat the experiment with the same sample size we will on average replicate that finding only 50% of the time (see Supplementary Material). In reality, power will be much lower than 50%, because the effect size estimate observed in the original study is probably an over-estimate (Simonsohn, 2013). However, in our survey, over one third of respondents inaccurately believed that in this scenario the finding would replicate over 80% of the time (see Supplementary Material).

Table 1. Beliefs about sample size and statistical power.

There are unlikely to be simple solutions to the continued lack of appreciation of statistical power. One reason for pessimism, as we have shown, is that these concerns are not new; occasional discussion of these issues has not led to any lasting change. Structural change may be required, including more rigorous enforcement by journals and editors of guidelines which are often found in instructions for authors but not always followed. Recently, Nature introduced a submission checklist for life sciences articles, which includes a requirement that sample size be justified (http://www.nature.com/authors/policies/checklist.pdf). Other journals are introducing novel submission formats which place greater emphasis on study design (including statistical power) rather than results, including Registered Reports at Cortex and Registered Replication Reports at Perspectives on Psychological Science.

The poor reproducibility of scientific findings continues to be a cause of major concern. Small studies with low statistical power contribute to this problem (Bertamini & Munafò, 2012; Button et al., 2013b), and arguments in defence of “small-scale science” (Quinlan, 2013) overlook the fact that larger studies protect against inferences from trivial effect sizes by allowing a better estimation of the magnitude of true effects (Button et al., 2013a). Reasons to resist NHST, and in particular the dichotomous interpretation of p-values, have been well rehearsed (Sterne & Davey Smith, 2001), and alternative approaches, such as focusing on effect size estimation or implementing Bayesian inference, do exist. However, while NHST remains the dominant model for statistical inference, we should ensure that it is appropriately used.
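The 50% replication claim above can be checked directly. Below is an illustrative simulation (our sketch, not the authors’ Supplementary Material): choose the effect size that lands exactly at p = 0.05 for a given sample size, treat it as the true effect, and count how often a same-sized replication reaches p < 0.05. The sample size of 30 per group is an arbitrary choice.

```python
# Illustrative simulation (ours, not the authors' code) of the ~50%
# replication rate for a result that originally landed at p = 0.05.
import numpy as np
from scipy.stats import norm, ttest_ind

rng = np.random.default_rng(0)
n = 30                                   # per-group sample size (assumed)
d = norm.ppf(0.975) / np.sqrt(n / 2)     # effect size giving p ~ 0.05 at this n

replications = 5000
successes = 0
for _ in range(replications):
    treatment = rng.normal(d, 1.0, n)    # true effect fixed at the observed d
    control = rng.normal(0.0, 1.0, n)
    successes += ttest_ind(treatment, control).pvalue < 0.05
print(successes / replications)          # close to 0.5 (slightly below, since
                                         # the t critical value exceeds 1.96)
```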
Cognition | 2016
Jeffrey S. Bowers; Ivan Vankov; Markus F. Damian; Colin J. Davis
Why do some neurons in hippocampus and cortex respond to information in a highly selective manner? It has been hypothesized that neurons in hippocampus encode information in a highly selective manner in order to support fast learning without catastrophic interference, and that neurons in cortex encode information in a highly selective manner in order to co-activate multiple items in short-term memory (STM) without suffering a superposition catastrophe. However, the latter hypothesis is at odds with the widespread view that neural coding in the cortex is highly distributed in order to support generalization. We report a series of simulations that characterize the conditions in which recurrent Parallel Distributed Processing (PDP) models of immediate serial recall can recall novel words. We found that these models learned localist codes when they succeeded in generalizing to novel words. That is, just as fast learning may explain selective coding in hippocampus, STM and generalization may help explain the existence of selective codes in cortex.
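The superposition catastrophe mentioned above is easy to demonstrate numerically. The patterns below are our own toy example, not taken from the paper’s simulations.

```python
# Toy demonstration (ours) of the superposition catastrophe.
import numpy as np

# Distributed codes: two different pairs of items blend into the same pattern,
# so the superposition cannot say which two items are co-active.
A, B = np.array([1, 1, 0, 0]), np.array([0, 0, 1, 1])
C, D = np.array([1, 0, 1, 0]), np.array([0, 1, 0, 1])
print(np.array_equal(A | B, C | D))   # True: {A,B} and {C,D} are confusable

# Localist codes: one unit per item, so every set of items has a unique blend.
a, b, c, d = np.eye(4, dtype=int)
print(np.array_equal(a | b, c | d))   # False: the sets remain distinguishable
```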
Acta Psychologica | 2013
Ivan Vankov; Boicho Kokinov
Numerous behavioral and neuro-imaging studies have demonstrated that the motor system is activated when people are presented with manipulable objects. However, it remains a matter of debate whether these results should be interpreted as evidence that certain conceptual processes employ motor programs. In order to address this issue, we conducted two experiments which required participants to assess the functions of tool-like objects and respond verbally. The results demonstrate that action affordances may constrain performance in tasks which are not based on the stimulus-response compatibility paradigm. We argue that this finding supports the causal role of the motor system in conceptual processing and that it cannot be explained by spreading of activation and response interference.
Cognitive Processing | 2015
Armina Janyan; Ivan Vankov; Oksana V. Tsaregorodtseva; Alex A. Miklashevsky
Previous studies show that the eye movement trajectory curves away from a remembered visual location if a saccade needs to be made in the same direction as the location. Data suggest that part of the process of maintaining the location in working memory is the mental simulation of that location, so that the oculomotor system treats the remembered location as a real one. Other research suggests that word meaning may also behave like a ‘real object’ in space. The current study aimed to combine the two streams of research by examining the effect of word meaning on the memory of a dot location. The results of two experiments showed that word meaning for ‘up’ (but not ‘down’) modulated both eye movement trajectory and location recognition time. Thus, mental simulation of task-irrelevant space-related word meaning affected both earlier stages of memory processes (maintenance of the location in working memory) and later ones (location recognition).
Psychonomic Bulletin & Review | 2016
Jeffrey S. Bowers; Ivan Vankov; Casimir J. H. Ludwig
The ability to recognize the same image projected to different retinal locations is critical for visual object recognition in natural contexts. According to many theories, the translation invariance for objects extends only to trained retinal locations, so that a familiar object projected to a nontrained location should not be identified. In another approach, invariance is achieved “online,” such that learning to identify an object in one location immediately affords generalization to other locations. We trained participants to name novel objects at one retinal location using eyetracking technology and then tested their ability to name the same images presented at novel retinal locations. Across three experiments, we found robust generalization. These findings provide a strong constraint for theories of vision.
Frontiers in Psychology | 2018
David Trafimow; Valentin Amrhein; Corson N. Areshenkoff; Carlos Barrera-Causil; Eric J. Beh; Yusuf K. Bilgic; Roser Bono; Michael T. Bradley; William M. Briggs; Héctor A. Cepeda-Freyre; Sergio E. Chaigneau; Daniel R. Ciocca; Juan Carlos Correa; Denis Cousineau; Michiel R. de Boer; Subhra Sankar Dhar; Igor Dolgov; Juana Gómez-Benito; Marian Grendar; James W. Grice; Martin E. Guerrero-Gimenez; Andrés Gutiérrez; Tania B. Huedo-Medina; Klaus Jaffe; Armina Janyan; Ali Karimnezhad; Fränzi Korner-Nievergelt; Koji Kosugi; Martin Lachmair; Rubén Ledesma
We argue that making accept/reject decisions on scientific hypotheses, including a recent call for changing the canonical alpha level from p = 0.05 to p = 0.005, is deleterious to the discovery of new findings and to the progress of science. Given that both blanket and variable alpha levels are problematic, it is sensible to dispense with significance testing altogether. There are alternatives that address study design and sample size much more directly than significance testing does, but none of these statistical tools should be taken as a new magic method giving clear-cut mechanical answers. Inference should not be based on single studies at all, but on cumulative evidence from multiple independent studies. When evaluating the strength of the evidence, we should consider, for example, auxiliary assumptions, the strength of the experimental design, and implications for applications. To boil all this down to a binary decision based on a p-value threshold of 0.05, 0.01, 0.005, or anything else is not acceptable.
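The arbitrariness of a binary threshold is easy to illustrate numerically. The summary statistics below are hypothetical (our example, not data from the paper): the same evidence yields opposite verdicts depending only on which alpha convention is in force.

```python
# Small numeric illustration (ours): identical data, opposite accept/reject
# decisions under two different alpha conventions.
from scipy.stats import ttest_ind_from_stats

# Hypothetical summary statistics: mean difference 0.65 SD, n = 30 per group.
res = ttest_ind_from_stats(mean1=0.65, std1=1.0, nobs1=30,
                           mean2=0.0,  std2=1.0, nobs2=30)
print(round(res.pvalue, 3))   # ~ 0.015
print(res.pvalue < 0.05)      # True:  "significant" at alpha = 0.05
print(res.pvalue < 0.005)     # False: "non-significant" at alpha = 0.005
```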
Language, Cognition and Neuroscience | 2017
Ivan Vankov; Jeffrey S. Bowers
The Parallel Distributed Processing (PDP) approach to cognitive modelling assumes that knowledge is distributed across multiple processing units. This view is typically justified on the basis of the computational advantages and biological plausibility of distributed representations. However, both these assumptions have been challenged. First, there is growing evidence that some neurons respond to information in a highly selective manner. Second, it has been demonstrated that localist representations are better suited for certain computational tasks. In this paper, we continue this line of research by investigating whether localist representations are learned in tasks involving arbitrary input–output mappings. The results imply that the pressure to learn local codes in such tasks is weak, but there are still conditions under which feed-forward PDP networks learn localist representations. Our findings further challenge the assumption that PDP modelling always goes hand in hand with distributed representations and provide directions for future research.
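As a rough illustration of how such a question can be probed (our sketch, using an assumed scikit-learn setup rather than the authors’ models), one can train a small feed-forward network on an arbitrary one-hot input-to-output mapping and then score each hidden unit’s selectivity for individual items; the selectivity index below is a crude choice of ours.

```python
# Sketch (our construction, not the authors' models): train a feed-forward
# network on an arbitrary one-hot mapping, then measure unit selectivity.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n_items = 10
X = np.eye(n_items)             # localist inputs, one per item
y = rng.permutation(n_items)    # arbitrary input -> output mapping

net = MLPClassifier(hidden_layer_sizes=(20,), activation='relu',
                    max_iter=5000, random_state=0).fit(X, y)

# Recompute hidden activations from the learned weights.
H = np.maximum(0, X @ net.coefs_[0] + net.intercepts_[0])  # items x units

# A unit looks "localist" if its strongest item response dwarfs the runner-up.
top2 = np.sort(H, axis=0)[-2:, :]
selectivity = (top2[1] - top2[0]) / (top2[1] + 1e-9)
print(np.round(selectivity, 2))  # values near 1 indicate highly selective units
```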
Attention, Perception, & Psychophysics | 2011
Ivan Vankov
The visual indexing theory proposed by Zenon Pylyshyn (Cognition, 32, 65–97, 1989) predicts that visual attention mechanisms are employed when mental images are projected onto a visual scene. Recent eye-tracking studies have supported this hypothesis by showing that people tend to look at empty places where requested information has been previously presented. However, it has remained unclear to what extent this behavior is related to memory performance. The aim of the present study was to explore whether the manipulation of spatial attention can facilitate memory retrieval. In two experiments, participants were asked first to memorize a set of four objects and then to determine whether a probe word referred to any of the objects. The results of both experiments indicate that memory accuracy is not affected by the current focus of attention and that all the effects of directing attention to specific locations on response times can be explained in terms of stimulus–stimulus and stimulus–response spatial compatibility.
Psychological Review | 2014
Jeffrey S. Bowers; Ivan Vankov; Markus F. Damian; Colin J. Davis
Cognitive Science | 2011
Georgi Petkov; Ivan Vankov; Boicho Kokinov