Publication


Featured research published by Benjamin de Haas.


The Journal of Neuroscience | 2014

Larger Extrastriate Population Receptive Fields in Autism Spectrum Disorders

D. Samuel Schwarzkopf; Elaine J. Anderson; Benjamin de Haas; Sarah White; Geraint Rees

Previous behavioral research suggests enhanced local visual processing in individuals with autism spectrum disorders (ASDs). Here we used functional MRI and population receptive field (pRF) analysis to test whether the response selectivity of human visual cortex is atypical in individuals with high-functioning ASDs compared with neurotypical, demographically matched controls. For each voxel, we fitted a pRF model to fMRI signals measured while participants viewed flickering bar stimuli traversing the visual field. In most extrastriate regions, perifoveal pRFs were larger in the ASD group than in controls. We observed no differences in V1 or V3A. Differences in the hemodynamic response function, eye movements, or increased measurement noise could not account for these results; individuals with ASDs showed stronger, more reliable responses to visual stimulation. Interestingly, pRF sizes also correlated with individual differences in autistic traits but there were no correlations with behavioral measures of visual processing. Our findings thus suggest that visual cortex in ASDs is not characterized by sharper spatial selectivity. Instead, we speculate that visual cortical function in ASDs may be characterized by extrastriate cortical hyperexcitability or differential attentional deployment.
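The pRF analysis referred to in this abstract models each voxel's response as the overlap between the stimulus aperture and a 2D Gaussian receptive field. As a rough illustration only (this is not the study's code; the function name, parameters, and the omitted HRF convolution are our simplifications of the standard model-based approach):

```python
import numpy as np

def prf_prediction(apertures, x0, y0, sigma, xs, ys):
    """Predicted neural response of one voxel under a 2D Gaussian pRF
    model: at each time point the response is the overlap between the
    binary stimulus aperture and the Gaussian pRF.

    apertures: (T, H, W) binary masks of the bar stimulus over time
    x0, y0, sigma: pRF centre and size in degrees of visual angle
    xs, ys: (H, W) grids of visual-field coordinates in degrees
    """
    prf = np.exp(-((xs - x0) ** 2 + (ys - y0) ** 2) / (2 * sigma ** 2))
    prf /= prf.sum()  # normalize so a full-field stimulus gives response 1
    return apertures.reshape(len(apertures), -1) @ prf.ravel()
```

Fitting then amounts to searching over (x0, y0, sigma) for the prediction that best matches each voxel's measured time series; larger best-fitting sigmas are what "larger pRFs" means here.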


Frontiers in Human Neuroscience | 2012

Better ways to improve standards in brain-behavior correlation analysis.

Dietrich Samuel Schwarzkopf; Benjamin de Haas; Geraint Rees

Rousselet and Pernet (2012) demonstrate that outliers can skew Pearson correlation. They claim that this leads to widespread statistical errors by selecting and re-analyzing a cohort of published studies. However, they report neither the study identities nor the inclusion criteria for this survey, so their claim cannot be independently replicated. Moreover, because their selection criteria are based on the authors' belief that a study used misleading statistics, their study represents an example of "double dipping" (Kriegeskorte et al., 2009). The strong claims they make about the literature are therefore circular and unjustified by their data. Their purely statistical approach also does not consider the biological context of which observations constitute outliers. In their discussion, they propose the skipped correlation (Wilcox, 2005) as an appropriate alternative to Pearson correlation that is robust to outliers. However, this test lacks statistical power to detect true relationships (Figure 1A) and is highly prone to false positives (Figure 1B). These factors conspire to drastically reduce the sensitivity of this test in comparison to other procedures (Appendix 1). Further, it is susceptible to the parameters chosen for the minimum covariance estimator used to identify outliers, but these parameters are not reported.

Figure 1. Statistical power (A) and false positive rates (B) for four statistical tests and four sample sizes, based on 10,000 simulations (see Appendix 1 for details). Outliers can drastically inflate false positives for Pearson correlation (note the difference ...)

Their argument fails to consider a broad literature on robust statistics, although an extensive review is outside the scope of this commentary. We limit ourselves instead to presenting a practical alternative to their approach: Shepherd's pi correlation (http://www.fil.ion.ucl.ac.uk/~sschwarz/Shepherd.zip). We identify outliers by bootstrapping the Mahalanobis distance, Ds, of each observation from the bivariate mean and excluding all points whose average Ds is 6 or greater. Shepherd's pi is Spearman's rho, but the p-statistic is doubled to account for outlier removal (Appendix 2). This compares very well in power (Figure 1A) to other tests and is more robust to the presence of influential outliers (Figure 1B).

We replot the data Rousselet and Pernet presented in their Figure 2. The conclusions drawn from Shepherd's pi are comparable to those from the skipped correlation, but less strict in situations where a relationship is likely (Figure 1C, Figures A1 and A2 in the Appendix). Consider for instance the data in Figure 1C-1. Pearson and Spearman correlations applied to these data are comparable, which implies that the assumptions of Pearson's r were probably met in this case. The skipped correlation (r') does not reach significance but nevertheless shows a similar relationship, consistent with our demonstration above that it is too conservative a measure. Under Shepherd's pi, however, the relationship between these variables is significant. Indeed, reflecting our intimate knowledge of these data (Schwarzkopf et al., 2011), we already know that the relationship studied here replicates for separate behavioral measures (see Schwarzkopf et al., 2011, SOM). A similar pattern was observed for other data, e.g., Figure 1C-2. In some cases the skipped correlation even removes the majority of data points as outliers (e.g., their Figure 2E), which borders on the absurd. Rousselet and Pernet also claim that none of the studies they surveyed considered the correlation coefficient and its confidence intervals. Cohen defined correlations between 0.3 and 0.5 as moderate and those above 0.5 as strong, that is, explaining at least 25% of the variance. A correlation accounting for ~15% of variance is thus not particularly "modest," as they state.
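The outlier-exclusion procedure behind Shepherd's pi can be sketched in a few lines. This is an illustrative re-implementation based only on the description in this commentary, not the authors' released toolbox; the function name `shepherds_pi`, the number of bootstrap samples, and the reading of the threshold of 6 as applying to the average squared Mahalanobis distance are all assumptions.

```python
import numpy as np
from scipy import stats

def shepherds_pi(x, y, n_boot=200, threshold=6.0, seed=0):
    """Sketch of Shepherd's pi correlation.

    Outliers are flagged by bootstrapping the (squared) Mahalanobis
    distance of each observation from the bivariate bootstrap mean;
    points whose average distance is >= threshold are excluded, then
    Spearman's rho is computed and its p-value doubled to account for
    the outlier removal.
    """
    rng = np.random.default_rng(seed)
    data = np.column_stack([x, y]).astype(float)
    n = len(data)
    dists = np.zeros((n_boot, n))
    for b in range(n_boot):
        # Resample with replacement; measure every original point's
        # distance from this bootstrap sample's mean and covariance.
        sample = data[rng.integers(0, n, size=n)]
        mu = sample.mean(axis=0)
        inv_cov = np.linalg.pinv(np.cov(sample, rowvar=False))
        diff = data - mu
        dists[b] = np.einsum('ij,jk,ik->i', diff, inv_cov, diff)
    keep = dists.mean(axis=0) < threshold
    rho, p = stats.spearmanr(data[keep, 0], data[keep, 1])
    return rho, min(1.0, 2 * p), keep
```

The returned boolean mask makes the exclusions explicit, so each flagged point can still be inspected on biological grounds rather than discarded blindly.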
Naturally, this taxonomy is somewhat arbitrary, but when relating complex cognitive functions to brain measures we are unlikely to find very high r, except for trivial relationships (Yarkoni, 2009). Their failure to find reported confidence intervals in the literature is also puzzling, because it does not accurately reflect the published work they considered. For example, our study, reproduced in their Figure 2A, reported bootstrapped 95% confidence intervals in the figure (Schwarzkopf et al., 2011). They also do not consider important aspects of what confidence intervals reflect. A confidence interval indicates the certainty with which the effect size can be estimated. However, it depends on three factors: the strength of the correlation, the sample size, and the data distribution. Because Pearson correlation assumes a Gaussian distribution, we can predict the confidence interval for any given r. If the bootstrapped confidence interval differs from this prediction, the data probably do not meet the assumptions. Rousselet and Pernet's example for bivariate outliers (their Figure 1D) illustrates this: the predicted confidence interval for r = 0.49 with n = 17 should be (0.01, 0.79). However, the bootstrapped confidence interval for this example is (−0.19, 0.87), much wider and overlapping zero. This indicates that outliers skew the correlation and that it should not be considered significant. Compare this to Figure 1C-1 (their Figure 2A): the nominal confidence interval should be (−0.65, −0.02); the actual bootstrapped interval is very similar: (−0.67, −0.03). Therefore, the use of Pearson/Spearman correlation was justified here. We propose simple guidelines to follow when testing correlations. First, use Spearman's rho, because it captures monotonic non-linear relationships. Second, bootstrap confidence intervals. Third, if the bootstrapped interval differs from the nominal interval, apply Shepherd's pi as a more robust test.
Fourth, estimate the reliability of individual observations, especially in cases where outliers strongly affect results. Outliers are frequently the result of artifacts or measurement error. Our last point highlights an important general concern we have with Rousselet and Pernet's argument. Statistical tests are important tools for researchers interpreting their data. However, the goal of neuroscience is to answer biologically relevant questions, not to produce statistically significant results. No statistical procedure can determine whether a biological question is valid or a theory is sound. Rather, one has to inspect each finding and each data point in its own right, evaluating data quality and potential confounds on a case-by-case basis. Outliers should not be determined solely by statistical tests but must take biological interpretation into account (Bertolino, 2011; Schott and Duzel, 2011). And finally, there is only one way any finding can be considered truly significant: when, upon repeated replication, it passes the test of time.
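The "nominal" confidence interval referred to above follows from the standard Fisher z-transform, which is what the Gaussian assumption predicts for a given r and n. A minimal sketch (the function name `pearson_ci` is ours):

```python
import math
from statistics import NormalDist

def pearson_ci(r, n, alpha=0.05):
    """Nominal confidence interval for Pearson's r under the Gaussian
    assumption, via the Fisher z-transform: z = atanh(r) is roughly
    normal with standard error 1/sqrt(n - 3)."""
    z = math.atanh(r)
    se = 1.0 / math.sqrt(n - 3)
    crit = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for 95%
    return math.tanh(z - crit * se), math.tanh(z + crit * se)

# Rousselet and Pernet's bivariate-outlier example: r = 0.49, n = 17
lo, hi = pearson_ci(0.49, 17)  # ≈ (0.01, 0.79)
```

A bootstrapped interval much wider than this nominal one, as in that example, is the warning sign described above that the Gaussian assumption is violated.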


Current Biology | 2014

Perceptual load affects spatial tuning of neuronal populations in human early visual cortex

Benjamin de Haas; D. Samuel Schwarzkopf; Elaine J. Anderson; Geraint Rees

Withdrawal of attention from a visual scene as a result of perceptual load modulates overall levels of activity in human visual cortex [1], but its effects on cortical spatial tuning properties are unknown. Here we show, using functional magnetic resonance imaging (fMRI), that attentional load at fixation affects the spatial tuning of population receptive fields (pRFs) in early visual cortex (V1–3). We found that, compared to low perceptual load, high perceptual load yielded a 'blurrier' representation of the visual field surrounding the attended location and a centrifugal 'repulsion' of pRFs. Additional data and control analyses confirmed that these effects were due neither to changes in overall activity levels nor to eye movements. These findings suggest neural 'tunnel vision' as a form of distractor suppression under high perceptual load.


Frontiers in Psychology | 2011

Auditory Stimulus Timing Influences Perceived Duration of Co-Occurring Visual Stimuli

Vincenzo Romei; Benjamin de Haas; Robert M. Mok; Jon Driver

There is increasing interest in multisensory influences upon sensory-specific judgments, such as when auditory stimuli affect visual perception. Here we studied whether the duration of an auditory event can objectively affect the perceived duration of a co-occurring visual event. On each trial, participants were presented with a pair of successive flashes and had to judge whether the first or second was longer. Two beeps were presented with the flashes. The order of short and long stimuli could be the same across audition and vision (audio–visual congruent) or reversed, so that the longer flash was accompanied by the shorter beep and vice versa (audio–visual incongruent); or the two beeps could have the same duration as each other. Beeps and flashes could onset synchronously or asynchronously. In a further control experiment, the beep durations were much longer (tripled) than the flashes. Results showed that visual duration discrimination sensitivity (d′) was significantly higher for congruent (and significantly lower for incongruent) audio–visual synchronous combinations, relative to the visual-only presentation. This effect was abolished when auditory and visual stimuli were presented asynchronously, or when sound durations tripled those of flashes. We conclude that the temporal properties of co-occurring auditory stimuli influence the perceived duration of visual stimuli and that this can reflect genuine changes in visual sensitivity rather than mere response bias.


The Journal of Neuroscience | 2016

Perception and Processing of Faces in the Human Brain Is Tuned to Typical Feature Locations

Benjamin de Haas; D. Samuel Schwarzkopf; Iván Vila Álvarez; Rebecca P. Lawson; Linda Henriksson; Nikolaus Kriegeskorte; Geraint Rees

Faces are salient social stimuli whose features attract a stereotypical pattern of fixations. The implications of this gaze behavior for perception and brain activity are largely unknown. Here, we characterize and quantify a retinotopic bias implied by typical gaze behavior toward faces, which leads to eyes and mouth appearing most often in the upper and lower visual field, respectively. We found that the adult human visual system is tuned to these contingencies. In two recognition experiments, recognition performance for isolated face parts was better when they were presented at typical, rather than reversed, visual field locations. The recognition cost of reversed locations was equal to ∼60% of that for whole-face inversion in the same sample. Similarly, an fMRI experiment showed that patterns of activity evoked by eye and mouth stimuli in the right inferior occipital gyrus could be separated with significantly higher accuracy when these features were presented at typical, rather than reversed, visual field locations. Our findings demonstrate that human face perception is determined not only by the local position of features within a face context, but by whether features appear at the typical retinotopic location given normal gaze behavior. Such location sensitivity may reflect fine-tuning of category-specific visual processing to retinal input statistics. Our findings further suggest that retinotopic heterogeneity might play a role in face inversion effects and in the understanding of conditions affecting gaze behavior toward faces, such as autism spectrum disorders and congenital prosopagnosia.

SIGNIFICANCE STATEMENT

Faces attract our attention and trigger stereotypical patterns of visual fixations, concentrating on inner features like eyes and mouth. Here we show that the visual system represents face features better when they are shown at retinal positions where they typically fall during natural vision. When facial features were shown at typical (rather than reversed) visual field locations, they were discriminated better by humans and could be decoded with higher accuracy from brain activity patterns in the right occipital face area. This suggests that brain representations of face features do not cover the visual field uniformly. It may help us understand the well-known face-inversion effect and conditions affecting gaze behavior toward faces, such as prosopagnosia and autism spectrum disorders.


Human Brain Mapping | 2016

Motor imagery of hand actions: Decoding the content of motor imagery from brain activity in frontal and parietal motor areas

Sebastian Pilgramm; Benjamin de Haas; Fabian Helm; Karen Zentgraf; Rudolf Stark; Jörn Munzert; Britta Krüger

How motor maps are organized while imagining actions is an intensely debated issue. It is particularly unclear whether motor imagery relies on action-specific representations in premotor and posterior parietal cortices. This study tackled this issue by attempting to decode the content of motor imagery from spatial patterns of Blood Oxygen Level Dependent (BOLD) signals recorded in the frontoparietal motor imagery network. During fMRI scanning, 20 right-handed volunteers worked on three experimental conditions and one baseline condition. In the experimental conditions, they had to imagine three different types of right-hand actions: an aiming movement, an extension-flexion movement, and a squeezing movement. The identity of imagined actions was decoded from the spatial patterns of BOLD signals they evoked in premotor and posterior parietal cortices using multivoxel pattern analysis. Results showed that the content of motor imagery (i.e., the action type) could be decoded significantly above chance level from the spatial patterns of BOLD signals in both frontal (PMC, M1) and parietal areas (SPL, IPL, IPS). An exploratory searchlight analysis revealed significant clusters in motor and motor-associated cortices, as well as in visual cortices. Hence, the data provide evidence that patterns of activity within premotor and posterior parietal cortex vary systematically with the specific type of hand action being imagined. Hum Brain Mapp 37:81–93, 2016.


Proceedings of the Royal Society B: Biological Sciences, 279(1749), 4955–4961 | 2012

Grey matter volume in early human visual cortex predicts proneness to the sound-induced flash illusion

Benjamin de Haas; Ryota Kanai; Lauri Jalkanen; Geraint Rees

Visual perception can be modulated by sounds. A drastic example of this is the sound-induced flash illusion: when a single flash is accompanied by two bleeps, it is sometimes perceived in an illusory fashion as two consecutive flashes. However, there are strong individual differences in proneness to this illusion. Some participants experience the illusion on almost every trial, whereas others almost never do. We investigated whether such individual differences in proneness to the sound-induced flash illusion were reflected in structural differences in brain regions whose activity is modulated by the illusion. We found that individual differences in proneness to the illusion were strongly and significantly correlated with local grey matter volume in early retinotopic visual cortex. Participants with smaller early visual cortices were more prone to the illusion. We propose that strength of auditory influences on visual perception is determined by individual differences in recurrent connections, cross-modal attention and/or optimal weighting of sensory channels.


NeuroImage | 2013

Auditory modulation of visual stimulus encoding in human retinotopic cortex

Benjamin de Haas; D. Samuel Schwarzkopf; Maren Urner; Geraint Rees

Sounds can modulate visual perception as well as neural activity in retinotopic cortex. Most studies in this context investigated how sounds change neural amplitude and oscillatory phase reset in visual cortex. However, recent studies in macaque monkeys show that congruence of audio-visual stimuli also modulates the amount of stimulus information carried by spiking activity of primary auditory and visual neurons. Here, we used naturalistic video stimuli and recorded the spatial patterns of functional MRI signals in human retinotopic cortex to test whether the discriminability of such patterns varied with the presence and congruence of co-occurring sounds. We found that incongruent sounds significantly impaired stimulus decoding from area V2 and there was a similar trend for V3. This effect was associated with reduced inter-trial reliability of patterns (i.e. higher levels of noise), but was not accompanied by any detectable modulation of overall signal amplitude. We conclude that sounds modulate naturalistic stimulus encoding in early human retinotopic cortex without affecting overall signal amplitude. Subthreshold modulation, oscillatory phase reset and dynamic attentional modulation are candidate neural and cognitive mechanisms mediating these effects.


Nature Communications | 2016

Cortical idiosyncrasies predict the perception of object size

Christina Moutsiana; Benjamin de Haas; Andriani Papageorgiou; Jelle A. van Dijk; Annika Balraj; John A. Greenwood; D. Samuel Schwarzkopf

Perception is subjective. Even basic judgments, like those of visual object size, vary substantially between observers and also across the visual field within the same observer. The way in which the visual system determines the size of objects remains unclear, however. We hypothesize that object size is inferred from neuronal population activity in V1 and predict that idiosyncrasies in cortical functional architecture should therefore explain individual differences in size judgments. Here we show results from novel behavioural methods and functional magnetic resonance imaging (fMRI) demonstrating that biases in size perception are correlated with the spatial tuning of neuronal populations in healthy volunteers. To explain this relationship, we formulate a population read-out model that directly links the spatial distribution of V1 representations to our perceptual experience of visual size. Taken together, our results suggest that the individual perception of simple stimuli is warped by idiosyncrasies in visual cortical organization.


Frontiers in Human Neuroscience | 2015

Comparing different stimulus configurations for population receptive field mapping in human fMRI.

Iván Vila Álvarez; Benjamin de Haas; Chris A. Clark; Geraint Rees; D. Samuel Schwarzkopf

Population receptive field (pRF) mapping is a widely used approach to measuring aggregate human visual receptive field properties by recording non-invasive signals using functional MRI. Despite growing interest, no study to date has systematically investigated the effects of different stimulus configurations on pRF estimates from human visual cortex. Here we compared the effects of three different stimulus configurations on a model-based approach to pRF estimation: size-invariant bars and eccentricity-scaled bars defined in Cartesian coordinates and traveling along the cardinal axes, and a novel simultaneous “wedge and ring” stimulus defined in polar coordinates, systematically covering polar and eccentricity axes. We found that the presence or absence of eccentricity scaling had a significant effect on goodness of fit and pRF size estimates. Further, variability in pRF size estimates was directly influenced by stimulus configuration, particularly for higher visual areas including V5/MT+. Finally, we compared eccentricity estimation between phase-encoded and model-based pRF approaches. We observed a tendency for more peripheral eccentricity estimates using phase-encoded methods, independent of stimulus size. We conclude that both eccentricity scaling and polar rather than Cartesian stimulus configuration are important considerations for optimal experimental design in pRF mapping. While all stimulus configurations produce adequate estimates, simultaneous wedge and ring stimulation produced higher fit reliability, with a significant advantage in reduced acquisition time.

Collaboration


Dive into Benjamin de Haas's collaborations.

Top Co-Authors

Geraint Rees (University College London)

Jon Driver (University College London)

Annika Balraj (George Washington University)

Chris A. Clark (University College London)