Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Aaron Clarke is active.

Publication


Featured research published by Aaron Clarke.


Journal of Vision | 2014

Is there a common factor for vision?

Céline Cappe; Aaron Clarke; Christine Mohr; Michael H. Herzog

In cognition, common factors play a crucial role. For example, different types of intelligence are highly correlated, pointing to a common factor, which is often called g. One might expect that a similar common factor would also exist for vision. Surprisingly, no one in the field has addressed this issue. Here, we provide the first evidence that there is no common factor for vision. We tested 40 healthy students’ performance in six basic visual paradigms: visual acuity, vernier discrimination, two visual backward masking paradigms, Gabor detection, and bisection discrimination. One might expect that performance levels on these tasks would be highly correlated because some individuals generally have better vision than others due to superior optics, better retinal or cortical processing, or enriched visual experience. However, only four out of 15 correlations were significant, two of which were nontrivial. These results cannot be explained by high intraobserver variability or ceiling effects because test–retest reliability was high and the variance in our student population is commensurate with that from other studies with well-sighted populations. Using a variety of tests (e.g., principal components analysis, Bayes’ theorem, test–retest reliability), we show the robustness of our null results. We suggest that neuroplasticity operates during everyday experience to generate marked individual differences. Our results apply only to the normally sighted population (i.e., restricted range sampling). For the entire population, including those with degenerate vision, we expect different results.
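The "15 correlations" in the abstract follows from the number of unique task pairs among six paradigms, C(6, 2) = 15. A minimal sketch of that pairwise-correlation analysis, using simulated scores rather than the study's data:

```python
# Illustrative sketch (synthetic data, not the study's measurements):
# six tasks yield C(6, 2) = 15 unique pairwise correlations.
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
n_subjects, n_tasks = 40, 6
# Simulated performance scores for 40 observers on 6 visual tasks
scores = rng.normal(size=(n_subjects, n_tasks))

pairs = list(combinations(range(n_tasks), 2))
print(len(pairs))  # 15

# Pearson correlation for each task pair
corrs = {(i, j): np.corrcoef(scores[:, i], scores[:, j])[0, 1]
         for i, j in pairs}
```

With a true common factor, most of these 15 coefficients would be sizable and positive; the study found only four significant correlations.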


Vision Research | 2011

Hemifield Asymmetry in the Potency of Exogenous Auditory and Visual Cues

Yamaya Sosa; Aaron Clarke; Mark E. McCourt

Neurologically normal subjects misperceive the midpoints of lines (PSE) as reliably leftward of veridical center, a phenomenon known as pseudoneglect. This leftward bias reflects the dominance of the right cerebral hemisphere in deploying spatial attention. Transient visual cues, delivered to either the left or right endpoints of lines, modulate PSE such that leftward biases are increased by leftward cues, and are decreased by rightward cues, relative to a no-cue control condition. We ask whether lateralized auditory cues can similarly influence PSE in a tachistoscopic visual line bisection task, and describe how visual and auditory cues, in spatially synergistic or antagonistic combinations, jointly influence PSE. Our results demonstrate that whereas auditory and visual cues both modulate PSE, visual cues are overall more potent than auditory cues. Visual and auditory cues are weighted such that visual cues are significantly more potent than auditory cues when visual cues are delivered to left hemispace. Visual and auditory cues are equipotent when visual cues are delivered to right hemispace. These results are consistent with the existence of independent lateralized networks governing the deployment of visuospatial and audiospatial attention. An analysis of the weighting of unisensory visual and auditory cues which optimally predicts PSE in multisensory cue conditions shows that cues combine additively. There was no evidence for a superadditive multisensory cue combination.
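The additive combination described above can be sketched as a weighted linear model. The numbers and weights below are hypothetical, chosen only to illustrate how unisensory cue effects could be fit to multisensory PSE shifts:

```python
# Hedged sketch (synthetic numbers, not the paper's data) of an additive
# cue-combination model: the multisensory PSE shift is modeled as a
# weighted sum of the unisensory visual and auditory cue effects.
import numpy as np

# Hypothetical unisensory PSE shifts (degrees) in four cue conditions
visual_shift = np.array([-0.8, -0.8, 0.5, 0.5])    # visual cue L/L/R/R
auditory_shift = np.array([-0.3, 0.4, -0.3, 0.4])  # auditory cue L/R/L/R
# Synthetic multisensory shifts generated additively, with the visual cue
# weighted more heavily than the auditory cue (w_v = 1.2, w_a = 0.6)
multi_shift = 1.2 * visual_shift + 0.6 * auditory_shift

# Least-squares fit recovers the additive weights:
# multi ~ w_v * visual + w_a * auditory
X = np.column_stack([visual_shift, auditory_shift])
w, *_ = np.linalg.lstsq(X, multi_shift, rcond=None)
print(w)  # recovers [1.2, 0.6]
```

A superadditive combination would require an interaction term (e.g., a visual-by-auditory product) to fit; the paper reports that the purely additive model sufficed.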


Vision Research | 2016

A computational model for reference-frame synthesis with applications to motion perception

Aaron Clarke; Haluk Ogmen; Michael H. Herzog

As discovered by the Gestaltists, in particular by Duncker, we often perceive motion within a non-retinotopic reference frame. For example, the motion of a reflector on a bicycle appears to be circular, whereas it traces out a cycloidal path with respect to external world coordinates. The reflector's motion appears circular because the human brain subtracts the horizontal motion of the bicycle from the reflector motion; the bicycle serves as a reference frame for the reflector. Here, we present a general mathematical framework, based on vector fields, to explain non-retinotopic motion processing. Using four types of non-retinotopic motion paradigms, we show in detail how the theory works. For example, we show how non-retinotopic motion in the Ternus-Pikler display can be computed.
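The bicycle-reflector example above is easy to verify numerically. A minimal sketch (assumed wheel radius and sampling, not taken from the paper): subtracting the hub's horizontal translation from the cycloid leaves a circle about the hub.

```python
# Sketch of the reflector example: in world coordinates a rim reflector on
# a wheel rolling without slipping traces a cycloid; subtracting the
# bicycle's horizontal motion (the reference frame) leaves circular motion.
import numpy as np

r = 1.0                              # assumed wheel radius
t = np.linspace(0, 4 * np.pi, 500)   # wheel rotation angle

# Cycloid: reflector position in external world coordinates
x_world = r * (t - np.sin(t))
y_world = r * (1 - np.cos(t))

# Subtract the hub's horizontal translation (x = r*t) to enter the
# bicycle's reference frame
x_bike = x_world - r * t
y_bike = y_world

# In the bicycle frame the reflector stays at distance r from the hub (0, r)
radius = np.sqrt(x_bike**2 + (y_bike - r) ** 2)
print(radius.min(), radius.max())  # both equal r
```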


Frontiers in Psychology | 2014

Visual crowding illustrates the inadequacy of local vs. global and feedforward vs. feedback distinctions in modeling visual perception

Aaron Clarke; Michael H. Herzog; Gregory Francis

Experimentalists tend to classify models of visual perception as being either local or global, and as involving either feedforward or feedback processing. We argue that these distinctions are not as helpful as they might appear, and we illustrate the issues by analyzing models of visual crowding as an example. Recent studies have argued that crowding cannot be explained by purely local processing, and that instead global factors such as perceptual grouping are crucial. Theories of perceptual grouping, in turn, often invoke feedback connections as a way to account for their global properties. We examined three types of crowding models that are representative of global processing models, two of which employ feedback processing: a model based on Fourier filtering, a feedback neural network, and a specific feedback neural architecture that explicitly models perceptual grouping. Simulations demonstrate that crucial empirical findings are not accounted for by any of these models. We conclude that empirical investigations that reject a local or feedforward architecture offer almost no constraints for model construction, as there are an uncountable number of global and feedback systems. We propose that identifying a system as local or global and feedforward or feedback is less important than identifying the system's computational details. Only the latter information can constrain model development and promote quantitative explanations of complex phenomena.


Vision Research | 2017

About individual differences in vision

Lukasz Grzeczkowski; Aaron Clarke; Gregory Francis; Fred W. Mast; Michael H. Herzog

In cognition, audition, and somatosensation, performance strongly correlates between different paradigms, which suggests the existence of common factors. In contrast, visual performance in seemingly very similar tasks, such as visual and bisection acuity, is hardly related, i.e., pairwise correlations between performance levels are low even though test–retest reliability is high. Here we show similar results for visual illusions. Consistent with previous findings, we found significant correlations between the illusion magnitudes of the Ebbinghaus and Ponzo illusions, but this relationship was the only significant correlation out of 15 further comparisons. Similarly, we found a significant link for the Ponzo illusion with both mental imagery and cognitive disorganization. However, most other correlations between illusions and personality were not significant. The findings suggest that vision is highly specific, i.e., there is no common factor. While this proposal does not exclude strong and stable associations between certain illusions, and between certain illusions and personality traits, these associations seem to be the exception rather than the rule.


Frontiers in Computational Neuroscience | 2014

Why vision is not both hierarchical and feedforward

Michael H. Herzog; Aaron Clarke

In classical models of object recognition, basic features (e.g., edges and lines) are first analyzed by independent filters that mimic the receptive-field profiles of V1 neurons. In a feedforward fashion, the outputs of these filters are fed to filters at the next processing stage, which pool information across several filters from the previous level, and so forth at subsequent processing stages. Low-level processing determines high-level processing, and information lost at lower stages is irretrievably lost. Models of this type have proven very successful in many fields of vision but have failed to explain object recognition in general. Here, we present experiments showing, first, that, similar to demonstrations from the Gestaltists, figural aspects determine low-level processing (as much as the other way around). Second, performance on a single element depends on all the other elements in the visual scene: small changes in the overall configuration can lead to large changes in performance. Third, grouping of elements is key. Only if we know how elements group across the entire visual field can we determine performance on individual elements. This challenges the classical stereotypical filtering approach, which is at the very heart of most vision models.


Journal of Vision | 2013

Does spatio-temporal filtering account for nonretinotopic motion perception? Comment on Pooresmaeili, Cicchini, Morrone, and Burr (2012)

Aaron Clarke; Marc Repnow; Haluk Ogmen; Michael H. Herzog

Keywords: Ternus-Pikler display; retinotopic processing; nonretinotopic processing; spatio-temporal filters. DOI: 10.1167/13.10.19


PLOS ONE | 2015

Human and Machine Learning in Non-Markovian Decision Making

Aaron Clarke; Johannes Friedrich; Elisa M. Tartaglia; Silvia Marchesotti; Walter Senn; Michael H. Herzog

Humans can learn under a wide variety of feedback conditions. Reinforcement learning (RL), where a series of rewarded decisions must be made, is a particularly important type of learning. Computational and behavioral studies of RL have focused mainly on Markovian decision processes, where the next state depends on only the current state and action. Little is known about non-Markovian decision making, where the next state depends on more than the current state and action. Learning is non-Markovian, for example, when there is no unique mapping between actions and feedback. We have produced a model based on spiking neurons that can handle these non-Markovian conditions by performing policy gradient descent [1]. Here, we examine the model’s performance and compare it with human learning and a Bayes-optimal reference, which provides an upper bound on performance. We find that, in all cases, our spiking-neuron population model describes human performance well.
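The phrase "no unique mapping between actions and feedback" can be made concrete with a toy example (purely illustrative, not the paper's task): if the reward depends on the previous action as well as the current one, the same current action can yield different feedback, so the observable present does not determine the outcome.

```python
# Illustrative non-Markovian feedback rule: reward depends on history
# (the previous action), not just on the current action.
def reward(prev_action: int, action: int) -> int:
    # Reward 1 only when the current action differs from the previous one
    return 1 if action != prev_action else 0

# Identical current action, different histories -> different feedback,
# so a learner conditioning only on the present cannot predict reward.
print(reward(prev_action=0, action=1))  # 1
print(reward(prev_action=1, action=1))  # 0
```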


Journal of Vision | 2016

Motion-based nearest vector metric for reference frame selection in the perception of motion

Mehmet N. Agaoglu; Aaron Clarke; Michael H. Herzog; Haluk Ogmen

We investigated how the visual system selects a reference frame for the perception of motion. Two concentric arcs underwent circular motion around the center of the display, where observers fixated. The outer (target) arc's angular velocity profile was modulated by a sine wave midflight, whereas the inner (reference) arc moved at a constant angular speed. The task was to report whether the target reversed its direction of motion at any point during its motion. We investigated the effects of spatial and figural factors by systematically varying the radial and angular distances between the arcs, and their relative sizes. We found that the effectiveness of the reference frame decreases with increasing radial- and angular-distance measures. Drastic changes in the relative sizes of the arcs did not influence motion reversal thresholds, suggesting no influence of stimulus form on perceived motion. We also investigated the effect of common velocity by introducing velocity fluctuations to the reference arc as well. We found no effect of whether or not a reference frame has a constant motion. We examined several form- and motion-based metrics, which could potentially unify our findings. We found that a motion-based nearest vector metric can fully account for all the data reported here. These findings suggest that the selection of reference frames for motion processing does not result from a winner-take-all process, but instead can be explained by a field whose strength decreases with the distance between the nearest motion vectors, regardless of the form of the moving objects.


Journal of Vision | 2016

What crowding can tell us about object representations

Mauro Manassi; Sophie Lonchampt; Aaron Clarke; Michael H. Herzog

In crowding, perception of a target usually deteriorates when flanking elements are presented next to the target. Surprisingly, adding further flankers can lead to a release from crowding. In previous work we showed that, for example, vernier offset discrimination at 9° of eccentricity deteriorated when a vernier was embedded in a square. Adding further squares improved performance. The more squares presented, the better the performance, extending across 20° of the visual field. Here, we show that very similar results hold true for shapes other than squares, including unfamiliar, irregular shapes. Hence, uncrowding is not restricted to simple and familiar shapes. Our results provoke the question of whether any type of shape is represented at any location in the visual field. Moreover, small changes in the orientation of the flanking shapes led to strong increases in crowding strength. Hence, highly specific shape-specific interactions across large parts of the visual field determine vernier acuity.

Collaboration


Dive into Aaron Clarke's collaborations.

Top Co-Authors

Michael H. Herzog
École Polytechnique Fédérale de Lausanne

Lukasz Grzeczkowski
École Polytechnique Fédérale de Lausanne

Stéphane Rainville
North Dakota State University

Albulena Shaqiri
École Polytechnique Fédérale de Lausanne

Mauro Manassi
University of California