Ómar I. Jóhannesson
University of Iceland
Publications
Featured research published by Ómar I. Jóhannesson.
Experimental Brain Research | 2012
Ómar I. Jóhannesson; Árni Gunnar Ásgeirsson; Árni Kristjánsson
There are numerous asymmetries in anatomy between the nasal and temporal hemiretinae, which have been connected to various asymmetries in behavioral performance. These include asymmetries in Vernier acuity, saccade selection, and attentional function, in addition to some evidence for latency differences for saccadic eye movements. There is also evidence for stronger retinotectal neural projection from the nasal than the temporal hemiretina. There is, accordingly, good reason to predict asymmetries in saccadic performance depending on which hemifield the saccade trigger stimuli are presented in, but the evidence on this is mixed. We tested for asymmetries in both saccade latency and landing point accuracy in a variety of different saccade tasks. We found no evidence for any asymmetries in saccade latency and only modest evidence for asymmetries in landing point accuracy. While this lack of asymmetry is surprising in light of previous findings of attentional asymmetries, it may reflect that cortical input to midbrain eye control centers mitigates any retinal and retinotectal asymmetry.
Attention Perception & Psychophysics | 2014
Árni Kristjánsson; Ómar I. Jóhannesson
Although response times (RTs) are the dependent measure of choice in the majority of studies of visual attention, changes in RTs can be hard to interpret. First, they are inherently ambiguous, since they may reflect a change in the central tendency or skew (or both) of a distribution. Second, RT measures may lack sensitivity, since meaningful changes in RT patterns may not be picked up if they reflect two or more processes having opposing influences on mean RTs. Here we characterize RT distributions for repetition priming in visual search by fitting ex-Gaussian functions to the distributions. We focus on feature and conjunction search tasks, since priming effects in these tasks are often thought to reflect similar mechanisms. As expected, both tasks resulted in strong priming effects when target and distractor identities repeated, but a large difference between feature and conjunction search was also seen, in that the σ parameter (reflecting the standard deviation of the Gaussian component) was far more affected by search repetition in conjunction than in feature search. Although caution should clearly be used when particular parameter estimates are matched to specific functions or processes, our results suggest that analyses of RT distributions can inform theoretical accounts of priming in visual search tasks: the two search types show quite different repetition effects, suggesting that priming in the two paradigms partly reflects different mechanisms.
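The ex-Gaussian treats each RT as the sum of a Gaussian component (mean μ, standard deviation σ) and an exponential tail (mean τ). As a rough illustration of this kind of analysis (not the authors' code, and using simulated rather than study data), the sketch below fits an ex-Gaussian to RTs with SciPy's exponnorm distribution, which parameterizes the same family via K = τ/σ.

```python
# Minimal sketch: fitting an ex-Gaussian to (simulated) RT data with SciPy.
# Not the authors' analysis code; all values are placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
mu, sigma, tau = 0.45, 0.05, 0.15          # "true" parameters for the simulation (s)
rts = rng.normal(mu, sigma, 1000) + rng.exponential(tau, 1000)

# SciPy's exponnorm uses K = tau / sigma, loc = mu, scale = sigma.
K_hat, loc_hat, scale_hat = stats.exponnorm.fit(rts)
mu_hat, sigma_hat, tau_hat = loc_hat, scale_hat, K_hat * scale_hat
print(f"mu = {mu_hat:.3f} s, sigma = {sigma_hat:.3f} s, tau = {tau_hat:.3f} s")
```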
Experimental Brain Research | 2013
Ómar I. Jóhannesson; Árni Kristjánsson
We tested saccadic peak velocities during monocular and binocular presentation. While the main sequence linear increase in peak velocity as a function of saccade amplitude is well documented, our results demonstrate violations of the main sequence. Peak velocities during monocular presentation were considerably higher toward temporal than nasal stimuli. This nasal–temporal asymmetry (NTA) was not explained by amplitude differences and was most pronounced for the lowest amplitudes tested, decreasing with increasing amplitude. Under binocular presentation, this NTA was much smaller. While the exact reasons for this difference in peak velocities between hemifields are unclear at present, we propose that anatomical NTAs result in stronger signals from the nasal than the temporal retina, leading to higher peak velocities into the temporal visual hemifield. NTAs in peak velocity are consistent with NTAs in attentional choice and attentional function, which might also be explained by anatomical NTA.
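For readers unfamiliar with the main sequence, the relationship is commonly summarized, over moderate amplitudes, as an approximately linear increase of peak velocity with amplitude. The sketch below (simulated data with invented slopes, not the study's measurements) shows how separate linear fits per stimulus direction could expose a velocity asymmetry that amplitude differences alone do not explain.

```python
# Minimal sketch: separate main-sequence (peak velocity vs. amplitude) fits for
# two stimulus directions. Simulated data only; slopes and noise are invented.
import numpy as np

rng = np.random.default_rng(1)
amplitude = rng.uniform(2, 12, 200)                            # saccade amplitude (deg)
peak_temporal = 40 + 28 * amplitude + rng.normal(0, 15, 200)   # deg/s, hypothetical
peak_nasal    = 40 + 25 * amplitude + rng.normal(0, 15, 200)   # deg/s, hypothetical

for label, v in (("temporal", peak_temporal), ("nasal", peak_nasal)):
    slope, intercept = np.polyfit(amplitude, v, 1)
    print(f"{label}: peak velocity ≈ {intercept:.1f} + {slope:.1f} × amplitude")
```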
I-perception | 2016
Ómar I. Jóhannesson; Ian M. Thornton; Irene Smith; Andrey Chetverikov; Árni Kristjánsson
A popular model of the function of selective visual attention involves search where a single target is to be found among distractors. For many scenarios, a more realistic model involves search for multiple targets of various types, since natural tasks typically do not involve a single target. Here we present results from a novel multiple-target foraging paradigm. We compare finger foraging, where observers cancel a set of predesignated targets by tapping them, with gaze foraging, where observers cancel items by fixating them for 100 ms. During finger foraging, for most observers, there was a large difference between foraging based on a single feature, where observers switched easily between target types, and foraging based on a conjunction of features, where observers tended to stick to one target type. The pattern was notably different during gaze foraging, where these condition differences were smaller. Two conclusions follow: (a) The fact that a sizeable number of observers (in particular during gaze foraging) had little trouble switching between different target types raises challenges for many prominent theoretical accounts of visual attention and working memory. (b) While caveats must be noted for the comparison of gaze and finger foraging, the results suggest that selection mechanisms for gaze and pointing have different operational constraints.
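As an aside on how dwell-based gaze cancellation can be implemented, the following sketch marks an item as cancelled once gaze has remained within a radius of it for 100 ms. This is only an illustrative implementation under assumed parameters (item positions, selection radius, and the source of gaze samples are hypothetical), not the experiment code.

```python
# Minimal sketch of dwell-based gaze cancellation (not the experiment code).
# Item positions, the selection radius, and the gaze samples are hypothetical.
DWELL_MS = 100        # dwell threshold, as in the paradigm described above
RADIUS_PX = 40        # hypothetical selection radius around each item (pixels)

def within(gaze, item, radius=RADIUS_PX):
    gx, gy = gaze
    ix, iy = item["pos"]
    return (gx - ix) ** 2 + (gy - iy) ** 2 <= radius ** 2

def update_cancellations(items, gaze, now_ms):
    """Mark an item cancelled once gaze has dwelt on it for DWELL_MS."""
    for item in items:
        if item.get("cancelled"):
            continue
        if within(gaze, item):
            item.setdefault("dwell_start", now_ms)
            if now_ms - item["dwell_start"] >= DWELL_MS:
                item["cancelled"] = True
        else:
            item.pop("dwell_start", None)   # gaze left the item: reset its dwell timer

# Example with fabricated gaze samples arriving every 10 ms, all near the first item:
items = [{"pos": (100, 120)}, {"pos": (300, 200)}]
for t in range(12):
    update_cancellations(items, gaze=(98, 118), now_ms=t * 10)
print([bool(i.get("cancelled")) for i in items])   # -> [True, False]
```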
international conference on computers helping people with special needs | 2016
Michal Bujacz; Karol Kropidlowski; Gabriel Ivanica; Alin Moldoveanu; Charalampos Saitis; Adam B. Csapo; György Wersényi; Simone Spagnol; Ómar I. Jóhannesson; Runar Unnthorsson; Mikolai Rotnicki; Piotr Witek
The paper summarizes a number of audio-related studies conducted by the Sound of Vision consortium, which focuses on the construction of a new prototype electronic travel aid for the blind. Different solutions for spatial audio were compared by testing sound localization accuracy in a number of setups, comparing plain stereo panning with generic and individual HRTFs, as well as testing different types of stereo headphones against custom-designed quadrophonic proximaural headphones. A number of proposed sonification approaches were tested by sighted and blind volunteers for accuracy and efficiency in representing simple virtual environments.
Brain Sciences | 2016
Ómar I. Jóhannesson; Oana Balan; Runar Unnthorsson; Alin Moldoveanu; Árni Kristjánsson
The Sound of Vision project involves developing a sensory substitution device aimed at creating and conveying a rich auditory representation of the surrounding environment to the visually impaired. The feasibility of such an approach is, however, strongly constrained by neural flexibility, the possibilities of sensory substitution, and adaptation to changed sensory input. We review evidence for such flexibility from various perspectives. We discuss neuroplasticity of the adult brain, with an emphasis on functional changes in the visually impaired compared to sighted people. We discuss effects of adaptation on brain activity, in particular short-term and long-term effects of repeated exposure to particular stimuli. We then discuss evidence regarding sensory substitution of the kind that Sound of Vision involves, and finally evidence for adaptation to changes in the auditory environment. We conclude that the available evidence is encouraging: sensory substitution enterprises such as Sound of Vision are quite feasible.
Behavior Research Methods | 2018
Mark Torrance; Guido Nottbusch; Rui Alves; Barbara Arfé; Lucile Chanquoy; Evgeny Chukharev-Hudilainen; Ioannis C. Dimakos; Raquel Fidalgo; Jukka Hyönä; Ómar I. Jóhannesson; George Madjarov; Dennis N. Pauly; Per Henning Uppstad; Luuk Van Waes; Michael Vernon; Åsa Wengelin
We describe the Multilanguage Written Picture Naming Dataset. This gives trial-level data and time and agreement norms for written naming of the 260 pictures of everyday objects that compose the colorized Snodgrass and Vanderwart picture set (Rossion & Pourtois in Perception, 33, 217–236, 2004). Adult participants gave keyboarded responses in their first language under controlled experimental conditions (N = 1,274, with subsamples responding in Bulgarian, Dutch, English, Finnish, French, German, Greek, Icelandic, Italian, Norwegian, Portuguese, Russian, Spanish, and Swedish). We measured the time to initiate a response (RT) and interkeypress intervals, and calculated measures of name and spelling agreement. There was a tendency across all languages for quicker RTs to pictures with higher familiarity, image agreement, and name frequency, and with higher name agreement. Effects of spelling agreement and effects on output rates after writing onset were present in some, but not all, languages. Written naming therefore shows name retrieval effects that are similar to those found in speech, but our findings suggest the need for cross-language comparisons as we seek to understand the orthographic retrieval and/or assembly processes that are specific to written output.
Experimental Brain Research | 2018
Ómar I. Jóhannesson; Jay A. Edelman; Bjarki Dalsgaard Sigurþórsson; Árni Kristjánsson
Express saccades have very short latencies and are often considered a special population of saccadic eye movements. Recent evidence suggests that express saccade generation in humans increases with training, and that this training is independent of the actual saccade vector being trained. We assessed the time course of these training-induced increases in express saccade generation, how they differ between the nasal and temporal hemifields, and whether they transfer from the trained to the untrained eye. We also measured the effects of training on saccade latencies more generally, and on peak velocities. The training effect transferred between the nasal and temporal hemifields and between the trained and untrained eyes. More surprisingly, we found an asymmetric effect of training on express saccade proportions: before training, express saccade proportions were higher for saccades made into the nasal hemifield, but with training this reversed. This training-induced asymmetry was also observed in overall saccade latencies, showing how training can unmask nasal/temporal asymmetries in saccade latencies. Finally, we report for the first time that saccadic peak velocities increased with training, independently of changes in amplitude.
Attention Perception & Psychophysics | 2018
Andrey Chetverikov; Maria Kuvaldina; W. Joseph MacInnes; Ómar I. Jóhannesson; Árni Kristjánsson
People often miss salient events that occur right in front of them. This phenomenon, known as change blindness, reveals the limits of visual awareness. Here, we investigate the role of implicit processing in change blindness using an approach that allows partial dissociation of covert and overt attention. Traditional gaze-contingent paradigms adapt the display in real time according to current gaze position. We compare such a paradigm with a newly designed mouse-contingent paradigm where the visual display changes according to the real-time location of a user-controlled mouse cursor, effectively allowing comparison of change detection with mainly overt attention (gaze-contingent display; Experiment 2) and untethered overt and covert attention (mouse-contingent display; Experiment 1). We investigate implicit indices of target detection during change blindness in eye movement and behavioral data, and test whether affective devaluation of unnoticed targets may contribute to change blindness. The results show that unnoticed targets are processed implicitly, but that the processing is shallower than if the target is consciously detected. Additionally, the partial untethering of covert attention with the mouse-contingent display changes the pattern of search and leads to faster detection of the changing target. Finally, although it remains possible that the deployment of covert attention is linked to implicit processing, the results fall short of establishing a direct connection.
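For concreteness, one common way to implement a mouse-contingent display is a "moving window" in which each frame is degraded (e.g., blurred) everywhere except around the current cursor position. The sketch below illustrates that general idea under assumed parameters (image size, window radius, blur width); it is not necessarily how the display in these experiments was constructed.

```python
# Minimal sketch of a mouse-contingent "moving window" frame (one plausible
# implementation, not necessarily the display used in these experiments).
import numpy as np
from scipy.ndimage import gaussian_filter

def mouse_contingent_frame(image, cursor_xy, radius=80, blur_sigma=6):
    """Blur `image` outside a circular window centred on the cursor."""
    blurred = gaussian_filter(image, sigma=blur_sigma)
    yy, xx = np.mgrid[0:image.shape[0], 0:image.shape[1]]
    cx, cy = cursor_xy
    inside = (xx - cx) ** 2 + (yy - cy) ** 2 <= radius ** 2
    return np.where(inside, image, blurred)

# Each iteration of the experiment loop would redraw with the latest cursor sample:
scene = np.random.default_rng(2).random((480, 640))        # stand-in grayscale scene
frame = mouse_contingent_frame(scene, cursor_xy=(320, 240))
```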
Journal of Vision | 2015
Árni Kristjánsson; Ómar I. Jóhannesson; Andrey Chetverikov; Irene Smith; Ian M. Thornton
A popular model of the function of visual attention involves visual search where a single target is to be found among multiple distractors. A more realistic model may, however, arguably involve search for multiple targets of various types in the same search environment, since our goals at any one time may not necessarily be so narrow as to involve a single target. Here we present results from a novel paradigm involving foraging for different target types according to varying constraints. We test such foraging both with a finger-foraging paradigm, where observers must tap a pre-designated number of items, and an analogous eye-gaze foraging task, where observers cancel the items by fixating on them for 100 ms. Supporting previous reports, we find a dramatic difference between feature and conjunction foraging for a majority of observers. A notable subset of observers has little trouble switching between different target types during foraging, however, even in a difficult foraging task where observers must forage for two targets among distractors that are defined by conjunctions of features. There is a significant correspondence between eye and finger foraging: the observers who have little trouble switching between different target types during finger foraging also tend to have little trouble switching during gaze foraging. This finding establishes an important connection between eye gaze and finger tapping, most likely reflecting the close correspondence between eye–hand coordination and visual attention. Finally, the fact that some observers can switch between different target types with relative ease raises challenges for many current theoretical accounts of vision and attention. Meeting abstract presented at VSS 2015.