Lore Thaler
Durham University
Publications
Featured research published by Lore Thaler.
Journal of Vision | 2007
James T. Todd; Lore Thaler; Tjeerd M. H. Dijkstra; Jan J. Koenderink; Astrid M. L. Kappers
Computational models for determining three-dimensional shape from texture based on local foreshortening or gradients of scaling are able to achieve accurate estimates of surface relief from an image when it is observed from the same visual angle with which it was photographed or rendered. These models produce conflicting predictions, however, when an image is viewed from a different visual angle. An experiment was performed to test these predictions, in which observers judged the apparent depth profiles of hyperbolic cylinders under a wide variety of conditions. The results reveal that the apparent patterns of relief from texture are systematically underestimated; convex surfaces appear to have greater depth than concave surfaces, large camera angles produce greater amounts of perceived depth than small camera angles, and the apparent depth-to-width ratio for a given image of a surface is greater for small viewing angles than for large viewing angles. Because these results are incompatible with all existing computational models, a new model is presented based on scaling contrast that can successfully account for all aspects of the data.
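The abstract does not spell out the scaling-contrast measure, but a plausible reading (an assumption on our part, not the paper's verbatim definition) is a Michelson-style contrast of the local texture scaling S across the image:

```latex
% Sketch of a scaling-contrast measure; the exact formulation in the
% paper may differ. S_max and S_min are the largest and smallest local
% texture-scaling values in the image.
C_S = \frac{S_{\max} - S_{\min}}{S_{\max} + S_{\min}},
\qquad \text{perceived relief} \propto C_S
```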
PLOS ONE | 2011
Lore Thaler; Stephen R. Arnott; Melvyn A. Goodale
Background: A small number of blind people are adept at echolocating silent objects simply by producing mouth clicks and listening to the returning echoes. Yet the neural architecture underlying this type of aid-free human echolocation has not been investigated. To tackle this question, we recruited echolocation experts, one early- and one late-blind, and measured functional brain activity in each of them while they listened to their own echolocation sounds.

Results: When we compared brain activity for sounds that contained both clicks and the returning echoes with brain activity for control sounds that did not contain the echoes, but were otherwise acoustically matched, we found activity in calcarine cortex in both individuals. Importantly, for the same comparison, we did not observe a difference in activity in auditory cortex. In the early-blind, but not the late-blind participant, we also found that the calcarine activity was greater for echoes reflected from surfaces located in contralateral space. Finally, in both individuals, we found activation in middle temporal and nearby cortical regions when they listened to echoes reflected from moving targets.

Conclusions: These findings suggest that processing of click-echoes recruits brain regions typically devoted to vision rather than audition in both early and late blind echolocation experts.
Vision Research | 2013
Lore Thaler; Alexander C. Schütz; Melvyn A. Goodale; Karl R. Gegenfurtner
People can direct their gaze at a visual target for extended periods of time. Yet, even during fixation the eyes make small, involuntary movements (e.g., tremor, drift, and microsaccades). This can be a problem during experiments that require stable fixation. The shape of a fixation target can be easily manipulated in the context of many experimental paradigms. Thus, from a purely methodological point of view, it would be useful to know whether a particular fixation-target shape minimizes involuntary eye movements during fixation, because that shape could then be used in experiments that require stable fixation. Based on this methodological motivation, the current experiments tested whether the shape of a fixation target can be used to reduce eye movements during fixation. In two separate experiments, subjects directed their gaze at a fixation target for 17 s on each trial. The shape of the fixation target varied from trial to trial and was drawn from a set of seven shapes whose use has been frequently reported in the literature. To determine stability of fixation, we computed spatial dispersion and microsaccade rate. We found that only a target shape that looks like a combination of a bull's eye and cross hair resulted in combined low dispersion and microsaccade rate. We therefore recommend the combination of bull's eye and cross hair as the fixation-target shape for experiments that require stable fixation.
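For readers who want to compute comparable stability measures from their own eye-tracking data, here is a minimal Python sketch. The dispersion measure and the median-based velocity-threshold microsaccade detector (in the spirit of Engbert & Kliegl, 2003) are reasonable defaults, not necessarily the authors' exact pipeline; the function name and all parameter values below are illustrative assumptions.

```python
import numpy as np

def fixation_stability(x, y, fs=1000.0, vel_threshold=6.0, min_dur=6):
    """Estimate spatial dispersion and microsaccade rate for one trial.

    x, y          : gaze position in degrees of visual angle (1-D arrays)
    fs            : sampling rate in Hz
    vel_threshold : velocity criterion in multiples of a robust SD
    min_dur       : minimum microsaccade duration in samples
    """
    # Spatial dispersion: root of summed positional variances (deg).
    dispersion = np.sqrt(np.var(x) + np.var(y))

    # Eye velocity via central differences (deg/s).
    vx = np.gradient(x) * fs
    vy = np.gradient(y) * fs

    # Median-based velocity SD (robust to saccadic outliers).
    sd_x = np.sqrt(np.median(vx**2) - np.median(vx)**2)
    sd_y = np.sqrt(np.median(vy**2) - np.median(vy)**2)

    # Samples exceeding the elliptic velocity criterion are candidates.
    crit = (vx / (vel_threshold * sd_x))**2 \
         + (vy / (vel_threshold * sd_y))**2 > 1

    # Count runs of consecutive above-threshold samples long enough
    # to qualify as microsaccades (each run is counted exactly once).
    n_saccades, run = 0, 0
    for above in crit:
        run = run + 1 if above else 0
        if run == min_dur:
            n_saccades += 1

    rate = n_saccades / (len(x) / fs)  # microsaccades per second
    return dispersion, rate
```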
Neuropsychologia | 2013
Stephen R. Arnott; Lore Thaler; Jennifer L. Milne; Daniel Kish; Melvyn A. Goodale
We have previously reported that an early-blind echolocating individual (EB) showed robust occipital activation when he identified distant, silent objects based on echoes from his tongue clicks (Thaler, Arnott, & Goodale, 2011). In the present study we investigated the extent to which echolocation activation in EB's occipital cortex reflected general echolocation processing per se versus feature-specific processing. In the first experiment, echolocation audio sessions were captured with in-ear microphones in an anechoic chamber or hallway alcove as EB produced tongue clicks in front of a concave or flat object covered in aluminum foil or a cotton towel. All eight echolocation sessions (2 shapes × 2 surface materials × 2 environments) were then randomly presented to him during a sparse-temporal-scanning fMRI session. While fMRI contrasts of chamber- versus alcove-recorded echolocation stimuli underscored the importance of auditory cortex for extracting echo information, main task comparisons demonstrated a prominent role of occipital cortex in shape-specific echo processing in a manner consistent with latent, multisensory cortical specialization. Specifically, relative to surface-composition judgments, shape judgments elicited greater BOLD activity in ventrolateral occipital areas and bilateral occipital pole. A second echolocation experiment involving shape judgments of objects located 20° to the left or right of straight ahead activated more rostral areas of EB's calcarine cortex relative to location judgments of those same objects and, as we previously reported, such calcarine activity was largest when the object was located in contralateral hemispace. Interestingly, other echolocating experts (i.e., a congenitally blind individual in Experiment 1, and a late-blind individual in Experiment 2) did not show the same pattern of feature-specific echo-processing calcarine activity as EB, suggesting the possible significance of early visual experience and early echolocation training. Together, our findings indicate that the echolocation activation in EB's occipital cortex is feature-specific, and that these object representations appear to be organized in a topographic manner.
Vision Research | 2005
James T. Todd; Lore Thaler; Tjeerd M. H. Dijkstra
Observers judged the apparent signs and magnitudes of surface slant from monocular textured images of convex or concave dihedral angles with fields of view varying between 5° and 60°. The results revealed that increasing the field of view or the regularity of the surface texture produced large increases in the magnitude of the perceptual gain (i.e., the judged slant divided by the ground truth). Additional regression analyses also revealed that observers' slant judgments were highly correlated with the range of texture densities (or spatial frequencies) in each display, which accounted for 96% of the variance among the different possible dihedral angles and fields of view.
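In equation form, the perceptual gain used here is simply

```latex
\text{gain} = \frac{\text{judged slant}}{\text{ground-truth slant}}
```

so a gain of 1 corresponds to veridical perception, and the reported effects of field of view and texture regularity are increases in this ratio.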
Frontiers in Human Neuroscience | 2011
Lore Thaler; Melvyn A. Goodale
Neuropsychological evidence suggests that different brain areas may be involved in movements that are directed at visual targets (e.g., pointing or reaching), and movements that are based on allocentric visual information (e.g., drawing or copying). Here we used fMRI to investigate the neural correlates of these two types of movements in healthy volunteers. Subjects (n = 14) performed right hand movements in either a target-directed task (moving a cursor to a target dot) or an allocentric task (moving a cursor to reproduce the distance and direction between two distal target dots) with or without visual feedback about their hand movement. Movements were monitored with an MR compatible touch panel. A whole brain analysis revealed that movements in allocentric conditions led to an increase in activity in the fundus of the left intra-parietal sulcus (IPS), in posterior IPS, in bilateral dorsal premotor cortex (PMd), and in the lateral occipital complex (LOC). Visual feedback in both target-directed and allocentric conditions led to an increase in activity in area MT+, superior parietal–occipital cortex (SPOC), and posterior IPS (all bilateral). In addition, we found that visual feedback affected brain activity differently in target-directed as compared to allocentric conditions, particularly in the pre-supplementary motor area, PMd, IPS, and parieto-occipital cortex. Our results, in combination with previous findings, suggest that the LOC is essential for allocentric visual coding and that SPOC is involved in visual feedback control. The differences in brain activity between target-directed and allocentric visual feedback conditions may be related to behavioral differences in visual feedback control. Our results advance the understanding of the visual coordinate frame used by the LOC. In addition, because of the nature of the allocentric task, our results have relevance for the understanding of neural substrates of magnitude estimation and vector coding of movements.
Attention, Perception, & Psychophysics | 2014
Jennifer L. Milne; Melvyn A. Goodale; Lore Thaler
Similar to certain bats and dolphins, some blind humans can use sound echoes to perceive their silent surroundings. By producing an auditory signal (e.g., a tongue click) and listening to the returning echoes, these individuals can obtain information about their environment, such as the size, distance, and density of objects. Past research has also hinted at the possibility that blind individuals may be able to use echolocation to gather information about 2-D surface shape, but definitive evidence has been lacking. Thus, here we investigated people's ability to use echolocation to identify the 2-D shape (contour) of objects. We also investigated the role played by head movements (i.e., exploratory movements of the head while echolocating), because anecdotal evidence suggests that head movements might be beneficial for shape identification. To this end, we compared the performance of six expert echolocators to that of ten blind non-echolocators and ten blindfolded sighted controls in a shape-identification task, with and without head movements. We found that the expert echolocators could use echoes to determine the shapes of the objects with exceptional accuracy when they were allowed to make head movements, but that their performance dropped to chance level when they had to remain still. Neither blind nor blindfolded sighted controls performed above chance, regardless of head movements. Our results show not only that experts can use echolocation to successfully identify 2-D shape, but also that the head movements they make while echolocating are necessary for correct identification of 2-D shape.
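As an illustration of how "above chance" performance can be tested in a task like this (a sketch only; the trial counts, chance level, and score below are invented for the example, not taken from the paper), an exact binomial test does the job:

```python
from scipy.stats import binomtest

# Hypothetical example: 20 correct responses in 24 trials of a task in
# which guessing among 4 shapes gives a chance level of 0.25.
result = binomtest(k=20, n=24, p=0.25, alternative="greater")
print(f"accuracy = {20 / 24:.2f}, p = {result.pvalue:.2g}")
```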
Frontiers in Systems Neuroscience | 2011
Goren Gordon; David M. Kaplan; Benjamin S. Lankow; Daniel Ying-Jeh Little; Jason Sherwin; Benjamin A. Suter; Lore Thaler
This article was motivated by the conference entitled “Perception & Action – An Interdisciplinary Approach to Cognitive Systems Theory,” which took place September 14–16, 2010 at the Santa Fe Institute, NM, USA. The goal of the conference was to bring together an interdisciplinary group of neuroscientists, roboticists, and theorists to discuss the extent and implications of action–perception integration in the brain. The motivation for the conference was the realization that it is a widespread approach in biological, theoretical, and computational neuroscience to investigate sensory and motor function of the brain in isolation from one another, while at the same time, it is generally appreciated that sensory and motor processing cannot be fully separated. Our article summarizes the key findings of the conference, provides a hypothetical model that integrates the major themes and concepts presented at the conference, and concludes with a perspective on future challenges in the field.
Neuroscience | 2009
Lore Thaler; James T. Todd
To perform visually guided hand movements, the visuo-motor system transforms visual information into movement parameters, invoking both central and peripheral processes. Central visuo-motor processes are active in the CNS, whereas peripheral processes are active at the neuromuscular junction. A major share of research attention regarding central visuo-motor processes concerns the question of which parameters the CNS controls to guide the hand from one point to another. Findings in the literature are inconsistent: whereas some researchers suggest that the CNS controls the hand displacement vector, others suggest that it controls final hand position. The current paper introduces a paradigm and analysis method designed to identify the parameters that the CNS controls to guide the hand. We use simulations to validate our analysis in the presence of peripheral visuo-motor noise and to estimate the level of peripheral noise in our data. Using our new tools, we show that hand movements are controlled either in terms of the hand displacement vector or in terms of final hand position, depending on the way visual information relevant for movement production is specified. Interestingly, our new analysis method reveals a difference in central visuo-motor processes even though a traditional analysis of movement endpoint distributions does not. We estimate the level of peripheral noise in our data to be less than or equal to 40%. Based on our results we conclude that the CNS is flexible with regard to the parameters it controls to guide the hand; that spatial distributions of movement endpoints are not necessarily indicative of central visuo-motor processes; and that both peripheral and central noise have to be carefully considered in the interpretation of movement data.
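To make the vector-versus-endpoint distinction concrete, here is a minimal one-dimensional simulation sketch (all parameters are invented for illustration; this is not the paper's paradigm or analysis). Under pure vector coding, unnoticed variability in the start position propagates into the endpoints; under pure endpoint coding it does not:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

intended_start, target = 0.0, 10.0
start_error = rng.normal(scale=1.0, size=n)  # unnoticed start variability
starts = intended_start + start_error
motor_noise = 0.5

# Vector coding: a fixed displacement (target - intended_start) is
# executed from the actual start, so start errors carry into endpoints.
end_vec = starts + (target - intended_start) \
          + rng.normal(scale=motor_noise, size=n)

# Endpoint coding: the final position is planned directly, so start
# errors are compensated and do not propagate.
end_pos = target + rng.normal(scale=motor_noise, size=n)

# Diagnostic: correlation between start error and endpoint position.
print(np.corrcoef(start_error, end_vec)[0, 1])  # high (~0.9)
print(np.corrcoef(start_error, end_pos)[0, 1])  # near 0
```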
Wiley Interdisciplinary Reviews: Cognitive Science | 2016
Lore Thaler; Melvyn A. Goodale
Bats and dolphins are known for their ability to use echolocation. They emit bursts of sounds and listen to the echoes that bounce back to detect the objects in their environment. What is not as well-known is that some blind people have learned to do the same thing, making mouth clicks, for example, and using the returning echoes from those clicks to sense obstacles and objects of interest in their surroundings. The current review explores some of the research that has examined human echolocation and the changes that have been observed in the brains of echolocation experts. We also discuss potential applications and assistive technology based on echolocation. Blind echolocation experts can sense small differences in the location of objects, differentiate between objects of various sizes and shapes, and even between objects made of different materials, just by listening to the reflected echoes from mouth clicks. It is clear that echolocation may enable some blind people to do things that are otherwise thought to be impossible without vision, potentially providing them with a high degree of independence in their daily lives and demonstrating that echolocation can serve as an effective mobility strategy in the blind. Neuroimaging has shown that the processing of echoes activates brain regions in blind echolocators that would normally support vision in the sighted brain, and that the patterns of these activations are modulated by the information carried by the echoes. This work is shedding new light on just how plastic the human brain is. WIREs Cogn Sci 2016, 7:382-393. doi: 10.1002/wcs.1408