J. Kevin O’Regan
Centre national de la recherche scientifique
Publications
Featured research published by J. Kevin O’Regan.
Attention Perception & Psychophysics | 1989
Jonathan Grainger; J. Kevin O’Regan; Arthur M. Jacobs; Juan Segui
Current models of word recognition generally assume that word units orthographically similar to a stimulus word are involved in the visual recognition of this word. We refer to this set of orthographically similar words as an orthographic neighborhood. Two experiments are presented that investigate the ways in which the composition of this neighborhood can affect word recognition. The data indicate that the presence in the neighborhood of at least one unit of higher frequency than the stimulus word itself results in interference in stimulus word processing. Lexical decision latencies (Experiment 1) and gaze durations (Experiment 2) to words with one neighbor of higher frequency were significantly longer than to words without a more frequent neighbor.
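The notion of an orthographic neighborhood described above lends itself to a small illustration. A minimal Python sketch follows; the mini-lexicon and frequency counts are invented for illustration, not the experimental materials from the study.

```python
# Illustrative sketch: an orthographic neighborhood is the set of
# same-length words differing from a stimulus word by exactly one letter.
# The word list and frequency counts below are hypothetical.

def neighbors(word, lexicon):
    """Return words in the lexicon that differ from `word` by one letter."""
    return [w for w in lexicon
            if len(w) == len(word) and w != word
            and sum(a != b for a, b in zip(w, word)) == 1]

def has_higher_freq_neighbor(word, freq):
    """True if any orthographic neighbor is more frequent than the word."""
    return any(freq[n] > freq[word] for n in neighbors(word, freq))

# Hypothetical mini-lexicon with made-up frequency counts
freq = {"chope": 3, "chose": 120, "astre": 5, "autre": 300, "table": 80}

print(has_higher_freq_neighbor("chope", freq))  # "chose" is more frequent -> True
print(has_higher_freq_neighbor("autre", freq))  # "astre" is rarer -> False
```

On this account, a word like "chope" would suffer interference from its higher-frequency neighbor "chose", while "autre" would not, since its only neighbor is rarer.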
Attention Perception & Psychophysics | 1995
Françoise Vitu; J. Kevin O’Regan; Albrecht W. Inhoff; Richard Topolski
The purpose of the present study was to compare the oculomotor behavior of readers scanning meaningful and meaningless materials. Four conditions were used—a normal-text-reading control condition, and three experimental conditions in which the amount of linguistic processing was reduced, either by presenting the subjects with repeated letter strings or by asking the subjects to search for a target letter in texts or letter strings. The results show that global eye-movement characteristics (such as saccade size and fixation duration), as well as local characteristics (such as word-skipping rate, landing site, refixation probability, and refixation position), are very similar in the four conditions. The finding that the eyes are capable of generating an autonomous oculomotor scanning strategy in the absence of any linguistic information to process argues in favor of the idea that such predetermined oculomotor strategies might be an important determinant of eye movements in reading.
Attention Perception & Psychophysics | 1980
J. Kevin O’Regan
This experiment investigates how information about six letter spaces to the right of the current fixation point is used to guide the eye in reading. It is found that low-quality cues such as word length can be extracted from this region sufficiently quickly to influence the size of the immediately following saccade. Linguistic processing of information from this region is also done, but only begins to influence the eye’s behavior at the next fixation point, where fixation duration is affected. Subsequent eye-movement characteristics are more strongly influenced, but this influence is diffuse, that is, spread over a variety of eye-movement parameters, and takes about 1 sec to develop.
Attention Perception & Psychophysics | 1992
Jonathan Grainger; J. Kevin O’Regan; Arthur M. Jacobs; Juan Segui
Two experiments are described that measured lexical decision latencies and errors to five-letter French words with a single higher frequency orthographic neighbor and control words with no higher frequency neighbors. The higher frequency neighbor differed from the stimulus word by either the second letter (e.g., astre-autre) or the fourth letter (chope-chose). Neighborhood frequency effects were found to interact with this factor, and significant interference was observed only to chope-type words. The effects of neighborhood frequency were also found to interact with the position of initial fixation in the stimulus word (either the second letter or the fourth letter). Interference was greatly reduced when the initial fixation was on the critical disambiguating letter (i.e., the letter p in chope). Moreover, word recognition was improved when subjects initially fixated the second letter relative to when they initially fixated the fourth letter of a five-letter word, but this second-letter advantage practically disappeared when the stimulus differed from a more frequent word by its fourth letter. The results are interpreted in terms of the interaction between visual and lexical factors in visual word recognition.
Memory & Cognition | 1998
Tatjana A. Nazir; Arthur M. Jacobs; J. Kevin O’Regan
Word recognition performance varies systematically as a function of where the eyes fixate in the word. Performance is maximal with the eye slightly left of the center of the word and decreases drastically to both sides of this optimal viewing position. While manipulations of lexical factors have only marginal effects on this phenomenon, previous studies have pointed to a relation between the viewing position effect (VPE) and letter legibility: When letter legibility drops, the VPE becomes more exaggerated. To further investigate this phenomenon, we improved letter legibility by magnifying letter size in a way that was proportional to the distance from fixation (e.g., TABLE). Contrary to what would be expected if the VPE were due to limits of acuity, improving the legibility of letters has only a restricted influence on performance. In particular, for long words, a strong VPE remains even when letter legibility is equalized across eccentricities. The failure to neutralize the VPE is interpreted in terms of perceptual learning: normally, because of acuity limitations, the only information available in parafoveal vision concerns low-resolution features of letters, so even when magnification provides better information, readers are unable to make use of it.
Bulletin of the Psychonomic Society | 1991
Tatjana A. Nazir; J. Kevin O’Regan; Arthur M. Jacobs
In a recent article, McConkie et al. (1989) proposed that, due to the rapid drop-off of visual acuity, the amount of visual information available from a word is maximal when the eye fixates the middle of the word, and decreases on both sides of this optimal viewing position with each letter of deviation. However, data on perceptual span have demonstrated that during a fixation, more letters are utilized to the right than to the left of the fixation point, which would predict that the optimal viewing position should be left of center. In a letter discrimination task, the ratio of the left/right asymmetry was determined and the probability of correct word recognition as a function of fixation location in the word was estimated. The predictions were highly compatible with empirical results on word recognition.
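The kind of model described above, in which letter identification probability drops with each letter of eccentricity, faster to the left of fixation than to the right, and correct word recognition requires identifying all letters, can be sketched as follows. The drop-off rates are hypothetical placeholders, not the values estimated in the paper.

```python
# Illustrative sketch (hypothetical parameters, not the paper's fitted values):
# letter identification probability drops with distance from fixation, faster
# to the left than to the right; word recognition multiplies across letters.

def p_letter(distance, side, drop_left=0.10, drop_right=0.05):
    """Identification probability for a letter `distance` positions from fixation."""
    drop = drop_left if side == "left" else drop_right
    return max(0.0, 1.0 - drop * distance)

def p_word(length, fixation):
    """Probability of recognizing a word when fixating position `fixation` (0-based)."""
    p = 1.0
    for i in range(length):
        if i < fixation:
            p *= p_letter(fixation - i, "left")
        elif i > fixation:
            p *= p_letter(i - fixation, "right")
    return p

# With a steeper left-of-fixation drop-off, the best viewing position in a
# 5-letter word falls left of the word's center.
curve = [p_word(5, k) for k in range(5)]
best = curve.index(max(curve))
print(best)  # 1, i.e. the second letter, left of the center letter (index 2)
```

The asymmetric drop-off alone is enough to shift the predicted optimum leftward, which is the qualitative prediction the abstract reports as compatible with the empirical word recognition data.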
Attention Perception & Psychophysics | 1983
J. Kevin O’Regan; Ariane Lévy-Schoen; Arthur M. Jacobs
The effect of visibility on eye-movement parameters was investigated. As a measure of visibility, the notion of “visual span” was introduced. A first experiment, in which a simple letter-recognition task was used, directly measured the changes in visual span produced by changing viewing distance and character spacing. The results of this experiment were used as a reference for a second experiment in which the same visibility changes were made, but in which subjects read short texts while their eye movements were monitored. Saccade sizes were affected not primarily by visual span, but by other factors, possibly related to word boundary detection or linguistic processing. Fixation durations appeared to be strongly affected by the proximity of the letters to the subject’s acuity threshold.
Journal of Vision | 2005
Juan R. Vidal; Hélène L. Gauchou; Catherine Tallon-Baudry; J. Kevin O’Regan
Over the past 20 years, storage of visual items in visual short-term memory has been extensively studied by many research groups. In addition to questions concerning the format of object storage, there is a more global question concerning the organization of information in visual short-term memory. In a series of experiments we investigated how relations across visual items determined the accessibility of individual item information. This relational information seems to be very strong within the store devoted to each feature dimension. We also investigated the role of selective attention in the storage of relational information. The experiments suggest a broadening of the parallel store model of visual short-term memory proposed by M. E. Wheeler and A. M. Treisman (2002) to include the notion of what we call structural gist.
Neural Networks | 2018
Alban Laflaquière; J. Kevin O’Regan; Bruno Gas; Alexander V. Terekhov
In line with the sensorimotor contingency theory, we investigate the problem of the perception of space from a fundamental sensorimotor perspective. Despite its pervasive nature in our perception of the world, the origin of the concept of space remains largely mysterious. For example, in the context of artificial perception, this issue is usually circumvented by having engineers pre-define the spatial structure of the problem the agent has to face. We here show that the structure of space can be autonomously discovered by a naive agent in the form of sensorimotor regularities that correspond to so-called compensable sensory experiences: these are experiences that can be generated either by the agent or by its environment. By detecting such compensable experiences, the agent can infer the topological and metric structure of the external space in which its body is moving. We propose a theoretical description of the nature of these regularities and illustrate the approach on a simulated robotic arm equipped with an eye-like sensor, which interacts with an object. Finally, we show how these regularities can be used to build an internal representation of the sensor's external spatial configuration.
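The notion of a compensable sensory experience can be illustrated in a deliberately simplified toy setting, a 1-D world rather than the paper's simulated robotic arm; the sensor model below is an assumption made for illustration only.

```python
# Minimal sketch (toy assumption, not the paper's setup): a 1-D agent whose
# sensor reads the position of an object relative to the sensor. A sensory
# change caused by the environment (object displacement) is "compensable"
# if some motor command restores the original sensation.

def sensation(sensor_pos, object_pos):
    """Toy sensor: reads the object's position relative to the sensor."""
    return object_pos - sensor_pos

sensor, obj = 0.0, 3.0
s0 = sensation(sensor, obj)

# The environment acts: the object shifts by 2 units, changing the sensation.
obj += 2.0
assert sensation(sensor, obj) != s0

# The agent compensates: moving the sensor by the same 2 units restores the
# original sensation, so this experience is compensable and reveals a
# translation of external space.
sensor += 2.0
assert sensation(sensor, obj) == s0
print("compensated:", sensation(sensor, obj) == s0)
```

By accumulating such pairs of environmental changes and compensating motor commands, an agent could in principle recover the structure of the translations of external space without any prior spatial knowledge, which is the regularity the paper exploits.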
European Workshop on Visual Information Processing | 2016
Alban Flachot; Edoardo Provenzi; J. Kevin O’Regan
Philipona & O’Regan (2006) [1] proposed a linear model of surface reflectance as it is sensed by the human eyes. In their model, the tristimulus response to reflected light is accurately approximated by a linear transformation of the tristimulus response to illumination, allowing the prediction of several perceptual characteristics of human vision. Later, Vazquez-Corral et al. (2012) [2] built a bridge between Philipona & O’Regan’s model and von Kries-like approaches to color constancy in computer vision by showing that the linear operators could be diagonalized in a common basis. However, both of these studies required specifying a particular dataset of illuminants. We show in this paper that it is possible to compute adequate linear operators and a common basis for diagonalization without specifying any particular set of illuminants.
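The linear model at the heart of this line of work can be sketched numerically. The sketch below uses random stand-in spectra and sensor sensitivities rather than measured data, and fits the 3x3 operator by least squares over a set of illuminants, which is one straightforward way to estimate such an operator, not necessarily the procedure used in the cited papers.

```python
# Illustrative sketch of the linear reflectance model: for a surface with
# reflectance r(lambda), the sensor response to reflected light is
# approximated as a 3x3 linear map A applied to the response to the
# illuminant itself: v_reflected ~= A @ v_illuminant. Spectra below are
# random stand-ins, not real measurements.
import numpy as np

rng = np.random.default_rng(0)
n_wavelengths, n_illuminants = 31, 50

S = rng.random((3, n_wavelengths))              # sensor sensitivities (3 channels)
E = rng.random((n_illuminants, n_wavelengths))  # illuminant spectra
r = rng.random(n_wavelengths)                   # one surface reflectance

V_illum = E @ S.T        # responses to the bare illuminants, shape (50, 3)
V_refl = (E * r) @ S.T   # responses to light reflected by the surface

# Fit A so that V_refl ~= V_illum @ A.T (least squares over the illuminants)
X, *_ = np.linalg.lstsq(V_illum, V_refl, rcond=None)
A = X.T

residual = np.linalg.norm(V_illum @ A.T - V_refl) / np.linalg.norm(V_refl)
print(f"relative residual: {residual:.3f}")  # small when the linear model fits well
```

The operator A characterizes the surface independently of any single illuminant; the diagonalization result mentioned above concerns finding one basis in which the operators A for many surfaces become (approximately) diagonal.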