Publication


Featured research published by Jean Lorenceau.


Vision Research | 1992

The influence of terminators on motion integration across space

Jean Lorenceau; Maggie Shiffrar

Individual motion measurements are inherently ambiguous since the component of motion parallel to a homogeneous translating edge cannot be measured. Numerous models have proposed that the visual system solves this ambiguity through the integration of motion measurements across disparate contours. To examine this proposal, subjects observed a translating diamond through four stationary apertures. Since the diamond's motion could not be determined from any single contour, motion integration across contours was required to determine the diamond's direction of motion. We demonstrate that observers have difficulty accurately integrating motion information across space. Performance improved when the diamond stimulus was presented at 7 degrees eccentricity, through jagged apertures, or at low contrast. Taken together, these results imply that integration across space is more likely when the motion of contour terminators is less salient or reliable.


Vision Research | 1993

Different motion sensitive units are involved in recovering the direction of moving lines

Jean Lorenceau; Maggie Shiffrar; Nora Wells; Eric Castet

We studied direction discrimination for lines moving obliquely relative to their orientation. Manipulating the contrast, length and duration of motion, we found systematic errors in direction discrimination at low contrasts, long lengths and/or short durations. These errors can be accounted for by a competition between ambiguous velocity signals originating from contour motion processing units and signals from line terminator processing units. The dynamics of this competition can be described by a simple model involving two different classes of processing units with different contrast thresholds, different integration time constants and different levels of response saturation.


Nature Neuroscience | 2001

Form constraints in motion binding

Jean Lorenceau; David Alais

Visual analyses of form and motion proceed along parallel streams. Unified perception of moving forms requires interactions between these streams, although whether the interactions occur early or late in cortical processing remains unresolved. Using rotating outlined shapes sampled through apertures, we showed that binding local motions into global object motion depends strongly on spatial configuration. Identical local motion components are perceived coherently when they define closed configurations, but usually not when they define open configurations. Our experiments show this influence arises in early cortical levels and operates as a form-based veto of motion integration in the absence of closure.


Vision Research | 1993

Perceived speed of moving lines depends on orientation, length, speed and luminance

Eric Castet; Jean Lorenceau; Maggie Shiffrar; Claude Bonnet

In this study, the perceived speed of a tilted line translating horizontally (for a duration of 167 msec) is evaluated with respect to a vertical line undergoing the same translation. Perceived speed of the oblique line is shown to be underestimated when compared to the vertical line. This bias increases: (1) when the line is further tilted, (2) with greater line lengths, (3) with lower contrasts, and finally (4) with a speed of 2.1 deg/sec as compared to a higher speed of 4.2 deg/sec. These results may be accounted for by considering that two velocity signals are used by the visual system to estimate the speed of the line: the translation of this line (this signal does not depend on the line's orientation) and the motion component normal to the line (this signal depends on orientation). We suggest that these two signals are encoded by different types of units and that the translation signal is specifically extracted at the line endings. We further suggest that these signals are integrated by a weighted average process according to their perceptual salience. Other interpretations are considered in the light of current models dealing with the two-dimensional integration of different velocity signals.
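The weighted-average combination proposed in this abstract can be sketched numerically. In the sketch below, the terminator (translation) signal carries the true speed and the contour signal carries only the component normal to the line; the weight value is hypothetical, chosen for illustration and not taken from the paper:

```python
import math

def perceived_speed(true_speed, tilt_deg, w_terminator):
    """Salience-weighted average of two velocity estimates for a line
    translating horizontally (illustrative; the weight is hypothetical).

    - terminator signal: the true translation speed, orientation-invariant
    - contour signal: the component normal to the line, speed * cos(tilt)
    """
    v_terminator = true_speed
    v_normal = true_speed * math.cos(math.radians(tilt_deg))
    return w_terminator * v_terminator + (1 - w_terminator) * v_normal

# The more the line is tilted away from vertical, the slower its normal
# component, so the weighted average underestimates speed more, in line
# with the bias the abstract reports.
print(perceived_speed(4.2, 0, 0.5))   # vertical line: no underestimation
print(perceived_speed(4.2, 45, 0.5))  # tilted line: underestimated
```

Lowering the terminator weight (as low contrast might) deepens the underestimation, matching the contrast dependence described above.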


Vision Research | 2002

Orientation dependent modulation of apparent speed: a model based on the dynamics of feed-forward and horizontal connectivity in V1 cortex

Peggy Seriès; Sébastien Georges; Jean Lorenceau; Yves Frégnac

Psychophysical and physiological studies suggest that long-range horizontal connections in primary visual cortex participate in spatial integration and contour processing. Until recently, little attention has been paid to their intrinsic temporal properties. Recent physiological studies indicate, however, that the propagation of activity through long-range horizontal connections is slow, with time scales comparable to the perceptual scales involved in motion processing. Using a simple model of V1 connectivity, we explore some of the implications of these slow dynamics. The model predicts that V1 responses to a stimulus in the receptive field can be modulated by a previous stimulation of the surround, occurring a few milliseconds to a few tens of milliseconds earlier. We analyze this phenomenon and its possible consequences on speed perception, as a function of the spatio-temporal configuration of the visual inputs (relative orientation, spatial separation, temporal interval between the elements, sequence speed). We show that the dynamical interactions between feed-forward and horizontal signals in V1 can explain why the perceived speed of fast apparent motion sequences strongly depends on the orientation of their elements relative to the motion axis and can account for the range of speeds for which this perceptual effect occurs (Georges, Seriès, Frégnac and Lorenceau, this issue).


Vision Research | 1995

Motion Integration Across Differing Image Features

Maggie Shiffrar; Xiaojun Li; Jean Lorenceau

To interpret the projected image of a moving object, the visual system must integrate motion signals across different image regions. Traditionally, researchers have examined this process by focusing on the integration of equally ambiguous motion signals. However, when the motions of complex, multi-featured images are measured through spatially limited receptive fields, the resulting motion measurements have varying degrees of ambiguity. In a series of experiments, we examine how human observers interpret images containing motion signals of differing degrees of ambiguity. Subjects judged the perceived coherence of images consisting of an ambiguously translating grating and an unambiguously translating random dot pattern. Perceived coherence of the dotted grating depended upon the degree of concurrence between the velocities of the grating terminators and dots. Depth relationships also played a critical role in the motion integration process. When terminators were suppressed with occlusion cues, coherence increased. When dots and gratings were presented at different depth planes, coherence decreased. We use these results to outline the conditions under which the visual system uses unambiguous motion signals to interpret object motion.


Journal of Physiology-Paris | 2004

Crossmodal integration for perception and action

Christophe Lalanne; Jean Lorenceau

The integration of information from different sensory modalities has many advantages for human observers, including increased salience, resolution of perceptual ambiguities, and unified perception of objects and surroundings. Behavioral, electrophysiological and neuroimaging data collected in various tasks, including localization and detection of spatial events, crossmodal perception of object properties and scene analysis, are reviewed here. The results highlight the multiple faces of crossmodal interactions and provide converging evidence that the brain takes advantage of spatial and temporal coincidence between events in the crossmodal binding of spatial features gathered through different modalities. Furthermore, the elaboration of a multimodal percept appears to be based on an adaptive combination of the contribution of each modality, according to the intrinsic reliability of each sensory cue, which itself depends on the task at hand and the kind of perceptual cues involved in sensory processing. Computational models based on Bayesian sensory estimation provide valuable explanations of the way the perceptual system could perform such crossmodal integration. Recent anatomical evidence suggests that crossmodal interactions affect early stages of sensory processing, and could be mediated through a dynamic recurrent network involving backprojections from multimodal areas as well as lateral connections that can modulate the activity of primary sensory cortices, though future behavioral and neurophysiological studies should allow a better understanding of the underlying mechanisms.
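The reliability-weighted combination invoked in this abstract is commonly formalized as inverse-variance (maximum-likelihood) weighting of independent Gaussian cues. A minimal sketch, with illustrative numbers not taken from the paper:

```python
def combine_cues(mu_a, var_a, mu_b, var_b):
    """Inverse-variance weighted combination of two independent Gaussian
    sensory estimates, the standard Bayesian model of cue integration.
    Returns the fused estimate and its (reduced) variance."""
    w_a = (1 / var_a) / (1 / var_a + 1 / var_b)
    mu = w_a * mu_a + (1 - w_a) * mu_b
    var = 1 / (1 / var_a + 1 / var_b)  # fused estimate is more reliable
    return mu, var

# Hypothetical example: vision localizes an event at 10 deg (variance 1),
# audition at 14 deg (variance 4). The fused estimate lies closer to the
# more reliable visual cue.
mu, var = combine_cues(10.0, 1.0, 14.0, 4.0)
print(mu, var)  # 10.8 0.8
```

The fused variance is always smaller than either input variance, capturing the "increased salience" benefit of multimodal integration described above.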


Journal of Experimental Psychology: Human Perception and Performance | 1995

Automatic access to object identity: Attention to global information, not to particular physical dimensions, is important

Muriel Boucart; Glyn W. Humphreys; Jean Lorenceau

The authors examined whether, by attending to physical properties of objects, participants can prevent the activation of semantic information. Participants received a reference object followed by a display containing both a matching target and a distractor. In Experiments 1 and 2, participants attended to motion and to surface texture, respectively. Some evidence for the processing of semantic information occurred. This result contrasted with a previous study in which no evidence for semantic information processing was apparent in a color matching task (M. Boucart & G.W. Humphreys, 1994). In Experiment 3, pictures were used with outline contours composed of randomly distributed red and green dots, one color being overrepresented. Participants matched pictures according to the dominant color. Evidence for semantic processing emerged. The authors suggest that these results cannot be explained in terms of attention operating differently on separate physiological channels. Instead it is proposed that what is crucial in activating stored object representations is whether the global configuration of the picture is processed.


Perception | 1990

Apparent brightness enhancement in the Kanizsa square with and without illusory contour formation

Birgitta Dresp; Jean Lorenceau; Claude Bonnet

The perceived strength of darkness enhancement in the centre of surfaces surrounded or not surrounded by illusory contours was investigated as a function of proximity of the constituent elements of the display and their angular size. Magnitude estimation was used to measure the perception of the darkness phenomenon in white-on-grey stimuli. Darkness enhancement was perceived in both types of the stimuli used, but more strongly in the presence of illusory contours. In both cases, perceived darkness enhancement increased with increasing proximity of the constituent parts of the display and with their angular size. These results suggest that the occurrence of darkness (or brightness) enhancement phenomena in the centre of the displays is not directly related to illusory contour formation.


Vision Research | 1987

Recovery from contrast adaptation: Effects of spatial and temporal frequency

Jean Lorenceau

The time-course of the recovery from adaptation to drifting gratings was estimated as a function of the spatio-temporal characteristics of the stimulus. A new method was used, in which the response latencies for the detection of contrasts presented during the recovery were measured. An exponential function provides a good description of the recovery. The initial (i.e. at the beginning of the recovery period) and asymptotic values of this function depend on the temporal frequency but not on the spatial frequency of the adapting stimulus. The time constants increase with high spatial frequency and follow a U-shaped function of the temporal frequency of the adapting stimulus.
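The exponential recovery described here has the form s(t) = s_inf + (s0 - s_inf) * exp(-t / tau). A minimal sketch of that function, with all parameter values hypothetical rather than taken from the paper:

```python
import math

def recovery(t, s0, s_inf, tau):
    """Exponential recovery from contrast adaptation: sensitivity relaxes
    from its initial post-adaptation value s0 toward the asymptote s_inf
    with time constant tau (all values here are illustrative)."""
    return s_inf + (s0 - s_inf) * math.exp(-t / tau)

# After one time constant, about 63% of the gap between the initial value
# and the asymptote has been closed:
print(round(recovery(5.0, 0.2, 1.0, 5.0), 3))  # 0.706
```

In the abstract's terms, the adapting stimulus's temporal frequency would shift s0 and s_inf, while its spatial frequency would shift tau.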

Collaboration


Dive into Jean Lorenceau's collaboration.

Top Co-Authors

Yves Frégnac, Centre national de la recherche scientifique
Andrei Gorea, Paris Descartes University
Anne-Lise Paradis, Centre national de la recherche scientifique
Sébastien Georges, Centre national de la recherche scientifique
Christophe Lalanne, Centre national de la recherche scientifique
Claude Bonnet, Centre national de la recherche scientifique
Frederic Benmussa, Centre national de la recherche scientifique
Frédéric Chavane, Centre national de la recherche scientifique