Publications


Featured research published by Matteo Valsecchi.


Vision Research | 2007

Attention makes moving objects be perceived to move faster

Massimo Turatto; Massimo Vescovi; Matteo Valsecchi

Although it is well established that attention affects visual performance in many ways, by using a novel paradigm [Carrasco, M., Ling, S., & Read, S. (2004). Attention alters appearance. Nature Neuroscience, 7, 308-313] it has recently been shown that attention can alter the perception of different properties of stationary stimuli (e.g., contrast, spatial frequency, gap size). However, it is not clear whether attention can also change the phenomenological appearance of moving stimuli, as to date psychophysical and neuroimaging studies have specifically shown that attention affects the adaptability of the visual motion system. Here, in five experiments, we demonstrated that attention effectively alters the perceived speed of moving stimuli, so that attended stimuli were judged as moving faster than less attended stimuli. Our results suggest that this change in visual performance was accompanied by a corresponding change in the phenomenological appearance of the speed of the moving stimulus.


Neuroreport | 2007

Microsaccades distinguish between global and local visual processing

Massimo Turatto; Matteo Valsecchi; Luigi Tamè; Elena Betta

Much is known about the functional mechanisms involved in visual search. Yet, the fundamental question of whether the visual system can perform different types of visual analysis at different spatial resolutions remains unsettled. In the visual-attention literature, the distinction between different spatial scales of visual processing corresponds to the distinction between distributed and focused attention. Some authors have argued that singleton detection can be performed in distributed attention, whereas others suggest that even such a simple visual operation involves focused attention. Here we showed that microsaccades were spatially biased during singleton discrimination but not during singleton detection. The results provide support for the hypothesis that some coarse visual analysis can be performed in a distributed attention mode.


Proceedings of the National Academy of Sciences of the United States of America | 2013

Optimal sampling of visual information for lightness judgments

Matteo Toscani; Matteo Valsecchi; Karl R. Gegenfurtner

The variable resolution and limited processing capacity of the human visual system require us to sample the world with eye movements and attentive processes. Here we show that where observers look can strongly modulate their reports of simple surface attributes, such as lightness. When observers matched the color of natural objects, they based their judgments on the brightest parts of the objects; at the same time, they tended to fixate points with above-average luminance. When we forced participants to fixate a specific point on the object using a gaze-contingent display setup, the matched lightness was higher when observers fixated bright regions. This finding indicates a causal link between the luminance of the fixated region and the lightness match for the whole object. Simulations with rendered physical lighting show that higher values in an object’s luminance distribution are particularly informative about reflectance. This sampling strategy is an efficient and simple heuristic for the visual system to achieve accurate and invariant judgments of lightness.


Journal of Vision | 2014

Variations in daylight as a contextual cue for estimating season, time of day, and weather conditions

Jeroen J. M. Granzier; Matteo Valsecchi

Experience and experiments on human color constancy (e.g., Arend & Reeves, 1986; Craven & Foster, 1992) tell us that we are capable of judging the illumination. However, when asked to match the illuminant's color and brightness, human observers seem to be quite poor (Granzier, Brenner, & Smeets, 2009a). Here we investigate whether human observers use (rather than match) daylight for estimating ecologically important dimensions: time of year, time of day, and outdoor temperature. In the first three experiments we had our observers evaluate calibrated color images of an outdoor urban scene acquired throughout a year. Although some observers could estimate the month and the temperature, overall they were quite poor at judging the time of day. In particular, observers were not able to discriminate between morning and afternoon pictures even when they were allowed to compare multiple images captured on the same day (Experiment 3). However, observers could distinguish midday daylight from sunrise and sunset daylight. A classification analysis showed that, given perfect knowledge of its variation, an ideal observer could have performed the task above chance based only on the average chromatic variation in the picture. Instead, our observers reported using shadows to detect the position of the sun in order to estimate the time of day. However, this information is highly unreliable without knowledge of the orientation of the scene. In Experiment 4 we used an LED chamber to present our observers with lights whose chromaticity and illuminance varied along the daylight locus, thus isolating the light cues from the sun-position cue. We conclude that discriminating the slight variations in chromaticity and brightness that potentially distinguish morning and afternoon illumination lies beyond the ability of human observers.


Attention, Perception, & Psychophysics | 2013

The speed and accuracy of material recognition in natural images

Christiane B. Wiebel; Matteo Valsecchi; Karl R. Gegenfurtner

We studied the time course of material categorization in natural images relative to superordinate and basic-level object categorization, using a backward-masking paradigm. We manipulated several low-level features of the images—including luminance, contrast, and color—to assess their potential contributions. The results showed that the speed of material categorization was roughly comparable to the speed of basic-level object categorization, but slower than that of superordinate object categorization. The performance seemed to be crucially mediated by low-level factors, with color leading to a solid increase in performance for material categorization. At longer presentation durations, material categorization was less accurate than both types of object categorization. Taken together, our results show that material categorization can be as fast as basic-level object categorization, but is less accurate.


PLOS ONE | 2013

Visual Working Memory Contents Bias Ambiguous Structure from Motion Perception

Lisa Scocchia; Matteo Valsecchi; Karl R. Gegenfurtner; Jochen Triesch

The way we perceive the visual world depends crucially on the state of the observer. In the present study we show that what we are holding in working memory (WM) can bias the way we perceive ambiguous structure-from-motion stimuli. Holding in memory the percept of an unambiguously rotating sphere influenced the perceived direction of motion of an ambiguously rotating sphere presented shortly thereafter. In particular, we found a systematic difference between congruent dominance periods, in which the perceived direction of the ambiguous stimulus corresponded to the direction of the unambiguous one, and incongruent dominance periods. Congruent dominance periods were more frequent when participants memorized the speed of the unambiguous sphere for delayed discrimination than when they performed an immediate judgment on a change in its speed. The analysis of the dominance time course showed that a sustained tendency to perceive the same direction of motion as the prior stimulus emerged only in the WM condition, whereas in the attention condition perceptual dominance dropped to chance levels at the end of the trial. The results are explained in terms of a direct involvement of early visual areas in the active representation of visual motion in WM.


Journal of Vision | 2013

Perceived numerosity is reduced in peripheral vision

Matteo Valsecchi; Matteo Toscani; Karl R. Gegenfurtner

In four experiments we investigated the perception of numerosity in the peripheral visual field. We found that the numerosity of a peripheral cloud of dots was judged to be lower than that of a central cloud of dots, particularly when the dots were highly clustered. Blurring the stimuli according to peripheral spatial-frequency sensitivity did not abolish the effect and had little impact on numerosity judgments. In a dedicated control experiment we ruled out the possibility that the reduction in peripheral perceived numerosity is secondary to a reduction in perceived stimulus size. We suggest that visual crowding might be at the origin of the observed reduction in peripheral perceived numerosity, implying that numerosity could be partly estimated through the individuation of the elements populating the array.


Cognitive Neuroscience | 2012

Prominent reflexive eye-movement orienting associated with deafness

Davide Bottari; Matteo Valsecchi; Francesco Pavani

Profound deafness affects the orienting of visual attention. Until now, research has focused exclusively on covert attentional orienting, neglecting whether overt oculomotor behavior may also change in deaf people. Here we used the pro- and anti-saccade task to examine the relative contribution of reflexive and voluntary eye-movement control in profoundly deaf and hearing individuals. We observed a behavioral facilitation of reflexive compared to voluntary eye movements, indexed by shorter saccade latencies and lower error rates in pro-saccade than in anti-saccade trials, which was substantially larger in deaf than in hearing participants. This provides the first evidence of plastic changes related to deafness in overt oculomotor behavior and constitutes an ecologically relevant parallel to the modulations attributed to deafness in covert attention orienting. Our findings also have implications for designers of real and virtual environments for deaf people and reveal that experiments on deaf visual abilities must not ignore the prominent reflexive eye-movement orienting in this sensory-deprived population.


Philosophical Transactions of the Royal Society B | 2013

Selection of visual information for lightness judgements by eye movements

Matteo Toscani; Matteo Valsecchi; Karl R. Gegenfurtner

When judging the lightness of objects, the visual system has to take into account many factors, such as shading, scene geometry, occlusions, or transparency. The problem then is to estimate global lightness based on a number of local samples that differ in luminance. Here, we show that eye fixations play a prominent role in this selection process. We explored a special case of transparency in which the visual system separates surface reflectance from interfering conditions to generate a layered image representation. Eye movements were recorded while observers matched the lightness of the layered stimulus. We found that observers did focus their fixations on the target layer, and this sampling strategy affected their lightness perception. The effect of image segmentation on perceived lightness was highly correlated with the fixation strategy and was strongly affected when we manipulated the fixation strategy using a gaze-contingent display. Finally, we disrupted the segmentation process, showing that it causally drives the selection strategy. Selection through eye fixations can thus serve as a simple heuristic to estimate the target reflectance.


Journal of Vision | 2013

Saccadic and smooth-pursuit eye movements during reading of drifting texts

Matteo Valsecchi; Karl R. Gegenfurtner; Alexander C. Schütz

Reading is a complex visuomotor behavior characterized by an alternation of fixations and saccadic eye movements. Despite the widespread use of drifting texts in various settings, very little is known about eye movements under these conditions. Here we investigated oculomotor behavior during the reading of texts that drifted horizontally or vertically at different speeds. Consistent with previous reports, drifting texts were read with an alternation of smooth-pursuit and saccadic eye movements. Detailed analysis revealed several interactions between smooth pursuit and saccades. On the one hand, the gain of smooth pursuit was increased after the execution of a saccade. On the other hand, the peak velocity of saccades was reduced for horizontally drifting text, in which saccades and pursuit were executed in opposite directions. In addition, we show that well-known findings from the reading of static texts, such as the preferred viewing location, the inverted optimal viewing position, and the correlation between saccade amplitude and subsequent pursuit/fixation duration, extend to drifting text. In general, individual eye-movement parameters such as saccade amplitude and fixation/pursuit durations were correlated across self-paced reading of static text and time-constrained reading of static and drifting texts. These results show that findings from basic oculomotor research also apply to the reading of drifting texts, and that basic reading principles apply to static and drifting texts in a similar way. This exemplifies the reading of drifting text as a visuomotor behavior that is influenced by low-level eye-movement control as well as by cognitive and linguistic processing.

Collaboration


Dive into Matteo Valsecchi's collaborations.

Top Co-Authors

Jing Chen
University of Giessen

Lisa Scocchia
University of Milano-Bicocca

Jochen Triesch
Frankfurt Institute for Advanced Studies

Baptiste Caziot
State University of New York College of Optometry

Benjamin T. Backus
State University of New York College of Optometry

Jan J. Koenderink
Katholieke Universiteit Leuven