
Publication


Featured research published by Marc J.-M. Macé.


Vision Research | 2005

The time course of visual processing: Backward masking and natural scene categorisation

Nadège Bacon-Macé; Marc J.-M. Macé; Michèle Fabre-Thorpe; Simon J. Thorpe

Human observers are very good at deciding whether briefly flashed novel images contain an animal, and previous work has shown that the underlying visual processing can be performed in under 150 ms. Here we used a masking paradigm to determine how information accumulates over time during such high-level categorisation tasks. As the delay between test image and mask is increased, both behavioural accuracy and differential ERP amplitude rapidly increase to reach asymptotic levels at around 40-60 ms. Such results imply that processing at each stage in the visual system is remarkably rapid, with information accumulating almost continuously following the onset of activation.
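The differential ERP mentioned above is the difference between the average waveforms on animal (target) and non-animal (distractor) trials. As a rough illustration of that kind of analysis (not the authors' actual pipeline), the sketch below computes the target-minus-distractor waveform and its peak amplitude for each mask delay; the data are synthetic placeholders, and the SOA values, array shapes, and variable names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic placeholder data: one EEG epoch per trial, plus the trial's
# stimulus-onset asynchrony (SOA) and target/distractor label.
n_trials, n_samples = 200, 300
epochs = rng.normal(size=(n_trials, n_samples))        # microvolts
soas = rng.choice([6, 12, 25, 44, 63, 100], n_trials)  # ms, hypothetical SOA levels
is_target = rng.random(n_trials) < 0.5                 # True = animal present

def differential_erp(epochs, is_target):
    """Mean target ERP minus mean distractor ERP (the differential waveform)."""
    return epochs[is_target].mean(axis=0) - epochs[~is_target].mean(axis=0)

# Peak absolute amplitude of the differential ERP for each SOA condition.
for soa in sorted(set(soas)):
    sel = soas == soa
    diff = differential_erp(epochs[sel], is_target[sel])
    print(f"SOA {soa:3d} ms: peak differential amplitude {np.abs(diff).max():.2f} µV")
```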


Journal of Vision | 2003

Is it an animal? Is it a human face? Fast processing in upright and inverted natural scenes

Guillaume A. Rousselet; Marc J.-M. Macé; Michèle Fabre-Thorpe

Object categorization can be extremely fast. But among all objects, human faces might hold a special status that could depend on a specialized module. Visual processing could thus be faster for faces than for any other kind of object. Moreover, because face processing might rely on facial configuration, it could be more disrupted by stimulus inversion. Here we report two experiments that compared the rapid categorization of human faces and animals or animal faces in the context of upright and inverted natural scenes. In Experiment 1, the natural scenes contained human faces and animals in a full range of scales from close-up to far views. In Experiment 2, targets were restricted to close-ups of human faces and animal faces. Both experiments revealed the remarkable object processing efficiency of our visual system and further showed (1) virtually no advantage for faces over animals; (2) very little performance impairment with inversion; and (3) greater sensitivity of faces to inversion. These results are interpreted within the framework of a unique system for object processing in the ventral pathway. In this system, evidence would accumulate very quickly and efficiently to categorize visual objects, without involving a face module or a mental rotation mechanism. It is further suggested that rapid object categorization in natural scenes might not rely on high-level features but rather on features of intermediate complexity.


PLOS ONE | 2009

The Time-Course of Visual Categorizations: You Spot the Animal Faster than the Bird

Marc J.-M. Macé; Olivier Joubert; Jean-Luc Nespoulous; Michèle Fabre-Thorpe

Background: Since the pioneering study by Rosch and colleagues in the 1970s, it has been commonly agreed that basic-level perceptual categories (dog, chair…) are accessed faster than superordinate ones (animal, furniture…). Nevertheless, the speed at which objects presented in natural images can be processed in a rapid go/no-go visual superordinate categorization task has challenged this “basic level advantage”.

Principal Findings: Using the same task, we compared human processing speed when categorizing natural scenes as containing either an animal (superordinate level) or a specific animal (bird or dog, basic level). Human subjects require an additional 40–65 ms to decide whether an animal is a bird or a dog, and most errors are induced by non-target animals. Indeed, processing time is tightly linked with the type of non-target objects. Without any exemplar of the same superordinate category to ignore, the basic-level category is accessed as fast as the superordinate category, whereas the presence of animal non-targets induces both an increase in reaction time and a decrease in accuracy.

Conclusions and Significance: These results support the parallel distributed processing (PDP) theory and might reconcile recently published controversial studies. The visual system can quickly access a coarse/abstract visual representation that allows fast decisions for superordinate categorization of objects, but additional time-consuming visual analysis would be necessary for a decision at the basic level based on more detailed representations.
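As a rough illustration of the kind of accuracy and reaction-time comparison reported above (not the study's actual analysis script or data), the sketch below summarizes a hypothetical trial table for superordinate versus basic-level blocks; all values and field names are made up for the example.

```python
import statistics

# Hypothetical trial records: (level, go response?, correct?, reaction time in ms).
trials = [
    ("superordinate", True,  True,  365),
    ("superordinate", False, True,  None),   # correct rejection of a non-target
    ("superordinate", True,  False, 402),    # false alarm
    ("basic",         True,  True,  428),
    ("basic",         True,  True,  415),
    ("basic",         False, False, None),   # missed target
]

def summarize(level):
    rows = [t for t in trials if t[0] == level]
    accuracy = 100 * sum(t[2] for t in rows) / len(rows)
    correct_go_rts = [t[3] for t in rows if t[1] and t[2] and t[3] is not None]
    return accuracy, statistics.median(correct_go_rts)

for level in ("superordinate", "basic"):
    acc, rt = summarize(level)
    print(f"{level:13s}: {acc:.0f}% correct, median correct-go RT {rt:.0f} ms")
```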


Journal of Vision | 2004

Animal and human faces in natural scenes: how specific to human faces is the N170 ERP component?

Guillaume A. Rousselet; Marc J.-M. Macé; Michèle Fabre-Thorpe

The N170 is an event-related potential component reported to be very sensitive to human face stimuli. This study investigated the specificity of the N170, as well as its sensitivity to inversion and task status, when subjects had to categorize either human or animal faces in the context of upright and inverted natural scenes. A conspicuous N170 was recorded for both face categories. Pictures of animal faces were associated with an N170 of similar amplitude to that elicited by pictures of human faces, but with a delayed peak latency. Picture inversion enhanced N170 amplitude for human faces and delayed its peak for both human and animal faces. Finally, the human and animal face N170 was identical whether the faces were processed as targets or non-targets, depending on the task. Thus, human faces in natural scenes elicit a clear but non-specific N170 that is not modulated by task status. What appears to be specific to human faces is the strength of the inversion effect.


European Journal of Neuroscience | 2005

Rapid categorization of achromatic natural scenes: how robust at very low contrasts?

Marc J.-M. Macé; Simon J. Thorpe; Michèle Fabre-Thorpe

The human visual system is remarkably good at categorizing objects even in challenging visual conditions. Here we specifically assessed the robustness of the visual system in the face of large contrast variations in a high-level categorization task using natural images. Human subjects performed a go/no-go animal/non-animal categorization task with briefly flashed grey-level images. Performance was analysed for a large range of contrast conditions randomly presented to the subjects, varying from normal down to 3% of initial contrast. Accuracy was very robust: subjects performed well above chance level (≈ 70% correct) with only 10–12% of initial contrast. Accuracy decreased with contrast reduction but reached chance level only in the most extreme condition (3% of initial contrast). Conversely, the maximal increase in mean reaction time was ≈ 60 ms (at 8% of initial contrast); it then remained stable with further contrast reductions. Associated ERPs recorded on correct target and distractor trials showed a clear differential effect whose amplitude and peak latency were correlated with task accuracy and mean reaction times, respectively. These data show the strong robustness of the visual system in object categorization at very low contrast. They suggest that magnocellular information could play a role in ventral stream visual functions such as object recognition. Performance may rely on early object representations that lack the details provided subsequently by the parvocellular system but contain enough information to reach a decision in the categorization task.
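Reducing an image to a given percentage of its initial contrast can be done by scaling grey-level deviations around the mean luminance. The exact normalization used in the study is not specified here, so the sketch below is only one plausible way to produce such stimuli; the function and variable names are my own.

```python
import numpy as np

def reduce_contrast(image, fraction):
    """Scale grey-level deviations around the mean luminance.

    image    : 2-D array of grey levels (0-255)
    fraction : 1.0 keeps the original contrast, 0.03 keeps 3% of it
    """
    image = image.astype(float)
    mean = image.mean()
    return np.clip(mean + fraction * (image - mean), 0, 255)

# Example: a random "image" reduced to 10% of its initial contrast;
# its grey-level standard deviation shrinks by roughly a factor of ten.
rng = np.random.default_rng(1)
img = rng.integers(0, 256, size=(480, 640))
low = reduce_contrast(img, 0.10)
print(round(img.std(), 1), round(low.std(), 1))
```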


Journal of Cognitive Neuroscience | 2007

Limits of Event-related Potential Differences in Tracking Object Processing Speed

Guillaume A. Rousselet; Marc J.-M. Macé; Simon J. Thorpe; Michèle Fabre-Thorpe

We report results from two experiments in which subjects had to categorize briefly presented upright or inverted natural scenes. In the first experiment, subjects decided whether images contained animals or human faces presented at different scales. Behavioral results showed virtually identical processing speed between the two categories and very limited effects of inversion. One type of event-related potential (ERP) comparison, potentially capturing low-level physical differences, showed large effects with onsets at about 150 msec in the animal task. However, in the human face task, those differences started as early as 100 msec. In the second experiment, subjects responded to close-up views of animal faces or human faces in an attempt to limit physical differences between image sets. This manipulation almost completely eliminated small differences before 100 msec in both tasks. But again, despite very similar behavioral performances and short reaction times in both tasks, human faces were associated with earlier ERP differences compared with animal faces. Finally, in both experiments, as an alternative way to determine processing speed, we compared the ERP with the same images when seen as targets and nontargets in different tasks. Surprisingly, all task-dependent ERP differences had relatively long latencies. We conclude that task-dependent ERP differences fail to capture object processing speed, at least for some categories like faces. We discuss models of object processing that might explain our results, as well as alternative approaches.


Conference on Computers and Accessibility | 2009

Assistive device for the blind based on object recognition: an application to identify currency bills

Rémi Parlouar; Florian Dramas; Marc J.-M. Macé; Christophe Jouffrais

We have developed a real-time, portable object recognition system based on bio-inspired image analysis software to increase the autonomy of blind people by localizing and identifying surrounding objects. A working prototype of this system has been tested on currency bill recognition, a problem encountered by most blind people. Seven blind persons took part in an experiment that demonstrated that the usability of the system was good enough for such a device to be used daily in real-life situations.
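The paper's bio-inspired image analysis software is not detailed here; purely as a generic illustration of appearance-based bill identification, the sketch below matches ORB keypoints (via OpenCV) between a camera frame and a set of reference bill images and returns the best-matching denomination. The reference file names, distance cutoff, and match threshold are all hypothetical.

```python
import cv2

# Hypothetical reference images of known denominations (file names are placeholders).
REFERENCES = {"5 EUR": "ref_5.png", "10 EUR": "ref_10.png", "20 EUR": "ref_20.png"}

orb = cv2.ORB_create(nfeatures=1000)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

# Pre-compute ORB descriptors for each reference bill.
ref_descriptors = {}
for label, path in REFERENCES.items():
    ref_img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    ref_descriptors[label] = orb.detectAndCompute(ref_img, None)[1]

def identify_bill(frame_gray, min_good_matches=40):
    """Return the best-matching denomination for a grayscale frame, or None."""
    _, frame_desc = orb.detectAndCompute(frame_gray, None)
    if frame_desc is None:
        return None
    best_label, best_count = None, 0
    for label, ref_desc in ref_descriptors.items():
        matches = matcher.match(frame_desc, ref_desc)
        good = [m for m in matches if m.distance < 40]   # arbitrary distance cutoff
        if len(good) > best_count:
            best_label, best_count = label, len(good)
    return best_label if best_count >= min_good_matches else None
```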


Neuroreport | 2004

Spatiotemporal analyses of the N170 for human faces, animal faces and objects in natural scenes

Guillaume A. Rousselet; Marc J.-M. Macé; Michèle Fabre-Thorpe

We assessed the specificity of the N170 ERP component to human faces in the context of natural scenes. Subjects categorized photographs containing human faces, animal faces and various objects. Spatiotemporal topography analyses were performed on the individual ERP data. ERPs elicited by animal faces were similar to human face ERPs but with delayed face-related activity. In the N170 time window, ERPs to human and animal faces had a different topography compared with object ERPs. Such data suggest that N170 generators might process various stimuli with a coarse facial organization, and show the care that must be taken when comparing scalp signals to faces and other objects, as they are probably generated, at least partially, by different cortical sources.


Neuroreport | 2005

Rapid categorization of natural scenes in monkeys: target predictability and processing speed

Marc J.-M. Macé; Ghislaine Richard; Arnaud Delorme; Michèle Fabre-Thorpe

Three monkeys performed a categorization task and a recognition task with briefly flashed natural images, using in alternation either a large variety of familiar target images (animal or food) or a single, totally predictable target. Processing time was 20 ms shorter in the recognition task, in which false alarms showed that monkeys relied on low-level cues (color, form, orientation, etc.). The 20-ms additional delay needed by monkeys to perform the categorization task is compared with the 40-ms delay previously found for humans performing similar tasks. With such a short additional processing time, it is argued that neither monkeys nor humans have time to develop a fully integrated object representation in the categorization task and must instead rely on coarse intermediate representations.


Human Factors in Computing Systems | 2016

Tangible Reels: Construction and Exploration of Tangible Maps by Visually Impaired Users

Julie Ducasse; Marc J.-M. Macé; Marcos Serrano; Christophe Jouffrais

Maps are essential in everyday life but inherently inaccessible to visually impaired users. They must be transcribed into non-editable tactile graphics or rendered on very expensive shape-changing displays. To tackle these issues, we developed a tangible tabletop interface that enables visually impaired users to build tangible maps on their own, using a new type of physical icon called Tangible Reels. Tangible Reels are composed of a sucker pad that ensures stability and a retractable reel that renders digital lines tangible. To construct a map, audio instructions guide the user to precisely place Tangible Reels onto the table and create links between them. During subsequent exploration, the device provides the names of the points and lines that the user touches. A pre-study confirmed that Tangible Reels are stable and easy to manipulate, and that visually impaired users can understand maps built with them. A follow-up experiment validated that the resulting system, including its non-visual interactions, enables visually impaired participants to quickly build and explore maps of various complexities.
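As an illustration of the kind of audio guidance described above (not the paper's implementation), the sketch below turns the offset between a tracked Tangible Reel and its target position into a short spoken hint; the coordinate frame, tolerance, and message wording are assumptions.

```python
import math

def guidance(marker_xy, target_xy, tolerance_cm=1.0):
    """Convert the offset between a tracked marker and its target into a text hint.

    Coordinates are (x, y) in centimetres on the table, x to the user's right,
    y away from the user. The returned string would be passed to a TTS engine.
    """
    dx = target_xy[0] - marker_xy[0]
    dy = target_xy[1] - marker_xy[1]
    if math.hypot(dx, dy) <= tolerance_cm:
        return "Placed. Move to the next point."
    horizontal = "right" if dx > 0 else "left"
    vertical = "forward" if dy > 0 else "backward"
    return f"Move {abs(dx):.0f} cm to the {horizontal} and {abs(dy):.0f} cm {vertical}."

# Example: the marker sits 4 cm to the right of and 2 cm short of its target,
# so the hint is to move it left and forward.
print(guidance((10.0, 20.0), (6.0, 22.0)))
```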

Collaboration


Dive into Marc J.-M. Macé's collaborations.

Top Co-Authors

Simon J. Thorpe

Centre national de la recherche scientifique

Arnaud Delorme

University of California