
Publication


Featured research published by Clara Suied.


Journal of Experimental Psychology: Applied | 2008

Evaluating warning sound urgency with reaction times

Clara Suied; Patrick Susini; Stephen McAdams

It is well established that subjective judgments of the perceived urgency of alarm sounds can be affected by acoustic parameters. In this study, the authors investigated an objective measurement, the reaction time (RT), to test the effectiveness of temporal parameters of sounds in the context of warning sounds. Three experiments were performed using an RT paradigm, with two different concurrent visuomotor tracking tasks simulating driving conditions. Experiments 1 and 2 show that RT decreases as the interonset interval (IOI) decreases, where IOI is defined as the time elapsed from the onset of one sound pulse to the onset of the next. Experiment 3 shows that temporal irregularity between pulses can capture a listener's attention. These findings lead to concrete recommendations: IOI can be used to modulate warning sound urgency, and temporal irregularity can provoke an arousal effect in listeners. The authors also argue that the RT paradigm provides a useful tool for clarifying some of the factors involved in alarm processing.
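
To make the IOI manipulation concrete, here is a minimal sketch of a warning pulse train whose urgency is controlled by the inter-onset interval. The 1 kHz pulse frequency, durations, and sample rate are illustrative choices, not parameters taken from the study.

```python
import numpy as np

def warning_pulse_train(ioi_s, n_pulses=6, pulse_dur_s=0.1,
                        freq_hz=1000.0, sr=44100):
    """Generate a train of identical tone pulses separated by a fixed
    inter-onset interval (IOI), i.e. the time from one pulse onset to
    the next; shorter IOIs are expected to sound more urgent."""
    t = np.arange(int(pulse_dur_s * sr)) / sr
    pulse = np.sin(2 * np.pi * freq_hz * t)
    # 5 ms raised-cosine ramps to avoid onset/offset clicks.
    n_ramp = int(0.005 * sr)
    ramp = 0.5 * (1 - np.cos(np.pi * np.arange(n_ramp) / n_ramp))
    pulse[:n_ramp] *= ramp
    pulse[-n_ramp:] *= ramp[::-1]
    # Place each pulse at k * IOI from the start of the train.
    ioi_samples = int(round(ioi_s * sr))
    signal = np.zeros((n_pulses - 1) * ioi_samples + len(pulse))
    for k in range(n_pulses):
        start = k * ioi_samples
        signal[start:start + len(pulse)] += pulse
    return signal

# A 150 ms IOI train should be judged more urgent than a 600 ms one.
urgent = warning_pulse_train(ioi_s=0.150)
relaxed = warning_pulse_train(ioi_s=0.600)
```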


ACM Transactions on Applied Perception | 2010

Bimodal perception of audio-visual material properties for virtual environments

Nicolas Bonneel; Clara Suied; Isabelle Viaud-Delmon; George Drettakis

High-quality rendering of both audio and visual material properties is very important in interactive virtual environments, since convincingly rendered materials increase realism and the sense of immersion. We studied how the levels of detail of auditory and visual stimuli interact in the perception of audio-visual material rendering quality. Our study is based on the perception of material discrimination when varying the levels of detail of modal synthesis for sound and of bidirectional reflectance distribution functions for graphics. We performed an experiment for two different models (a Dragon and a Bunny model) and two material types (plastic and gold). The results show a significant interaction between auditory and visual level of detail in the perception of material similarity, when comparing approximate levels of detail to a high-quality audio-visual reference rendering. We show how this result can contribute to significant savings in computation time in an interactive audio-visual rendering system. To our knowledge, this is the first study to show an interaction of audio and graphics representations in a material perception task.
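
Modal synthesis, the sound-rendering method whose level of detail is varied here, represents an impact sound as a sum of exponentially damped sinusoids, so the audio level of detail can be lowered simply by keeping fewer modes. The sketch below illustrates that general idea only; the mode frequencies, decays, and amplitudes are made-up values, and the paper's actual implementation is not reproduced here.

```python
import numpy as np

def modal_impact_sound(freqs_hz, decays, amps, n_modes, dur_s=1.0, sr=44100):
    """Render an impact sound as a sum of exponentially damped sinusoids
    (modal synthesis). Keeping fewer modes gives a cheaper, lower
    level-of-detail approximation of the same material."""
    t = np.arange(int(dur_s * sr)) / sr
    sound = np.zeros_like(t)
    for f, d, a in list(zip(freqs_hz, decays, amps))[:n_modes]:
        sound += a * np.exp(-d * t) * np.sin(2 * np.pi * f * t)
    return sound

# Hypothetical modal data for a "metallic" material (illustrative only).
freqs = [523.0, 1310.0, 2650.0, 4120.0, 6380.0]
decays = [6.0, 9.0, 14.0, 20.0, 28.0]
amps = [1.0, 0.6, 0.4, 0.25, 0.15]

full_detail = modal_impact_sound(freqs, decays, amps, n_modes=5)
low_detail = modal_impact_sound(freqs, decays, amps, n_modes=2)
```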


Experimental Brain Research | 2009

Integration of auditory and visual information in the recognition of realistic objects

Clara Suied; Nicolas Bonneel; Isabelle Viaud-Delmon

Recognizing a natural object requires one to pool information from various sensory modalities and to ignore information from competing objects. That the same semantic knowledge can be accessed through different modalities makes it possible to explore the retrieval of supramodal object concepts. Here, object-recognition processes were investigated by manipulating the relationships between the sensory modalities, specifically the semantic content and the spatial alignment of auditory and visual information. Experiments were run in a realistic virtual environment. Participants were asked to react as fast as possible to a target object presented in the visual and/or the auditory modality and to inhibit a distractor object (go/no-go task). Spatial alignment had no effect on object-recognition time. The only spatial effect observed was a stimulus–response compatibility between the auditory stimulus and the hand position. Reaction times were significantly shorter for semantically congruent bimodal stimuli than would be predicted by independent processing of information about the auditory and visual targets. Interestingly, this bimodal facilitation effect was twice as large as that found in previous studies that also used information-rich stimuli. An interference effect was observed (i.e., longer reaction times to semantically incongruent stimuli than to the corresponding unimodal stimulus) only when the distractor was auditory. When the distractor was visual, the semantic incongruence did not interfere with object recognition. Our results show that immersive displays with large visual stimuli may provide large multimodal integration effects, and they reveal a possible asymmetry in the attentional filtering of irrelevant auditory and visual information.
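
The comparison with what "would be predicted by independent processing" of the two modalities is commonly formalized with the race-model bound: if the auditory and visual channels race independently, the bimodal RT distribution cannot exceed the sum of the two unimodal distributions. The sketch below shows one generic way to test that bound on reaction-time data; it illustrates the standard analysis, not necessarily the exact computation used in the paper.

```python
import numpy as np

def race_model_violation(rt_auditory, rt_visual, rt_bimodal, quantiles=None):
    """Test bimodal RTs against the race-model bound
    P(RT_AV <= t) <= P(RT_A <= t) + P(RT_V <= t).
    Bimodal CDF values above this bound indicate facilitation beyond
    independent processing of the two modalities."""
    if quantiles is None:
        quantiles = np.linspace(0.05, 0.95, 19)
    rt_a, rt_v, rt_av = (np.asarray(x, float) for x in
                         (rt_auditory, rt_visual, rt_bimodal))
    # Evaluate all empirical CDFs on a common grid of times.
    ts = np.quantile(np.concatenate([rt_a, rt_v, rt_av]), quantiles)
    cdf = lambda x: np.mean(x[:, None] <= ts[None, :], axis=0)
    bound = np.minimum(cdf(rt_a) + cdf(rt_v), 1.0)
    violation = cdf(rt_av) - bound  # positive values violate the bound
    return ts, violation
```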


Journal of the Acoustical Society of America | 2012

Fast recognition of musical sounds based on timbre.

Trevor R. Agus; Clara Suied; Simon J. Thorpe; Daniel Pressnitzer

Human listeners seem to have an impressive ability to recognize a wide variety of natural sounds. However, there is surprisingly little quantitative evidence to characterize this fundamental ability. Here the speed and accuracy of musical-sound recognition were measured psychophysically with a rich but acoustically balanced stimulus set. The set comprised recordings of notes from musical instruments and sung vowels. In a first experiment, reaction times were collected for three target categories: voice, percussion, and strings. In a go/no-go task, listeners reacted as quickly as possible to members of a target category while withholding responses to distractors (a diverse set of musical instruments). Results showed near-perfect accuracy and fast reaction times, particularly for voices. In a second experiment, voices were recognized among strings and vice versa. Again, reaction times to voices were faster. In a third experiment, auditory chimeras were created to retain only spectral or temporal features of the voice. Chimeras were recognized accurately, but not as quickly as natural voices. Altogether, the data suggest rapid and accurate neural mechanisms for musical-sound recognition based on selectivity to complex spectro-temporal signatures of sound sources.
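
Speed and accuracy in such go/no-go recognition tasks are typically summarized by hit and false-alarm rates, a d-prime sensitivity index, and the median reaction time for correct "go" responses. The sketch below computes these summaries from trial-level data; the variable names and the rate-clipping convention are illustrative assumptions, not details taken from the paper.

```python
import numpy as np
from scipy.stats import norm

def go_nogo_summary(rt_ms, is_target, responded):
    """Summarize one go/no-go block: hit rate, false-alarm rate, d-prime,
    and median reaction time for correct 'go' responses."""
    rt_ms = np.asarray(rt_ms, float)
    is_target = np.asarray(is_target, bool)
    responded = np.asarray(responded, bool)
    hits = responded & is_target
    false_alarms = responded & ~is_target
    # Clip rates away from 0 and 1 so the z-transform stays finite.
    hit_rate = np.clip(hits.sum() / is_target.sum(), 0.01, 0.99)
    fa_rate = np.clip(false_alarms.sum() / (~is_target).sum(), 0.01, 0.99)
    return {
        "hit_rate": hit_rate,
        "fa_rate": fa_rate,
        "d_prime": norm.ppf(hit_rate) - norm.ppf(fa_rate),
        "median_rt_ms": float(np.median(rt_ms[hits])),
    }
```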


Frontiers in Human Neuroscience | 2011

Auditory scene analysis: The sweet music of ambiguity

Daniel Pressnitzer; Clara Suied; Shihab A. Shamma

In this review paper aimed at the non-specialist, we explore the use that neuroscientists and musicians have made of perceptual illusions based on ambiguity. The pivotal issue is auditory scene analysis (ASA), or what enables us to make sense of complex acoustic mixtures in order to follow, for instance, a single melody in the midst of an orchestra. In general, ASA uncovers the most likely physical causes that account for the waveform collected at the ears. However, the acoustical problem is ill-posed and must be solved from noisy sensory input. Recently, the neural mechanisms implicated in the transformation of ambiguous sensory information into coherent auditory scenes have been investigated using so-called bistability illusions (where an unchanging ambiguous stimulus evokes a succession of distinct percepts in the mind of the listener). After reviewing some of those studies, we turn to music, which arguably provides some of the most complex acoustic scenes that a human listener will ever encounter. Interestingly, musicians will not always aim at making each physical source intelligible, but rather at expressing one or more melodic lines with a small or large number of instruments. By means of a few musical illustrations and by using a computational model inspired by neurophysiological principles, we suggest that this relies on a detailed (if perhaps implicit) knowledge of the rules of ASA and of its inherent ambiguity. We then put forward the opinion that some degree of perceptual ambiguity may participate in our appreciation of music.


International Symposium on Circuits and Systems | 2010

Characteristics of human voice processing

Trevor R. Agus; Simon J. Thorpe; Clara Suied; Daniel Pressnitzer

As human listeners, we should arguably be experts in processing vocal sounds. Here we present new behavioral data that confirm and quantify a voice-processing advantage in a range of natural sound recognition tasks. The experiments focus on time: the reaction time for recognition, and the shortest sound segment required for recognition. Our behavioral results provide constraints on the features used by listeners to process voice sounds. Such features are likely to be jointly spectro-temporal, over multiple time scales.


Journal of the Acoustical Society of America | 2014

Auditory gist: Recognition of very short sounds from timbre cues

Clara Suied; Trevor R. Agus; Simon J. Thorpe; Nima Mesgarani; Daniel Pressnitzer

Sounds such as the voice or musical instruments can be recognized on the basis of timbre alone. Here, sound recognition was investigated with severely reduced timbre cues. Short snippets of naturally recorded sounds were extracted from a large corpus. Listeners were asked to report a target category (e.g., sung voices) among other sounds (e.g., musical instruments). All sound categories covered the same pitch range, so the task had to be solved on the basis of timbre cues alone. The minimum duration for which performance was above chance was found to be short, on the order of a few milliseconds, with the best performance for voice targets. Performance was independent of pitch and was maintained when stimuli contained less than a full waveform cycle. Recognition was not generally better when the sound snippets were time-aligned with the sound onset than when they were extracted with a random starting time. Finally, performance did not depend on feedback or training, suggesting that the cues used by listeners in the artificial gating task were similar to those relevant for longer, more familiar sounds. The results show that timbre cues for sound recognition are available at a variety of time scales, including very short ones.
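
The gating manipulation described above, cutting very short snippets either at the sound onset or at a random position, can be sketched as follows; the ramp proportion and the random-start convention are assumptions made for illustration, not the exact stimulus-generation procedure of the study.

```python
import numpy as np

def extract_snippet(sound, dur_s, sr=44100, align_to_onset=False, rng=None):
    """Cut a short snippet of duration dur_s from a recording, either
    aligned to the sound onset or starting at a random position, with
    linear ramps over the first and last tenth to avoid clicks."""
    rng = np.random.default_rng() if rng is None else rng
    n = int(dur_s * sr)
    start = 0 if align_to_onset else int(rng.integers(0, max(1, len(sound) - n)))
    snippet = np.asarray(sound[start:start + n], float).copy()
    ramp = max(1, n // 10)
    snippet[:ramp] *= np.linspace(0.0, 1.0, ramp)
    snippet[-ramp:] *= np.linspace(1.0, 0.0, ramp)
    return snippet

# Example: an 8 ms snippet, onset-aligned ('sound' is any 1-D array).
# gated = extract_snippet(sound, dur_s=0.008, align_to_onset=True)
```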


PLOS ONE | 2009

Auditory-visual object recognition time suggests specific processing for animal sounds.

Clara Suied; Isabelle Viaud-Delmon

Background: Recognizing an object requires binding together several cues, which may be distributed across different sensory modalities, and ignoring competing information originating from other objects. In addition, knowledge of the semantic category of an object is fundamental to determine how we should react to it. Here we investigate the role of semantic categories in the processing of auditory-visual objects.

Methodology/Findings: We used an auditory-visual object-recognition task (go/no-go paradigm). We compared recognition times for two categories: a biologically relevant one (animals) and a non-biologically relevant one (means of transport). Participants were asked to react as fast as possible to target objects, presented in the visual and/or the auditory modality, and to withhold their response for distractor objects. A first main finding was that, when participants were presented with unimodal or bimodal congruent stimuli (an image and a sound from the same object), similar reaction times were observed for all object categories. Thus, there was no advantage in the speed of recognition for biologically relevant compared to non-biologically relevant objects. A second finding was that, in the presence of a biologically relevant auditory distractor, the processing of a target object was slowed down, whether or not it was itself biologically relevant. It seems impossible to effectively ignore an animal sound, even when it is irrelevant to the task.

Conclusions/Significance: These results suggest a specific and mandatory processing of animal sounds, possibly due to phylogenetic memory and consistent with the idea that hearing is particularly efficient as an alerting sense. They also highlight the importance of taking into account the auditory modality when investigating the way object concepts of biologically relevant categories are stored and retrieved.


Cyberpsychology, Behavior, and Social Networking | 2013

Auditory-Visual Virtual Reality as a Diagnostic and Therapeutic Tool for Cynophobia

Clara Suied; George Drettakis; Olivier Warusfel; Isabelle Viaud-Delmon

Traditionally, virtual reality (VR) exposure-based treatment concentrates primarily on the presentation of a high-fidelity visual experience. However, adequately combining the visual and the auditory experience provides a powerful tool to enhance sensory processing and modulate attention. We present the design and usability testing of an auditory-visual interactive environment for investigating VR exposure-based treatment for cynophobia. The specificity of our application involves 3D sound, allowing the presentation and spatial manipulations of a fearful stimulus in the auditory modality and in the visual modality. We conducted an evaluation test with 10 participants who fear dogs to assess the capacity of our auditory-visual virtual environment (VE) to generate fear reactions. The specific perceptual characteristics of the dog model that were implemented in the VE were highly arousing, suggesting that VR is a promising tool to treat cynophobia.


Journal of the Acoustical Society of America | 2010

Why are natural sounds detected faster than pips?

Clara Suied; Patrick Susini; Stephen McAdams; Roy D. Patterson

Simple reaction times (RTs) were used to measure differences in processing time between natural animal sounds and artificial sounds. When the artificial stimuli were sequences of short tone pulses, the animal sounds were detected faster than the artificial sounds. The animal sounds were then compared with acoustically modified versions (white noise modulated by the temporal envelope of the animal sounds). No differences in RTs were observed between the animal sounds and their modified counterparts. These results show that the fast detection observed for natural sounds, in the present task, could be explained by their acoustic properties.
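
The acoustically modified versions described above, white noise modulated by the temporal envelope of an animal sound, can be produced by imposing the sound's amplitude envelope on a noise carrier. A minimal sketch of that manipulation is given below, using the Hilbert envelope with light smoothing; the smoothing cutoff and normalization are illustrative choices, not necessarily those used in the study.

```python
import numpy as np
from scipy.signal import hilbert, butter, filtfilt

def envelope_modulated_noise(sound, sr=44100, cutoff_hz=64.0, rng=None):
    """Replace a sound's spectral content with white noise while keeping
    its temporal envelope: extract the Hilbert envelope, smooth it with a
    low-pass filter, and use it to modulate a noise carrier."""
    rng = np.random.default_rng() if rng is None else rng
    envelope = np.abs(hilbert(sound))
    b, a = butter(2, cutoff_hz / (sr / 2), btype="low")
    envelope = np.maximum(filtfilt(b, a, envelope), 0.0)
    noise = rng.standard_normal(len(sound))
    modulated = envelope * noise
    # Match the RMS level of the original sound.
    return modulated * (np.std(sound) / (np.std(modulated) + 1e-12))
```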

Collaboration


Dive into Clara Suied's collaborations.

Top Co-Authors

Trevor R. Agus

École Normale Supérieure


Nicolas Bonneel

Centre national de la recherche scientifique
