
Publication


Featured research published by Daniel Pressnitzer.


Current Biology | 2006

Temporal Dynamics of Auditory and Visual Bistability Reveal Common Principles of Perceptual Organization

Daniel Pressnitzer; Jean-Michel Hupé

When dealing with natural scenes, sensory systems have to process an often messy and ambiguous flow of information. A stable perceptual organization nevertheless has to be achieved in order to guide behavior. The neural mechanisms involved can be highlighted by intrinsically ambiguous situations. In such cases, bistable perception occurs: distinct interpretations of the unchanging stimulus alternate spontaneously in the mind of the observer. Bistable stimuli have been used extensively for more than two centuries to study visual perception. Here we demonstrate that bistable perception also occurs in the auditory modality. We compared the temporal dynamics of percept alternations observed during auditory streaming with those observed for visual plaids and the susceptibilities of both modalities to volitional control. Strong similarities indicate that auditory and visual alternations share common principles of perceptual bistability. The absence of correlation across modalities for subject-specific biases, however, suggests that these common principles are implemented at least partly independently across sensory modalities. We propose that visual and auditory perceptual organization could rely on distributed but functionally similar neural competition mechanisms aimed at resolving sensory ambiguities.


Journal of the Acoustical Society of America | 2001

The lower limit of melodic pitch

Daniel Pressnitzer; Roy D. Patterson; Katrin Krumbholz

An objective melody task was used to determine the lower limit of melodic pitch (LLMP) for harmonic complex tones. The LLMP was defined operationally as the repetition rate below which listeners could no longer recognize that one of the notes in a four-note, chromatic melody had changed by a semitone. In the first experiment, the stimuli were broadband tones with all their components in cosine phase, and the LLMP was found to be around 30 Hz. In the second experiment, the tones were filtered into bands about 1 kHz in width to determine the influence of frequency region on the LLMP. The results showed that whenever there was energy present below 800 Hz, the LLMP was still around 30 Hz. When the energy was limited to higher-frequency regions, however, the LLMP increased progressively, up to 270 Hz when the energy was restricted to the region above 3.2 kHz. In the third experiment, the phase relationship between spectral components was altered to determine whether the shape of the waveform affects the LLMP. When the envelope peak factor was reduced using the Schroeder phase relationship, the LLMP was not affected. When a secondary peak was introduced into the envelope of the stimuli by alternating the phase of successive components between two fixed values, there was a substantial reduction in the LLMP, for stimuli containing low-frequency energy. A computational auditory model that extracts pitch information with autocorrelation can reproduce all of the observed effects, provided the contribution of longer time intervals is progressively reduced by a linear weighting function that limits the mechanism to time intervals of less than about 33 ms.
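The autocorrelation model summarized above can be sketched in a few lines. This is an illustrative reconstruction based only on the abstract — the sample rate, test tone, and exact form of the linear weighting are assumptions, not the authors' published implementation:

```python
import numpy as np

def acf_pitch(signal, fs, max_lag_ms=33.0):
    """Estimate repetition rate via autocorrelation, with a linear weighting
    that tapers the contribution of longer time intervals to zero at max_lag_ms."""
    max_lag = int(fs * max_lag_ms / 1000)
    acf = np.correlate(signal, signal, mode="full")[len(signal) - 1:]
    acf = acf[:max_lag + 1] / acf[0]                  # keep lags up to ~33 ms
    weights = 1.0 - np.arange(max_lag + 1) / max_lag  # linear taper to zero
    min_lag = int(fs / 1000)                          # skip lags shorter than 1 ms
    lag = min_lag + int(np.argmax((acf * weights)[min_lag:]))
    return fs / lag                                   # estimated rate in Hz

# A 100 Hz harmonic complex tone (10 harmonics, 0.2 s)
fs = 16000
t = np.arange(int(0.2 * fs)) / fs
tone = sum(np.sin(2 * np.pi * 100 * k * t) for k in range(1, 11))
print(acf_pitch(tone, fs))  # 100.0
```

Because the weighting excludes intervals beyond about 33 ms, such a model cannot represent repetition rates much below 30 Hz, matching the measured LLMP.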


Journal of the Acoustical Society of America | 2000

The lower limit of pitch as determined by rate discrimination

Katrin Krumbholz; Roy D. Patterson; Daniel Pressnitzer

This paper is concerned with the lower limit of pitch for complex, harmonic sounds, like the notes produced by low-pitched musical instruments. The lower limit of pitch is investigated by measuring rate discrimination thresholds for harmonic tones filtered into 1.2-kHz-wide bands with a lower cutoff frequency, F(c), ranging from 0.2 to 6.4 kHz. When F(c) is below 1 kHz and the harmonics are in cosine phase, rate discrimination threshold exhibits a rapid, tenfold decrease as the repetition rate is increased from 16 to 64 Hz, and over this range, the perceptual quality of the stimuli changes from flutter to pitch. When F(c) is increased above 1 kHz, the slope of the transition from high to low thresholds becomes shallower and occurs at progressively higher rates. A quantitative comparison of the cosine-phase thresholds with subjective estimates of the existence region of pitch from the literature shows that the transition in rate discrimination occurs at approximately the same rate as the lower limit of pitch. The rate discrimination experiment was then repeated with alternating-phase harmonic tones whose envelopes repeat at twice the repetition rate of the waveform. In this case, when F(c) is below 1 kHz, the transition in rate discrimination is shifted downward by almost an octave relative to the transition in the cosine-phase thresholds. The results support the hypothesis that in the low-frequency region, the pitch limit is determined by a temporal mechanism, which analyzes time intervals between peaks in the neural activity pattern. It seems that temporal processing of pitch is limited to time intervals less than 33 ms, corresponding to a pitch limit of about 30 Hz.


Neuron | 2010

Rapid Formation of Robust Auditory Memories: Insights from Noise

Trevor R. Agus; Simon J. Thorpe; Daniel Pressnitzer

Before a natural sound can be recognized, an auditory signature of its source must be learned through experience. Here we used random waveforms to probe the formation of new memories for arbitrary complex sounds. A behavioral measure was designed, based on the detection of repetitions embedded in noises up to 4 s long. Unbeknownst to listeners, some noise samples reoccurred randomly throughout an experimental block. Results showed that repeated exposure induced learning for otherwise totally unpredictable and meaningless sounds. The learning was unsupervised and resilient to interference from other task-relevant noises. When memories were formed, they emerged rapidly, performance became abruptly near-perfect, and multiple noises were remembered for several weeks. The acoustic transformations to which recall was tolerant suggest that the learned features were local in time. We propose that rapid sensory plasticity could explain how the auditory brain creates useful memories from the ever-changing, but sometimes repeating, acoustical world.
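The repetition-detection paradigm can be illustrated by constructing the stimuli: a "repeated noise" trial contains the same random segment twice, while a control trial contains two independent segments. This is a minimal sketch with assumed parameters (sample rate, segment duration) and a naive correlation detector, not the authors' experimental procedure:

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 16000                   # assumed sample rate
seg_len = int(0.5 * fs)      # assumed 0.5 s noise segment

def make_trial(repeated, rng):
    """Return a noise stimulus; if `repeated`, the same segment occurs twice."""
    a = rng.standard_normal(seg_len)
    b = a if repeated else rng.standard_normal(seg_len)
    return np.concatenate([a, b])

def detect_repetition(stim):
    """Naive ideal detector: correlate the two halves of the stimulus."""
    half = len(stim) // 2
    r = np.corrcoef(stim[:half], stim[half:])[0, 1]
    return r > 0.5

print(detect_repetition(make_trial(True, rng)))   # True: halves are identical
print(detect_repetition(make_trial(False, rng)))  # False: independent noise
```

Human listeners, of course, cannot correlate raw waveforms; the paper's point is that they nonetheless learn to recognize specific reoccurring noise samples after a few exposures.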


Journal of Vision | 2008

Bistability for audiovisual stimuli: Perceptual decision is modality specific

Jean-Michel Hupé; Lu-Ming Joffo; Daniel Pressnitzer

Ambiguous stimuli can produce spontaneous perceptual alternations in the mind of the observer, even though the stimulus itself remains the same. Common features in the temporal dynamics of bistability have been observed for various types of stimuli, both visual and auditory. This raises the question of whether bistable perception results from stereotyped, local competition between stimulus-specific representations or whether it is triggered by some central, supramodal mechanism. We tested the distributed versus centralized hypothesis by asking observers to simultaneously monitor their bistable perception of ambiguous auditory and visual stimuli. Strong interactions between auditory and visual perceptual switches would indicate a central decision mechanism. We used streaming stimuli in the auditory modality and either plaids or apparent motion stimuli in the visual modality. The use of two different sensory modalities allowed the distinction of contextual interactions due to the similarity between stimuli from interactions linked to perceptual decision itself. The long-term dynamics of bistable perception were identical in unimodal and bimodal presentations for all types of stimuli. Surprisingly, even strong short-term cross-modal interactions, when present, did not alter these dynamics. We conclude that bistability can co-occur independently in different sensory modalities. This observation supports models of distributed competition for perceptual decision and awareness.


Experimental Brain Research | 2003

The psychophysics and physiology of comodulation masking release

Jesko L. Verhey; Daniel Pressnitzer; Ian M. Winter

The ability to detect auditory signals from background noise may be enhanced by the addition of energy in frequency regions well removed from the frequency of the signal. However, it is important that this energy is amplitude-modulated in a coherent way across frequencies, i.e. comodulated. This enhancement of signal detectability is known as comodulation masking release (CMR), and in this review we show that CMR is largest if: (1) the total masker bandwidth is large, (2) the modulation frequency is low, (3) the modulation depth is high, (4) the envelope is regular, and (5) the masker spectrum level is high. Possible physiological correlates of CMR have been found at different levels of the auditory pathway. Current hypotheses for the underlying physiological mechanisms, including wide-band inhibition or the disruption of the masker modulation envelope response, are discussed.
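A comodulated masker of the kind discussed above can be constructed by imposing one slow envelope on several off-frequency noise bands. The sketch below is illustrative only — the band centre frequencies, bandwidth, and the 4 Hz modulator are arbitrary choices, not stimuli from the review:

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 8000
n = fs  # 1 s of signal

def band_noise(fc, bw):
    """Gaussian noise restricted to [fc - bw/2, fc + bw/2] via FFT masking."""
    spec = np.fft.rfft(rng.standard_normal(n))
    f = np.fft.rfftfreq(n, 1 / fs)
    spec[(f < fc - bw / 2) | (f > fc + bw / 2)] = 0
    return np.fft.irfft(spec, n)

def envelope(x, win=200):
    """Crude envelope: rectify and smooth with a 25 ms moving average."""
    return np.convolve(np.abs(x), np.ones(win) / win, mode="same")

# One slow (4 Hz) modulator shared by all masker bands -> comodulation
mod = 1 + 0.9 * np.sin(2 * np.pi * 4 * np.arange(n) / fs)
bands = [mod * band_noise(fc, 100) for fc in (500, 1500, 2500)]

# Envelopes of comodulated bands are highly correlated across frequency
r = np.corrcoef(envelope(bands[0]), envelope(bands[1]))[0, 1]
print(f"envelope correlation across bands: {r:.2f}")
```

It is this cross-frequency envelope coherence, rather than the energy in any single band, that the auditory system appears to exploit to unmask the signal.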


Philosophical Transactions of the Royal Society B | 2012

Multistability in perception: binding sensory modalities, an overview

Jean-Luc Schwartz; Nicolas Grimault; Jean-Michel Hupé; Brian C. J. Moore; Daniel Pressnitzer

This special issue presents research concerning multistable perception in different sensory modalities. Multistability occurs when a single physical stimulus produces alternations between different subjective percepts. Multistability was first described for vision, where it occurs, for example, when different stimuli are presented to the two eyes or for certain ambiguous figures. It has since been described for other sensory modalities, including audition, touch and olfaction. The key features of multistability are: (i) stimuli have more than one plausible perceptual organization; (ii) these organizations are not compatible with each other. We argue here that most if not all cases of multistability are based on competition in selecting and binding stimulus information. Binding refers to the process whereby the different attributes of objects in the environment, as represented in the sensory array, are bound together within our perceptual systems, to provide a coherent interpretation of the world around us. We argue that multistability can be used as a method for studying binding processes within and across sensory modalities. We emphasize this theme while presenting an outline of the papers in this issue. We end with some thoughts about open directions and avenues for further research.


International Symposium on Neural Networks | 2005

Exploration of rank order coding with spiking neural networks for speech recognition

Stéphane Loiselle; Jean Rouat; Daniel Pressnitzer; Simon J. Thorpe

Speech recognition is very difficult in the context of noisy and corrupted speech. Most conventional techniques need huge databases to estimate speech (or noise) density probabilities to perform recognition. We discuss the potential of perceptive speech analysis and processing in combination with biologically plausible neural network processors. We illustrate the potential of such non-linear processing of speech by means of a preliminary test with recognition of French spoken digits from a small speech database.
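Rank order coding, the scheme explored in this paper, represents a stimulus by the order in which neurons fire (stronger inputs fire earlier) rather than by firing rates. The toy decoder below is a hypothetical illustration of the idea — the exponential rank weighting and the activation vectors are assumptions for demonstration, not the paper's network:

```python
import numpy as np

def rank_order_code(activations):
    """Return neuron indices in firing order (strongest input fires first)."""
    return np.argsort(-np.asarray(activations))

def rank_similarity(order_a, order_b, decay=0.5):
    """Score how well order_b matches order_a: each neuron contributes
    decay**rank_in_a * decay**rank_in_b, so early agreements dominate."""
    rank_b = np.empty_like(order_b)
    rank_b[order_b] = np.arange(len(order_b))  # invert the permutation
    return float(sum(decay ** i * decay ** rank_b[n]
                     for i, n in enumerate(order_a)))

template = rank_order_code([0.9, 0.1, 0.5, 0.3])  # stored spike order
same = rank_order_code([0.8, 0.2, 0.6, 0.4])      # same ordering of inputs
diff = rank_order_code([0.1, 0.9, 0.3, 0.5])      # different ordering
print(rank_similarity(template, same) > rank_similarity(template, diff))  # True
```

Because only the first few spikes carry most of the weight, such a code supports very fast recognition from sparse spiking activity.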


PLOS Computational Biology | 2012

Music in Our Ears: The Biological Bases of Musical Timbre Perception

Kailash Patil; Daniel Pressnitzer; Shihab A. Shamma; Mounya Elhilali

Timbre is the attribute of sound that allows humans and other animals to distinguish among different sound sources. Studies based on psychophysical judgments of musical timbre, ecological analyses of sounds' physical characteristics, as well as machine learning approaches have all suggested that timbre is a multifaceted attribute that invokes both spectral and temporal sound features. Here, we explored the neural underpinnings of musical timbre. We used a neuro-computational framework based on spectro-temporal receptive fields, recorded from over a thousand neurons in the mammalian primary auditory cortex as well as from simulated cortical neurons, augmented with a nonlinear classifier. The model was able to perform robust instrument classification irrespective of pitch and playing style, with an accuracy of 98.7%. Using the same front end, the model was also able to reproduce perceptual distance judgments between timbres as perceived by human listeners. The study demonstrates that joint spectro-temporal features, such as those observed in the mammalian primary auditory cortex, are critical to provide the sufficiently rich representation necessary to account for perceptual judgments of timbre by human listeners, as well as recognition of musical instruments.


Annals of the New York Academy of Sciences | 2005

Music to Electric Ears: Pitch and Timbre Perception by Cochlear Implant Patients

Daniel Pressnitzer; Julie Bestel; Bernard Fraysse

The sounds of music play with many perceptual dimensions. We devised a set of psychophysical procedures to better understand how recipients of cochlear implants perceive basic sound attributes involved in music listening.

Collaboration


Dive into Daniel Pressnitzer's collaborations.

Top Co-Authors

Trevor R. Agus

École Normale Supérieure

Clara Suied

École Normale Supérieure

Makio Kashino

Tokyo Institute of Technology