Publications


Featured research published by Laurent Demany.


Journal of the Acoustical Society of America | 1985

Perceptual learning in frequency discrimination

Laurent Demany

This study was concerned with the effects of training on the frequency discrimination ability of human listeners. Frequency discrimination at 200 Hz was tested before and after training. Four groups of listeners received training in four different frequency regions, 200, 360, 2500, and 6000 Hz. It was found that training at 200, 360, and 2500 Hz all provided comparable improvement in discrimination performance at 200 Hz whereas training at 6000 Hz provided less improvement. This result is consistent with the idea that frequency discrimination and pitch perception are mediated by different processes at high (greater than 5000 Hz) and low (less than 5000 Hz) frequencies.


Journal of the Acoustical Society of America | 2002

Learning to perceive pitch differences

Laurent Demany; Catherine Semal

This paper reports two experiments concerning the stimulus specificity of pitch discrimination learning. In experiment 1, listeners were initially trained, during ten sessions (about 11,000 trials), to discriminate a monaural pure tone of 3000 Hz from ipsilateral pure tones with slightly different frequencies. The resulting perceptual learning (improvement in discrimination thresholds) appeared to be frequency-specific since, in subsequent sessions, new learning was observed when the 3000-Hz standard tone was replaced by a standard tone of 1200 Hz or 6500 Hz. By contrast, a subsequent presentation of the initial tones to the contralateral ear showed that the initial learning was not, or was only weakly, ear-specific. In experiment 2, training in pitch discrimination was initially provided using complex tones that consisted of harmonics 3-7 of a missing fundamental (near 100 Hz for some listeners, 500 Hz for others). Subsequently, the standard complex was replaced by a standard pure tone with a frequency which could be either equal to the standard complex's missing fundamental or remote from it. In the former case, the two standard stimuli were matched in pitch. However, this perceptual relationship did not appear to favor the transfer of learning. Therefore, the results indicated that pitch discrimination learning is, at least to some extent, timbre-specific, and cannot be viewed as a reduction of an internal noise which would affect directly the output of a neural device extracting pitch from both pure tones and complex tones including low-rank harmonics.


Journal of the Acoustical Society of America | 1982

The perceptual reality of tone chroma in early infancy

Laurent Demany; Françoise Armand

In the framework of a habituation‐recovery paradigm, we presented 40 infants, aged 70–110 days, two cyclically repeating melodic sequences of pure tones. The habituation sequence, A B C A B C …, was the same for all infants; its component tones, A, B, and C, had a level of 80 dB SPL and respective frequencies of 736.7, 487.4, and 428.1 Hz. B and C were replaced by X and Y in the test sequence, A X Y A X Y …, which had three different versions, heard by three separate groups of infants. For each group, X and Y were 85 dB SPL and X Y was an exact musical transposition of B C at lower frequencies. The interval between B and X (and C and Y) was equal to 1003 cents for group 1 (N = 12), 1200 cents—i.e., one octave—for group 2 (N = 16), and 1389 cents for group 3 (N = 12). Significant reactions to the sequence change were observed in groups 1 and 3, but not in group 2. Young infants would therefore seem to perceive as strongly similar two pure tones one octave apart. This tends to support nativist explanations ...
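The interval sizes above are given in cents, the standard logarithmic unit for frequency ratios: one octave is exactly 1200 cents, and in general cents = 1200 · log2(f2/f1). A minimal Python sketch of this arithmetic (the helper names `cents` and `shift` are illustrative, not from the paper) shows how group 2's test tones relate to the habituation tones B and C:

```python
import math

def cents(f1: float, f2: float) -> float:
    """Interval from frequency f1 to frequency f2, in cents."""
    return 1200.0 * math.log2(f2 / f1)

def shift(f: float, c: float) -> float:
    """Frequency obtained by shifting f by c cents (negative = downward)."""
    return f * 2.0 ** (c / 1200.0)

# Tones B and C of the habituation sequence (Hz), as given in the abstract:
B, C = 487.4, 428.1

# Group 2's test tones X and Y sat exactly one octave (1200 cents) below B and C,
# i.e., at half their frequencies:
X, Y = shift(B, -1200), shift(C, -1200)
print(X, Y)            # 243.7 214.05
print(cents(X, B))     # 1200.0, i.e., one octave
```

The same `shift` with -1003 or -1389 cents gives the test tones heard by groups 1 and 3, which are near, but not at, the octave ratio of 2:1.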


Infant Behavior & Development | 1982

Auditory stream segregation in infancy

Laurent Demany

In a rapid and repeating melodic sequence, S = … abcdabcd …, adults may perceive, instead of one coherent string of tones, several co-occurring segregated streams (e.g., … a-c-a-c- … and … b-d-b-d- …); melodic stream segregation obeys the Gestalt principle of pitch similarity. Can evidence for stream segregation processes be found in young infants? We reasoned that S should be discriminable from its retrogradation, Sr = … adcbadcb …, if adjacent tones are grouped in the same stream, but not if adjacent tones are systematically assigned to separate streams. Same / different judgments were obtained from adults on various S-Sr pairs. The same pairs of sequences were then presented to 7–15-week-old infants in a habituation / dishabituation paradigm. The discriminative abilities of the adults and infants varied in parallel as a function of changes in the melodic structure of S and Sr. Our results suggest that stream segregation processes governed by Gestalt factors are operative very early in life.


Journal of the Acoustical Society of America | 1996

Speech versus nonspeech in pitch memory

Catherine Semal; Laurent Demany; Kazuo Ueda; Pierre Halle

The memory trace of the pitch sensation induced by a standard tone (S) can be strongly degraded by subsequently intervening sounds (I). Deutsch [Science 168, 1604-1605 (1970)] suggested that the degradation is much weaker when the I sounds are words than when they are tones. In Deutsch's study, however, the pitch relations between S and the I words were not controlled. The first experiment reported here was similar to that of Deutsch except that the speech and nonspeech stimuli used as I sounds were matched in pitch. The speech stimuli were monosyllabic words derived from recordings of a real voice, whereas the nonspeech stimuli were harmonic complex tones with a flat spectral profile. These two kinds of I sound were presented at a variable pitch distance (delta-pitch) from the S tone. In a same/different paradigm, S had to be compared with a tone presented 6 s later; this comparison tone could be either identical to S or shifted in pitch by +/- 75 cents. The nature of the I sounds (spoken words versus tones) affected discrimination performance, but markedly less than did delta-pitch. Performance was better when delta-pitch was large than when it was small, for the speech as well as nonspeech I sounds. In a second experiment, the S sounds and comparison sounds were spoken words instead of tones. The differences to be detected were restricted to shifts in fundamental frequency (and thus pitch), the other acoustic attributes of the words being left unchanged. Again, discrimination performance was positively related to delta-pitch. This time, the nature of the I sounds (words versus tones) had no significant effect. Overall, the results suggest that, in auditory short-term memory, the pitch of speech sounds is not stored differently from the pitch of nonspeech sounds.


Journal of the Acoustical Society of America | 2006

Individual differences in the sensitivity to pitch direction

Catherine Semal; Laurent Demany

It is commonly assumed that one can always assign a direction (upward or downward) to a percept of pitch change. The present study shows that this is true for some, but not all, listeners. Frequency difference limens (FDLs, in cents) for pure tones roved in frequency were measured in two conditions. In one condition, the task was to detect frequency changes; in the other condition, the task was to identify the direction of frequency changes. For three listeners, the identification FDL was about 1.5 times smaller than the detection FDL, as predicted (counterintuitively) by signal detection theory under the assumption that performance in the two conditions was limited by one and the same internal noise. For three other listeners, however, the identification FDL was much larger than the detection FDL. The latter listeners had relatively high detection FDLs. They had no difficulty in identifying the direction of just-detectable changes in intensity, or in the frequency of amplitude modulation. Their difficulty in perceiving the direction of small frequency/pitch changes showed up not only when the task required absolute judgments of direction, but also when the directions of two successive frequency changes had to be judged as identical or different.
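The counterintuitive signal-detection prediction can be illustrated numerically. The toy model below is an assumption of this sketch, not the paper's own derivation: each tone's frequency percept carries the same Gaussian internal noise, the detection task is modeled as same/different with an optimal criterion on the difference between the two percepts, and the identification task as a simple up/down judgment on the sign of that difference. Under these assumptions an ideal observer's 75%-correct identification threshold comes out smaller than its detection threshold, which is the direction of the paper's prediction; the exact factor (1.5 in the paper) depends on the details of the actual paradigm, which this sketch does not reproduce.

```python
import math

def phi(z: float) -> float:
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def p_identification(delta: float, sigma: float) -> float:
    # Decision variable d = percept2 - percept1 ~ N(+/-delta, sqrt(2)*sigma);
    # respond "up" iff d > 0.
    return phi(delta / (math.sqrt(2.0) * sigma))

def p_detection(delta: float, sigma: float, n_crit: int = 400) -> float:
    # Same/different task: d ~ N(0, s) on "same" trials, N(+/-delta, s) on
    # "different" trials; respond "different" iff |d| > k, with the best
    # criterion k found by a grid search.
    s = math.sqrt(2.0) * sigma
    best = 0.0
    for i in range(1, n_crit):
        k = 4.0 * s * i / n_crit
        p_diff = 1.0 - phi((k - delta) / s) + phi((-k - delta) / s)
        p_same = 2.0 * phi(k / s) - 1.0
        best = max(best, 0.5 * (p_diff + p_same))
    return best

def threshold(p_correct_fn, sigma: float, target: float = 0.75) -> float:
    # Bisection on delta for the target percent-correct level.
    lo, hi = 1e-6, 100.0 * sigma
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if p_correct_fn(mid, sigma) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

sigma = 10.0  # internal noise in cents: an arbitrary illustrative value
fdl_id = threshold(p_identification, sigma)
fdl_det = threshold(p_detection, sigma)
print(fdl_id, fdl_det, fdl_det / fdl_id)  # identification FDL < detection FDL
```

With a shared internal noise, same/different detection is the harder task for an ideal observer, so listeners whose identification FDL is instead much larger than their detection FDL cannot be explained by internal noise alone.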


Archive | 2008

The Role of Memory in Auditory Perception

Laurent Demany; Catherine Semal

Sound sources produce physical entities that, by definition, are extended in time. Moreover, whereas a visual stimulus lasting only 1 ms can provide very rich information, that is not the case for a 1-ms sound. Humans are indeed used to processing much longer acoustic entities. In view of this, it is natural to think that “memory” (in the broadest sense) must play a crucial role in the processing of information provided by sound sources. However, a stronger point can be made: It is reasonable to state that, at least in the auditory domain, “perception” and “memory” are so deeply interrelated that there is no definite boundary between them. Such a view is supported by numerous empirical facts and simple logical considerations.

Consider, as a preliminary example, the perception of loudness. The loudness of a short sound, e.g., a burst of white noise, depends on its duration (Scharf 1978). Successive noise bursts equated in acoustic power and increasing in duration from, say, 5 ms to about 200 ms are perceived not only as longer and longer but also as louder and louder. Loudness is thus determined by a temporal integration of acoustic power. This temporal integration implies that a “percept” of loudness is in fact the content of an auditory memory. A commonsense notion is that memory is a consequence of perception and cannot be a cause of it. In the case of loudness, however, perception appears to be a consequence of memory.

This is not a special case: Many other examples of such a relationship between perception and memory can be given. Consider, once more, the perception of white noise. A long sample of white noise, i.e., a completely random signal, is perceived as a static “shhhhh...” in which no event or feature is discernible. But if a 500-ms or 1-s excerpt of the same noise is taken at random and cyclically repeated, the new sound obtained is rapidly perceived as quite different. What is soon heard is a repeating sound pattern filled with perceptual events such as “clanks” and “rasping” (Guttman and Julesz 1963; Warren 1982, Chapter 3; Kaernbach 1993, 2004). It can be said that the perceptual events in question are a creation of memory, since they do not exist in the absence of repetitions. Kubovy and Howard (1976) provided another thought-provoking example. They constructed sequences of binaural “chords”


Journal of the Acoustical Society of America | 1989

Detection thresholds for sinusoidal frequency modulation.

Laurent Demany; Catherine Semal

An adaptive forced-choice procedure was used to measure, in four normal-hearing subjects, detection thresholds for sinusoidal frequency modulation as a function of carrier frequency (fc, from 250 to 4000 Hz) and modulation frequency (fmod, from 1 to 64 Hz). The results show that, for a wide range of fmod values, fc and fmod have almost independent effects on the thresholds when the thresholds are expressed as just-noticeable frequency swings and plotted on a log scale. In two subjects, the effect of fc on the thresholds was compared to the effect of standard frequency on the frequency just noticeable differences (jnds) of successive and steady tones. In agreement with previous data [H. Fastl, J. Acoust. Soc. Am. 63, 275-277 (1978)], it was found that the two effects are significantly different if the frequency jnds are measured with long-duration tones. However, it was also found that the two effects are similar if the frequency jnds are measured with 25-ms tones. These results support the idea that, at least for low fmod values, the detection of continuous and periodic frequency modulations is mediated by a pitch-sampling process using a temporal window of about 25 ms.


Psychological Science | 2008

Auditory Change Detection: Simple Sounds Are Not Memorized Better Than Complex Sounds

Laurent Demany; Wiebke Trost; Maja Šerman; Catherine Semal

Previous research has shown that the detectability of a local change in a visual image is essentially independent of the complexity of the image when the interstimulus interval (ISI) is very short, but is limited by a low-capacity memory system when the ISI exceeds 100 ms. In the study reported here, listeners made same/different judgments on pairs of successive “chords” (sums of pure tones with random frequencies). The change to be detected was always a frequency shift in one of the tones, and which tone would change was unpredictable. Performance worsened as the number of tones increased, but this effect was not larger for 2-s ISIs than for 0-ms ISIs. Similar results were obtained when a chord was followed by a single tone that had to be judged as higher or lower than the closest component of the chord. Overall, our data suggest that change detection is based on different mechanisms in audition and vision.


Experimental Brain Research | 2010

Fundamental differences in change detection between vision and audition.

Laurent Demany; Catherine Semal; Jean-René Cazalets; Daniel Pressnitzer

We compared auditory change detection to visual change detection using closely matched stimuli and tasks in the two modalities. On each trial, participants were presented with a test stimulus consisting of ten elements: pure tones with various frequencies for audition, or dots with various spatial positions for vision. The test stimulus was preceded or followed by a probe stimulus consisting of a single element, and two change-detection tasks were performed. In the “present/absent” task, the probe either matched one randomly selected element of the test stimulus or none of them; participants reported present or absent. In the “direction-judgment” task, the probe was always slightly shifted relative to one randomly selected element of the test stimulus; participants reported the direction of the shift. Whereas visual performance was systematically better in the present/absent task than in the direction-judgment task, the opposite was true for auditory performance. Moreover, whereas visual performance was strongly dependent on selective attention and on the time interval separating the probe from the test stimulus, this was not the case for auditory performance. Our results show that small auditory changes can be detected automatically across relatively long temporal gaps, using an implicit memory system that seems to have no similar counterpart in the visual domain.

Collaboration


Dive into Laurent Demany's collaborations.

Robert P. Carlyon

Cognition and Brain Sciences Unit


Christian Lorenzi

École Normale Supérieure
