Kathy Pichora-Fuller
University of Toronto
Publications
Featured research published by Kathy Pichora-Fuller.
Archive | 2010
Bruce A. Schneider; Kathy Pichora-Fuller; Meredyth Daneman
Older individuals often find it difficult to communicate, especially in group situations, because they are unable to keep up with the flow of conversation or are too slow in comprehending what they are hearing. These communication difficulties are often exacerbated by negative stereotypes held by their communication partners who often perceive older adults as less competent than they actually are (Ryan et al. 1986). Sometimes, older adults’ communication problems motivate them, often at the prompting of their family and friends, to seek help from hearing specialists (O’Mahoney et al. 1996). Quite often, however, older adults and/or their family members wonder if these comprehension difficulties are a sign of cognitive decline. Such uncertainty on the part of both older adults and their family members with respect to the source of communication difficulties is understandable given that age-related changes in the comprehension of spoken language could be due to age-related changes in hearing, to age-related declines in cognitive functioning, or to interactions between these two levels of processing. To participate effectively in a multitalker conversation, listeners need to do more than simply recognize and repeat speech. They have to keep track of who said what, extract the meaning of each utterance, store it in memory for future use, integrate the incoming information with what each conversational participant has said in the past, and draw on the listener’s own knowledge of the topic under consideration to extract general themes and formulate responses. In other words, effective communication requires not only an intact auditory system but also an intact cognitive system.
Ear and Hearing | 2015
Payam Ezzatian; Liang Li; Kathy Pichora-Fuller; Bruce A. Schneider
Objective: To determine whether the time course for the buildup of auditory stream segregation differs between younger and older adults. Design: Word recognition thresholds were determined for the first and last keywords in semantically anomalous but syntactically correct sentences (e.g., “A rose could paint a fish”) when the target sentences were masked by speech-spectrum noise, 3-band vocoded speech, 16-band vocoded speech, intact and colocated speech, and intact and spatially separated speech. A significant reduction in thresholds from the first to the last keyword was interpreted as indicating that stream segregation improved with time. Results: The buildup of stream segregation is slowed for both age groups when the masker is intact, colocated speech. Conclusions: Older adults are more disadvantaged; for them, stream segregation is also slowed even when a speech masker is spatially separated, when it conveys little meaning (3-band vocoding), or when vocal fine-structure cues are impoverished but envelope cues remain available (16-band vocoding).
Journal of the Acoustical Society of America | 2008
Heather Macdonald; Matthew H. Davis; Kathy Pichora-Fuller; Ingrid S. Johnsrude
Meaningful semantic context has been demonstrated to improve comprehension of spoken sentences by young and old adults, especially in difficult listening conditions. Evidence for this benefit is based largely on data collected using SPIN sentences, highly structured sentences with a predictable or unpredictable final word. We asked young (14 participants, aged 18‐25) and older adults (20 participants, aged 60‐75) to report entire sentences that were less structured in nature and contained either a meaningful or anomalous global semantic context. Sentences were mixed with signal‐correlated noise at 9 signal‐to‐noise ratios (−6 to +2 dB), and also presented without noise. Comprehension by both groups benefited from meaningful context, without a clear overall difference in the amount of benefit obtained. We used fMRI to look at neural activity associated with deriving benefit from meaningful context. Whole‐brain EPI data were acquired from young (16 participants, aged 19‐26) adults using a sparse imaging d...
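The mixing procedure described in this abstract (presenting sentences in noise at fixed signal-to-noise ratios) follows a standard power-scaling rule. The sketch below is an illustration of that rule only; function and variable names are mine, not the study's.

```python
# Mix a signal with noise at a target SNR (dB) by scaling the noise so
# that the ratio of mean signal power to mean noise power matches
# 10^(SNR/10). This is the standard definition; parameters here are
# illustrative, not taken from the study.
import numpy as np

def mix_at_snr(signal, noise, snr_db):
    """Return signal + noise, with noise scaled to the requested SNR."""
    p_sig = np.mean(signal ** 2)
    p_noise = np.mean(noise ** 2)
    # Required noise power satisfies p_sig / p_noise' = 10^(snr_db / 10)
    scale = np.sqrt(p_sig / (p_noise * 10 ** (snr_db / 10)))
    return signal + scale * noise
```

For example, mixing at −6 dB makes the noise power roughly four times the signal power, while +2 dB leaves the signal slightly stronger than the noise.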
Journal of the Acoustical Society of America | 2009
Payam Ezzatian; Liang Li; Kathy Pichora-Fuller; Bruce A. Schneider
In a previous study, Freyman et al. (2004) showed that presenting listeners with all but the last word of a target nonsense sentence immediately prior to presenting the full sentence in a noisy background produced a greater release from masking when the masker was two‐talker nonsense speech than when it was speech‐spectrum noise, thereby demonstrating that an auditory prime could produce a release from informational masking. In Experiment 1 of this study we showed that auditory priming produced an equivalent amount of release from informational masking in good‐hearing younger and older adults. To investigate the extent to which this release from informational masking was due to the semantic content of the prime, in Experiment 2 we noise‐vocoded the prime (using three bands) to remove semantic content, while retaining the prime’s amplitude envelope. This manipulation eliminated any release from informational masking. In Experiment 3, when the speech masker, but not the prime, was vocoded, the performance...
Journal of the Acoustical Society of America | 2006
Signy Sheldon; Kathy Pichora-Fuller; Bruce A. Schneider
Older adults with good audiograms have difficulty understanding speech in noise. Age‐related differences have been found on some temporal processing measures such as gap detection; however, older adults are believed to have well‐preserved ability to use envelope cues to identify words. Following Shannon et al. (1995), we used noise‐vocoded speech such that the amplitude envelope of speech was retained in frequency bands but filled with noise, thereby obliterating fine structure cues within each band. In experiment 1, younger and older listeners heard a list of words. Each word was presented first with one vocoded frequency band, and the number of bands was incremented until the listener correctly identified the word. The average number of bands required for correct identification was found to be identical for both age groups. In experiment 2, both age groups identified words in four blocked noise‐vocoded conditions (16, 8, 4, and 2 bands). Younger adults outperformed older adults. Although older adults we...
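The noise-vocoding manipulation used here (after Shannon et al., 1995) keeps the amplitude envelope within each frequency band and replaces the fine structure with noise. A minimal sketch follows; the band count, band edges, and filter choices are illustrative assumptions, not the study's actual parameters.

```python
# Minimal noise-vocoder sketch: split speech into bands, extract each
# band's amplitude envelope, refill each band with band-limited noise
# scaled by that envelope, and sum. Band edges, filter order, and the
# envelope method (Hilbert) are illustrative choices.
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocode(speech, fs, n_bands=4, f_lo=100.0, f_hi=6000.0, seed=0):
    """Return a noise-vocoded version of `speech` sampled at `fs` Hz."""
    edges = np.geomspace(f_lo, f_hi, n_bands + 1)  # log-spaced band edges
    rng = np.random.default_rng(seed)
    out = np.zeros(len(speech))
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos, speech)
        env = np.abs(hilbert(band))               # amplitude envelope
        noise = sosfiltfilt(sos, rng.standard_normal(len(speech)))
        out += env * noise                        # envelope-modulated noise
    return out
```

With few bands (e.g., 2-4) only coarse envelope cues survive; with 16 bands the speech remains fairly intelligible even though fine-structure cues are gone, which is the contrast exploited in Experiment 2 of this study.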
Journal of the Acoustical Society of America | 2006
Ewen N. MacDonald; Kathy Pichora-Fuller; Bruce A. Schneider; Willy Wong
Age‐related changes in the auditory system have been attributed to three independent factors: OHC damage, changes in endocochlear potentials, and loss of neural synchrony. In previous studies, a jitter algorithm has been used to simulate the loss of synchrony in young adults (MacDonald et al., 2005). In this study, the effect of jitter on older adults with good audiograms in the speech range is explored. SPIN‐R sentences were presented in two SNRs and three processing conditions: intact, jitter, and smear. The parameters of the jittering algorithm were the same as those used with young adults. The parameters of the smearing algorithm were chosen to match the spectral distortion produced by the jitter algorithm. While both the jitter and smear conditions resulted in a significant decline in word identification, the decline was largest in the jitter condition. Psychometric functions were fitted to the data and compared to previous work with young adults. The comparison supports the hypothesis that loss of synchrony ca...
Journal of the Acoustical Society of America | 2005
Ewen N. MacDonald; Kathy Pichora-Fuller; Bruce A. Schneider
A jittering technique to disrupt the periodicity of the signal was used to simulate the effect of the loss of temporal synchrony coding believed to characterize auditory aging. In one experiment jittering was used to distort the frequency components below 1.2 kHz, and in a second experiment the components above 1.2 kHz were distorted. To control for spectral distortion introduced by jittering, comparison conditions were created using a smearing technique (Baer and Moore, 1993). In both experiments, 16 normal-hearing young adult subjects were presented with SPIN sentences in three conditions (intact, jittered, and smeared) at 0 and 8 dB SNR. When the low frequencies were distorted, speech intelligibility in the jittered conditions was significantly worse than in the intact and smeared conditions, but the smeared and intact conditions were equivalent. When the high frequencies were distorted, speech intelligibility was reduced similarly by jittering and smearing. On low‐context jittered sentences, results fo...
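The core idea of the jitter manipulation, disrupting periodicity in one frequency region while leaving the other intact, can be sketched as below. This is a stand-in illustration using a smooth random time warp of the low band; the study's actual jitter algorithm and its parameters are not reproduced here.

```python
# Illustrative jitter of the band below 1.2 kHz: split the signal at the
# cutoff, apply a small, slowly varying random time warp to the low band
# (disrupting its periodicity), then recombine with the unmodified high
# band. Cutoff, jitter depth, and smoothing window are assumptions.
import numpy as np
from scipy.signal import butter, sosfiltfilt

def jitter_low_band(x, fs, f_cut=1200.0, jitter_ms=0.5, seed=0):
    """Time-jitter the band below `f_cut` Hz; leave the rest intact."""
    sos_lo = butter(6, f_cut, btype="lowpass", fs=fs, output="sos")
    sos_hi = butter(6, f_cut, btype="highpass", fs=fs, output="sos")
    low, high = sosfiltfilt(sos_lo, x), sosfiltfilt(sos_hi, x)
    rng = np.random.default_rng(seed)
    n = len(x)
    t = np.arange(n, dtype=float)
    # Smooth random per-sample offsets, bounded by jitter_ms
    offsets = rng.standard_normal(n)
    k = max(1, int(0.005 * fs))  # ~5 ms moving-average smoothing
    offsets = np.convolve(offsets, np.ones(k) / k, mode="same")
    offsets *= (jitter_ms / 1000.0 * fs) / (np.max(np.abs(offsets)) + 1e-12)
    # Resample the low band at the warped time points
    warped = np.interp(np.clip(t + offsets, 0, n - 1), t, low)
    return warped + high
```

Swapping the lowpass and highpass roles gives the complementary condition in which components above 1.2 kHz are distorted instead.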
Ear and Hearing | 2010
Payam Ezzatian; Liang Li; Kathy Pichora-Fuller; Bruce A. Schneider
Canadian Acoustics | 2006
Shazia Ahmed; Matthew King; Timothy W. Morrish; Ewelina Zaszewska; Kathy Pichora-Fuller
Canadian Acoustics | 2007
Huiwen Goy; Kathy Pichora-Fuller; Pascal van Lieshout; Gurjit Singh; Bruce A. Schneider