Publication


Featured research published by Gary R. Kidd.


Attention, Perception, & Psychophysics | 2007

Similarity and categorization of environmental sounds

Brian Gygi; Gary R. Kidd; Charles S. Watson

Four experiments investigated the acoustical correlates of similarity and categorization judgments of environmental sounds. In Experiment 1, similarity ratings were obtained from pairwise comparisons of recordings of 50 environmental sounds. A three-dimensional multidimensional scaling (MDS) solution showed three distinct clusterings of the sounds, which included harmonic sounds, discrete impact sounds, and continuous sounds. Furthermore, sounds from similar sources tended to be in close proximity to each other in the MDS space. The orderings of the sounds on the individual dimensions of the solution were well predicted by linear combinations of acoustic variables, such as harmonicity, amount of silence, and modulation depth. The orderings of sounds also correlated significantly with MDS solutions for similarity ratings of imagined sounds and for imagined sources of sounds, obtained in Experiments 2 and 3—as was the case for free categorization of the 50 sounds (Experiment 4)—although the categorization data were less well predicted by acoustic features than were the similarity data.
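The multidimensional scaling analysis described above can be sketched in a few lines. This is an illustrative toy only: the dissimilarity matrix below is random stand-in data (the study used similarity ratings of 50 recorded sounds), and the use of scikit-learn's `MDS` is an assumption about tooling, not the authors' method.

```python
import numpy as np
from sklearn.manifold import MDS

# Hypothetical stand-in: a small symmetric dissimilarity matrix playing the
# role of pairwise similarity ratings of environmental sounds.
rng = np.random.default_rng(0)
n_sounds = 6
d = rng.random((n_sounds, n_sounds))
dissim = (d + d.T) / 2          # symmetrize the matrix
np.fill_diagonal(dissim, 0.0)   # a sound is identical to itself

# Fit a three-dimensional MDS solution from precomputed dissimilarities,
# analogous to the paper's 3-D solution for 50 sounds.
mds = MDS(n_components=3, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(dissim)
print(coords.shape)  # (6, 3): one 3-D point per sound
```

Clusters in the resulting 3-D space (here, of the random points) would correspond to the harmonic / impact / continuous groupings the paper reports for real sounds.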


Journal of the Acoustical Society of America | 2007

Individual differences in auditory abilities.

Gary R. Kidd; Charles S. Watson; Brian Gygi

Performance on 19 auditory discrimination and identification tasks was measured for 340 listeners with normal hearing. Test stimuli included single tones, sequences of tones, amplitude-modulated and rippled noise, temporal gaps, speech, and environmental sounds. Principal components analysis and structural equation modeling of the data support the existence of a general auditory ability and four specific auditory abilities. The specific abilities are (1) loudness and duration (overall energy) discrimination; (2) sensitivity to temporal envelope variation; (3) identification of highly familiar sounds (speech and nonspeech); and (4) discrimination of unfamiliar simple and complex spectral and temporal patterns. Examination of Scholastic Aptitude Test (SAT) scores for a large subset of the population revealed little or no association between general or specific auditory abilities and general intellectual ability. The findings provide a basis for research to further specify the nature of the auditory abilities. Of particular interest are results suggestive of a familiar sound recognition (FSR) ability, apparently specialized for sound recognition on the basis of limited or distorted information. This FSR ability is independent of normal variation in both spectral-temporal acuity and of general intellectual ability.
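The dimensionality-reduction step of this analysis can be sketched as follows. The data are random stand-ins (the study used measured scores from 340 listeners on 19 tasks), and retaining four components is simply an analog of the four specific abilities reported; scikit-learn's `PCA` is an assumed tool, not the authors' software.

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical scores: 340 listeners x 19 auditory tasks (random values).
rng = np.random.default_rng(1)
scores = rng.normal(size=(340, 19))

# Principal components analysis retaining four components, mirroring the
# four specific auditory abilities identified in the paper.
pca = PCA(n_components=4)
factors = pca.fit_transform(scores)
print(factors.shape)  # (340, 4): four factor scores per listener
print(pca.explained_variance_ratio_.sum())  # variance captured by 4 components
```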


Journal of Learning Disabilities | 2003

Sensory, Cognitive, and Linguistic Factors in the Early Academic Performance of Elementary School Children: The Benton-IU Project

Charles S. Watson; Gary R. Kidd; Douglas G. Horner; Phil Connell; Andrya Lowther; David A. Eddins; Glenn Krueger; David A. Goss; Bill B. Rainey; Mary D. Gospel; Betty U. Watson

Standardized sensory, perceptual, linguistic, intellectual, and cognitive tests were administered to 470 children, approximately 96% of the students entering the first grade in the four elementary schools of Benton County, Indiana, over a 3-year period (1995-1997). The results of 36 tests and subtests administered to entering first graders were well described by a 4-factor solution. These factors and the tests that loaded most heavily on them were reading-related skills (phonological awareness, letter and word identification); visual cognition (visual perceptual abilities, spatial perception, visual memory); verbal cognition (language development, vocabulary, verbal concepts); and speech processing (the ability to understand speech under difficult listening conditions). A cluster analysis identified 9 groups of children, each with a different profile of scores on the 4 factors. Within these groups, the proportion of students with unsatisfactory reading achievement in the first 2 years of elementary school (as reflected in teacher-assigned grades) varied from 3% to 40%. The profiles of factor scores demonstrated the primary influence of the reading-related skills factor on reading achievement and also on other areas of academic performance. The second strongest predictor of reading and mathematics grades was the visual cognition factor, followed by the verbal cognition factor. The speech processing factor was the weakest predictor of academic achievement, accounting for less than 1% of the variance in reading achievement. This project was a collaborative effort of the Benton Community School Corporation and a multidisciplinary group of investigators from Indiana University.
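The cluster-analysis step that grouped children by their factor-score profiles can be sketched like this. The factor scores below are random stand-ins, not the Benton-IU data, and k-means is one common clustering choice assumed here for illustration (the paper does not specify its algorithm in this abstract).

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical factor scores: 470 children x 4 factors (reading-related
# skills, visual cognition, verbal cognition, speech processing).
rng = np.random.default_rng(2)
factor_scores = rng.normal(size=(470, 4))

# Partition children into nine profile groups, mirroring the paper's
# nine-cluster solution.
km = KMeans(n_clusters=9, n_init=10, random_state=0)
labels = km.fit_predict(factor_scores)
print(labels.shape)  # one cluster label per child
```

Within each resulting group one could then tabulate reading outcomes, as the paper does when it reports unsatisfactory-reading rates ranging from 3% to 40% across clusters.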


American Journal of Psychology | 1984

Some effects of rhythmic context on melody recognition

Gary R. Kidd; Marilyn Gail Boltz; Mari Riess Jones

The effects of rhythmic context on the ability of listeners to recognize slightly altered versions of 10-tone melodies were examined in three experiments. Listeners judged the melodic equivalence of two auditory patterns when their rhythms were either the same or different. Rhythmic variations produced large effects on a bias measure, indicating that listeners judged melodies to be alike if their rhythms were identical. However, neither rhythm nor pattern rate affected discriminability measures in the first study, in which rhythm was treated as a within-subjects variable. The other two studies examined rhythmic context as a between-subjects variable. In these, significant effects of temporal uncertainty, due to the number and type of rhythms involved in a block of trials as well as their assignment to standard and comparison melodies on a given trial, were apparent on both discriminability and bias measures. Results were interpreted in terms of the effect of temporal context on the rhythmic targeting of attention.


Frontiers in Systems Neuroscience | 2013

Auditory and cognitive factors underlying individual differences in aided speech-understanding among older adults

Larry E. Humes; Gary R. Kidd; Jennifer J. Lentz

This study was designed to address individual differences in aided speech understanding among a relatively large group of older adults. The group of older adults consisted of 98 adults (50 female and 48 male) ranging in age from 60 to 86 (mean = 69.2). Hearing loss was typical for this age group and about 90% had not worn hearing aids. All subjects completed a battery of tests, including cognitive (6 measures), psychophysical (17 measures), and speech-understanding (9 measures), as well as the Speech, Spatial, and Qualities of Hearing (SSQ) self-report scale. Most of the speech-understanding measures made use of competing speech and the non-speech psychophysical measures were designed to tap phenomena thought to be relevant for the perception of speech in competing speech (e.g., stream segregation, modulation-detection interference). All measures of speech understanding were administered with spectral shaping applied to the speech stimuli to fully restore audibility through at least 4000 Hz. The measures used were demonstrated to be reliable in older adults and, when compared to a reference group of 28 young normal-hearing adults, age-group differences were observed on many of the measures. Principal-components factor analysis was applied successfully to reduce the number of independent and dependent (speech understanding) measures for a multiple-regression analysis. Doing so yielded one global cognitive-processing factor and five non-speech psychoacoustic factors (hearing loss, dichotic signal detection, multi-burst masking, stream segregation, and modulation detection) as potential predictors. To this set of six potential predictor variables were added subject age, Environmental Sound Identification (ESI), and performance on the text-recognition-threshold (TRT) task (a visual analog of interrupted speech recognition). These variables were used to successfully predict one global aided speech-understanding factor, accounting for about 60% of the variance.
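The final regression step, predicting a global speech-understanding factor from nine predictors, can be sketched as below. Everything here is a random stand-in: the study's predictors were one cognitive factor, five psychoacoustic factors, age, ESI, and TRT, and the reported variance accounted for was about 60%; this toy just shows the shape of such an analysis.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical predictor matrix: 98 older adults x 9 predictors, patterned
# on (cognitive factor, five psychoacoustic factors, age, ESI, TRT).
rng = np.random.default_rng(3)
n_subjects = 98
X = rng.normal(size=(n_subjects, 9))
# Simulated speech-understanding factor with some dependence on X plus noise.
y = X @ rng.normal(size=9) + rng.normal(scale=0.5, size=n_subjects)

model = LinearRegression().fit(X, y)
r_squared = model.score(X, y)  # proportion of variance accounted for
print(round(r_squared, 2))
```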


Ear and Hearing | 2013

Reconstructing wholes from parts: effects of modality, age, and hearing loss on word recognition.

Krull; Larry E. Humes; Gary R. Kidd

Objective: In this study, the effects of age, hearing loss, and modality on the ability to integrate partial information in degraded stimuli, either speech or text, were examined using isolated words. It was hypothesized that the ability to make use of partial information in speech diminishes with age. It was also hypothesized that additional contributions of cochlear pathology underlying hearing loss would be manifest as a further decrement in performance for older adults with hearing loss, relative to older adults with normal hearing. Furthermore, it was hypothesized that, if the ability to integrate partial information in speech is amodal, then recognition performance for degraded speech would be associated with recognition performance for parallel measures of degraded text. Last, it was hypothesized that, if the nature of the amodal ability to integrate partial information is cognitive, then performance on auditory and visual measures of word recognition would be correlated with performance on measures of working memory.

Design: Twenty-five young adults with normal hearing, 20 older adults with normal hearing, and 21 older adults with hearing loss participated in this study. All participants completed three auditory and two parallel visual tasks consisting of listening to or reading degraded words or text. Older participants also completed a working-memory test battery. Group effects were examined for each of the auditory and visual measures. Performance of older participants on cognitive measures was compared with available data from a younger group participating in a different study in our laboratory (with a similar protocol). Correlations between auditory and visual measures of speech recognition were examined for all participants. In addition, correlations between perceptual and cognitive measures were computed for the older participants. Finally, the relationship between the dependent auditory measures and the other independent measures in older adults was further examined using stepwise linear regression analyses.

Results: Of the 10 possible comparisons between the young group and the two older groups on the five primary dependent measures, the young performed significantly better than the older groups in 8 of the 10 cases. The two older groups performed similarly on most tasks. In young adults, performance among the auditory tasks and between the two visual tasks was moderately to strongly and significantly correlated. In addition, performance on one of the visual tasks was weakly to moderately (and significantly) correlated with performance on each of the three auditory tasks. Similar moderate to strong correlations were found within the auditory and visual modalities in older adults. However, none of the between-modality correlations were significant in the older groups.

Conclusions: In summary, the results of this study suggest that the ability to integrate partial information in degraded words diminishes with age. Once audibility is accounted for, this ability does not seem to diminish further with cochlear pathology. In young adults, both modality-specific factors and amodal cognitive factors seem to contribute to this ability. In older adults, although modality-specific factors continue to be important, the perceptual mechanisms that underlie the processing of degraded speech and text seem to be separate, at least for isolated words. Our results suggest that, when peripheral factors are accounted for, some higher-level, yet-to-be-identified, age-related factors contribute to speech-communication difficulties in the elderly.


Journal of the Acoustical Society of America | 1996

Detection of frequency changes in transposed sequences of tones

Gary R. Kidd; Charles S. Watson

The ability to detect frequency changes in transposed sequences of tones was examined in a series of seven experiments. Listeners were asked to judge which of two transposed (i.e., frequency-shifted) comparison patterns preserved the sequence of relative frequencies presented in a preceding standard pattern. The task was performed with five-tone and two-tone patterns under conditions of high and minimal pattern uncertainty. Regardless of pattern length or level of uncertainty, frequency discrimination thresholds for a change in the relative frequency of a single tone were considerably higher when patterns were transposed than when they were not. There was a tendency for performance to worsen with increasing degrees of transposition (primarily under high uncertainty) but most of the detrimental effects of transposition occurred within the first two semitones of transposition. Minimal uncertainty testing resulted in large improvements with five-tone patterns (as much as one order of magnitude), but there was no effect of level of uncertainty on performance with two-tone patterns. Thresholds for changes in two-tone patterns were similar to (although slightly higher than) those for five-tone patterns under minimal-uncertainty testing. This pattern of results reveals that the effects of stimulus complexity (sequence length) and pattern familiarity (level of uncertainty) on relative-frequency discrimination are quite similar to the effects of these variables on absolute-frequency discrimination.
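Transposition as used here has a simple arithmetic form: shifting a pattern by s semitones in equal temperament multiplies every frequency by 2^(s/12), which leaves the frequency ratios between tones (the relative frequencies listeners judged) unchanged. A small sketch, with illustrative frequencies not taken from the paper:

```python
# Shifting by s semitones multiplies every frequency by 2**(s / 12)
# (standard equal-tempered relation), so frequency ratios are preserved.

def transpose(freqs, semitones):
    """Shift a tone sequence up or down by a number of semitones."""
    factor = 2 ** (semitones / 12)
    return [f * factor for f in freqs]

# Hypothetical five-tone standard pattern (frequencies in Hz).
standard = [440.0, 494.0, 523.0, 587.0, 659.0]
comparison = transpose(standard, 2)  # transposed two semitones up

# The ratios between successive tones are identical in both patterns,
# so the comparison preserves the standard's relative frequencies.
ratios_std = [b / a for a, b in zip(standard, standard[1:])]
ratios_cmp = [b / a for a, b in zip(comparison, comparison[1:])]
print(all(abs(x - y) < 1e-9 for x, y in zip(ratios_std, ratios_cmp)))  # True
```

A change in the relative frequency of a single tone, the alteration listeners had to detect, would perturb exactly one of these ratios.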


Journal of the Acoustical Society of America | 1995

Temporally directed attending in the discrimination of tempo: Further support for an entrainment model

J. Devin McAuley; Gary R. Kidd

The effect of deviations from temporal expectations on tempo discrimination was investigated using four-tone isochronous sequences. On each trial, a standard sequence was followed by a comparison sequence that was slightly faster or slower than the standard. Listeners judged which sequence was faster. Temporal deviations consisted of advancing or delaying the onset of the comparison pattern in relation to an onset predicted by an extension of the periodicity of the standard (i.e., an "expected" onset, based on an entrainment model's predictions). The interonset interval in the standard sequence was always 400 ms, and the onset of the comparison sequence was manipulated in relation to an "expected" interval of 800 ms between the onset of the last tone of the standard sequence and the onset of the comparison sequence. Discrimination thresholds were determined for conditions in which the comparison pattern onset was early, late, or at the expected temporal location. Thresholds for "early" conditions we...
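The timing structure in this design is plain arithmetic and can be laid out explicitly. The helper function and the choice to express everything in onset times are illustrative assumptions; the 400-ms interonset interval and the 800-ms expected gap are taken from the abstract.

```python
# Onsets of a four-tone isochronous standard with a 400-ms interonset
# interval, and the "expected" comparison onset 800 ms (two 400-ms
# periods of the standard's beat) after the last standard tone.
ioi_ms = 400
standard_onsets = [i * ioi_ms for i in range(4)]              # [0, 400, 800, 1200]
expected_comparison_onset = standard_onsets[-1] + 2 * ioi_ms  # 2000 ms

def comparison_onsets(deviation_ms, n_tones=4, ioi=ioi_ms):
    """Hypothetical helper: onset times for a comparison sequence whose
    first tone is shifted by deviation_ms from the expected onset."""
    start = expected_comparison_onset + deviation_ms
    return [start + i * ioi for i in range(n_tones)]

print(expected_comparison_onset)   # 2000
print(comparison_onsets(-50)[0])   # 1950, an "early" comparison onset
```

An entrainment account predicts that discrimination should be best when the comparison begins at the expected 2000-ms point, the condition labeled "expected" above.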


Journal of the Acoustical Society of America | 2000

Individual differences in auditory abilities among normal‐hearing listeners

Gary R. Kidd; Charles S. Watson; Brian Gygi

An extended version of the Test of Basic Auditory Capabilities (TBAC) [Watson et al., J. Acoust. Soc. Am. Suppl. 1 71, S73 (1982)] includes 19 subtests that evaluate listeners’ abilities to discriminate and identify auditory stimuli. Stimuli include single tones, tonal sequences, SAM and rippled noise, temporal gaps, nonwords, words, sentences, and environmental sounds. A total of 340 college‐student subjects were tested with this battery of tests. A principal components analysis yielded a four‐factor solution that accounts for roughly 50% of the variance. The first factor primarily reflects loudness and duration discrimination, the second is associated with sensitivity to temporal envelope variation (SAM noises), the third is associated with identification of highly familiar sounds, whether they be speech or nonspeech, and the fourth factor includes pitch‐discrimination and spectral‐temporal tasks, suggesting a common ability to discriminate complex patterns. As in earlier studies in this series, measure...


Journal of the Acoustical Society of America | 1993

Temporally directed attention in the detection and discrimination of auditory pattern components

Gary R. Kidd

Thresholds for detection and frequency discrimination were determined for tones that occurred at unexpected temporal locations within 12-tone sequences. Expectancies were established by repeated presentations of a standard pattern on each trial. Temporal deviations were introduced in comparison patterns by advancing or delaying the onset of a single "target" tone while maintaining its serial position within the pattern. Rhythmic patterns consisting of 350- and 150-ms intertone intervals were used to allow for a large range of temporal displacements. Thresholds were determined for target tones that were advanced ("early" targets) or delayed ("late" targets) by various degrees. Thresholds for displaced targets were elevated with respect to nondisplaced targets for both detection and discrimination. However, for most listeners, there was little or no effect of temporal displacement on detection except when targets were advanced by 200 ms or more. Temporal deviations had more consistent effects on frequ...

Collaboration


Dive into Gary R. Kidd's collaboration.

Top Co-Authors

Charles S. Watson, Indiana University Bloomington

Larry E. Humes, Indiana University Bloomington

Brian Gygi, National Institute for Health Research

David A. Eddins, University of South Florida

James D. Miller, Central Institute for the Deaf