Publications


Featured research published by Jasmine M. S. Grimsley.


PLOS ONE | 2011

Development of social vocalizations in mice.

Jasmine M. S. Grimsley; Jessica J. M. Monaghan; Jeffrey J. Wenstrup

Adult mice are highly vocal animals, with both males and females vocalizing in same sex and cross sex social encounters. Mouse pups are also highly vocal, producing isolation vocalizations when they are cold or removed from the nest. This study examined patterns in the development of pup isolation vocalizations, and compared these to adult vocalizations. In three litters of CBA/CaJ mice, we recorded isolation vocalizations at ages postnatal day 5 (p5), p7, p9, p11, and p13. Adult vocalizations were obtained in a variety of social situations. Altogether, 28,384 discrete vocal signals were recorded using high-frequency-sensitive equipment and analyzed for syllable type, spectral and temporal features, and the temporal sequencing within bouts. We found that pups produced all but one of the 11 syllable types recorded from adults. The proportions of syllable types changed developmentally, but even the youngest pups produced complex syllables with frequency-time variations. When all syllable types were pooled together for analysis, changes in the peak frequency or the duration of syllables were small, although significant, from p5 through p13. However, individual syllable types showed different, large patterns of change over development, requiring analysis of each syllable type separately. Most adult syllables were substantially lower in frequency and shorter in duration. As pups aged, the complexity of vocal bouts increased, with a greater tendency to switch between syllable types. Vocal bouts from older animals, p13 and adult, had significantly more sequential structure than those from younger mice. Overall, these results demonstrate substantial changes in social vocalizations with age. Future studies are required to identify whether these changes result from developmental processes affecting the vocal tract or control of vocalization, or from vocal learning. To provide a tool for further research, we developed a MATLAB program that generates bouts of vocalizations that correspond to mice of different ages.
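
The bout-generating program mentioned above was written in MATLAB and is not reproduced here; as a minimal illustration of the underlying idea, the Python sketch below draws a bout from a first-order Markov chain over syllable types. The syllable labels and transition probabilities are hypothetical placeholders, not values estimated in the study; older animals would correspond to matrices with more off-diagonal mass (more switching between syllable types).

```python
import numpy as np

# Hypothetical syllable inventory and first-order transition matrix:
# row = current syllable, column = next syllable. Placeholder values only.
SYLLABLES = ["flat", "upsweep", "downsweep", "complex", "chevron"]
TRANSITIONS = np.array([
    [0.50, 0.20, 0.10, 0.10, 0.10],
    [0.15, 0.45, 0.15, 0.15, 0.10],
    [0.10, 0.15, 0.50, 0.15, 0.10],
    [0.10, 0.15, 0.15, 0.45, 0.15],
    [0.20, 0.10, 0.10, 0.15, 0.45],
])

def generate_bout(n_syllables, rng=None):
    """Generate one bout as a Markov chain over syllable types."""
    rng = rng or np.random.default_rng()
    bout = [rng.integers(len(SYLLABLES))]   # random initial syllable
    for _ in range(n_syllables - 1):
        bout.append(rng.choice(len(SYLLABLES), p=TRANSITIONS[bout[-1]]))
    return [SYLLABLES[i] for i in bout]

print(generate_bout(8))
```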


PLOS ONE | 2012

Social Vocalizations of Big Brown Bats Vary with Behavioral Context

Marie A. Gadziola; Jasmine M. S. Grimsley; Paul A. Faure; Jeffrey J. Wenstrup

Bats are among the most gregarious and vocal mammals, with some species demonstrating a diverse repertoire of syllables under a variety of behavioral contexts. Despite extensive characterization of big brown bat (Eptesicus fuscus) biosonar signals, there have been no detailed studies of adult social vocalizations. We recorded and analyzed social vocalizations and associated behaviors of captive big brown bats under four behavioral contexts: low aggression, medium aggression, high aggression, and appeasement. Even within these limited contexts, big brown bats possess a rich repertoire of social vocalizations, with 18 distinct syllable types automatically classified using a spectrogram cross-correlation procedure. For each behavioral context, we describe vocalizations in terms of syllable acoustics, temporal emission patterns, and typical syllable sequences. Emotion-related acoustic cues are evident in the call structure through context-specific syllable types or variations in the temporal emission pattern. We designed a paradigm that could evoke aggressive vocalizations while monitoring heart rate as an objective measure of internal physiological state. Changes in the magnitude and duration of elevated heart rate scaled with the level of evoked aggression, confirming the behavioral state classifications assessed by vocalizations and behavioral displays. These results reveal a complex acoustic communication system among big brown bats in which acoustic cues and call structure signal the emotional state of the caller.
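
A minimal sketch of the spectrogram cross-correlation idea behind the automatic classification, not the authors' own pipeline. The sampling rate and window length below are illustrative assumptions; a classifier built on this measure would assign each recorded syllable to whichever template syllable yields the highest similarity score.

```python
import numpy as np
from scipy.signal import spectrogram, correlate2d

def syllable_similarity(x, y, fs=250_000, nperseg=512):
    """Peak of the normalized 2-D cross-correlation between two
    syllables' spectrograms (1.0 for identical spectrograms)."""
    _, _, Sx = spectrogram(x, fs=fs, nperseg=nperseg)
    _, _, Sy = spectrogram(y, fs=fs, nperseg=nperseg)
    Sx = (Sx - Sx.mean()) / Sx.std()   # z-score so amplitude drops out
    Sy = (Sy - Sy.mean()) / Sy.std()
    corr = correlate2d(Sx, Sy, mode="full")
    return corr.max() / (np.linalg.norm(Sx) * np.linalg.norm(Sy))
```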


PLOS ONE | 2012

Processing of communication calls in guinea pig auditory cortex.

Jasmine M. S. Grimsley; Sharad J. Shanbhag; Alan R. Palmer; Mark N. Wallace

Vocal communication is an important aspect of guinea pig behaviour and a large contributor to their acoustic environment. We postulated that some cortical areas have distinctive roles in processing conspecific calls. In order to test this hypothesis we presented exemplars from all ten of their main adult vocalizations to urethane anesthetised animals while recording from each of the eight areas of the auditory cortex. We demonstrate that the primary area (AI) and three adjacent auditory belt areas contain many units that give isomorphic responses to vocalizations. These are the ventrorostral belt (VRB), the transitional belt area (T) that is ventral to AI and the small area (area S) that is rostral to AI. Area VRB has a denser representation of cells that are better at discriminating among calls by using either a rate code or a temporal code than any other area. Furthermore, 10% of VRB cells responded to communication calls but did not respond to stimuli such as clicks, broadband noise or pure tones. Area S has a sparse distribution of call responsive cells that showed excellent temporal locking, 31% of which selectively responded to a single call. AI responded well to all vocalizations and was much more responsive to vocalizations than the adjacent dorsocaudal core area. Areas VRB, AI and S contained units with the highest levels of mutual information about call stimuli. Area T also responded well to some calls but seems to be specialized for low sound levels. The two dorsal belt areas are comparatively unresponsive to vocalizations and contain little information about the calls. AI projects to areas S, VRB and T, so there may be both rostral and ventral pathways for processing vocalizations in the guinea pig.
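
The rate-code comparisons above rest on estimating how much a unit's response tells us about call identity. As a rough sketch under assumed inputs (trial-by-trial spike counts paired with call labels), the following computes mutual information from a joint histogram; real analyses also require bias corrections for limited sampling, which are omitted here.

```python
import numpy as np

def mutual_information(stimulus_ids, spike_counts):
    """I(stimulus; spike count) in bits, from a joint histogram."""
    stims = np.unique(stimulus_ids)
    counts = np.unique(spike_counts)
    joint = np.zeros((len(stims), len(counts)))
    for s, r in zip(stimulus_ids, spike_counts):
        joint[np.searchsorted(stims, s), np.searchsorted(counts, r)] += 1
    p = joint / joint.sum()
    ps = p.sum(axis=1, keepdims=True)   # P(stimulus)
    pr = p.sum(axis=0, keepdims=True)   # P(response)
    nz = p > 0
    return float((p[nz] * np.log2(p[nz] / (ps @ pr)[nz])).sum())

# Example: 10 trials each of 3 calls, with made-up spike counts.
rng = np.random.default_rng(0)
stim = np.repeat([0, 1, 2], 10)
resp = np.concatenate([rng.poisson(2, 10), rng.poisson(8, 10), rng.poisson(4, 10)])
print(mutual_information(stim, resp), "bits")
```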


Journal of Neurophysiology | 2012

A novel coding mechanism for social vocalizations in the lateral amygdala

Marie A. Gadziola; Jasmine M. S. Grimsley; Sharad J. Shanbhag; Jeffrey J. Wenstrup

The amygdala plays a central role in evaluating the significance of acoustic signals and coordinating the appropriate behavioral responses. To understand how amygdalar responses modulate auditory processing and drive emotional expression, we assessed how neurons respond to and encode information that is carried within complex acoustic stimuli. We characterized responses of single neurons in the lateral nucleus of the amygdala to social vocalizations and synthetic acoustic stimuli in awake big brown bats. Neurons typically responded to most of the social vocalizations presented (mean = 9 of 11 vocalizations) but differentially modulated both firing rate and response duration. Surprisingly, response duration provided substantially more information about vocalizations than did spike rate. In most neurons, variation in response duration depended, in part, on persistent excitatory discharge that extended beyond stimulus duration. The majority of neurons carried more information in the duration of this persistent firing than in spike rate, and persistent firing was more likely to be observed in response to aggressive vocalizations (64%) than appeasement vocalizations (25%), suggesting that it may relate to the behavioral context of vocalizations. These findings suggest that the amygdala uses a novel coding strategy for discriminating among vocalizations and underscore the importance of persistent firing in the general functioning of the amygdala.
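
Response duration as a code is straightforward to operationalize. The sketch below measures, from a train of spike times, how long firing continues as an unbroken run after stimulus onset and how far it extends past stimulus offset. The gap criterion that ends a response run is a hypothetical parameter, not the paper's definition.

```python
import numpy as np

def response_metrics(spike_times, onset, offset, max_gap=0.05):
    """Return (response duration, persistent firing beyond offset) in
    seconds. A response run ends at the first inter-spike gap > max_gap."""
    spikes = np.sort(spike_times[spike_times >= onset])
    if spikes.size == 0:
        return 0.0, 0.0
    last = spikes[0]
    for t in spikes[1:]:
        if t - last > max_gap:   # firing lapsed: the response has ended
            break
        last = t
    return last - spikes[0], max(0.0, last - offset)

# Stimulus spans 0 to 0.10 s; firing here persists to 0.14 s.
spikes = np.array([0.01, 0.03, 0.05, 0.09, 0.14, 0.20, 0.55])
print(response_metrics(spikes, onset=0.0, offset=0.10))  # ~(0.13, 0.04)
```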


The Journal of Neuroscience | 2013

Coding the Meaning of Sounds: Contextual Modulation of Auditory Responses in the Basolateral Amygdala

Jasmine M. S. Grimsley; Emily Hazlett; Jeffrey J. Wenstrup

Female mice emit a low-frequency harmonic (LFH) call in association with distinct behavioral contexts: mating and physical threat or pain. Here we report the results of acoustic, behavioral, and neurophysiological studies of the contextual analysis of these calls in CBA/CaJ mice. We first show that the acoustical features of the LFH call do not differ between contexts. We then show that male mice avoid the LFH call in the presence of a predator cue (cat fur) but are more attracted to the same exemplar of the call in the presence of a mating cue (female urine). The males thus use nonauditory cues to determine the meaning of the LFH call, but these cues do not generalize to noncommunication sounds, such as noise bursts. We then characterized neural correlates of contextual meaning of the LFH call in responses of basolateral amygdala (BLA) neurons from awake, freely moving mice. There were two major findings. First, BLA neurons typically displayed early excitation to all tested behaviorally aversive stimuli. Second, the nonauditory context modulates the BLA population response to the LFH call but not to the noncommunication sound. These results suggest that the meaning of communication calls is reflected in the spike discharge patterns of BLA neurons.


Frontiers in Behavioral Neuroscience | 2016

Contextual Modulation of Vocal Behavior in Mouse: Newly Identified 12 kHz “Mid-Frequency” Vocalization Emitted during Restraint

Jasmine M. S. Grimsley; Saloni Sheth; Neil Vallabh; Calum Alex Grimsley; Jyoti Bhattal; Maeson S. Latsko; Aaron M. Jasnow; Jeffrey J. Wenstrup

While several studies have investigated mouse ultrasonic vocalizations (USVs) emitted by isolated pups or by males in mating contexts, studies of behavioral contexts other than mating, and of vocalization categories other than USVs, have been limited. By improving our understanding of the vocalizations emitted by mice across behavioral contexts, we will better understand the natural vocal behavior of mice and better interpret vocalizations from mouse models of disease. Hypothesizing that mouse vocal behavior would differ depending on behavioral context, we recorded vocalizations from male CBA/CaJ mice across three behavioral contexts: mating, isolation, and restraint. We found that brief restraint elevated blood corticosterone levels of mice, indicating increased stress relative to isolation. Further, after 3 days of brief restraint, mice displayed behavioral changes indicative of stress, which persisted for at least 2 days after restraint. Contextual differences in mouse vocal behavior were striking and robust across animals. While USVs were the most common vocalization type across contexts, the spectrotemporal features of USVs were context-dependent. Compared to the mating context, vocalizations during isolation and restraint displayed a broader frequency range, with a greater emphasis on frequencies below 50 kHz. These contexts also included more non-USV vocal categories and different vocal patterns. We identified a new Mid-Frequency Vocalization, a tonal vocalization with fundamental frequencies below 18 kHz, which was almost exclusively emitted by mice undergoing restraint stress. Together, these differences produce vocal behavior that is grossly different among behavioral contexts and may reflect the level of anxiety in these contexts.
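
Given the clean frequency separation reported (fundamentals below 18 kHz versus USVs well above that), a simple frame-wise peak-frequency rule can flag candidate mid-frequency vocalizations. The sketch below is a rough heuristic under assumed recording parameters, not the authors' classification procedure.

```python
import numpy as np
from scipy.signal import spectrogram

def classify_call(x, fs=250_000, mid_cutoff=18_000):
    """Crude call label from the median dominant frequency of a clip."""
    f, _, S = spectrogram(x, fs=fs, nperseg=1024)
    dominant = f[S.argmax(axis=0)]   # peak frequency in each time frame
    fundamental = float(np.median(dominant))
    return "mid-frequency vocalization" if fundamental < mid_cutoff else "USV"
```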


Frontiers in Behavioral Neuroscience | 2013

Automated classification of mouse pup isolation syllables: from cluster analysis to an Excel-based “mouse pup syllable classification calculator”

Jasmine M. S. Grimsley; Marie A. Gadziola; Jeffrey J. Wenstrup

Mouse pups vocalize at high rates when they are cold or isolated from the nest. The proportions of each syllable type produced carry information about disease state and are being used as behavioral markers for the internal state of animals. Manual classification of these vocalizations identified 10 syllable types based on their spectro-temporal features. However, manual classification of mouse syllables is time-consuming and vulnerable to experimenter bias. This study uses an automated cluster analysis to identify acoustically distinct syllable types produced by CBA/CaJ mouse pups, and then compares the results to prior manual classification methods. The cluster analysis identified two syllable types with continuous frequency-time structure, distinguished by their frequency bands, and two syllable types featuring abrupt frequency transitions. Although cluster analysis yielded fewer syllable types than manual classification, the clusters represented the probability distributions of the acoustic features within syllables well. These probability distributions indicate that some of the manually classified syllable types are not statistically distinct. The characteristics of the four clusters were used to generate a Microsoft Excel-based mouse syllable classifier that rapidly categorizes syllables into the syllable types determined by cluster analysis, with over 90% agreement.
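
As a loose stand-in for the clustering step (the exact algorithm and feature set are the paper's; those below are assumptions), this sketch standardizes per-syllable acoustic features and partitions them into four clusters, matching the number of clusters reported above.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Hypothetical feature matrix: one row per syllable, columns such as
# start frequency, end frequency, duration, and bandwidth.
rng = np.random.default_rng(0)
features = rng.random((500, 4))

X = StandardScaler().fit_transform(features)   # put features on one scale
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)
print(np.bincount(labels))                     # syllables per cluster
```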


Neuroreport | 2011

Different representations of tooth chatter and purr call in guinea pig auditory cortex.

Jasmine M. S. Grimsley; Alan R. Palmer; Mark N. Wallace

Multielectrode arrays were used to compare responses to tooth chatter and purr calls from all eight areas of the auditory cortex in anaesthetized guinea pigs. These calls have different behavioural contexts: males emit tooth chatters in aggressive encounters and the purr call during courtship behaviour. Of the two core areas, the primary auditory cortex responded better to both signals than the dorsocaudal core area. Of the six belt areas, the ventral transition area was found to be exceptionally sensitive to tooth chatter and less responsive to purr. The small rostral field responded faithfully to the purr, but not to tooth chatter, and ventrorostral belt often showed on/off responses; other belt areas were unresponsive.


PLOS ONE | 2018

Communication calls produced by electrical stimulation of four structures in the guinea pig brain

David B. Green; Trevor M. Shackleton; Jasmine M. S. Grimsley; Oliver Zobay; Alan R. Palmer; Mark N. Wallace

One of the main central processes affecting the cortical representation of conspecific vocalizations is the collateral output from the extended motor system for call generation. Before starting to study this interaction, we sought to compare the characteristics of calls produced by stimulating four different parts of the brain in guinea pigs (Cavia porcellus). Using anaesthetised animals allowed us to reposition electrodes without causing distress. Trains of 100 electrical pulses were used to stimulate the midbrain periaqueductal grey (PAG), hypothalamus, amygdala, and anterior cingulate cortex (ACC). Each structure produced a similar range of calls, but in significantly different proportions. Two of the spontaneous calls (chirrup and purr) were never produced by electrical stimulation, and although we identified versions of chutter, durr, and tooth chatter, they differed significantly from our natural call templates. However, we were routinely able to elicit seven other identifiable calls. All seven calls were produced both during the 1.6 s period of stimulation and subsequently, in a period that could last for more than a minute. A single stimulation site could produce four or five different calls, but the amygdala was much less likely to produce a scream, whistle, or rising whistle than any of the other structures. These three high-frequency calls were more likely to be produced by females than by males. There were also differences in the timing of call production, with the amygdala primarily producing calls during the electrical stimulation and the hypothalamus mainly producing calls afterwards. For all four structures, a significantly higher stimulation current was required in males than in females. We conclude that all four structures can be stimulated to produce fictive vocalizations, which should be useful in studying the relationship between the vocal motor system and cortical sensory representation.
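
The claim that the four structures elicit a similar range of calls in significantly different proportions is, statistically, a contingency-table question. A minimal sketch with invented counts (the call columns and all numbers are placeholders, not the study's data):

```python
import numpy as np
from scipy.stats import chi2_contingency

# Rows: PAG, hypothalamus, amygdala, ACC. Columns: hypothetical call
# types (e.g. whistle, scream, chutter, durr). Counts are invented.
counts = np.array([
    [30, 12, 25,  8],
    [22, 18, 15, 20],
    [28,  3, 30, 14],
    [25, 10, 20, 12],
])
chi2, p, dof, expected = chi2_contingency(counts)
print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p:.3g}")
```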


Journal of Neuroscience Methods | 2015

An improved approach to separating startle data from noise

Calum Alex Grimsley; Ryan J. Longenecker; Merri J. Rosen; Jesse W. Young; Jasmine M. S. Grimsley; Alexander V. Galazyuk

Collaboration


Dive into Jasmine M. S. Grimsley's collaborations.

Top Co-Authors

Jeffrey J. Wenstrup
Northeast Ohio Medical University

Sharad J. Shanbhag
Northeast Ohio Medical University

Alan R. Palmer
University of Nottingham

Calum Alex Grimsley
Northeast Ohio Medical University

Emily Hazlett
Northeast Ohio Medical University

Alexander V. Galazyuk
Northeast Ohio Medical University

Brett R. Schofield
Northeast Ohio Medical University