Sydney Trask
University of Vermont
Publications
Featured research published by Sydney Trask.
Learning & Behavior | 2016
Mark E. Bouton; Sydney Trask
In three experiments with rat subjects, we examined the discriminative effects of reinforcers that were presented during or after operant extinction. Experiments 1 and 2 examined resurgence, in which an extinguished operant response (R1) recovers when a second behavior (R2) that has been reinforced to replace it is also placed in extinction. The results of Experiment 1 suggest that the amount of R1’s resurgence is a decreasing linear function of the interreinforcement interval used during the reinforcement of R2. In Experiment 2, R1 was reinforced with one outcome (O1), and R2 was then reinforced with a second outcome (O2) while R1 was extinguished. In resurgence tests, response-independent (noncontingent) presentations of O2 prevented resurgence of R1, which otherwise occurred when testing was conducted with either no reinforcers or noncontingent presentations of O1. In Experiment 3, we then examined the effects of noncontingent O1 and O2 presentations after simple extinction in either the presence or the absence of noncontingent presentations of O2. Overall, the results are consistent with a role for the discriminative properties of the reinforcer in controlling operant behavior. In resurgence, the reinforcer used during response elimination provides a distinct context that controls the inhibition of R1. The results are less consistent with an alternative view emphasizing the disrupting effects of alternative reinforcement.
Learning & Behavior | 2014
Sydney Trask; Mark E. Bouton
Recent research has suggested that operant responses can be weakened when they are tested in new contexts. The present experiment was therefore designed to test whether animals can learn a context–(R–O) relation. Rats were given training sessions in context A, in which one response (R1; lever pressing or chain pulling) produced one outcome (O1) and another response (R2; chain pulling or lever pressing) produced another outcome (O2) on variable interval reinforcement schedules. These sessions were intermixed with training in context B, where R1 now produced O2 and R2 produced O1. Given the arrangement, it was possible for the animal to learn two distinct R–O associations in each specific context. To test for them, rats were then given aversion conditioning with O2 by pairing its presentation with lithium-chloride-induced illness. Following the aversion conditioning, the rats were given an extinction test with both R1 and R2 available in each context. During testing, rats showed a selective suppression in each context of the response that had been paired with the reinforcer subsequently associated with illness. Rats could not have performed this way without knowledge of the R–O associations in effect in each specific context, lending support to the hypothesis that rats learn context–(R–O) associations. However, despite a complete aversion to O2, responding was not completely suppressed, leaving the possibility open that rats form context–R associations in addition to context–(R–O) associations.
Behavioural Processes | 2017
Sydney Trask; Eric A. Thrailkill; Mark E. Bouton
An occasion setter is a stimulus that modulates the ability of another stimulus to control behavior. A rich history of experimental investigation has identified several important properties that define occasion setters and the conditions that give rise to occasion setting. In this paper, we first consider the basic hallmarks of occasion setting in Pavlovian conditioning. We then review research that has examined the mechanisms underlying the crucial role of context in Pavlovian and instrumental extinction. In Pavlovian extinction, evidence suggests that the extinction context can function as a negative occasion setter whose role is to disambiguate the current meaning of the conditioned stimulus; the conditioning context can also function as a positive occasion setter. In operant extinction, in contrast, the extinction context may directly inhibit the response, and the conditioning context can directly excite it. We outline and discuss the key results supporting these distinctions.
Learning & Behavior | 2016
Sydney Trask; Mark E. Bouton
Previous research on the resurgence effect has suggested that reinforcers that are presented during the extinction of an operant behavior can control inhibition of the response. To further test this hypothesis, in three experiments with rat subjects we examined the effectiveness of using reinforcers that were presented during extinction as a means of attenuating or inhibiting the operant renewal effect. In Experiment 1, lever pressing was reinforced in Context A, extinguished in Context B, and then tested in Context A. Renewal of responding that occurred during the final test was attenuated when a distinct reinforcer that had been presented independent of responding during extinction was also presented during the renewal test. Experiment 2 established that this effect depended on the reinforcer being featured as a part of extinction (and thus associated with response inhibition). Experiment 3 then showed that the reinforcers presented during extinction suppressed performance in both the extinction and renewal contexts; the effects of the physical and reinforcer contexts were additive. Together, the results further suggest that reinforcers associated with response inhibition can serve a discriminative role in suppressing behavior and may be an effective stimulus that can attenuate operant relapse.
Journal of Experimental Psychology: Animal Learning and Cognition | 2016
Mark E. Bouton; Sydney Trask; Rodrigo Carranza-Jasso
Five experiments tested implications of the idea that instrumental (operant) extinction involves learning to inhibit the learned response. All experiments used a discriminated operant procedure in which rats were reinforced for lever pressing or chain pulling in the presence of a discriminative stimulus (S), but not in its absence. In Experiment 1, extinction of the response (R) in the presence of S weakened responding in S, but equivalent nonreinforced exposure to S (without the opportunity to make R) did not. Experiment 2 replicated that result and found that extinction of R had no effect on a different R that had also been reinforced in the stimulus. In Experiments 3 and 4, rats first learned to perform several different stimulus and response combinations (S1R1, S2R1, S3R2, and S4R2). Extinction of a response in one stimulus (i.e., S1R1) transferred and weakened the same response, but not a different response, when it was tested in another stimulus (i.e., S2R1 but not S3R2). In Experiment 5, extinction still transferred between S1 and S2 when the stimuli set the occasion for R’s association with different types of food pellets. The results confirm the importance of response inhibition in instrumental extinction: Nonreinforcement of the response in S causes the most effective suppression of responding, and response suppression is specific to the response but transfers and influences performance of the same response when it is occasioned by other stimuli. Theoretical and practical implications are discussed.
The Journal of Neuroscience | 2017
Sydney Trask; Megan L. Shipman; John T. Green; Mark E. Bouton
Operant responding in rats provides an analog to voluntary behavior in humans and is used to study maladaptive behaviors, such as overeating, drug taking, or relapse. In renewal paradigms, extinguished behavior recovers when tested outside the context where extinction was learned. Inactivation of the prelimbic (PL) region of the medial prefrontal cortex by baclofen/muscimol (B/M) during testing attenuates renewal when tested in the original acquisition context after extinction in another context (ABA renewal). Two experiments tested the hypothesis that the PL is important in context-dependent responding learned during conditioning. In the first, rats learned to lever-press for a sucrose-pellet reward. Following acquisition, animals were infused with either B/M or vehicle in the PL and tested in the acquisition context (A) and in a different context (B). All rats showed a decrement in responding when switched from Context A to Context B, but PL inactivation decreased responding only in Context A. Experiment 2a examined the effects of PL inactivation on ABC-type renewal in the same rats. Here, following reacquisition of the response, responding was extinguished in a new context (C). Following infusions of B/M or vehicle in the PL, responding was tested in Context C and another new context (D). The rats exhibited ACD renewal regardless of PL inactivation. Experiment 2b demonstrated that PL inactivation attenuated the ABA renewal effect in the same animals, replicating earlier results and demonstrating that cannulae were still functional. The results suggest that, rather than attenuating renewal generally, PL inactivation specifically affects ABA renewal by reducing responding in the conditioning context.
SIGNIFICANCE STATEMENT Extinguished operant behavior can recover (“renew”) when tested outside the extinction context. This suggests that behaviors, such as overeating or drug taking, might be especially prone to relapse following treatment.
In rats, inactivation of the prelimbic cortex (PL) attenuates renewal. However, we report that PL inactivation after training attenuates responding in the context in which responding was acquired, but not in another one. A similar inactivation has no impact on renewal when testing occurs in a new, rather than the original, context following extinction. The PL thus has a more specific role in controlling contextually dependent operant behavior than has been previously reported.
Journal of the Experimental Analysis of Behavior | 2018
Sydney Trask; Christopher L. Keim; Mark E. Bouton
Two experiments investigated methods that reduce the resurgence of an extinguished behavior (R1) that occurs when reinforcement for an alternative behavior (R2) is discontinued. In Experiment 1, R1 was first trained and then extinguished while R2 was reinforced during a 5- or 25-session treatment phase. For half the rats, sessions in which R2 was reinforced alternated with sessions in which R2 was extinguished. Controls received the same number of treatment sessions, but R2 was never extinguished. When reinforcement for R2 was discontinued, R1 resurged in the controls. However, the alternating groups showed reduced resurgence, and the magnitude of the resurgences observed during their R2 extinction sessions decreased systematically over Phase 2. In Experiment 2, R1 was first reinforced with one outcome (O1). The rats then had two types of double-alternating treatment sessions. In one type, R1 was extinguished and R2 produced O2. In the other, R1 was unavailable and R2 produced O3. R1 resurgence was weakened when O2, but not O3, was delivered freely during testing. Together, the results suggest that methods that encourage generalization between R1 extinction and resurgence testing weaken the resurgence effect. They are not consistent with an account of resurgence proposed by Shahan and Craig (2017).
Learning & Behavior | 2018
Sydney Trask; Mark E. Bouton
Recent evidence from this laboratory suggests that a context switch after operant learning consistently results in a decrement in responding. One way to reduce this decrement is to train the response in multiple contexts. One interpretation of this result, rooted in stimulus sampling theory, is that conditioning of a greater number of common stimulus elements arising from more contexts causes better generalization to new contexts. An alternative explanation is that each change of context causes more effortful retrieval, and practice involving effortful retrieval results in learning that is better able to transfer to new situations. The current experiments were designed to differentiate between these two explanations for the first time in an animal learning and memory task. Experiment 1 demonstrated that the detrimental impact of a context change on an instrumental nose-poking response can be reduced by training the response in multiple contexts. Experiment 2 then found that a training procedure that inserted extended retention intervals between successive training sessions did not reduce the detrimental impact of a final context change. This occurred even though the inserted retention intervals had a detrimental impact on responding (and, thus, presumably retrieval) similar to the effect that context switches had in Experiment 1. Together, the results suggest that effortful retrieval practice may not be sufficient to reduce the negative impact of a context change on instrumental behavior. A common-elements explanation that supposes physical and temporal contextual cues do not overlap may account for the findings more readily.
Learning & Behavior | 2018
Sydney Trask
In resurgence, a target behavior (R1) is acquired in an initial phase and extinguished in a second phase while an R2 behavior is reinforced. When R2 is extinguished, R1 behavior can return or resurge. Two experiments tested the effectiveness of a potential retrieval cue associated with extinction in attenuating resurgence. Experiment 1 established that a 2-s cue paired with outcome delivery in Phase 2 can attenuate resurgence when presented during testing. This effect depended on the cue being associated with the outcome, and it occurred whether the cue was delivered contingently or noncontingently on responding during testing. Pairing the cue with reinforcement might be necessary to maintain attention to it during Phase 2. Experiment 2 demonstrated that the cue must be experienced in sessions that also include R1 extinction and that it does not reduce resurgence through a conditioned reinforcement mechanism. The results suggest that previously neutral stimuli can attenuate resurgence if they are first paired with alternative reinforcement and presented in sessions in which R1 is extinguished. They build on existing literature that suggests enhancing generalization between extinction and testing reduces resurgence. The results may have implications for reducing relapse following interventions in humans such as contingency management (CM), in which participants can earn vouchers contingent upon drug abstinence. A cue associated with CM might help reduce this relapse.
Neurobiology of Learning and Memory | 2018
Megan L. Shipman; Sydney Trask; Mark E. Bouton; John T. Green
Several studies have examined a role for the prelimbic cortex (PL) and infralimbic cortex (IL) in free operant behavior. The general conclusion has been that PL controls goal-directed actions (instrumental behaviors that are sensitive to reinforcer devaluation) whereas IL controls habits (instrumental behaviors that are not sensitive to reinforcer devaluation). To further examine the involvement of these regions in the expression of instrumental behavior, we first implanted male rats with bilateral guide cannulae into their PL, then trained two responses to produce a sucrose pellet reinforcer, R1 and R2, each in a distinct context. R1 received extensive training and R2 received minimal training. Rats then received lithium chloride injections either paired or unpaired with sucrose pellets in both contexts until paired rats rejected all pellets. Following acquisition, in Experiment 1, rats received either an infusion of saline or baclofen/muscimol into the PL and were tested (in extinction) on both R1 and R2. In vehicle controls, both responses were goal-directed actions, as indicated by their sensitivity to reinforcer devaluation. PL inactivation decreased expression of the minimally-trained action without affecting expression of the extensively-trained action. Experiment 2 utilized the same experimental design but with IL inactivation at test. The extensively-trained response was again a goal-directed action. However, now expression of the extensively-trained goal-directed action was suppressed by IL inactivation. The overall pattern of results suggests that the PL is involved in expression of minimally trained goal-directed behavior while the IL is involved in expression of extensively trained goal-directed behavior. This implies that the PL does not control all types of actions and the IL can control some types of actions. These results expand upon the traditional view that the PL controls action while the IL controls habit.