
Publication


Featured research published by Sarah Cowie.


Journal of the Experimental Analysis of Behavior | 2016

Control by reinforcers across time and space: A review of recent choice research.

Sarah Cowie; Michael Davison

Reinforcers affect behavior. A fundamental assumption has been that reinforcers strengthen the behavior they follow, and that this strengthening may be context-specific (stimulus control). Less frequently discussed, but just as evident, is the observation that reinforcers have discriminative properties that also guide behavior. We review findings from recent research that approaches choice using nontraditional procedures, with a particular focus on how choice is affected by reinforcers, by time since reinforcers, and by recent sequences of reinforcers. We also discuss how conclusions about these results are impacted by the choice of measurement level and display. Clearly, reinforcers as traditionally considered are conditionally phylogenetically important to animals. However, their effects on behavior may be solely discriminative, and contingent reinforcers may not strengthen behavior. Rather, phylogenetically important stimuli constitute a part of a correlated compound stimulus context consisting of stimuli arising from the organism, from behavior, and from physiologically detected environmental stimuli. Thus, the three-term contingency may be seen, along with organismic state, as a correlation of stimuli. We suggest that organisms may be seen as natural stimulus-correlation detectors so that behavioral change affects the overall correlation and directs the organism toward currently appetitive goals and away from potential aversive goals. As a general conclusion, both historical and recent choice research supports the idea that stimulus control, not reinforcer control, may be fundamental.


Behavioural Processes | 2013

On the joint control of preference by time and reinforcer-ratio variation

Michael Davison; Sarah Cowie; Douglas Elliffe

Five pigeons were trained in a procedure in which, with a specified probability, food was either available on a fixed-interval schedule on the left key, or on a variable-interval schedule on the right key. In Phase 1, we arranged, with a probability of 0.5, either a left-key fixed-interval schedule or a right-key variable-interval 30s, and varied the value of the fixed-interval schedule from 5s to 50s across 5 conditions. In Phase 2, we arranged either a left-key fixed-interval 20-s schedule or a right-key variable-interval 30-s schedule, and varied the probability of the fixed-interval schedule from 0.05 to 1.0 across 8 conditions. Phase 3 always arranged a fixed-interval schedule on the left key, and its value was varied over the same range as in Phase 1. In Phase 1, overall preference was generally toward the variable-interval schedule, preference following reinforcers was initially toward the variable-interval schedule, and maximum preference for the fixed-interval schedule generally occurred close to the arranged fixed-interval time, becoming relatively constant thereafter. In Phase 2, overall left-key preference followed the probability of the fixed-interval schedule, and maximum fixed-interval choice again occurred close to the fixed-interval time, except when the fixed-interval probability was 0.1 or less. The pattern of choice following reinforcers was similar to that in Phase 1, but the peak fixed-interval choice became more peaked with higher probabilities of the fixed interval. Phase 3 produced typical fixed-interval schedule responding. The results are discussed in terms of reinforcement effects, timing in the context of alternative reinforcers, and generalized matching. 
These results can be described by a quantitative model in which reinforcer rates obtained at times since the last reinforcer are distributed across time according to a Gaussian distribution with constant coefficient of variation before the fixed-interval schedule time, changing to extended choice controlled by extended reinforcer ratios beyond the fixed-interval time. The same model provides a good description of response rates on single fixed-interval schedules.
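The redistribution mechanism described above can be sketched numerically. The code below is a minimal illustration, not the authors' fitted model: reinforcer rates obtained at each time since the last reinforcer are smeared across neighboring times with a Gaussian whose standard deviation grows linearly with elapsed time (constant coefficient of variation), and local choice strictly matches the resulting effective reinforcer ratio. The schedule values and the coefficient of variation used here are hypothetical.

```python
import numpy as np

def effective_rates(obtained, times, cv):
    """Smear obtained reinforcer rates across time bins with a Gaussian
    whose SD grows linearly with elapsed time (constant CV)."""
    eff = np.zeros_like(obtained, dtype=float)
    for t, r in zip(times, obtained):
        sd = max(cv * t, 1e-9)
        k = np.exp(-0.5 * ((times - t) / sd) ** 2)
        eff += r * (k / k.sum())          # each bin's mass is conserved
    return eff

times = np.arange(1.0, 61.0)              # seconds since the last reinforcer
fi = np.where(times == 20.0, 10.0, 0.0)   # FI 20-s: reinforcers only at 20 s
vi = np.full_like(times, 10.0 / 60.0)     # VI: roughly uniform across time
cv = 0.25                                 # hypothetical Weber-like timing error

eff_fi = effective_rates(fi, times, cv)
eff_vi = effective_rates(vi, times, cv)
# Strict matching of local choice to the effective local reinforcer ratio:
p_fi = eff_fi / (eff_fi + eff_vi)
```

Under these assumptions, preference for the fixed-interval key peaks near the arranged fixed-interval time and falls away on either side, qualitatively matching the choice patterns reported above.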


Journal of the Experimental Analysis of Behavior | 2014

A model for food and stimulus changes that signal time-based contingency changes.

Sarah Cowie; Michael Davison; Douglas Elliffe

When the availability of reinforcers depends on time since an event, time functions as a discriminative stimulus. Behavioral control by elapsed time is generally weak, but may be enhanced by added stimuli that act as additional time markers. The present paper assessed the effect of brief and continuous added stimuli on control by time-based changes in the reinforcer differential, using a procedure in which the local reinforcer ratio reversed at a fixed time after the most recent reinforcer delivery. Local choice was enhanced by the presentation of the brief stimuli, even when the stimulus change signalled only elapsed time and not the local reinforcer ratio. The effect of the brief stimulus presentations on choice decreased as a function of time since the most recent stimulus change. We compared the ability of several versions of a model of local choice to describe these data. The data were best described by a model which assumed that error in discriminating the local reinforcer ratio arose from imprecise discrimination of reinforcers in both time and space, suggesting that timing behavior is controlled not only by discrimination of elapsed time, but also by discrimination of the reinforcer differential in time.


Learning & Behavior | 2017

Quantitative analysis of local-level resurgence

John Y. H. Bai; Sarah Cowie; Christopher A. Podlesnik

Resurgence is the recurrence of a previously reinforced and then extinguished behavior induced by the extinction of another more recently reinforced behavior. Resurgence provides insight into behavioral processes relevant to treatment relapse of a range of problem behaviors. Resurgence is typically studied across three phases: (1) reinforcement of a target response, (2) extinction of the target and concurrent reinforcement of an alternative response, and (3) extinction of the alternative response, resulting in the recurrence of target responding. Because each phase typically occurs successively and spans multiple sessions, extended time frames separate the training and resurgence of target responding. This study assessed resurgence more dynamically and throughout ongoing training in 6 pigeons. Baseline entailed 50-s trials of a free-operant psychophysical procedure, resembling Phases 1 and 2 of typical resurgence procedures. During the first 25 s, we reinforced target (left-key) responding but not alternative (right-key) responding; contingencies reversed during the second 25 s. Target and alternative responding followed the baseline reinforcement contingencies, with alternative responding replacing target responding across the 50 s. We observed resurgence of target responding during signaled and unsignaled probes that extended trial durations an additional 100 s in extinction. Furthermore, resurgence was greater and/or sooner when probes were signaled, suggesting an important role of discriminating transitions to extinction in resurgence. The data were well described by an extension of a stimulus-control model of discrimination that assumes resurgence is the result of generalization of obtained reinforcers across space and time. Therefore, the present findings introduce novel methods and quantitative analyses for assessing behavioral processes underlying resurgence.


Journal of the Experimental Analysis of Behavior | 2016

Does overall reinforcer rate affect discrimination of time‐based contingencies?

Sarah Cowie; Michael Davison; Luca Blumhardt; Douglas Elliffe

Overall reinforcer rate appears to affect choice. The mechanism for such an effect is uncertain, but may relate to reinforcer rate changing the discrimination of the relation between stimuli and reinforcers. We assessed whether a quantitative model based on a stimulus-control approach could be used to account for the effects of overall reinforcer rate on choice under changing time-based contingencies. On a two-key concurrent schedule, the likely availability of a reinforcer reversed when a fixed time had elapsed since the last reinforcer, and the overall reinforcer rate was varied across conditions. Changes in the overall reinforcer rate produced a change in response bias, and some indication of a change in discrimination. These changes in bias and discrimination always occurred quickly, usually within the first session of a condition. The stimulus-control approach provided an excellent account of the data, suggesting that changes in overall reinforcer rate affect choice because they alter the frequency of reinforcers obtained at different times, or in different stimulus contexts, and thus change the discriminated relation between stimuli and reinforcers. These findings support the notion that temporal and spatial discriminations can be understood in terms of discrimination of reinforcers across time and space.


Behavioural Processes | 2016

A model for discriminating reinforcers in time and space

Sarah Cowie; Michael Davison; Douglas Elliffe

Both the response-reinforcer and stimulus-reinforcer relation are important in discrimination learning; differential responding requires a minimum of two discriminably-different stimuli and two discriminably-different associated contingencies of reinforcement. When elapsed time is a discriminative stimulus for the likely availability of a reinforcer, choice over time may be modeled by an extension of the Davison and Nevin (1999) model that assumes that local choice strictly matches the effective local reinforcer ratio. The effective local reinforcer ratio may differ from the obtained local reinforcer ratio for two reasons: Because the animal inaccurately estimates times associated with obtained reinforcers, and thus incorrectly discriminates the stimulus-reinforcer relation across time; and because of error in discriminating the response-reinforcer relation. In choice-based timing tasks, the two responses are usually highly discriminable, and so the larger contributor to differences between the effective and obtained reinforcer ratio is error in discriminating the stimulus-reinforcer relation. Such error may be modeled either by redistributing the numbers of reinforcers obtained at each time across surrounding times, or by redistributing the ratio of reinforcers obtained at each time in the same way. We assessed the extent to which these two approaches to modeling discrimination of the stimulus-reinforcer relation could account for choice in a range of temporal-discrimination procedures. The version of the model that redistributed numbers of reinforcers accounted for more variance in the data. Further, this version provides an explanation for shifts in the point of subjective equality that occur as a result of changes in the local reinforcer rate. The inclusion of a parameter reflecting error in discriminating the response-reinforcer relation enhanced the ability of each version of the model to describe data. 
The ability of this class of model to account for a range of data suggests that timing, like other conditional discriminations, is choice under the joint discriminative control of elapsed time and differential reinforcement. Understanding the role of differential reinforcement is therefore critical to understanding control by elapsed time.
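The two model versions compared above can be sketched as follows. This is a rough illustration under assumed parameter values (the 15-s reversal time, the 4:1 local reinforcer ratios, and the coefficient of variation are all hypothetical, and treating "redistributing the ratio" as smearing the log reinforcer ratio is one plausible reading, not necessarily the authors' implementation):

```python
import numpy as np

def gaussian_redistribute(values, times, cv):
    """Spread each value across neighboring time bins with a Gaussian
    whose SD grows with elapsed time (constant coefficient of variation)."""
    out = np.zeros_like(values, dtype=float)
    for t, v in zip(times, values):
        sd = max(cv * t, 1e-9)
        k = np.exp(-0.5 * ((times - t) / sd) ** 2)
        out += v * (k / k.sum())
    return out

times = np.arange(1.0, 31.0)               # time bins since last reinforcer (s)
left = np.where(times <= 15, 4.0, 1.0)     # left-key reinforcers, richer early
right = np.where(times <= 15, 1.0, 4.0)    # right-key reinforcers, richer late
cv = 0.3                                   # hypothetical timing-error parameter

# Version 1: redistribute reinforcer *numbers*, then form the effective ratio.
ratio_v1 = (gaussian_redistribute(left, times, cv)
            / gaussian_redistribute(right, times, cv))

# Version 2: redistribute the obtained (log) reinforcer *ratio* directly.
ratio_v2 = np.exp(gaussian_redistribute(np.log(left / right), times, cv))
```

The design difference is that in the numbers version, bins containing more reinforcers contribute more smeared mass, so changes in local reinforcer rate can shift where the effective ratio crosses indifference; the ratio version carries no such weighting.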


Journal of the Experimental Analysis of Behavior | 2017

Control by past and present stimuli depends on the discriminated reinforcer differential

Sarah Cowie; Michael Davison; Douglas Elliffe

The extent to which a stimulus exerts control over behavior depends largely on its informativeness. However, when reinforcers have discriminative properties, they often exert less control over behavior than do other less reliable stimuli such as elapsed time. We investigated why less reliable cues in the present often overshadow stimulus control by more reliable cues presented in the recent past, by manipulating the reliability and duration of stimulus presentations. Five pigeons worked on a modified concurrent schedule in which the location of the response that produced the last reinforcer was a discriminative stimulus for the likely time and location of the next reinforcer. In some conditions, either the location of the previous reinforcer, or the location of the next reinforcer, was signaled by a red key light. This stimulus was either Brief, occurring for 10 s starting a fixed time after the most recent reinforcer, or Extended, being present at all times between food deliveries. Brief and Extended stimuli that signaled the same information had a similar effect on choice when they were present, but control by Brief stimuli weakened as time since stimulus offset elapsed. Control was divided among stimuli in the present and recent past according to the apparent reliability of the information signaled about the next reinforcer. More reliable stimuli in the present degraded, but did not erase, control by less reliable stimuli presented in the recent past. Thus, we conclude that less reliable stimuli in the present control behavior to a greater degree than do more reliable stimuli in the recent past because these more reliable stimuli are forgotten, and hence their relation to the likely availability of food cannot be discriminated.


Journal of the Experimental Analysis of Behavior | 2018

Melioration revisited: a systematic replication of Vaughan (1981)

Vikki J. Bland; Sarah Cowie; Douglas Elliffe; Christopher A. Podlesnik

Organisms that behave so as to forfeit a relatively higher overall rate of reinforcement in favor of a relatively lower rate are said to engage in suboptimal choice. Suboptimal choice has been linked with maladaptive behavior in humans. Melioration theory offers one explanatory framework for suboptimal choice: it suggests behavior is controlled by differences in local reinforcer rates between alternatives. Vaughan (1981) arranged two experimental conditions in which maximizing the overall rate of reinforcement required behavior that was compatible, or incompatible, with melioration. Vaughan found pigeons allocated more time to a locally richer alternative even when doing so resulted in suboptimal choice. However, Vaughan did not show whether these effects could systematically reverse, and did not provide within-session data to show that choice across short time spans remains under the control of differences in local reinforcer rates. The present study used pigeons to replicate and extend Vaughan's findings. We investigated shifts in overall- and within-session choice across repeated conditions, according to arranged local contingencies. Behavior systematically followed changes in local contingencies for most pigeons. Within-session data suggest that, provided differences in local reinforcer rates are discriminated, pigeons will allocate more time to a locally richer alternative, even if this leads to suboptimal choice. These findings facilitate the more confident use of similar procedures to investigate how melioration contributes to suboptimal choice.


Journal of the Experimental Analysis of Behavior | 2018

Does a negative discriminative stimulus function as a punishing consequence?

Vikki J. Bland; Sarah Cowie; Douglas Elliffe; Christopher A. Podlesnik

The study and use of punishment in behavioral treatments has been constrained by ethical concerns. However, there remains a need to reduce harmful behavior that cannot be reduced by differential-reinforcement procedures. We investigated whether response-contingent presentation of a negative discriminative stimulus previously correlated with an absence of reinforcers would punish behavior maintained by positive reinforcers. Across four conditions, pigeons were trained to discriminate between a positive discriminative stimulus (S+) signaling the presence of food, and a negative discriminative stimulus (S-) signaling the absence of food. Once the discrimination was learned, every five responses on average to the S+ produced the S- for a duration of 1.5 s. S+ response rate decreased for a majority of pigeons when responses produced the S-, compared to when they did not, or when a neutral control stimulus was presented. In Condition 5, choice between two concurrently presented S+ alternatives shifted away from the alternative producing the S-, despite a 1:1 reinforcer ratio. Therefore, presenting contingent S- stimuli punishes operant behavior maintained on simple schedules and in choice situations. Developing negative discriminative stimuli as punishers of operant behavior could provide an effective approach to behavioral treatments for problem behavior and to subverting suboptimal choices involved in addictions.


Behavioural Processes | 2018

Generalization of response patterns in a multiple peak procedure

Stephanie Gomes-Ng; Douglas Elliffe; Sarah Cowie

Stimulus generalization is typically assessed by analyzing overall response rates. Studies of generalization of response-rate patterns across time are less common, despite the ubiquitous nature of time and the strong temporal control over behavior in the natural world. Thus, we investigated generalization of response-rate patterns across time using a multiple peak procedure in pigeons. The frequency (fast or slow) at which the color of a keylight changed signaled a fixed-interval (FI) 5-s or 20-s schedule, counterbalanced across subjects. In peak trials, the frequency of keylight-color changes was varied. For the fast and slow training stimuli, response rates in peak trials were controlled by the arranged FI schedule value; they increased as the arranged reinforcer time approached, and decreased thereafter. Response-rate patterns to all test stimuli were similar to response-rate patterns to the slow training stimulus for all subjects. Thus, overall, strong generalization from the slow training stimulus to all test stimuli was evident, whereas there was little to no generalization from the fast training stimulus. These findings extend past research examining generalization of temporally controlled response-rate patterns, and provide a useful starting point for future investigations of generalization of fixed-interval responding. A thorough understanding of generalization processes requires analysis of dependent variables other than overall response rates, especially when responding is likely to be temporally controlled.

Collaboration


Dive into Sarah Cowie's collaborations.

Top Co-Authors

Jason Landon

Auckland University of Technology
