Publication


Featured research published by Allen Neuringer.


Science | 1969

Animals respond for food in the presence of free food.

Allen Neuringer

Pigeons pecked a response disk to gain access to grain rewards while identical grain was freely available from a cup within the experimental chamber. Similarly, rats pressed a lever for food pellets while free pellets were present. It is not necessary, therefore, to deprive an animal of food before it will engage in instrumental responding for food. Such responding can serve as its own motivation and reward.


Psychonomic Bulletin & Review | 2002

Operant variability: Evidence, functions, and theory

Allen Neuringer

Although responses are sometimes easy to predict, at other times responding seems highly variable, unpredictable, or even random. The inability to predict is generally attributed to ignorance of controlling variables, but this article reviews research showing that even the highest levels of behavioral variability may result from identifiable reinforcers contingent on that variability. That is, variability is an operant. Discriminative stimuli and reinforcers control it, resulting in low or high variability, depending on the contingencies. Schedule-of-reinforcement effects are orderly, and choosing to vary or repeat is lawfully governed by relative reinforcement frequencies. The operant nature of variability has important implications. For example, learning, exploring, creating, and problem solving may partly depend on it. Abnormal levels of variability, including those found in psychopathologies such as autism, depression, and attention deficit hyperactivity disorder, may be modified through reinforcement. Operant variability may also help to explain some of the unique attributes of voluntary action.


American Psychologist | 2004

Reinforced Variability in Animals and People: Implications for Adaptive Action.

Allen Neuringer

Although reinforcement often leads to repetitive, even stereotyped responding, that is not a necessary outcome. When reinforcement depends on variation, responding becomes diverse, novel, indeed unpredictable, with distributions sometimes approaching those of a random process. This article reviews evidence for the powerful and precise control by reinforcement over behavioral variability, evidence obtained from human and animal-model studies, and the implications of such control. For example, reinforcement of variability facilitates learning of complex new responses, aids problem solving, and may contribute to creativity. Depression and autism are characterized by abnormally repetitive behaviors, but individuals afflicted with such psychopathologies can learn to vary their behaviors when reinforced for doing so. And reinforced variability may help to solve a basic puzzle concerning the nature of voluntary action.


Learning & Behavior | 1998

Behavioral variability is controlled by discriminative stimuli

Justin Denney; Allen Neuringer

Previous research has demonstrated that behavioral variability can be modified by reinforcers contingent on it, but there has been no convincing evidence of discriminative stimulus control over such variability. We therefore rewarded 20 rats for variable response sequences in the presence of one stimulus and provided equal rewards independently of sequence variability in the presence of a second stimulus. We found that sequence variability was significantly higher during the first stimulus than during the second, with the greatest difference occurring immediately following onset of the stimuli. Removing the discriminative stimuli caused levels of variability to converge. These experiments provide strong evidence that behavioral variability can be controlled by discriminative stimuli, which may be important for general theories of operant behavior and their applications.


Learning & Behavior | 1993

Reinforced variation and selection

Allen Neuringer

Long-Evans rats were reinforced for generating variable sequences of four left (L) and right (R) leverpress responses. If the current sequence, for example, LLRR, differed from each of the preceding five sequences, then a food pellet was provided. Otherwise, there was a brief time-out. After the rats were generating a variety of sequences, one sequence was concurrently reinforced whenever it occurred, that is, an “always-reinforced” contingency was superimposed. The frequency of that sequence increased significantly from the variable baseline (Experiment 1). The “difficulty” of the always-reinforced sequence influenced the extent to which its frequency increased—“easy” sequences increased greatly, but “difficult” ones increased not at all (Experiment 2, and a replication with pigeons in Experiment 3). However, when reinforcement for baseline variability was systematically decreased while a difficult sequence was always reinforced, that sequence slowly emerged and attained relatively high levels (Experiments 4 and 5). Thus, a technique for training even highly improbable, or difficult-to-learn, behaviors is provided by concurrent reinforcement of variation and selection.
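The baseline contingency described above amounts to a simple lag rule: a sequence of four left/right responses earns food only if it differs from each of the five most recently emitted sequences. Below is a minimal Python sketch of that rule, offered as an illustration rather than the authors' procedure; the function and variable names are hypothetical.

```python
from collections import deque

def lag_contingency(sequence, recent):
    """Return True (reinforce) if `sequence` differs from every sequence
    currently in the comparison window; otherwise False (brief time-out)."""
    return sequence not in recent

# Illustrative trial run, not data from the study.
recent = deque(maxlen=5)          # the five most recently emitted sequences
trials = ["LLRR", "LRLR", "LLRR", "RRRR", "LRLR", "RLLL"]
for seq in trials:
    outcome = "food pellet" if lag_contingency(seq, recent) else "time-out"
    print(f"{seq} -> {outcome}")
    recent.append(seq)            # every sequence enters the comparison window
```

In this sketch the third trial repeats "LLRR" within the five-trial window and so produces a time-out, while novel sequences are reinforced.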


Psychonomic Bulletin & Review | 2002

Learning to vary and varying to learn.

Alicia Grunow; Allen Neuringer

We compared two sources of behavior variability: decreased levels of reinforcement and reinforcement contingent on variability itself. In Experiment 1, four groups of rats were reinforced for different levels of response-sequence variability: one group was reinforced for low variability, two groups were reinforced for intermediate levels, and one group was reinforced for very high variability. All of the groups experienced three different reinforcement frequencies for meeting their respective variability contingencies. Results showed that reinforcement contingencies controlled response variability more than did reinforcement frequencies. Experiment 2 showed that only those animals concurrently reinforced for high variability acquired a difficult-to-learn sequence; animals reinforced for low variability learned little or not at all. Variability was therefore controlled mainly by reinforcement contingencies, and learning increased as a function of levels of baseline variability. Knowledge of these relationships may be helpful to those who attempt to condition operant responses.


Psychological Science | 1993

Approximating Chaotic Behavior

Allen Neuringer; Cheryl Voss

Human subjects received feedback showing how closely their responses approximated the chaotic output of the logistic difference function. In Experiment 1, subjects generated analog responses by placing a pointer along a line. In Experiment 2, they generated digital responses in the form of three-digit numbers. In Experiment 3, feedback was sometimes provided and other times withheld. Responses came to approximate three defining characteristics of logistic chaos: Sequences were “noisy,” they were extremely sensitive to initial conditions, and lag 1 autocorrelation functions were parabolic in form. Chaos theory may describe some highly variable although precisely determined human behaviors.
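For context, the logistic difference function referred to above is the iteration x(n+1) = r·x(n)·(1 − x(n)), which behaves chaotically for r near 4. The following brief Python sketch shows the sensitivity to initial conditions that subjects' responses came to approximate; the parameter value r = 4.0 and the starting points are assumptions chosen for illustration, not values reported in the paper.

```python
def logistic_map(x0, r=4.0, steps=20):
    """Iterate the logistic difference equation x(n+1) = r * x(n) * (1 - x(n))."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

# Two trajectories starting a tiny distance apart diverge within a few iterations.
a = logistic_map(0.400000)
b = logistic_map(0.400001)
for n, (x, y) in enumerate(zip(a, b)):
    print(f"n={n:2d}  x={x:.6f}  x'={y:.6f}  |diff|={abs(x - y):.6f}")
```

With r = 4, plotting each value against the next (the lag 1 return map) traces a parabola, which is the parabolic lag 1 structure the abstract describes.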


Journal of Experimental Psychology: Animal Behavior Processes | 1996

Reinforced variability decreases with approach to reinforcers.

Colin Cherot; Aaron Jones; Allen Neuringer

Anticipation of rewards affected operant variability and operant repetition differently. We reinforced variable (VAR) response sequences in groups of rats and pigeons and repetitive (REP) response sequences in separate groups. A fixed number of variations or repetitions was required per food reinforcer (e.g., fixed-ratio 4). Although VAR contingencies resulted in high levels of variability and REP contingencies in high repetition, opposite patterns of performance accuracy were observed as rewards were approached. The likelihood of satisfying REP contingencies increased within the fixed ratio, whereas the likelihood of satisfying VAR contingencies decreased. These opposite patterns of accuracy were also generated by conditioned reinforcing stimuli correlated with food. Constraints on variability by proximity to reinforcers may explain some detrimental effects of reward.


Learning & Behavior | 1990

Behavioral variability as a function of response topography and reinforcement contingency

Laura Morgan; Allen Neuringer

Long-Evans rats were reinforced for generating variable sequences of responses on two operanda. The current sequence of four left and right responses was required to differ from each of the previous five sequences. Variability under this vary schedule was compared with that under a yoke control schedule where reinforcement was independent of the sequences. Three different response topographies were compared: two levers were pressed in one case, two keys pushed in another, and two wires pulled in a third. Both reinforcement contingency (vary vs. yoke) and response topography (leverpress, key push, and wire pull) significantly influenced sequence variability. As is the case for operant dimensions, behavioral variability is jointly controlled by reinforcement contingency and response topography.


Physiology & Behavior | 1994

Different effects of amphetamine on reinforced variations versus repetitions in spontaneously hypertensive rats (SHR)

Deborah M. Mook; Allen Neuringer

The spontaneously hypertensive rat (SHR) may serve as an animal model of human attention deficit hyperactivity disorder (ADHD). We compared performances of SHRs and Wistar-Kyoto normotensive control rats (WKY) in two experiments. When rewarded for varying sequences of responses across two manipulanda, the SHRs were more likely to vary than the WKYs. On the other hand, when rewarded for repetitions of a small number of sequences, the WKYs were more likely to learn to repeat. Both of these results confirm previous findings. Injecting 0.75 mg/kg d-amphetamine facilitated the SHRs' learning to repeat the required sequences, with amphetamine-injected SHRs learning as rapidly as saline-injected control WKYs. By contrast, amphetamine tended to increase variability in both strains when high levels of variation were required for reward, and to decrease it in both strains when low levels of variability were required. Thus, amphetamine may have different effects on reinforced repetitions versus reinforced variations.
