Network


Latest external collaboration at the country level.

Hotspot


Dive into the research topics where Takayuki Sakagami is active.

Publication


Featured research published by Takayuki Sakagami.


Behavioural Processes | 2004

Resistance to change in goldfish

Takeharu Igaki; Takayuki Sakagami

Resistance to change has been studied in several species such as humans, rats, and pigeons. We conducted two experiments using goldfish as subjects to examine the generality of the findings on resistance to change in a phylogenetically more primitive species. In Experiment 1, five goldfish (Carassius auratus) were trained on two-component multiple schedules with different variable-interval schedules in effect. When responding was disrupted by presenting free food during intercomponent intervals or by extinction, resistance to change was greater in the component with the higher reinforcement rates. In Experiment 2, identical variable-interval schedules were presented in two multiple-schedule components, but in one of the components response-independent food was delivered concurrently according to variable-time schedules. Baseline response rates were the same for both components, which is inconsistent with previous findings with other species that the addition of response-independent food decreases response rates. However, response rates in the component with added response-independent food showed the greater resistance to change, which is similar to findings in other species. The convergence of these results across various species confirms the generality of the findings on resistance to change.


Journal of the Experimental Analysis of Behavior | 2008

On loss aversion in capuchin monkeys

Alan Silberberg; Peter G. Roma; Mary E. Huntsberry; Frederick R. Warren-Boulton; Takayuki Sakagami; Angela M. Ruggiero; Stephen J. Suomi

Chen, Lakshminarayanan, and Santos (2006) claim to show in three choice experiments that monkeys react rationally to price and wealth shocks, but, when faced with gambles, display hallmark, human-like biases that include loss aversion. We present three experiments with monkeys and humans consistent with a reinterpretation of their data that attributes their results not to loss aversion, but to differences between choice alternatives in delay of reinforcement.


Learning & Behavior | 2010

Concurrent VR VI schedules: Primacy of molar control of preference and molecular control of response rates

Takayuki Tanno; Alan Silberberg; Takayuki Sakagami

In the first condition in Experiment 1, 6 rats were exposed to concurrent variable ratio (VR) 30, variable interval (VI) 30-sec schedules. In the next two conditions, the subjects were exposed to concurrent VI VI schedules and concurrent tandem VI-differential-reinforcement-of-high-rate VI schedules. For the latter conditions, the overall and relative reinforcer rates equaled those in the first condition. Only minor differences appeared in time allocation (a molar measure) across conditions. However, local response rate differences (a molecular measure) appeared between schedule types, consistent with the interresponse times these schedules reinforced. In Experiment 2, these findings reappeared when the prior experiment was replicated with 5 subjects, except that the VR schedule was replaced by a VI plus linear feedback schedule. These results suggest that within the context tested, the molar factor of relative reinforcement rate controls preference, whereas the molecular factor of the relation between interresponse times and reinforcer probability controls the local response rate.
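For readers unfamiliar with these schedules, the sketch below contrasts how variable-ratio (VR) and variable-interval (VI) schedules pay off the same stream of responding: a VR schedule delivers food after a variable number of responses, so faster responding earns proportionally more reinforcers per minute, whereas a VI schedule arms food after a variable elapsed time and the next response collects it. The helper name, parameter values, and exponential interresponse-time distribution are illustrative assumptions, not details taken from the experiment.

import random

# Minimal sketch (not from the paper): the same stream of responding is fed
# to a variable-ratio (VR) or a variable-interval (VI) schedule, and the
# obtained reinforcement rate is compared. Parameter values are assumptions.

def simulate(schedule, mean_irt, n_responses=10_000, vr_mean=30, vi_mean=30.0):
    """Return reinforcers per minute for a given mean interresponse time (s)."""
    t = 0.0
    reinforcers = 0
    ratio_left = random.expovariate(1 / vr_mean)      # responses left until food (VR)
    interval_end = random.expovariate(1 / vi_mean)    # time at which food is armed (VI)
    for _ in range(n_responses):
        t += random.expovariate(1 / mean_irt)         # exponential IRTs (assumption)
        if schedule == "VR":
            ratio_left -= 1
            if ratio_left <= 0:
                reinforcers += 1
                ratio_left = random.expovariate(1 / vr_mean)
        else:  # "VI": the first response after the interval elapses is reinforced
            if t >= interval_end:
                reinforcers += 1
                interval_end = t + random.expovariate(1 / vi_mean)
    return 60 * reinforcers / t

for mean_irt in (0.5, 1.0, 2.0):
    print(mean_irt, simulate("VR", mean_irt), simulate("VI", mean_irt))

Run this and the VR column scales with response rate while the VI column stays near its programmed maximum of about two reinforcers per minute; that molar feedback difference is what the tandem and linear-feedback comparison conditions were designed to control for.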


Behavioural Processes | 2002

Self-control and impulsiveness with asynchronous presentation of reinforcement schedules

Taku Ishii; Takayuki Sakagami

In discrete trials, pigeons were presented with two alternatives: to wait for a larger reinforcer, or to respond and obtain a smaller reinforcer immediately. The choice of the former was defined as self-control, and the choice of the latter as impulsiveness. The stimulus that set the opportunity for an impulsive choice was presented after a set interval from the onset of the stimulus that signaled the waiting period. That interval increased or decreased from session to session so that the opportunity for an impulsive choice became available either more removed from or closer in time to the presentation of the larger reinforcer. In three separate conditions, the larger reinforcer was delivered according to either a fixed interval (FI) schedule, a fixed time (FT) schedule, or a differential reinforcement of other behavior (DRO) schedule. The results showed that impulsive choices increased as the opportunity for such a choice was more distant in time from presentation of the larger reinforcer. Although the schedule of the larger reinforcer affected the rate of response in the waiting period, the responses themselves had no effect on choice unless the responses postponed presentation of the larger reinforcer.


Journal of the Experimental Analysis of Behavior | 2014

Preference pulses induced by reinforcement

Yosuke Hachiga; Takayuki Sakagami; Alan Silberberg

Eight rats responded on concurrent Variable-Ratio 20 Extinction schedules for food reinforcement. The assignment of variable-ratio reinforcement to a left or right lever varied randomly following each reinforcer, and was cued by illumination of a stimulus light above that lever. Postreinforcement preference levels decreased substantially and reliably over time when the lever that just delivered reinforcement was now in extinction; however, if that lever was once again associated with variable ratio, this decrease in same-lever preference tended to be small, and for some subjects, not in evidence. The changes in preference level to the extinction lever were well described by a modified version of Killeen, Hanson, and Osborne's (1978) induction model. Consistent with this model's attribution of preference change to induction, we attribute preference change in this report to a brief period of reinforcer-induced arousal that energizes responding to the lever that delivered the last reinforcer. After a few seconds, this induced responding diminishes, and the operant responding that remains comes under the control of the stimulus light cuing the lever providing variable-ratio reinforcement.


Journal of the Experimental Analysis of Behavior | 2012

Discrimination of variable schedules is controlled by interresponse times proximal to reinforcement

Takayuki Tanno; Alan Silberberg; Takayuki Sakagami

In Experiment 1, food-deprived rats responded to one of two schedules that were, with equal probability, associated with a sample lever. One schedule was always variable ratio, while the other schedule, depending on the trial within a session, was: (a) a variable-interval schedule; (b) a tandem variable-interval, differential-reinforcement-of-low-rate schedule; or (c) a tandem variable-interval, differential-reinforcement-of-high-rate schedule. Completion of a sample-lever schedule, which took approximately the same time regardless of schedule, presented two comparison levers, one associated with each sample-lever schedule. Pressing the comparison lever associated with the schedule just presented produced food, while pressing the other produced a blackout. Conditional-discrimination accuracy was related to the size of the difference in reinforced interresponse times and those that preceded them (predecessor interresponse times) between the variable-ratio and other comparison schedules. In Experiment 2, control by predecessor interresponse times was accentuated by requiring rats to discriminate between a variable-ratio schedule and a tandem schedule that required emission of a sequence of a long, then a short interresponse time in the tandem's terminal schedule. These discrimination data are compatible with the copyist model from Tanno and Silberberg (2012) in which response rates are determined by the succession of interresponse times between reinforcers weighted so that each interresponse time's role in rate determination diminishes exponentially as a function of its distance from reinforcement.
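The exponential weighting described in the final sentence above can be illustrated with a short sketch: interresponse times (IRTs) emitted closer to the reinforcer are weighted more heavily than earlier ones. The weighted-mean form, the decay constant, and the helper name are assumptions made here for illustration, not the published model's exact equations.

import math

# Illustrative sketch of exponentially weighted interresponse times (IRTs):
# the reinforced IRT gets the largest weight, and weights decay with distance
# from reinforcement. Decay rate and the weighted-mean form are assumptions.

def weighted_irt(irts, decay=0.5):
    """Exponentially weighted mean of IRTs; the last entry is the reinforced IRT."""
    weights = [math.exp(-decay * d) for d in range(len(irts))]
    # distance 0 corresponds to the reinforced (most recent) IRT
    pairs = zip(weights, reversed(irts))
    return sum(w * irt for w, irt in pairs) / sum(weights)

# Example: a run of long IRTs ending in a short, reinforced IRT
print(weighted_irt([2.0, 2.0, 1.5, 0.4]))   # pulled strongly toward the 0.4-s IRT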


Frontiers in Psychology | 2015

The effect of gaze-contingent stimulus elimination on preference judgments

Masahiro Morii; Takayuki Sakagami

This study examined how stimulus elimination (SE) in a preference judgment task affects observers’ choices. Previous research suggests that biasing gaze toward one alternative can increase preference for it; this preference reciprocally promotes gaze bias. Shimojo et al. (2003) called this phenomenon the Gaze Cascade Effect. They showed that the likelihood that an observer’s gaze was directed toward their chosen alternative increased steadily until the moment of choosing. Therefore, we tested whether observers would prefer an alternative at which they had been gazing last if both alternatives were removed prior to the start of this rising gaze likelihood. To test this, we used a preference judgment task and controlled stimulus presentation based on gaze using an eye-tracking system. A pair of nonsensical figures was presented on the computer screen and both stimuli were eliminated while participants were still making their preference decision. The timing of the elimination differed between two experiments. In Experiment 1, after gazing at both stimuli one or more times, stimuli were removed when the participant’s gaze fell on one alternative, pre-selected as the target stimulus. There was no significant difference in preference between the two alternatives. In Experiment 2, we did not predefine any target stimulus. After the participant gazed at both stimuli one or more times, both stimuli were eliminated when the participant next fixated on either. The likelihood of choosing the stimulus that was gazed at last (at the moment of elimination) was greater than chance. Results showed that controlling participants’ choices using gaze-contingent SE was impossible, but the different results between these two experiments suggest that participants decided which stimulus to choose during their first period of gazing at each alternative. Thus, we could predict participants’ choices by analyzing eye movement patterns at the moment of SE.


Behavioural Processes | 2015

The copyist model and the shaping view of reinforcement

Takayuki Tanno; Alan Silberberg; Takayuki Sakagami

The strengthening view of reinforcement attributes behavior change to changes in the response strength or the value of the reinforcer. In contrast, the shaping view explains behavior change as shaping different response units through differential reinforcement. In this paper, we evaluate how well these two views explain: (1) the response-rate difference between variable-ratio and variable-interval schedules that provide the same reinforcement rate; and (2) the phenomenon of matching in choice. The copyist model (Tanno and Silberberg, 2012), a shaping-view account, can provide accurate predictions of these phenomena without a strengthening mechanism; however, the model has limitations. It cannot explain the relation between behavior change and stimulus control, reinforcer amount, and reinforcer quality. These relations seem easily explained by a strengthening view. Future work should be directed at a model that combines the strengths of these two types of accounts.
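The "matching in choice" referred to above is conventionally written, in its generalized form, as

    \frac{B_1}{B_2} = b \left( \frac{R_1}{R_2} \right)^{a}

where B_1 and B_2 are the responses (or time) allocated to the two alternatives, R_1 and R_2 are the reinforcement rates obtained from them, a is a sensitivity parameter, and b is a bias parameter; strict matching is the special case a = b = 1. This is the standard formulation from the choice literature, stated here for context rather than drawn from the paper.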


Behavior Analyst | 2016

The Other Shoe: An Early Operant Conditioning Chamber for Pigeons

Takayuki Sakagami; Kennon A. Lattal

We describe an early operant conditioning chamber fabricated by Harvard University instrument maker Ralph Gerbrands and shipped to Japan in 1952 in response to a request made to Professor B. F. Skinner by Japanese psychologists. It is a rare example, perhaps the earliest still physically existing, of such a chamber for use with pigeons. Although the overall structure and many of the components are similar to contemporary pigeon chambers, several differences are noted and contrasted to evolutionary changes in this most important laboratory tool in the experimental analysis of behavior. The chamber also is testimony to the early internationalization of behavior analysis.


Journal of the Experimental Analysis of Behavior | 2015

Preference pulses and the win–stay, fix-and-sample model of choice

Yosuke Hachiga; Takayuki Sakagami; Alan Silberberg

Two groups of six rats each were trained to respond to two levers for a food reinforcer. One group was trained on concurrent variable-ratio 20 extinction schedules of reinforcement. The second group was trained on a concurrent variable-interval 27-s extinction schedule. In both groups, lever-schedule assignments changed randomly following reinforcement; a light cued the lever providing the next reinforcer. In the next condition, the light cue was removed and reinforcer assignment strictly alternated between levers. The next two conditions redetermined, in order, the first two conditions. Preference pulses, defined as a tendency for the relative response rate to the just-reinforced alternative to decline with time since reinforcement, only appeared during the extinction schedule. Although the pulse's functional form was well described by a reinforcer-induction equation, there was a large residual between actual data and a pulse-as-artifact simulation (McLean, Grace, Pitts, & Hughes, 2014) used to discern reinforcer-dependent contributions to pulsing. However, if that simulation was modified to include a win-stay tendency (a propensity to stay on the just-reinforced alternative), the residual was greatly reduced. Additional modifications of the parameter values of the pulse-as-artifact simulation enabled it to accommodate the present results as well as those it originally accommodated. In its revised form, this simulation was used to create a model that describes response runs to the preferred alternative as terminating probabilistically, and runs to the unpreferred alternative as punctate with occasional perseverative response runs. After reinforcement, choices are modeled as returning briefly to the lever location that had just been reinforced. This win-stay propensity is hypothesized as due to reinforcer induction.
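The verbal model in the last three sentences above can be made concrete with a toy sketch: after a reinforcer, responding stays briefly on the just-reinforced lever (win-stay), then fixates on the preferred alternative, with runs ending probabilistically and with occasional single-response samples or perseverative runs on the unpreferred lever. All names, probabilities, and run lengths below are illustrative assumptions, not the fitted parameters reported in the paper.

import random

# Toy sketch of the win-stay, fix-and-sample idea summarized above.
# All probabilities and run lengths are made-up illustrative values.

def post_reinforcement_choices(preferred="left", just_reinforced="right",
                               n_choices=20, p_leave_preferred=0.10,
                               p_perseverate_unpreferred=0.05, win_stay_len=2):
    """Generate a toy sequence of lever choices following one reinforcer."""
    other = "right" if preferred == "left" else "left"
    # Win-stay: a brief run on the lever that just delivered reinforcement.
    choices = [just_reinforced] * win_stay_len
    current = preferred                  # then fixate on the preferred alternative
    while len(choices) < n_choices:
        choices.append(current)
        if current == preferred:
            # runs on the preferred lever terminate probabilistically
            if random.random() < p_leave_preferred:
                current = other
        else:
            # visits to the unpreferred lever are usually single responses,
            # with occasional perseverative runs
            if random.random() >= p_perseverate_unpreferred:
                current = preferred
    return choices

print(post_reinforcement_choices())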

Collaboration


Dive into Takayuki Sakagami's collaboration.

Top Co-Authors

Shun Fujimaki

Japan Society for the Promotion of Science
