
Publication


Featured research published by Ikuma Adachi.


Animal Cognition | 2006

Dogs recall their owner's face upon hearing the owner's voice

Ikuma Adachi; Hiroko Kuwahata; Kazuo Fujita

We tested whether dogs have a cross-modal representation of human individuals. We presented domestic dogs with a photo of either the owner's or a stranger's face on an LCD monitor after playing back the voice of one of those persons. The voice and face matched in half of the trials (Congruent condition) and mismatched in the other half (Incongruent condition). If the subjects activate a visual image from the voice, their expectation would be contradicted in the Incongruent condition, resulting in longer looking times in the Incongruent condition than in the Congruent condition. Our subject dogs indeed looked longer at the visual stimulus in the Incongruent condition than in the Congruent condition. This suggests that dogs actively generate an internal representation of the owner's face when they hear the owner calling them. This is the first demonstration that nonhuman animals do not merely associate auditory and visual stimuli but also actively generate a visual image from auditory information. Furthermore, our subjects also looked longer at the visual stimulus in the Incongruent condition, in which the owner's face followed an unfamiliar person's voice, than in the Congruent condition, in which the owner's face followed the owner's voice. Generating a particular visual image in response to an unfamiliar voice should be difficult, and any image expected from such a voice ought to be more obscure or less well defined than that of the owner. Nevertheless, our subjects looked longer at the owner's face in the Incongruent condition than in the Congruent condition. This may indicate that dogs predicted that the caller should not be the owner when they heard the unfamiliar person's voice.


Proceedings of the National Academy of Sciences of the United States of America | 2011

Visuoauditory mappings between high luminance and high pitch are shared by chimpanzees (Pan troglodytes) and humans

Vera U. Ludwig; Ikuma Adachi; Tetsuro Matsuzawa

Humans share implicit preferences for certain cross-sensory combinations; for example, they consistently associate higher-pitched sounds with lighter colors, smaller size, and spikier shapes. In the condition of synesthesia, people may experience such cross-modal correspondences to a perceptual degree (e.g., literally seeing sounds). So far, no study has addressed the question of whether nonhuman animals share cross-modal correspondences as well. To establish the evolutionary origins of cross-modal mappings, we tested whether chimpanzees (Pan troglodytes) also associate higher pitch with higher luminance. Thirty-three humans and six chimpanzees were required to classify black and white squares according to their color while hearing irrelevant background sounds that were either high-pitched or low-pitched. Both species performed better when the background sound was congruent (high-pitched for white, low-pitched for black) than when it was incongruent (low-pitched for white, high-pitched for black). An inherent tendency to pair high pitch with high luminance hence evolved before the human lineage split from that of chimpanzees. Rather than being a culturally learned or a linguistic phenomenon, this mapping constitutes a basic feature of the primate sensory system.


Journal of Comparative Psychology | 2010

Rhesus monkeys (Macaca mulatta) rapidly learn to select dominant individuals in videos of artificial social interactions between unfamiliar conspecifics.

Regina Paxton; Benjamin M. Basile; Ikuma Adachi; Wendy A. Suzuki; Mark E. Wilson; Robert R. Hampton

Social animals, such as primates, must behave appropriately in complex social situations such as dominance interactions. Learning dominance information through trial and error would be dangerous; therefore, cognitive mechanisms for rapid learning of dominance information by observation would be adaptive. We used a set of digitally edited artificial social interactions to examine whether rhesus monkeys (Macaca mulatta) can learn dominance relationships between unfamiliar conspecifics through observation. Our method allowed random assignment of stimulus monkeys to ranks in an artificial hierarchy, controlling for nonbehavioral cues that could indicate dominance. Subject monkeys watched videos depicting 1 stimulus monkey behaving dominantly toward another and were rewarded for selecting the dominant individual. Monkeys rapidly learned this discrimination across 5 behavior types in Experiment 1 and transferred performance to novel videos of new individuals in Experiment 2. In addition, subjects selected the dominant individual more often than expected by chance in probe videos containing no behavioral dominance information, indicating some retention of the relative dominance status of stimulus monkeys from training. Together, our results suggest that monkeys can learn dominance hierarchies through observation of third-party social interactions.


Animal Cognition | 2003

A Capuchin monkey (Cebus apella) recognizes when people do and do not know the location of food.

Hika Kuroshima; Kazuo Fujita; Ikuma Adachi; Kana Iwata; Akira Fuyuki

In a previous study, Kuroshima and colleagues demonstrated that capuchin monkeys (Cebus apella) learned to discriminate between a "knower" who inspected a box for food and a "guesser" who did not. The aim of the present study was to determine whether the subjects had learned a simple conditional discrimination or a causal relationship, namely that seeing leads to knowing. In Experiment 1, we introduced five types of novel containers to two subjects, each container of a different shape and color. The subjects gradually learned to reach toward the container the knower indicated. In Experiment 2, we diversified the behavior of the knower and the guesser. In Experiment 3, in order to eliminate the possibility of discrimination based on differences in the magnitude and complexity of the two trainers' behaviors, we equated their behaviors. One subject adapted to the novel behaviors of the knower and the guesser, successfully discriminating between the two trainers. Thus this monkey clearly learned to use the inspecting action of the knower and the non-inspecting action of the guesser as a discriminative cue to identify the baited container. This result suggests that one capuchin monkey learned to recognize the relationship between seeing and knowing.


PLOS ONE | 2011

Rhesus monkeys see who they hear: spontaneous cross-modal memory for familiar conspecifics.

Ikuma Adachi; Robert R. Hampton

Rhesus monkeys gather much of their knowledge of the social world through visual input and may preferentially represent this knowledge in the visual modality. Recognition of familiar faces is clearly advantageous, and the flexibility and utility of primate social memory would be greatly enhanced if visual memories could be accessed cross-modally either by visual or auditory stimulation. Such cross-modal access to visual memory would facilitate flexible retrieval of the knowledge necessary for adaptive social behavior. We tested whether rhesus monkeys have cross-modal access to visual memory for familiar conspecifics using a delayed matching-to-sample procedure. Monkeys learned visual matching of video clips of familiar individuals to photographs of those individuals, and generalized performance to novel videos. In cross-modal probe trials, coo-calls were played during the memory interval. The calls were either from the monkey just seen in the sample video clip or from a different familiar monkey. Even though the monkeys were trained exclusively in visual matching, the calls influenced choice by causing an increase in the proportion of errors to the picture of the monkey whose voice was heard on incongruent trials. This result demonstrates spontaneous cross-modal recognition. It also shows that viewing videos of familiar monkeys activates naturally formed memories of real monkeys, validating the use of video stimuli in studies of social cognition in monkeys.


Scientific Reports | 2013

Developmental processes in face perception

Christoph D. Dahl; Malte J. Rasch; Masaki Tomonaga; Ikuma Adachi

Understanding the developmental origins of face recognition has been the goal of many studies using various approaches. The contributions of experience-expectant mechanisms (early component), like perceptual narrowing, and of lifetime experience (late component) to face processing remain elusive. By investigating captive chimpanzees of varying age, a rare case of a species with lifelong exposure to non-conspecific faces at distinctive levels of experience, we can disentangle developmental components in face recognition. We found an advantage in discriminating chimpanzee over human faces in young chimpanzees, reflecting a predominant contribution of an early component that drives the perceptual system towards the conspecific morphology, and an advantage for human over chimpanzee faces in old chimpanzees, reflecting a predominant late component that shapes the perceptual system along the critical dimensions of the faces it is exposed to. We simulate the contribution of early and late components using computational modeling and mathematically describe the underlying functions.


Journal of Comparative Psychology | 2006

Performance of pigeons (Columba livia) on maze problems presented on the LCD screen: In search of preplanning ability in an avian species

Hiromitsu Miyata; Tomokazu Ushitani; Ikuma Adachi; Kazuo Fujita

The authors examined how pigeons (Columba livia) perform on 2-dimensional maze tasks presented on an LCD monitor and whether the pigeons preplan the solution before starting to solve the maze. After training 4 pigeons to move a red square (the target) to a blue square (the goal) by pecking, the authors exposed them to a variety of detour tasks with lines as barriers. A preview phase was introduced, during which the pigeons were not allowed to peck at the monitor. Results of a set of experiments suggest that the pigeons successfully learned to solve these tasks, that they came to adopt an efficient strategy as the barriers became complex, and that they may preplan the solution, at least on familiar, well-practiced tasks.


Scientific Reports | 2013

The face inversion effect in non-human primates revisited - an investigation in chimpanzees (Pan troglodytes)

Christoph D. Dahl; Malte J. Rasch; Masaki Tomonaga; Ikuma Adachi

Faces presented upside-down are harder to recognize than faces presented right-side up, an effect known as the face inversion effect. Inversion disrupts the perceptual processing of the spatial relationships among facial parts. Previous literature indicates a face inversion effect in chimpanzees for familiar and conspecific faces. Although these results are not inconsistent with findings from humans, their methodology has been questioned. Here, we employed a delayed matching-to-sample task to test captive chimpanzees on discriminating chimpanzee and human faces. Their performance deteriorated with inversion. More importantly, the deterioration differed systematically between the two age groups of chimpanzee participants: young chimpanzees showed a stronger inversion effect for chimpanzee than for human faces, whereas old chimpanzees showed a stronger inversion effect for human than for chimpanzee faces. We conclude that the face inversion effect in chimpanzees is modulated by the level of expertise in face processing.


Developmental Science | 2009

Plasticity of ability to form cross‐modal representations in infant Japanese macaques

Ikuma Adachi; Hiroko Kuwahata; Kazuo Fujita; Masaki Tomonaga; Tetsuro Matsuzawa

In a previous study, Adachi, Kuwahata, Fujita, Tomonaga & Matsuzawa demonstrated that infant Japanese macaques (Macaca fuscata) form cross-modal representations of conspecifics but not of humans. However, because the subjects in that experiment were raised in a large social group and had considerably less exposure to humans than to conspecifics, it remained an open question whether their lack of a cross-modal representation of humans simply reflected their lower levels of exposure to humans or was caused by some innate restriction on the ability. To answer this question, we used the same procedure but tested infant Japanese macaques with more extensive experience of humans in daily life. Briefly, we presented monkeys with a photograph of either a monkey or a human face on an LCD monitor after playing a vocalization of one of these two species. The subjects looked at the monitor longer when a voice and a face were mismatched than when they were matched, irrespective of whether the preceding vocalization was a monkey's or a human's. This suggests that once monkeys have extensive experience with humans, they will form a cross-modal representation of humans as well as of conspecifics.


eLife | 2013

Conceptual metaphorical mapping in chimpanzees (Pan troglodytes)

Christoph D. Dahl; Ikuma Adachi

Conceptual metaphors are linguistic constructions. One such metaphor is humans' mental representation of social rank as a pyramid-like structure: high-ranked individuals are represented in higher positions than low-ranked individuals. We show that conceptual metaphorical mapping between social rank and the representational domain exists in our closest evolutionary relatives, the chimpanzees. Chimpanzee participants were required to discriminate face identities in a vertical arrangement. We found a modulation of response latencies by the rank of the presented individual and the position on the display: a high-ranked individual presented in the higher position and a low-ranked individual in the lower position led to quicker identity discrimination than a high-ranked individual in the lower and a low-ranked individual in the higher position. Such a spatial representation of dominance hierarchy in chimpanzees suggests that a natural tendency to systematically map an abstract dimension existed in the common ancestor of humans and chimpanzees. DOI: http://dx.doi.org/10.7554/eLife.00932.001

Collaboration


Dive into Ikuma Adachi's collaboration.

Top Co-Authors


Masaki Tomonaga

Primate Research Institute


Tomoko Imura

Niigata University of International and Information Studies


Malte J. Rasch

Beijing Normal University


Robert R. Hampton

Yerkes National Primate Research Center
