
Publication


Featured research published by Jun Saiki.


Vision Research | 2003

Spatiotemporal characteristics of dynamic feature binding in visual working memory

Jun Saiki

It has been proposed that visual working memory can hold a set of four to five coherent object representations. As a test of this proposal, I devised a paradigm called multiple object permanence tracking (MOPT) that measures memory for feature-location binding in dynamic situations. Observers were asked to detect any feature switch in the middle of a regular rotation of a pattern with multiple objects behind an occluder. Feature switch detection performance declined dramatically as the pattern rotation velocity increased. Behavioral evidence for the use of multiple color-shape conjunctions was observed only when the objects were stationary. These results cast doubt on the view that the functional unit of visual working memory involves coherent object representation, where object features are tightly bound and dynamically updated.


Neural Networks | 2006

A neural network implementation of a saliency map model

Matthew de Brecht; Jun Saiki

The saliency map model proposed by Itti and Koch [Itti, L., & Koch, C. (2000). A saliency-based search mechanism for overt and covert shifts of visual attention. Vision Research, 40, 1489-1506] has been a popular model for explaining the guidance of visual attention using only bottom-up information. In this paper we expand Itti and Koch's model and propose how it could be implemented by neural networks with biologically realistic dynamics. In particular, we show that by incorporating synaptic depression into the model, network activity can be normalized and competition within the feature maps can be regulated in a biologically plausible manner. Furthermore, the dynamical nature of our model permits further analysis of the time course of saliency computation, and also allows the model to calculate saliency for dynamic visual scenes. In addition to explaining the high saliency of pop-out targets in visual search tasks, our model explains attentional grab by sudden-onset stimuli, which was not accounted for by previous models.
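The normalizing role of synaptic depression described in this abstract can be sketched in a few lines. The following is a hypothetical Python illustration, not the authors' implementation: each unit's effective output is its firing rate scaled by a depressing synaptic resource that is consumed by firing and recovers slowly, so sustained activity is self-limiting while a fresh sudden-onset unit produces a large transient response. All parameter values here are arbitrary choices for illustration.

```python
import numpy as np

def step(activity, resource, inp, dt=0.01, tau_r=0.5, use=0.6):
    """One Euler step of rate units with synaptic depression.

    activity : current firing rates of the feature-map units
    resource : synaptic resource in [0, 1]; depleted by firing, recovers with tau_r
    inp      : bottom-up input (e.g. feature-contrast values)
    """
    out = activity * resource                      # depressed synaptic output
    d_act = (-activity + inp) / 0.1                # leaky integration toward input
    d_res = (1.0 - resource) / tau_r - use * resource * activity
    return activity + dt * d_act, np.clip(resource + dt * d_res, 0.0, 1.0), out

# A sudden-onset unit (input jumps on at t=0) vs. a previously active unit:
act = np.zeros(2)
res = np.ones(2)
res[1] = 0.4                        # unit 1 has been active before (depleted)
inp = np.array([1.0, 1.0])          # both now receive identical input
for _ in range(50):
    act, res, out = step(act, res, inp)
# The fresh-onset unit's depressed output exceeds the adapted unit's.
```

Because the two units receive identical input, their raw activities match; only the depleted resource of the pre-adapted unit reduces its effective output, which is the sense in which depression normalizes activity and favors sudden onsets.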


Neuroreport | 2008

Realignment of temporal simultaneity between vision and touch

Kohske Takahashi; Jun Saiki; Katsumi Watanabe

Adaptation to temporal asynchrony between senses (audiovisual and audiotactile) affects subsequent simultaneity or temporal order judgments. Here, we investigated the effects of adaptation to temporal asynchrony between vision and touch. Participants experienced deformation of virtual objects with a fixed temporal lag between vision and touch. In subsequent trials, the visual and haptic stimuli were deformed with variable temporal lags, and the participants judged whether the stimuli became deformed simultaneously. The point of subjective simultaneity was shifted toward the adapted lag. However, no intermanual transfer of the adaptation effect was found. These results indicate that perceptual simultaneity between vision and touch is adaptive, and is determined separately for each hand.


BMCV '02 Proceedings of the Second International Workshop on Biologically Motivated Computer Vision | 2002

Stochastic Guided Search Model for Search Asymmetries in Visual Search Tasks

Takahiko Koike; Jun Saiki

We propose a stochastic guided search model for search asymmetries. Traditional saliency-based search models cannot account for search asymmetry. Search asymmetry likely reflects changes in the relative saliency of a target and distractors when the roles of target and distractor are switched. However, traditional models with a deterministic WTA always direct attention to the most salient location, regardless of relative saliency, so variation in saliency does not produce variation in search efficiency in those models. We show that introducing a stochastic WTA enables a saliency-based search model to translate variation in relative saliency into variation in search efficiency, through stochastic shifts of attention. The proposed model can simulate asymmetries in visual search.
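The deterministic-versus-stochastic WTA contrast in this abstract can be illustrated with a toy saliency map. This is a hypothetical Python sketch of the idea, not the authors' model: a deterministic WTA fixates the single most salient location on the first shift regardless of how much the target stands out, while a softmax-based stochastic WTA makes the expected number of attention shifts depend on the target's relative saliency. The saliency values and softmax gain `beta` are illustrative assumptions.

```python
import math
import random

def fixations_to_target(saliency, target, stochastic, beta=5.0, rng=None):
    """Count attention shifts until the target location is fixated.

    saliency  : list of saliency values, one per location
    target    : index of the target location
    stochastic: if False, attention always goes to the argmax (deterministic WTA);
                if True, locations are sampled with softmax probabilities.
    """
    rng = rng or random.Random(0)
    n = 0
    while True:
        n += 1
        if stochastic:
            weights = [math.exp(beta * s) for s in saliency]
            loc = rng.choices(range(len(saliency)), weights=weights)[0]
        else:
            loc = max(range(len(saliency)), key=lambda i: saliency[i])
        if loc == target:
            return n

# High vs. low relative target saliency (target at index 0):
easy = [1.0] + [0.2] * 9   # target much more salient than distractors
hard = [0.4] + [0.2] * 9   # target only slightly more salient

# Deterministic WTA: both targets found on the first shift -- no asymmetry.
assert fixations_to_target(easy, 0, stochastic=False) == 1
assert fixations_to_target(hard, 0, stochastic=False) == 1

# Stochastic WTA: the low-relative-saliency search needs more shifts on average.
rng = random.Random(42)
mean = lambda s: sum(fixations_to_target(s, 0, True, rng=rng)
                     for _ in range(200)) / 200
assert mean(easy) < mean(hard)
```

The deterministic branch returns 1 for both displays because the target is the argmax either way, which is exactly why relative saliency cannot affect efficiency in a deterministic WTA; the stochastic branch recovers the efficiency difference.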


Attention Perception & Psychophysics | 2015

Task-irrelevant stimulus-reward association induces value-driven attentional capture

Chisato Mine; Jun Saiki

Rewards affect the deployment of visual attention in various situations. Evidence suggests that a stimulus associated with reward involuntarily captures attention (value-driven attentional capture; VDAC). Recent studies report VDAC even when the reward-associated feature does not define the target (i.e., is task-irrelevant). However, these studies did not conduct a test phase without reward, so the effect may be qualitatively different from those in previous studies. In the current study, we tested whether task-irrelevant features induce VDAC even in a test phase with no reward. We used a flanker task during reward learning to create color-reward associations (training phase), and then tested the effect of color during visual search (test phase). Reward learning with no spatial uncertainty in the flanker task induced VDAC, even when the reward-signaling color was associated with both target and distractor (Experiments 1 and 2). In Experiment 3, a significant VDAC effect with a color applied to all letters indicated that target-distractor discrimination is not necessary for VDAC. Finally, a significant VDAC effect with colored rectangular frames around the letters (Experiment 4) indicated that binding reward-associated features to task-relevant letters is not necessary for VDAC. All these effects were obtained in a test phase without reward, so VDAC in the current study is comparable to that in previous studies using target-defining features. These findings indicate that task-relevance is not a necessary condition for VDAC from reward-associated features, suggesting that the reward-association learning underlying VDAC is more indirect.


NeuroImage | 2005

Maintaining coherence of dynamic objects requires coordination of neural systems extended from anterior frontal to posterior parietal brain cortices

Toshihide Imaruoka; Jun Saiki; Satoru Miyauchi

Object representation in visual working memory enables humans to perceive a consistent visual world and must satisfy two attributes: coherence and dynamic updating. The present study measured brain activity using functional magnetic resonance imaging (fMRI) during the multiple object permanence tracking (MOPT) task, which requires observers to process coherence maintenance and dynamic updating of objects simultaneously. Whole-brain analysis revealed activation in anterior and ventral parts of the frontal cortex and in the dorsal frontoparietal network during both object-moving and object-stationary conditions. Subsequent region-of-interest analyses in the anterior/ventral frontal and dorsal frontoparietal regions revealed that these two systems engage the two different cognitive processes involved in the MOPT task, with coherence maintenance processed in the anterior/ventral frontal areas and spatial processing in the dorsal frontoparietal network. These results suggest that cooperation between these two systems underpins object representations in visual working memory.


Progress in Brain Research | 2002

Multiple-object permanence tracking: limitation in maintenance and transformation of perceptual objects.

Jun Saiki

Research on change blindness and transsaccadic memory revealed that a limited amount of information is retained across visual disruptions in visual working memory. It has been proposed that visual working memory can hold four to five coherent object representations. To investigate their maintenance and transformation in dynamic situations, I devised an experimental paradigm called multiple-object permanence tracking (MOPT) that measures memory for multiple feature-location bindings in dynamic situations. Observers were asked to detect any color switch in the middle of a regular rotation of a pattern with multiple colored disks behind an occluder. Color-switch detection performance declined dramatically as the pattern rotation velocity increased, and this effect of object motion was independent of the number of targets. The MOPT task with various shapes and colors showed that color-shape conjunctions are not available in the MOPT task. These results suggest that even completely predictable motion severely reduces our capacity for object representations, from four to only one or two.


Journal of Vision | 2012

Blindness to a simultaneous change of all elements in a scene, unless there is a change in summary statistics

Jun Saiki; Alex O. Holcombe

A sudden change of every object in a display is typically conspicuous. We find, however, that in the presence of a secondary task, with a display of moving dots, it can be difficult to detect a sudden change in the color of all the dots. A field of 200 dots, half red and half green, half moving rightward and half moving leftward, gave the appearance of two surfaces. When all 200 dots simultaneously switched color between red and green, performance in detecting the switch was very poor. A key display characteristic was that the color proportions on each surface (summary statistics) were not affected by the color switch. When the color switch was accompanied by a change in these summary statistics, people performed well in detecting it, suggesting that the secondary task does not disrupt the availability of this statistical information. These findings suggest that when the change is missed, the old and new colors are represented, but the color-location pattern (the binding of colors to locations) is not represented or not compared. Even after extended viewing, changes to the individual color-location pattern are not available, suggesting that the feeling of seeing these details is misleading.


PLOS ONE | 2012

Feature-specific encoding flexibility in visual working memory.

Aki Kondo; Jun Saiki

The current study examined selective encoding in visual working memory by systematically investigating interference from task-irrelevant features. The stimuli were objects defined by three features (color, shape, and location), and during a delay period, any of the features could switch between two objects. Additionally, single- and whole-probe trials were randomized within experimental blocks to investigate effects of memory retrieval. A series of relevant-feature switch detection tasks, in which one feature was task-irrelevant, showed that interference from the task-irrelevant feature was observed only in the color-shape task, suggesting that color and shape information could be successfully filtered out, but location information could not, even when location was a task-irrelevant feature. Therefore, although location information is added to object representations independently of task demands in a relatively automatic manner, other features (e.g., color, shape) can be flexibly added to object representations.


Journal of Experimental Psychology: Human Perception and Performance | 2005

Visual search asymmetry with uncertain targets

Jun Saiki; Takahiko Koike; Kohske Takahashi; Tomoko Inoue

The underlying mechanism of search asymmetry is still unknown. Many computational models postulate top-down selection of target-defining features as a crucial factor. This feature selection account implies, and other theories implicitly assume, that predefined target identity is necessary for search asymmetry. The authors tested the validity of the feature selection account using a singleton search task without a predefined target. Participants conducted a target-defined and a singleton search task with a circle (O) and a circle with a vertical bar (Q). Search asymmetry was observed in both tasks with almost identical magnitude. The results were not due to trial-by-trial feature selection, because search asymmetry persisted even when the target was completely unpredictable. Asymmetry in the singleton search was also observed with more complex stimuli, Kanji characters. These results suggest that feature selection is not necessary for search asymmetry, and they impose important constraints on current visual search theories.

Collaboration


Dive into Jun Saiki's collaboration.

Top Co-Authors

Shohei Hidaka (Japan Advanced Institute of Science and Technology)