Network


Latest external collaborations at the country level.

Hotspot


Research topics in which Agnieszka Wykowska is active.

Publication


Featured research published by Agnieszka Wykowska.


Journal of Experimental Psychology: Human Perception and Performance | 2009

How You Move Is What You See: Action Planning Biases Selection in Visual Search

Agnieszka Wykowska; Anna Schubö; Bernhard Hommel

Three experiments investigated the impact of planning and preparing a manual grasping or pointing movement on feature detection in a visual search task. The authors hypothesized that action planning may prime perceptual dimensions that provide information for the open parameters of that action. Indeed, preparing for grasping facilitated detection of size targets while preparing for pointing facilitated detection of luminance targets. Following the Theory of Event Coding (Hommel, Müsseler, Aschersleben, & Prinz, 2001b), the authors suggest that perceptual dimensions may be intentionally weighted with respect to an intended action. More interestingly, the action-related influences were observed only when participants searched for a predefined target. This implies that action-related weighting is not independent of task-relevance weighting. To account for these findings, the authors suggest an integrative model of visual search that incorporates input from action-planning processes.
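
The dimensional-weighting idea can be illustrated with a small Python sketch (not the authors' actual model; the feature maps and weight values below are invented): a master saliency map is built as a weighted sum of dimension-specific feature maps, and the prepared action shifts the weights.

    import numpy as np

    def master_saliency(feature_maps, weights):
        """Combine dimension-specific saliency maps into one master map."""
        total = sum(weights.values())
        return sum(weights[d] * feature_maps[d] for d in feature_maps) / total

    # Invented feature maps for a 10x10 search display.
    maps = {"size": np.random.rand(10, 10), "luminance": np.random.rand(10, 10)}

    # Intentional weighting: preparing a grasp up-weights the size dimension,
    # preparing a point up-weights luminance (weight values are illustrative).
    weights_grasp = {"size": 1.5, "luminance": 1.0}
    weights_point = {"size": 1.0, "luminance": 1.5}

    salience_when_grasping = master_saliency(maps, weights_grasp)
    salience_when_pointing = master_saliency(maps, weights_point)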


PLOS ONE | 2012

I See What You Mean: How Attentional Selection Is Shaped by Ascribing Intentions to Others

Eva Wiese; Agnieszka Wykowska; Jan Zwickel; Hermann J. Müller

The ability to understand and predict others’ behavior is essential for successful interactions. When making predictions about what other humans will do, we treat them as intentional systems and adopt the intentional stance, i.e., refer to their mental states such as desires and intentions. In the present experiments, we investigated whether the mere belief that the observed agent is an intentional system influences basic social attention mechanisms. We presented pictures of a human and a robot face in a gaze cuing paradigm and manipulated the likelihood of adopting the intentional stance by instruction: in some conditions, participants were told that they were observing a human or a robot, in others, that they were observing a human-like mannequin or a robot whose eyes were controlled by a human. In conditions in which participants were made to believe they were observing human behavior (intentional stance likely) gaze cuing effects were significantly larger as compared to conditions when adopting the intentional stance was less likely. This effect was independent of whether a human or a robot face was presented. Therefore, we conclude that adopting the intentional stance when observing others’ behavior fundamentally influences basic mechanisms of social attention. The present results provide striking evidence that high-level cognitive processes, such as beliefs, modulate bottom-up mechanisms of attentional selection in a top-down manner.
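
For orientation, gaze-cueing effects of this kind are conventionally quantified as mean reaction time on invalidly cued trials minus mean reaction time on validly cued trials, computed per condition. A minimal sketch, assuming an invented trial table (the column names and values are not the paper's data):

    import pandas as pd

    # Invented trial data: one row per trial.
    trials = pd.DataFrame({
        "belief":   ["human", "human", "robot", "robot"] * 2,
        "validity": ["valid", "invalid"] * 4,
        "rt_ms":    [412, 448, 420, 431, 405, 452, 418, 429],
    })

    # Cueing effect = mean RT(invalid) - mean RT(valid), per belief condition.
    mean_rt = trials.groupby(["belief", "validity"])["rt_ms"].mean().unstack("validity")
    cueing_effect = mean_rt["invalid"] - mean_rt["valid"]
    print(cueing_effect)  # larger effects expected when the intentional stance is adopted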


Brain Research | 2007

Detecting pop-out targets in contexts of varying homogeneity: investigating homogeneity coding with event-related brain potentials (ERPs).

Anna Schubö; Agnieszka Wykowska; Hermann J. Müller

Searching for a target among many distracting context elements might be an easy or a demanding task. Duncan and Humphreys (Duncan, J., Humphreys, G.W., 1989. Visual search and stimulus similarity. Psychol. Rev. 96, 433-458) showed that not only the target itself plays a role in the difficulty of target detection: similarity among context elements and dissimilarity between target and context are two further factors affecting search efficiency. Moreover, many studies have shown that search becomes particularly efficient with large set sizes and perfectly homogeneous context elements, presumably due to grouping processes involved in target-context segmentation. In particular, the N2p amplitude has been found to be modulated by the number of context elements and their homogeneity. The aim of the present study was to investigate the influence of context elements of different heterogeneities on search performance using event-related brain potentials (ERPs). Results showed that contexts with perfectly homogeneous elements were indeed special: they were most efficient in visual search and elicited a large N2p differential amplitude effect. Increasing context heterogeneity led to a decrease in search performance and a reduction in N2p differential amplitude. Reducing the number of context elements led to a marked performance decrease for random heterogeneous contexts but not for grouped heterogeneous contexts. Behavioral and N2p results delivered evidence (a) in favor of specific processing modes operating on different spatial scales and (b) for the existence of homogeneity coding as postulated by Duncan and Humphreys.
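
Differential amplitude effects like this are commonly quantified by subtracting the condition-average waveforms and averaging the difference within a predefined time window. A schematic sketch with simulated single-channel epochs (the sampling rate, window limits, and data are placeholders, not the paper's values):

    import numpy as np

    # Simulated epochs (trials x timepoints) at one posterior channel,
    # sampled at 500 Hz from -100 to +500 ms around search-display onset.
    sfreq = 500
    times = -0.100 + np.arange(300) / sfreq
    homogeneous   = np.random.randn(80, 300)  # stand-ins for microvolt values
    heterogeneous = np.random.randn(80, 300)

    # Differential amplitude: condition difference averaged over an N2 window.
    window = (times >= 0.200) & (times <= 0.300)
    diff_wave = homogeneous.mean(axis=0) - heterogeneous.mean(axis=0)
    n2p_effect = diff_wave[window].mean()
    print(f"N2p differential amplitude: {n2p_effect:.2f} microvolts")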


Journal of Cognitive Neuroscience | 2011

Irrelevant singletons in visual search do not capture attention but can produce nonspatial filtering costs

Agnieszka Wykowska; Anna Schubö

It is not clear how salient distractors affect visual processing. The debate concerning the issue of whether irrelevant salient items capture spatial attention [e.g., Theeuwes, J., Atchley, P., & Kramer, A. F. On the time course of top–down and bottom–up control of visual attention. In S. Monsell & J. Driver (Eds.), Attention and performance XVIII: Control of cognitive performance (pp. 105–124). Cambridge, MA: MIT Press, 2000] or produce only nonspatial interference in the form of, for example, filtering costs [Folk, Ch. L., & Remington, R. Top–down modulation of preattentive processing: Testing the recovery account of contingent capture. Visual Cognition, 14, 445–465, 2006] has not yet been settled. The present ERP study examined deployment of attention in visual search displays that contained an additional irrelevant singleton. Display-locked N2pc showed that attention was allocated to the target and not to the irrelevant singleton. However, the onset of the N2pc to the target was delayed when the irrelevant singleton was presented in the opposite hemifield relative to the same hemifield. Thus, although attention was successfully focused on the target, the irrelevant singleton produced some interference resulting in a delayed allocation of attention to the target. A subsequent probe discrimination task allowed for locking ERPs to probe onsets and investigating the dynamics of sensory gain control for probes appearing at relevant (target) or irrelevant (singleton distractor) positions. Probe-locked P1 showed sensory gain for probes positioned at the target location but no such effect for irrelevant singletons in the additional singleton condition. Taken together, the present data support the claim that irrelevant singletons do not capture attention. If they produce any interference, it is rather due to nonspatial filtering costs.
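
The N2pc component referenced here is conventionally derived as the contralateral-minus-ipsilateral waveform at lateral posterior electrodes, relative to the side of the attended item. A minimal sketch with simulated epochs (the electrode labels, array shapes, and values are assumptions):

    import numpy as np

    # Simulated epochs (trials x timepoints) at a left/right electrode pair
    # such as PO7/PO8, with the target lateralized on every trial.
    n_trials, n_times = 100, 300
    po7 = np.random.randn(n_trials, n_times)
    po8 = np.random.randn(n_trials, n_times)
    target_side = np.random.choice(["left", "right"], n_trials)

    # N2pc = contralateral minus ipsilateral activity relative to the target.
    is_left = (target_side == "left")[:, None]
    contra = np.where(is_left, po8, po7)
    ipsi   = np.where(is_left, po7, po8)
    n2pc = (contra - ipsi).mean(axis=0)  # onsets can then be compared across conditions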


Journal of Cognitive Neuroscience | 2010

On the temporal relation of top-down and bottom-up mechanisms during guidance of attention

Agnieszka Wykowska; Anna Schubö

Two mechanisms are said to be responsible for guiding focal attention in visual selection: bottom–up, saliency-driven capture and top–down control. These mechanisms were examined with a paradigm that combined a visual search task with postdisplay probe detection. Two SOAs between the search display and probe onsets were introduced to investigate how attention was allocated to particular items at different points in time. The dynamic interplay between bottom–up and top–down mechanisms was investigated with ERP methodology. ERPs locked to the search displays showed that top–down control needed time to develop. N2pc indicated allocation of attention to the target item and not to the irrelevant singleton. ERPs locked to probes revealed modulations in the P1 component reflecting top–down control of focal attention at the long SOA. Early bottom–up effects were observed in the error rates at the short SOA. Taken together, the present results show that the top–down mechanism takes time to guide focal attention to the relevant target item and that it is potent enough to limit bottom–up attentional capture.
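
Probe-locked analyses of this kind require cutting epochs around each probe onset and averaging them separately per SOA. A schematic sketch, with fabricated continuous data, onset times, and SOA values:

    import numpy as np

    # Fabricated continuous EEG (channels x samples) at 500 Hz.
    sfreq = 500
    eeg = np.random.randn(32, 60 * sfreq)            # one minute of data
    probe_onsets_s = np.array([1.2, 3.5, 6.1, 9.0])  # probe onset times (s)
    soa_s = np.array([0.1, 0.4, 0.1, 0.4])           # SOA on each trial (s)

    # Cut epochs from -100 to +300 ms around each probe onset.
    pre, post = int(0.100 * sfreq), int(0.300 * sfreq)
    samples = (probe_onsets_s * sfreq).astype(int)
    epochs = np.stack([eeg[:, s - pre:s + post] for s in samples])

    # Average separately per SOA to compare the probe-locked P1.
    erp_short_soa = epochs[soa_s == 0.1].mean(axis=0)
    erp_long_soa  = epochs[soa_s == 0.4].mean(axis=0)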


Philosophical Transactions of the Royal Society B | 2016

Embodied artificial agents for understanding human social cognition

Agnieszka Wykowska; Thierry Chaminade; Gordon Cheng

In this paper, we propose that experimental protocols involving artificial agents, in particular embodied humanoid robots, provide insightful information regarding social cognitive mechanisms in the human brain. Using artificial agents allows for manipulation and control of various parameters of behaviour, appearance and expressiveness in one of the interaction partners (the artificial agent), and for examining the effects of these parameters on the other interaction partner (the human). At the same time, using artificial agents means introducing artificial, yet human-like, systems into the human social sphere. This allows for testing fundamental human mechanisms of social cognition in a controlled, but ecologically valid, manner, both at the behavioural and at the neural level. This paper will review existing literature reporting studies in which artificial embodied agents have been used to study social cognition and will address the question of whether various mechanisms of social cognition (ranging from lower- to higher-order cognitive processes) are evoked by artificial agents to the same extent as by natural agents, humans in particular. Increasing the understanding of how behavioural and neural mechanisms of social cognition respond to artificial anthropomorphic agents provides empirical answers to the conundrum ‘What is a social agent?’


Frontiers in Psychology | 2012

Action Intentions Modulate Allocation of Visual Attention: Electrophysiological Evidence

Agnieszka Wykowska; Anna Schubö

In line with the Theory of Event Coding (Hommel et al., 2001), action planning has been shown to affect perceptual processing – an effect that has been attributed to a so-called intentional weighting mechanism (Wykowska et al., 2009; Hommel, 2010). This paper investigates the electrophysiological correlates of action-related modulations of selection mechanisms in visual perception. A paradigm combining a visual search task for size and luminance targets with a movement task (grasping or pointing) was introduced, and the EEG was recorded while participants performed the tasks. The results showed that the behavioral congruency effects, i.e., better performance in congruent (relative to incongruent) action-perception trials, were reflected in a modulation of the P1 component as well as the N2pc (an ERP marker of spatial attention). These results support the argument that action planning modulates perceptual and attentional mechanisms already at early processing stages.


Attention, Perception, & Psychophysics | 2011

Action-induced effects on perception depend neither on element-level nor on set-level similarity between stimulus and response sets

Agnieszka Wykowska; Bernhard Hommel; Anna Schubö

As was shown by Wykowska, Schubö, and Hommel (Journal of Experimental Psychology: Human Perception and Performance, 35, 1755–1769, 2009), action control can affect rather early perceptual processes in visual search: size pop-outs are detected faster when a manual grasping action has been prepared, whereas luminance pop-outs benefit from preparing a pointing action. In the present study, we demonstrate that this effect of action–target congruency does not rely on, or vary with, set-level or element-level similarity between perception and action – two factors that play crucial roles in standard stimulus–response interactions and in models accounting for these interactions. This result suggests that action control biases perceptual processes in specific ways that go beyond standard stimulus–response compatibility effects, and it supports the idea that action–target congruency taps into a fundamental characteristic of human action control.


PLOS ONE | 2014

What We Observe Is Biased by What Other People Tell Us: Beliefs about the Reliability of Gaze Behavior Modulate Attentional Orienting to Gaze Cues

Eva Wiese; Agnieszka Wykowska; Hermann J. Müller

For effective social interactions with other people, information about the physical environment must be integrated with information about the interaction partner. In order to achieve this, processing of social information is guided by two components: a bottom-up mechanism reflexively triggered by stimulus-related information in the social scene and a top-down mechanism activated by task-related context information. In the present study, we investigated whether these components interact during attentional orienting to gaze direction. In particular, we examined whether the spatial specificity of gaze cueing is modulated by expectations about the reliability of gaze behavior. Expectations were either induced by instruction or could be derived from experience with displayed gaze behavior. Spatially specific cueing effects were observed with highly predictive gaze cues, but also when participants merely believed that actually non-predictive cues were highly predictive. Conversely, cueing effects for the whole gazed-at hemifield were observed with non-predictive gaze cues, and spatially specific cueing effects were attenuated when actually predictive gaze cues were believed to be non-predictive. This pattern indicates that (i) information about cue predictivity gained from sampling gaze behavior across social episodes can be incorporated in the attentional orienting to social cues, and that (ii) beliefs about gaze behavior modulate attentional orienting to gaze direction even when they contradict information available from social episodes.
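
The spatial specificity described here can be read directly from reaction times: a location-specific benefit shows up as faster responses at the exactly gazed-at position than at other positions within the same hemifield, whereas a hemifield-wide effect shows up as faster responses anywhere in the gazed-at hemifield relative to the opposite one. A minimal sketch with invented data:

    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(0)

    # Invented RTs (ms) for targets at the exactly gazed-at location, elsewhere
    # in the gazed-at hemifield, or in the opposite hemifield.
    trials = pd.DataFrame({
        "target_pos": np.repeat(["cued", "same_hemifield", "opposite"], 40),
        "rt_ms": np.concatenate([rng.normal(m, 20, 40) for m in (400, 420, 445)]),
    })

    mean_rt = trials.groupby("target_pos")["rt_ms"].mean()
    location_specific = mean_rt["same_hemifield"] - mean_rt["cued"]      # spatially specific benefit
    hemifield_wide = mean_rt["opposite"] - mean_rt["same_hemifield"]     # hemifield-level benefit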


International Journal of Social Robotics | 2014

Implications of robot actions for human perception: how do we represent actions of the observed robots?

Agnieszka Wykowska; Ryan Chellali; Mamun al-Amin; Hermann J. Müller

Social robotics aims to develop robots that assist humans in their daily lives. To achieve this aim, robots must act in a manner that is comprehensible and intuitive for humans. That is, humans should be able to easily represent robot actions cognitively, in terms of action goals and the means to achieve them. This raises the question of how actions are represented in general. Based on ideomotor theories (Greenwald Psychol Rev 77:73–99, 1970) and accounts postulating a common code between action and perception (Hommel et al. Behav Brain Sci 24:849–878, 2001), as well as empirical evidence (Wykowska et al. J Exp Psychol 35:1755–1769, 2009), we argue that the action and perception domains are tightly linked in the human brain. The aim of the present study was to examine whether robot actions are represented similarly to human actions and, in consequence, elicit similar perceptual effects. Our results showed that robot actions indeed elicited perceptual effects of the same kind as human actions, suggesting that humans are capable of representing robot actions in a manner similar to human actions. Future research will examine how much these representations depend on the physical properties of the robot actor and its behavior.

Collaboration


Dive into Agnieszka Wykowska's collaborations.

Top Co-Authors

Eva Wiese (George Mason University)
Giorgio Metta (Istituto Italiano di Tecnologia)
Francesca Ciardo (University of Modena and Reggio Emilia)
Hélène L. Gauchou (Centre national de la recherche scientifique)
Cesco Willemse (Istituto Italiano di Tecnologia)
Chiara Bartolozzi (Istituto Italiano di Tecnologia)
Cristina Becchio (Istituto Italiano di Tecnologia)