Cordula Vesper
Central European University
Publications
Featured research published by Cordula Vesper.
Experimental Brain Research | 2011
Cordula Vesper; Robrecht P. R. D. van der Wel; Günther Knoblich; Natalie Sebanz
Performing joint actions often requires precise temporal coordination of individual actions. The present study investigated how people coordinate their actions at discrete points in time when continuous or rhythmic information about others’ actions is not available. In particular, we tested the hypothesis that making oneself predictable is used as a coordination strategy. Pairs of participants were instructed to coordinate key presses in a two-choice reaction time task, either responding in synchrony (Experiments 1 and 2) or in close temporal succession (Experiment 3). Across all experiments, we found that co-actors reduced the variability of their actions in the joint context compared with the same task performed individually. Correlation analyses indicated that the less variable the actions were, the better the interpersonal coordination was. The relation between reduced variability and improved coordination performance was not observed when pairs of participants performed independent tasks next to each other without intending to coordinate. These findings support the claim that reducing variability is used as a coordination strategy to achieve predictability. Identifying coordination strategies contributes to the understanding of the mechanisms involved in real-time coordination.
Experimental Brain Research | 2014
Cordula Vesper; Michael J. Richardson
How is coordination achieved in asymmetric joint actions where co-actors have unequal access to task information? Pairs of participants performed a non-verbal tapping task with the goal of synchronizing taps to different targets. We tested whether ‘Leaders’ knowing the target locations would support ‘Followers’ without this information. Experiment 1 showed that Leaders tapped with higher amplitude, which also scaled with the distance to the specific target, thereby emphasizing differences between correct targets and possible alternatives. This strategic communication only occurred when Leaders’ movements were fully visible, but not when they were partially occluded. Full visual information between co-actors also resulted in higher and more stable behavioral coordination than partial vision. Experiment 2 showed that Leaders’ amplitude adaptation facilitated target prediction by independent Observers. We conclude that fully understanding joint action coordination requires both representational (i.e., strategic adaptation) and dynamical systems (i.e., behavioral coupling) accounts.
Cognition | 2016
Cordula Vesper; Laura Schmitz; Lou Safra; Natalie Sebanz; Günther Knoblich
Highlights
• In joint action tasks, co-actors have a choice between different coordination mechanisms.
• How joint actions are coordinated depends on shared perceptual information.
• In the absence of shared visual information, co-actors rely on a general heuristic strategy.
• When shared visual information is available, co-actors switch to a communicative strategy.
Frontiers in Psychology | 2017
Cordula Vesper; Ekaterina Abramova; Judith Bütepage; Francesca Ciardo; Benjamin Crossey; Alfred O. Effenberg; Dayana Hristova; April Karlinsky; Luke McEllin; Sari R. R. Nijssen; Laura Schmitz; Basil Wahn
In joint action, multiple people coordinate their actions to perform a task together. This often requires precise temporal and spatial coordination. How do co-actors achieve this? How do they coordinate their actions toward a shared task goal? Here, we provide an overview of the mental representations involved in joint action, discuss how co-actors share sensorimotor information and what general mechanisms support coordination with others. By deliberately extending the review to aspects such as the cultural context in which a joint action takes place, we pay tribute to the complex and variable nature of this social phenomenon.
Journal of Experimental Psychology: Human Perception and Performance | 2017
Laura Schmitz; Cordula Vesper; Natalie Sebanz; Günther Knoblich
Previous research has demonstrated that humans tend to represent each other’s tasks even if no interpersonal coordination is required. The present study asked whether co-actors in a joint action rely on task co-representation to achieve temporal coordination even if this implies increased movement effort for an unconstrained actor. Pairs of participants performed reaching movements back and forth between two targets, with the aim of synchronizing their landing times. One of the participants needed to move over an obstacle while the other had no obstacle. The results of four experiments showed that the participant without an obstacle moved as if an obstacle were obstructing her way. Further amplifying the demands on interpersonal coordination led to a significant increase of this effect, indicating that unconstrained actors represented their co-actor’s task constraint and adjusted their own actions accordingly, particularly under high coordination demands. The findings also showed that unconstrained actors represented the object property constraining their co-actor’s movement rather than parameters of this movement. We conclude that joint action partners rely on task co-representation to achieve temporal coordination in a task with asymmetric task constraints.
Journal of Experimental Psychology: General | 2017
Cordula Vesper; Laura Schmitz; Günther Knoblich
In many joint actions, knowledge about the precise task to be performed is distributed asymmetrically such that one person has information that another person lacks. In such situations, interpersonal coordination can be achieved if the knowledgeable person modulates basic parameters of her goal-directed actions in a way that provides relevant information to the co-actor with incomplete task knowledge. Whereas such sensorimotor communication has frequently been shown for spatial parameters like movement amplitude, little is known about how co-actors use temporal parameters of their actions to establish communication. The current study investigated whether systematic modulations of action duration provide a sufficient basis for communication. The results of 3 experiments demonstrate that knowledgeable actors spontaneously and systematically adjusted the duration of their actions to communicate task-relevant information if the naïve co-actor could not access this information in other ways. The clearer the communicative signal was, the greater the benefit for the co-actor’s performance. Moreover, we provide evidence that knowledgeable actors have a preference to separate instrumental from communicative aspects of their actions. Together, our findings suggest that generating and perceiving systematic deviations from the predicted duration of a goal-directed action can establish nonconventionalized forms of communication during joint action.
Frontiers in Computational Neuroscience | 2016
Robert Lowe; Alexander Almér; Gustaf Lindblad; Pierre Gander; John Michael; Cordula Vesper
Joint Action is typically described as social interaction that requires coordination among two or more co-actors in order to achieve a common goal. In this article, we put forward a hypothesis for the existence of a neural-computational mechanism of affective valuation that may be critically exploited in Joint Action. Such a mechanism would serve to facilitate coordination between co-actors, permitting a reduction of required information. Our hypothesized affective mechanism provides a value-function-based implementation of Associative Two-Process (ATP) theory that entails the classification of external stimuli according to outcome expectancies. This approach has been used to describe animal and human action that concerns differential outcome expectancies. Until now it has not been applied to social interaction. We describe our Affective ATP model as applied to social learning, consistent with an “extended common currency” perspective in the social neuroscience literature. We contrast this to an alternative mechanism that provides an example implementation of the so-called social-specific value perspective. In brief, our Social-Affective ATP mechanism builds upon established formalisms for reinforcement learning (temporal difference learning models), nuanced to accommodate expectations (consistent with ATP theory) and extended to integrate non-social and social cues for use in Joint Action.
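The temporal difference learning formalism mentioned in the abstract can be illustrated with a minimal sketch. This is not the authors’ Affective ATP model; it is a generic tabular TD(0) value update in which a hypothetical non-social cue and a hypothetical social cue (a partner’s action) both feed into a single shared value table, loosely in the spirit of the “extended common currency” idea. All state names and reward values are made up for illustration.

```python
# Minimal tabular TD(0) sketch (illustrative only; not the authors' model).
# Both a non-social cue and a social cue predict the same rewarded outcome
# via one shared value table -- a "common currency" for valuation.

def td_update(V, state, next_state, reward, alpha=0.1, gamma=0.9):
    """One temporal-difference update: shift V[state] toward the reward
    plus the discounted value of the successor state."""
    td_error = reward + gamma * V[next_state] - V[state]
    V[state] += alpha * td_error
    return td_error

# Hypothetical states: two cues and the rewarded outcome they predict.
V = {"cue_nonsocial": 0.0, "cue_social": 0.0, "outcome": 0.0}

for _ in range(50):
    # Either cue type leads to the same outcome state...
    td_update(V, "cue_nonsocial", "outcome", reward=0.0)
    td_update(V, "cue_social", "outcome", reward=0.0)
    # ...and the outcome itself delivers reward (modeled as a self-loop).
    td_update(V, "outcome", "outcome", reward=1.0)
```

Because both cues receive identical updates against the shared table, they acquire equal predictive value, while the outcome state itself remains the most valuable; under a social-specific value account, by contrast, the social cue would be learned in a separate table.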
Cognitive Science | 2018
Laura Schmitz; Cordula Vesper; Natalie Sebanz; Günther Knoblich
Abstract In the absence of pre‐established communicative conventions, people create novel communication systems to successfully coordinate their actions toward a joint goal. In this study, we address two types of such novel communication systems: sensorimotor communication, where the kinematics of instrumental actions are systematically modulated, versus symbolic communication. We ask which of the two systems co‐actors preferentially create when aiming to communicate about hidden object properties such as weight. The results of three experiments consistently show that actors who knew the weight of an object transmitted this weight information to their uninformed co‐actors by systematically modulating their instrumental actions, grasping objects of particular weights at particular heights. This preference for sensorimotor communication was reduced in a fourth experiment where co‐actors could communicate with weight‐related symbols. Our findings demonstrate that the use of sensorimotor communication extends beyond the communication of spatial locations to non‐spatial, hidden object properties.
Cognition | 2018
Laura Schmitz; Cordula Vesper; Natalie Sebanz; Günther Knoblich
Previous research has shown that people represent each other’s tasks and actions when acting together. However, less is known about how co-actors represent each other’s action sequences. Here, we asked whether co-actors represent the order of each other’s actions within an action sequence, or whether they merely represent the intended end state of a joint action together with their own contribution. In the present study, two co-actors concurrently performed action sequences composed of two actions. We predicted that if co-actors represent the order of each other’s actions, they should experience interference when the order of their actions differs. Supporting this prediction, the results of six experiments consistently showed that co-actors moved more slowly when performing the same actions in a different order compared to performing the same actions in the same order. In line with findings from bimanual movement tasks, our results indicate that interference can arise due to differences in movement parameters and due to differences in the perceptual characteristics of movement goals. The present findings extend previous research on co-representation, providing evidence that people represent not only the elements of another’s task, but also their temporal structure.
Frontiers in Psychology | 2017
Dominik Dötsch; Cordula Vesper; Anna Schubö
Activating action representations can modulate perceptual processing of action-relevant dimensions, indicative of a common coding of perception and action. When two or more agents work together in joint action, individual agents often need to consider not only their own actions and their effects on the world, but also predict the actions of a co-acting partner. If in these situations the action of a partner is represented in a functionally equivalent way to the agent’s own actions, one may also expect interaction effects between action and perception across jointly acting individuals. The present study investigated whether the action of a co-acting partner may modulate an agent’s perception. The “performer” prepared a grasping or pointing movement toward a physical target while the “searcher” performed a visual search task. The performer’s planned action impaired the searcher’s perceptual performance when the search target dimension was relevant to the performer’s movement execution. These results demonstrate an action-induced modulation of perceptual processes across participants and indicate that agents represent their partner’s action by employing the same perceptual system they use to represent their own actions. We suggest that task representations in joint action operate along multiple levels of a cross-brain predictive coding system, which provides agents with information about a partner’s actions when they coordinate to reach a common goal.