Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Rh Raymond Cuijpers is active.

Publication


Featured research published by Rh Raymond Cuijpers.


Experimental Brain Research | 2002

Illusions in action: consequences of inconsistent processing of spatial attributes

Jeroen B. J. Smeets; Eli Brenner; Denise D. J. de Grave; Rh Raymond Cuijpers

Abstract. Many authors have performed experiments in which subjects grasp objects in illusory surroundings. The vast majority of these studies report that illusions affect the maximum grip aperture less than they affect the perceived size. This observation has frequently been regarded as experimental evidence for separate visual systems for perception and action. To draw this conclusion, one must assume that the grip aperture is based on a visual estimate of the object's size. We believe that it is not, and that this is why size illusions fail to influence grip aperture. Illusions generally do not affect all aspects of space perception in a consistent way, but mainly affect the perception of specific spatial attributes. This applies not only to object size, but also to other spatial attributes such as position, orientation, displacement, speed, and direction of motion. Whether an illusion influences the execution of a task will therefore depend on which spatial attributes are used rather than on whether the task is perceptual or motor. To evaluate whether illusions affect actions when they influence the relevant spatial attributes, we review experimental results on various tasks with inconsistent spatial processing in mind. Doing so shows that many actions are susceptible to visual illusions. We argue that the frequently reported differential effect of illusions on perceptual judgements and goal-directed action is caused by failures to ensure that the same spatial attributes are used in the two tasks. Illusions only affect those aspects of a task that are based on the spatial attributes that are affected by the illusion.


Topics in Cognitive Science | 2009

Joint action: Neurocognitive mechanisms supporting human interaction

Harold Bekkering; Ellen R.A. de Bruijn; Rh Raymond Cuijpers; Roger D. Newman-Norlund; Hein T. van Schie; Ruud G. J. Meulenbroek

Humans are experts in cooperating with each other when trying to accomplish tasks they cannot achieve alone. Recent studies of joint action have shown that when performing tasks together, people strongly rely on the neurocognitive mechanisms that they also use when performing actions individually; that is, they predict the consequences of their co-actors' behavior through internal action simulation. Context-sensitive action monitoring and action selection processes, however, are relatively underrated but crucial ingredients of joint action. In the present paper, we try to correct this somewhat simplified view of joint action by reviewing recent studies of joint action simulation, monitoring, and selection while emphasizing the intricate interrelationships between these processes. We complement our review by defining the contours of a neurologically plausible computational framework of joint action.


Neural Networks | 2006

2006 Special issue: Goals and means in action observation: A computational approach

Rh Raymond Cuijpers; Hein T. van Schie; Mathieu Koppen; Wolfram Erlhagen; Harold Bekkering

Many of our daily activities are supported by behavioural goals that guide the selection of actions, which allow us to reach these goals effectively. Goals are considered to be important for action observation since they allow the observer to copy the goal of the action without the need to use the exact same means. The importance of being able to use different action means becomes evident when the observer and observed actor have different bodies (robots and humans) or bodily measurements (parents and children), or when the environments of actor and observer differ substantially (when an obstacle is present or absent in either environment). A selective focus on the action goals instead of the action means furthermore circumvents the need to consider the vantage point of the actor, which is consistent with recent findings that people prefer to represent the actions of others from their own individual perspective. In this paper, we use a computational approach to investigate how knowledge about action goals and means is used in action observation. We hypothesise that in action observation human agents are primarily interested in identifying the goals of the observed actor's behaviour. Behavioural cues (e.g. the way an object is grasped) may help to disambiguate the goal of the actor (e.g. whether a cup is grasped for drinking or for handing it over). Recent advances in cognitive neuroscience are cited in support of the model's architecture.


Brain Research | 2010

The role of inferior frontal and parietal areas in differentiating meaningful and meaningless object-directed actions

Roger D. Newman-Norlund; Hein T. van Schie; Marline E.C. van Hoek; Rh Raymond Cuijpers; Harold Bekkering

Over the past two decades, single-cell recordings in primates and neuroimaging experiments in humans have uncovered the key properties of visuo-motor mirror neurons located in monkey premotor and parietal cortices, as well as homologous areas in the human inferior frontal and inferior parietal cortices, which presumably house neurons with similar response properties. One of the most interesting claims regarding the human mirror neuron system (MNS) is that its activity reflects high-level action understanding. If this were the case, one would expect signal in the MNS to differentiate between meaningful and meaningless actions. In the current experiment we tested this prediction using a novel paradigm. Functional magnetic resonance images were collected while participants viewed (i) short films of object-directed actions (ODAs) which were either semantically meaningful (e.g., a hand pressing a stapler) or semantically meaningless (e.g., a foot pressing a stapler), (ii) short films of pantomimed actions, and (iii) static pictures of objects. Consistent with the notion that the MNS represents high-level action understanding, meaningful and meaningless actions elicited BOLD signal differences at bilateral sites in the supramarginal gyrus (SMG) of the inferior parietal lobule (IPL), where we observed a double dissociation between BOLD response and meaningfulness of actions. Comparison of superadditive responses in the inferior frontal gyrus (IFG) and IPL (supramarginal) regions revealed differential contributions to action understanding. These data further specify the role of specific components of the MNS in understanding object-directed actions.


Journal of Mathematical Psychology | 2003

The metrics of visual and haptic space based on parallelity judgements

Rh Raymond Cuijpers; Astrid M. L. Kappers; Jan J. Koenderink

It has long been known that the visually perceived positions of objects, in short visual space, are distorted with respect to the physical positions. On the basis of the observation that equidistance-alleys lie outside parallel-alleys, Luneburg (Mathematical Analysis of Binocular Vision, Princeton University Press, Princeton, NJ, 1947) proposed that visual space is a Riemannian space of constant negative curvature. Luneburg used this observation, along with some additional assumptions, as an axiom and deduced the metric of visual space theoretically. Many researchers have tried to verify Luneburg's model experimentally, but the results are ambiguous. Moreover, many of the assumptions that Luneburg made were later proved wrong in the literature. In this paper we derive metric models for both visual and haptic space based directly on visual and haptic experiments involving a parallelity task. From the metric we make predictions for the egocentric distance, frontoparallel horopters, and parallel- and equidistance-alleys, and compare them to the literature. We also compare the metric structures of haptic and visual space. Both models provide a good description of the data from the parallelity tasks, even though the so-called oblique effect observed experimentally has not yet been incorporated.
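As background for the constant-curvature hypothesis discussed above, a standard textbook form (illustrative only; this is not the paper's derivation) of the line element of a Riemannian space of constant curvature K < 0, in geodesic polar coordinates, is

```latex
ds^2 = d\rho^2 + \frac{\sinh^2\!\big(\sqrt{-K}\,\rho\big)}{-K}\,\big(d\varphi^2 + \sin^2\!\varphi\, d\theta^2\big), \qquad K < 0,
```

which reduces to the Euclidean metric as K approaches 0. Luneburg's hypothesis amounts to the claim that perceived distances behave like geodesic distances in such a space.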


Perception | 2000

Large Systematic Deviations in Visual Parallelism

Rh Raymond Cuijpers; Astrid M. L. Kappers; Jan J. Koenderink

The visual environment is distorted with respect to the physical environment. Luneburg [1947, Mathematical Analysis of Binocular Vision (Princeton, NJ: Princeton University Press)] assumed that visual space could be described by a Riemannian space of constant curvature. Such a space is described by a metric which defines the distance between any two points. It is uncertain, however, whether such a metric description is valid. Two experiments are reported in which subjects were asked to set two bars parallel to each other in a horizontal plane. The backdrop consisted of wrinkled black plastic sheeting, and the floor and ceiling were hidden by means of a horizontal aperture restricting the visual field of the subject vertically to 10 deg. We found that large deviations (of up to 40°) occur and that the deviations are proportional to the separation angle: on average, the proportion is 30%. These deviations occur for 30°, 60°, 120°, and 150° reference orientations, but not for 0° and 90° reference orientations; there the deviation is approximately 0° for most subjects. A Riemannian space of constant curvature, therefore, cannot be an adequate description. If it were, then the deviation between the orientation of the test and the reference bar would be independent of the reference orientation. Furthermore, we found that the results are independent of the distance of the bars from the subject, which suggests either that visual space has a zero mean curvature, or that the parallelity task is essentially a monocular task. The fact that the deviations vanish for a 0° and 90° orientation is reminiscent of the oblique effect reported in the literature. However, the ‘oblique effect’ reported here takes place in a horizontal plane at eye height, not in a frontoparallel plane.
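The reported proportionality can be summarized compactly (the symbols here are ours, not the paper's): writing the separation angle between the reference and test bars as \(\gamma\) and the deviation of the parallel setting as \(\Delta\varphi\),

```latex
\Delta\varphi \;\approx\; 0.30\,\gamma \quad \text{(oblique reference orientations: } 30^\circ, 60^\circ, 120^\circ, 150^\circ\text{)}, \qquad \Delta\varphi \;\approx\; 0^\circ \quad \text{(reference at } 0^\circ \text{ or } 90^\circ\text{)}.
```

The dependence of \(\Delta\varphi\) on the reference orientation is precisely what rules out a constant-curvature metric description.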


international conference on social robotics | 2011

Making robots persuasive: the influence of combining persuasive strategies (gazing and gestures) by a storytelling robot on its persuasive power

Jrc Jaap Ham; R René Bokhorst; Rh Raymond Cuijpers; D David van der Pol; John-John Cabibihan

Social agency theory suggests that when an (artificial) agent combines persuasive strategies, its persuasive power increases. Therefore, we investigated whether a robot that uses two persuasive strategies is more persuasive than a robot that uses only one. Because gazing and gestures are two crucial persuasive strategies in human face-to-face persuasion, the current research investigated the combined and individual contributions of gestures and gazing to the persuasiveness of a storytelling robot. To 48 participants, a robot told a persuasive story about the aversive consequences of lying. The robot used persuasive gestures (or not) and gazing (or not) to accompany this persuasive story. We assessed persuasiveness by asking participants to evaluate the lying individual in the story told by the robot. Results indicated that only gazing independently led to increased persuasiveness. Using persuasive gestures only led to increased persuasiveness when the robot combined it with (the persuasive strategy of) gazing. Without gazing, using persuasive gestures diminished robot persuasiveness. The implications of the current findings for theory and design of persuasive robots are discussed.


Attention Perception & Psychophysics | 2000

Investigation of visual space using an exocentric pointing task

Rh Raymond Cuijpers; Astrid M. L. Kappers; Jan J. Koenderink

Classically, it has been assumed that visual space can be represented by a metric. This means that the distance between points and the angle between lines can be uniquely defined. However, this assumption has never been tested. Also, measurements outdoors, where monocular cues are abundant, conflict with this model. This paper reports on two experiments in which the structure of visual space was investigated, using an exocentric pointing task. In the first experiment, we measured the influence of the separation between pointer and target and of the orientation of the stimuli with respect to the observer. This was done both monocularly and binocularly. It was found that the deviation of the pointer settings depended linearly on the orientation, indicating that visual space is anisotropic. The deviations for configurations that were symmetrical in the median plane were approximately the same, indicating that left/right symmetry was maintained. The results for monocular and binocular conditions were very different, which indicates that stereopsis was an important cue. In both conditions, there were large deviations from the veridical. In the second experiment, the relative distance of the pointer and the target with respect to the observer was varied in both the monocular and the binocular conditions. The relative distance turned out to be the main parameter for the ranges used (1–5 m). Any distance function must have an expanding and a compressing part in order to describe the data. In the binocular case, the results were much more consistent than in the monocular case and had a smaller standard deviation. Nevertheless, the systematic mispointings remained large. It can therefore be concluded that stereopsis improves space perception but does not improve veridicality.


Acta Psychologica | 2001

On the role of external reference frames on visual judgements of parallelity

Rh Raymond Cuijpers; Astrid M. L. Kappers; Jan J. Koenderink

In a previous study we found large systematic errors (up to 40 degrees) when subjects adjusted the orientation of a horizontal test bar until it appeared parallel to a horizontal reference bar, both bars rotating about their vertical axes. The deviations increased linearly with the separation angle but vanished when the orientation of the reference bar was either parallel or perpendicular to the median line. In order to test the assumption that external references caused these deviations to vanish, the same task was repeated in four different conditions: in the normal condition the horizontal aperture, formed by a cabin, and the facing wall of the room were frontoparallel to the subject; in the other conditions either the room, the cabin or both were oriented 30 degrees to the right with respect to the subject. It was found that, depending on the subject, the occurrence of the vanishing deviations covaried with the orientation of the cabin or the room. Evidently, subjects are influenced by the external references provided by the walls of the room and the sides of the cabin. The results indicate that a description of visual space by a Riemannian metric of constant curvature is not valid in a visual environment containing external references.


international conference on social robotics | 2011

Design of robust robotic proxemic behaviour

E Elena Torta; Rh Raymond Cuijpers; James F. Juola; D David van der Pol

Personal robots that share the same space with humans need to be socially acceptable and effective as they interact with people. In this paper we focus our attention on the definition of a behaviour-based robotic architecture that (1) allows the robot to navigate safely in a cluttered and dynamically changing domestic environment and (2) encodes embodied non-verbal interactions: the robot respects the user's personal space by choosing the appropriate distance and direction of approach. The model of the personal space is derived from a human-robot psycho-physical study and is described in a convenient mathematical form. The robot's target location is dynamically inferred through the solution of a Bayesian filtering problem. The validation of the overall behavioural architecture shows that the robot is able to exhibit appropriate proxemic behaviour.
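The Bayesian filtering step mentioned in the abstract can be illustrated with a minimal discrete Bayes filter that maintains a belief over a few candidate approach targets and sharpens it with noisy observations. This is a sketch under invented assumptions: the candidate positions, the Gaussian measurement model, and all numbers below are hypothetical and are not the authors' implementation.

```python
import math

def bayes_update(prior, positions, measurement, sigma):
    """One recursive Bayesian update of a discrete belief over candidate
    target positions, using a Gaussian likelihood of a noisy 2-D position
    measurement. Returns the normalized posterior."""
    posterior = []
    for p, (x, y) in zip(prior, positions):
        dx, dy = x - measurement[0], y - measurement[1]
        likelihood = math.exp(-(dx * dx + dy * dy) / (2.0 * sigma ** 2))
        posterior.append(p * likelihood)
    total = sum(posterior)
    return [p / total for p in posterior]

# Hypothetical candidate approach targets around the user (metres).
positions = [(1.0, 0.0), (0.0, 1.0), (-1.0, 0.0)]
belief = [1.0 / 3.0] * 3                                # uniform prior

# Three noisy observations, all near the first candidate.
for z in [(0.9, 0.1), (1.1, -0.1), (0.95, 0.0)]:
    belief = bayes_update(belief, positions, z, sigma=0.5)

best = positions[belief.index(max(belief))]             # -> (1.0, 0.0)
```

In the paper's setting the belief would be defined over continuous robot target locations and fed by the perception pipeline; the discrete grid here only shows the predict-free measurement-update structure of the filter.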

Collaboration


Dive into Rh Raymond Cuijpers's collaborations.

Top Co-Authors

E Elena Torta
Eindhoven University of Technology

James F. Juola
Eindhoven University of Technology

D David van der Pol
Eindhoven University of Technology

Jan J. Koenderink
Katholieke Universiteit Leuven

Eli Brenner
VU University Amsterdam

Harold Bekkering
Radboud University Nijmegen

Jrc Jaap Ham
Eindhoven University of Technology