
Publication


Featured research published by Tim Cornelissen.


Behavior Research Methods | 2015

CancellationTools: All-in-one software for administration and analysis of cancellation tasks.

Edwin S. Dalmaijer; Stefan Van der Stigchel; Tanja C.W. Nijboer; Tim Cornelissen; Masud Husain

In a cancellation task, a participant is required to search for and cross out (“cancel”) targets, which are usually embedded among distractor stimuli. The number of cancelled targets and their locations can be used to diagnose the neglect syndrome after stroke. In addition, the organization of search provides a potentially useful way to measure executive control over multitarget search. Although many useful cancellation measures have been introduced, most fail to make their way into research studies and clinical practice due to the practical difficulty of acquiring such parameters from traditional pen-and-paper measures. Here we present new, open-source software that is freely available to all. It allows researchers and clinicians to flexibly administer computerized cancellation tasks using stimuli of their choice, and to directly analyze the data in a convenient manner. The automated analysis suite provides output that includes almost all of the currently existing measures, as well as several new ones introduced here. All tasks can be performed using either a computer mouse or a touchscreen as an input device, and an online version of the task runtime is available for tablet devices. A summary of the results is produced in a single A4-sized PDF document, including high-quality data visualizations. For research purposes, batch analysis of large datasets is possible. In sum, CancellationTools allows users to employ a flexible, computerized cancellation task, which provides extensive benefits and ease of use.
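The abstract does not spell out individual measures, but one widely used cancellation measure, the centre of cancellation (CoC), is easy to illustrate. The sketch below is a generic Python example, not CancellationTools' actual API; the function name and example coordinates are hypothetical.

```python
import numpy as np

def centre_of_cancellation(cancelled_x, page_width):
    """Centre of cancellation: the mean horizontal position of all
    cancelled targets, rescaled so that -1 is the left page edge,
    0 the centre, and +1 the right edge. A rightward shift is a
    classic signature of left-sided neglect."""
    x = np.asarray(cancelled_x, dtype=float)
    return float(np.mean(2.0 * x / page_width - 1.0))

# Hypothetical pixel x-coordinates of cancelled targets on a 1200-px page:
print(centre_of_cancellation([700, 820, 910, 1050], page_width=1200))  # 0.45
```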


Behavior Research Methods | 2018

What to expect from your remote eye-tracker when participants are unrestrained

Diederick C Niehorster; Tim Cornelissen; Kenneth Holmqvist; Ignace T. C. Hooge; Roy S. Hessels

The marketing materials of remote eye-trackers suggest that data quality is invariant to the position and orientation of the participant, as long as the participant's eyes are within the eye-tracker's headbox, the area where tracking is possible. As such, remote eye-trackers are marketed as allowing reliable recording of gaze from participant groups that cannot be restrained, such as infants, schoolchildren, and patients with muscular or brain disorders. Practical experience and previous research, however, tell us that eye-tracking data quality, e.g., the accuracy of the recorded gaze position and the amount of data loss, deteriorates (compared to well-trained participants in chinrests) when the participant is unrestrained and assumes a non-optimal pose in front of the eye-tracker. How, then, can researchers working with unrestrained participants choose an eye-tracker? Here we investigated the performance of five popular remote eye-trackers from EyeTribe, SMI, SR Research, and Tobii in a series of tasks in which participants took on non-optimal poses. We report that the tested systems varied in the amount of data loss and systematic offsets observed during our tasks. The EyeLink and EyeTribe in particular had large problems. Furthermore, the Tobii eye-trackers reported data for two eyes when only one eye was visible to the eye-tracker. This study provides practical insight into how popular remote eye-trackers perform when recording from unrestrained participants. It furthermore provides a testing method that researchers can use to evaluate whether an eye-tracker is suitable for studying a certain target population, and that manufacturers can use during the development of new eye-trackers.
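The two data-quality measures the abstract emphasizes, systematic offset (accuracy) and data loss, can be computed from raw gaze samples recorded while a participant fixates a known target. A minimal sketch, with hypothetical names and NaN-coded invalid samples:

```python
import numpy as np

def data_quality(gaze_x, gaze_y, target_x, target_y):
    """Return (accuracy, data_loss): accuracy as the mean Euclidean
    offset of valid samples from a known fixation target, and data
    loss as the fraction of samples flagged invalid (NaN) by the
    tracker. Units follow the input (e.g., degrees or pixels)."""
    gx = np.asarray(gaze_x, dtype=float)
    gy = np.asarray(gaze_y, dtype=float)
    valid = ~(np.isnan(gx) | np.isnan(gy))
    data_loss = 1.0 - valid.mean()
    accuracy = np.hypot(gx[valid] - target_x, gy[valid] - target_y).mean()
    return accuracy, data_loss
```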


Vision Research | 2015

The art of braking: Post saccadic oscillations in the eye tracker signal decrease with increasing saccade size

Ignace T. C. Hooge; Marcus Nyström; Tim Cornelissen; Kenneth Holmqvist

Recent research has shown that the pupil signal from video-based eye trackers contains post-saccadic oscillations (PSOs). These reflect pupil motion relative to the limbus (Nyström, Hooge, & Holmqvist, 2013). More knowledge about video-based eye tracker signals is essential to allow comparison between findings obtained from modern systems and those of older eye-tracking technologies (e.g., coils and measurement of the dual Purkinje image, DPI). We investigated PSOs in horizontal and vertical saccades of different sizes with two high-quality video eye trackers. PSOs were very similar within observers, but not between observers. PSO amplitude decreased with increasing saccade size, and this effect was even stronger in vertical saccades; PSOs were almost absent in large vertical saccades. Based on this observation, we conclude that the occurrence of PSOs is related to deceleration at the end of a saccade. That PSOs are saccade-size dependent and idiosyncratic is a problem for algorithmic determination of saccade endings. Careful description of the eye tracker, its signal, and the procedure used to extract saccades is required to enable researchers to compare data from different eye trackers.
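To see why PSOs complicate the algorithmic determination of saccade endings, consider a simple velocity-threshold offset detector. The sketch below is a generic illustration, not the authors' algorithm; the threshold and window values are hypothetical. An oscillating post-saccadic signal can re-cross the threshold and postpone the detected offset.

```python
import numpy as np

def saccade_offset(velocity, threshold=30.0, min_below=5):
    """Return the index of the first sample at which velocity (deg/s)
    stays below `threshold` for `min_below` consecutive samples.
    Post-saccadic oscillations can push velocity back above the
    threshold after the eye has landed, shifting the detected end."""
    run = 0
    for i, v in enumerate(np.asarray(velocity, dtype=float)):
        run = run + 1 if v < threshold else 0
        if run == min_below:
            return i - min_below + 1
    return None  # no stable below-threshold period found
```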


Behavior Research Methods | 2015

Qualitative tests of remote eyetracker recovery and performance during head rotation

Roy S. Hessels; Tim Cornelissen; Chantal Kemner; Ignace T. C. Hooge

What are the decision criteria for choosing an eyetracker? Often the choice is based on the manufacturer's specifications of the validity (accuracy) and reliability (precision) of measurements that can be achieved using a particular eyetracker. These specifications are mostly achieved under optimal conditions, for example, by using an artificial eye or trained participants fixed in a chinrest. Research, however, does not always take place in optimal conditions: for instance, when investigating eye movements in infants, school children, and patient groups with disorders such as attention-deficit hyperactivity disorder, it is practically impossible to restrict movement. We modeled movements often seen in infant research with two behaviors: (1) looking away from and back to the screen, to investigate eyetracker recovery, and (2) shifted head orientations, to investigate eyetracker performance with nonoptimal orientations of the eyes. We investigated how eight eyetracking setups by three manufacturers (SMI, Tobii, and LC Technologies) coped with these modeled behaviors in adults. We report that the tested SMI eyetrackers dropped in sampling frequency when the eyes were not visible to the eyetracker, whereas the other systems did not, and we discuss the potential consequences thereof. Furthermore, we report that the tested eyetrackers varied in their rates of data loss and systematic offsets during shifted head orientations. We conclude that (prospective) eye-movement researchers who cannot restrict movement or prevent nonoptimal head orientations in their participants may benefit from testing their eyetracker under nonoptimal conditions. Additionally, researchers should be aware of the data loss and inaccuracies that can result from nonoptimal head orientations.
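Eyetracker recovery, the first behavior modeled above, can be quantified as the time between the end of a look-away episode and the first valid sample the tracker reports afterwards. A minimal sketch with hypothetical inputs (a per-sample boolean validity stream and a nominal sampling frequency):

```python
import numpy as np

def recovery_time(valid, lookaway_end_idx, fs):
    """Seconds from the end of a look-away episode (sample index
    `lookaway_end_idx`) until the tracker's first valid sample.
    `valid` is a per-sample boolean array; `fs` is the nominal
    sampling frequency in Hz. Returns None if the tracker never
    recovers within the recording."""
    later = np.flatnonzero(np.asarray(valid, dtype=bool)[lookaway_end_idx:])
    return later[0] / fs if later.size else None
```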


Attention Perception & Psychophysics | 2017

Stuck on semantics: Processing of irrelevant object-scene inconsistencies modulates ongoing gaze behavior

Tim Cornelissen; Melissa L.-H. Võ

People have an amazing ability to identify objects and scenes with only a glimpse. How automatic is this scene and object identification? Are scene and object semantics, let alone their semantic congruity, processed to a degree that modulates ongoing gaze behavior even if they are irrelevant to the task at hand? Objects that do not fit the semantics of the scene (e.g., a toothbrush in an office) are typically fixated longer and more often than objects that are congruent with the scene context. In this study, we overlaid a letter T onto photographs of indoor scenes and instructed participants to search for it. Some of these background images contained scene-incongruent objects. Despite their lack of relevance to the search, participants spent more total time looking at semantically incongruent objects than at congruent objects in the same position in the scene. Subsequent tests of explicit and implicit memory showed that participants remembered few of the incongruent objects, and no more of them than of the congruent objects. We argue that when we view natural environments, scene and object relationships are processed obligatorily, such that irrelevant semantic mismatches between scene and object identity can modulate ongoing eye-movement behavior.
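The central dependent measure here, total time spent looking at a critical object, is typically computed by summing the durations of fixations that land inside the object's region of interest. A minimal sketch, with hypothetical variable names and a rectangular bounding box standing in for the study's object regions:

```python
import numpy as np

def total_dwell_time(fix_x, fix_y, fix_dur, box):
    """Sum the durations of fixations whose coordinates fall inside a
    rectangular region of interest `box` = (x0, y0, x1, y1). Comparing
    this value for congruent vs. incongruent objects in the same scene
    position gives the dwell-time effect described above."""
    x0, y0, x1, y1 = box
    fx = np.asarray(fix_x, dtype=float)
    fy = np.asarray(fix_y, dtype=float)
    fd = np.asarray(fix_dur, dtype=float)
    inside = (fx >= x0) & (fx <= x1) & (fy >= y0) & (fy <= y1)
    return fd[inside].sum()
```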


Canadian Journal of Experimental Psychology | 2017

Gaze Behavior to Faces during Dyadic Interaction

Roy S. Hessels; Tim Cornelissen; Ignace T. C. Hooge; Chantal Kemner

A long-standing hypothesis is that humans have a bias for fixating the eye region in the faces of others. Most studies have tested this hypothesis with static images or videos of faces, yet recent studies suggest that the use of such “nonresponsive” stimuli might overlook an influence of social context. The present study addressed whether the bias for fixating the eye region in faces would persist in a situation that allowed for social interaction. In Experiment 1, we demonstrate a setup in which a duo could engage in social interaction while their eye movements were recorded. Here, we show that there is a bias for fixating the eye region of a partner who is physically present. Moreover, we report that the time one partner in a duo spends looking at the eyes is a good predictor of how long the other partner looks at the eyes. In Experiment 2, we investigated whether participants attune to the level of eye contact instigated by a partner by having a confederate pose as one of the partners. The confederate was instructed either to fixate the eyes of the observer or to scan the entire face. Gaze behaviour of the confederate did not affect gaze behaviour of the observers. We conclude that there is a bias to fixate the eyes when partners can engage in social interaction. In addition, the amount of time spent looking at the eyes is duo-dependent, but not easily manipulated by instructing the gaze behaviour of one partner.
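The Experiment 1 result, that one partner's eye-looking predicts the other's, amounts to computing each person's proportion of gaze time on the partner's eye region and relating the two across duos. A minimal sketch; the function name and the per-duo numbers are purely hypothetical.

```python
import numpy as np

def eye_dwell_proportion(on_eyes, valid):
    """Proportion of valid gaze samples that fall on the partner's
    eye region (both arguments are per-sample boolean arrays)."""
    on = np.asarray(on_eyes, dtype=bool)
    ok = np.asarray(valid, dtype=bool)
    return on[ok].mean()

# Hypothetical per-duo proportions for partners A and B; their
# correlation indexes how duo-dependent eye-looking is.
a = np.array([0.61, 0.45, 0.72, 0.38, 0.55])
b = np.array([0.58, 0.40, 0.69, 0.44, 0.60])
print(np.corrcoef(a, b)[0, 1])
```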


Journal of Experimental Psychopathology | 2017

Eye contact takes two – autistic and social anxiety traits predict gaze behavior in dyadic interaction

Roy S. Hessels; Gijs A. Holleman; Tim Cornelissen; Ignace T. C. Hooge; Chantal Kemner

Research on social impairments in psychopathology has relied heavily on the face-processing literature. However, although many sub-systems of facial information processing have been described, recent evidence suggests that the generalizability of these findings to social settings may be limited. The main argument is that in social interaction, the content of faces is more dynamic, and more dependent on the interplay between interaction partners, than the content of a non-responsive face (e.g., pictures or videos) as portrayed in a typical experiment. This raises the question of whether gaze atypicalities to non-responsive faces in certain disorders generalize to faces in interaction. In the present study, a dual eye-tracking setup capable of recording gaze with high resolution was used to investigate how gaze behavior in interaction is related to traits of Autism Spectrum Disorder (ASD) and Social Anxiety Disorder (SAD). As clinical ASD and SAD groups have exhibited deficiencies in reciprocal social behavior, traits of these two conditions were assessed in a general population. We report that the gaze behavior in interaction of individuals scoring high on ASD and SAD traits corroborates hypotheses posed in typical face-processing research using non-responsive stimuli. Moreover, our findings on the relation between paired gaze states (when and how often pairs look at each other's eyes simultaneously or alternately) and ASD and SAD traits bear resemblance to prevailing models in the ASD literature (the 'gaze aversion' model) and the SAD literature (the 'vigilant-avoidance' model). Pair-based analyses of gaze may reveal behavioral patterns crucial to our understanding of ASD and SAD and, more generally, to our understanding of eye movements as social signals in interaction.
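The paired gaze states mentioned here can be derived by classifying each synchronized sample of the dual recording by who is looking at whose eyes. The sketch below is one plausible encoding, not necessarily the paper's exact definition:

```python
import numpy as np

def paired_gaze_states(a_on_eyes, b_on_eyes):
    """Classify each synchronized sample of a dyad into a paired gaze
    state: 0 = neither looks at the other's eyes, 1 = only A does,
    2 = only B does, 3 = both do ('eye contact')."""
    a = np.asarray(a_on_eyes, dtype=bool).astype(int)
    b = np.asarray(b_on_eyes, dtype=bool).astype(int)
    return a + 2 * b

# Hypothetical boolean streams; print the proportion of time per state.
states = paired_gaze_states([1, 1, 0, 1, 0], [0, 1, 1, 1, 0])
print(np.bincount(states, minlength=4) / states.size)
```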


Behavior Research Methods | 2017

Real-time sharing of gaze data between multiple eye trackers: evaluation, tools, and advice

Marcus Nyström; Diederick C Niehorster; Tim Cornelissen; Henrik Garde

Technological advancements, in combination with significant reductions in price, have made it practically feasible to run experiments with multiple eye trackers. This enables new types of experiments with simultaneous recordings of eye-movement data from several participants, which is of interest for researchers in, e.g., social and educational psychology. The Lund University Humanities Laboratory recently acquired 25 remote eye trackers, which are connected over a local wireless network. As a first step toward running experiments with this setup, demanding situations with real-time sharing of gaze data were investigated in terms of network performance as well as clock and screen synchronization. Results show that data can be shared with sufficiently low packet loss (0.1%) and latency (M = 3 ms, MAD = 2 ms) across 8 eye trackers at a rate of 60 Hz. For similar performance using 24 computers, the send rate needs to be reduced to 20 Hz. To help researchers conduct similar measurements on their own multi-eye-tracker setup, open-source software written in Python and PsychoPy is provided. Part of the software contains a minimal working example to help researchers kick-start experiments with two or more eye trackers.
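Latency and packet-loss figures like those above come from timestamped packets exchanged between machines. As a rough illustration of the idea (not the Lund software itself; the function names and JSON payload are assumptions, and one-way latency additionally requires synchronized clocks, e.g. via NTP), gaze samples can be sent over UDP with a sequence number and send time:

```python
import json
import socket
import time

def send_sample(sock, addr, seq, x, y):
    """Send one gaze sample as a UDP packet carrying a sequence
    number and a send timestamp."""
    payload = json.dumps({"seq": seq, "t": time.time(), "x": x, "y": y})
    sock.sendto(payload.encode("utf-8"), addr)

def receive_sample(sock):
    """Receive one packet; gaps in `seq` indicate packet loss, and
    (given synchronized clocks) now - t estimates one-way latency."""
    data, _ = sock.recvfrom(1024)
    packet = json.loads(data.decode("utf-8"))
    latency = time.time() - packet["t"]
    return packet["seq"], latency, (packet["x"], packet["y"])

# Loopback demo: bind a receiver, send one sample, read it back.
recv_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv_sock.bind(("127.0.0.1", 50007))
send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_sample(send_sock, ("127.0.0.1", 50007), seq=0, x=512.0, y=384.0)
print(receive_sample(recv_sock))
```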


Scientific Reports | 2018

The role of scene summary statistics in object recognition

Tim Lauer; Tim Cornelissen; Dejan Draschkow; Verena Willenbockel; Melissa L.-H. Võ

Objects that are semantically related to the visual scene context are typically better recognized than unrelated objects. While context effects on object recognition are well studied, the question of which particular visual information in an object's surroundings modulates its semantic processing is still unresolved. Typically, one would expect contextual influences to arise from high-level, semantic components of a scene, but what if even low-level features could modulate object processing? Here, we generated seemingly meaningless textures of real-world scenes, which preserved similar summary statistics but discarded spatial layout information. In Experiment 1, participants categorized such textures better than colour controls that lacked higher-order scene statistics, while original scenes resulted in the highest performance. In Experiment 2, participants recognized briefly presented consistent objects on scenes significantly better than inconsistent objects, whereas on textures, consistent objects were recognized only slightly more accurately. In Experiment 3, we recorded event-related potentials and observed a pronounced mid-central negativity in the N300/N400 time windows for inconsistent relative to consistent objects on scenes. Critically, inconsistent objects on textures also triggered N300/N400 effects with a comparable time course, though less pronounced. Our results suggest that a scene's low-level features contribute to the effective processing of objects in complex real-world environments.
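The study's textures were synthesized to preserve a rich set of summary statistics while discarding spatial layout. As a much simpler stand-in for that manipulation (plainly a different technique: phase scrambling preserves only the Fourier amplitude spectrum, a narrower class of low-level statistics than texture synthesis), here is a sketch:

```python
import numpy as np

def phase_scramble(img, seed=0):
    """Replace a grayscale image's Fourier phase with the phase of a
    random noise image while keeping its amplitude spectrum. Spatial
    layout is destroyed; one class of low-level summary statistics
    (the amplitude spectrum) is preserved. Note: the study used
    texture synthesis, which preserves richer statistics than this."""
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal(img.shape)
    amp = np.abs(np.fft.fft2(img))
    phase = np.angle(np.fft.fft2(noise))
    return np.real(np.fft.ifft2(amp * np.exp(1j * phase)))

# Hypothetical usage on a random "image":
scrambled = phase_scramble(np.random.default_rng(1).random((128, 128)))
```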


Journal of Eye Movement Research | 2013

Properties of post-saccadic oscillations induced by eye trackers

Ignace T. C. Hooge; Marcus Nyström; Tim Cornelissen; Kenneth Holmqvist

Collaboration


Dive into Tim Cornelissen's collaborations.

Top Co-Authors

Melissa L.-H. Võ

Goethe University Frankfurt


Tim Lauer

Goethe University Frankfurt
