Marcus Nyström
Lund University
Publications
Featured research published by Marcus Nyström.
Behavior Research Methods | 2010
Marcus Nyström; Kenneth Holmqvist
Event detection is used to classify recorded gaze points into periods of fixation, saccade, smooth pursuit, blink, and noise. Although there is an overall consensus that current algorithms for event detection have serious flaws and that a de facto standard for event detection does not exist, surprisingly little work has been done to remedy this problem. We suggest a new velocity-based algorithm that takes several of the previously known limitations into account. Most importantly, the new algorithm identifies so-called glissades, a wobbling movement at the end of many saccades, as a separate class of eye movements. Part of the solution involves designing an adaptive velocity threshold that makes the event detection less sensitive to variations in noise level and renders the algorithm settings-free for the user. We demonstrate the performance of the new algorithm on eye movements recorded during reading and scene perception and compare it with two of the most commonly used algorithms today. Results show that, unlike the currently used algorithms, fixations, saccades, and glissades are robustly identified by the new algorithm. Using this algorithm, we found that glissades occur in about half of the saccades, during both reading and scene perception, and that they have an average duration close to 24 msec. Due to the high prevalence and long durations of glissades, we argue that researchers must actively choose whether to assign the glissades to saccades or fixations; the choice significantly affects dependent variables such as fixation and saccade duration. Current algorithms do not offer this choice, and their assignments of each glissade are largely arbitrary.
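The adaptive-threshold idea can be sketched as follows: starting from a generous velocity threshold, the threshold is repeatedly recomputed as the mean plus several standard deviations of the samples below it, until it converges. This is a minimal illustration of the principle, not the published implementation; the function name, starting value, multiplier, and convergence tolerance are assumptions.

```python
import numpy as np

def adaptive_velocity_threshold(velocities, start_deg_s=100.0, k=6.0, tol=1.0):
    """Iteratively estimate a data-driven saccade velocity threshold (deg/s).

    From an initial guess, the threshold is recomputed as
    mean + k * SD of all velocity samples *below* the current
    threshold, until it changes by less than `tol`.
    """
    pt = start_deg_s
    while True:
        below = velocities[velocities < pt]
        new_pt = below.mean() + k * below.std()
        if abs(new_pt - pt) < tol:
            return new_pt
        pt = new_pt
```

Because the threshold is driven by the noise distribution itself, noisier recordings automatically receive a higher threshold, which is what makes the detection settings-free for the user.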
eye tracking research & application | 2012
Kenneth Holmqvist; Marcus Nyström; Fiona Mulvey
Data quality is essential to the validity of research results and to the quality of gaze interaction. We argue that the lack of standard measures for eye data quality makes several aspects of manufacturing and using eye trackers, as well as researching eye movements and vision, more difficult than necessary. Uncertainty regarding the comparability of research results is a considerable impediment to progress in the field. In this paper, we illustrate why data quality matters and review previous work on how eye data quality has been measured and reported. The goal is to achieve a common understanding of what data quality is and how it can be defined, measured, evaluated, and reported.
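One widely used precision measure of the kind reviewed here is the root-mean-square of sample-to-sample angular distances; lower values mean a less noisy signal. A minimal sketch (angles assumed already in degrees; the function name is illustrative):

```python
import numpy as np

def rms_s2s_precision(x_deg, y_deg):
    """RMS of sample-to-sample angular distances (deg) of a gaze signal."""
    dx = np.diff(np.asarray(x_deg, dtype=float))
    dy = np.diff(np.asarray(y_deg, dtype=float))
    return np.sqrt(np.mean(dx**2 + dy**2))
```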
eye tracking research & application | 2010
Halszka Jarodzka; Kenneth Holmqvist; Marcus Nyström
A great need exists in many fields of eye-tracking research for a robust and general method for scanpath comparisons. Current measures either quantize scanpaths in space (string editing measures like the Levenshtein distance) or in time (measures based on attention maps). This paper proposes a new pairwise scanpath similarity measure. Unlike previous measures that either use AOI sequences or forgo temporal order, the new measure defines scanpaths as a series of geometric vectors and compares temporally aligned scanpaths across several dimensions: shape, fixation position, length, direction, and fixation duration. This approach offers more multifaceted insights to how similar two scanpaths are. Eight fictitious scanpath pairs are tested to elucidate the strengths of the new measure, both in itself and compared to two of the currently most popular measures - the Levenshtein distance and attention map correlation.
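The vector representation at the heart of the measure can be illustrated as follows. This is a simplified sketch of a single dimension (shape) only, assuming the two scanpaths are already temporally aligned and have the same number of fixations; the names are ours, not the paper's:

```python
import numpy as np

def saccade_vectors(fixations):
    """Represent a scanpath, given as a list of (x, y) fixation
    positions, as the sequence of saccade vectors between them."""
    f = np.asarray(fixations, dtype=float)
    return np.diff(f, axis=0)

def shape_difference(path_a, path_b):
    """Mean length of the differences between temporally aligned
    saccade vectors (assumes equal numbers of fixations)."""
    va, vb = saccade_vectors(path_a), saccade_vectors(path_b)
    return np.linalg.norm(va - vb, axis=1).mean()
```

Note that because only the vectors are compared, this shape dimension is invariant to a pure translation of one scanpath; position differences are captured by a separate dimension in the full measure.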
Behavior Research Methods | 2013
Marcus Nyström; Richard Andersson; Kenneth Holmqvist; Joost van de Weijer
Recording eye movement data with high quality is often a prerequisite for producing valid and replicable results and for drawing well-founded conclusions about the oculomotor system. Today, many aspects of data quality are often informally discussed among researchers but are very seldom measured, quantified, and reported. Here we systematically investigated how the calibration method, aspects of participants’ eye physiologies, the influences of recording time and gaze direction, and the experience of operators affect the quality of data recorded with a common tower-mounted, video-based eyetracker. We quantified accuracy, precision, and the amount of valid data, and found an increase in data quality when the participant indicated that he or she was looking at a calibration target, as compared to leaving this decision to the operator or the eyetracker software. Moreover, our results provide statistical evidence of how factors such as glasses, contact lenses, eye color, eyelashes, and mascara influence data quality. This method and the results provide eye movement researchers with an understanding of what is required to record high-quality data, as well as providing manufacturers with the knowledge to build better eyetrackers.
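The three quantities quantified here can be sketched in a few lines: accuracy as the mean angular offset from a known target, and valid data as the proportion of samples where tracking was not lost. A minimal illustration under the assumption that lost samples are coded as NaN (function names are ours):

```python
import numpy as np

def accuracy_deg(gaze_x, gaze_y, target_x, target_y):
    """Mean angular offset (deg) between recorded gaze and a known
    target position; NaN samples (lost tracking) are ignored."""
    gx = np.asarray(gaze_x, dtype=float)
    gy = np.asarray(gaze_y, dtype=float)
    ok = ~(np.isnan(gx) | np.isnan(gy))
    return np.hypot(gx[ok] - target_x, gy[ok] - target_y).mean()

def proportion_valid(gaze_x, gaze_y):
    """Fraction of samples in which both coordinates were tracked."""
    gx = np.asarray(gaze_x, dtype=float)
    gy = np.asarray(gaze_y, dtype=float)
    return np.mean(~(np.isnan(gx) | np.isnan(gy)))
```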
Behavior Research Methods | 2012
Richard Dewhurst; Marcus Nyström; Halszka Jarodzka; Tom Foulsham; Roger Johansson; Kenneth Holmqvist
Eye movement sequences—or scanpaths—vary depending on the stimulus characteristics and the task (Foulsham & Underwood Journal of Vision, 8(2), 6:1–17, 2008; Land, Mennie, & Rusted, Perception, 28, 1311–1328, 1999). Common methods for comparing scanpaths, however, are limited in their ability to capture both the spatial and temporal properties of which a scanpath consists. Here, we validated a new method for scanpath comparison based on geometric vectors, which compares scanpaths over multiple dimensions while retaining positional and sequential information (Jarodzka, Holmqvist, & Nyström, Symposium on Eye-Tracking Research and Applications (pp. 211–218), 2010). “MultiMatch” was tested in two experiments and pitted against ScanMatch (Cristino, Mathôt, Theeuwes, & Gilchrist, Behavior Research Methods, 42, 692–700, 2010), the most comprehensive adaptation of the popular Levenshtein method. In Experiment 1, we used synthetic data, demonstrating the greater sensitivity of MultiMatch to variations in spatial position. In Experiment 2, real eye movement recordings were taken from participants viewing sequences of dots, designed to elicit scanpath pairs with commonalities known to be problematic for algorithms (e.g., when one scanpath is shifted in locus or when fixations fall on either side of an AOI boundary). The results illustrate the advantages of a multidimensional approach, revealing how two scanpaths differ. For instance, if one scanpath is the reverse copy of another, the difference is in the direction but not the positions of fixations; or if a scanpath is scaled down, the difference is in the length of the saccadic vectors but not in the overall shape. As well as having enormous potential for any task in which consistency in eye movements is important (e.g., learning), MultiMatch is particularly relevant for “eye movements to nothing” in mental imagery and embodiment-of-cognition research, where satisfactory scanpath comparison algorithms are lacking.
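For reference, the Levenshtein baseline that ScanMatch adapts is the classic edit distance over strings of AOI labels, where each fixation is replaced by the label of the AOI it fell in. A standard implementation:

```python
def levenshtein(a, b):
    """Minimum number of insertions, deletions, and substitutions
    needed to turn AOI-label string `a` into string `b`."""
    m, n = len(a), len(b)
    prev = list(range(n + 1))
    for i in range(1, m + 1):
        cur = [i] + [0] * n
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            cur[j] = min(prev[j] + 1,        # deletion
                         cur[j - 1] + 1,     # insertion
                         prev[j - 1] + cost) # substitution
        prev = cur
    return prev[n]
```

The quantization into AOIs is exactly what makes this family of measures fragile when fixations fall on either side of an AOI boundary, one of the cases the dot-sequence experiment was designed to probe.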
IEEE Transactions on Biomedical Engineering | 2013
Linnéa Larsson; Marcus Nyström; Martin Stridh
A novel algorithm for detection of saccades and postsaccadic oscillations in the presence of smooth pursuit movements is proposed. The method combines saccade detection in the acceleration domain with specialized on- and offset criteria for saccades and postsaccadic oscillations. The performance of the algorithm is evaluated by comparing the detection results to those of an existing velocity-based adaptive algorithm and a manually annotated database. The results show that there is a good agreement between the events detected by the proposed algorithm and those in the annotated database, with Cohen's kappa around 0.8 for both a development and a test database. In conclusion, the proposed algorithm accurately detects saccades and postsaccadic oscillations as well as intervals of disturbances.
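The core idea of working in the acceleration domain can be illustrated with a deliberately simplified stand-in: differentiate the gaze signal twice and flag samples whose absolute acceleration exceeds a threshold. This is not the paper's on-/offset criteria, only a sketch of the principle; the threshold value and function name are assumptions.

```python
import numpy as np

def detect_saccades_acc(gaze_deg, fs, acc_thresh=4000.0):
    """Flag samples of a 1-D gaze signal (deg) whose absolute
    acceleration (deg/s^2) exceeds a threshold, given the
    sampling rate `fs` in Hz."""
    vel = np.gradient(np.asarray(gaze_deg, dtype=float)) * fs  # deg/s
    acc = np.gradient(vel) * fs                                # deg/s^2
    return np.abs(acc) > acc_thresh
```

Acceleration peaks sharply at saccade on- and offset even when the eye is already moving smoothly, which is why this domain copes better with pursuit than a plain velocity threshold.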
Vision Research | 2013
Marcus Nyström; Ignace T. C. Hooge; Kenneth Holmqvist
Current video eye trackers use information about the pupil center to estimate orientation and movement of the eye. While dual Purkinje eye trackers suffer from lens wobble and scleral search coils may be influenced by contact lens slippage directly after saccades, it is not known whether pupil-based eye trackers produce similar artifacts in the data. We recorded eye movements from participants making repetitive, horizontal saccades and compared the movement in the data with pupil and iris movements extracted from the eye images. Results showed that post-saccadic instabilities clearly exist in data recorded with a pupil-based eye tracker. They also exhibit a high degree of reproducibility across saccades and within participants. While the recorded eye movement data correlated well with the movement of the pupil center, the iris center showed little post-saccadic movement. This means that the pupil moves relative to the iris during post-saccadic eye movements, and that the eye movement data reflect pupil movement rather than eyeball rotation. Besides introducing inaccuracies and additional variability in the data, the pupil movement inside the eyeball influences the decision of when a saccade should end and the subsequent fixation should begin, and consequently affects higher-order analyses based on fixations and saccades.
Behavior Research Methods | 2017
Richard Andersson; Linnéa Larsson; Kenneth Holmqvist; Martin Stridh; Marcus Nyström
Almost all eye-movement researchers use algorithms to parse raw data and detect distinct types of eye movement events, such as fixations, saccades, and pursuit, and then base their results on these. Surprisingly, these algorithms are rarely evaluated. We evaluated the classifications of ten eye-movement event detection algorithms, on data from an SMI HiSpeed 1250 system, and compared them to manual ratings of two human experts. The evaluation focused on fixations, saccades, and post-saccadic oscillations. The evaluation used both event duration parameters, and sample-by-sample comparisons to rank the algorithms. The resulting event durations varied substantially as a function of what algorithm was used. This evaluation differed from previous evaluations by considering a relatively large set of algorithms, multiple events, and data from both static and dynamic stimuli. The main conclusion is that current detectors of only fixations and saccades work reasonably well for static stimuli, but barely better than chance for dynamic stimuli. Differing results across evaluation methods make it difficult to select one winner for fixation detection. For saccade detection, however, the algorithm by Larsson, Nyström and Stridh (IEEE Transactions on Biomedical Engineering, 60(9):2484–2493, 2013) outperforms all algorithms in data from both static and dynamic stimuli. The data also show how improperly selected algorithms applied to dynamic data misestimate fixation and saccade properties.
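The sample-by-sample comparison underlying such evaluations is typically summarized with Cohen's kappa, agreement between two labelings corrected for chance. A self-contained sketch (the function name is ours; libraries such as scikit-learn provide an equivalent):

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Chance-corrected sample-by-sample agreement between two
    event labelings of the same recording (e.g. 'fix', 'sac')."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    p_obs = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    ca, cb = Counter(labels_a), Counter(labels_b)
    p_exp = sum(ca[k] * cb[k] for k in ca) / n**2  # chance agreement
    return (p_obs - p_exp) / (1 - p_exp)
```

Kappa near 1 means near-perfect agreement; near 0 means agreement no better than chance, which is roughly where the evaluated detectors landed on dynamic stimuli.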
Biomedical Signal Processing and Control | 2015
Linnéa Larsson; Marcus Nyström; Richard Andersson; Martin Stridh
A novel algorithm for the detection of fixations and smooth pursuit movements in high-speed eye-tracking data is proposed, which uses a three-stage procedure to divide the intersaccadic intervals into a sequence of fixation and smooth pursuit events. The first stage performs a preliminary segmentation while the latter two stages evaluate the characteristics of each such segment and reorganize the preliminary segments into fixations and smooth pursuit events. Five different performance measures are calculated to investigate different aspects of the algorithm's behavior. The algorithm is compared to the current state-of-the-art (I-VDT and the algorithm in [11]), as well as to annotations by two experts. The proposed algorithm performs considerably better (average Cohen's kappa 0.42) than the I-VDT algorithm (average Cohen's kappa 0.20) and the algorithm in [11] (average Cohen's kappa 0.16), when compared to the experts' annotations.
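The underlying discrimination problem, deciding whether an intersaccadic segment is a fixation or pursuit, can be caricatured with a single spatial feature: the net displacement of the gaze over the segment. This toy rule is far simpler than the paper's three-stage, multi-feature procedure and is shown only to make the problem concrete; the threshold and names are assumptions.

```python
import numpy as np

def classify_intersaccadic(x_deg, y_deg, drift_thresh=1.0):
    """Toy discriminator: if the gaze drifts by more than
    `drift_thresh` degrees from the first to the last sample of an
    intersaccadic segment, call it smooth pursuit, else fixation."""
    drift = np.hypot(x_deg[-1] - x_deg[0], y_deg[-1] - y_deg[0])
    return "pursuit" if drift > drift_thresh else "fixation"
```

A single feature like this fails on noisy or curved segments, which is precisely why the proposed algorithm evaluates several spatial and directional characteristics per segment before committing to a label.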
Vision Research | 2016
Marcus Nyström; Ignace T. C. Hooge; Richard Andersson
While it is known that scleral search coils (measuring the rotation of the eye globe) and modern, video-based eye trackers (tracking the center of the pupil and the corneal reflection, CR) produce signals with different properties, the mechanisms behind the differences are less investigated. We measure how the size of the pupil affects the eye-tracker signal recorded during saccades with a common pupil-CR eye tracker. Eye movements were collected from four healthy participants and one person with an aphakic eye while performing self-paced, horizontal saccades at different levels of screen luminance and hence pupil size. Results show that the pupil and gaze signals, but not the CR signal, are affected by the size of the pupil; changes in saccade peak velocities in the gaze signal of more than 30% were found. It is important to be aware of this pupil-size-dependent change when comparing fine-grained oculomotor behavior across participants and conditions.
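The dependent measure in question, saccadic peak velocity, is simply the maximum absolute derivative of the gaze signal. A minimal sketch of how it is computed from sampled data (the function name is ours):

```python
import numpy as np

def peak_velocity(gaze_deg, fs):
    """Peak absolute velocity (deg/s) of a 1-D gaze signal
    sampled at `fs` Hz."""
    return np.abs(np.gradient(np.asarray(gaze_deg, dtype=float)) * fs).max()
```

Because the pupil center shifts within the eye as the pupil dilates or constricts, a pupil-based gaze signal can inflate or deflate this value even when the eyeball rotation itself is unchanged, which is the artifact the study quantifies.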