Publication


Featured research published by Richard Andersson.


Behavior Research Methods | 2013

The influence of calibration method and eye physiology on eyetracking data quality

Marcus Nyström; Richard Andersson; Kenneth Holmqvist; Joost van de Weijer

Recording eye movement data with high quality is often a prerequisite for producing valid and replicable results and for drawing well-founded conclusions about the oculomotor system. Today, many aspects of data quality are often informally discussed among researchers but are very seldom measured, quantified, and reported. Here we systematically investigated how the calibration method, aspects of participants’ eye physiologies, the influences of recording time and gaze direction, and the experience of operators affect the quality of data recorded with a common tower-mounted, video-based eyetracker. We quantified accuracy, precision, and the amount of valid data, and found an increase in data quality when the participant indicated that he or she was looking at a calibration target, as compared to leaving this decision to the operator or the eyetracker software. Moreover, our results provide statistical evidence of how factors such as glasses, contact lenses, eye color, eyelashes, and mascara influence data quality. This method and the results provide eye movement researchers with an understanding of what is required to record high-quality data, as well as providing manufacturers with the knowledge to build better eyetrackers.
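
As a rough illustration of the data-quality measures named above, the sketch below computes accuracy (mean offset from a known target), precision (sample-to-sample RMS noise), and the fraction of valid data from gaze samples in degrees of visual angle. The variable names and exact formulas follow common conventions in the eye-tracking literature and are not code from the paper.

```python
import numpy as np

def data_quality(gaze_x, gaze_y, target_x, target_y):
    """Accuracy, precision, and valid-data fraction for samples recorded
    while the participant fixates a known target. Gaze and target are in
    degrees of visual angle; names are illustrative, not the paper's code."""
    gx, gy = np.asarray(gaze_x, float), np.asarray(gaze_y, float)

    # Valid data: fraction of samples for which the tracker reported gaze.
    ok = ~np.isnan(gx) & ~np.isnan(gy)
    valid_fraction = ok.mean()
    gx, gy = gx[ok], gy[ok]

    # Accuracy: mean angular offset between gaze samples and the target.
    accuracy = np.hypot(gx - target_x, gy - target_y).mean()

    # Precision: RMS of sample-to-sample angular distances (noise level).
    precision_rms = np.sqrt(np.mean(np.hypot(np.diff(gx), np.diff(gy)) ** 2))

    return accuracy, precision_rms, valid_fraction
```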


Acta Psychologica | 2011

I see what you're saying: The integration of complex speech and scenes during language comprehension

Richard Andersson; Fernanda Ferreira; John M. Henderson

The effect of language-driven eye movements in a visual scene with concurrent speech was examined using complex linguistic stimuli and complex scenes. The processing demands were manipulated using speech rate and the temporal distance between mentioned objects. This experiment differs from previous research by using complex photographic scenes, three-sentence utterances, and mentioning four target objects. The main finding was that objects that are mentioned more slowly, more evenly placed, and isolated in the speech stream are more likely to be fixated after having been mentioned and are fixated faster. Surprisingly, even objects mentioned in the most demanding conditions still show an effect of language-driven eye movements. This supports research using concurrent speech and visual scenes, and shows that the behavior of matching visual and linguistic information is likely to generalize to language situations of high information load.


Behavior Research Methods | 2017

One algorithm to rule them all? An evaluation and discussion of ten eye movement event-detection algorithms

Richard Andersson; Linnéa Larsson; Kenneth Holmqvist; Martin Stridh; Marcus Nyström

Almost all eye-movement researchers use algorithms to parse raw data and detect distinct types of eye movement events, such as fixations, saccades, and pursuit, and then base their results on these. Surprisingly, these algorithms are rarely evaluated. We evaluated the classifications of ten eye-movement event detection algorithms, on data from an SMI HiSpeed 1250 system, and compared them to the manual ratings of two human experts. The evaluation focused on fixations, saccades, and post-saccadic oscillations. The evaluation used both event duration parameters and sample-by-sample comparisons to rank the algorithms. The resulting event durations varied substantially as a function of what algorithm was used. This evaluation differed from previous evaluations by considering a relatively large set of algorithms, multiple events, and data from both static and dynamic stimuli. The main conclusion is that current detectors of only fixations and saccades work reasonably well for static stimuli, but barely better than chance for dynamic stimuli. Differing results across evaluation methods make it difficult to select one winner for fixation detection. For saccade detection, however, the algorithm by Larsson, Nyström and Stridh (IEEE Transactions on Biomedical Engineering, 60(9), 2484–2493, 2013) outperforms all algorithms in data from both static and dynamic stimuli. The data also show how improperly selected algorithms applied to dynamic data misestimate fixation and saccade properties.
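
The sample-by-sample comparison mentioned in the abstract can be illustrated with Cohen's kappa over two per-sample label sequences, for example algorithm output versus a human coder. The sketch below is a generic implementation of Cohen's kappa, not the paper's evaluation code.

```python
import numpy as np

def cohens_kappa(labels_a, labels_b):
    """Chance-corrected sample-by-sample agreement between two classifications,
    e.g. per-sample labels 'fixation' / 'saccade' / 'pso' from an algorithm
    and from a human coder (illustrative labels, not the paper's exact scheme)."""
    a, b = np.asarray(labels_a), np.asarray(labels_b)
    assert a.shape == b.shape
    categories = np.union1d(a, b)
    # Observed agreement: fraction of samples with identical labels.
    p_observed = np.mean(a == b)
    # Chance agreement: sum over categories of the product of marginal frequencies.
    p_chance = sum(np.mean(a == c) * np.mean(b == c) for c in categories)
    return (p_observed - p_chance) / (1.0 - p_chance)
```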


Biomedical Signal Processing and Control | 2015

Detection of fixations and smooth pursuit movements in high-speed eye-tracking data

Linnéa Larsson; Marcus Nyström; Richard Andersson; Martin Stridh

A novel algorithm for the detection of fixations and smooth pursuit movements in high-speed eye-tracking data is proposed, which uses a three-stage procedure to divide the intersaccadic intervals into a sequence of fixation and smooth pursuit events. The first stage performs a preliminary segmentation, while the latter two stages evaluate the characteristics of each such segment and reorganize the preliminary segments into fixations and smooth pursuit events. Five different performance measures are calculated to investigate different aspects of the algorithm's behavior. The algorithm is compared to the current state-of-the-art (I-VDT and the algorithm in [11]), as well as to annotations by two experts. The proposed algorithm performs considerably better (average Cohen's kappa 0.42) than the I-VDT algorithm (average Cohen's kappa 0.20) and the algorithm in [11] (average Cohen's kappa 0.16), when compared to the experts' annotations.
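
The abstract does not spell out the three-stage procedure; as a loose illustration of the underlying problem (separating slow pursuit from fixation within intersaccadic intervals), the sketch below labels samples by a smoothed velocity threshold. The 4 deg/s threshold and the smoothing window are assumptions for illustration only and are not the published algorithm.

```python
import numpy as np

def segment_intersaccadic(x, y, fs, pursuit_velocity_deg_s=4.0):
    """Label each sample of an intersaccadic interval as 'fixation' or
    'pursuit' based on its smoothed sample-to-sample velocity.

    x, y: gaze position in degrees; fs: sampling rate in Hz.
    The threshold value is an illustrative assumption, not the value
    used by the paper's three-stage algorithm.
    """
    vx = np.gradient(np.asarray(x, float)) * fs
    vy = np.gradient(np.asarray(y, float)) * fs
    speed = np.hypot(vx, vy)
    # Light smoothing so single noisy samples do not flip the label.
    kernel = np.ones(5) / 5.0
    speed = np.convolve(speed, kernel, mode="same")
    return np.where(speed > pursuit_velocity_deg_s, "pursuit", "fixation")
```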


Vision Research | 2016

Pupil size influences the eye-tracker signal during saccades

Marcus Nyström; Ignace T. C. Hooge; Richard Andersson

While it is known that scleral search coils (which measure the rotation of the eye globe) and modern video-based eye trackers (which track the center of the pupil and the corneal reflection, CR) produce signals with different properties, the mechanisms behind the differences are less investigated. We measure how the size of the pupil affects the eye-tracker signal recorded during saccades with a common pupil-CR eye-tracker. Eye movements were collected from four healthy participants and one person with an aphakic eye while performing self-paced, horizontal saccades at different levels of screen luminance and hence pupil size. Results show that the pupil and gaze signals, but not the CR signal, are affected by the size of the pupil; changes in saccade peak velocities of more than 30% were found in the gaze signal. It is important to be aware of this pupil-size-dependent change when comparing fine-grained oculomotor behavior across participants and conditions.
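
Peak saccadic velocity, the measure reported to change by more than 30% with pupil size, can be computed from a position signal roughly as sketched below; the two-point derivative and units are assumptions, since the abstract does not give the exact velocity computation.

```python
import numpy as np

def saccade_peak_velocity(x, y, fs):
    """Peak angular velocity (deg/s) of one saccade, computed from gaze
    position in degrees of visual angle sampled at fs Hz. A simple
    two-point derivative is used; this is an illustrative choice."""
    vx = np.gradient(np.asarray(x, float)) * fs
    vy = np.gradient(np.asarray(y, float)) * fs
    return float(np.hypot(vx, vy).max())
```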


Vision Research | 2016

Why have microsaccades become larger? Investigating eye deformations and detection algorithms

Marcus Nyström; Dan Witzner Hansen; Richard Andersson; Ignace T. C. Hooge

The reported size of microsaccades is considerably larger today compared to the initial era of microsaccade studies during the 1950s and 1960s. We investigate whether this increase in size is related to the fact that the eye-trackers of today measure different ocular structures than the older techniques, and that the movements of these structures may differ during a microsaccade. In addition, we explore the impact such differences have on subsequent analyses of the eye-tracker signals. In Experiment I, the movement of the pupil as well as the first and fourth Purkinje reflections were extracted from series of eye images recorded during a fixation task. Results show that the different ocular structures produce different microsaccade signatures. In Experiment II, we found that microsaccade amplitudes computed with a common detection algorithm were larger compared to those reported by two human experts. The main reason was that the overshoots were not systematically detected by the algorithm and therefore not accurately accounted for. We conclude that one reason why the reported size of microsaccades has increased is the larger overshoot produced by modern pupil-based eye-trackers compared to the systems used in the classical studies, in combination with the lack of a systematic algorithmic treatment of the overshoot. We hope that awareness of these discrepancies in microsaccade dynamics across eye structures will lead to more generally accepted definitions of microsaccades.
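
To make the overshoot issue concrete, the sketch below contrasts two illustrative amplitude definitions: onset-to-offset displacement and peak excursion from onset. Neither is claimed to be the paper's or any specific algorithm's definition; the point is that if the detected offset lands on the post-saccadic overshoot, the measured amplitude is inflated relative to the intended landing position.

```python
import numpy as np

def amplitude_onset_to_offset(x, y):
    """Amplitude as the displacement between the detected onset and offset
    samples (inflated if the offset falls on the overshoot peak)."""
    return float(np.hypot(x[-1] - x[0], y[-1] - y[0]))

def amplitude_peak_excursion(x, y):
    """Amplitude as the largest excursion from the onset position,
    which by construction includes any overshoot."""
    dx = np.asarray(x, float) - x[0]
    dy = np.asarray(y, float) - y[0]
    return float(np.hypot(dx, dy).max())
```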


International Journal of Language & Communication Disorders | 2010

‘You sometimes get more than you ask for’: responses in referential communication between children and adolescents with cochlear implant and hearing peers

Olof Sandgren; Tina Ibertsson; Richard Andersson; Kristina Hansson; Birgitta Sahlén

BACKGROUND: This study investigates responses to requests for clarification in conversations between children/adolescents with cochlear implant (CI) and normally hearing peers. Earlier studies have interpreted a more frequent use of requests for confirmation (yes/no interrogatives) in the CI group as a conversational strategy used to prevent communication breakdowns and control the development of the conversation. This study provides a continuation of this line of research, now focusing on responses to requests for clarification.

AIMS: The aim was to examine the type and distribution of responses to requests for clarification in a referential communication task. In addition, we analysed the compliance between the type of response and the type of request as a measure of mutual adaptation.

METHODS & PROCEDURES: Twenty-six conversational pairs aged 10-19 years participated: 13 pairs consisting of a child/adolescent with CI (CI) and a conversational partner (CIP); and 13 pairs consisting of a normally hearing control (NH) and a conversational partner (NHP). The pairs performed a referential communication task requiring the description of faces. All occurrences of requests for clarification and their responses in the dialogues were identified and categorized. We also analysed how the different types of requests and responses were combined and the type-conformity of the responses to requests for confirmation.

OUTCOMES & RESULTS: The results showed no significant group differences regarding type, distribution or type-conformity of responses. In all four groups (CI, CIP, NH and NHP), a discrepancy between the request and the response was found, indicating that the response provided information that was not explicitly requested. Requests for confirmation constituted 78-90% of the requests, whereas only 54-61% of responses were confirmations. Conversely, the proportion of requests for elaboration was 6-15%, whereas the proportion of elaborated responses was 34-40%.

CONCLUSIONS & IMPLICATIONS: The children/adolescents with CI contribute equally to the conversation regarding type and distribution of responses to requests for clarification. The frequent use of elaborated responses indicates common ground for the conversational partners and a shared understanding of the objective of the task. The context creates facilitative conditions, with positive interactional consequences. The results have implications for the design of intervention, where tasks such as this can be used to make children with CI more aware of the role of questioning strategies in interaction.


Journal of Speech Language and Hearing Research | 2014

Coordination of Gaze and Speech in Communication Between Children With Hearing Impairment and Normal-Hearing Peers

Olof Sandgren; Richard Andersson; Joost van de Weijer; Kristina Hansson; Birgitta Sahlén

PURPOSE: To investigate gaze behavior during communication between children with hearing impairment (HI) and normal-hearing (NH) peers.

METHOD: Ten HI-NH and 10 NH-NH dyads performed a referential communication task requiring description of faces. During task performance, eye movements and speech were tracked. Using verbal event (questions, statements, back channeling, and silence) as the predictor variable, group characteristics in gaze behavior were expressed with Kaplan-Meier survival functions (estimating time to gaze-to-partner) and odds ratios (comparing the number of verbal events with and without gaze-to-partner). Analyses compared the listeners in each dyad (HI: n = 10, mean age = 12;6 years, mean better ear pure-tone average = 33.0 dB HL; NH: n = 10, mean age = 13;7 years).

RESULTS: Log-rank tests revealed significant group differences in survival distributions for all verbal events, reflecting a higher probability of gaze to the partner's face for participants with HI. Expressed as odds ratios (OR), participants with HI displayed greater odds for gaze-to-partner (ORs ranging between 1.2 and 2.1) during all verbal events.

CONCLUSIONS: The results show an increased probability for listeners with HI to gaze at the speaker's face in association with verbal events. Several explanations for the finding are possible, and implications for further research are discussed.
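
The odds ratio reported here (odds of gaze-to-partner during a verbal event, HI versus NH) can be illustrated as below, assuming per-event boolean coding; the data layout is hypothetical, not the study's files.

```python
import numpy as np

def gaze_odds_ratio(gaze_hi, gaze_nh):
    """Odds ratio comparing gaze-to-partner between groups.

    gaze_hi, gaze_nh: boolean arrays, one entry per verbal event,
    True if the listener gazed at the partner during that event.
    (Illustrative data layout only.)
    """
    hi = np.asarray(gaze_hi, bool)
    nh = np.asarray(gaze_nh, bool)
    odds_hi = hi.sum() / (~hi).sum()   # events with gaze / events without, HI group
    odds_nh = nh.sum() / (~nh).sum()   # same for the NH group
    return odds_hi / odds_nh
```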


Behavior Research Methods | 2018

Is human classification by experienced untrained observers a gold standard in fixation detection?

Ignace T. C. Hooge; Diederick C Niehorster; Marcus Nyström; Richard Andersson; Roy S. Hessels

Manual classification is still a common method to evaluate event detection algorithms. The procedure is often as follows: Two or three human coders and the algorithm classify a significant quantity of data. In the gold standard approach, deviations from the human classifications are considered to be due to mistakes of the algorithm. However, little is known about human classification in eye tracking. To what extent do the classifications from a larger group of human coders agree? Twelve experienced but untrained human coders classified fixations in 6 min of adult and infant eye-tracking data. When using the sample-based Cohen’s kappa, the classifications of the humans agreed near perfectly. However, we found substantial differences between the classifications when we examined fixation duration and number of fixations. We hypothesized that the human coders applied different (implicit) thresholds and selection rules. Indeed, when spatially close fixations were merged, most of the classification differences disappeared. On the basis of the nature of these intercoder differences, we concluded that fixation classification by experienced untrained human coders is not a gold standard. To bridge the gap between agreement measures (e.g., Cohen’s kappa) and eye movement parameters (fixation duration, number of fixations), we suggest the use of the event-based F1 score and two new measures: the relative timing offset (RTO) and the relative timing deviation (RTD).
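
An event-based F1 score of the kind proposed can be sketched as follows, assuming fixation events are (onset, offset) pairs and that two events match when they overlap in time; the matching rule is an assumption for illustration, and the paper's RTO and RTD measures are not reproduced here.

```python
def event_f1(events_a, events_b):
    """Event-based F1 between two fixation classifications.

    events_a, events_b: lists of (onset, offset) times in seconds.
    Matching rule (an assumption for this sketch): two events match
    if they overlap in time, and each event matches at most once.
    """
    matched_b = set()
    tp = 0
    for a_on, a_off in events_a:
        for j, (b_on, b_off) in enumerate(events_b):
            if j in matched_b:
                continue
            if a_on < b_off and b_on < a_off:   # temporal overlap
                tp += 1
                matched_b.add(j)
                break
    precision = tp / len(events_a) if events_a else 0.0
    recall = tp / len(events_b) if events_b else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```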


Frontiers in Psychology | 2013

Impact of cognitive and linguistic ability on gaze behavior in children with hearing impairment

Olof Sandgren; Richard Andersson; Joost van de Weijer; Kristina Hansson; Birgitta Sahlén

In order to explore verbal–nonverbal integration, we investigated the influence of cognitive and linguistic ability on gaze behavior during spoken language conversation between children with mild-to-moderate hearing impairment (HI) and normal-hearing (NH) peers. Ten HI–NH and 10 NH–NH dyads performed a referential communication task requiring description of faces. During task performance, eye movements and speech were tracked. Cox proportional hazards regression was used to model associations between performance on cognitive and linguistic tasks and the probability of gaze to the conversational partner's face. Analyses compare the listeners in each dyad (HI: n = 10, mean age = 12;6 years, SD = 2;0, mean better ear pure-tone average 33.0 dB HL, SD = 7.8; NH: n = 10, mean age = 13;7 years, SD = 1;11). Group differences in gaze behavior, with HI gazing more to the conversational partner than NH, remained significant despite adjustment for ability on receptive grammar, expressive vocabulary, and complex working memory. Adjustment for phonological short-term memory, as measured by non-word repetition, removed group differences, revealing an interaction between group membership and non-word repetition ability. Stratified analysis showed a twofold increase of the probability of gaze-to-partner for HI with low phonological short-term memory capacity, and a decreased probability for HI with high capacity, as compared to NH peers. The results revealed differences in gaze behavior attributable to performance on a phonological short-term memory task. Participants with HI and low phonological short-term memory capacity showed a doubled probability of gaze to the conversational partner, indicative of a visual bias. The results stress the need to look beyond the HI in diagnostics and intervention. The finding implies that clinical assessment of children with HI should be supported by tasks tapping phonological processing.
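
A Cox proportional hazards regression of time-to-gaze on group and covariates could look roughly like the sketch below, using the lifelines library; the data frame layout, column names, and values are hypothetical, not the study's data.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical layout: one row per opportunity to gaze at the partner, with
# the time until gaze-to-partner, whether gaze occurred, and covariates.
df = pd.DataFrame({
    "time_to_gaze_s": [0.8, 1.4, 2.0, 0.5, 3.1, 1.1, 0.9, 2.6],
    "gazed":          [1,   1,   0,   1,   0,   1,   1,   0],   # 1 = gaze observed
    "group_hi":       [1,   1,   1,   1,   0,   0,   0,   0],   # 1 = listener with HI
    "nonword_rep":    [12,  9,   15,  11,  16,  13,  14,  17],  # phonological STM score
})

cph = CoxPHFitter()
cph.fit(df, duration_col="time_to_gaze_s", event_col="gazed")
cph.print_summary()   # hazard ratios for group membership and covariates
```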

