Victoria Yaneva
University of Wolverhampton
Publications
Featured research published by Victoria Yaneva.
Conference on Computers and Accessibility | 2015
Victoria Yaneva; Irina P. Temnikova; Ruslan Mitkov
People with Autism Spectrum Disorder (ASD) are known to experience difficulties in reading comprehension, as well as to have unusual attention patterns, which makes the development of user-centred tools for this population a challenging task. This paper presents the first study to use eye-tracking technology with ASD participants in order to evaluate text documents. Its aim is twofold. First, it evaluates the use of images in texts and provides evidence of a significant difference in the attention patterns of participants with and without autism. Two sets of images, photographs and symbols, are compared to establish which are more useful to include in simple documents. Second, the study evaluates human-produced easy-read documents, as a gold standard for accessible documents, with 20 adults with autism. The results provide an understanding of the perceived level of difficulty of easy-read documents for this population, as well as the preferences of autistic individuals in text presentation. The results are synthesized as a set of guidelines for creating accessible text for people with autism.
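The image comparison described above is, in general terms, a between-group analysis of gaze measures over image areas of interest. The following Python sketch illustrates one way such a comparison could be run; it is not the paper's actual analysis, and the CSV layout, column names and choice of non-parametric test are assumptions for illustration only.

```python
# Illustrative sketch (assumed data layout, not the study's actual pipeline):
# compare a gaze measure on photograph vs. symbol areas of interest (AOIs)
# between an ASD group and a control group with a non-parametric test.
import pandas as pd
from scipy.stats import mannwhitneyu

def compare_groups(gaze_csv: str, measure: str = "mean_fixation_duration") -> None:
    # Hypothetical columns: participant, group, aoi_type, mean_fixation_duration
    df = pd.read_csv(gaze_csv)
    for aoi_type in ("photograph", "symbol"):
        sub = df[df["aoi_type"] == aoi_type]
        asd = sub[sub["group"] == "ASD"][measure]
        ctrl = sub[sub["group"] == "control"][measure]
        stat, p = mannwhitneyu(asd, ctrl, alternative="two-sided")
        print(f"{aoi_type}: U = {stat:.1f}, p = {p:.4f}")
```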
Proceedings of the 14th Web for All Conference on The Future of Accessible Work | 2017
Sukru Eraslan; Victoria Yaneva; Yeliz Yesilada; Simon Harper
Elements related to cognitive disability are given lower priority in web accessibility guidelines due to the limited understanding of the requirements of neurodiverse web users. Meanwhile, eye tracking has received considerable interest in the accessibility community as a way to understand user behaviours. In this study, we combine results from information-location tasks with eye-tracking data to find out whether users with high-functioning autism experience barriers while using the web that users without autism do not. Our results show that such barriers exist and that there is higher variance in the scanpaths of the participants with high-functioning autism while searching for the right answer within web pages.
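One common way to quantify the scanpath variance mentioned above is to encode each scanpath as a sequence of area-of-interest (AOI) labels and average pairwise string-edit distances within a group. The sketch below shows that general idea in Python; it is not necessarily the algorithm used in the paper, and the single-character AOI labels are a simplifying assumption.

```python
# Illustrative sketch (not the paper's exact method): quantify within-group
# scanpath variability as the mean pairwise Levenshtein distance between
# AOI-labelled scanpaths, each encoded as a string of single-character labels.
from itertools import combinations

def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance between two AOI sequences."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                # deletion
                            curr[j - 1] + 1,            # insertion
                            prev[j - 1] + (ca != cb)))  # substitution
        prev = curr
    return prev[-1]

def mean_pairwise_distance(scanpaths: list[str]) -> float:
    """Higher values indicate more variable scanning strategies within a group."""
    pairs = list(combinations(scanpaths, 2))
    return sum(levenshtein(a, b) for a, b in pairs) / len(pairs)

# Made-up AOI labels for illustration: 'H' header, 'M' menu, 'C' content, 'A' ad.
print(mean_pairwise_distance(["HMCA", "HCMA", "HHCA"]))
```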
Lecture Notes in Computer Science | 2017
Victoria Yaneva; Shiva Taslimipoor; Omid Rohanian; Le An Ha
Gaze data has been used to investigate the cognitive processing of certain types of formulaic language, such as idioms and binomial phrases; however, very little is known about the online cognitive processing of multiword expressions. In this paper we use gaze features to compare the processing of verb-particle and verb-noun multiword expressions to control phrases of the same part-of-speech pattern. We also compare the gaze data for certain components of these expressions and the control phrases in order to find out whether these components are processed differently from the whole units. We provide results for both native and non-native speakers of English and analyse the importance of the various gaze features for the purpose of this study. We discuss our findings in light of the E-Z Reader model of reading.
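For readers unfamiliar with gaze features, studies of this kind typically aggregate early measures (e.g., first fixation duration) and late measures (e.g., total reading time) per region of interest before comparing phrase types. The sketch below shows such an aggregation under assumed column names and file layout; the paper's exact feature set and data format may differ.

```python
# Hedged sketch: aggregate early (first fixation duration) and late (total
# reading time) gaze measures per phrase type and speaker group. The column
# names and CSV layout are assumptions for illustration.
import pandas as pd

def summarise_gaze(fixations_csv: str) -> pd.DataFrame:
    # Hypothetical columns: phrase_id, phrase_type ('verb-particle', 'verb-noun',
    # 'control'), speaker_group ('native', 'non-native'),
    # first_fixation_dur, total_reading_time
    df = pd.read_csv(fixations_csv)
    return (df.groupby(["phrase_type", "speaker_group"])
              [["first_fixation_dur", "total_reading_time"]]
              .mean()
              .round(1))
```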
Proceedings of the Internet of Accessible Things | 2018
Victoria Yaneva; Le An Ha; Sukru Eraslan; Yeliz Yesilada; Ruslan Mitkov
Diagnosing ASD requires a long, elaborate, and expensive procedure that is subjective and currently restricted to behavioural, historical, and parent-report information. In this paper, we present an alternative way of detecting the condition based on the atypical visual-attention patterns of people with autism. We collect gaze data from two different kinds of tasks related to processing information from web pages: Browsing and Searching. The gaze data is then used to train a machine learning classifier whose aim is to distinguish between participants with autism and a control group of participants without autism. In addition, we explore the effects of the type of task performed, different approaches to defining the areas of interest, gender, the visual complexity of the web pages, and whether or not an area of interest contained the correct answer to a searching task. Our best-performing classifier achieved 0.75 classification accuracy for a combination of selected web pages using all gaze features. These preliminary results show that the differences in the way people with autism process web content could be used for the future development of serious games for autism screening. The gaze data, R code, visual stimuli and task descriptions are made freely available for replication purposes.
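In general terms, the classification step described above amounts to training a supervised model on per-participant gaze features and estimating accuracy with cross-validation. The sketch below shows that generic approach in Python with scikit-learn; the authors released R code, so this is not their implementation, and the feature names are illustrative assumptions.

```python
# Minimal sketch of the general approach (not the authors' exact pipeline):
# train a classifier on per-participant gaze features and estimate
# classification accuracy with 5-fold cross-validation.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def evaluate_classifier(features_csv: str) -> float:
    # Hypothetical columns: participant, fixation_count, mean_fixation_dur,
    # total_dwell_time, ... plus 'label' ('ASD' or 'control').
    df = pd.read_csv(features_csv)
    X = df.drop(columns=["participant", "label"])
    y = df["label"]
    clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    return cross_val_score(clf, X, y, cv=5, scoring="accuracy").mean()
```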
Recent Advances in Natural Language Processing | 2017
Omid Rohanian; Shiva Taslimipoor; Victoria Yaneva; Le An Ha
In recent years, gaze data has been increasingly used to improve and evaluate NLP models because it carries information about the cognitive processing of linguistic phenomena. In this paper we conduct a preliminary study towards the automatic identification of multiword expressions based on gaze features from native and non-native speakers of English. We report comparisons of a part-of-speech (POS) and frequency baseline to: i) a prediction model based solely on gaze data and ii) a combined model of gaze data, POS and frequency. In spite of the challenging nature of the task, the best performance was achieved by the latter. Furthermore, we explore how the type of gaze data (from native versus non-native speakers) affects the prediction, showing that data from the two groups is discriminative to an equal degree for the task. Finally, we show that late processing measures are more predictive than early ones, which is in line with previous research on idioms and other formulaic structures.
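The comparison of feature sets described above can be sketched generically as training the same model on the baseline features, the gaze features, and their union. The example below is a hedged illustration of that setup, not the paper's actual model; the column names, the classifier choice and the F1 evaluation are assumptions.

```python
# Illustrative sketch: compare a POS + frequency baseline, gaze-only features,
# and their combination for token-level MWE identification. Feature and label
# names are assumptions; the paper's model and feature set may differ.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

FEATURE_SETS = {
    "baseline (POS + frequency)": ["pos_id", "log_frequency"],
    "gaze only": ["first_fixation_dur", "total_reading_time", "fixation_count"],
    "combined": ["pos_id", "log_frequency", "first_fixation_dur",
                 "total_reading_time", "fixation_count"],
}

def compare_feature_sets(tokens_csv: str) -> None:
    # One row per token; 'is_mwe' is a hypothetical binary label (1 = part of an MWE).
    df = pd.read_csv(tokens_csv)
    for name, cols in FEATURE_SETS.items():
        clf = RandomForestClassifier(n_estimators=200, random_state=0)
        f1 = cross_val_score(clf, df[cols], df["is_mwe"], cv=5, scoring="f1").mean()
        print(f"{name}: F1 = {f1:.3f}")
```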
ACM SIGACCESS Accessibility and Computing | 2018
Victoria Yaneva
People with autism consistently exhibit attention-shifting patterns that differ from those of neurotypical people. Research has shown that these differences can be successfully captured using eye tracking. In this paper, we summarise our recent research on using gaze data from web-related tasks to address two problems: improving web accessibility for people with autism and detecting autism automatically. We first examine the way a group of participants with autism and a control group process the visual information on web pages and provide empirical evidence of different visual searching strategies. We then use these differences in visual attention to train a machine learning classifier which can successfully use the gaze data to distinguish between the two groups with an accuracy of 0.75. At the end of the paper we outline the way forward for improving web accessibility and automatic autism detection, as well as the practical implications of, and alternatives to, using eye tracking in these research areas.
Meeting of the Association for Computational Linguistics | 2013
Vlad Niculae; Victoria Yaneva
Workshop on Innovative Use of NLP for Building Educational Applications | 2017
Sanja Štajner; Victoria Yaneva; Ruslan Mitkov; Simone Paolo Ponzetto
Recent Advances in Natural Language Processing | 2015
Victoria Yaneva
Recent Advances in Natural Language Processing | 2015
Victoria Yaneva; Richard Evans