Child-Computer Interaction: Recent Works, New Dataset, and Age Detection
Ruben Tolosana, Juan Carlos Ruiz-Garcia, Ruben Vera-Rodriguez, Jaime Herreros-Rodriguez, Sergio Romero-Tapiador, Aythami Morales, Julian Fierrez
Abstract—We overview recent research in Child-Computer Interaction and describe our framework ChildCI, intended for: i) generating a better understanding of the cognitive and neuromotor development of children while interacting with mobile devices, and ii) enabling new applications in e-learning and e-health, among others. Our framework includes a new mobile application, specific data acquisition protocols, and a first release of the ChildCI dataset (ChildCIdb v1), which is planned to be extended yearly to enable longitudinal studies. In our framework children interact with a tablet device, using both a pen stylus and the finger, performing different tasks that require different levels of neuromotor and cognitive skills. ChildCIdb comprises more than 400 children from 18 months to 8 years old, therefore covering the first three development stages of Piaget's theory. In addition, as a demonstration of the potential of the ChildCI framework, we include experimental results for one of the many applications enabled by ChildCIdb: children age detection based on device interaction. Different machine learning approaches are evaluated, proposing a new set of 34 global features to automatically detect age groups, achieving accuracy results over 90% and interesting findings in terms of the types of features most useful for this task.

Index Terms—Child-Computer Interaction, Computer Vision, e-Health, e-Learning, ChildCIdb, Age Detection
I. INTRODUCTION

CHILDREN are becoming one of the latest (and youngest) groups of users of touch-interaction technology. They have more and more access to mobile devices on a daily basis. This fact is demonstrated in recent studies in the literature [1], showing that over 75.6% of children are exposed to mobile devices between the ages of 1 and 60 months. This trend has been exacerbated by the COVID-19 outbreak in 2020. With a large percentage of the academic institutions around the world in lockdown, virtual education temporarily replaced traditional education to a very large extent, using specific e-Learning mobile applications with which children interact to improve their knowledge and skills [2], [3]. However, and despite the importance of the topic, the field of Child-Computer Interaction (CCI) is still in its infancy [4], [5].

Our work aims at generating a better understanding of the way children interact with mobile devices during their development process. Children undergo many different physiological and cognitive changes as they grow up, which are reflected in the way they understand and interact with the environment. According to Piaget's theory [6], there are four different stages in the development of children: i) Sensorimotor (from birth to 2 years old), focused mainly on the evolution of motor control, such as fingers and gestures, and the acquisition of knowledge through sensory experiences and manipulating objects; ii) Preoperational (2-7 years), in which children get better with language and thinking, also improving their motor skills; iii) Concrete Operational (7-11 years), in which their thinking becomes more logical and organized, but still very concrete; and iv) Formal Operational (adolescence to adulthood), in which they begin to think more about moral, philosophical, ethical, social, and political issues that require theoretical and abstract reasoning. Currently, most studies in the field of CCI focus on the Preoperational and Concrete Operational stages (2-11 years), pointing out that children's touch interaction patterns are different compared with adults' [7]–[10].

R. Tolosana, J.C. Ruiz-Garcia, R. Vera-Rodriguez, S. Romero-Tapiador, A. Morales and J. Fierrez are with the Biometrics and Data Pattern Analytics - BiDA Lab, Escuela Politecnica Superior, Universidad Autonoma de Madrid, 28049 Madrid, Spain (e-mail: [email protected]; [email protected]; [email protected]; [email protected]; [email protected]; [email protected]). J. Herreros-Rodriguez is with the Hospital Universitario Infanta Leonor, 28031 Madrid, Spain (e-mail: [email protected]).
As a result, different guidelines should be considered for the proper design and development of children's mobile applications, considering their incipient physiological and cognitive abilities [11]–[14].

In this article we present our framework, named ChildCI, which is mainly focused on the understanding of CCI, with applications to e-Health [15] and e-Learning [16], among others. In particular, the present study introduces all the details regarding the design and development of a new child mobile application, the specific acquisition protocol considered, and the first capturing session of the ChildCI dataset (ChildCIdb). In the scenario considered, children interact with a tablet device, using both a pen stylus and the finger, performing different tasks that require different levels of neuromotor and cognitive skills. Unlike most previous studies in the literature, our analysis considers the first three stages of Piaget's theory in order to perform an in-depth analysis of the children's development process. Additionally, ChildCI is an on-going project in which children will be captured in multiple sessions along their development process (from 18 months to 8 years old), making it possible to extract very relevant insights. The main contributions of this study are as follows:

• An overview of recent works studying touch and stylus interactions performed by children on screens, remarking the publicly available datasets for research in this area and the improvements over them of our contributed ChildCIdb.

• Design and development of a novel child mobile application composed of 7 tests grouped in three different categories (emotional state, touch, and stylus). Different levels of neuromotor and cognitive skills are required in each test to measure the evolution of the children in each Piaget stage. By doing so, we are able to connect the children's performance with finger and stylus to their level of cognitive and motor development according to their age.
• A first release of the new ChildCI dataset (ChildCIdb v1), which is planned to be extended yearly to enable longitudinal studies. This is the largest publicly available dataset to date for research in this area, with 438 children in the ages from 18 months to 8 years old. In addition, the following aspects are considered in the acquisition of the dataset: i) interaction with screens using both finger and pen stylus, ii) the emotional state of the children during the acquisition, iii) information regarding the previous experience of the children with mobile devices, iv) the children's grades at school, and v) information regarding attention-deficit/hyperactivity disorder (ADHD).

• Experimental framework using machine learning techniques to demonstrate the research potential of our contributed ChildCIdb. In particular, we focus on the task of children age group detection while colouring a tree (named Drawing Test). A new set of 34 global features is proposed to automatically detect the age group, achieving interesting insights.

The remainder of the article is organised as follows. Sec. II summarises previous studies carried out in touch and stylus interactions performed by children. Sec. III describes all the details of ChildCIdb, including the design and development of the mobile application, the specific acquisition protocol, and the first capturing session. Sec. IV develops an experimental framework using machine learning techniques and ChildCIdb for the task of children age group detection. Finally, Sec. V draws the final conclusions and points out future work.

II. RELATED WORKS
Different studies have evaluated the interaction of children with mobile devices. Table I shows a comparison of the most relevant studies in the literature, ordered by the age of the subjects, including information such as the number of children considered in the study, the type of acquisition tool, etc.

The first thing we would like to highlight is the lack of publicly available datasets in the field. To the best of our knowledge, the novel dataset presented in this study (ChildCIdb, available at https://github.com/BiDAlab/ChildCIdb_v1) is the only available dataset to date together with the dataset presented in [13]. Therefore, ChildCIdb is one of the main contributions of this study, not only due to the large number of children captured (438), but also due to many other aspects such as the age of the children (from 18 months to 8 years), the acquisition tool (touch and stylus), emotional state information, ADHD information, etc.

Analysing the studies focused on the first stage of Piaget's theory (Sensorimotor, 0-2 years), to the best of our knowledge the work presented by Crescenzi and Grané in [14] is the only one available. This is mainly due to the difficulties of capturing data from children in that age range (e.g., they sometimes do not want to play with the mobile devices). The focus of their study was to analyse how children under 3 years old interact with mobile devices, using commercial apps related to drawing and colouring tasks. They concluded that children under 3 adapt their gestures to the content of the apps and suggested that the use of app tools (e.g., colour palette) may begin from 2 years.

Many studies have focused on the second stage of Piaget's theory (Preoperational, 2-7 years), paying special attention to the ability to perform gestures on multi-touch surfaces.
In [17], the authors proposed a set of 8 different tasks to measure the ability of children to perform gestures. They concluded that children aged 2-3 are able to perform simple gestures such as tap and drag-and-drop, but also other complex ones such as one- and two-finger rotation, scale up and down, etc. A similar research line was studied in [18], reviewing 100 touchscreen apps for preschoolers. In addition, the authors found that children above 3 are able to follow in-app audio instructions and on-screen demonstrations.

An interesting article in this line is the work presented by Vatavu et al. in [13]. In that work the authors captured and released to the research community a dataset composed of 89 children (3-6 years) and 30 young adults. They analysed the way children interact with mobile devices, showing significant improvements in children's touch performance as they grow from 3 to 6 years. Also, the authors proposed different guidelines for designing children's applications. Similar conclusions have been obtained in other studies in the literature [19], [22].

Mobile devices have also been studied as a way to teach children, in particular through digital game-based learning (DGBL) applications. In [21], the authors investigated whether DGBL can improve creativity skills in preschool children (3-6 years). Nine different games were considered in the study, concluding that DGBL can potentially affect children's ability to develop creative skills and critical thinking, knowledge transfer, acquisition of skills in digital experience, and a positive attitude toward learning. Similar conclusions were extracted in [23] when asking children to solve puzzle games.

Considering that children and adults typically use different interaction patterns on mobile devices, some studies have proposed the development of automatic systems to detect age groups.
This research line has many different potential applications, e.g., restricting access to adult content or services such as on-line shopping. In [9], the authors presented an automatic system able to distinguish children from adults with classification rates over 96%. This detection system is based on the combination of features related to neuromotor skills, task time, and accuracy. The dataset released in [13] was considered in the experimental framework. In a related work, Acien et al. proposed an enhanced detection system including global features from touch interaction [10].

The screen interaction using the finger is not the only input that has been studied as a way to interact with mobile devices. Different studies have considered the stylus as the acquisition tool. In [20], Remi et al. studied the scribbling activities executed by children of 3-6 years. They considered the Sigma-Lognormal writing generation model [9], [28] to analyse the motor skills, concluding that there are significant differences in the model parameters between ages. The stylus has also been considered in [26] to analyse the correlation between the performance of polygonal shape drawing and the levels in handwriting performance. The study revealed that there are details in the children's drawing strategy highly related to handwriting performance. Recently, Laniel et al. proposed in [27] a new measure of fine motor skills, the Pen Stroke Test (PST), in order to discriminate between children with and without attention-deficit/hyperactivity disorder (ADHD). This test is also based on the parameters of the Sigma-Lognormal model, providing preliminary evidence that the PST may be very useful for detecting ADHD.

Finally, we also include in Table I the description of our ChildCIdb presented in this study, for completeness.

TABLE I: Comparison of different studies focused on the interaction of children with mobile devices.

Study | Age of Participants | # Children | Tool | Emotional State | ADHD Info | Publicly Available
Crescenzi and Grané (2019) [14] | 14-33 months | 21 | Finger | No | No | No
Nacher et al. (2015) [17] | 2-3 years | 32 | Finger | No | No | No
Hiniker et al. (2015) [18] | 2-5 years | 34 | Finger | No | No | No
Abdul-Aziz (2013) [19] | 2-12 years | 33 | Finger | No | No | No
Vatavu et al. (2015) [13] | 3-6 years | 89 | Finger | No | No | Yes
Vera-Rodriguez et al. (2020) [9] | 3-6 years | 89 | Finger | No | No | [13]
Acien et al. (2019) [10] | 3-6 years | 89 | Finger | No | No | [13]
Remi et al. (2015) [20] | 3-6 years | 60 | Stylus | No | No | No
Behnamnia et al. (2020) [21] | 3-6 years | 7 | Finger | No | No | No
Hussain et al. (2016) [22] | 4-6 years | 10 | Finger | No | No | No
Huber et al. (2016) [23] | 4-6 years | 50 | Finger | No | No | No
Woodward et al. (2016) [24] | 5-10 years | 30 | Finger | No | No | No
Nacher et al. (2018) [25] | 5-10 years | 55 | Finger | No | No | No
Tabatabaey-Mashadi et al. (2015) [26] | 6-7 years | 178 | Stylus | No | No | No
Anthony et al. (2014) [12] | 6-17 years | 44 | Finger | No | No | No
McKnight and Cassidy (2010) [11] | 7-10 years | 80 | Finger/Stylus | No | No | No
Laniel et al. (2020) [27] | 8-11 years | 25 | Stylus | No | Yes | No
Anthony et al. (2016) [8] | <12 years | 24 | Finger | No | No | No
ChildCIdb (Present Study) | 18 months - 8 years | 438 | Finger/Stylus | Yes | Yes | Yes

III. CHILDCIDB DESCRIPTION
A. Acquisition: Year 1
ChildCIdb is a novel Child-Computer Interaction dataset. This is an on-going dataset collected in collaboration with the school GSD Las Suertes in Madrid, Spain. This article presents the first version of the ChildCI dataset (ChildCIdb v1), which comprises one capturing session with 438 children in total, in the ages from 18 months to 8 years, grouped in 8 different educational levels according to the Spanish education system. All parents provided informed consent for their child's participation. Table II provides the statistics of ChildCIdb regarding the number of children associated to each educational level, and also the gender and handedness information. As can be seen, the number of children captured increases with the educational level, with levels 2 and 3 being the levels with fewest subjects. As commented before, this is due to: i) fewer children being grouped in the same class, and ii) the acquisition usually being more difficult as they are very young. Regarding the gender statistics of ChildCIdb, 50% of the children were male and 50% female, whereas for the handedness, 84% were right-handed, although this factor is not completely defined until they are 5 years old [29].

In addition to the gender and handedness information, the following personal information was acquired during the enrolment stage: i) date of birth, and whether he/she is premature (gestation period of less than 37 weeks), ii) whether he/she is a child with ADHD, iii) whether he/she has ever used any mobile device before the acquisition, and iv) his/her educational grades. All this information enriches the project, making it possible to research several interesting lines, e.g., is there any relationship between the way children interact with the devices and their grades?

TABLE II: Statistics of the ChildCIdb dataset regarding the number of children associated to each educational level, and the gender and handedness information.

Educational Level | # Children | Male | Female | Right-Handed | Left-Handed | Ambidextrous | Unknown
Total | 438 | 219 | 219 | 369 | 48 | 17 | 4

TABLE III: Statistics of the emotional state analysis per educational level for the ChildCIdb. DK/DA stands for "does not know/does not answer".

Educational Level | Happy | Normal | Sad | DK/DA
Total | 342 | 13 | 17 | 66

B. Yearly Acquisition Plan
ChildCIdb is planned to be extended yearly to enable longitudinal studies. The same children considered in ChildCIdb v1 will be acquired as they grow up and move through the different educational levels (from 18 months to 8 years). Therefore, future versions of ChildCIdb will be extended with: i) new children that are registered in educational level 2 of the school GSD Las Suertes in Madrid, Spain, and ii) new acquisition sessions for the children already captured in previous versions of ChildCIdb (up to 8 years old).

C. Software Application
An Android mobile application was implemented for the acquisition, which comprises 7 different tests grouped in 3 main blocks: i) emotional state, ii) touch, and iii) stylus. Fig. 1 shows some examples of the different interfaces designed in ChildCI for each test, before and after their execution. As the participants are children, and keeping in mind that they are not able to focus on the task for a long time, we decided to develop a brief and interactive acquisition application in order to keep their attention as much as possible in a limited amount of time. Thus, we decided to set a maximum time for each test, as indicated in Fig. 1, with 5 minutes being the maximum time for the complete acquisition. In case the child is not able to finish a test in the maximum time set for it, the application automatically moves to the next test.

In the first block, the main goal is to capture the emotional state of the children before the beginning of the acquisition (Test 0 - Emotional State Self-Assessment Test). Three faces with different colours and facial expressions were represented on the screen, asking the children to touch one according to their emotional state. Table III shows the statistics of the emotional state analysis per educational level for the first version of ChildCIdb. As can be seen, most children were in a good mood before the beginning of the acquisition (78%). It is also interesting to remark the high number of children between 1-3 years that did not provide any information about their emotional state (DK/DA). This information will also be very interesting for the project, to see any relationship between the children's mood and the way they interact with the devices.

After the end of the first block related to emotional state analysis, the second block starts, focused on the analysis of the children's motor and cognitive skills using their own finger as a tool. This new block is indicated to the children through an image example.
Fig. 1: Examples of the different interfaces designed in ChildCI for each test, before and after their execution, including the maximum time set up in each of them (Test 0: 10 seconds; Tests 1-5: 30 seconds; Test 6: 2 minutes). Three main acquisition blocks are considered: i) emotional state, ii) touch, and iii) stylus. Representative video recordings of the different educational levels are available at https://github.com/BiDAlab/ChildCIdb_v1

The second block comprises 4 different tests with different levels of difficulty, to see the ability of the children to perform different hand gestures and movements. The maximum time of each test is 30 seconds. We describe next each of the tests:

• Test 1 - Tap and Reaction Time: the goal is to touch one mole at a time in order to see the ability of the children to perform tap gestures (gross motor skills) and their reaction times. Once the mole is touched, it disappears from that position and appears in another position of the screen. In total, 6 different moles must be touched to end the test. Just a single finger is needed to complete the task.

• Test 2 - Drag and Drop: the goal is to touch the carrot and swipe it to the rabbit. This test is designed to see the ability of the children to perform drag-and-drop gestures (fine motor skills). In order to facilitate the comprehension of the test and motivate the children, an intermittent blue arrow is shown on the screen until the children touch the carrot. Just a single finger is needed to complete the task.

• Test 3 - Zoom In: the goal is to enlarge the rabbit and put it inside the two red circles for a short time. This test is designed to: i) analyse the ability of the children to perform scale-up (zoom-in) gestures, and ii) analyse the precision of the motor control of the children when trying to put the rabbit inside the two red circles (fine motor skills). In order to facilitate the comprehension of the test, two intermittent outer arrows are depicted until the children touch the surface close to the rabbit. The rabbit can only be enlarged/shrunk using two fingers. No displacement of the rabbit along the screen is allowed, to facilitate the test; only the size of the rabbit can be modified.

• Test 4 - Zoom Out: the goal of this test is similar to Test 3, but in this case children have to perform scale-down (zoom-out) gestures. Two fingers are needed to complete the test (fine motor skills).

After the end of the second block related to the children's touch analysis, the third block starts, aimed at analysing the ability of the children to use the pen stylus. This is indicated to the children through an image example showing a pen stylus. This block comprises the following 2 tests:

• Test 5 - Spiral Test: the goal of this test is to go across the spiral, from the inner part to the outer part, trying to keep it always in the area remarked in black colour. Once the children finish the test, they must press the button "Next" to move to the following test. The maximum timer set up for this test is 30 seconds. A similar version of this test is widely used for the detection of Parkinson's disease and movement disorders [30].
• Test 6 - Drawing Test: the goal of this test is to colour the tree in the best way possible. Once the children decide to finish the test, they must press the button "Next". This last test ends the acquisition. The maximum timer set up for this test is 2 minutes.

These tests are designed to investigate the cognitive and neuromotor skills of the children while performing actions with their own fingers or using the pen stylus, and also to analyse their evolution over time. The research results that can be obtained by analysing ChildCI will be very valuable to better understand the current skills of children in this society dominated by mobile devices.
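Interactions like the ones above are typically logged as timestamped touch samples, from which time signals such as velocity can be derived. The following is a purely illustrative sketch, not the actual ChildCI logging code: the tuple layout (t, x, y, pressure) and the function name velocity_profile are our own assumptions, and speed is obtained by simple finite differences over the trajectory:

```python
import math

def velocity_profile(samples):
    """Per-sample speed (pixels/second) from consecutive
    (t, x, y, pressure) touch samples, via finite differences."""
    speeds = []
    for (t0, x0, y0, _), (t1, x1, y1, _) in zip(samples, samples[1:]):
        dt = t1 - t0
        if dt <= 0:
            # Guard against duplicated or out-of-order timestamps.
            speeds.append(0.0)
        else:
            speeds.append(math.hypot(x1 - x0, y1 - y0) / dt)
    return speeds

# Hypothetical samples logged during a short drag gesture.
samples = [(0.00, 10.0, 10.0, 0.4),
           (0.01, 13.0, 14.0, 0.5),
           (0.02, 19.0, 22.0, 0.6)]
print(velocity_profile(samples))
```

The same per-sample differences also support quantities such as reaction times and stroke lengths, which underlie interaction features of the kind analysed later in the article.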
D. Acquisition Protocol
Currently, ChildCIdb comprises one acquisition session. The following principles were applied for the acquisition of the data:

• The same tablet device (Samsung Galaxy Tab A 10.1) was considered during the whole acquisition process in order to avoid inter-device problems, e.g., different sampling frequencies [31], [32].

• All children performed the same tests in the same order (from Test 0 to Test 6) regardless of their educational level. This will allow us to perform a fair evaluation of the children inside a specific educational level and also between different ones.

• No help was provided to the children apart from the instructions indicated on the screen before the beginning of each test. For children under 3 years old, oral instructions were also given, following the conclusions extracted in [18].

• Children performed each test by themselves, without any other help.

• The acquisition was carried out inside the normal class, one child at a time, always with the child sitting far from the other children to avoid distractions, and with the device on a table. Children were allowed to move the device freely to feel comfortable.

• The acquisition was controlled from a distance by a supervisor at all times in order to ensure the proper flow of the acquisition.

IV. EXAMPLE APPLICATION: AGE DETECTION
This section quantitatively analyses one of the many different potential applications of ChildCIdb. In particular, we focus on the popular task of children age group detection based on the interaction with mobile devices [7], [9], [10], [33]. Due to the large volume of information captured in ChildCIdb, we focus in this section only on the analysis of Test 6 (Drawing Test), based on the way children colour a tree. Fig. 2 shows some examples of the Drawing Test performed by different children age groups.

The organisation of this section is as follows: Sec. IV-A describes the experimental protocol. Sec. IV-B describes the age group detection systems proposed in this study. Finally, Sec. IV-C provides the results achieved.
A. Experimental Protocol
The experimental protocol proposed in this study has been designed to detect three different groups of children: Group 1 (children of educational levels 2 and 3, i.e., 1-3 years), Group 2 (children of educational levels 4, 5, and 6, i.e., 3-6 years), and finally Group 3 (children of educational levels 7 and 8, i.e., 6-8 years).

The current version of ChildCIdb (v1) has been divided into development (80%) and evaluation (20%) datasets, which comprise separate groups of subjects. The development dataset is used to train the age group detection systems, whereas the evaluation dataset is used to test the trained systems in realistic conditions (new unseen subjects not used during the development stage). As the number of samples available in Groups 1 and 3 is smaller than in Group 2, the data augmentation technique SMOTE of the Imbalanced-Learn toolbox (https://imbalanced-learn.org/stable/) was considered only during the development stage to balance the classes and better train the models. For the final evaluation, only real samples of ChildCIdb are considered. To better estimate the skill of the machine learning models proposed, k-fold cross validation is used in this experimental framework with k = 5. Final results are the average values over the 5 cross-validation folds.

Fig. 2: Examples of the Drawing Test performed by three different children age groups: (top) 1 to 3 years, (middle) 3 to 6 years, and (bottom) 6 to 8 years. Each example shows the X-coordinate, Y-coordinate, pressure, and velocity time signals. Representative full video recordings of the different groups are available at https://github.com/BiDAlab/ChildCIdb_v1

B. Age Group Detection Systems

Different machine learning approaches are studied in this work. The proposed age group detection systems comprise three main modules: feature extraction, feature selection, and classification. The specific parameters of each approach are selected over the development dataset.

Feature Extraction: a set of 148 global features is extracted for each acquisition. Of the total features extracted, 114 are based on preliminary studies in the field of Human-Computer Interaction (HCI), related to Time, Kinematic, Direction, Geometry, and Pressure information [34]–[36]. The remaining 34 features (denoted as Drawing features) are originally presented in this study and designed for the specific Drawing Test (colouring a tree). Table IV describes this novel set of 34 global features, which extract relevant information such as the length of the drawing strokes and the number of times the children colour outside the margin of the tree, among many others.

Feature Selection: the following approaches are studied to select the most discriminative features from the 148 global features originally extracted:

• Fisher Discriminant Ratio (FDR): it measures the discriminative power of each independent global feature. The value increases with the inter-class variability and decreases with the intra-class variability. In our experiments, we select the subset of global features whose FDR values are higher than 0.05.
• Sequential Forward Floating Search (SFFS): this algorithm aims to select the optimal feature subset for a specific optimisation criterion while reducing the number of possible combinations to be tested. Therefore, this algorithm offers a suboptimal solution, as it does not take into account all possible feature combinations, although it does consider correlations between features, achieving high-accuracy results [37]. The specific implementation considered in this study is publicly available in MLxtend (http://rasbt.github.io/mlxtend/).

• Genetic Algorithm (GA): this algorithm is inspired by Charles Darwin's theory of natural evolution, relying on biologically inspired operations such as mutation, crossover, and selection; it finds good application for feature selection in handwriting biometrics [38]. We consider the genetic algorithm originally presented in [39]. This algorithm has been completely programmed from scratch in this study, including aspects such as parallel execution to speed up the feature selection process. Our Python implementation is publicly available on GitHub (https://github.com/BiDAlab/GeneticAlgorithm). In our experiments, we consider the following parameters: random generations = 100, population = 200, crossover rate = 0.6, mutation rate = 0.05.

Classification: different classifiers are studied in our experimental framework. All of them are publicly available in Scikit-Learn (https://scikit-learn.org/stable/). In addition, for each classifier, the optimal parameters are selected after an in-depth search over the development dataset using the class GridSearchCV of Scikit-Learn.

TABLE IV: Novel set of 34 global features (denoted as Drawing features) proposed in this study for the task of colouring a tree (Test 6 - Drawing Test). N stands for number and T for time.
1  N (draw outside the tree margin)          | 2  N (pen-downs)
3  N (time samples inside the tree margin)   | 4  N (time samples outside the tree margin)
5  Nmax (pen-down time samples)              | 6  Tmax (pen-down)
7  Nmin (pen-down time samples)              | 8  Tmin (pen-down)
9  Tmean (pen-down)                          | 10 Nmax (pen-up time samples)
11 Tmax (pen-up)                             | 12 Nmin (pen-up time samples)
13 Tmin (pen-up)                             | 14 Tmean (pen-up)
15 Mean (X-coordinate spatial position)      | 16 Mean (Y-coordinate spatial position)
17 Std (X-coordinate spatial position)       | 18 Std (Y-coordinate spatial position)
19 N (changes in drawing direction)          | 20 Max (X-coordinate spatial position)
21 Min (X-coordinate spatial position)       | 22 Max (Y-coordinate spatial position)
23 Min (Y-coordinate spatial position)       | 24 End test before time? (Yes/No)
25 T (drawing inside the tree margin)        | 26 T (drawing outside the tree margin)
27 T (drawing)                               | 28 T (not drawing)
29 T (drawing inside the tree margin) / T (drawing) | 30 T (drawing outside the tree margin) / T (drawing)
31 T (drawing inside the tree margin) / T (drawing outside the tree margin) | 32 T (drawing) / T (Test)
33 Draw anything? (Yes/No)                   | 34 N (time samples)

• Naive Bayes (NB): this is a simple probabilistic classifier based on Bayes' theorem with the "naive" assumption of conditional independence between every pair of features given the value of the class variable.

• Logistic Regression (LR): this is a statistical classifier that models the probability of a certain class using logistic functions. In our experiments, we consider L2 regularisation.

• K-Nearest Neighbours (K-NN): this is a non-parametric method in which an event is assigned to the class most common among its k nearest neighbours. In our experiments the number of neighbours is 5, and the algorithm used to compute the nearest neighbours is BallTree.

• Random Forest (RF): this is an ensemble learning method that fits a number of decision tree classifiers at training time and outputs the class that is the mode of the classes of the individual trees. In our experiments, the number of trees in the forest is 100, the maximum depth of each tree is 75, and the function to measure the quality of a split is Gini.

• AdaBoost (AB): it combines multiple "weak classifiers" into a single "strong classifier". It begins by fitting a classifier on the original dataset and then fits additional copies of the classifier on the same dataset, but where the weights of incorrectly classified instances are adjusted such that subsequent classifiers focus more on difficult cases. We consider here the AdaBoost-SAMME approach presented in [40] with a maximum of 50 estimators.
• Support Vector Machines (SVM): this is a popular learning algorithm that constructs a hyperplane or set of hyperplanes in a high- or infinite-dimensional space that best separates the classes. In this case, we have selected a regularisation parameter of 0.1 and a polynomial kernel with degree 3 and a scaled kernel coefficient.
• Multi-Layer Perceptron (MLP): this is a class of feed-forward Artificial Neural Network (ANN). It consists of three or more layers (an input and an output layer with one or more hidden layers) of non-linear activation nodes. Each node is connected to every node in the following layer (fully connected). In our study, we have considered four hidden layers with 100, 200, 200, and 100 neurons, respectively. In addition, the Adam optimiser is considered with default parameters (learning rate of 0.001) and a loss function based on cross-entropy.
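The seven configurations above can be sketched with scikit-learn as follows. This is an illustrative instantiation, not the authors' code: the training data is synthetic, and hyperparameters not stated in the text (e.g. iteration limits) are left at library defaults.

```python
# Sketch of the seven classifiers with the hyperparameters stated above.
# In the paper, the inputs would be the global features extracted from
# each child's drawing test; here the data is a synthetic placeholder.
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier

classifiers = {
    "NB": GaussianNB(),
    "LR": LogisticRegression(penalty="l2", max_iter=1000),
    "K-NN": KNeighborsClassifier(n_neighbors=5, algorithm="ball_tree"),
    "RF": RandomForestClassifier(n_estimators=100, max_depth=75,
                                 criterion="gini"),
    "AB": AdaBoostClassifier(n_estimators=50),  # SAMME-style boosting [40]
    "SVM": SVC(C=0.1, kernel="poly", degree=3, gamma="scale"),
    "MLP": MLPClassifier(hidden_layer_sizes=(100, 200, 200, 100),
                         solver="adam", learning_rate_init=0.001),
}
```

Each entry can then be trained and evaluated with the same `fit`/`predict` interface, which is what makes the side-by-side comparison in the experiments straightforward.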
C. Experimental Results

1) Results: Table V shows the results achieved in terms of age group classification accuracy (%) over the final evaluation dataset of ChildCIdb for the different feature selection and classification approaches considered.

We first analyse the results achieved by each feature selection technique. As can be seen, the SFFS algorithm provides the best results in all cases (83.38% average accuracy), followed by the Genetic Algorithm (79.23% average accuracy). The FDR algorithm provides the worst average accuracy (73.00%). This seems to be caused by the fact that the FDR feature selection technique is based on the discriminative power of each feature considered independently: no correlations between features are taken into account in the selection process.

Analysing the results achieved by each classification approach, SVM, Random Forest, and MLP provide the best results with 90.45%, 88.69%, and 85.98% accuracy, respectively. Simpler classifiers such as Naive Bayes and K-NN provide much worse results (69.63% and 71.24% accuracy, respectively).

Finally, we compare the results achieved with the state of the art. To the best of our knowledge, this is the first study that focuses on the classification of children age groups (from 18 months to 8 years) based on the interaction with mobile devices. Previous studies focused on a simpler task, i.e., classification between children (3-6 years) and adults [7], [9], [10], achieving in the best cases classification accuracy results of 96.3%. Comparing that result on a simpler task with the results achieved in the present study (accuracy over 90%), we can conclude that: i) good results are achieved, proving the soundness of the proposed age group classification systems, and ii) it is possible to distinguish with high accuracy between different children age groups.
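The per-feature scoring behind the FDR selector mentioned above can be sketched in a few lines. This is a minimal multi-class Fisher Discriminant Ratio, not the authors' implementation: each feature is ranked independently by the separation of the class means relative to the class variances, ignoring feature correlations (exactly the limitation discussed above). The data and the number of retained features are placeholders.

```python
# Per-feature Fisher Discriminant Ratio (FDR) ranking: for each feature,
# sum (mean separation)^2 / (variance sum) over all pairs of classes.
# Features are scored independently; correlations are ignored.
import numpy as np
from itertools import combinations

def fdr_scores(X, y):
    classes = np.unique(y)
    scores = np.zeros(X.shape[1])
    for i, j in combinations(classes, 2):
        mu_i, mu_j = X[y == i].mean(axis=0), X[y == j].mean(axis=0)
        var_i, var_j = X[y == i].var(axis=0), X[y == j].var(axis=0)
        scores += (mu_i - mu_j) ** 2 / (var_i + var_j + 1e-12)
    return scores

# Keep the top-k ranked features (k would be tuned on development data;
# 45 matches the size of the FDR subset reported later in the text).
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 148))
y = rng.integers(0, 3, size=100)
top45 = np.argsort(fdr_scores(X, y))[::-1][:45]
```

Because each score is computed in isolation, two highly redundant features can both rank near the top, which is consistent with FDR trailing the wrapper-based SFFS and GA selectors in the results above.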
TABLE V: Results achieved in terms of age group classification Accuracy (%) over the final evaluation dataset of ChildCIdb for the different feature selection and classification approaches considered in the experimental framework. We remark in bold the best result achieved.
[Table V body: rows FDR, SFFS, GA; columns Naive Bayes, Logistic Regression, K-NN, Random Forest, AdaBoost, SVM, MLP; one accuracy value per feature selection-classifier pair.]
Fig. 3: Number and type of global features selected for each machine learning approach studied. (Color image.) [Bar charts of the number of features selected per category (Time, Kinematic, Direction, Geometry, Pressure, Drawing) for FDR and for SFFS and GA combined with each of the seven classifiers; y-axis: Features Selected.]

["Most Used Types of Features" pie chart: Drawing 24.2%, Geometry 20.3%, Kinematic 18.5%, Time 14.7%, Direction 11.2%, Pressure 11.1%.]
Fig. 4: Average percentage of features selected per category.

2) Feature Selection Analysis: this section analyses the type of global features selected for each of the machine learning approaches studied. Fig. 3 shows the number of global features selected for each category (Time, Kinematic, Direction, Geometry, Pressure, and Drawing) [34]. For the FDR feature selection approach, a single subset of features is selected for all classifiers, as this feature selector is independent of the classifier. For the SFFS and GA approaches, a different subset of features is selected for each classifier and feature selection approach. In all cases, we select the optimal feature vector that provides the best accuracy results in development.

Analysing the FDR approach, global features related to the Kinematic, Geometry, and Drawing information are predominant, with percentages of 26.7%, 28.9%, and 28.9%, respectively. In total, 45 out of the 148 global features are selected in the optimal feature subset.

Regarding the SFFS approach, the features selected differ considerably depending on the specific classifier considered. For example, for simpler classifiers such as Naive Bayes, features related to the Time information are predominant (27.3%). Nevertheless, when we consider more sophisticated classifiers such as SVM, Random Forest, and MLP, features related to the Kinematic, Geometry, and Drawing information outweigh other types of features, being able to better exploit the non-linearity of the features and to achieve higher accuracy results (SFFS (Naive Bayes) = 78.09% vs. SFFS (SVM) = 90.45%). On average, 64 global features are selected using the SFFS approach. Similar trends are observed for the GA selection approach, with an average of 67 global features selected.

Finally, Fig. 4 shows the average percentage of features selected per category. In general, the novel features related to the Drawing information are the most selected ones, with an average of 24.2%.
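The SFFS wrapper whose selections are analysed above can itself be sketched as a short forward-floating loop with cross-validated accuracy as the criterion. This is a toy illustration, not the authors' code: the classifier, data, and target subset size are placeholders, and a simple best-score-per-size table is used to prevent cycling.

```python
# Minimal Sequential Forward Floating Selection (SFFS): greedily add the
# best feature, then conditionally drop features while the smaller subset
# beats the best subset of that size seen so far.
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

def score(clf, X, y, idx):
    return cross_val_score(clf, X[:, list(idx)], y, cv=3).mean()

def sffs(clf, X, y, k):
    selected, best_at = [], {}  # best_at: best score per subset size
    while len(selected) < k:
        # Forward step: add the single feature that helps most.
        remaining = [f for f in range(X.shape[1]) if f not in selected]
        f_best = max(remaining, key=lambda f: score(clf, X, y, selected + [f]))
        selected.append(f_best)
        best_at[len(selected)] = max(best_at.get(len(selected), -np.inf),
                                     score(clf, X, y, selected))
        # Floating (backward) step: drop a feature only if the reduced
        # subset strictly improves on the best subset of that size.
        while len(selected) > 2:
            drops = [(score(clf, X, y, [g for g in selected if g != f]), f)
                     for f in selected]
            s, f = max(drops)
            if s > best_at.get(len(selected) - 1, -np.inf):
                selected.remove(f)
                best_at[len(selected)] = s
            else:
                break
    return selected

# Toy demonstration: feature 0 is made clearly informative.
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 6))
y = np.repeat([0, 1], 30)
X[:, 0] += 3 * y
selected = sffs(GaussianNB(), X, y, 3)
```

The conditional backward step is what distinguishes SFFS from plain forward selection: it can undo an early greedy choice once better feature combinations appear, which matches the classifier-dependent subsets reported above.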
The predominance of the Drawing features proves the success of the novel features designed in this study for the task of children age group detection. Other features based on the Geometry (20.3%) and Kinematic (18.5%) information of the children while interacting with the devices are also very important to distinguish between different age groups. However, the information related to the Direction and Pressure exhibited by the children while colouring the tree does not seem discriminative enough to distinguish between children age groups. These results prove the existence of age-dependent patterns in the motor control process of the children, such as the velocity and acceleration while performing strokes. These insights also agree with the physiological and cognitive changes across age discussed in Piaget's theory [6].

V. CONCLUSIONS
This article has presented a preliminary study of our framework named ChildCI, which is aimed at generating a better understanding of Child-Computer Interaction with applications to e-Health and e-Learning, among others.

In particular, in this article we have presented all the details regarding the design and development of a new child mobile application, the specific acquisition protocol considered, and the first capturing session of the ChildCI dataset (ChildCIdb v1), which is publicly available for research purposes. In the scenario considered, children interact with a tablet device, using both the pen stylus and the finger, performing different tasks that require different levels of motor and cognitive skills. ChildCIdb v1 comprises over 400 children aged from 18 months to 8 years, considering therefore the first three stages of motor and cognitive development of Piaget's theory.

In addition, we have demonstrated the potential of ChildCIdb by including experimental results for one of the many possible applications: children age group detection. Different machine learning approaches have been studied, proposing a new set of 34 global features to automatically detect the age group, achieving accuracy results over 90% and interesting findings in terms of the types of features most useful for this task.

Future work will be oriented to: i) extend ChildCIdb with more participants and acquisition sessions; ii) analyse and improve the accuracy of the children age group detection systems using the remaining tests of ChildCIdb not considered in the present article; iii) study the application of other feature and signal representations of the drawing and screen interaction beyond the ones tested here, with special emphasis on recent deep learning methods [41]; iv) develop child-independent interaction models for the different tests from which child-dependent behaviours can be derived [42]; v) correlate the interaction information with the metadata stored in the dataset, such as learning outcomes and ADHD [43]; vi) combine the information provided by the multiple tests using information fusion methods [44]; vii) exploit ChildCIdb in other research problems around e-Learning [16] and e-Health [15], [45]; and viii) compare the insights achieved in ChildCI with previous studies focused on traditional children cognitive development based on Piaget's theory [6].

ACKNOWLEDGEMENTS
This work has been supported by the projects PRIMA (H2020-MSCA-ITN-2019-860315), TRESPASS-ETN (H2020-MSCA-ITN-2019-860813), BIBECA (MINECO/FEDER RTI2018-101248-B-I00), and Orange Labs. This is an on-going project carried out with the collaboration of the school GSD Las Suertes in Madrid, Spain. https://github.com/BiDAlab/ChildCIdb_v1

REFERENCES

[1] A. O. Kılıç, E. Sari, H. Yucel, M. M. Oğuz, E. Polat, E. A. Acoglu, and S. Senel, "Exposure to and Use of Mobile Devices in Children Aged 1-60 Months," European Journal of Pediatrics, vol. 178, no. 2, pp. 221-227, 2019.
[2] N. Kucirkova, C. Evertsen-Stanghelle, I. Studsrød, I. B. Jensen, and I. Størksen, "Lessons for Child-Computer Interaction Studies Following the Research Challenges During the COVID-19 Pandemic," International Journal of Child-Computer Interaction, p. 100203, 2020.
[3] A. N. Antle and C. Frauenberger, "Child-Computer Interaction in Times of a Pandemic," International Journal of Child-Computer Interaction, p. 100201, 2020.
[4] W. Barendregt, O. Torgersson, E. Eriksson, and P. Börjesson, "Intermediate-Level Knowledge in Child-Computer Interaction: A Call for Action," in Proc. Conference on Interaction Design and Children, 2017.
[5] D. Tsvyatkova and C. Storni, "A Review of Selected Methods, Techniques and Tools in Child-Computer Interaction (CCI) Developed/Adapted to Support Children's Involvement in Technology Development," International Journal of Child-Computer Interaction, vol. 22, p. 100148, 2019.
[6] J. Piaget and B. Inhelder, The Psychology of the Child. Basic Books, 2008.
[7] R.-D. Vatavu, L. Anthony, and Q. Brown, "Child or Adult? Inferring Smartphone Users' Age Group from Touch Measurements Alone," in Proc. Conference on Human-Computer Interaction, 2015.
[8] L. Anthony, K. A. Stofer, A. Luc, and J. O. Wobbrock, "Gestures by Children and Adults on Touch Tables and Touch Walls in a Public Science Center," in Proc. International Conference on Interaction Design and Children, 2016.
[9] R. Vera-Rodriguez, R. Tolosana, J. Hernandez-Ortega, A. Acien, A. Morales, J. Fierrez, and J. Ortega-Garcia, "Modeling the Complexity of Signature and Touch-Screen Biometrics using the Lognormality Principle," in The Lognormality Principle and its Applications, R. Plamondon, A. Marcelli, and M. A. Ferrer, Eds. World Scientific, 2020.
[10] A. Acien, A. Morales, J. Fierrez, R. V. Rodriguez, and J. Hernandez-Ortega, "Active Detection of Age Groups Based on Touch Interaction," IET Biometrics, vol. 8, no. 1, pp. 101-108, 2019.
[11] L. McKnight and B. Cassidy, "Children's Interaction with Mobile Touch-Screen Devices: Experiences and Guidelines for Design," International Journal of Mobile Human Computer Interaction, vol. 2, no. 2, pp. 1-18, 2010.
[12] L. Anthony, Q. Brown, B. Tate, J. Nias, R. Brewer, and G. Irwin, "Designing Smarter Touch-Based Interfaces for Educational Contexts," Personal and Ubiquitous Computing, vol. 18, no. 6, pp. 1471-1483, 2014.
[13] R.-D. Vatavu, G. Cramariuc, and D. M. Schipor, "Touch Interaction for Children Aged 3 to 6 Years: Experimental Findings and Relationship to Motor Skills," International Journal of Human-Computer Studies, vol. 74, pp. 54-76, 2015.
[14] L. C. Lanna and M. G. Oro, "Touch Gesture Performed by Children Under 3 Years Old when Drawing and Coloring on a Tablet," International Journal of Human-Computer Studies, vol. 124, pp. 1-12, 2019.
[15] M. Faundez-Zanuy, J. Fierrez, M. A. Ferrer, M. Diaz, R. Tolosana, and R. Plamondon, "Handwriting Biometrics: Applications and Future Trends in e-Security and e-Health," Cognitive Computation, vol. 12, no. 5, pp. 940-953, 2020.
[16] J. Hernandez-Ortega, R. Daza, A. Morales, J. Fierrez, and R. Tolosana, "Heart Rate Estimation from Face Videos for Student Assessment: Experiments on edBB," in Proc. IEEE Conference on Computers, Software, and Applications, 2020.
[17] V. Nacher, J. Jaen, E. Navarro, A. Catala, and P. González, "Multi-Touch Gestures for Pre-Kindergarten Children," International Journal of Human-Computer Studies, vol. 73, pp. 37-51, 2015.
[18] A. Hiniker, K. Sobel, S. R. Hong, H. Suh, I. Irish, D. Kim, and J. A. Kientz, "Touchscreen Prompts for Preschoolers: Designing Developmentally Appropriate Techniques for Teaching Young Children to Perform Gestures," in Proc. International Conference on Interaction Design and Children, 2015.
[19] N. A. A. Aziz, "Children's Interaction with Tablet Applications: Gestures and Interface Design," Children, vol. 2, no. 3, pp. 447-450, 2013.
[20] C. Rémi, J. Vaillant, R. Plamondon, L. Prevost, and T. Duval, "Exploring the Kinematic Dimensions of Kindergarten Children's Scribbles," in Proc. Conference of the International Graphonomics Society, 2015.
[21] N. Behnamnia, A. Kamsin, M. A. B. Ismail, and A. Hayati, "The Effective Components of Creativity in Digital Game-Based Learning among Young Children: A Case Study," Children and Youth Services Review, vol. 116, p. 105227, 2020.
[22] N. H. Hussain, T. S. M. Tengku Wook, S. F. Mat Noor, and H. Mohamed, "Children's Interaction Ability Towards Multi-Touch Gestures," International Journal on Advanced Science, Engineering and Information Technology, vol. 6, no. 6, pp. 875-881, 2016.
[23] B. Huber, J. Tarasuik, M. N. Antoniou, C. Garrett, S. J. Bowe, J. Kaufman, and S. B. Team, "Young Children's Transfer of Learning from a Touchscreen Device," Computers in Human Behavior, vol. 56, pp. 56-64, 2016.
[24] J. Woodward, A. Shaw, A. Luc, B. Craig, J. Das, P. Hall Jr, A. Holla, G. Irwin, D. Sikich, Q. Brown et al., "Characterizing How Interface Complexity Affects Children's Touchscreen Interactions," in Proc. Conference on Human Factors in Computing Systems, 2016.
[25] V. Nacher, D. Cáliz, J. Jaen, and L. Martínez, "Examining the Usability of Touch Screen Gestures for Children with Down Syndrome," Interacting with Computers, vol. 30, no. 3, pp. 258-272, 2018.
[26] N. Tabatabaey-Mashadi, R. Sudirman, R. M. Guest, and P. I. Khalid, "Analyses of Pupils' Polygonal Shape Drawing Strategy with Respect to Handwriting Performance," Pattern Analysis and Applications, vol. 18, no. 3, pp. 571-586, 2015.
[27] P. Laniel, N. Faci, R. Plamondon, M. H. Beauchamp, and B. Gauthier, "Kinematic Analysis of Fast Pen Strokes in Children with ADHD," Applied Neuropsychology: Child, vol. 9, no. 2, pp. 125-140, 2020.
[28] C. O'Reilly and R. Plamondon, "Development of a Sigma-Lognormal Representation for On-Line Signatures," Pattern Recognition, vol. 42, no. 12, pp. 3324-3337, 2009.
[29] J. F. Keller, J. W. Croake, and C. Riesenman, "Relationships among Handedness, Intelligence, Sex, and Reading Achievement of School Age Children," Perceptual and Motor Skills, vol. 37, no. 1, pp. 159-162, 1973.
[30] S. Fahn, E. Tolosa, and C. Marín, "Clinical Rating Scale for Tremor," Parkinson's Disease and Movement Disorders, vol. 2, pp. 271-280, 1993.
[31] F. Alonso-Fernandez, J. Fierrez-Aguilar, and J. Ortega-Garcia, "Sensor Interoperability and Fusion in Signature Verification: A Case Study Using Tablet PC," in Proc. Int. Workshop on Biometric Recognition Systems, 2005.
[32] R. Tolosana, R. Vera-Rodriguez, J. Fierrez, A. Morales, and J. Ortega-Garcia, "Benchmarking Desktop and Mobile Handwriting across COTS Devices: the e-BioSign Biometric Database," PLOS ONE, 2017.
[33] S. AL-Showarah, N. AL-Jawad, and H. Sellahewa, "User-Age Classification Using Touch Gestures on Smartphones," International Journal of Multidisciplinary Studies, vol. 2, no. 1, pp. 1-11, 2015.
[34] M. Martinez-Diaz, J. Fierrez, J. Galbally, and J. Ortega-Garcia, "Towards Mobile Authentication Using Dynamic Signature Verification: Useful Features and Performance Evaluation," in Proc. Int. Conf. on Pattern Recognition, 2008.
[35] M. Martinez-Diaz, J. Fierrez, and S. Hangai, "Signature Features," in S. Z. Li and A. Jain (Eds.), Encyclopedia of Biometrics, Springer, 2015.
[36] R. Tolosana, R. Vera-Rodriguez, J. Fierrez, and J. Ortega-Garcia, "Feature-Based Dynamic Signature Verification under Forensic Scenarios," in Proc. International Workshop on Biometrics and Forensics, 2015.
[37] R. Tolosana, R. Vera-Rodriguez, J. Ortega-Garcia, and J. Fierrez, "Preprocessing and Feature Selection for Improved Sensor Interoperability in Online Biometric Signature Verification," IEEE Access, vol. 3, pp. 478-489, 2015.
[38] J. Galbally, J. Fierrez, M. R. Freire, and J. Ortega-Garcia, "Feature Selection based on Genetic Algorithms for On-Line Signature Verification," in Proc. IEEE Workshop on Automatic Identification Advanced Technologies, AutoID, 2007.
[39] D. E. Goldberg, Genetic Algorithms in Search, Optimization and Machine Learning. Addison-Wesley, 1989.
[40] T. Hastie, S. Rosset, J. Zhu, and H. Zou, "Multi-Class AdaBoost," Statistics and Its Interface, vol. 2, no. 3, pp. 349-360, 2009.
[41] R. Tolosana, P. Delgado-Santos, A. Perez-Uribe, R. Vera-Rodriguez, J. Fierrez, and A. Morales, "DeepWriteSYN: On-Line Handwriting Synthesis via Deep Short-Term Representations," in Proc. 35th AAAI Conference on Artificial Intelligence, 2021.
[42] J. Fierrez-Aguilar, D. Garcia-Romero, J. Ortega-Garcia, and J. Gonzalez-Rodriguez, "Adapted User-Dependent Multimodal Biometric Authentication Exploiting General Information," Pattern Recognition Letters, vol. 26, no. 16, pp. 2628-2639, 2005.
[43] C. T. Fuentes, S. H. Mostofsky, and A. J. Bastian, "Children with Autism Show Specific Handwriting Impairments," Neurology, vol. 73, no. 19, pp. 1532-1537, 2009.
[44] J. Fierrez, A. Morales, R. Vera-Rodriguez, and D. Camacho, "Multiple Classifiers in Biometrics. Part 1: Fundamentals and Review," Information Fusion, vol. 44, pp. 57-64, 2018.
[45] R. Castrillon, A. Acien, J. R. Orozco-Arroyave, A. Morales, J. F. Vargas, R. Vera-Rodriguez, J. Fierrez, J. Ortega-Garcia, and A. Villegas, "Characterization of the Handwriting Skills as a Biomarker for Parkinson Disease," in Proc. IEEE Int. Conf. on Automatic Face and Gesture Recognition, 2019.
Ruben Tolosana received the M.Sc. degree in Telecommunication Engineering and the Ph.D. degree in Computer and Telecommunication Engineering from Universidad Autonoma de Madrid, in 2014 and 2019, respectively. In 2014, he joined the Biometrics and Data Pattern Analytics - BiDA Lab at the Universidad Autonoma de Madrid, where he is currently collaborating as a postdoctoral researcher. Since then, Ruben has received several awards, such as the FPU research fellowship from the Spanish MECD (2015) and the European Biometrics Industry Award (2018). His research interests are mainly focused on signal and image processing, pattern recognition, and machine learning, particularly in the areas of DeepFakes, HCI, and Biometrics. He is the author of several publications and also collaborates as a reviewer for high-impact conferences (WACV, ICPR, ICDAR, IJCB, etc.) and journals (IEEE TPAMI, TCYB, TIFS, TIP, ACM CSUR, etc.). Finally, he is also actively involved in several National and European projects.
Juan Carlos Ruiz-Garcia received his B.Sc. degree in Computer Science Engineering from the Universidad de Granada, Spain, in 2019. He is currently studying for the M.Sc. degree at the Universidad Autonoma de Madrid. In addition, in April 2020, he joined the Biometrics and Data Pattern Analytics - BiDA Lab as a research assistant at the same university. His research focuses on the use of machine learning for e-Learning and e-Health.
Ruben Vera-Rodriguez received the M.Sc. degree in telecommunications engineering from Universidad de Sevilla, Spain, in 2006, and the Ph.D. degree in electrical and electronic engineering from Swansea University, U.K., in 2010. Since 2010, he has been affiliated with the Biometric Recognition Group, Universidad Autonoma de Madrid, Spain, where he has been an Associate Professor since 2018. His research interests include signal and image processing, pattern recognition, HCI, and biometrics, with emphasis on signature, face, and gait verification and forensic applications of biometrics. Ruben has published over 100 scientific articles in international journals and conferences. He is actively involved in several National and European projects focused on biometrics. Ruben was Program Chair for the IEEE 51st International Carnahan Conference on Security and Technology (ICCST) in 2017, the 23rd Iberoamerican Congress on Pattern Recognition (CIARP 2018) in 2018, and the International Conference on Biometric Engineering and Applications (ICBEA 2019) in 2019.
Jaime Herreros-Rodriguez (JHR) received the degree in Medicine in 2006 from Universidad Autónoma de Madrid and the title of neurologist in 2010, and he was awarded the title of Doctor in Medicine by the Universidad Complutense de Madrid (2019) with a distinction Cum Laude given unanimously for his doctoral thesis on migraine. He is also the author of several publications on migraine and parkinsonism. He has collaborated in different research projects related to many neurological disorders, mainly Alzheimer's and Parkinson's disease. JHR has been a neurology and neurosurgery professor in the CTO group since 2008.
Sergio Romero-Tapiador received the B.Sc. in Computer Science and Engineering in 2020 from Universidad Autonoma de Madrid. Since September 2019, he has been a member of the Biometrics and Data Pattern Analytics - BiDA Lab at the Universidad Autonoma de Madrid, where he is currently collaborating as an assistant researcher. Among his research activities, he mainly works on Pattern Recognition and Machine Learning, particularly in the area of DeepFakes.
Aythami Morales received his M.Sc. degree in Telecommunication Engineering in 2006 from ULPGC. He received his Ph.D. degree from ULPGC in 2011. He conducts his research in the BiDA Lab at Universidad Autónoma de Madrid, where he is currently an Associate Professor. He has performed research stays at the Biometric Research Laboratory at Michigan State University, the Biometric Research Center at Hong Kong Polytechnic University, the Biometric System Laboratory at University of Bologna, and the Schepens Eye Research Institute. His research interests include pattern recognition, machine learning, trustworthy AI, and biometrics. He is the author of more than 100 scientific articles published in international journals and conferences, and 4 patents. He has received awards from ULPGC, La Caja de Canarias, SPEGC, and COIT. He has participated in several National and European projects in collaboration with other universities and private entities such as ULPGC, UPM, EUPMt, Accenture, Unión Fenosa, Soluziona, BBVA…