Publication


Featured research published by Katherine Chiluiza.


International Conference on Multimodal Interfaces | 2013

Expertise estimation based on simple multimodal features

Xavier Ochoa; Katherine Chiluiza; Gonzalo Gabriel Méndez; Gonzalo Luzardo; Bruno Guamán; James Castells

Multimodal Learning Analytics is a field that studies how to process learning data from dissimilar sources in order to automatically find useful information to provide feedback on the learning process. This work processes the video, audio, and pen-stroke information included in the Math Data Corpus, a set of multimodal resources provided to the participants of the Second International Workshop on Multimodal Learning Analytics. The result of this processing is a set of simple features that could discriminate between experts and non-experts in groups of students solving mathematical problems. The main finding is that several of those simple features, namely the percentage of time that the students use the calculator, the speed at which the student writes or draws, and the percentage of time that the student mentions numbers or mathematical terms, are good discriminators between expert and non-expert students. Precision levels of 63% are obtained for individual problems and up to 80% when full sessions (aggregations of 16 problems) are analyzed. While the results are specific to the recorded settings, the methodology used to obtain and analyze the features could be used to create discrimination models for other contexts.
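The per-problem discrimination and the 16-problem session aggregation described in the abstract can be sketched as follows. The feature names, threshold values, and majority-vote rule below are illustrative assumptions, not the classifiers actually trained in the paper:

```python
# Hedged sketch: labeling a problem attempt as expert/non-expert from three
# simple multimodal features, then aggregating labels over a full session.
# Feature keys and thresholds are hypothetical stand-ins for a trained model.

def classify_problem(features):
    """Label one problem attempt from hypothetical feature keys:
       calc_time_pct  - fraction of time using the calculator
       stroke_speed   - writing/drawing speed (arbitrary units)
       math_talk_pct  - fraction of time mentioning numbers/math terms
    A two-out-of-three vote over per-feature thresholds decides the label."""
    votes = [
        features["calc_time_pct"] < 0.30,   # experts calculate less
        features["stroke_speed"] > 1.0,     # experts write/draw faster
        features["math_talk_pct"] > 0.20,   # experts use more math talk
    ]
    return "expert" if sum(votes) >= 2 else "non-expert"

def classify_session(problem_features):
    """Aggregate per-problem labels over a session (e.g. 16 problems)
    by majority vote, mirroring the per-session analysis."""
    labels = [classify_problem(f) for f in problem_features]
    experts = sum(1 for label in labels if label == "expert")
    return "expert" if experts > len(labels) / 2 else "non-expert"
```

Aggregating over a whole session smooths out per-problem noise, which is consistent with the precision jump from 63% (single problems) to 80% (full sessions) reported above.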


Proceedings of the 2014 ACM workshop on Multimodal Learning Analytics Workshop and Grand Challenge | 2014

Presentation Skills Estimation Based on Video and Kinect Data Analysis

Vanessa Echeverria; Allan Avendaño; Katherine Chiluiza; Aníbal Vásquez; Xavier Ochoa

This paper identifies, by means of video and Kinect data, a set of predictors that estimate the presentation skills of 448 individual students. Two evaluation criteria were predicted: eye contact, and posture and body language. Machine-learning evaluations resulted in models that predicted the performance level (good or poor) of the presenters with 68% and 63% of instances correctly classified for the eye-contact and posture-and-body-language criteria, respectively. Furthermore, the results suggest that certain features, such as arm movement and smoothness, are highly significant in predicting the level of presentation-skill development. The paper finishes with conclusions and related ideas for future work.
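Movement smoothness, one of the features highlighted above, is commonly quantified via jerk (the third derivative of position). A minimal sketch, assuming a 30 Hz Kinect-style joint trajectory; the paper's exact smoothness metric is not specified here, so the jerk-based formulation is an illustrative choice:

```python
# Hedged sketch: a jerk-based smoothness feature for one joint coordinate.
# The 30 Hz sampling rate and the mean-squared-jerk metric are assumptions.

def smoothness(positions, dt=1 / 30):
    """Mean squared jerk of a 1-D joint trajectory sampled every dt seconds.
    Lower values indicate smoother motion (zero for constant velocity)."""
    # Successive finite differences: velocity, acceleration, then jerk.
    vel = [(b - a) / dt for a, b in zip(positions, positions[1:])]
    acc = [(b - a) / dt for a, b in zip(vel, vel[1:])]
    jerk = [(b - a) / dt for a, b in zip(acc, acc[1:])]
    return sum(j * j for j in jerk) / len(jerk)
```

A steadily moving arm (constant velocity) yields zero jerk, while oscillating, abrupt motion yields a large value, so the feature separates fluid gestures from fidgeting.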


Learning Analytics and Knowledge | 2014

Techniques for data-driven curriculum analysis

Gonzalo Gabriel Méndez; Xavier Ochoa; Katherine Chiluiza

One of the key promises of Learning Analytics research is to create tools that could help educational institutions gain better insight into the inner workings of their programs, in order to tune or correct them. This work presents a set of simple techniques that, applied to readily available historical academic data, could provide such insights. The techniques described are real course difficulty estimation, dependence estimation, curriculum coherence, dropout paths, and load/performance graphs. The description of these techniques is accompanied by their application to real academic data from a Computer Science program. The results of the analysis are used to derive recommendations for curriculum re-design.
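The "real course difficulty" idea can be sketched with ordinary grade records: instead of a raw failure rate, compare each student's grade in a course against that student's own overall average, so hard courses stand out even in strong cohorts. The record format and the deficit-based metric below are illustrative assumptions, not the paper's exact technique:

```python
# Hedged sketch: estimating course difficulty from historical academic data.
# records is a hypothetical iterable of (student_id, course_id, grade).

from collections import defaultdict

def course_difficulty(records):
    """Return {course_id: mean(student_overall_avg - grade_in_course)}.
    Higher values mean students underperform their own average there,
    i.e. the course is harder in a cohort-independent sense."""
    by_student = defaultdict(list)
    for student, _, grade in records:
        by_student[student].append(grade)
    overall = {s: sum(g) / len(g) for s, g in by_student.items()}

    deficits = defaultdict(list)
    for student, course, grade in records:
        deficits[course].append(overall[student] - grade)
    return {c: sum(d) / len(d) for c, d in deficits.items()}
```

Because each grade is measured against the same student's baseline, a course taken mostly by top students is not mislabeled as easy simply because few of them fail it.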


Proceedings of the 2014 ACM workshop on Multimodal Learning Analytics Workshop and Grand Challenge | 2014

Estimation of Presentations Skills Based on Slides and Audio Features

Gonzalo Luzardo; Bruno Guamán; Katherine Chiluiza; Jaime Castells; Xavier Ochoa

This paper proposes a simple estimation of the quality of student oral presentations. It is based on the study and analysis of features extracted from the audio and digital slides of 448 presentations. The main goal of this work is to automatically predict the values assigned by professors to different criteria in a presentation evaluation rubric. Machine-learning methods were used to create several models that classify students into two groups: high and low performers. The models created from slide features were accurate up to 65%. The most relevant features for the slide-based models were the number of words, images, and tables, and the maximum font size. The audio-based models reached up to 69% accuracy, with pitch- and filled-pause-related features being the most significant. The relatively high accuracy obtained with these very simple features encourages the development of automatic estimation tools for improving presentation skills.
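The four slide-side features the abstract singles out (word count, image count, table count, maximum font size) are cheap to compute once a deck is parsed. A minimal sketch over a hypothetical in-memory slide representation, standing in for an actual PowerPoint/PDF parser:

```python
# Hedged sketch: deck-level slide features for presentation-quality models.
# Each slide is a hypothetical dict with keys 'text', 'images', 'tables',
# and 'font_sizes' (point sizes of the text runs on that slide).

def slide_deck_features(slides):
    """Aggregate per-slide counts into the deck-level feature vector
    matching the four features highlighted in the abstract."""
    return {
        "n_words": sum(len(s["text"].split()) for s in slides),
        "n_images": sum(s["images"] for s in slides),
        "n_tables": sum(s["tables"] for s in slides),
        "max_font_pt": max(max(s["font_sizes"]) for s in slides),
    }
```

Such a vector is what a downstream classifier (high vs. low performer) would consume; the dict keys here are illustrative names, not the paper's.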


International Conference on Multimodal Interfaces | 2015

Multimodal Selfies: Designing a Multimodal Recording Device for Students in Traditional Classrooms

Federico Domínguez; Katherine Chiluiza; Vanessa Echeverria; Xavier Ochoa

The traditional recording of student interaction in classrooms has raised privacy concerns among both students and academics. However, the same students are happy to share their daily lives through social media. Perception of data ownership is the key factor in this paradox. This article proposes the design of a personal Multimodal Recording Device (MRD) that could capture the actions of its owner during lectures. The MRD would be able to capture close-range video, audio, writing, and other environmental signals. Unlike traditional centralized recording systems, students would have control over their own recorded data. They could decide to share their information in exchange for access to the recordings of the instructor, notes from their classmates, and analysis of, for example, their attention performance. By sharing their data, students participate in the co-creation of enhanced and synchronized course notes that will benefit all the participating students. This work presents details about how such a device could be built from available components. It also discusses and evaluates the design of such a device, including its foreseeable costs, scalability, flexibility, intrusiveness, and recording quality.


International Conference on Multimodal Interfaces | 2015

2015 Multimodal Learning and Analytics Grand Challenge

Marcelo Worsley; Katherine Chiluiza; Joseph F. Grafsgaard; Xavier Ochoa

Multimodality is an integral part of teaching and learning. Over the past few decades, researchers have been designing, creating, and analyzing novel environments that enable students to experience and demonstrate learning through a variety of modalities. The recent availability of low-cost multimodal sensors, advances in artificial intelligence, and improved techniques for large-scale data analysis have enabled researchers and practitioners to push the boundaries of multimodal learning and multimodal learning analytics. In an effort to continue these developments, the 2015 Multimodal Learning and Analytics Grand Challenge includes a combined focus on new techniques to capture multimodal learning data, as well as the development of rich, multimodal learning applications.


2016 IEEE Ecuador Technical Chapters Meeting (ETCM) | 2016

Fingertip detection approach on depth image sequences for interactive projection system

Arturo Cadena; Rubén Carvajal; Bruno Guamán; Roger Granda; Enrique Pelaez; Katherine Chiluiza

This study presents a vision-based approach for fingertip tracking on multi-touch tabletops which combines infrared and depth image processing. The approach tackles two main issues in tabletop interaction: improving performance for real-time applications and increasing fingertip detection accuracy. A prototype using this fingertip tracking method was implemented with a depth and infrared camera. The approach processes images of the user's arms, hands, and fingertips using depth-space constraints as well as clustering. Fingertip positions are then corrected using additional infrared information. Quantitative results show high fingertip-detection accuracy, with lower error rates compared to previous studies. Increased capabilities for real-time multi-user interaction are further demonstrated through a set of response-time tests.
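The depth-space constraint and clustering steps described above can be sketched as follows: keep only pixels whose depth lies in a near-surface band (e.g. a finger hovering just above the tabletop), then group the surviving pixels into connected clusters. The depth band, units, and grid format are assumptions for illustration, not the prototype's actual parameters:

```python
# Hedged sketch: depth-band segmentation followed by 4-connected clustering,
# in the spirit of the depth-space constraints + clustering pipeline.
# depth is a hypothetical 2-D list of heights (mm) above the table surface.

def find_touch_clusters(depth, near=5, far=15):
    """Return a list of clusters, each a list of (row, col) pixels whose
    depth lies in [near, far), grouped by 4-connected flood fill."""
    rows, cols = len(depth), len(depth[0])
    in_band = lambda r, c: near <= depth[r][c] < far
    seen, clusters = set(), []
    for r in range(rows):
        for c in range(cols):
            if (r, c) in seen or not in_band(r, c):
                continue
            # Flood-fill one connected component of in-band pixels.
            stack, comp = [(r, c)], []
            seen.add((r, c))
            while stack:
                y, x = stack.pop()
                comp.append((y, x))
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if 0 <= ny < rows and 0 <= nx < cols \
                            and (ny, nx) not in seen and in_band(ny, nx):
                        seen.add((ny, nx))
                        stack.append((ny, nx))
            clusters.append(comp)
    return clusters
```

Each resulting cluster is a fingertip candidate whose centroid would then be refined with the infrared image, as the abstract describes.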


International Conference on Multimodal Interfaces | 2014

MLA'14: Third Multimodal Learning Analytics Workshop and Grand Challenges

Xavier Ochoa; Marcelo Worsley; Katherine Chiluiza; Saturnino Luz


Journal of Learning Analytics | 2014

Curricular Design Analysis: A Data-Driven Perspective

Gonzalo Gabriel Méndez; Xavier Ochoa; Katherine Chiluiza; Bram De Wever


2015 Asia-Pacific Conference on Computer Aided System Engineering | 2015

Supporting the Assessment of Collaborative Design Activities in Multi-tabletop Classrooms

Roger Granda; Vanessa Echeverria; Katherine Chiluiza; Marisol Wong-Villacres

Collaboration

Dive into Katherine Chiluiza's collaborations.

Top Co-Authors (all at Escuela Superior Politecnica del Litoral):

Xavier Ochoa
Vanessa Echeverria
Bruno Guamán
Roger Granda
Gonzalo Luzardo
Jaime Castells
Marisol Wong-Villacres
Gabriel Falcones
Gonzalo Gabriel Méndez