Personalized Education in the AI Era: What to Expect Next?
Setareh Maghsudi, Andrew Lan, Jie Xu, Mihaela van der Schaar
Abstract
The objective of personalized learning is to design an effective knowledge acquisition track that matches the learner's strengths and bypasses her weaknesses to ultimately meet her desired goal. This concept emerged several years ago and is being adopted by a rapidly growing number of educational institutions around the globe. In recent years, the boost of artificial intelligence (AI) and machine learning (ML), together with advances in big data analysis, has opened novel perspectives to enhance personalized education in numerous dimensions. By taking advantage of AI/ML methods, the educational platform precisely acquires the student's characteristics. This is done, in part, by observing past experiences as well as by analyzing the available big data through exploring the learners' features and similarities. It can, for example, recommend the most appropriate content among the numerous accessible ones, advise a well-designed long-term curriculum, connect appropriate learners to each other, and evaluate performance accurately. Still, several aspects of AI-based personalized education remain unexplored. These include, among others, compensating for the adverse effects of the absence of peers, creating and maintaining motivation for learning, increasing diversity, and removing the biases induced by data and algorithms. In this paper, while providing a brief review of state-of-the-art research, we investigate the challenges of AI/ML-based personalized education and discuss potential solutions.
Keywords: Artificial intelligence, Learning platform, Machine learning, Personalized education
I. Introduction
The last decade has witnessed an explosion in the number of web-based learning systems due to the increasing demand for higher-level education, the limited number of teaching personnel, advances in information technology and artificial intelligence, and, more recently, COVID-19. In the past few years, to enhance conventional classrooms, to bridge the constraints of time and distance, and
S. Maghsudi is with the Department of Computer Science, University of Tübingen, Germany (email: [email protected]). A. Lan is with the College of Information and Computer Sciences, University of Massachusetts Amherst, MA, USA (email: [email protected]). J. Xu is with the Department of Electrical and Computer Engineering, University of Miami, FL, USA (email: [email protected]). M. van der Schaar is with the Faculty of Mathematics, University of Cambridge, UK (email: [email protected]).
to improve fairness by making high-quality education accessible, most universities have integrated Massive Open Online Course (MOOC) platforms such as the edX consortium into their education systems. Also, several schools have added online labs to their structures, where students, especially those who cannot access physical labs, can perform experiments. Besides, there has been significant growth in the development of other online educational tools that simplify learning. These include, for example, software for text summarization in different domains, as well as tools that produce questions and tests and evaluate the answers, which can be of great assistance not only to students but also to teachers. Several advantages of these systems over traditional classroom teaching are: (i) They provide flexibility to the student in choosing what to learn and when to learn; (ii) They do not require the presence of an interactive human teacher; (iii) There are no limitations in terms of the number of students who can participate in the course.

Fig. 1. The baseline ecosystem of AI-empowered personalized education, connecting the stakeholders (learners, instructors, policy makers, and content designers) through personalized learning experiences, teaching support tools, effectiveness validation, usability feedback, population-level insights, fairness/privacy/transparency regulations, the learning content repository, and design and quality feedback.
Figure 1 shows the baseline ecosystem of online personalized education, including all the stakeholders together with the crucial factors and performance metrics. However, the currently available online teaching platforms have significant limitations. To a large extent, personalized education has been reduced to a specific type of 'recommender system', although its potential goes far beyond advising a series of lectures on an online platform that might be interesting to a specific user. One fundamental difference between existing recommender systems and personalized education is the optimization objective: The former focuses on some form of user engagement to maximize profit, which is system-centric and relatively easy to quantify, whereas the latter focuses on some form of learning outcome, which is student-centric and hard to define. ML/AI-enabled education is a response of great potential to these shortcomings. It creates a new and more flexible genre of learning technology that adapts to student learning and allocates resources as needed. It takes advantage of the strengths of both online tools and individual tutoring.
Fig. 2. The basic concept of AI-powered personalized education: an ML/AI-based decision-making module maps historical educational big data, individual features, and the batch of available learning materials to a sequence of personalized learning materials, personalized incentives, individual recommendations for lifelong learning, and the formation of learning networks.

Fig. 3. A list of (some of) the topics in personalized education, organized by three different aspects: technical, personal, and social. We focus on six of them in this paper.
As such, AI-enabled personalized education promises to yield many of the benefits of one-on-one instruction at a per-student cost similar to large university lecture classes. The system applies to both online courses and courses that combine classroom and online instruction. As shown in
Figure 2, ML/AI-enabled education comprises a large set of decision-making strategies that collectively map the available data, together with individual features, to a variety of personalized educational materials and recommendations. Data can be collected on performance in both traditional assignments (problem sets, computer programs, laboratory work) and online exercises and tests. The system includes built-in assessment tools as an essential part of its optimization of lesson sequences. As such, it supports the educational community in developing new teaching modalities in a broad range of disciplines. However, despite intensive research efforts in this decade, a variety of aspects of personalized education remain unexplored, including both dark and bright sides. In this paper, we discuss six core topics, review existing work, outline their limitations, and propose future research directions; see
Figure 3 for an overview.
When discussing any form of education, quality is an inevitable keyword. The quality of education depends largely on the quality of the available learning content and the quality of the personalized recommendations that guide each learner to the most suitable learning content. So far, researchers have studied the production of learning content, from developing AI-driven smart learning content such as intelligent, interactive textbooks and game-based learning platforms to automatically generating learning content from the wild. Reference [1], for example, develops a sentence deletion method for text simplification. Besides, in [2], the authors investigate the effectiveness of discourse in multimedia to extract knowledge from textbooks. Moreover, a large body of existing work investigates the recommendation of both macro-level and micro-level learning content, including courses in learners' degree plans as well as specific remedial content such as lecture notes, videos, and practice problems. For example, in [3], the authors take advantage of a multi-armed bandit framework to optimize the selection of learning resources and questions to satisfy the needs of each individual student. Moreover, Reference [4] develops an e-learning recommender system framework based on two concepts, peer learning and social learning, which encourage students to cooperate and learn jointly. Despite great efforts, several challenges remain to be addressed. These include content recommendation at heterogeneous levels, the recommendation of a bundle of connected contents followed by performance evaluation, and the Pareto-optimization of conflicting objectives in content recommendation. We discuss this progress and the future steps in
Section II. Historically, education is tightly coupled with evaluation. In personalized education, assessment and evaluation concern both the learner's performance and the effectiveness of the intelligent learning platform. Early approaches to learner assessment, such as 'classical testing theory' (CTT), use summaries of graded standardized tests. Recent approaches include 'item response theory' (IRT) models, which enable the estimation of latent knowledge mastery levels, and knowledge tracing models, which track the evolution of a learner's knowledge. In [5], the authors compare CTT and IRT. Methods such as 'computerized adaptive testing' improve the efficiency of assessments. The current approaches to evaluating learning platforms use rigorous experiments, often large-scale randomized controlled trials. In this area, open problems include the prediction of learners' future performance, which enables providing better recommendations and more accurate feedback. This is referred to as the knowledge tracing (KT) problem, for which several methods have been developed in the past few decades. As an example among many others, [6] discusses a Bayesian framework for KT. Another challenge is to reduce the information loss while grading the input arriving from the learner, by accurate interpretation of the knowledge level based on the test design. We elaborate on and address such challenges in
Section III. The huge advances in science, technology, and healthcare have changed the working life of humans. Individuals have far more alternatives when choosing a job, tend to change jobs more frequently than before, are more open to mobility, and have careers that span a long period of life. As such, continuing education, which aims at advancing one's educational process, as well as lifelong learning, i.e., pursuing additional professional qualifications, are important components of educational policy around the world. Implementing these two concepts successfully has a significant impact on social welfare by developing new skills that enhance personal and professional life. During the past decade, AI/ML-based personalized education has been under intensive investigation from several perspectives; nonetheless, the aforementioned aspects are largely neglected. Indeed, personalized education shall accompany the learner throughout her life, which can be difficult and costly to implement. Other challenges include the lack of appropriate data, potentially long delays in feedback, high diversity, and fast dynamics in the environment. For example, in [7], the authors design a new genre of educational technology, personal computer systems, that supports learning from any location throughout a lifetime. Another research direction is to enable learning systems to learn continuously. Reference [8], e.g., investigates the ability of neural networks to enable lifelong learning. We elaborate more on this topic in
Section IV. Similar to any other task, humans require motivation for learning. Generally, incentives for learning can be defined as an inducement or supplemental reward that serves as a motivational device for intended learning [9]. Presumably, the most conventional models of incentive are the 'grade' and the 'certificate', which are implemented as part of learning platforms to motivate students. The strength of such motivation depends on the validity and acceptance of such certificates by different authorities such as employers. Nonetheless, employing AI methods enables incentive design far beyond handing out a certificate. This includes, for example, monetary rewards in the form of bonuses for online learning materials. Incentives can also be induced by soft methods, such as gamification based on the learner's character to promote continuous learning, or by adapting the features of the learning environment based on the learner's traits to engage her in the learning process as far as possible. In [10], for example, the authors investigate the effects of gamification on students' motivation from several perspectives. Besides, [11] discusses several factors affecting motivation in online learning, together with their relative salience. We discuss such challenges and methods in
Section V. Education is social, and learners can benefit greatly from their peers. Therefore, it is urgent to develop effective ways to build networks that serve as a conduit of knowledge for learners to interact with each other. In its current form, personalized education suffers from a lack of student-student and student-teacher connections and interactions, which have an unquestionably positive impact on learning through discussions, joint efforts, and brainstorming. In [12], the authors study building and sustaining community in asynchronous learning networks, i.e., when the learners are physically separated. Moreover, Reference [13] investigates and compares the influence of such communities from both students' and teachers' perspectives. Despite past research efforts, we believe that, by capitalizing on AI and ML methods, online platforms have more to offer, especially for building the knowledge and expertise networks that facilitate the assimilation and dissemination of knowledge and, consequently, by enabling close interactions (in terms of mentorship, friendship, coworkers, and the like), the creation of knowledge. Personalized education platforms can promote autonomous network formation by encouraging learners to interact. Moreover, the platforms can establish links among those learners who satisfy some similarity conditions and hence can be useful to each other for cooperation, inspiration, and motivation. We elaborate on these issues in
Section VI. In many different ways, education affects the well-being of humans, and thereby society, both in the short term and the long term. As such, fairness is a highly important aspect of education, regardless of whether it takes place in conventional classrooms or on modern platforms that can personalize the learning experience. Despite this great importance, personalized education, similar to its traditional counterpart, might result in and strengthen inequality. This arises, for example, due to unequal access to learning platforms, biases in training data, inaccuracy in algorithm design, and the like. Indeed, existing research shows that some subgroups of students, mainly those privileged also in conventional education forms, would profit from personalized education more than their peers. To address this issue more rigorously, there has been intensive effort to develop appropriate fairness models [14]. Moreover, several research works, such as [15], study the fairness of predictive algorithms in educational settings. Another crucial issue is 'diversity'. Today, it is well established that diversity promotes innovation and efficiency in the workplace. Nonetheless, given the social responsibility of education, recruiting diverse talents alone does not suffice. AI-based personalized education platforms can be a boost to diversify the education environment, for example, by rewarding collaborative learning in diverse networks. We discuss these topics in
Section VII.

II. Content Production and Recommendation
The quality of education ultimately depends on the quality of the learning content. Creating new content requires the wisdom of human content designers and educational experts; to date, AIs have not shown the capability of creating learning content on their own. However, they still have plenty to offer in content production by automating mundane jobs and helping humans in tasks where human input is necessary. Specifically, the role of AI should be to (i) take away repetitive tasks that can be automated and (ii) assist humans by providing feedback extracted from data during the process of content production in a human-in-the-loop manner. There are ample future research directions in content production; we list a few below.
• Content summarization and question generation:
In many educational domains, knowledge is factual. For example, in History, one often needs to remember specific detailed facts about historical events. Even in scientific domains such as Biology, there is factual knowledge such as the size and life span of an animal. In this case, there are many natural language processing (NLP)-based tools that can be used for content production. For example, text summarization tools can sort through long, sometimes redundant textbook sections and extract key facts for remedial studies. This is not only helpful but sometimes crucial to certain learner groups, such as those with learning disabilities. Moreover, automatic question generation can effectively produce high-quality factual assessment questions that have short, textual answers [16]. An example of automated question generation is shown in
Table I; we reversed a long short-term memory (LSTM) network-driven question answering pipeline trained on common question answering datasets, turned it into a question generation pipeline, and applied it to textbooks. Human experts have indicated that the quality of the generated questions is higher than that of questions generated by other methods. See [16] for details.
• Multi-modal content understanding:
Many educational domains involve multi-modal learning content, such as text, formulas, figures, and diagrams. When a learner fails to answer an assessment question correctly, personalized education systems need to automatically retrieve relevant content to help the learner resolve their confusion (by retrieving examples and explanations) or give the learner more practice opportunities (by retrieving assessment questions). Retrieving content within the same modality is relatively easy; for example, when a learner answers a textual question incorrectly, it is possible to use information retrieval methods to extract relevant textbook chunks or lecture slides. However, when the most helpful content is in another modality, for example, when a Venn diagram is the most effective at helping a learner clear up a misconception in a probability question involving text and mathematical formulas, it is hard to retrieve the diagram. Therefore, more work needs to be done when the domain includes multi-modal content. To understand these content modalities and use them for content production, we need to learn universal representations across all modalities, possibly using embedding approaches to map multiple modalities into a shared vector space [17].
• Human-in-the-loop content design:
Even for humans, learning content is not created in one shot; just as textbooks have different editions, learning content is frequently edited and updated over time. Therefore, during this multi-step process, we can use AIs to act as (possibly even interactive) assistants to content designers. Duties that can be assigned to AIs include (i) analyzing learners' data to identify areas of priority for new content and assessment questions that need to be improved (see Section III for discussions on how existing learner assessment models can also provide information on question quality); (ii) providing drafts of instructor responses and performing automated checking of human-generated content using NLP tools; (iii) using crowdsourcing to put the learning content together by soliciting on-demand feedback [18]. The last task is especially important in online educational settings, where learning
TABLE I
Example of two automatically generated assessment questions for two different answers with the same input context from a textbook. The answers are underlined and marked with different colors in the input context.

Context (Biology): On each chromosome, there are thousands of genes that are responsible for determining the genotype and phenotype of the individual. A gene is defined as a sequence of DNA that codes for a functional product. The human haploid genome contains 3 billion base pairs and has between 20,000 and 25,000 functional genes.

Generated questions: How many base pairs are on the human genome? How many functional genes are on the human haploid genome?

follows during frequent exchanges between learners and human instructors and assistants [19]. Even with high-quality learning content, the presentation, i.e., the personalized recommendation of the right learning content to the right learner at the right time, is crucial to optimize learning outcomes. Fortunately, this is an area where AIs can excel: By automatically deploying recommendations and analyzing data on learners' performance, they can quantify the effect of learning content on certain learners in terms of specific learning outcomes and detect the most effective ones. In contrast, humans, even educational experts, have historically used theoretical models of learning and do not fully take advantage of such data. Among the several directions for future research in this area, we discuss a few in the following.
• Recommendations at the microscopic and macroscopic levels:
Learning content is organized at multiple levels, down to individual paragraphs and assessment questions, and up to courses and textbooks that organize several pieces of learning content together. Therefore, we need to study content recommendations at multiple levels: (i) the microscopic level, such as individual question and lecture video suggestions [20]; and (ii) the macroscopic level, such as course recommendation, especially for learners taking massive open online courses (MOOCs) [21].
• Efficient experimentation and synthetic learner models:
Traditionally, the fields of learning science and education have relied on rigorous A/B testing to validate the educational impact of learning content, usually in terms of its ability to improve learning outcomes for learners in the experimental group over those in the control group. However, this approach leads to long experimental cycles, since (i) one can typically validate only one piece of learning content at a time, and (ii) metrics such as long-term learning outcomes naturally require long experimental cycles. Therefore, it is imperative to search for novel tools that enable rapid experimentation. Possible solutions include using Bayesian optimization to test multiple contents simultaneously [22], or utilizing reinforcement learning (RL) as more and more learners use a piece of learning content. In the past, using RL to learn instructional policies (content recommendation can be viewed as a form of instructional policy) has been limited due to the lack of large-scale real learner data; however, recent approaches have looked at using data- or cognitive theory-driven synthetic learner models to simulate learner data [23].
• Conflicting objectives:
There is no unified objective in personalized learning, since learning outcomes are themselves defined at multiple timescales, and the optimal action may differ across objectives. For example, the learning content used in a practice session that maximizes a learner's performance on tomorrow's midterm exam may differ from the one that maximizes their overall course grade, which may in turn differ from the one that maximizes their chance of getting a specific job after graduation. Therefore, we need to develop personalization algorithms that can balance multiple objectives and even resolve potential conflicts among them. We also need to understand how these objectives interact with each other; for example, which skills taught in courses and schools carry over after graduation, a key issue in lifelong learning (discussed in detail in Section IV).
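To make the trade-off concrete, one simple (though by no means the only) way to balance conflicting objectives is weighted scalarization: each candidate content is scored against every objective, and a weight vector encodes the platform's current priority. The sketch below is purely illustrative; the content items, objective names, and weight values are hypothetical.

```python
def scalarized_score(content, weights):
    """Collapse several (possibly conflicting) objective estimates into a
    single score via weighted scalarization; the weight vector encodes the
    trade-off between timescales."""
    return sum(weights[k] * content[k] for k in weights)

def recommend(contents, weights):
    """Return the content item with the highest scalarized score."""
    return max(contents, key=lambda c: scalarized_score(c, weights))

# Hypothetical estimated outcome gains for three pieces of content.
contents = [
    {"id": "drill",   "exam_gain": 0.9, "course_gain": 0.4, "career_gain": 0.1},
    {"id": "project", "exam_gain": 0.3, "course_gain": 0.7, "career_gain": 0.9},
    {"id": "lecture", "exam_gain": 0.6, "course_gain": 0.6, "career_gain": 0.4},
]
# Two weightings of the same objectives yield different recommendations.
short_term = {"exam_gain": 0.80, "course_gain": 0.15, "career_gain": 0.05}
long_term = {"exam_gain": 0.10, "course_gain": 0.30, "career_gain": 0.60}
short_term_choice = recommend(contents, short_term)["id"]  # exam-oriented pick
long_term_choice = recommend(contents, long_term)["id"]    # career-oriented pick
```

A Pareto-based alternative would instead keep all non-dominated contents and defer the final trade-off to the learner or instructor.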
Figure 4 shows the interplay between different elements, such as context, prediction, feedback, and the like, to optimize course recommendation. It is worth noting that the approaches described above are generic in the sense that they have wide applicability to different educational areas, including signal processing, possibly with minor domain-dependent adaptations. As an example, in [24], the authors apply several of the aforementioned ideas to develop eTutor, a personalized web-based education system that learns the optimal sequence of teaching materials to show based on the student's context and feedback about the previously shown teaching materials. In an experiment, they apply the eTutor system in the following scenario: The students have studied digital signal processing in the past, and the role of eTutor is to recommend learning materials with the goal of refreshing their minds about the discrete Fourier transform in the minimum amount of time. eTutor shows better performance compared to random and fixed selection rules.
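The reference does not spell out eTutor's exact algorithm, so the following is only a hedged sketch of the general idea of learning a selection rule from student feedback, here instantiated as a simple epsilon-greedy multi-armed bandit over teaching materials; the material names and the reward model are invented for illustration.

```python
import random

class EpsilonGreedyTutor:
    """Minimal epsilon-greedy bandit over teaching materials: recommend a
    material, observe the learner's feedback (e.g., a quiz score in [0, 1]),
    and update that material's estimated value."""

    def __init__(self, materials, epsilon=0.1, seed=0):
        self.materials = list(materials)
        self.epsilon = epsilon
        self.rng = random.Random(seed)
        self.counts = {m: 0 for m in self.materials}
        self.values = {m: 0.0 for m in self.materials}

    def recommend(self):
        # With probability epsilon, explore a random material;
        # otherwise exploit the current best value estimate.
        if self.rng.random() < self.epsilon:
            return self.rng.choice(self.materials)
        return max(self.materials, key=self.values.get)

    def update(self, material, reward):
        # Incremental mean of the rewards observed for this material.
        self.counts[material] += 1
        self.values[material] += (reward - self.values[material]) / self.counts[material]
```

In a real deployment, the reward would come from the learner's measured performance after consuming the material, and contextual features (the student's background, the materials already shown) would condition the choice, as the context-aware design of eTutor suggests.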
III. Assessment and Evaluation
A key problem in learner assessment is to estimate how well learners master each knowledge component/concept/skill from their responses to assessment questions. Related works can be broadly classified into two categories: (i) static models that analyze the data generated as learners take an assessment, thereby assuming that each learner's knowledge remains constant during the assessment, and (ii) dynamic models that track learners' progress throughout a (possibly long) period as their knowledge levels evolve. Below we provide a short overview of each category.
• Static models - Item response theory (IRT):
The basic 1PL IRT model characterizes the probability that learner j answers question i correctly as

P(y_{i,j} = 1) = σ(a_j − d_i),

where y_{i,j} denotes the binary-valued graded response of learner j to question i, with 1 implying a correct response and 0 otherwise. Moreover, a_j ∈ R and d_i ∈ R are scalars that correspond to the learner's ability and the question's difficulty, respectively. Also, σ(·) is a link function, usually the sigmoid function or the inverse probit link function [25]. Later extensions include 2PL IRT models, which add a multiplicative scaling parameter corresponding to the ability of each question to differentiate high-capacity learners from low-capacity ones. Besides, 3PL IRT models add another scalar outside of the link function, corresponding to the probability that an item can be guessed correctly. Finally, multidimensional IRT models use vectors instead of scalars to parametrize abilities and difficulties, capturing multiple aspects of one's ability [26]. Using the aforementioned models, one can (i) obtain relatively stable estimates of learners' ability levels by denoising learners' responses and (ii) estimate the quality of each assessment question.

Fig. 4. A detailed framework for course recommendation.

• Dynamic models - Knowledge tracing (KT):
KT models consist of two parts, a learner performance model f(·) and a learner knowledge evolution model g(·), as

y_t ∼ f(h_t), h_t ∼ g(h_{t−1}),

where t denotes a discrete-time index. Early KT models such as Bayesian KT [27] treat knowledge (h_t) as a binary-valued scalar that characterizes whether or not a learner masters the (single) concept covered by a question; the performance and knowledge evolution models are simply noisy binary channels. Later, factor analysis-based KT models use a set of hand-crafted features, such as the number of previous attempts, successes, and failures on each concept, to represent a learner's knowledge levels [28]. These models require expert labels to associate questions with concepts, resulting in excellent interpretability, since they can effectively estimate the knowledge level of each learner on expert-defined concepts. Recent KT models incorporate deep learning, especially recurrent neural networks, into the KT framework, where knowledge is represented as a latent vector h_t [29]. These models achieve state-of-the-art performance in predicting future learner responses, although in some cases the advantage is not significant despite the price of losing some interpretability [30].

Fig. 5. Overview of the AKT method. We use IRT-based raw embeddings for questions and responses. We compute context-aware representations of questions and responses using two encoders. We then use a knowledge retriever to retrieve past acquired knowledge for each learner using a monotonic attention mechanism, which is used for performance prediction.

The existing learner assessment models have several bottlenecks. First, there are not many models with the ability to achieve state-of-the-art performance in both data fitting (i.e., future performance prediction) and feedback generation (i.e., providing interpretable feedback to learners and instructors for downstream tasks such as personalization).
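Both model families above admit very small numerical sketches. Below, a 1PL IRT response probability and a single Bayesian KT step are written out in plain Python; the guess, slip, and learning probabilities are illustrative values, not estimates fitted from data.

```python
import math

def irt_1pl(ability, difficulty):
    """1PL IRT with a sigmoid link: P(y = 1) = sigma(a_j - d_i)."""
    return 1.0 / (1.0 + math.exp(-(ability - difficulty)))

def bkt_update(p_mastery, correct, p_guess=0.2, p_slip=0.1, p_learn=0.15):
    """One Bayesian knowledge tracing step: Bayes' rule over binary mastery
    given a single graded response, followed by the learning transition.
    The guess/slip/learn probabilities here are illustrative, not fitted."""
    if correct:
        like_mastered, like_unmastered = 1.0 - p_slip, p_guess
    else:
        like_mastered, like_unmastered = p_slip, 1.0 - p_guess
    num = p_mastery * like_mastered
    posterior = num / (num + (1.0 - p_mastery) * like_unmastered)
    # The learner may acquire the skill between practice opportunities.
    return posterior + (1.0 - posterior) * p_learn

# Track estimated mastery over a short graded-response sequence.
p = 0.3
for answer in [True, True, False, True]:
    p = bkt_update(p, answer)
```

Static IRT estimates and dynamic KT updates of this kind are the building blocks that deep models attempt to retain in an interpretable form.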
Therefore, it is imperative to develop new deep learning-based models that not only inherit the flexibility of neural networks to accurately predict learner performance but also build in cognitive theory-inspired structures to promote interpretability and enable the generation of meaningful feedback. As an example, in the recently developed attentive knowledge tracing (AKT) model [31], visualized in Figure 5, we combined state-of-the-art attention networks with cognitive theory-inspired modules. We used a monotonic attention mechanism, in which weights exponentially decay over time, and question embeddings parametrized by the 1PL IRT model to prevent overfitting. Experimental results show that AKT not only outperforms existing KT models but also exhibits some interpretability; see [31] for details. Moreover, existing learner assessment models almost exclusively operate on graded learner responses; however, converting raw learner responses to graded responses leads to considerable information loss. For multiple-choice questions, different distractor options are not created equal; choosing certain incorrect options over others might indicate that a learner exhibits a certain misconception. However, this information is lost when the learner's option choice is converted to a graded response. Moreover, due to their superior pedagogical value, open-response questions are widely adopted; the specific open-ended response a learner enters contains rich information about her knowledge state. Therefore, it is vital to develop models that go deeper than the graded response level, into the raw response level.
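The decay idea behind AKT's monotonic attention can be illustrated in isolation: raw attention scores for past interactions are damped exponentially with their temporal distance before normalization. This is a simplification of the full mechanism in [31], which also uses context-aware distance measures; the decay rate theta below is a stand-in for AKT's learnable parameter.

```python
import math

def decayed_attention(scores, gaps, theta=0.5):
    """Softmax attention over past interactions in which each raw similarity
    score is damped by exp(-theta * temporal distance), so older interactions
    receive exponentially smaller weights (a simplified stand-in for AKT's
    monotonic attention mechanism)."""
    damped = [s * math.exp(-theta * g) for s, g in zip(scores, gaps)]
    m = max(damped)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in damped]
    total = sum(exps)
    return [e / total for e in exps]

# Equally similar past interactions at increasing temporal distance
# receive monotonically decreasing attention weights.
weights = decayed_attention([1.0, 1.0, 1.0], gaps=[0, 1, 2])
```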
Models that operate at the raw response level enable personalization at even finer levels, e.g., after each step as a learner solves an open-ended mathematical problem step by step, and enable personalized education systems to attend to learner difficulties in a more timely manner. Another consideration in effective learner evaluation is that assessment and performance prediction models must be tailored to different learning environments and platforms. For example, accurate prediction of students' future college performance based on their ongoing academic records is crucial for carrying out effective pedagogical interventions so that on-time and satisfactory graduation is ensured. However, foretelling student performance in completing degrees (e.g., college programs) is significantly different from that for in-course assessment and intelligent tutoring systems. In what follows, we describe the most important reasons.
• First, students differ tremendously in terms of backgrounds as well as study domains (majors, specializations), resulting in different selected courses. Even if the courses are similar, the sequences in which the students take them might differ significantly. Therefore, a key challenge for training an effective predictor is handling heterogeneous student data. In contrast, solving problems in intelligent tutoring systems often follows routine steps that are identical for all students. Similarly, predictions of students' performance in courses are often based on in-course assessments that are identical for all students.
• Second, although students often take several courses, not all of them are equally informative for predicting future performance. Utilizing the student's past performance in all completed courses not only increases complexity but also introduces noise into the prediction, thereby degrading the performance.
For instance, while it is meaningful to consider a student's grade in 'Linear Algebra' when predicting his/her grade in 'Linear Optimization', the student's grade in 'Chemistry Lab' may have much weaker predictive power. However, the course correlation is not always as obvious as in this example. Therefore, to enhance the accuracy of performance predictions, it is essential to discover the underlying correlations among courses.
• Third, predicting student performance in a degree program is not a one-time task; rather, it requires continuous tracking and updating as the student completes new courses over time. An important consideration is the following: the prediction shall be based not only on the most recent snapshot of the student's accomplishments but also on the evolution of the student's progress, which may contain valuable information for improving the prediction's accuracy. However, the complexity can easily explode, since even mathematically representing the evolution of student progress can be a daunting task. Treating past progress on an equal footing with current performance when predicting the future may not be a wise choice either, since old information tends to become outdated.
Finally, we would like to emphasize the following: similar to offline systems, in AI-powered personalized education, assessment is not limited to evaluating the performance of individual students on different tests within a single online education portal. Indeed, evaluation might be necessary not only for individuals but also for collections of students, as well as for other stakeholders such as educators, policy-makers, and the providers of online education. In particular, fair and precise comparison, analysis, and accreditation of online education portals, as well as of the degrees and certificates such portals provide, are crucial.
The reasons include the following: (i) distance education has grown into a broad industry in the past decade; (ii) the majority of learners rely on certificates of online classes as proof of having obtained the necessary knowledge and skills; (iii) online education is inherently international and crosses boundaries. Similar to improving the evaluation of students, AI and ML methods, together with big data analysis, can assist in the accreditation and comparison of online portals and the degrees and certificates they issue; this includes, e.g., comparing the average performance of students holding an online degree with that of students holding a traditional, yet accredited, degree. A detailed discussion of such topics involves several perspectives and is therefore out of the scope of this paper.
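The course-correlation idea discussed above can be sketched on a hypothetical grade matrix (the course names, grades, and the 0.5 correlation threshold are illustrative assumptions only): correlations between historical grades indicate which completed courses are informative predictors and which mainly add noise.

```python
import numpy as np

# Hypothetical grade matrix: rows are past students, columns are courses.
courses = ["Linear Algebra", "Linear Optimization", "Chemistry Lab"]
grades = np.array([
    [3.7, 3.5, 3.0],
    [2.1, 2.4, 2.8],
    [3.9, 3.8, 2.9],
    [2.8, 2.6, 3.1],
    [1.9, 2.0, 3.0],
])

# Pairwise Pearson correlations between course grade columns.
corr = np.corrcoef(grades, rowvar=False)

# Keep, as predictor features for the target course, only those prior
# courses whose correlation with the target exceeds a threshold.
target = courses.index("Linear Optimization")
predictors = [c for j, c in enumerate(courses)
              if j != target and abs(corr[j, target]) > 0.5]
```

On this toy data, 'Linear Algebra' is retained as a predictor for 'Linear Optimization' while 'Chemistry Lab' is dropped; a real system would learn such correlations from large historical records rather than a hand-picked threshold.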
IV. Life-long Learning
Life-long learning emphasizes holistic education and the fact that learning takes place on an ongoing basis, from our daily interactions with others and with the world around us in different contexts. These include not only schools but also homes and workplaces, among several others. Because of its ongoing nature, making foresighted learning plans is crucial for life-long learning to achieve the desired outcome.
In the school context, a specific challenge in developing a learning plan is course sequence recommendation in degree programs [21]. Recent studies show that the vast majority of college students in the United States do not complete college in the standard time. Moreover, compared to a decade ago, fewer college students today graduate in a timely manner. While several factors contribute to students taking longer to graduate, such as credit losses in transfer, uninformed choices due to low advisor-student ratios, and poor preparation for college, the inability of students to attend the required courses is among the leading causes. If students select courses myopically, without a clear plan, they may end up in a situation where required subsequent courses are offered (much) later, thereby (significantly) prolonging the time to graduation. Hence, to accelerate graduation, students shall select courses in a foresighted way, taking the course sequences (shaped by courses being mandatory, elective, or prerequisite) into account. Moreover, it is vital to observe the timing in which the school offers various courses. More importantly, since the number and variety (in terms of backgrounds, knowledge, and goals) of students are expanding rapidly, the same learning path is unlikely to best serve all students. Therefore, it is crucially important to tailor course sequences to students. To this end, it is necessary to learn from the performance of previous students in various courses/sequences in order to adaptively recommend course sequences for current students.
Obviously, this depends on the student's background and his/her completion status in the program, with the aim of maximizing any of a variety of objectives, including the time to graduation, grades, and the trade-off between the two. To make such plans, AI is a tool of great potential; however, designing AI technologies for personalized, foresighted, and adaptive course planning is challenging in several dimensions, as briefly described below.
• First, course sequence recommendation requires dealing with a large decision space that grows combinatorially with the number of courses.
• Second, there is a great deal of flexibility in course sequence recommendation, since multiple courses can be taken simultaneously, while it is also subject to many constraints due to prerequisites and availability.
• Third, any static course sequence is sub-optimal, since the knowledge, experience, and performance of a student develop and evolve in the process of learning.
• Last but not least, students vary tremendously in backgrounds, knowledge, and goals.
For example, in [21], we develop an automated course sequence recommendation system to address the aforementioned challenges. To reduce complexity and enable tractable solutions, we solve the problem in two steps, as illustrated in
Figure 6: (i) The first step corresponds to offline learning, in which a set of candidate recommendation policies is determined, using dynamic programming on an existing dataset of anonymized student records, to minimize the expected time to graduation or to maximize the on-time graduation probability; (ii) The second step corresponds to online learning, in which, for each new student, a suitable course sequence recommendation policy is selected depending on the student's background, using the knowledge learned from previous students. In other life-long learning contexts (e.g., the workplace), while similar challenges may still be present, new challenges are likely to emerge; hence, foresighted learning plans must be tailored to the specific context.
Recent research shows a significant gap between the lectures offered in schools and job requirements, especially in emerging disciplines like data science. Soft skills such as communication and teamwork are often even more important than typical technical skills [32]. Future research on life-long learning shall bridge this gap. Indeed, there is a systematic demand for the research community to identify and study the skills that significantly contribute to professional prospects, instead of maximizing achievement in schools. Educators can take advantage of the findings to adjust school curricula and educational activities to better prepare students for the future. The centerpiece of possible approaches is to fuse a student's school records with future employment outcomes, possibly tracked over a long period, as well as with other data sources such as course syllabi and job postings, to identify the crucial skills that extend from education to profession.
There is also a necessity for research in labor studies to conduct interviews with (i) employers, to understand their requirements; (ii) job seekers, to identify the skills they are keen to acquire; and (iii) training providers, to clarify the skills that can be taught part-time or on the job rather than through centralized educational programs, given workers' real-life constraints.

Fig. 6. Illustration of course sequence recommendation: an offline step constructs a policy database from the student database, and an online step selects a policy for a new student (based on background, course availability, and advisor input) and records completion outcomes such as time-to-graduation and GPA.
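Returning to the two-step design of [21], the offline step can be sketched under strong simplifications: a hypothetical five-course program with deterministic course completion, a fixed per-term course limit, and the number of terms (rather than the expected time to graduation over stochastic outcomes) as the objective.

```python
from functools import lru_cache
from itertools import combinations

# Hypothetical program: course -> prerequisites. Names, prerequisite
# structure, and the per-term limit are illustrative assumptions.
PREREQS = {
    "Calc I": set(),
    "Calc II": {"Calc I"},
    "Linear Algebra": {"Calc I"},
    "Optimization": {"Calc II", "Linear Algebra"},
    "Capstone": {"Optimization"},
}
MAX_PER_TERM = 2
ALL = frozenset(PREREQS)

@lru_cache(maxsize=None)
def min_terms(done: frozenset) -> int:
    """Minimum number of terms to finish the program from state `done`."""
    if done == ALL:
        return 0
    # Courses whose prerequisites are already satisfied.
    available = [c for c in sorted(ALL - done) if PREREQS[c] <= done]
    best = float("inf")
    # Try every feasible batch of courses for the next term.
    for k in range(1, min(MAX_PER_TERM, len(available)) + 1):
        for batch in combinations(available, k):
            best = min(best, 1 + min_terms(done | frozenset(batch)))
    return best
```

Here `min_terms(frozenset())` evaluates to 4, the length of the longest prerequisite chain; the online step of [21] would then select among such precomputed policies based on a new student's background.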
V. Incentives and Motivation
So far, one crucial aspect of personalized education has been largely left aside, namely motivation and incentive design. This is unfortunate, as these factors significantly contribute to learners' perseverance and engagement, and thereby to the overall achievement of students. As such, they affect not only individuals but also the entire society in terms of the efficiency of resource expenditure. In educational sciences, motivation is regarded as a concept that involves several learning-related features such as initiation, goal-orientation, intensity, persistence, and quality of behavior [9]. Therefore, as [33] describes, motivation is an unobservable dynamic process that is difficult to measure directly but is inferable from observations. Similar to other crucial factors of successful education, such as talent and interest, motivation originates from and is influenced by personal factors such as goals and beliefs. As such, it is reasonable to conclude that intelligent 'personalization' affects motivation to a large extent. Motivation can be intrinsic or triggered by external factors. As such, various features of personalized education, such as recommending a proper series of content or creating educational networks, might implicitly improve the learner's motivation by increasing the engagement level. Such efforts make the learning experience more pleasant, thereby improving the learner's satisfaction. This is, however, insufficient. It is imperative to integrate direct motivating methods into personalized education and the learning platform. To this end, in the following, we first describe a few frameworks that can accommodate motivation and its relevant concepts appropriately. More can be found in [33].
• Behavioral Economics:
Any personalized education platform shall be able to appropriately connect, interact, and interface with humans. Hence, proper operation significantly depends on various features of the members of the target group that shape their decision-making behavior. Indeed, a 'utility function' is the most seminal computational model for the interests of learners. For a rational decision-maker, the utility function is conventionally increasing, concave, and to be maximized. However, humans often demonstrate unusual patterns in their utility functions and decision-making for the following reasons: (i) humans make mistakes, often due to inaccurate beliefs and imprecise predictions; (ii) humans often act irrationally and based on heuristics; (iii) humans think and act in different manners as a result of their unique backgrounds, including personality and experiences [34]. Behavioral economics accommodates and formalizes such aspects; therefore, one can take advantage of behavioral economics for efficient incentive design and motivation in learning platforms [35].
• Self-Determination Theory:
This theory asserts that humans have an intrinsic urge to be self-autonomous, competent, and connected with respect to their environment [36]. While behavioral economics is appropriate for investigating the motivation that results from external rewards, self-determination theory views motivation from an internal perspective. Indeed, any environment, including a learning platform, that satisfies the aforementioned human needs awakens intrinsic motivation, rendering external triggers rather unnecessary. As such, promoting intrinsic motivation is significantly more effective than extrinsic motivation, as it is often associated with lower cost compared to material rewards and has a longer-lasting effect [33].
• Self-Efficacy Theory:
This concept corresponds to an individual's confidence in her capability of performing a specific task, for example, learning on an online learning platform or performing at a certain level [37]. Researchers show that humans constantly assess their self-efficacy, mainly based on information observed from the environment and on past experiences [37]. Similar to self-determination theory, self-efficacy concerns intrinsic motivation, implying that a feeling of efficaciousness triggers internal motivation in learners. Other relevant concepts include 'interest' and 'goal-orientation' [33].
The main challenge is to utilize AI and ML to motivate the learners of a personalized learning platform, based on the aforementioned theories that formalize and explain human behavior. To clarify this, consider the utility function of a learner in a personalized learning platform as an example [35]. The function quantifies the learner's well-being while using the platform and, consequently, her (future) engagement. Some learners exhibit hyperbolic preferences, overweighting the present so much that future rewards are largely ignored. Some learners show strong reactions even to non-monetary rewards. Some learners demonstrate reference-dependent preferences, implying that the utility is largely determined by its distance from a reference point, for example, a pre-defined goal or the average performance. By using ML and AI methods, the learning platform can take advantage of the available data and a learner's feedback to estimate the utility function of that learner, hence predicting her reaction to potential incentive and motivation triggers. Consequently, the platform can adjust and allocate rewards among the learners efficiently and fairly. As another example, consider self-determination theory.
Based on this theory, a sophisticated personalized learning platform guarantees choice, connectedness, and a feeling of competence for the learner. To this end, the design of recommendation tools based on AI and ML methods should allow for enough alternatives, at both micro and macro levels, to ensure autonomy. Moreover, the suggested learning content should be based on the learner's feedback and the results of accurate assessment, to avoid inducing a feeling of incompetence in the learner. Besides, promoting network formation, or establishing links between coherent learners, together with intensive interaction, results in connectedness. This is also in accordance with self-efficacy theory, in the sense that by providing appropriate feedback and suitable side-information, the platform increases a learner's positive belief in her ability to perform well on the platform.

VI. Building Learning Networks
A potential negative effect of personalized education, especially in an online environment, is the loss of peer interactions and of the sense of community that is usually present in traditional classrooms. Fortunately, the rise of online social networks seems to facilitate interaction and networking between teachers and learners, as well as the co-production of content both within and outside the classroom. Learning applications and pedagogy can also be built on top of online social networks to bridge formal and informal learning and to promote peer interactions on both curricular and extra-curricular topics. Moreover, various education-related social networks have been created to facilitate collaboration, to post/answer questions, and to share resources; however, a formal method to build these learning networks and a deep understanding of their effectiveness are absent.
The core of learning networks is peer interaction, which has important implications for personalized education when teaching resources are limited. For example, peer review serves as an effective and scalable method for assessment and evaluation when the number of students enrolled in a course far exceeds the number of teaching assistants. However, effective peer review in learning networks poses new challenges [38]. On the one hand, peer reviewers have different intrinsic capabilities, which are often unknown. On the other hand, peer reviewers can choose to exert different levels of effort (e.g., time and energy spent in reviewing), which is unobservable. Identifying unknown intrinsic capabilities corresponds to the adverse selection problem in game theory. A natural candidate for solving this problem is to use matching mechanisms, i.e., to assign reviewers to students. Existing works on matching mechanisms focus on one-shot peer interactions and design one-shot matching rules. However, their assumption does not hold in peer review systems, where the review quality depends crucially on the reviewers' effort.
Motivating reviewers to exert high effort corresponds to the moral hazard problem in game theory. One way to address this problem is to use social norms, in which each peer reviewer is assigned a rating that summarizes her past behavior, together with a recommended 'norm' that rewards reviewers with good ratings and punishes those with bad ratings. However, existing works on social norms assume that peer reviewers are homogeneous. This assumption does not hold in peer review systems, because different reviewers have different intrinsic capabilities. Because a peer reviewer's ultimate review quality is determined by both her intrinsic capabilities and her effort, designing effective peer review systems in learning networks becomes significantly more challenging due to the presence of both adverse selection and moral hazard. Therefore, new peer review system designs shall simultaneously solve both problems, so that peer reviewers find it in their self-interest to exert high effort and receive ratings that truly reflect their capabilities.
Another primary function of learning networks is to foster the co-production and sharing of learning content. Building such learning networks is vastly different from building traditional networks such as computer networks and transportation networks, as in learning networks, individual learners create and maintain the links. Because links permit the acquisition and dissemination of learning content, it is theoretically intriguing and practically valuable to have a deeper understanding of the networks that are more likely to be formed by self-interested learners. Game theory is a useful tool for formulating and understanding the strategic behavior of learners. The formulation must capture the heterogeneity of learners in terms of goals, capabilities, costs, and their self-interested nature [39]; that is, each learner intends to maximize her benefit from content co-production and sharing minus whatever cost she pays to establish links.
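This benefit-minus-cost objective can be made concrete in a toy setting (the link cost, per-peer benefit, and the two candidate topologies below are illustrative assumptions only):

```python
# Toy content-sharing network: a learner benefits from every peer she can
# reach (content flows along links) and pays a fixed cost per link she
# personally maintains.
LINK_COST = 0.4
BENEFIT_PER_PEER = 1.0

def utility(i, links_maintained, edges, n):
    """Benefit from reachable peers minus cost of links learner i maintains."""
    adj = {v: set() for v in range(n)}
    for u, v in edges:           # links are undirected for content flow
        adj[u].add(v)
        adj[v].add(u)
    seen, frontier = {i}, [i]    # breadth-first search from learner i
    while frontier:
        frontier = [w for v in frontier for w in adj[v] if w not in seen]
        seen.update(frontier)
    reachable = len(seen) - 1
    return BENEFIT_PER_PEER * reachable - LINK_COST * len(links_maintained[i])

n = 5
# Star (one core agent 0): each periphery agent maintains one link to the core.
star_edges = [(0, k) for k in range(1, n)]
star_links = {0: [], 1: [(1, 0)], 2: [(2, 0)], 3: [(3, 0)], 4: [(4, 0)]}
# Ring: every agent maintains exactly one link.
ring_edges = [(k, (k + 1) % n) for k in range(n)]
ring_links = {k: [(k, (k + 1) % n)] for k in range(n)}

star_total = sum(utility(i, star_links, star_edges, n) for i in range(n))
ring_total = sum(utility(i, ring_links, ring_edges, n) for i in range(n))
```

With these numbers, the star, i.e., a minimal core-periphery structure with one core agent, yields higher total utility than the ring, because it achieves the same reachability with one fewer costly link.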
Our prior work [40] studies the endogenous formation of networks by strategic, self-interested agents who benefit from producing and disseminating information. The results show that the typical network structure emerging in equilibrium displays a core-periphery structure, with a small number of agents at the core of the network and a larger number of agents at the periphery. Furthermore, we established that the typical emerging networks are minimally connected and have short network diameters, independent of the size of the network. In other words, the theoretical results show that small diameters tend to make information dissemination efficient, while minimal connectivity minimizes the total cost of constructing the network. These results are consistent with the outcomes of numerous empirical investigations. Such theoretical analysis and tools are essential for guiding the building of learning networks. Also, based on this analysis, one can create protocols that motivate selfish learners to take actions that promote the system-wide utility.
Future research in learning networks hinges on understanding the knowledge flow between students via peer interaction. Such an understanding enables educators to effectively modulate peer interactions and to encourage the interactions that promote peer learning. Peer learning is especially valuable as education extends to more diverse settings, such as remote online learning during the COVID-19 pandemic. In such settings, it is hard for instructors to moderate learning activities remotely; hence, peer learning through online course discussion forums becomes essential. Therefore, it is vital to understand the interaction tendencies and behaviors of students in these discussion forums [41], to understand the flow of knowledge by combining discussion forum activities with grades, to identify the factors that enhance knowledge flow, and to design automated strategies to moderate student activities when necessary.

VII. Diversity, Fairness, and Biases
Experimental studies show that AI-driven personalization, such as student assessment, feedback, and content recommendation, improves overall learning outcomes; nonetheless, certain student subgroups may benefit more than others due to biases that exist in the training data [42]. This imbalance jeopardizes students who are already under-served in particular, since they often have less access to advanced, digitized educational systems and are less frequently represented in datasets collected by these systems [43]. Therefore, it is essential to develop AI tools that promote fairness among learners with different backgrounds, thereby making education more inclusive for the next generations.
To mitigate biases and to promote fairness and equity in AI, researchers currently pay significant attention to developing approaches that promote fairness, primarily in the context of predictive algorithms.
• The first major research problem is how to properly define fairness; see [44] and the references therein for an overview. Many definitions of fairness exist, including individual fairness, which requires that users with similar feature values be treated similarly; parity in the predicted probability of each outcome across user groups (drawn using sensitive attributes); parity in the predicted probability of each outcome given actual outcomes and regardless of sensitive attributes; and counterfactual fairness, which requires that the predicted outcome for each user remain mostly unchanged if the sensitive attribute changes.
• The second major research problem is to develop methods that enforce fairness in predictive algorithms. Existing approaches include preprocessing the data to select only fair features as input to the algorithms, and post-processing the output of the algorithms to balance across user groups. The most promising approach is to impose regularizers and constraints while training predictive algorithms.
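The regularizer-based approach can be sketched as follows; this is a minimal illustration on synthetic data (the data-generating process, penalty weight, and learning rate are all assumptions), penalizing the squared demographic-parity gap during logistic-regression training:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training data: features X, binary outcome y, binary group g.
n = 400
g = rng.integers(0, 2, n)
X = np.column_stack([rng.normal(g, 1.0, n), rng.normal(0, 1.0, n)])
y = (X[:, 0] + 0.5 * rng.normal(size=n) > 0.5).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(lam, steps=2000, lr=0.1):
    """Logistic regression with a demographic-parity regularizer of weight lam.
    Returns the final gap between the groups' mean predicted probabilities."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = sigmoid(X @ w)
        grad_bce = X.T @ (p - y) / n
        # Penalty: squared gap between the groups' mean predictions.
        gap = p[g == 0].mean() - p[g == 1].mean()
        s = p * (1 - p)  # derivative of sigmoid w.r.t. its input
        dgap = (X[g == 0] * s[g == 0, None]).mean(0) - \
               (X[g == 1] * s[g == 1, None]).mean(0)
        w -= lr * (grad_bce + lam * 2 * gap * dgap)
    p = sigmoid(X @ w)
    return abs(p[g == 0].mean() - p[g == 1].mean())

gap_plain = train(lam=0.0)
gap_fair = train(lam=5.0)
```

Increasing `lam` trades some classification accuracy for a smaller parity gap, which is exactly the trade-off discussed next.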
These methods achieve better fairness at the expense of sacrificing some classification accuracy; however, they have been empirically shown to obtain better trade-offs between fairness and accuracy than other fairness-promoting methods.
Promoting fairness and equity is a crucial necessity in education that requires a comprehensive approach: we need not only to design fair personalization algorithms but also to develop systematic principles and guidelines for their application in practice; in other words, we need a set of tools to regulate the use of AI algorithms.
Finally, despite great promise, AI-driven personalization in education can also bring risks that have to be closely monitored and controlled. Recently, there have been calls for a Food and Drug Administration (FDA)-type framework for other AI applications such as facial recognition [45]. It is essential to establish a similar ecosystem in education, with a set of legislation and regulations around issues of data ownership, sharing, continuous performance monitoring, and validation, to regulate every step of the process: from ensuring the diversity and quality of the collected data, to developing algorithms with performance guarantees across different educational settings, to identifying misuse and implementing fail-safe mechanisms.

VIII. COVID-19 and AI-Enabled Personalized Education
Among its several other adverse effects, the COVID-19 pandemic has disrupted or interrupted the functionality of conventional education systems around the globe. Not surprisingly, students perceive the bitterness of this adverse effect to various degrees, depending on several factors such as country/region, family status, and individual characteristics. The complications vary over a large spectrum and include reduced learning ability, depression, loss of concentration, and a decline in physical fitness. The complications mainly arise from spending less or no time at school, where students receive educational materials and support in learning, interact with their peers and teachers, develop incentives, and are evaluated. Besides, many students cannot take full advantage of replacements such as online materials, e.g., in the absence of an appropriate technological device, a reliable internet connection, or a suitable learning environment at home. Given its importance, the impact of COVID-19 on education has attracted a great deal of attention. For example, in [46], the authors describe the influence of the pandemic-triggered growth in online learning on students' performance and equity. Another example is [47], which also provides suggestions for policy-makers to compensate for the pandemic's negative educational consequences. Moreover, some research works, such as [48], study domain-specific educational effects of the pandemic and evaluate the available solutions.
Personalized and distance education have already been trending in the past decade; still, COVID-19 has urged both the public and private sectors to rapidly increase investment in research and development in this area for individual and/or social benefit. For example, the pandemic has boosted the usage of online learning tools for signal processing education, especially at the undergraduate level. These include, for example, web-based laboratories for digital signal processing [49] and online machine learning education modules [50]. While it is essential to carefully study such a tremendous push towards revolutionizing education from several perspectives, within the scope of our paper, we confine our attention to the role and influence of AI and ML.
As described previously, AI and ML have great potential to enhance online education in different ways, e.g., by improving the quality of learning materials, enabling fairness and diversity, generating proper tests, and allowing knowledge networks to be built. That is a universal aspect of applying AI and ML methods in distance and asynchronous education, regardless of the current pandemic; nevertheless, such methods can additionally assist in accelerating the rebuilding of education systems and in mitigating the pandemic's detrimental effects. For example, by applying ML methods to the available data, policy-makers can classify students based on their exposure to the educational effects of a pandemic; using such a classification, one can allocate resources efficiently while satisfying fairness constraints. As another example, by taking advantage of ML methods, one can optimize school closure plans based on different features such as neighborhood, size, grade, and the like.
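The classification-and-allocation idea can be sketched with a simple clustering step on synthetic data (the features, group structure, and all numbers below are illustrative assumptions only):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical per-student features quantifying pandemic exposure:
# [hours of lost in-person instruction per week, home-connectivity score].
mild = rng.normal([2.0, 0.9], 0.3, size=(30, 2))    # mildly affected
severe = rng.normal([12.0, 0.2], 0.3, size=(30, 2))  # severely affected
students = np.vstack([mild, severe])

def kmeans(data, k, iters=50):
    """Plain k-means: assign to nearest centroid, then recompute centroids."""
    centroids = data[rng.choice(len(data), k, replace=False)]
    for _ in range(iters):
        dists = np.linalg.norm(data[:, None] - centroids[None], axis=2)
        labels = dists.argmin(axis=1)
        # Keep a centroid in place if its cluster happens to be empty.
        centroids = np.array([data[labels == j].mean(axis=0)
                              if np.any(labels == j) else centroids[j]
                              for j in range(k)])
    return labels, centroids

labels, centroids = kmeans(students, k=2)
# Resources (devices, tutoring hours) could then be allocated per cluster,
# prioritizing the cluster with higher average lost instruction time.
priority = centroids[:, 0].argmax()
```

On such well-separated synthetic groups, the two clusters recover the mildly and severely affected students; a real deployment would need validated exposure features and explicit fairness constraints on the subsequent allocation.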
IX. Summary and Conclusion
Enabling 'personalized education' is one of the most precious merits of AI with respect to education. This paradigm significantly improves the quality of education in several dimensions by adapting to the distinct characteristics and expectations of each learner, such as personality, talent, objectives, and background. Besides, online education is of the utmost value under abnormal circumstances such as the COVID-19 outbreak or natural disasters. Indeed, conventional education requires significantly more resources than the online format concerning educational space, scheduling, and human resources, which makes it prone to failure under even a small shift in conditions. As such, emerging alternatives are inevitable. Despite the potential for a revolutionary transformation from traditional education to modern concepts, personalized education is associated with several challenges. We discussed such challenges, provided a brief overview of state-of-the-art research, and proposed some solutions.
Table II summarizes some of the future research directions.
References [1] Y. Zhong, C. Jiang, W. Xu, and J. Li, “Discourse level factors for sentence deletion in text simplification,” in
Proc. AAAI Conference on Artificial Intelligence , Feb. 2020.[2] M. Sachan, A. Dubey, E. H. Hovy, T. M. Mitchell, D. Roth, and E. P. Xing, “Discourse ineffectiveness of crowd-sourcing on-demand tutoring from teachers in online learning platforms multimedia: A case study in extractinggeometry knowledge from textbooks,”
Computational Linguistics , vol. 45, no. 4, pp. 627–665, 2020. TABLE II
Some Research Directions for AI-based Personalized Education
Challenge Description Some References
Content Production/Recommendation. Personalized and profession-oriented production,recommendation, and maintenance of contents [51], [23], [22], [20].Evaluation and Assessment. Performance comparison in personalized education,testing without information loss, accreditation [31], [30], [16], [17], [18].Lifelong Learning. Continuous education and additional qualificationfor improvement and pivots in profession [21].Incentives. Internal and external motivation for learning,gamification, rewarding, inducing confidence [33], [35].Networking and Interaction. Inducing learning networks, forming coalitionsfor efficient learning, imitating teacher feedback [38], [41].Diversity and Fairness. Equal access to quality online education,avoiding biases in platform development [44], [42]. [3] I. Manickam, A. S. Lan, and R. G. Baraniuk, “Contextual multi-armed bandit algorithms for personalizedlearning action selection,” in , 2017, pp. 6344–6348.[4] K.I. Ghauth and N.A. Abdullah, “Learning materials recommendation using good learners’ ratings and content-based filtering,”
Education Tech Research and Development , , no. 58, pp. 711–727, 2010.[5] C. Magno, “Demonstrating the difference between classical test theory and item response theory using derivedtest data,”
CSN: General Cognitive Social Science (Topic) , vol. 1, 06 2009.[6] Z. Pardos and N. Heffernan, “Modeling individualization in a Bayesian networks implementation of knowledgetracing,” in
Proc. International Conference on User Modeling, Adaptation, and Personalization , June 2010, pp.255–266.[7] M. Sharples, “The design of personal mobile technologies for lifelong learning,”
Computers and Education , vol.34, no. 3, pp. 177 – 193, 2000.[8] G. I. Parisi, R. Kemker, J. L. Part, C. Kanan, and S. Wermter, “Continual lifelong learning with neural networks:A review,”
Neural Networks , vol. 113, pp. 54 – 71, 2019.[9] W. Grove and L. Hadsell,
Open Learning Environments , chapter Incentives and Student Learning, pp. 1511–1517,Springer, 01 2012.[10] P. Buckley and E. Doyle, “Gamification and student motivation,”
Interactive Learning Environments , vol. 24,no. 6, pp. 1162–1175, 2016.[11] K.-C. Chen, S.-J. Jang, and R. M. Branch, “Autonomy, affiliation, and ability: Relative salience of factors thatinfluence online learner motivation and learning outcomes,”
Knowledge Management and E-Learning , vol. 2, no.1, pp. 1162–1175, 2010.[12] A. P. Rovai, “Building and sustaining community in asynchronous learning networks,”
The Internet and HigherEducation , vol. 3, no. 4, pp. 285 – 297, 2000.[13] P. Vesely, L. Bloom, and J. Sherlock, “Key elements of building online community: Comparing faculty andstudent perceptions,”
MERLOT Journal of Online Learning and Teaching , vol. 3, 01 2007.[14] J. Gardner, C. Brooks, and R. Baker, “Evaluating the fairness of predictive student models through slicinganalysis,” in
Proceedings of the 9th International Conference on Learning Analytics & Knowledge , 2019, pp.225–234.[15] S. Yao and B. Huang, “Beyond parity: Fairness objectives for collaborative filtering,” in
Advances in NeuralInformation Processing Systems , 2017, pp. 2921–2930.[16] Z. Wang, A. S. Lan, W. Nie, A. E. Waters, P. J. Grimaldi, and R. G. Baraniuk, “QG-net: A data-driven question generation model for educational content,” in Proc. ACM Conference on Learning at Scale , June 2018, pp. 1–10.[17] M. Yasunaga and J. D Lafferty, “TopicEq: A joint topic and mathematical equation model for scientific texts,”in
Proc. AAAI Conference on Artificial Intelligence, 2019, pp. 7394–7401.
[18] T. Patikorn and N. T. Heffernan, “Effectiveness of crowd-sourcing on-demand tutoring from teachers in online learning platforms,” in Proc. ACM Conference on Learning at Scale, Aug. 2020, pp. 1–10.
[19] B. Zylich, A. Viola, B. Toggerson, L. Al-Hariri, and A. S. Lan, “Exploring automated question answering methods for teaching assistance,” in Proc. International Conference on Artificial Intelligence in Education (AIED), July 2020.
[20] A. S. Lan and R. G. Baraniuk, “A contextual bandits framework for personalized learning action selection,” in Proc. International Conference on Educational Data Mining, June 2016, pp. 424–429.
[21] J. Xu, T. Xing, and M. van der Schaar, “Personalized course sequence recommendations,” IEEE Transactions on Signal Processing
Proceedings of the National Academy of Sciences, vol. 116, no. 10, pp. 3988–3993, 2019.
[24] C. Tekin, J. Braun, and M. van der Schaar, “eTutor: Online learning for personalized education,” in , 2015, pp. 5545–5549.
[25] F. Lord, Applications of Item Response Theory to Practical Testing Problems, Erlbaum Associates, 1980.
[26] M. D. Reckase, Multidimensional Item Response Theory, Springer, 2009.
[27] M. Yudelson, K. Koedinger, and G. Gordon, “Individualized Bayesian knowledge tracing models,” in Proc. International Conference on Artificial Intelligence in Education, July 2013, pp. 171–180.
[28] P. Pavlik Jr, H. Cen, and K. Koedinger, “Performance factors analysis – A new alternative to knowledge tracing,” in Proc. International Conference on Artificial Intelligence in Education, June 2009, pp. 531–538.
[29] C. Piech, J. Bassen, J. Huang, S. Ganguli, M. Sahami, L. J. Guibas, and J. Sohl-Dickstein, “Deep knowledge tracing,” in Proc. Conference on Advances in Neural Information Processing Systems, Dec. 2015, pp. 505–513.
[30] M. Khajah, R. Lindsey, and M. Mozer, “How deep is knowledge tracing?,” in Proc. International Conference on Educational Data Mining, July 2016, pp. 94–101.
[31] A. Ghosh, N. T. Heffernan, and A. S. Lan, “Context-aware attentive knowledge tracing,” in Proc. ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Aug. 2020.
[32] K. Börner, O. Scrivner, M. Gallant, S. Ma, X. Liu, K. Chewning, L. Wu, and J. A. Evans, “Skill discrepancies between research, education, and jobs reveal the critical need to supply soft skills for the data economy,” Proceedings of the National Academy of Sciences, vol. 115, no. 50, pp. 12630–12637, 2018.
[33] M. Hartnett, Motivation in Online Education, chapter The Importance of Motivation in Online Learning, pp. 5–32, Springer, 2016.
[34] S. Maghsudi and M. Davy, “Computational models of human decision-making with application to the Internet of Everything,” IEEE Wireless Communications, pp. 1–8, 2020.
[35] S. D. Levitt, J. A. List, S. Neckermann, and S. Sadoff, “The behavioralist goes to school: Leveraging behavioral economics to improve educational performance,” American Economic Journal: Economic Policy, vol. 8, no. 4, pp. 183–219, November 2016.
[36] E. Deci and R. Ryan, “Motivation, personality, and development within embedded social contexts: An overview of self-determination theory,” The Oxford Handbook of Human Motivation, 2012.
[37] A. Bandura,
Self-efficacy: The Exercise of Control, Freeman, 1997.
[38] Y. Xiao, F. Dörfler, and M. van der Schaar, “Incentive design in peer review: Rating and repeated endogenous matching,” IEEE Transactions on Network Science and Engineering, vol. 6, no. 4, pp. 898–908, 2018.
[39] S. Maghsudi and M. van der Schaar, “Distributed task management in cyber-physical systems: How to cooperate under uncertainty?,” IEEE Transactions on Cognitive Communications and Networking, vol. 5, no. 1, pp. 165–180, 2019.
[40] Y. Zhang and M. van der Schaar, “Strategic networks: Information dissemination and link formation among self-interested agents,” IEEE Journal on Selected Areas in Communications, vol. 31, no. 6, pp. 1115–1123, 2013.
[41] A. S. Lan, J. Spencer, Z. Chen, C. Brinton, and M. Chiang, “Personalized thread recommendation for MOOC discussion forums,” in Proc. European Conf. Mach. Learn. and Principles Knowl. Discov. Databases, Sep. 2018.
[42] J. Reich and M. Ito, “From good intentions to real outcomes: Equity by design in learning technologies,” Irvine, CA: Digital Media and Learning Research Hub, 2017.
[43] S. Doroudi and E. Brunskill, “Fairer but not fair enough: On the equitability of knowledge tracing,” in Proceedings of the 9th International Conference on Learning Analytics & Knowledge, ACM, 2019, pp. 335–339.
[44] P. Gajane and M. Pechenizkiy, “On formalizing fairness in prediction with machine learning,” arXiv preprint arXiv:1710.03184, 2017.
[45] E. Learned-Miller, V. Ordonez, J. Morgenstern, and J. Buolamwini, “Facial recognition technologies in the wild: A call for a federal office,” online: https://people.cs.umass.edu/~elm/papers/FRTintheWild.pdf.
[46] E. Garcia and E. Weiss, “COVID-19 and student performance, equity, and U.S. education policy,” 2020.
[47] G. Di Pietro, F. Biagi, P. D. Mota Da Costa, Z. Karpinski, and J. Mazza, “The likely impact of COVID-19 on education: Reflections based on the existing literature and recent international datasets,” Publications Office of the European Union (online), 2020.
[48] Z. I. Almarzooq, M. Lopes, and A. Kochar, “Virtual learning during the COVID-19 pandemic: A disruptive technology in graduate medical education,” Journal of the American College of Cardiology, vol. 75, no. 20, pp. 2635–2638, 2020.
[49] A. Dixit, U. S. Shanthamallu, A. Spanias, V. Berisha, and M. Banavar, “Online machine learning experiments in HTML5,” in IEEE Frontiers in Education Conference, 2018, pp. 1–5.
[50] U. S. Shanthamallu, S. Rao, A. Dixit, V. S. Narayanaswamy, J. Fan, and A. Spanias, “Introducing machine learning in undergraduate DSP classes,” in International Conference on Acoustics, Speech and Signal Processing, 2019, pp. 7655–7659.
[51] N. J. Cepeda, N. Coburn, D. Rohrer, J. T. Wixted, M. C. Mozer, and H. Pashler, “Optimizing distributed practice: Theoretical analysis and practical implications,”