KnowledgeCheckR: Intelligent Techniques for Counteracting Forgetting
Martin Stettinger, Trang Tran, Ingo Pribik, Gerhard Leitner, Alexander Felfernig, Ralph Samer, Muesluem Atas, Manfred Wundara
Abstract.
Existing e-learning environments primarily focus on providing intuitive learning contents and on recommending learning units in a personalized fashion. The major focus of the KnowledgeCheckR environment is to take into account forgetting processes which immediately start after a learning unit has been completed. In this context, techniques are needed that are able to predict which learning units are the most relevant ones to be repeated in future learning sessions. In this paper, we provide an overview of the recommendation approaches integrated in KnowledgeCheckR. Examples thereof are utility-based recommendation that helps to identify learning contents to be repeated in the future, collaborative filtering approaches that help to implement session-based recommendation, and content-based recommendation that supports intelligent question answering. In order to show the applicability of the presented techniques, we provide an overview of the results of empirical studies that have been conducted in real-world scenarios.
Affiliations: SelectionArts ([email protected]); Graz University of Technology ({ttran, afelfernig, rsamer, muatas}@ist.tugraz.at); Flex Austria (ingo.pribik@flex.com); University of Klagenfurt ([email protected]); Information Technology Department, Villach, Austria ([email protected]).

The concept of inverted learning is gaining momentum in different types of educational settings [13]. The one-way delivery of information is replaced by face-to-face interaction with work in small groups and a clarification focus. The major focus of the KnowledgeCheckR environment is to provide intelligent techniques that support inverted learning scenarios. On the basis of recommendation functionalities, the system is able to propose learning content and questions that enable students (learners) to better focus on topics where they need to catch up. Also, teachers gain better insights into the performance of students and can thus immediately adapt their focus in onsite teaching sessions. KnowledgeCheckR is based on recommendation approaches that support learning-related tasks such as scheduling repetition cycles, recommending questions and knowledge units, and supporting Q&A scenarios. In this paper, we provide an overview of KnowledgeCheckR recommendation approaches and report the results of empirical studies that show in which way recommenders can improve the quality of learning.

A major focus of KnowledgeCheckR is the provision of techniques that help to counteract forgetting [10]. Our motivation to develop such techniques is based on an empirical study conducted with N=70 companies in Austria from diverse domains such as financial services, software development, production, transport, telecommunications, and higher education. Participants of the study ranged from lower and medium up to higher management. On average, the study participants reported spending a considerable amount of time answering questions of colleagues, although these colleagues should already be able to answer the questions, since they had attended topic-related educational programs. The major related knowledge categories are summarized in Table 1.

We experienced similar results in university contexts where, for example, PhD-project-relevant knowledge has to be "manually" transferred a couple of times to assure that the knowledge is available when needed. Examples thereof are issues such as how to write papers, what the correct formulations are to explain an example, and how to perform a logical proof. In our study, 91.43% of the participants agreed that mechanisms that help to counteract the forgetting of company-relevant knowledge are extremely important, and 97.24% mentioned that personalized knowledge transfer for counteracting forgetting is important for the company. These were major motivations that led to the development of KnowledgeCheckR.

category            example                               support
products            what are the new product features?    43.48%
norms               how to communicate with colleagues?   33.33%
laws                general data protection regulation    36.23%
internal processes  how to behave during an evacuation?   34.78%
business processes  production, customer complaints       65.22%

Table 1. Major knowledge categories which could profit from techniques for counteracting forgetting.
The major contributions of this paper are the following: we show how recommendation technologies can be applied to (1) counteract forgetting processes, (2) recommend relevant learning contents, and (3) support Q&A scenarios in an intelligent fashion. Furthermore, we report initial results of empirical studies that show the applicability and business relevance of our approach. An overview of existing industrial and university-level deployments of KnowledgeCheckR is provided in Table 2.

domain    description

Table 2. Existing real-world deployments of KnowledgeCheckR.

The remainder of this paper is organized as follows. In Section 2, we provide an overview of KnowledgeCheckR recommendation technologies. Thereafter, in Section 3, we provide examples of the system user interface and discuss the provided functionalities. In Section 4, we summarize the results of user studies that show the applicability and business relevance of the KnowledgeCheckR environment. In Section 5, we provide an overview of related work. We conclude the paper with a discussion of future work in Section 6.

KnowledgeCheckR recommendation approaches support different goals which can be summarized as (1) recommending question sequences [17] (following the paradigm of test-enhanced learning [18]), (2) recommending questions for repetition purposes in order to be able to counteract forgetting [10], and (3) recommending further learning units that might be of interest for the user [16]. In KnowledgeCheckR, such scenarios are supported by recommendation techniques. Questions are recommended (1) for learning purposes when users start to engage in a learning process and (2) for repetition purposes after initial learning has been completed. The former is supported by session-based recommendation that guides a user through a learning process with increasing question complexity, the latter by a utility-based recommendation approach that identifies questions with a higher probability of already being forgotten.
Session-based Recommendation. In many application scenarios, users of KnowledgeCheckR prefer to stay anonymous, especially when using the system for the first time. In such scenarios, not much information regarding the knowledge level and domain-specific experiences of a user is available. In KnowledgeCheckR, a session-based recommendation approach [20] is provided where already completed similar (nearest neighbor) sessions (sessions that show a similar user interaction behavior) are applied to recommend the next questions to the current (anonymous) user. In KnowledgeCheckR, the session-based approach is based on collaborative filtering [4]. A simplified example of the approach is depicted in Table 3, where sequences of questions answered by other users are stored in a log. The underlying idea is to find sequences (rankings of questions) which are easy to complete, where a question ranking is determined on the basis of the question selection behavior of one or a set of nearest neighbors. This goal can be achieved with collaborative filtering, since (implicit) dependencies between questions (e.g., question x is a precondition of y) can be taken into account by learning from similar sessions.

In the example depicted in Table 3, the user in the current session s_c has already successfully answered the questions {q1, q2, q3} in the order [q1, q2, q3] but did not answer the questions {q4, q5}. In this context, KnowledgeCheckR tries to figure out the ordering (ranking) in which the unanswered questions should be presented to the user. Following a session-based recommendation approach, the system identifies the n nearest neighbors and derives a question sequence that might be of relevance for the user.

In KnowledgeCheckR, the most similar sessions are used to predict questions of potential relevance to the current (anonymous) user. First, the similarity between the session s_c (the current session) and a session s_a can be determined on the basis of Formula 1, where correct(q_i, s_x) indicates whether question q_i has been correctly answered in session s_x and Q denotes the complete set of questions in a specific KnowledgeCheckR application. In our example, sim(s_c, s1) = 1.0 since there is a complete overlap in terms of the correct questions already answered in s_c.

sim(s_a, s_c) = |{q_i ∈ Q : correct(q_i, s_a) ∧ correct(q_i, s_c)}| / |{q_i ∈ Q : correct(q_i, s_c)}|   (1)

Formula 2 helps to determine an overall evaluation of question q in the context of the current session s_c with regard to its relevance for the user. In this context, r(q, s_i) denotes the ranking of question q in the session s_i, where SNN denotes the n nearest neighbor sessions of session s_c (the current session). Assuming SNN = {s1} (for the purpose of our example, we follow a 1-nearest-neighbor approach), the overall evaluations of the two up-to-now unanswered questions would be eval(q5, s_c) = 5 and eval(q4, s_c) = 4, where r(q5, s1) = 5 and r(q4, s1) = 4.

eval(q, s_c) = Σ_{s_i ∈ SNN, s_i ≠ s_c} r(q, s_i) × sim(s_i, s_c) / |SNN|   (2)

Finally, Formula 3 helps to determine a prediction for the ranking of question q in the context of session s_c. The rank of the last question answered within the scope of session s_c is 3, i.e., currentqrank(s_c) = 3. Furthermore, rank(eval(q4, s_c)) = 1 and rank(eval(q5, s_c)) = 2. As a consequence, pred(q5, s_c) = 3 + 2 and pred(q4, s_c) = 3 + 1, which results in the recommendation of q4 as the next question to be posed to the user in s_c. If a user provides a wrong answer within the scope of a learning session, the corresponding question is dropped from the list of recommended questions for a specific time period (the default setting is 20 minutes).

pred(q, s_c) = currentqrank(s_c) + rank(eval(q, s_c))   (3)

session  q1  q2  q3  q4  q5
s1        1   2   3   4   5
s2        …   …   …   …   …
s3        …   …   …   …   …
s4        …   …   …   …   …
s_c       1   2   3   ?   ?

Table 3. A simple interaction log in session-based recommendation. The table entries represent session-specific orderings of posed questions q_i; question marks represent still unknown orderings.

In our simplified example, the nearest neighbor of session s_c is session s1, since the order in which questions have been answered by the users is the same. The ordering of the next questions that will be recommended by the system is [q4, q5]. This session-based collaborative approach is used to support the ramp-up in situations where users interact with the learning application maybe for the first time and prefer to apply KnowledgeCheckR in an anonymized fashion. After the knowledge level of individual users becomes more transparent, KnowledgeCheckR is able to switch to a utility-based recommendation approach where the utility of individual questions is estimated depending on the time intervals since questions have been answered correctly the last time.
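The session-based prediction step (Formulas 1-3) can be sketched as follows; this is a minimal illustration with our own function names (sim_sessions, evaluate, predict_rank) and session encoding, not the KnowledgeCheckR implementation itself:

```python
# Sketch of session-based recommendation (Formulas 1-3), assuming each session
# is logged as a dict {question: rank of the correct answer}; names are illustrative.

def sim_sessions(s_a, s_c):
    """Formula 1: overlap of correctly answered questions, normalized by the
    questions already answered correctly in the current session s_c."""
    return len(set(s_a) & set(s_c)) / len(s_c)

def evaluate(q, s_c, snn):
    """Formula 2: similarity-weighted average rank of q over the neighbor sessions."""
    return sum(s_i[q] * sim_sessions(s_i, s_c) for s_i in snn if q in s_i) / len(snn)

def predict_rank(unanswered, s_c, snn):
    """Formula 3: predicted rank = rank of last answered question + rank of eval(q)."""
    current_rank = max(s_c.values())
    by_eval = sorted(unanswered, key=lambda q: evaluate(q, s_c, snn))
    return {q: current_rank + i + 1 for i, q in enumerate(by_eval)}

# Worked example from Table 3: s1 is the single nearest neighbor of s_c.
s1 = {"q1": 1, "q2": 2, "q3": 3, "q4": 4, "q5": 5}
s_c = {"q1": 1, "q2": 2, "q3": 3}
pred = predict_rank(["q4", "q5"], s_c, [s1])
print(pred)  # {'q4': 4, 'q5': 5} -> q4 is posed next
```

Running the sketch on the Table 3 data reproduces the example: sim(s_c, s1) = 1.0, eval(q4) = 4, eval(q5) = 5, and q4 receives the lowest predicted rank.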
Utility-based Recommendation. In KnowledgeCheckR, utility-based recommendation [8] is applied to implement functions that help to counteract forgetting. The underlying idea is that answering the same question repeatedly within specific time intervals helps to consolidate the learning material [10]. Furthermore, utility-based recommendation determines a ranking where the most relevant repetitions are presented first. Users under time pressure can thus focus on the most relevant topics.

The functions used to determine questions of relevance are the following. The relevance (rel) of a question q for a user u is defined as the complement of the share of correct answers compared to the number of answers to question q. This factor is weighted by a time aspect, i.e., the more days have already passed since the last time the question q has been answered by user u (dayssince(q, u)), the higher the corresponding relevance, since the probability becomes higher that a user is not able to answer the question correctly. In this context, daystoforget(q) represents the assumption that after x days the correct answer will have been forgotten. This value can be pre-specified when defining a question or approximated based on historical data. Equation 4 represents a basic way of ranking questions.

rel(q, u) = (1 − correctans(q, u) / totalans(q, u)) × dayssince(q, u) / daystoforget(q)   (4)

Equation 5 is an extension of Equation 4 which additionally takes into account the aspects of importance and complexity of a question (see Equations 6 and 7). This means that the higher the importance of a question and the lower the complexity of a question, the higher the probability that this question will be recommended to the current user. In this context, complexity and importance can be estimated on the basis of data related to the global share of correct answers to a specific question q and the average feedback of users regarding the importance level of question q.
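Equation 4 can be sketched as follows; the function name and the example values are illustrative assumptions, not taken from the KnowledgeCheckR code base:

```python
# Minimal sketch of the basic utility-based relevance score (Equation 4);
# function name and example values are illustrative.

def relevance(correct_answers, total_answers, days_since, days_to_forget):
    """Share of wrong answers by user u for question q, weighted by how far
    the last answer lies beyond the assumed forgetting horizon of q."""
    wrong_share = 1.0 - correct_answers / total_answers
    return wrong_share * (days_since / days_to_forget)

# A question answered correctly 3 out of 4 times, last answered 10 days ago,
# with an assumed forgetting horizon of 5 days:
print(relevance(3, 4, 10, 5))  # 0.5
```

Note that a question always answered correctly has relevance 0 regardless of elapsed time; in practice, questions with higher scores are ranked first for repetition.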
The higher the importance and the lower the complexity, the higher the relevance of the question for a specific user. The underlying idea is that knowledge about simple contents/questions is the precondition for answering more complex ones.

rel'(q, u) = rel(q, u) × importance(q) / complexity(q)   (5)

complexity(q) = 1 − (correctans(q) + 1) / (totalans(q) + 1)   (6)

importance(q) = Σ_{i=1}^{feedbacks(q)} feedbackval(i) / (feedbacks(q) + 1)   (7)

Content-based Recommendation. In learning apps with a large amount of questions, content-based recommendation [16] is used to support intelligent Question & Answering (Q&A), which is an orthogonal way to exploit questions and answers stored in KnowledgeCheckR. The underlying scenario is, for example, the following: a user of a KnowledgeCheckR learning application on model-based diagnosis is currently preparing for the exam related to the course. Just two hours before the exam, the question comes to his/her mind which diagnosis approach can guarantee the retrieval of minimal cardinality diagnoses. Such queries can be entered into the search interface of the system. On the basis of the query string (e.g., which diagnosis approach does support minimal cardinality?), a content-based recommender determines the similarity between question features and corresponding query features. The answers to the questions most similar to the query are then shown to the current user. In order to determine the similarity between the query q_c posed by the current user and a question q_i ∈ Q, the following basic similarity metric is applied (see Formula 8) [5, 16].

sim(q_i, q_c) = 2 × |features(q_i) ∩ features(q_c)| / (|features(q_i)| + |features(q_c)|)   (8)

A simplified example of the content-based recommendation approach in KnowledgeCheckR is given in Table 4. Note that such recommendation services are provided in individual learning applications but also on the global level to support situations where users are not completely sure which of the available learning applications could answer their questions. In the following example (Table 4), we assume that the user poses the query in the context of a specific learning application (model-based diagnosis). In our example, the question with the highest similarity to the query q_c is q6 (sim(q6, q_c) = (2 × 3) / (4 + 5) ≈ 0.67). Consequently, the answer specified for q6 would be shown as the answer for q_c. For further details on content-based recommendation approaches we refer to [16].

question     question features
q1           conflict, algorithm, QuickXPlain
q2           FastDiag, algorithm, time, complexity
q3           FastDiag, algorithm, space, complexity
q4           hitting, set, search, tree, breadth, first, minimal, cardinality
q5           hitting, set, conflict, symmetry
q6           minimal, cardinality, diagnosis, search
query (q_c)  diagnosis, approach, support, minimal, cardinality

Table 4. KnowledgeCheckR: a simple example of a content-based recommendation setting.
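Formula 8 applied to the Table 4 example can be sketched as follows; the feature sets are taken directly from the table, while the function and variable names are our own:

```python
# Sketch of the feature-overlap similarity (Formula 8) on the Table 4 example.

def sim(features_q, features_query):
    """Dice-style similarity between the feature sets of a question and a query."""
    overlap = features_q & features_query
    return 2 * len(overlap) / (len(features_q) + len(features_query))

questions = {
    "q1": {"conflict", "algorithm", "QuickXPlain"},
    "q2": {"FastDiag", "algorithm", "time", "complexity"},
    "q3": {"FastDiag", "algorithm", "space", "complexity"},
    "q4": {"hitting", "set", "search", "tree", "breadth", "first", "minimal", "cardinality"},
    "q5": {"hitting", "set", "conflict", "symmetry"},
    "q6": {"minimal", "cardinality", "diagnosis", "search"},
}
query = {"diagnosis", "approach", "support", "minimal", "cardinality"}

# The answer of the best-matching question would be shown to the user.
best = max(questions, key=lambda q: sim(questions[q], query))
print(best, round(sim(questions[best], query), 2))  # q6 0.67
```

The runner-up is q4 with sim(q4, q_c) = (2 × 2) / (8 + 5) ≈ 0.31; all other questions share at most one feature with the query.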
KnowledgeCheckR User Interfaces

KnowledgeCheckR provides the possibility to create individual learning apps (see Figure 1) that consist of contents such as movies and slides and a corresponding set of questions that can be used for personal self-tests (knowledge checks), learning sessions, exercises, exams, and competitions. The supported question types are multiple choice, sequencing tasks, text completion tasks, and image analysis tasks (see also Figure 2). For each question type, a corresponding explanation can be defined that is shown (if activated) if a user is not able to provide a correct answer to a question.

Figure 1. KnowledgeCheckR view on learning apps.
The discussed recommendation approaches (see Section 2) are especially useful in the context of learning sessions where users try to answer questions in order to improve their knowledge level in specific categories. Since questions have different knowledge levels, the recommender system helps a user to focus on questions with a high probability of being answered before being forwarded to more complex ones. Figure 1 provides an overview of a KnowledgeCheckR list of learning applications – one example of such an application is the Anatomy Guide app which is used in medical domains. Registered users dispose of the additional service of repetitive recommendations where questions already posed in the past are posed again in order to achieve the goal of counteracting the forgetting curve [15]. Each learning app can provide learning content in a personalized fashion and also proposes questions a user should try to answer in the next learning iteration [18].

As already mentioned, KnowledgeCheckR provides different ways of asking questions. Figure 2 provides an example of an image recognition task where the task of a user is to answer a medical question by selecting the corresponding image areas which represent the answer. If a question could not be answered correctly, a corresponding explanation can be shown – in the case of images, an explanation is a visualization of the correct answer areas in the image (including a textual explanation as to why the shown area is the correct one).
Figure 2. KnowledgeCheckR graphical interaction mode (find the areas in question): an example from the medicine domain where heart parts in question have to be identified.
Finally, KnowledgeCheckR includes a user interface where the expertise of the community and also of individual users can be analyzed. The interface supports a fine-grained analysis of critical knowledge areas where there is a need to improve the community knowledge or the knowledge of individual users (if the parametrization of the system allows this). The analysis section entails a motivational aspect since, for example, the personal comparison with the whole community immediately leads to more system interaction with the goal to be at least as good as or even better than the average performance of the community. KnowledgeCheckR also provides mechanisms to directly update participants of a learning application with regard to new contents and questions. This update channel can be configured in terms of the way notifications are explained (see Section 4) and the frequency of knowledge updates.
As already mentioned, KnowledgeCheckR has already been deployed and is applied in a couple of application scenarios (see Table 2). The system is applied for various purposes, out of which we will discuss a couple of aspects in the following.
Figure 3. KnowledgeCheckR view on the development of the personal and community knowledge level.
Public Administration.
In public administration, the system is applied in e-learning scenarios related to topics such as safety and security, dealing with computers, programming best practices, requirements engineering best practices, and sensitization of citizens. Whereas the former scenarios focus on knowledge transfer directly to employees of the public administration, the latter one focuses on knowledge transfer between public administration and citizens. Examples thereof are topics such as healthy eating behavior, environmental protection, and first aid. Especially KnowledgeCheckR competitions can be regarded as a question-driven learning channel [18] where the questions (and corresponding answers) are a major means to increase the sensitiveness of citizens with regard to the mentioned topics. Motivation in this context is not self-intrinsic and must be stimulated on the basis of remuneration mechanisms such as prizes provided by companies.

We have conducted a usability study that focused on public administration end-users of the KnowledgeCheckR environment. The questionnaire was based on the System Usability Scale (SUS) [1], with N=20 participants providing feedback on the usability of the system in the context of the mentioned administration-internal applications. Overall, the participants of the study provided positive feedback regarding general usability aspects summarized via the SUS questionnaire (see Figure 4). Benefits from the application of KnowledgeCheckR in the public administration are (1) reduced efforts for managers to keep their team up-to-date and (2) the avoidance of cost-intensive mistakes. On average, participants reported noticeable time savings due to reduced "update efforts".

University Courses.
Experiences from the application of KnowledgeCheckR in the university context provide similar results. Leaders of research teams experience similar effort reductions related to keeping their students up-to-date. Examples thereof are systematic updates regarding the basics of their research topics, practices when writing papers, and criteria and strategies for successfully completing their PhD studies. In the context of university courses, KnowledgeCheckR has been applied in computer science teaching. The related learning applications provide an additional means for students to prepare for courses and to check their knowledge level in different categories. This is an extremely important feature for students since the system enables them to focus their learning effort on relevant topics in which they have to catch up.

Figure 4. System Usability Scale (SUS) [1] evaluation in public administration (evaluation scale 0: strongly disagree .. 100: strongly agree).

When used as an additional means to understand course topics and to prepare for an exam, around 10% of the students use KnowledgeCheckR from the very beginning throughout the whole course. Furthermore, around 80% of the students primarily use the system directly ahead of an exam with the goal to check their knowledge level and to be optimally prepared. Finally, on average 20% of the students do not use KnowledgeCheckR at all. Consequently, usage in scenarios where users are not forced to use the system (contrary to industrial contexts) follows this pattern. Similar percentages have been observed in four different courses.

Finally, we measured the prediction quality of the utility-based approach discussed in Section 2, since in most of the university courses students are used to signing in (non-anonymous mode) to interact with the system. On average, the prediction quality of the utility-based recommendation approach in terms of precision [7] is around 0.9 over all learning apps, i.e., in nearly 90% of the cases, the system manages to predict the item that will also be chosen by the user.

In a course with N=350 Computer Science students, we measured potential increases in student output quality that can be explained by the application of KnowledgeCheckR. This evaluation was performed in the context of the course Object-oriented Analysis and Design, where topics and support team did not change over three years and a significant improvement could be observed in terms of student grades. Compared to previous years, there was a significant reduction of negative grades (evaluation "insufficient") and a significant increase of excellent grades (evaluation "very good") after KnowledgeCheckR had been provided. The previously discussed usage pattern could be confirmed in this scenario, in which the 350 users answered a large number of questions overall.

User Motivation and Explanations.
User motivation is an important aspect since it is the precondition for a wide-spread application of the system. First, KnowledgeCheckR provides functionalities that help to inform participants of learning applications in cases where new contents have been entered into the system or existing ones have been updated. It is important to know that, depending on the learning domain, the formulation of related persuasive explanations (arguments) should differ. In high-involvement learning domains (users have a high interest in understanding the learning content), such as university courses, persuasive explanations should follow the line of socialness (in the line of the persuasion dimensions proposed by Cialdini [3]). An example thereof is the explanation other users who answered the following questions correctly managed to pass the exam in 95% of the cases. Vice versa, in low-involvement learning domains (users have a low or nearly no interest in understanding the learning content), such as learning fire protection rules, instead of socialness, time-related arguments seem to be more important. An example of such an argument is the following: the answering of the following six questions takes only three minutes.

Recommender Systems.
Recommender systems are used to retrieve items of relevance for the user from a large and potentially complex item assortment [9]. Collaborative filtering [4] exploits the preferences of so-called nearest neighbors, i.e., users with preferences similar to the current user, and recommends items that have already been consumed (and rated positively) by the nearest neighbors but not by the current user. Utility-based recommendation is based on a utility analysis of different items using a utility function [8]. Content-based recommender systems [16] focus on evaluating the similarity between a new item a user did not notice up to now and the user profile derived from previous item consumptions. Finally, knowledge-based recommender systems [2, 6] focus on the recommendation of items characterized by attributes, where the recommendation knowledge is often described either in terms of constraints or in terms of similarity metrics.
Recommender Systems in e-Learning. Overall, the application of recommender systems in e-learning scenarios primarily focuses on the personalized provision of learning content – for an overview of recommender systems in e-learning we refer to [11, 12]. In KnowledgeCheckR, collaborative filtering is applied in the context of providing (session-based) recommendation services to anonymous users, utility-based recommendation is used to implement functionalities to counteract forgetting, and content-based recommendation is included to provide basic Q&A services. Thus, the application of recommender systems is extended by specifically taking into account requirements of inverted learning processes [13].
Counteracting Forgetting. Research focused on the analysis of forgetting processes can primarily be found in the psychological literature [10, 15, 18]. For example, Pashler et al. [15] analyze possibilities to enhance learning processes and retard forgetting, and point out clear improvements that can be achieved when providing related technologies. In the context of recommender systems, forgetting processes are primarily taken into account in models that represent (long-term) preference shifts [14]. In that context, knowledge about forgetting processes is exploited to infer preference shifts, whereas knowledge about forgetting processes in KnowledgeCheckR is used to develop strategies that help to counteract forgetting.
Explanations and Recommender Systems. The selection of explanation types implemented in a recommender system strongly depends on the overall goal of the explanation [19]. Examples of such goals are increasing the purchase probability of specific items, increasing a user's item domain knowledge, persuading a user to take specific actions, and increasing the trust level of a user. A major focus of explanation approaches in the KnowledgeCheckR environment is (1) to persuade users to use the system and (2) to provide explanations in situations where a user is not able to answer a question correctly. In both cases, the overall impact of these explanations is that users increase their domain-specific knowledge.

KnowledgeCheckR is a learning environment that supports question-enhanced learning processes that enable counteracting forgetting on the basis of recommendation technologies. In this paper, we provided an overview of the algorithmic approaches integrated in KnowledgeCheckR and also discussed example aspects of the user interface provided by the system. In this context, we also reported results from empirical studies conducted on the basis of real-world deployments of KnowledgeCheckR.

Our plans for future work include further extensions of KnowledgeCheckR. First, we plan to integrate automated video segmentation functionalities that help to cut sequences from videos that best help to explain content/question-specific aspects. Second, we will further improve the predictive quality of question recommendations. Third, sentiment learning from chats will be used to estimate more precisely different dimensions such as the quality and complexity of a question. Furthermore, we plan to include services such as the optimization of the working load of a user to achieve specific goals. For example, we will provide mechanisms that recommend learning items that have to be "consumed" to achieve specific learning goals such as successfully passing an exam with minimum effort. Finally, we plan to analyze in more detail the impact of repetitions on the personal knowledge level evolution.
ACKNOWLEDGEMENTS
The work presented in this paper has been conducted within the scope of the KnowledgeCheckR research project at the Graz University of Technology and various related industry cooperations.
REFERENCES
[1] A. Bangor, P. Kortum, and J. Miller, 'An Empirical Evaluation of the System Usability Scale', International Journal of Human–Computer Interaction, (6), 574–594, (2008).
[2] R. Burke, 'Knowledge-based Recommender Systems', Encyclopedia of Library & Information Systems, (32), 180–200, (2000).
[3] R. Cialdini, Influence: The Psychology of Persuasion, Pearson, 2014.
[4] M. Ekstrand, J. Riedl, and J. Konstan, Collaborative Filtering Recommender Systems, Foundations and Trends in Human-Computer Interaction, now Publishers, 2010.
[5] A. Felfernig, L. Boratto, M. Stettinger, and M. Tkalcic, Group Recommender Systems – An Introduction, Springer, 2018.
[6] A. Felfernig and R. Burke, 'Constraint-based Recommender Systems: Technologies and Research Issues', in ACM International Conference on Electronic Commerce, pp. 17–26, Innsbruck, Austria, (2008).
[7] J. Herlocker, J. Konstan, L. Terveen, and J. Riedl, 'Evaluating Collaborative Filtering Recommender Systems', ACM Transactions on Information Systems, (1), 5–53, (2004).
[8] S. Huang, 'Designing utility-based recommender systems for e-commerce: Evaluation of preference-elicitation methods', Electronic Commerce Research and Applications, (4), 398–407, (2011).
[9] D. Jannach, M. Zanker, A. Felfernig, and G. Friedrich, Recommender Systems – An Introduction, Cambridge University Press, 2010.
[10] S. Kang, 'Spaced repetition promotes efficient and effective learning: Policy implications for instruction', Policy Insights from the Behavioral and Brain Sciences, (1), 12–19, (2016).
[11] A. Klasnja-Milicevic, M. Ivanovic, and A. Nanopoulos, 'Recommender systems in e-learning environments: a survey of the state-of-the-art and possible extensions', AI Review, (4), 571–604, (2015).
[12] C. Krauss, R. Chandru, A. Merceron, and T. An, 'You Might Have Forgotten This Learning Content - How the Smart Learning Recommender Predicts Appropriate Learning Objects', International Journal on Advances in Intelligent Systems, (3–4), 472–484, (2016).
[13] A. Lakmal and P. Dawson, 'Motivation and cognitive load in the flipped classroom: definition, rationale and a call for research', Higher Education Research & Development, (1), 1–14, (2015).
[14] P. Matuszyk, J. Vinagre, M. Spiliopoulou, A. Jorge, and J. Gama, 'Forgetting techniques for stream-based matrix factorization in recommender systems', Knowledge and Information Systems, (2), 275–304, (2018).
[15] H. Pashler, D. Rohrer, N. Cepeda, and S. Carpenter, 'Enhancing learning and retarding forgetting: choices and consequences', Psychonomic Bulletin & Review, (2), 187–193, (2007).
[16] M. Pazzani and D. Billsus, 'Content-based recommendation systems', in The Adaptive Web, eds., P. Brusilovsky, A. Kobsa, and W. Nejdl, volume 4321 of LNCS, 325–341, Springer, Berlin, Heidelberg, (2007).
[17] M. Quadrana, P. Cremonesi, and D. Jannach, 'Sequence-Aware Recommender Systems', ACM Computing Surveys, (66), (2018).
[18] H. Roediger and J. Karpicke, 'Test-enhanced learning: Taking memory tests improves long-term retention', Psychological Science, 249–255, (2006).
[19] N. Tintarev and J. Masthoff, 'Designing and evaluating explanations for recommender systems', in Recommender Systems Handbook, eds., F. Ricci, L. Rokach, B. Shapira, and P. Kantor, 479–510, Springer, Boston, MA, USA, (2011).
[20] M. Wang, P. Ren, L. Mei, Z. Chen, J. Ma, and M. de Rijke, 'A Collaborative Session-based Recommendation Approach with Parallel Memory Modules', in 42nd ACM SIGIR Conference on Research and Development in Information Retrieval.