An Overview of Recommender Systems and Machine Learning in Feature Modeling and Configuration
Alexander Felfernig, Viet-Man Le, Andrei Popescu, Mathias Uta, Thi Ngoc Trang Tran, Müslüm Atas
Alexander Felfernig
Institute of Software Technology, Graz University of Technology, Graz
[email protected]

Viet-Man Le
Institute of Software Technology, Graz University of Technology, Graz
[email protected]

Andrei Popescu
Institute of Software Technology, Graz University of Technology, Graz
[email protected]

Mathias Uta
Siemens Gas & Power, Erlangen
[email protected]

Thi Ngoc Trang Tran
Institute of Software Technology, Graz University of Technology, Graz
[email protected]

Müslüm Atas
Institute of Software Technology, Graz University of Technology, Graz
[email protected]
ABSTRACT
Recommender systems support decisions in various domains ranging from simple items such as books and movies to more complex items such as financial services, telecommunication equipment, and software systems. In this context, recommendations are determined, for example, on the basis of analyzing the preferences of similar users. In contrast to simple items which can be enumerated in an item catalog, complex items have to be represented on the basis of variability models (e.g., feature models) since a complete enumeration of all possible configurations is infeasible and would trigger significant performance issues. In this paper, we give an overview of a potential new line of research which is related to the application of recommender systems and machine learning techniques in feature modeling and configuration. In this context, we give examples of the application of recommender systems and machine learning and discuss future research issues.
ACM Reference Format:
Alexander Felfernig, Viet-Man Le, Andrei Popescu, Mathias Uta, Thi Ngoc Trang Tran, and Müslüm Atas. 2021. An Overview of Recommender Systems and Machine Learning in Feature Modeling and Configuration. In VaMoS'21, February 9–11, 2021, Krems, Austria. ACM, New York, NY, USA, 8 pages. https://doi.org/10.1145/3442391.3442408
Feature models can be regarded as a central element of feature-oriented software development (FOSD) processes [2]. Feature models can be used to represent variability and commonality properties of software artifacts and various other types of products and services [1, 3, 12, 17]. Applications thereof support users in deciding about which features should be included in a specific configuration.
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].
VaMoS'21, February 9–11, 2021, Krems, Austria
© 2021 Association for Computing Machinery.
ACM ISBN 978-1-4503-8824-5/21/02...$15.00
https://doi.org/10.1145/3442391.3442408
Feature models and variability models in general can become quite complex, which makes it challenging to develop these models as well as to interact with the corresponding decision support systems [7, 23, 26]. In this paper, we give an overview of a potentially new research line which is related to the application of recommender systems and machine learning techniques in feature modeling and configuration scenarios.
Recommender systems can be defined as any system that guides a user in a personalized way to interesting or useful objects in a large space of possible options or that produces such objects as output [10]. These systems use basic machine learning techniques (classification as well as prediction techniques) to identify items of relevance for a user. Typical applications of recommender systems rely on a dataset that serves as an input for learning algorithms. These algorithms infer models that predict the item preferences of users. Recommender system applications are manifold and range from simple items such as news [18] to more complex items such as financial services [11] and software systems [26]. In general, recommender systems help to infer user interests on the basis of preference definition histories, i.e., which items were preferred by users in the past.
Collaborative filtering recommender systems [18] are based on the idea of word-of-mouth promotion where items regarded as relevant by users with similar preferences (so-called nearest neighbors) are recommended to the current user. A model-based variant thereof is matrix factorization [19] which describes the relationship between users and items on the basis of a set of hidden aspects. Content-based filtering [20] is based on the idea of recommending items to the current user which are similar to those the user has liked in the past.
Knowledge-based recommender systems [4] are based on explicit recommendation knowledge in terms of attributes and constraints or similarity metrics which describe the relationship between a set of customer requirements and corresponding items. Finally, group recommender systems [9] focus on the recommendation of items to groups of users instead of single users.
The major goal of this paper is to analyze recommendation scenarios in the context of feature model development and configuration. Feature models describe variability and commonality properties of items. (In matrix factorization [19], the mentioned hidden aspects are also denoted as features.) In many cases, feature models are the basis of configurator applications associated with a potentially large user base. In application scenarios where user communities are interacting with configurators (derived from feature models), data can be collected from user interactions and exploited to predict user-individual preferences.
Summarizing, the contributions of this paper are the following:
• We provide an overview of recommendation and machine learning approaches in feature modeling and configuration. In this context, we focus on the basic scenarios of interactive configuration, reconfiguration, and feature modeling processes.
• On the basis of examples, we sketch application scenarios of recommendation technologies that open a new line of research in feature modeling and configuration.
• Finally, we discuss open research issues to be solved to further advance the related state of the art.
The remainder of this paper is organized as follows. In Section 2, we introduce an example feature model from the domain of survey software services. In Section 3, we introduce a CSP-based representation of the example feature model.
In Section 4, we discuss scenarios in which recommendation and related machine learning techniques can be applied to support interactive configuration. Section 5 focuses on recommender systems and machine learning in the context of reconfiguration. Finally, Section 6 provides an overview of recommendation concepts that can be applied to support feature model knowledge acquisition. Sections 4–6 also include a topic-specific discussion of open research issues. In Section 7, the paper is concluded with a discussion of further research topics.
Features can be organized in a hierarchical fashion [3] using relationships such as mandatory (if a parent feature is included in a configuration, the child feature must be included as well, and vice versa), optional (given the inclusion of a parent feature, the inclusion of the corresponding child feature is optional), alternative (if the parent feature is included, exactly one of the child features has to be included), and or (if the parent feature is included, at least one of the child features has to be included). Furthermore, cross-tree constraints can be used to define relationships between features that do not follow the hierarchical structure of the feature model. First, excludes(a,b) constraints prohibit the inclusion of both features (a and b) in the same configuration. Second, requires(a,b) constraints necessitate that if feature a is included in a configuration, feature b must be included as well.
An example feature model is depicted in Figure 1. In this model, license is used to describe the selected license model where two different models are available: an advanced license allows to include all features provided in the feature model whereas a basic license restricts the set of selectable features. In Figure 1, license and QA are designed as mandatory features, i.e., must be included in every survey software configuration. If a user selects ABtesting to be included in the configuration, this also requires the inclusion of the statistics feature. The QA feature supports both basic QA and multimedia QA questions – at least one of these has to be included in each configuration.

Figure 1: An example feature model (survey software).

To enable reasoning about potential solutions (configurations), a feature model has to be translated into a formal representation. One option for a formal representation of feature models are constraint satisfaction problems (CSPs) [31]. On the level of a CSP, a feature model can be defined as a configuration task (see Definition 3.1).
Definition 3.1. Configuration Task. A configuration task derived from a feature model can be defined as a constraint satisfaction problem (CSP) (V, D, C) where V = {v1, v2, .., vn} is a set of Boolean variables, D = {dom(v1), dom(v2), .., dom(vn)} is the set of variable domains, and C = CR ∪ CF where CR = {c1, c2, .., ck} is a set of customer requirements (i.e., preferred inclusions and exclusions of features), and CF = {ck+1, ck+2, .., cq} is a corresponding set of constraints derived from the feature model.
In the case of basic feature models (without further attributes), dom(vi) = {0, 1}. Constraints cj can be directly derived from a feature model and represent (1) structural relationships (e.g., a mandatory relationship) and (2) cross-tree relationships (e.g., a requires relationship). A set of rules of how to formalize the relationships of a feature model in terms of a set of corresponding constraints is discussed, for example, in Benavides et al. [3].
On the basis of the definition of a configuration task, we now introduce the definition of a configuration (Definition 3.2).
Definition 3.2.
Configuration. A configuration (solution) for a configuration task (V, D, C) is an assignment A of the variables in V which fulfils the criterion that all constraints in C are consistent with the variable assignments in A.
Following Definition 3.1, Table 1 represents the set of CSP variables (i.e., features represented by variables in V) and corresponding Boolean domains (D) derived from the feature model in Figure 1. In our example, Table 2 shows the constraints CF derived from the feature model depicted in Figure 1.

Table 1: Features (including abbreviations of feature names) and corresponding domain definitions (1 = true, 0 = false).

feature name     abbreviation  domain
survey           sur           {0, 1}
license          lic           {0, 1}
advancedlicense  adlic         {0, 1}
basiclicense     baslic        {0, 1}
ABtesting        AB            {0, 1}
statistics       stat          {0, 1}
QA               QA            {0, 1}
basicQA          basQA         {0, 1}
multimediaQA     mmQA          {0, 1}

Table 2: Constraints CF = {c1..c10} derived from Figure 1.

constraint  CSP representation
c1          survey = 1
c2          survey ↔ license
c3          ABtesting → survey
c4          statistics → survey
c5          survey ↔ QA
c6          QA → basicQA ∨ multimediaQA
c7          (advancedlicense ↔ ¬basiclicense ∧ license) ∧ (basiclicense ↔ ¬advancedlicense ∧ license)
c8          ¬(ABtesting ∧ basiclicense)
c9          ABtesting → statistics
c10         ¬(basiclicense ∧ multimediaQA)

In Table 2, c1 represents a so-called root constraint which is used to assure that (irrelevant) empty configurations are avoided. Furthermore, c2 and c5 represent mandatory relationships: both license and QA are mandatory, i.e., every configuration has to include these features. Both of the features ABtesting and statistics are regarded as optional (constraints c3 and c4), i.e., they do not have to be part of every configuration. The features basiclicense and advancedlicense are regarded as part of an alternative relationship (c7).
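As an illustration, the configuration task of Tables 1 and 2 can be solved by brute-force enumeration; the following sketch encodes the constraints c1..c10 in Python (the helper names `FEATURES`, `consistent`, and `configurations` are our own, not part of the paper):

```python
from itertools import product

# Feature variables in the order of Table 1 (all domains are {0, 1}).
FEATURES = ["sur", "lic", "adlic", "baslic", "AB", "stat", "QA", "basQA", "mmQA"]

def consistent(a):
    """Check constraints c1..c10 of Table 2 (a maps feature -> 0/1)."""
    implies = lambda x, y: (not x) or y
    return all([
        a["sur"] == 1,                              # c1: root constraint
        a["sur"] == a["lic"],                       # c2: mandatory license
        implies(a["AB"], a["sur"]),                 # c3: optional ABtesting
        implies(a["stat"], a["sur"]),               # c4: optional statistics
        a["sur"] == a["QA"],                        # c5: mandatory QA
        implies(a["QA"], a["basQA"] or a["mmQA"]),  # c6: or relationship
        a["adlic"] == int((not a["baslic"]) and a["lic"])
        and a["baslic"] == int((not a["adlic"]) and a["lic"]),  # c7: alternative
        not (a["AB"] and a["baslic"]),              # c8: excludes
        implies(a["AB"], a["stat"]),                # c9: requires
        not (a["baslic"] and a["mmQA"]),            # c10: excludes
    ])

def configurations(requirements):
    """Enumerate all configurations consistent with CF and the requirements CR."""
    sols = []
    for values in product([0, 1], repeat=len(FEATURES)):
        a = dict(zip(FEATURES, values))
        if consistent(a) and all(a[f] == v for f, v in requirements.items()):
            sols.append(a)
    return sols

# The customer requirement {ABtesting = 1} yields exactly three configurations.
print(len(configurations({"AB": 1})))  # → 3
```

Brute force over 2^9 assignments is of course only viable for tiny models; real configurators delegate this to a constraint solver.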
Furthermore, basicQA and multimediaQA are part of an or relationship (c6). Finally, the features ABtesting and basiclicense are regarded as incompatible (c8), the inclusion of feature ABtesting requires the inclusion of feature statistics (c9), and the features basiclicense and multimediaQA are incompatible (c10).
Assuming the existence of the customer (user) requirement CR = {c11 : ABtesting = 1}, we are able to derive the configurations Ai as depicted in Table 3.

Table 3: Example configurations A1..A3 consistent with C = CF ∪ CR (CR = {c11 : ABtesting = 1}).

feature          A1  A2  A3
survey           1   1   1
advancedlicense  1   1   1
basiclicense     0   0   0
ABtesting        1   1   1
statistics       1   1   1
basicQA          1   0   1
multimediaQA     0   1   1

All three configurations could be regarded as recommendation candidates; however, we are primarily interested in solutions which are the most relevant ones for a user. In the following, we assume the existence of two example users, ua and ub (both would like to have the feature ABtesting included). Following the concepts of utility-based ranking, which is a major element of knowledge-based recommendation [11], interest dimensions can be regarded as explicit global solution properties. We denote a specific interest dimension-related user preference of user ui with regard to dimension dj as up(ui, dj), for example, up(ua, simplicity), up(ua, productivity), up(ub, simplicity), and up(ub, productivity). The higher the value of an interest dimension (between 0 and 1), the higher the corresponding interest of the user. For example, if a user is interested in simplicity, he or she will prefer configurations with a lower number of features (lower overhead in understanding the provided software). In order to be able to rank configurations, we also need to evaluate the solution (configuration) attributes with regard to simplicity and productivity. In this context, we omit the features survey, license, and QA which are included in every configuration.
Table 4: Utility evaluation u of features fi with regard to interest dimensions dj (u(fi, dj)); the rows evaluate the features advancedlicense, basiclicense, ABtesting, statistics, multimediaQA, and basicQA with regard to simplicity and productivity.

On the basis of the preferences of users ua and ub and the information provided in Table 4, we are able to calculate the overall utility of the individual solutions A1..A3 using Formula 1. The user-individual utilities are shown in Table 5. In our simplified example, one of the alternatives turns out to be the preferred configuration of both users. Different users could have different preferences and could also receive different recommendations that depend on their personal preferences regarding a set of interest dimensions.

utility(A, uj) = Σ_{f=true ∈ A} Σ_{d ∈ Dims} u(f, d) × up(uj, d)   (1)

Table 5: Utilities of example configurations A1..A3 for users ua and ub (utility(Ai, ua) and utility(Ai, ub)).

Utility-based ranking (recommendation) can be used if alternative configurations have already been determined [11]. Note that we focused on a scenario where utility-based recommendation is used to identify a recommendation of relevance for a single user. However, there are also scenarios where recommendations have to be determined for groups of users [9]. In such a context, the preferences of individual users (group members) are aggregated, for example, by interpreting a group rating as the average value of the user-individual item evaluations. For details regarding the application of group recommender systems in configuration contexts we refer to [8, 9].
Utility-based ranking has the disadvantage of knowledge acquisition efforts that are needed to specify the contributions of user selections to individual interest dimensions.
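Formula 1 translates directly into code. The sketch below uses hypothetical utility values u(f, d) and user preferences up(u, d) — the concrete numbers are illustrative assumptions, not the values of Table 4:

```python
# Hypothetical utility contributions u(f, d) of features to interest
# dimensions (illustrative values only, not those of Table 4).
u = {
    "adlic": {"simplicity": 0.2, "productivity": 0.9},
    "AB":    {"simplicity": 0.1, "productivity": 0.9},
    "stat":  {"simplicity": 0.3, "productivity": 0.8},
    "basQA": {"simplicity": 0.9, "productivity": 0.4},
    "mmQA":  {"simplicity": 0.4, "productivity": 0.7},
}

# Hypothetical interest-dimension preferences up(u, d) of a user ua.
up_ua = {"simplicity": 0.8, "productivity": 0.2}

def utility(configuration, up):
    """Formula 1: sum of u(f, d) * up(u, d) over selected features f and dimensions d."""
    return sum(u[f][d] * up[d]
               for f, selected in configuration.items() if selected
               for d in up)

# A configuration including adlic, AB, stat, and basQA:
A1 = {"adlic": 1, "AB": 1, "stat": 1, "basQA": 1, "mmQA": 0}
print(round(utility(A1, up_ua), 2))  # → 1.8
```

Ranking the configurations A1..A3 then simply means sorting them by this utility value per user.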
In the following section, we will discuss scenarios in which recommender systems can be applied to support users in the selection of individual features, i.e., the configuration process is still ongoing and users need support in selecting and deselecting individual features.
Assuming the availability of user interaction data from previous configuration sessions, we are able to proactively support the current user interacting with a configurator [7, 23, 26]. A need for such a functionality is given if a user has limited domain knowledge and is unsure about the inclusion or exclusion of a specific feature, or a user simply does not have the time to specify every feature (which is often the case with large and complex feature models). Table 6 depicts a simplified example of such a scenario where users (u1..u3) have already completed their configuration sessions. The current user just started his/her session and has specified his/her preferences regarding the features lic (license), adlic (advanced license), and baslic (basic license).

Table 6: Recommendation of a feature value (0) for feature ABtesting to the current user; the rows list the feature selections of the sessions of u1..u3 and of the current user.

The idea of collaborative filtering based recommendation [18] is to analyze users with similar preferences compared to the current user (nearest neighbors) and to exploit this knowledge for determining recommendations for the current user. If we are interested in recommending feature inclusion or exclusion for the feature ABtesting (AB), we could recommend to the current user to follow his/her nearest neighbors, i.e., not to include this feature. In such scenarios, the number of nearest neighbors can be regarded as a hyper-parameter which can be tuned during the learning phase of a recommendation algorithm.
A basic example similarity function which helps to figure out the similarity between users ua and ub is represented by Formula 2. In this context, the set F represents those features that have been specified, i.e., selected or deselected, by both users.
sim(ua, ub) = |{f ∈ F : f(ua) = f(ub)}| / |{f ∈ F}|   (2)

For a detailed analysis of different collaborative filtering approaches, we refer to Ekstrand et al. [6].
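Formula 2 can be sketched as follows; the session dictionaries (mapping feature abbreviations to 0/1 decisions) are invented for illustration:

```python
def sim(ua, ub):
    """Formula 2: share of commonly specified features on which both users agree."""
    common = set(ua) & set(ub)  # F: features specified by both users
    if not common:
        return 0.0
    return sum(ua[f] == ub[f] for f in common) / len(common)

# Illustrative sessions: 1 = feature selected, 0 = feature deselected.
current = {"lic": 1, "adlic": 1, "baslic": 0}
u1 = {"lic": 1, "adlic": 1, "baslic": 0, "AB": 0, "stat": 0}
u2 = {"lic": 1, "adlic": 0, "baslic": 1, "AB": 0, "stat": 0}

print(sim(current, u1))  # → 1.0 (agreement on lic, adlic, and baslic)
print(sim(current, u2))  # → 0.3333... (agreement only on lic)
```

On this basis, the value of a not-yet-specified feature (e.g., AB) can be taken over from the most similar completed sessions.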
In Table 6, the similarity between the current user and his/her nearest neighbor is 1.0, i.e., both users have completely the same preferences. In the mentioned scenario, we have to support session-based recommendation [34], since recommendations are determined by using preference information from the current user session and the preferences of similar users. We want to emphasize that in real-world scenarios nowadays model-based machine learning approaches such as matrix factorization [19] are applied for item prediction. An example of applying matrix factorization is provided in Section 5. These approaches manage to encode complex similarity relationships into a set of hidden aspects.
Up to now, we focused on recommending the selection or deselection of individual features. A similar recommendation approach can also be applied to the selection of the next choice point, i.e., which feature(s) should be presented next to the user for a selection/deselection decision. In this scenario, we recommend the next feature (the next features) a user could be interested in to specify. Note that in such scenarios information gain is often not the best criterion for attribute selection since users do not necessarily follow the criteria of information gain when selecting the next feature.

Table 7: Recommendation of a possible next feature (QA) to be specified by the current user.

In this example, the current user has already specified the features license (lic), advanced license (adlic), and basic license (baslic). Now, we are interested in which feature the user would like to specify next. Formula 3 is a variation of Formula 2 where similarity measurement focuses on the distance of the feature selection orderings of two different users ua and ub.

sim(ua, ub) = (m(ua, ub) − Σ_{f ∈ F} |fr(ua, f) − fr(ub, f)|) / m(ua, ub)   (3)

In Formula 3, F denotes the features that have already been specified by both users.
The function fr(u, f) denotes the selection position (rank) of feature f in the session of user u. Furthermore, the function m denotes the maximum possible distance between ua and ub in terms of the order of user feature specifications. For example, assuming the feature selection ordering 1: lic, 2: adlic, and 3: baslic of the current user, the maximum possible distance to another user is (|1 − 3|) + (|2 − 2|) + (|3 − 1|) = 2 + 0 + 2 = 4. For a user with exactly the same selection ordering, sim(current, u) = (4 − 0)/4 = 1.0.
After having identified the most similar user(s) of the current user, the next feature to be specified can be recommended. This is the feature with the lowest ranking of the similar users that has not been specified up to now by the current user. In our example, feature QA would be the recommendation candidate since, in the session of the most similar user, QA is the lowest-ranked feature not yet specified by the current user.

Research Issues. We want to emphasize that the mentioned recommendation approaches are not able to guarantee that the determined recommendation is consistent with the constraints in the feature model, i.e., it could be the case that a feature setting is recommended which is inconsistent with the underlying feature model. Basically, this means that the recommender system is unaware of the constraints defined in the feature model. A possibility to avoid such a situation is to trigger an additional consistency check before the recommendation is shown to the user. However, this requires additional computing time and could in some cases result in slow response times. An alternative is to interpret recommendations as search heuristics (e.g., variable value orderings) and to determine recommendations (configurations) on the solver level. An initial approach to recommendation-based search is discussed in detail in [24].
A major challenge is to learn configurator search heuristics in such a way that a reasonable tradeoff between prediction quality and search effort can be achieved. For related work on search optimization in feature model related reasoning we refer, for example, to Sayyad et al. [27].
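The ordering-based similarity of Formula 3 can be sketched as follows. The session orderings are invented, and we assume that the commonly specified features carry ranks 1..n, so that the maximum footrule distance m is ⌊n²/2⌋ (e.g., 4 for n = 3, matching the example above):

```python
def order_sim(fra, frb):
    """Formula 3: similarity of two feature selection orderings.

    fra/frb map each feature to its selection position (rank) in the session.
    """
    common = set(fra) & set(frb)  # F: features specified by both users
    n = len(common)
    m = (n * n) // 2              # maximum possible footrule distance for ranks 1..n
    d = sum(abs(fra[f] - frb[f]) for f in common)
    return (m - d) / m

# The current user specified 1: lic, 2: adlic, 3: baslic.
current = {"lic": 1, "adlic": 2, "baslic": 3}
same = {"lic": 1, "adlic": 2, "baslic": 3, "QA": 4}
reverse = {"lic": 3, "adlic": 2, "baslic": 1}

print(order_sim(current, same))     # → 1.0
print(order_sim(current, reverse))  # → 0.0
```

Given the most similar session (here: `same`), the lowest-ranked feature not yet specified by the current user — QA at rank 4 — would be recommended as the next choice point.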
With reconfiguration [15] we refer to (often interactive) scenarios where (1) a set of features has already been specified (selected or deselected) by a user but triggers an inconsistency, or (2) additional features have been specified in the feature model and we would like to know ahead which user/customer is interested in extending his/her current configuration, i.e., including the new feature.
An example of the first scenario is depicted in Table 8. The current user has changed his/her mind and thinks about choosing a basic license (baslic). At the same time, it seems to be the case that the user is still interested in having an
ABtesting (AB) support. In this context, we have to indicate to the user which of his/her preferences have to be adapted to restore consistency.

Table 8: Log of already completed configurations (conf). The current user has specified inconsistent preferences (if
ABtesting is selected, a license has to be paid).
In this scenario, we are able to determine conflict sets [16] between the mentioned user preferences: CS1: {adlic = 0, AB = 1} and CS2: {baslic = 1, AB = 1}. This inconsistency could be resolved by pointing out to the user the option of restoring the original setting that accepts advanced licensing (alt. 1: adlic = 1, baslic = 0, AB = 1) or to accept the reduction in functionality in terms of not having ABtesting available anymore (alt. 2: adlic = 0, baslic = 1, AB = 0).
If more information is available about the preferences of the user, for example, we know that a user is more interested in a simple solution (simplicity) than in full feature support (productivity), we are able to rank the individual alternatives of restoring consistency. By reusing the utility scores specified in Table 4, we are able to determine a ranking for the individual change recommendations possible in the mentioned scenario (see Table 9).

Table 9: Utility evaluation of change alternatives (based on Table 4). An entry (s, p) denotes that the inclusion of the corresponding feature contributes s to simplicity and p to productivity.

Following Formula 1, the overall utilities of the change alternatives can be determined for the current user; in the mentioned scenario, the alternative which clearly focuses on the aspect of simplicity receives the highest utility and could consequently be recommended to the user. Determining conflicts and corresponding change alternatives is the task of conflict detection and corresponding diagnosis algorithms. A discussion of these algorithms is beyond the scope of this paper. For further related details regarding conflict detection and diagnosis we refer to Junker [16], Reiter [25], and Felfernig et al.
[14].
Reconfiguration is not only associated with consistency management but can also be relevant when existing configurations should be extended with additional features. In Table 10, an additional feature share has been added which supports the sharing of the results of a survey. An important information in this context (e.g., in marketing scenarios) is to know which users would be interested in the new feature and should be primarily contacted.

Table 10: Session log of already completed configurations (including the additional feature share).

The relevance prediction for a new feature for a user can be implemented with collaborative filtering (CF) [18]. A major difference compared to the previously discussed CF approaches is that the relevance of new features can be easily determined offline which makes the task more appropriate for model-based collaborative filtering, often implemented as matrix factorization (MF) [19]. MF focuses on optimizing a set of so-called hidden aspects which can then be used to predict the preferences of individual users.
Table 10 (T) can be reconstructed using dimensionality reduction which is based on the idea of learning two low-dimensional matrices (UA and AF) that help to derive a matrix T′ ≈ T, i.e., T′ approximates T. The advantage of this approach is generalizability and efficiency since we are able to predict individual preferences on the basis of a simple matrix multiplication operation. Let us first construct the matrices UA and AF on the basis of the evaluation dimensions productivity and simplicity as depicted in Tables 11 and 12. When applying matrix factorization to the matrices UA and AF, we can derive the matrix T′ which is an approximation of the original matrix T.
Note that up to now we just manually estimated the relationship between user- and item-specific interest dimensions. The disadvantage of this approach is that the estimation has to be performed manually with tedious adaptation efforts for new features, and also the requirement that each user has to specify his/her preferences regarding the interest dimensions. For the new feature share (see Table 12), we assume a high evaluation with regard to the dimension productivity (additional functionality is provided) and a very high evaluation with regard to simplicity (the share feature does not trigger additional complexity).
Using matrix factorization, this manual process can be substituted by machine learning that estimates the weights of individual interest dimensions to optimize the similarity between T′ and T [19]. Table 13 represents the result of the matrix multiplication UA • AF. In this context, share has a very high estimate for user ub. Consequently, the new feature has a high chance to be of relevance for ub.

Table 11: Matrix UA representing the preferences of users ua and ub regarding the interest dimensions productivity and simplicity.

Matrix factorization has the disadvantage that the learned dimensions (aspects) do not have a clear semantics but just represent abstract properties that optimize the prediction quality of user interests regarding individual features. When applying matrix factorization, the interest dimensions of Tables 11 and 12 would be substituted by two (or more) optimized abstract dimensions [19].
Many software systems are configurable in one way or another [21, 22, 29]. In many cases, the configuration space is huge and mechanisms are needed that help to support tasks such as the prediction of the performance of specific (re-)configurations and the optimization of configurations. Performance prediction can play an important role in the context of (re-)configuring packages and parameters of an operating system. In this context, it should be
possible to predict system performance to avoid slow runtimes due to low-quality parametrizations.
A system should also be able to support parameter optimization, i.e., to recommend reasonable parameter settings during the ramp-up phase of a system or during reconfiguration. Such an optimization can be performed following so-called sampling, measuring, and learning patterns [21] with the overall task of identifying representative system parameter settings, measuring the impact of a specific configuration, and learning a general model to be able to classify between high- and low-quality parametrizations (configurations).
The mentioned goals can be supported, among others, on the basis of matrix factorization [19] where samples can be regarded as reference points and factorization can be applied to (1) recommend parameter settings that will result in a good system performance and (2) provide hints that some of the parametrizations will lead to low system performance.
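A minimal matrix-factorization sketch in the spirit of T′ = UA · AF is shown below. The toy user × feature matrix T, the number of hidden aspects k, and all hyper-parameters are assumptions for illustration; a real implementation would use an off-the-shelf MF library:

```python
import random

# Toy user x feature matrix T (1 = feature selected in a completed session).
T = [
    [1, 0, 1, 1, 0],
    [1, 1, 0, 1, 1],
    [0, 1, 1, 0, 1],
]
n_users, n_feats, k = len(T), len(T[0]), 2  # k = number of hidden aspects

random.seed(0)  # deterministic initialization of UA and AF
UA = [[random.uniform(0.1, 0.9) for _ in range(k)] for _ in range(n_users)]
AF = [[random.uniform(0.1, 0.9) for _ in range(n_feats)] for _ in range(k)]

def predict(i, j):
    """Entry (i, j) of T' = UA . AF."""
    return sum(UA[i][a] * AF[a][j] for a in range(k))

def sse():
    """Sum of squared errors between T and its approximation T'."""
    return sum((T[i][j] - predict(i, j)) ** 2
               for i in range(n_users) for j in range(n_feats))

before = sse()
lr = 0.05
for _ in range(500):  # stochastic gradient descent on the squared error
    for i in range(n_users):
        for j in range(n_feats):
            e = T[i][j] - predict(i, j)
            for a in range(k):
                UA[i][a] += lr * e * AF[a][j]
                AF[a][j] += lr * e * UA[i][a]
after = sse()
print(after < before)  # → True
```

After training, T′ = UA · AF provides preference estimates for entries of T that are unknown, e.g., the column of a newly added share feature.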
Research Issues. Similar to collaborative filtering, recommendations determined by matrix factorization cannot guarantee the feasibility of a recommendation, i.e., there can be situations where a recommendation induces an inconsistency with the constraints defined in the feature model. As mentioned, an approach to deal with such situations is to recommend solver search heuristics that are learned from existing user interaction data [24]. An important issue related to both scenarios, i.e., configuration and reconfiguration, is how to explain recommendations. In both scenarios, explanations are shallow, i.e., only refer to the used algorithm. For example, collaborative filtering bases explanations on the preferences of nearest neighbors. A direction for future research in this context is to combine machine learning with knowledge-based recommendation approaches that perform less well in terms of prediction quality (e.g., the utility-based approach discussed in Section 3) and then generate explanations on the basis of the knowledge-based recommendation model. Such knowledge-based models allow for a more fine-grained explanation on the semantic level. An example research issue is to figure out which criteria are sufficient for the application of an inferior (in terms of prediction quality) knowledge-based approach for the generation of explanations.
Finally, we take a look at the process of feature model development. In this context, recommendation approaches can be applied, for example, in the context of learning processes. Engineers of feature models who are in charge of taking over the development and maintenance of a new feature model often need support in the navigation of the feature and constraint space. A basic idea to support such learning processes is to apply collaborative filtering, which can help to recommend items (e.g., constraints) that could be of relevance for the knowledge engineer at a specific point in time. The recommendation approach that can be applied in this context is quite similar to the one discussed in the context of recommending features to be specified within the scope of configuration sessions. In the example depicted in Table 15, the current knowledge engineer (𝑘𝑒) interacting with the feature model has already visited/edited the constraints 𝑐..𝑐. Collaborative recommendation can be applied to predict further relevant constraints he/she could take a look at. In our example, this would be the constraint 𝑐.

Recommending the next constraint is relevant to support knowledge engineers in understanding a knowledge base. A related aspect is the grouping of constraints in such a way that the cognitive overload of knowledge engineers can be minimized. One way to achieve this is to apply the concepts of clustering, which helps to identify constraints that belong together in one way or another. An analysis of different approaches to group constraints in configuration knowledge acquisition contexts can be found, for example, in Felfernig et al. [13].

Besides supporting users in understanding a knowledge base (feature model), recommender systems can also be applied to automatically generate a knowledge base. Ulz et al.
[32] introduce a human computation [33] based approach where the development of recommender and configuration knowledge bases is implemented on the basis of asking users simple questions and aggregating the results in an intelligent fashion into a corresponding set of constraints. (The underlying idea of human computation is that humans take over problem-solving tasks which computers are not able to solve in equal quality.) Questions refer to selection scenarios; for example, users should give feedback on which items they would like to be included in a recommendation in a specific context. The output of the approach is a set of requires constraints that support item selection in different contexts. Bécan et al. [5] and She et al. [28] follow a similar idea of generating feature models from configuration instances, i.e., instead of asking the user questions about intended item properties, the intended properties are already represented as configurations or logical formulae.

Research Issues. Feature model development should be accompanied by structured testing approaches which support quality assurance of feature models, i.e., to assure that the structures and constraints defined in the feature model are consistent with the underlying domain knowledge. For example, it should not be possible to calculate configurations that (1) include features that should not be combined with each other or (2) exclude features that are regarded as feasible in the underlying application domain.

VaMoS'21, February 9–11, 2021, Krems, Austria

Table 12: Matrix AF representing item property (feature) relationships to interest dimensions (aspects). Columns: features 𝑎𝑑𝑙𝑖𝑐, 𝑏𝑎𝑠𝑙𝑖𝑐, 𝐴𝐵, 𝑠𝑡𝑎𝑡, 𝑚𝑚𝑄𝐴, 𝑏𝑎𝑠𝑄𝐴, 𝑠ℎ𝑎𝑟𝑒; rows: aspects 𝑝𝑟𝑜𝑑𝑢𝑐𝑡𝑖𝑣𝑖𝑡𝑦 and 𝑠𝑖𝑚𝑝𝑙𝑖𝑐𝑖𝑡𝑦.

Table 13: Matrix 𝑇′ resulting from the matrix multiplication UA · AF. Columns: features 𝑎𝑑𝑙𝑖𝑐, 𝑏𝑎𝑠𝑙𝑖𝑐, 𝐴𝐵, 𝑠𝑡𝑎𝑡, 𝑚𝑚𝑄𝐴, 𝑏𝑎𝑠𝑄𝐴, 𝑠ℎ𝑎𝑟𝑒; rows: users 𝑢𝑎 and 𝑢𝑏. Feature 𝑠ℎ𝑎𝑟𝑒 appears to be potentially relevant for 𝑢𝑏.

Table 15: Constraints (𝑐..𝑐) visited/edited per session/knowledge engineer (session/𝑘𝑒).
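Returning to the next-constraint scenario of Table 15: a minimal user-based collaborative filtering sketch over a binary session × constraint matrix could look as follows. The session data and the choice of cosine similarity are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Rows = past knowledge-engineering sessions, columns = constraints c1..c5;
# 1 = constraint visited/edited in that session (invented example data).
sessions = np.array([
    [1, 1, 1, 1, 0],
    [1, 1, 0, 1, 1],
    [0, 1, 1, 1, 0],
])
current = np.array([1, 1, 1, 0, 0])  # the active engineer visited c1..c3

def recommend_next(sessions, current):
    """Score unvisited constraints by similarity-weighted neighbor votes."""
    def cosine(a, b):
        denom = np.linalg.norm(a) * np.linalg.norm(b)
        return (a @ b) / denom if denom else 0.0
    sims = np.array([cosine(s, current) for s in sessions])
    scores = sims @ sessions            # weighted votes per constraint
    scores[current == 1] = -np.inf      # never re-recommend visited ones
    return int(np.argmax(scores))

print("recommend constraint index:", recommend_next(sessions, current))
```

With this data, the most similar past sessions all visited the fourth constraint, so it is recommended next, mirroring the Table 15 scenario.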
Quality assurance for feature models is already supported by different types of analysis operations (see, e.g., [3]). Open issues in this context are the recommendation of relevant test cases useful for identifying erroneous constraints in a feature model and the recommendation of corresponding diagnoses that indicate minimal subsets of constraints which could be responsible for the faulty behavior of the feature model. Such recommendation support would help to further improve the efficiency of feature model knowledge acquisition and maintenance processes.
With this paper, we provide an overview of feature modeling and configuration scenarios that can profit from the application of recommendation and related machine learning approaches. For selected scenarios, we have provided examples that help to improve the understanding of the discussed application. Future work in this context includes empirical evaluations that will help to better estimate which recommendation approach best supports a specific scenario. There are further scenarios that could profit from the application of recommendation technologies.

For example, the analysis of user interaction data could indicate additional relevant constraints to be included in a model. These constraints can also be regarded as specializations of an existing knowledge base [30]. This functionality could be based on association rule mining applied to the interaction data. Such an approach follows the line of research of learning whole feature models from pre-existing configurations (see, for example, Bécan et al. [5]).

In knowledge acquisition scenarios, an intelligent grouping of features and constraints of a feature model could be important to streamline maintenance processes. Supporting such a grouping can, for example, be based on clustering approaches as discussed in Felfernig et al. [13]. A major issue for future work in this context is to make constraint groups flexible and adaptive to different scenarios, for example, searching for a faulty constraint or changing a specific variability property in a feature model. A related challenge is how to recommend maintenance actions to avoid low-quality modeling in terms of complex hierarchical structures and constraints of low understandability.
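Such mining of candidate requires constraints from interaction data could, in its simplest form, be sketched as follows. The transactions and thresholds are invented for illustration; a realistic setting would use a full Apriori or FP-growth implementation rather than this brute-force pairwise scan.

```python
from itertools import combinations

# Invented interaction data: each transaction is the set of features
# selected together in one completed configuration session.
transactions = [
    {"4GB", "adlic", "share"},
    {"4GB", "adlic", "share", "stat"},
    {"2GB", "baslic"},
    {"4GB", "adlic"},
    {"2GB", "baslic", "stat"},
]

def mine_requires(transactions, min_support=0.4, min_conf=0.9):
    """Return candidate 'x requires y' constraints from co-occurrence counts."""
    n = len(transactions)
    items = set().union(*transactions)
    candidates = []
    for a, b in combinations(sorted(items), 2):
        for x, y in ((a, b), (b, a)):
            support_x = sum(x in t for t in transactions)
            support_xy = sum(x in t and y in t for t in transactions)
            # rule x -> y must be frequent and almost always hold
            if support_xy / n >= min_support and \
               support_x and support_xy / support_x >= min_conf:
                candidates.append((x, y))  # interpret as: x requires y
    return candidates

rules = mine_requires(transactions)
```

High-confidence rules such as these would still need review by a knowledge engineer (or a consistency check against the feature model) before being added as constraints.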
Such recommendations can also be based on a more in-depth knowledge of cognitive aspects in knowledge representation and maintenance. Finally, an open issue in the context of applying recommendation technologies in complex item domains is how to evaluate the quality of recommendations. Existing evaluation measures have to be adapted or extended. An example is the measurement of precision: in the configuration context, single features or groups of feature settings could be recommended.
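One conceivable adaptation (our illustrative sketch, not an established measure) computes precision over recommended feature settings, counting a recommendation as a hit only if the setting occurs in the finally accepted configuration:

```python
def setting_precision(recommended, accepted):
    """Fraction of recommended (feature, value) settings that appear in the
    finally accepted configuration (a dict of feature -> value)."""
    if not recommended:
        return 0.0
    hits = sum(1 for f, v in recommended if accepted.get(f) == v)
    return hits / len(recommended)

# Invented example: three recommended settings, two confirmed by the user.
recommended = [("license", "advanced"), ("QA", "multimedia"), ("share", True)]
accepted = {"license": "advanced", "QA": "basic", "share": True}
print(setting_precision(recommended, accepted))  # 2 of 3 settings accepted
```

Extending this to groups of settings (e.g., counting a group as a hit only if all of its settings are accepted) is one of the open design questions mentioned above.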
ACKNOWLEDGMENTS
The work presented in this paper has been developed within the research project ParXCel (Machine Learning and Parallelization for Scalable Constraint Solving), which is funded by the Austrian Research Promotion Agency (FFG) under the project number . We want to thank the following persons for their valuable comments which helped to further improve the quality of this paper: David Benavides (University of Seville), Mayte Gómez (University of Seville), and Klaus Pilsl (Combeenation).
REFERENCES
[1] M. Acher, P. Temple, J-M. Jézéquel, J. Galindo, J. Martinez, and T. Ziadi. 2018. VaryLaTeX: Learning Paper Variants That Meet Constraints. Madrid, Spain, 83–88.
[2] S. Apel and C. Kästner. 2009. An Overview of Feature-Oriented Software Development. Journal of Object Technology 8, 5 (2009), 49–84.
[3] D. Benavides, S. Segura, and A. Ruiz-Cortes. 2010. Automated analysis of feature models 20 years later: A literature review. Information Systems 35, 6 (2010), 615–636.
[4] R. Burke. 2000. Knowledge-based recommender systems. Encyclopedia of Library and Information Systems 69, 32 (2000), 180–200.
[5] G. Bécan, R. Behjati, A. Gotlieb, and M. Acher. 2015. Synthesis of attributed feature models from product descriptions. Nashville, TN, USA, 1–10.
[6] M. Ekstrand, J. Riedl, and J. Konstan. 2011. Collaborative Filtering Recommender Systems. Foundations and Trends in Human–Computer Interaction, Vol. 4.
[7] A. Falkner, A. Felfernig, and A. Haag. 2011. Recommendation Technologies for Configurable Products. AI Magazine 32, 3 (2011), 99–108.
[8] A. Felfernig, M. Atas, T. Tran, and M. Stettinger. 2016. Towards Group-based Configuration. In Workshop on Configuration (ConfWS'16). Toulouse, France, 69–72.
[9] A. Felfernig, L. Boratto, M. Stettinger, and M. Tkalcic. 2018. Group Recommender Systems. Springer.
[10] A. Felfernig and R. Burke. 2008. Constraint-based Recommender Systems: Technologies and Research Issues. In ACM International Conference on Electronic Commerce. Innsbruck, Austria, 17–26.
[11] A. Felfernig, G. Friedrich, D. Jannach, and M. Zanker. 2006. An Integrated Environment for the Development of Knowledge-based Recommender Applications. Intl. Journal of Electronic Commerce 11, 2 (2006), 11–34.
[12] A. Felfernig, L. Hotz, C. Bagley, and J. Tiihonen. 2014. Knowledge-based Configuration - From Research to Business Cases. Elsevier.
[13] A. Felfernig, S. Reiterer, M. Stettinger, F. Reinfrank, M. Jeran, and G. Ninaus. 2013. Recommender Systems for Configuration Knowledge Engineering. In Workshop on Configuration (ConfWS'13). Vienna, Austria, 51–54.
[14] A. Felfernig, M. Schubert, and C. Zehentner. 2012. An Efficient Diagnosis Algorithm for Inconsistent Constraint Sets. AI for Engineering Design, Analysis, and Manufacturing (AIEDAM) 26, 1 (2012), 53–62.
[15] M. Janota, G. Botterweck, and J. Marques-Silva. 2014. On lazy and eager interactive reconfiguration. ACM, Sophia Antipolis, France, 8:1–8:8.
[16] U. Junker. 2004. QuickXPlain: preferred explanations and relaxations for over-constrained problems. AAAI, 167–172.
[17] K. Kang, S. Cohen, J. Hess, W. Novak, and S. Peterson. 1990. Feature-oriented Domain Analysis (FODA) - Feasibility Study. Technical Report CMU/SEI-90-TR-21 (1990).
[18] J. Konstan, B. Miller, J. Herlocker, L. Gordon, and J. Riedl. 1997. GroupLens: Applying Collaborative Filtering to Usenet News. Commun. ACM 40, 3 (1997), 77–87.
[19] Y. Koren, R. Bell, and C. Volinsky. 2009. Matrix factorization techniques for recommender systems. IEEE Computer 42, 8 (2009), 30–37.
[20] M. Pazzani and D. Billsus. 1997. Learning and revising user profiles: The identification of interesting web sites. Machine Learning 27 (1997), 313–331.
[21] J. Pereira, H. Martin, M. Acher, J-M. Jézéquel, G. Botterweck, and A. Ventresque. 2019. Learning Software Configuration Spaces: A Systematic Literature Review. CoRR abs/1906.03018.
[22] J. Pereira, H. Martin, P. Temple, and M. Acher. 2020. Machine Learning and Configurable Systems: A Gentle Introduction. ACM.
[23] J. Pereira, P. Matuszyk, S. Krieter, M. Spiliopoulou, and G. Saake. 2018. Personalized Recommender Systems for Product-Line Configuration Processes. Computer Languages, Systems & Structures 54 (2018), 451–471.
[24] S. Polat-Erdeniz, A. Felfernig, R. Samer, and M. Atas. 2019. Matrix Factorization based Heuristics for Constraint-based Recommenders. In ACM Symposium on Applied Computing (ACM SAC). ACM, Limassol, Cyprus, 1655–1662.
[25] R. Reiter. 1987. A theory of diagnosis from first principles. Artificial Intelligence 32, 1 (1987), 57–95.
[26] J. Rodas-Silva, J. Galindo, J. Garcia-Gutierrez, and D. Benavides. 2019. Selection of Software Product Line Implementation Components Using Recommender Systems: An Application to Wordpress. IEEE Access. Silicon Valley, CA, USA, 465–474.
[28] S. She, U. Ryssel, N. Andersen, A. Wasowski, and K. Czarnecki. 2014. Efficient synthesis of feature models. Information and Software Technology 56, 9 (2014), 1122–1143.
[29] P. Temple, M. Acher, J-M. Jézéquel, and O. Barais. 2017. Learning Contextual-Variability Models. IEEE Software 34, 6 (2017), 64–70.
[30] P. Temple, J. Galindo, M. Acher, and J-M. Jézéquel. 2016. Using machine learning to infer constraints for product lines. ACM, Beijing, China, 209–218.
[31] E. Tsang. 1993. Foundations of Constraint Satisfaction. Academic Press, London.
[32] T. Ulz, M. Schwarz, A. Felfernig, S. Haas, A. Shehadeh, S. Reiterer, and M. Stettinger. 2017. Human Computation for Constraint-based Recommenders. Journal of Intelligent Information Systems 49 (2017), 37–57.
[33] L. von Ahn. 2005. Human Computation. Technical Report CMU-CS-05-193.
[34] S. Wang, L. Cao, and Y. Wang. 2019. A Survey on Session-based Recommender Systems. arXiv.