Designing Explanations for Group Recommender Systems
Alexander Felfernig
Institute of Software Technology, Graz University of Technology
Graz, Austria
[email protected]
Nava Tintarev
Web Information Systems, TU Delft
Delft, the Netherlands
[email protected]
Thi Ngoc Trang Tran
Institute of Software Technology, Graz University of Technology
Graz, Austria
[email protected]
Martin Stettinger
Institute of Software Technology, Graz University of Technology
Graz, Austria
[email protected]
Abstract—Explanations are used in recommender systems for various reasons. Users have to be supported in making (high-quality) decisions more quickly. Developers of recommender systems want to convince users to purchase specific items. Users should better understand how the recommender system works and why a specific item has been recommended. Users should also develop a more in-depth understanding of the item domain. Consequently, explanations are designed in order to achieve specific goals such as increasing the transparency of a recommendation or increasing a user's trust in the recommender system. In this paper, we provide an overview of existing research related to explanations in recommender systems, and specifically discuss aspects relevant to group recommendation scenarios. In this context, we present different ways of explaining and visualizing recommendations determined on the basis of preference aggregation strategies.
Index Terms—Recommender Systems, Group Recommender Systems, Explanations.
Preprint, cite as: A. Felfernig, N. Tintarev, T.N.T. Tran, and M. Stettinger. Explanations for Groups. In A. Felfernig, L. Boratto, M. Stettinger, and M. Tkalcic (Eds.), Group Recommender Systems: An Introduction (pp. 105-126). SpringerBriefs in Electrical and Computer Engineering. Springer, 2018.

I. INTRODUCTION
Explanations have been recognized as an important means to help users evaluate recommendations and make better decisions, but also to deliver persuasive messages to the user [1], [2]. Empirical studies show that users appreciate explanations of recommendations [1], [3]. Explanations can be regarded as a means to make something clear by giving a detailed description [4]. In the recommender systems context, Friedrich and Zanker [5] define explanations as information about recommendations and as means to support objectives defined by the designer of a recommender system. Explanations can be seen from two basic viewpoints [6], [7]: (1) the user's (group member's) and (2) the recommender provider's point of view. Users of recommender systems need additional information to be able to develop a better understanding of the recommended items.
Developers of recommender systems want to provide additional information to users for various reasons, for example, to convince the user to purchase an item, to increase a user's item domain knowledge (educational aspect), and to increase a user's trust in and overall satisfaction with the recommender system. Another objective is to make users more tolerant with regard to recommendations provided by the system. This is especially important for new users/items, where a recommendation may otherwise be perceived as inappropriate. Solely providing the core functionality of recommender systems, i.e., showing a list of relevant items to users, could evoke the impression of interacting with a black box with no transparency and no additional user-relevant information [1], [2]. Consequently, explanations are an important means to provide information related to recommendations, the recommendation process, and further objectives defined by the designer of a recommender system [5], [8]–[11]. Visualizations of explanations can further improve the perceived quality of a recommender system [7], [11], [12] – where appropriate, examples of visualizations will be provided.
Explanations in Single User Recommender Systems:
In single user recommender systems, various efforts have already been undertaken to categorize explanations with regard to information sources used to generate explanations and corresponding goals of explanations [2], [5], [13]–[16]. A categorization of different information sources that can be used for the explanation of recommendations is given, for example, in Friedrich and Zanker [5], where recommended items, alternative items, and the user model are mentioned as three orthogonal information categories. Potential goals of explanations are discussed, among others, in Tintarev and Masthoff [2] and Jameson et al. [17]. Examples thereof are efficiency (reducing the time needed to complete a choice task), persuasiveness (exploiting explanations to change a user's choice behavior) [18], effectiveness (proactively helping the user to make higher-quality decisions), transparency (reasons as to why an item has been recommended, i.e., answering why-questions), trust (supporting a user in increasing her confidence in the recommender), scrutability (providing ways to make the user profile manageable), satisfaction (explanations focusing on aspects such as enjoyment and usability), and credibility (assessed likelihood that a recommendation is accurate). Bilgic and Mooney [6] offer a differentiation between explanations that focus on (1) promotion, i.e., convincing users to adopt recommendations, and (2) satisfaction, i.e., helping users make more accurate decisions.
Examples of verbal explanations for single user recommendations include phrases such as (1) 'users who purchased item x also purchased item y', (2) 'since you liked the book x, we recommend book y from the same authors', (3) 'since you prefer taking sports photos, we recommend camera y because it supports 10 pics/sec in full-frame resolution', and (4) 'item y would be a good choice since it is similar to the already presented item x and has the requested higher frame rate (pics/sec)'.
These example explanations are formulated based on information collected and provided by the underlying recommendation approaches, i.e., (1) collaborative filtering, (2) content-based filtering, (3) constraint-based recommendation, and (4) critiquing-based recommendation – see, for example, [1], [13], [19], [20]. These examples of explanations can be regarded as 'basic', since further information could be included. For instance, information related to competitor items and previous user purchases: 'since you prefer taking sports photos, we recommend camera y because it supports 10 pics/sec in full-frame resolution. z would have been the other option but we propose y since you preferred purchasing from provider k in the past and y is only a little bit more expensive than its competitors'.
Another type of explanation is the following: 'no solution could be found – if you increase the maximum acceptable price or decrease the minimum acceptable resolution, a corresponding solution can be identified.' This explanation focuses on indicating options to find a way out of the 'no solution could be found' dilemma, which primarily occurs in the context of constraint-based recommendation scenarios [21]. Another example is 'item y outperforms item z in both quality and price, whereas x outperforms z only in quality'. This explanation does not focus on one item but supports the comparison of different candidate items (in this case, x and y). Importantly, it is directly related to the concept of asymmetric dominance (y outperforms z two times whereas x does this only once), which is a decoy effect [22]. Explanations based on item comparisons are mostly supported in critiquing-based [19] and constraint-based recommendation [23], which are both based on semantic recommendation knowledge. In critiquing-based recommendation, compound critiques point out the relationship between the current reference item and the corresponding candidate items [24].
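A compound critique of this kind can be derived by a pairwise attribute comparison between the current reference item and a candidate item. The following sketch illustrates the idea; the attribute names, item values, and critique labels are illustrative assumptions, not taken from the cited approaches.

```python
# Sketch of deriving a compound critique for a candidate item relative to
# the current reference item, by comparing attribute values pairwise.
# Items and attributes below are hypothetical.

def compound_critique(reference, candidate):
    """List of [lower/higher attribute] patterns describing the candidate."""
    critique = []
    for attr, ref_value in reference.items():
        if candidate[attr] > ref_value:
            critique.append(f"[higher {attr}]")
        elif candidate[attr] < ref_value:
            critique.append(f"[lower {attr}]")
    return critique

# Hypothetical digital cameras.
reference = {"price": 500, "resolution": 20, "optical zoom": 5}
candidate = {"price": 400, "resolution": 24, "optical zoom": 5}
print(compound_critique(reference, candidate))
# ['[lower price]', '[higher resolution]']
```

The resulting patterns can then be verbalized in the style of the compound-critique example discussed in the text.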
An example of a compound critique in the domain of digital cameras is the following: on the basis of the current reference item x, you can take a look at cameras with a [lower price] and a [higher resolution] or at cameras with a [higher price] and a [higher optical zoom]. An analysis of comparison interfaces in single user constraint-based recommendation is presented in [23], [25].

Explanations in Group Recommender Systems:
The aforementioned explanation approaches focus on single users and thus do not have to consider certain aspects of group decision making. Explanations for groups can have further goals such as fairness (taking into account, as far as possible, the preferences of all group members), consensus (group members agree on the decision), and optimality (a group makes an optimal or nearly-optimal decision). An important aspect in this context is that explanations show how the interests of individual group members are taken into account. This is not relevant in the context of single user recommender systems. Understanding the underlying process enables group members to evaluate the appropriateness of the way their preferences have been taken into account by the group recommender system. Similar to explanations for single users, explanations for groups are shaped by the underlying recommendation algorithms. Explanations similar to those already mentioned can also be defined in a group context. For example, (1) 'groups that like item x also like item y', (2) 'since the group likes the film x, we also recommend film y from the same director', (3) 'since the maximum camera price accepted by group members is (defined by Paul) and the minimum accepted resolution is 18 mpix (defined by Joe), we recommend y which supports 20 mpix at a price of .', and (4) 'item x is a good choice since it supports a higher frame rate requested by all group members and is only a little bit more expensive'. These examples show that the chosen preference aggregation approach has an impact on the explanation style. While aggregated predictions [27] include information about the individual preferences of group members (e.g., one group member specified the lowest maximum price of ) and thus support explanation goals such as fairness and consensus, aggregated models-based approaches [27] restrict explanations to the group level (e.g., groups that like x also like y). More advanced (hybrid) explanations [28] can also be formulated in group recommendation scenarios, for example, 'since all group members prefer sports photography, we recommend camera y rather than camera z. It is only a little bit more expensive but has a higher usability, which is important for group member Joe, who is a newbie in digital photography. Similar groups also preferred y'. An example of an explanation in a situation where no solution could be found is: 'no
23 mpix camera with a price below could be found. Therefore we recommend camera y with 20 mpix and a price of since price is the most important criterion for all group members.' Finally, the following example shows how to take into account a group's social reality, for example, in terms of 'tactful' explanations [10]: 'Although your preference for item y is not very high, your close friend Peter thinks it is an excellent choice'. This example explanation is formulated on the level of aggregated predictions and also takes into account social relationships among group members (e.g., neighborhoods in a social network). On the level of aggregated models, an explanation can be formulated as follows: 'A majority thinks that it is a good choice. Some group members think that it is an excellent choice.' (assuming the existence of at least some aggregated categorization of preferences such as number of likes). Taking into account the individual preferences of group members helps to increase mutual awareness among group members, and thus counteracts the natural tendency to focus on one's own favorite alternatives [29]. In contrast to single-user decision making, the exchange of decision-relevant knowledge among group members has to be fostered [26]. An approach to explaining the consequences of a given recommendation is introduced by Jameson et al. [30], where emotions of individual group members with regard to a recommendation are visualized in terms of animated characters.
We want to emphasize that explanations for groups is a highly relevant research topic with a limited, but nevertheless direction-giving, number of research results [29], [31]–[34]. In the following, we sketch ways in which explanations for single-user recommendation scenarios can be adapted to groups. Following the idea of categorizing explanation types along the different recommendation approaches [4], [35], we discuss explanations for groups in the context of collaborative- and content-based filtering, as well as constraint- and critiquing-based recommendation.

II. COLLABORATIVE FILTERING
A widely used example of explanations in collaborative filtering recommenders is 'users who purchased item x also purchased item y'. Such explanations can be generated, for example, on the basis of association rule mining, which is often used as a model-based collaborative filtering approach [36]. Herlocker et al. [1] analyzed the role of explanations in collaborative filtering recommenders. They focused on the impact of different explanation styles on user acceptance of recommender systems. Explanations were mostly represented graphically. For example, a histogram of neighbors' ratings for the recommended item categorized ratings as 'good', 'neutral', or 'bad'. The outcome of their study was that rating histograms are the most compelling way to explain rating data. Furthermore, simple graphs were perceived as more compelling than more detailed explanations, i.e., simplicity of explanations is a key factor.
An orthogonal approach to proposing explanations for collaborative-filtering-based recommendations is presented by Chang et al. [37]. Following the idea of generating recommendations based on knowledge from the crowd (see, e.g., [38]), the authors introduce the idea of asking crowd workers to provide feedback on explanations. Quality assurance is an issue, but crowd-sourced explanations were considered high-quality. The authors mention longer explanation texts and an increased number of references to item genres as examples of indicators of high-quality explanations. An example of a question for crowd-sourcing in group recommendation scenarios is the following: 'given this movie recommendation (e.g., Guardians of the Galaxy), which of the following are useful explanations for a group of middle-aged persons? Can be viewed by the whole family; Includes plenty of songs from the 70s; Best movie we have ever seen'.
This way, crowd knowledge can be exploited to better figure out which kinds of explanations are useful in which context and which ones might be particularly well-received by specific groups (in this case, middle-aged persons). A similar approach can be used to figure out relevant explanations in other recommendation approaches, i.e., which tags to use for an explanation? (content-based filtering), which requirements to relax? (constraint-based recommendation), and which critiques to propose to the user? (critiquing-based recommendation).
As mentioned by Bilgic and Mooney [6], a goal of the explanations introduced in Herlocker et al. [1] is to promote items but not to provide more insights as to why the items have been recommended, i.e., not to provide satisfaction-oriented explanations that might help users to make more accurate decisions. There are different ways to move the explanation focus towards more informative explanations. As proposed in [6] (for single user recommenders), a collaborative-filtering-based explanation can be extended by providing information on items that had a major influence on the determination of the proposed recommendation. Removing the most influential items (already rated by group members) from the set of rated items triggers the most significant difference in terms of recommended item ratings. Similar approaches can be used to determine the most influential items in other recommender types [6], [39].
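The influence-based extension can be sketched as a leave-one-out analysis: remove each already-rated item from the profile and measure how strongly the prediction for the target item changes. The similarity-weighted prediction below is an illustrative stand-in for whatever recommender is actually used, and all ratings and similarity values are assumptions.

```python
# Sketch of identifying the most influential rated item, in the spirit of
# the proposal by Bilgic and Mooney: the item whose removal triggers the
# largest change in the predicted rating for the target item.

def predict(target, profile, similarity):
    """Similarity-weighted average of profile ratings (illustrative)."""
    num = sum(similarity[(item, target)] * rating for item, rating in profile.items())
    den = sum(abs(similarity[(item, target)]) for item in profile)
    return num / den if den else 0.0

def most_influential_item(target, profile, similarity):
    base = predict(target, profile, similarity)
    def influence(item):
        reduced = {i: r for i, r in profile.items() if i != item}
        return abs(base - predict(target, reduced, similarity))
    return max(profile, key=influence)

# Hypothetical profile and item-item similarities.
profile = {"a": 5.0, "b": 1.0, "c": 4.0}
similarity = {("a", "t"): 0.9, ("b", "t"): 0.8, ("c", "t"): 0.1}
print(most_influential_item("t", profile, similarity))  # prints a
```

In a group setting, the same analysis can be applied to the items already rated by group members.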
Collaborative Filtering Explanations for Groups:
An example of basic explanations in group-based collaborative filtering is included in POLYLENS, where the predicted rating for each group member and for the group as a whole is shown [40]. Some simple examples of how to provide explanations in the context of group-based collaborative filtering scenarios are provided in Tables I and II. Both examples represent variants of the explanation approaches introduced by Herlocker et al. [1]. Table I depicts an example of an explanation that is based on the preferences (ratings) of the nearest neighbors (NN = ∪{n_ij}) of the group members u_i (for simplicity, we assume the availability of a complete set of rating data). For each recommended item t_i, the corresponding frequency distribution of the ratings of the nearest neighbors of individual group members is shown. Note that NN can represent users who are in the intersection of users who rated this item ({n_11, n_12, ...} ∩ ... ∩ {n_m1, ..., n_mk}). Alternatively, NN can represent the users in the union of nearest neighbors ({n_11, n_12, ...} ∪ ... ∪ {n_m1, ..., n_mk}). A related explanation can be 'users similar to members of this group rated item t as follows'.
Table II depicts an example of an explanation that is based on the preferences of neighborhood groups gp_j of the current group gp. We assume that ratings are only available in an aggregated fashion (ratings of individual users are not available, e.g., for privacy reasons). In this context, the frequency distribution of the ratings of the nearest neighbor groups is shown for each item t_i. An explanation can contain the following text: 'groups similar to the current group rated item t as follows'.

TABLE I. Collaborative filtering explanations for aggregated predictions, i.e., explanations based on information about the preferences (ratings) of nearest neighbors (n_ij) of individual group members u_i.

TABLE II. Collaborative filtering explanations for aggregated models, i.e., explanations are based on the aggregated preferences of individual group members.

In the given examples, explanations refer to ratings but do not take into account aggregation functions. Ntoutsi et al. [34] present an approach to explain the aggregation functions in aggregated-prediction-based collaborative filtering. For example, the application of Least Misery (LMS) triggers explanations of type 'item y has a group score of 2.9 due to the (lowest) rating determined for user a'. A more 'group-oriented' explanation is 'item y is recommended because it avoids misery within the group'. When using Most Pleasure (MPL), the corresponding explanation would be 'item y has a group score of 4.8 due to the (highest) rating determined for user b'. Finally, when using Average (AVG), explanations of type 'item y is most similar to the ratings of users a, b, and c' are provided. Similar explanations can be generated for content-, constraint-, and critiquing-based recommendations. Although initial approaches have already been proposed, different ways to explain group recommendations depending on the used aggregation function(s) are an issue for future research.

Visualization of Collaborative Filtering Explanations for Groups:
There are different ways to visualize a recommendation determined using collaborative filtering [1]. The frequency distributions introduced and evaluated by Herlocker et al. [1] can also be applied in the context of group recommendation scenarios. An example thereof is given in Figure 1, where the explanation information contained in Table I is represented graphically. Figure 2 depicts a similar example where an item-specific evaluation of nearest (most similar) groups is shown in terms of a frequency distribution. Alternatively, spider diagrams can be applied to visualize the preferences of nearest neighbors. An example is depicted in Figure 3. This type of representation is based on the idea of consensus-based approaches to visualize the current status of a group decision process [41], [42].

III. CONTENT-BASED FILTERING
The basis for determining recommendations in content-based filtering is the similarity between item descriptions
and keywords (categories) stored in a user profile. Since the importance of keywords can differ among group members, it is important to identify those which are relevant for all group members [43]. Explanations are based on the analysis of item-related content. Examples of verbal explanations in content-based filtering are given in [6]. The authors show that keyword-style explanations can increase both the perceived trustworthiness and the transparency of recommendations. Such explanations primarily represent occurrence statistics of keywords in item descriptions (see also [3]). Gedikli et al. [13] compare different approaches to representing explanations in content-based filtering scenarios, and show that tag-cloud-based graphical representations outperform verbal approaches.

Fig. 1. Graphical representation of the explanation data contained in Table I.

Fig. 2. Graphical representation of the explanation data contained in Table II.

Fig. 3. Spider diagram for explaining aggregated models based collaborative filtering recommendations: ratings of the nearest neighbor groups of gp for the recommended item t. This representation is a variant of the consensus-based interfaces discussed in [41].

Content-based Filtering Explanations for Groups:
A simple example of content-based filtering explanations for groups is depicted in Table III. Item categories cat_j have a user-specific weight (derived, for example, from the category weights of individual user profiles, where user u_i is a member of group G). To determine the explanation relevance (er) of individual categories, these weights are combined with item-individual weights (iw) (see Formula 1).

er(cat_j, t_k) = ( Σ_{u_i ∈ G} userweight(u_i, cat_j) × iw(t_k, cat_j) ) / |G|   (1)

The higher the explanation-relevance of a category, the higher the category will be ranked in a list shown to the group (members). A verbal explanation related to item t (Table III) can be of the form 'item t is recommended since each group member is interested in category cat'. If the preference information of individual group members is not available (e.g., for privacy reasons), this explanation would be formulated as 'item t is recommended since the group as a whole is interested in category cat'. Also, more than one category can be used in such an explanation. As mentioned, category- or keyword-based explanations can also be extended with information about the most influential items [6]. This can be achieved by determining those items that trigger the most significant change in item rating predictions (if not taken into account by the recommendation algorithm).
An approach to explaining recommendations on the basis of tags is presented in Vig et al. [35]. Tagsplanations (explanations based on user community tags) are introduced to explain recommendations. In this context, tag relevance is defined as the Pearson correlation between item ratings and corresponding tag preference values.
Tag preference is the relationship between the number of times a specific tag has been applied to an item compared to the total number of tags applied to the item (weighted with corresponding item ratings). In a study with MOVIELENS [44] users, the authors show that both tag relevance and tag preference help to achieve the explanation goals of justification (why has an item been recommended) and effectiveness (better decisions are made). Similar to the example shown in Table III, explanation-relevance (in this case tag relevance) is used to order a list of explanatory tags [35].
An opinion mining approach to generating explanations is introduced by Muhammad et al. [45]. In the context of opinion mining, features are extracted from item reviews [46] and then associated with corresponding sentiment scores. Features and corresponding sentiments are then used to generate explanations related to the pros and cons of specific items. Features are sorted into pro or con according to whether their values are above or below a predetermined threshold: all item features with an explanation relevance at or above the threshold are regarded as pros, the others are regarded as cons. Formula 2 represents an approach to determine the explanation-relevance (er) of a specific feature f_i, where sentiment represents a group preference with regard to a specific feature and item-sentiment represents the support of the feature by the item t_j.

er(f_i) = sentiment(f_i) × item-sentiment(t_j, f_i)   (2)

Opinion mining approaches to explanations can also be extended to groups. An example of applying Formula 2 in the context of group recommender systems is given in Table IV. This example sketches the generation of explanations in aggregated models scenarios. When determining explanations in the context of aggregated predictions, explanation relevance could be determined for each individual user and then aggregated using an aggregation function such as Average (AVG) to select explanations considered most relevant for the group.
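Formula 2, together with the pro/con threshold, can be sketched as follows; the feature names, sentiment values, and threshold are illustrative assumptions, not values from the cited study.

```python
# Sketch of Formula 2: explanation relevance of a feature as the product of
# the group's sentiment for the feature and the item's sentiment score for
# it. Features at or above the threshold count as pros, the rest as cons.

def er(feature, item, sentiment, item_sentiment):
    return sentiment[feature] * item_sentiment[(item, feature)]

def pros_and_cons(item, features, sentiment, item_sentiment, threshold):
    pros = [f for f in features if er(f, item, sentiment, item_sentiment) >= threshold]
    cons = [f for f in features if er(f, item, sentiment, item_sentiment) < threshold]
    return pros, cons

# Hypothetical group-level feature sentiments and item sentiments.
sentiment = {"battery": 0.9, "weight": 0.4}
item_sentiment = {("t1", "battery"): 0.8, ("t1", "weight"): 0.7}
pros, cons = pros_and_cons("t1", ["battery", "weight"], sentiment, item_sentiment, 0.5)
print(pros, cons)  # battery is a pro (0.72), weight a con (0.28)
```

For aggregated predictions, the same relevance would first be computed per user and then averaged over the group, as described above.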
Visualization of Content-based Filtering Explanations for Groups:
An alternative to list-based representations of explanations is mentioned, for example, in Gedikli et al. [13], where content-based explanations are visualized in the form of tag-clouds. An example of a tag-cloud-based explanation in the context of group recommendation is depicted in Figure 4. The used tags are related to the travel domain. In this scenario, the tag-cloud represents an explanation based on the aggregated preferences of individual group members. For example, Leo and Isa like city tours. One can imagine other visual encodings in terms of shape, textures, and highlightings [47]. Tag relevance can be determined on the basis of a tag relevance estimator similar to Formula 1.

TABLE III. Content-based filtering explanations for aggregated predictions. The most explanation-relevant categories for an item t_k are marked with √.

TABLE IV. Opinion mining based explanations for aggregated models. Features f_i with the highest explanation-relevance are marked with √.
Fig. 4. Tag-cloud representation used to show the relevance of tags with regard to a specific item, extended with preference information related to group members (Isa, Joe, and Leo).

IV. CONSTRAINT-BASED RECOMMENDATION
Constraint-based recommender systems are built upon deep knowledge about items and their corresponding recommendation rules (constraints). This information serves as a basis for explaining item recommendations by analyzing the reasoning steps that led to the derivation of solutions (items) [48]. Such explanations follow the tradition of AI-based expert systems [5], [49]. On the one hand, explanations are used to answer how-questions, i.e., questions related to the reasons behind a recommendation. A corresponding analysis is provided, for example, by Felfernig et al. [23].
How-questions are answered in terms of showing the relationship between defined user requirements req_i and the recommended items. An example of such an explanation is 'item y is recommended, since you specified the upper price limit with and you preferred light-weight cameras' (for details see [23], [48]). Besides answering how-questions, constraint-based recommenders help to answer why and why not questions. Explanations for the first type are used to provide insights to the user as to why certain questions have to be answered, whereas explanations for why not questions help a user to escape from the no solution could be found dilemma [50]. Felfernig et al. [23] show that such explanations can help to increase a user's trust in the recommender application. Furthermore, explanations related to why not questions can increase the perception of item domain knowledge.

Explanations in Constraint-based Recommendation for Groups:
Formula 3 represents a simple example of an approach to determine the explanation-relevance (er) of user requirements in constraint-based recommendation scenarios for groups. A related example is depicted in Table V. The assumption is that all group members have already agreed on the set of requirements ∪ req_j and each group member has also specified his/her preference in terms of an importance value. An explanation that can be provided to a group in such a context is 'requirement req is considered important by the whole group'.

er(req_j) = ( Σ_{u_i ∈ G} importance(req_j, u_i) ) / |G|   (3)

The example explanation shown in Table V does not take into account causal relationships between requirements and items [48]. For example, if a group agrees that the price of a camera has to be below a given limit, and every camera fulfills this criterion, the price requirement does not filter out items from the itemset, so there is no causal relationship between a recommendation subset of a given itemset and the price requirement.

Combining Constraints and Utilities:
Constraint-based recommendation is often combined with an additional mechanism that supports the ranking of candidate items. An example thereof is Multi-Attribute Utility Theory (MAUT) [51], which supports the evaluation of items in terms of a set of interest dimensions which can be interpreted as generic requirements.

TABLE V. Explanation relevance of requirements in constraint-based recommendation (aggregated models). The most relevant requirement is marked with √.

For example, in the digital camera domain, output quality is an interest dimension that is related to user requirements such as resolution and sensor size. Group members specify their preferences with regard to the importance of the interest dimensions dim_i. Furthermore, items t_j have different contributions with regard to these dimensions (see Table VI). Similar to content-based filtering, the item-specific explanation relevance (er) of individual interest dimensions can be determined on the basis of Formula 4, where imp represents the user-specific importance of an interest dimension dim_i and con the contribution of an item to dim_i.

er(dim_i, t_j) = ( Σ_{u_k ∈ G} imp(u_k, dim_i) × con(t_j, dim_i) ) / |G|   (4)

Following this approach, [39], [52]–[54] show how to apply utility-based approaches to the selection of evaluative arguments, i.e., arguments with the highest relevance. In this context, arguments take over the role of the previously-mentioned interest dimensions. Such an approach is provided in the INTRIGUE system [31], where recommended travel destinations are explained to groups, and arguments are chosen depending on their utility for individual group members or subgroups.
An example of an argument (as an elementary component of an explanation) for a car recommended by a constraint-based recommender is 'very energy-efficient', where energy-efficiency can be regarded as an interest dimension. The contribution of an item to this interest dimension is high if, for example, the fuel consumption of a car is low. If a customer is interested in energy-efficient cars and a car is energy efficient, the corresponding argument will be included in the explanation (see the example in Table VI).
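Formula 4 can be sketched as follows; the group members, interest dimensions, and all importance and contribution values are illustrative assumptions.

```python
# Sketch of Formula 4: item-specific explanation relevance of an interest
# dimension, averaging over group members the product of each member's
# importance weight for the dimension and the item's contribution to it.

def er(dim, item, importance, contribution, group):
    total = sum(importance[(u, dim)] * contribution[(item, dim)] for u in group)
    return total / len(group)

group = ["u1", "u2"]
importance = {("u1", "risk"): 0.9, ("u2", "risk"): 0.7,
              ("u1", "profit"): 0.2, ("u2", "profit"): 0.5}
contribution = {("t", "risk"): 1.0, ("t", "profit"): 0.4}

# The dimension with the highest relevance would be chosen as the argument,
# e.g., "recommended since all group members strongly prefer low-risk investments".
ranked = sorted(["risk", "profit"],
                key=lambda d: er(d, "t", importance, contribution, group),
                reverse=True)
print(ranked)  # ['risk', 'profit']
```

The same scheme applies to Formula 3, where the per-member importance values of an agreed requirement are averaged instead.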
An example explanation from another domain (financial services) is the following: 'financial service t is recommended since all group members strongly prefer low-risk investments'. Examples of interest dimensions used in this context are risk, availability, and profit.

Consensus in Group Decisions:
Situations can occur where the preferences of individual group members become inconsistent [41], [55], [56]. In the context of group recommendation scenarios, consensus is defined in terms of disagreement between individual group members regarding item evaluations (ratings) [57]. To provide a basis for establishing consensus, such situations have to be explained and visualized [33], [41]. In this context, diagnosis methods [56] can help to determine repair actions that propose changes to the current set of requirements (preferences) such that a recommendation can be identified. Such repairs are able to take into account the individual preferences of group members [55]. The potential of aggregation functions to foster consensus in group decision making is discussed in Salamo et al. [58]. Concepts to take into account consensus in group decision making are also presented in [57], [59], [60]. In scenarios such as software requirements engineering [61], there are often misconceptions regarding the evaluation/selection of a specific requirement. For example, there could be misconceptions regarding the assignment of a requirement to a software release. An explanation in such contexts indicates possible changes of requirements (assignments) that help to restore consistency. In group-based settings, such repair-related explanations help group members understand the constraints of other group members and decide in which way their own requirements should be adapted.
(In line with Jameson and Smyth [29], we interpret arguments as elementary parts of explanations.)
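The repair idea can be illustrated with a brute-force sketch that tests which single requirement could be relaxed to restore a non-empty recommendation set. Real diagnosis approaches [56] compute minimal conflict and diagnosis sets; this simplified variant, along with the items and requirements shown, is purely illustrative.

```python
# Sketch of a simple "why not" repair search: when the conjunction of group
# requirements filters out every item, test which single requirement could
# be dropped (relaxed) so that a recommendation exists again.

def matching_items(items, requirements):
    return [i for i in items if all(req(i) for req in requirements.values())]

def single_relaxations(items, requirements):
    """Names of requirements whose removal restores a non-empty result."""
    if matching_items(items, requirements):
        return []  # no repair needed
    repairs = []
    for name in requirements:
        reduced = {n: r for n, r in requirements.items() if n != name}
        if matching_items(items, reduced):
            repairs.append(name)
    return repairs

# Hypothetical cameras and group requirements.
items = [{"price": 700, "mpix": 23}, {"price": 450, "mpix": 18}]
requirements = {
    "max_price": lambda i: i["price"] <= 500,
    "min_mpix": lambda i: i["mpix"] >= 20,
}
print(single_relaxations(items, requirements))  # ['max_price', 'min_mpix']
```

Each returned requirement name corresponds to a repair proposal that can be verbalized, e.g., 'if you increase the maximum acceptable price, a solution can be identified'.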
User-generated Explanations:
User-generated explanations are defined by a group member (typically, the creator of a decision task) to explain, for example, why a specific alternative has been selected. The impact of user-generated explanations in constraint-based group recommendation scenarios was analyzed by Stettinger et al. [62]. The creator of a decision task (prioritization decisions in the context of software requirements engineering) had to explain the decision outcome verbally. In groups where such explanations were provided, this contributed to increased satisfaction with the final decision and an increased perceived degree of group decision support quality [62]. User-generated explanations are not limited to constraint-based recommendation; for example, crowdsourcing-based approaches build on the similar idea of collecting explanations directly from users.
Fairness Aspects in Groups: Fair recommendations in group settings can be characterized as recommendations without favoritism or discrimination towards specific group members. The perceived importance of fairness, depending on the underlying item domain, has been analyzed in [63]. An outcome of this study is that in high-involvement item domains (e.g., decisions regarding new cars, financial services, and apartments), the preferred preference aggregation strategies [64] differ from low-involvement item domains such as restaurants and movies. The latter are often the domains of repeated group decisions (e.g., the same group selects a restaurant for a dinner every three months).

TABLE VI: Explanation relevance of interest dimensions in utility-based recommendation (aggregated predictions). The most relevant dimension is marked with √.

Groups tend to apply strategies such as Least Misery (LMS) in high-involvement item domains, and to prefer
Average Voting (AVG) in low-involvement item domains. When recommending packages, the task is to recommend a set of items in such a way that individual group members perceive the recommendation as fair [65]. One interpretation of fairness stated in Serbos et al. [65] is that there are at least m items included in the package that a group member likes.

An approach to take into account fairness in repeated group decisions is presented by Quijano-Sanchez et al. [66], where rating predictions are adapted to achieve fairness in future recommendation settings. This adaptation also depends on the personality of a group member. For example, a group member with a strong personality who was treated less favorably last time will be immediately compensated in the upcoming group decision. A similar interpretation of fairness is introduced in Stettinger et al. [67], where fairness is also defined in the context of repeated group decisions, i.e., decisions that repeatedly take place within the same or stable groups (groups with a low fluctuation). Fairness in this context is achieved by introducing functions that systematically adapt preference weights, i.e., group members whose preferences were disregarded recently receive higher preference weights in upcoming decisions. For example, in the context of repeated decisions (made by the same group) regarding a restaurant for a dinner, the preferences of some group members are more often taken into account than the preferences of others. In such scenarios, the preference weights of individual group members can be adapted [67] (see Formulae 5–6).

Formula 6 provides a fairness estimate per user u_i in terms of the share of the number of supported preferences in relation to the number of group decisions. The lower the value, the less the preferences of a user (group member of group G) have been taken into account, and the lower the corresponding degree of fairness with regard to u_i.
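Formulae 5 and 6 can be sketched as follows; the group composition and all counts and weights below are illustrative assumptions.

```python
# Sketch of Formulae 5-6: users whose preferences were disregarded in past
# group decisions receive upgraded importance weights in upcoming decisions.

def fair(supported_preferences, group_decisions):
    # Formula 6: share of group decisions in which the user's
    # preferences were supported.
    return supported_preferences / group_decisions

def adapt_weights(w, fairness, user, group):
    # Formula 5: scale the user's dimension weights by
    # 1 + (group-average fairness - the user's own fairness).
    avg = sum(fairness[u] for u in group) / len(group)
    return {dim: weight * (1 + (avg - fairness[user]))
            for dim, weight in w[user].items()}

group = ["u1", "u2", "u3"]
# Assumed history of 4 group decisions: u1 was supported only once.
fairness = {"u1": fair(1, 4), "u2": fair(3, 4), "u3": fair(4, 4)}
w = {"u1": {"risk": 0.5, "profit": 0.3}}

# u1's fairness is below the group average, so u1's weights are upgraded.
print(adapt_weights(w, fairness, "u1", group))
```

A user whose fairness exceeds the group average would, symmetrically, receive downgraded weights, as in the adaptation example of Table VII.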
Formula 5 reflects an approach to increasing fairness in upcoming recommendation sessions. If the fairness (Formula 6) in previous sessions was lower than average, a corresponding upgrade of the user-specific importance weights (w) takes place for each dimension. For an example of adapted weights, see Table VII.

w'(u_i, dim_j) = w(u_i, dim_j) × ( 1 + ( Σ_{u ∈ G} fair(u) / |G| − fair(u_i) ) )    (5)

fair(u_i) = supportedpreferences(u_i) / groupdecisions    (6)

Visualization of Constraint-based Explanations for Groups:
An example of visualizing the importance of interest dimensions with regard to a final evaluation (utility) is given in Figure 5. Examples of interest dimensions when evaluating, for example, financial services, are risk, profit, and availability.

Fig. 5. Visualization of the importance of interest dimensions with regard to the overall item evaluation (the importance values are based on Table VI, where dim_1 = risk, dim_2 = profit, and dim_3 = availability).

If the degree of fairness of previous group decisions has to be made transparent to the group, for example, for explaining adaptations regarding the importance weights of individual group members, this can be achieved on the basis of a visualization as depicted in Figure 6. An example of a related verbal explanation is the following: 'the interest dimensions favored by user u have been given more consideration since u was at a disadvantage in previous decisions'.

Fig. 6. Visualizing the degree of fairness (Formula 6) in repeated group decisions (e.g., decisions on restaurant visits). In this example, the visualization indicates that user u was at a disadvantage in previous decisions.

V. CRITIQUING-BASED RECOMMENDATION
To assist users in constructing and refining preferences, critiquing-based recommender systems [19] determine recommendations based on the similarity between candidate and reference items. For example, in the domain of digital cameras, related explanations focus on item attributes such as price, resolution, and optical zoom. System-generated critiques (e.g., compound critiques [68]) help to explain the relationship between the currently shown reference item and candidate items. Such explanations have been found to help educate users and increase their trust in the underlying recommender system [69].

TABLE VII: An example of an adaptation of individual users' weights to take fairness into account. In this example, the importance (imp) weights of one user have been increased, the weights of a second user remain the same, and the weights of a third user have been decreased (the preferences of the latter user have been favored in previous decisions; a visualization is given in Figure 6).
Critiquing-based Explanations for Groups: User-defined critiques, i.e., critiques on the current reference item directly defined by the user, can be used for the generation of explanations for recommended items (see the example in Table VIII). In this context, support(attribute, t_i) (see Formula 7) indicates how often an item supports a user critique on the attribute. For example, item t_1 supports a critique on price three times since all the critiques on price are consistent with the price of t_1, i.e., support(price, t_1) = 1.0. However, support(weight, t_1) is only 0.33 since the weight of t_1 is 1.5, which is inconsistent with two related critiques.

support(attribute, t_i) = supportedcritiques(attribute, t_i) / critiques(attribute)    (7)

On the verbal level, an explanation for item t_1 could be 'the price of camera t_1 (299) is clearly within the limits specified by the group members. As expected, it has an exchangeable lens. It has a resolution (24) that satisfies the requirements of u_1 and u_2; however, u_3 has to accept minor drawbacks. Furthermore, the weight of the camera (1.5) is significantly higher than expected by u_1 and u_3'.

Such explanations can be provided if the preferences of group members are known. Otherwise, explanations have to be generated on the basis of aggregated models, where item properties are compared with the aggregated critiques defined in the group profile.

Visualization of Critiquing-based Explanations for Groups:
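The support values summarized in such visualizations (Formula 7) can be computed as in the following sketch; the concrete critique thresholds are assumptions, chosen only so that the example reproduces the support values discussed in the text (support(price, t_1) = 1.0, support(weight, t_1) = 0.33).

```python
# Sketch of Formula 7: the support of an attribute for an item is the share
# of the group's attribute-specific critiques that the item's value satisfies.
import operator

OPS = {"<=": operator.le, ">=": operator.ge}

def support(critiques, item_value):
    """critiques: list of (op, threshold) critiques of the group members
    on one attribute; item_value: the item's value for that attribute."""
    satisfied = sum(1 for op, thr in critiques if OPS[op](item_value, thr))
    return satisfied / len(critiques)

# Hypothetical critiques of three group members on price and weight.
price_critiques = [("<=", 600), ("<=", 500), ("<=", 400)]
weight_critiques = [("<=", 1.0), ("<=", 2.0), ("<=", 1.2)]

print(support(price_critiques, 299))   # all three price critiques satisfied
print(support(weight_critiques, 1.5))  # only one of three weight critiques
```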
An example of visualizing the support of different attribute-specific critiques is given in Table IX. The √ symbol denotes the fact that the user critique on an attribute of item t_i is supported by t_i.

VI. CONCLUSIONS AND RESEARCH ISSUES
In this paper, we provided an overview of explanations that help single users and groups to better understand item recommendations. As has been pointed out in pioneering work by Jameson and Smyth [29], explanations play a crucial role in group recommendation scenarios. We discussed possibilities of explaining recommendations in the context of the basic recommendation paradigms of collaborative filtering, content-based filtering, constraint-based, and critiquing-based recommendation, taking into account specific aspects of group recommendation scenarios. In order to support a more in-depth understanding of how explanations can be determined, we provided a couple of working examples of verbal explanations and corresponding visualizations.

Although extensively analyzed in the context of single-user recommendations (see, e.g., Tintarev [15]), the generation of explanations for groups entails a number of open research issues. Specifically, aspects of group dynamics have to be analyzed with regard to their role in generating explanations. For example, consensus, fairness, and privacy are major aspects; the related research question is how to define explanations that best help to achieve these goals. Some initial approaches exist to explain the application of aggregation functions in group recommendation contexts (see, e.g., Ntoutsi et al. [34]); however, a more in-depth integration of social choice theories into the generation of explanations has to be performed. This is also true on the algorithmic level, as in the context of group-based configuration [70]. In this context, the integration of information about personality and emotion into explanations has to be analyzed. Initial related work can be found, for example, in Quijano-Sanchez et al.
[10], where social factors in groups are taken into account to generate tactful explanations, i.e., explanations that avoid, for example, damaging friendships.

Mechanisms that help to increase the quality of group decision making processes have to be investigated [71]. For example, explanations could also be used to trigger intended behavior in group decision making, such as the exchange of decision-relevant information among group members [26]. Finally, explaining hybrid recommendations [28] and recommendations generated by matrix factorization (MF) approaches [72], [73] are issues for future research. Summarizing, explanations for groups is a highly relevant research area with many open issues for future work.

REFERENCES

[1] J. Herlocker, J. Konstan, and J. Riedl, "Explaining Collaborative Filtering Recommendations," in
ACM Conference on Computer Supported Cooperative Work. ACM, 2000, pp. 241–250.
[2] N. Tintarev and J. Masthoff, "Designing and Evaluating Explanations for Recommender Systems," in
Recommender Systems Handbook, 2011, pp. 479–510.

TABLE VIII: Critiques of group members (crit(u_1), crit(u_2), crit(u_3)) on the attributes price (≤), res (≥), and weight (≤) as a basis for generating explanations for item recommendations. Support is defined by the share of attribute-specific critiques supported by an item t_i; item values with support: price: t_1 = 299 (1.0), t_2 = 650 (0.66), t_3 = 1.200 (0.0); res: t_1 = 24 (0.66), t_2 = 25 (1.0), t_3 = 30 (1.0).

TABLE IX: Summarization of the support degree of user-specific critiques on item t_1.

user   price = 299   resolution = 24   weight = 1.5   exchangeable lens = y
u_1    √             √                 ×              √
u_2    √             √                 √              √
u_3    √             ×                 ×              ×

[3] H. Cramer, V. Evers, S. Ramlal, M. V. Someren, L. Rutledge, N. Stash, L. Aroyo, and B. Wielinga, "The Effects of Transparency on Trust in and Acceptance of a Content-based Art Recommender," User Modeling and User-Adapted Interaction, vol. 18, no. 5, pp. 455–496, 2008.
[4] N. Tintarev and J. Masthoff, "Evaluating the Effectiveness of Explanations for Recommender Systems,"
User Modeling and User-AdaptedInteraction , vol. 22, no. 4–5, pp. 399–439, 2012.[5] G. Friedrich and M. Zanker, “A Taxonomy for Generating Explanationsin Recommender Systems,”
AI Magazine, vol. 32, no. 3, pp. 90–98, 2011.
[6] M. Bilgic and R. Mooney, "Explaining Recommendations: Satisfaction vs. Promotion," in
ACM IUI 2005 Workshop Beyond Personalization ,San Diego, CA, USA, 2005, pp. 1–6.[7] N. Tintarev, J. O’Donovan, and A. Felfernig, “Human Interaction withArtificial Advice Givers,”
ACM Transactions on Interactive IntelligentSystems , vol. 6, no. 4, pp. 1–10, 2016.[8] L. Chen and F. Wang, “Explaining Recommendations Based on FeatureSentiments in Product Reviews,” in
ACM IUI 2017 . ACM, 2017, pp.17–28.[9] B. Lamche, U. Adig¨uzel, and W. W¨orndl, “Interactive Explanationsin Mobile Shopping Recommender Systems,” in , Foster City,Silicon Valley, California, USA, 2014, pp. 14–21.[10] L. Quijano-Sanchez, C. Sauer, J. Recio-Garc´ıa, and B. D´ıaz-Agudo,“Make it Personal: A Social Explanation System applied to GroupRecommendations,”
Expert Systems with Applications , vol. 76, pp. 36–48, 2017.[11] K. Verbert, D. Parra, P. Brusilovsky, and E. Duval, “Visualizing Recom-mendations to Support Exploration, Transparency and Controllability,”in
International Conference on Intelligent User Interfaces (IUI’13) , NewYork, NY, USA, 2013, pp. 351–362.[12] E. Gansner, Y. Hu, S. Kobourov, and C. Volinsky, “Putting Recom-mendations on the Map: Visualizing Clusters and Relations,” in
ACMConference on Recommender Systems , New York, USA, 2009, pp. 345–348.[13] F. Gedikli, D. Jannach, and M. Ge, “How should I Explain? A Compari-son of Different Explanation Types for Recommender Systems,”
HumanComputer Studies , vol. 72, no. 4, pp. 367–382, 2014.[14] I. Nunes and D. Jannach, “A Systematic Review and Taxonomy ofExplanations in Decision Support and Recommender Systems,”
UserModeling and User-Adapted Interaction (UMUAI) , 2017.[15] N. Tintarev,
Explaining Recommendations . Univ. of Aberdeen, 2009.[16] N. Tintarev and J. Masthoff, “Explaining Recommendations: Design andEvaluation,” in
Recommender Systems Handbook, 2nd Edition , F. Ricci,L. Rokach, and B. Shapira, Eds. Springer, 2015, pp. 353–382.[17] A. Jameson, M. Willemsen, A. Felfernig, M. de Gemmis, P. Lops,G. Semeraro, and L. Chen, “Human Decision Making and Recommender Systems,” in
Recommender Systems Handbook, 2nd Edition , F. Ricci,L. Rokach, and B. Shapira, Eds. Springer, 2015, pp. 611–648.[18] S. Gkika and G. Kekakos, “The Persuasive Role of Explanations inRecommender Systems,” in , 2014, pp. 59–68.[19] L. Chen and P. Pu, “Critiquing-based Recommenders: Survey andEmerging Trends,”
User Modeling and User-Adapted Interaction(UMUAI) , vol. 22, no. 1–2, pp. 125–150, 2012.[20] A. Felfernig, B. Gula, G. Leitner, M. Maier, R. Melcher, S. Schippel, andE. Teppan, “A Dominance Model for the Calculation of Decoy Productsin Recommendation Environments,” in
AISB Symposium on PersuasiveTechnologies , Aberdeen, Scotland, 2008, pp. 43–50.[21] A. Felfernig and R. Burke, “Constraint-based Recommender Systems:Technologies and Research Issues,” in
ACM International Conference onElectronic Commerce (ICEC08) , Innsbruck, Austria, 2008, pp. 17–26.[22] A. Felfernig, E. Teppan, and K. Isak, “Decoy Effects in Financial ServiceE-Sales Systems,” in
RecSys’11 Workshop on Human Decision Makingin Recommender Systems (Decisions@RecSys’11) , Chicago, IL, USA,2011, pp. 1–8.[23] A. Felfernig, B. Gula, and E. Teppan, “Knowledge-based RecommenderTechnologies for Marketing and Sales,”
Special issue of PersonalizationTechniques for Recommender Systems and Intelligent User Interfacesfor the International Journal of Pattern Recognition and ArtificialIntelligence (IJPRAI) , vol. 21, no. 2, pp. 1–22, 2006.[24] K. McCarthy, J. Reilly, L. McGinty, and B. Smyth, “Thinking Positively- Explanatory Feedback for Conversational Recommender Systems,” in
European Conference on Case-Based Reasoning (ECCBR-04) Explana-tion Workshop , 2004, pp. 1–10.[25] A. Felfernig, L. Hotz, C. Bagley, and J. Tiihonen,
Knowledge-basedConfiguration: From Research to Business Cases , 1st ed. Else-vier/Morgan Kaufmann Publishers, 2014.[26] M. Atas, A. Felfernig, M. Stettinger, and T. T. Tran, “Beyond ItemRecommendation: Using Recommendations to Stimulate KnowledgeSharing in Group Decisions,” in , Oxford, UK, 2017, pp. 368–377.[27] A. Felfernig, M. Atas, D. Helic, T. Tran, M. Stettinger, and R. Samer,“Algorithms for Group Recommendation,” in
Group RecommenderSystems: An Introduction , A. Felfernig, L. Boratto, M. Stettinger, andM. Tkalcic, Eds. Springer, 2018, pp. 35–54.[28] P. Kouki, J. Schaffer, J. Pujara, J. O’Donovan, and L. Getoor, “UserPreferences for Hybrid Explanations,” in , Como, Italy, 2017, pp. 84–88.[29] A. Jameson and B. Smyth, “Recommendation to Groups,” in
TheAdaptive Web , ser. Lecture Notes in Computer Science, P. Brusilovsky,A. Kobsa, and W. Nejdl, Eds., 2007, vol. 4321, pp. 596–627.[30] A. Jameson, S. Baldes, and T. Kleinbauer, “Two Methods for EnhancingMutual Awareness in a Group Recommender System,” in
ACM Intl. Working Conference on Advanced Visual Interfaces, Gallipoli, Italy, 2004, pp. 447–449.
[31] L. Ardissono, A. Goy, G. Petrone, M. Segnan, and P. Torasso, "I
NTRIGUE : Personalized Recommendation of Tourist Attractions forDesktop and Handset Devices,”
Applied Artificial Intelligence: SpecialIssue on Artificial Intelligence for Cultural Heritage and Digital Li-braries , vol. 17, no. 8–9, pp. 687–714, 2003.[32] Y. Chen, “Interface and Interaction Design for Group and SocialRecommender Systems,” in
ACM Conference on Recommender Systems(RecSys’11) , Chicago, IL, 2011, pp. 363–366.[33] A. Jameson, “More than the Sum of its Members: Challenges forGroup Recommender Systems,” in
International Working Conferenceon Advanced Visual Interfaces , 2004, pp. 48–54.[34] E. Ntoutsi, K. Stefanidis, K. Norvag, and H. Kriegel, “Fast GroupRecommendations by Applying User Clustering,” in
ER 2012 , ser.LNCS, vol. 7532, 2012, pp. 126–140.[35] J. Vig, S. Sen, and J. Riedl, “Tagsplanations: Explaining Recommenda-tions Using Tags,” in
ACM IUI 2009 . Sanibel Island, FL,USA: ACM,2009, pp. 47–56.[36] W. Lin, S. Alvarez, and C. Ruiz, “Efficient Adaptive-Support Asso-ciation Rule Mining for Recommender Systems,”
Data Mining andKnowledge Discovery , vol. 6, pp. 83–105, 2002.[37] S. Chang, F. Harper, L. He, and L. Terveen, “CrowdLens: Experiment-ing with Crowd-Powered Recommendation and Explanation,” in .AAAI, 2016, pp. 52–61.[38] T. Ulz, M. Schwarz, A. Felfernig, S. Haas, A. Shehadeh, S. Reiterer,and M. Stettinger, “Human Computation for Constraint-based Recom-menders,”
Journal of Intelligent Information Systems (JIIS) , vol. 49,no. 1, pp. 37–57, 2017.[39] P. Symeonidis, A. Nanopoulos, and Y. Manolopoulos, “Providing Jus-tifications in Recommender Systems,”
IEEE Transactions on Systems,Man, and Cybernetics , vol. 38, pp. 1262–1272, 2008.[40] M. O’Connor, D. Cosley, J. Konstan, and J. Riedl, “PolyLens: A Rec-ommender System for Groups of Users,” in , 2001, pp. 199–218.[41] N. Mahyar, W. Liu, S. Xiao, J. Browne, M. Yang, and S. Dow,“Consensus: Visualizing points of disagreement for multi-criteria collab-orative decision making,” in
ACM Conference on Computer SupportedCooperative Work and Social Computing . ACM, 2017, pp. 17–20.[42] I. Palomares, L. Martinez, and F. Herrera, “MENTOR: A GraphicalMonitoring Tool of Preferences Evolution in Large-Scale Group Deci-sion Making,”
Knowledge-Based Systems , vol. 58, pp. 66–74, 2014.[43] H. Lieberman, N. Dyke, and A. Vivacqua, “Let’s Browse: A Col-laborative Web Browsing Agent,” in , Los Angeles, CA, USA, 1999, pp. 65–68.[44] B. Miller, I. Albert, S. Lam, J. Konstan, and J. Riedl, “MovieLensUnplugged: Experiences with a Recommender System on Four MobileDevices,” in
People and Computers XVII — Designing for Society ,E. O’Neill, P. Palanque, and P. Johnson, Eds. London: Springer, 2004,pp. 263–279.[45] K. Muhammad, A. Lawlor, and B. Smyth, “A Live-User Study ofOpinionated Explanations for Recommender Systems,” in . ACM, 2016,pp. 256–260.[46] R. Dong, M. Schaal, M. O’Mahony, and B. Smyth, “Topic Extractionfrom Online Reviews for Classification and Recommendation,” in .AAAI, 2013, pp. 1310––1316.[47] E. Knutov, P. DeBra, and M. Pechenizkiy, “AH 12 Years Later: A Com-prehensive Survey of Adaptive Hypermedia Methods and Techniques,”
New Review of Hypermedia and Multimedia , vol. 15, no. 1, pp. 5–38,2009.[48] G. Friedrich, “Elimination of Spurious Explanations,” in , 2004, pp. 813–817.[49] B. Buchanan and E. Shortliffe,
Rule-Based Expert Systems: The MYCINExperiments of the Stanford Heuristic Programming Project . Addison-Wesley, 1984.[50] A. Felfernig, M. Schubert, G. Friedrich, M. Mandl, M. Mairitsch, andE. Teppan, “Plausible Repairs for Inconsistent Requirements,” in ,Pasadena, CA, 2009, pp. 791–796.[51] D. Winterfeldt and W. Edwards,
Decision Analysis and BehavioralResearch . Cambridge University Press, 1986. [52] G. Carenini and J. Moore, “Generating and evaluating Evaluative Argu-ments,”
Artificial Intelligence , vol. 170, no. 11, pp. 925 – 952, 2006.[53] A. Felfernig, B. Gula, G. Leitner, M. Maier, R. Melcher, and E. Teppan,“Persuasion in Knowledge-based Recommendation,” in , ser. LNCS. Springer, 2008, pp. 71–82.[54] J. Teze, S. Gottifredi, A. Garcia, and G. Simari, “ImprovingArgumentation-based Recommender Systems through Context-adaptableSelection Criteria,”
Journal of Economic Perspectives , vol. 42, no. 21,pp. 8243–8258, 2015.[55] A. Felfernig, M. Atas, T. T. Tran, and M. Stettinger, “Towards Group-based Configuration,” in
International Workshop on Configuration 2016(ConfWS’16) , 2016, pp. 69–72.[56] A. Felfernig, M. Schubert, and C. Zehentner, “An Efficient DiagnosisAlgorithm for Inconsistent Constraint Sets,”
Artificial Intelligence forEngineering Design, Analysis, and Manufacturing (AIEDAM) , vol. 26,no. 1, pp. 53–62, 2012.[57] S. Amer-Yahia, S. Roy, A. Chawla, G. Das, and C. Yu, “Group Rec-ommendation: Semantics and Efficiency,” in
VLDB’09 , Lyon, France,2009, pp. 754–765.[58] M. Salamo, K. McCarthy, and B. Smyth, “Generating Recommendationsfor Consensus Negotiation in Group Personalization Services,”
Personaland Ubiquitous Computing , vol. 16, no. 5, pp. 597–610, 2012.[59] J. Castro, J. Lu, G. Zhang, Y. Dong, and L. Mart´ınez, “OpinionDynamics-Based Group Recommender Systems,”
IEEE Transactions onSystems, Man, and Cybernetics: Systems , vol. 99, pp. 1–13, 2017.[60] J. Castro, F. Quesada, I. Palomares, and L. Mart´ınez, “A Consensus-Driven Group Recommender System,”
Intelligent Systems , vol. 30, no. 8,pp. 887–906, 2015.[61] G. Ninaus, A. Felfernig, M. Stettinger, S. Reiterer, G. Leitner,L. Weninger, and W. Schanil, “I
NTELLI R EQ : Intelligent Techniquesfor Software Requirements Engineering,” in Prestigious Applications ofIntelligent Systems Conference (PAIS) , 2014, pp. 1161–1166.[62] M. Stettinger, A. Felfernig, G. Leitner, and S. Reiterer, “CounteractingAnchoring Effects in Group Decision Making,” in , ser.LNCS, vol. 9146, Dublin, Ireland, 2015, pp. 118–130.[63] A. Felfernig, M. Atas, T. T. Tran, M. Stettinger, and S. Polat-Erdeniz,“An Analysis of Group Recommendation Heuristics for High- andLow-Involvement Items,” in
International Conference on Industrial,Engineering and Other Applications of Applied Intelligent Systems(IEA/AIE 2017) , Arras, France, 2017, pp. 335–344.[64] J. Masthoff, “Group Recommender Systems: Combining IndividualModels,”
Recommender Systems Handbook , pp. 677–702, 2011.[65] D. Serbos, S. Qi, N. Mamoulis, E. Pitoura, and P. Tsaparas, “Fairness inPackage-to-Group Recommendations,” in
WWW’17 . ACM, 2017, pp.371–379.[66] L. Quijano-Sanchez, J. Recio-Garc´ıa, B. D´ıaz-Agudo, and G. Jim´enez-D´ıaz, “Social Factors in Group Recommender Systems,”
ACM Trans.on Intell. Sys. and Tech. , vol. 4, no. 1, pp. 8:1–8:30, 2006.[67] M. Stettinger, “C
HOICLA : Towards Domain-independent Decision Sup-port for Groups of Users,” in , Foster City, California, USA, 2014, pp. 425–428.[68] K. McCarthy, J. Reilly, L. McGinty, and B. Smyth, “ On the DynamicGeneration of Compound Critiques in Conversational RecommenderSystems,” in
International Conference on Adaptive Hypermedia andAdaptive Web-Based Systems . Springer, 2004, pp. 176–184.[69] P. Pu and L. Chen, “Trust-inspiring Explanation Interfaces for Recom-mender Systems,”
Knowledge-based Systems , vol. 20, no. 6, pp. 542–556, 2007.[70] A. Felfernig, M. Stettinger, G. Ninaus, M. Jeran, S. Reiterer, A. Falkner,G. Leitner, and J. Tiihonen, “Towards open configuration,” in , Novi Sad, Serbia, 2014, pp. 89–94.[71] J. Konstan and J. Riedl, “Recommender Systems: From Algorithmsto User Experience,”
User Modeling and User-Adapted Interaction (UMUAI), vol. 22, no. 1, pp. 101–123, 2012.
[72] B. Abdollahi and O. Nasraoui, "Using Explainability for Constrained Matrix Factorization," in , Como, Italy, 2017, pp. 79–83.
[73] B. Rastegarpanah, M. Crovella, and K. Gummadi, "Exploring Explanations for Matrix Factorization Recommender Systems," FatRec Workshop, in 11th ACM Conference on Recommender Systems