div2vec: Diversity-Emphasized Node Embedding
Jisu Jeong, Jeong-Min Yun, Hongi Keam, Young-Jin Park, Zimin Park, Junki Cho
Jisu Jeong
Clova AI Research, NAVER Corp., Seongnam, South Korea. [email protected]
Jeong-Min Yun
WATCHA Inc., Seoul, South Korea. [email protected]
Hongi Keam
WATCHA Inc., Seoul, South Korea. [email protected]
Young-Jin Park
Naver R&D Center, NAVER Corp., Seoul, South Korea. [email protected]
Zimin Park
WATCHA Inc., Seoul, South Korea. [email protected]
Junki Cho
WATCHA Inc., Seoul, South Korea. [email protected]
ABSTRACT
Recently, interest in graph representation learning has been rapidly increasing in recommender systems. However, most existing studies have focused on improving accuracy, while in real-world systems the diversity of recommendations should be considered as well to improve user experiences. In this paper, we propose the diversity-emphasized node embedding div2vec, a random walk-based unsupervised learning method like DeepWalk and node2vec. When generating random walks, DeepWalk and node2vec sample nodes of higher degree more often and nodes of lower degree less often. In contrast, div2vec samples nodes with probability inversely proportional to their degree, so that every node can belong evenly to the collection of random walks. This strategy improves the diversity of recommendation models. Offline experiments on the MovieLens dataset showed that our new method improves recommendation performance in terms of both accuracy and diversity. Moreover, we evaluated the proposed model on two real-world services, WATCHA and LINE Wallet Coupon, and observed that div2vec improves recommendation quality by diversifying the system.

CCS CONCEPTS
• Computing methodologies → Learning latent representations; Machine learning algorithms; Knowledge representation and reasoning; Neural networks.

KEYWORDS
graph representation learning, node embedding, diversity, recommender system, link prediction, live test
1 INTRODUCTION
Most recommender system studies have focused on finding users' immediate demands; they try to build models that maximize the click-through rate (CTR). The learned system suggests high-ranked items that users are likely to click in a myopic sense [6, 9, 30, 32]. Such recommendation strategies have successfully replaced simple popularity-based or handmade lists, and have thus been widely adopted on many online platforms including Spotify [15], Netflix [18], and so on.

Proceedings of the ImpactRS Workshop at ACM RecSys '20, September 25, 2020, Virtual Event, Brazil. Copyright (c) 2020 for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).

However, the previous approaches have a potentially severe drawback: a lack of diversity. For example, consider a user who just watched Iron Man. Since a majority of people tend to watch other Marvel Cinematic Universe (MCU) films like Iron Man 2, Thor, and Marvel's The Avengers after watching Iron Man, the system would recommend such MCU films based on historical log data. While this approach may lead to CTR maximization, 1) can we say that users are satisfied with these apparent results? Or, 2) would a wider variety of recommendations achieve a better user experience?

Recently, a method that addresses the first question was presented by Spotify [2]. This work categorized those who listen to very similar songs as specialists and those who listen to different sets of entities as generalists. It observed that generalists are much more satisfied than specialists based on long-term user metrics (i.e., conversion to subscriptions and retention on the platform). Thus, even if some users are satisfied with the evident recommendations (clicked or played), this satisfaction does not imply that the users continue to use the platform.

To answer the second question, we propose the diversity-emphasized node embedding div2vec. Recently, the number of studies on graph structure [8, 10, 17, 24, 26, 28] has been increasing.
Unfortunately, most of those studies have focused only on accuracy. DeepWalk and node2vec are the first and most famous node embedding methods [8, 24]. DeepWalk, node2vec, and our method div2vec first generate random walks and then use the Skip-gram model [20] to compute embedding vectors of all nodes. When generating random walks, these methods choose nodes of high degree more often. This makes sense because if a node had many neighbors in the past, it will have many neighbors in the future, too. However, it may be an obstacle for personalization. Using our new method, all nodes are evenly distributed in the collection of random walks. Roughly speaking, the key idea is to choose a node with weight 1/d, where d is the degree of the node. We also propose a variant of div2vec, which we call rooted div2vec, obtained by changing the weight to 1/√d in order to balance accuracy and diversity. To the best of our knowledge, our approach is the first node embedding method focusing on diversity. Details are in Section 3.

We evaluate our new methods on a benchmark and two real-world datasets. As the benchmark, we verify the offline metrics on the MovieLens dataset. As a result, div2vec obtained higher scores in the diversity metrics, such as coverage, entropy-diversity, and average intra-list similarity, and lower scores in the accuracy metric, the AUC score, than DeepWalk and node2vec. Furthermore, its variant rooted div2vec had the highest AUC score, and the diversity scores of rooted div2vec are the best or second-best.

We also find that increasing diversity actually improves online performance. We test on two different live services, WATCHA and LINE Wallet Coupon. Screenshots of the services are shown in Figure 1.

Figure 1: Screenshots of WATCHA and LINE Wallet Coupon.

WATCHA is one of the famous OTT streaming services in South Korea. Like Netflix, users can watch movies and TV series using WATCHA.
LINE is the most popular messenger in Japan, and the LINE Wallet Coupon service provides various coupons, such as pizza, coffee, and shampoo coupons. In these two different kinds of recommender systems, we used our diversity-emphasized node embedding and succeeded in enhancing online performance. The biggest contribution of our work is to show that users in the real world prefer diverse and personalized recommendations.

The structure of the paper is as follows. In Section 2, we review random walk-based node embedding methods and studies on diversity problems. The proposed method is described in Section 3. Section 4 and Section 5 report the results of our experiments on offline tests and online tests, respectively. Section 6 concludes our research.

2 RELATED WORK
The famous word2vec method transforms words into embedding vectors such that similar words have similar embeddings. It uses a language model, called Skip-gram [20], that maximizes the co-occurrence probability among the words near the target word. Inspired by word2vec, Perozzi et al. [24] introduced DeepWalk, which transforms nodes in a graph into embedding vectors. A walk is a sequence of nodes in a graph such that two consecutive nodes are adjacent. A random walk is a walk such that the next node in the walk is chosen randomly from the neighbors of the current node. DeepWalk first samples a collection of random walks from the input graph, where each node in a random walk is chosen uniformly at random. Once a collection of random walks is generated, we treat nodes and random walks as words and sentences, respectively. Then, by applying the word2vec method, we can obtain an embedding vector for each node.

node2vec [8] is a generalization of DeepWalk. When the nodes in random walks are chosen, node2vec uses two parameters p and q. Suppose we have an incomplete random walk v_1, v_2, ..., v_i and we will choose one node in the neighborhood N(v_i) of v_i to be v_{i+1}. For x in N(v_i), we set the weight w(v_i, x) as follows:

    w(v_i, x) = 1/p  if x = v_{i-1},
                1    if x is adjacent to v_{i-1},
                1/q  otherwise.

Note that if the graph is bipartite, the second case does not appear. node2vec chooses v_{i+1} at random with the weight w(v_i, x).

The main advantage of graph representation learning and graph neural networks is that these models can access both local and higher-order neighborhood information. However, since the number of edges is usually very large, they may be inefficient. Random walk-based methods solve this problem: instead of considering all nodes and all edges, they consider only the nodes in the collection of random walks. Therefore, the way random walks are generated is important, and it affects performance.

The term "filter bubble" refers to a phenomenon in which the recommender system blocks various information and filters only information similar to the user's taste. The studies in [3, 21, 22] show the existence of the filter bubble in recommender systems. Some research [1, 27] claims that diversity is one of the essential components of a recommender system.

Some studies show that diversity increases user satisfaction. Spotify, one of the best music streaming services, observed that diverse consumption behaviors are highly associated with long-term metrics like conversion and retention [2, 13]. Also, Liu et al. [14] improve users' preference by using neural graph filtering, which learns diverse fashion collocation.

One may think that if a recommender system gains diversity, then it loses accuracy. However, the following research succeeds in improving both. Adomavicius and Kwon [1] applied a ranking technique to original collaborative filtering in order to increase diversity without decreasing accuracy. Zheng et al. [31] proposed a Deep Q-Learning based reinforcement learning framework for news recommendation. Their model improves both click-through rate and intra-list similarity.
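For concreteness, the biased transition rule of node2vec described above can be sketched as follows. This is a minimal illustration, not the authors' implementation; the toy graph and the p, q values are placeholders.

```python
import random

# Toy undirected graph as an adjacency map (placeholder data).
GRAPH = {
    "a": {"b", "c"},
    "b": {"a", "c", "d"},
    "c": {"a", "b"},
    "d": {"b"},
}

def node2vec_step(prev, cur, p, q):
    """Sample the next node given the previous and current node of a walk."""
    neighbors = sorted(GRAPH[cur])
    weights = []
    for x in neighbors:
        if x == prev:            # w = 1/p: return to the previous node
            weights.append(1.0 / p)
        elif x in GRAPH[prev]:   # w = 1: x is adjacent to the previous node
            weights.append(1.0)
        else:                    # w = 1/q: move farther from the previous node
            weights.append(1.0 / q)
    return random.choices(neighbors, weights=weights, k=1)[0]

def node2vec_walk(start, length, p=1.0, q=1.0):
    """Generate one random walk of the given length starting at `start`."""
    walk = [start, random.choice(sorted(GRAPH[start]))]
    while len(walk) < length:
        walk.append(node2vec_step(walk[-2], walk[-1], p, q))
    return walk
```

Note that with p = q = 1 every neighbor receives weight 1, and the walk reduces to DeepWalk's uniform sampling.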
3 PROPOSED METHOD
In the framework of DeepWalk and node2vec, the model first generates a collection of random walks and then runs the famous word2vec algorithm to obtain embedding vectors. In this setting, nodes of high degree are contained in the collection of random walks more often than nodes of low degree because, roughly speaking, if a node v has d neighbors, then there are d chances for v to belong to the collection of random walks. Figure 2a and Figure 2b represent this phenomenon. The x-axis shows the nodes sorted by degree, and the blue line shows the degree of each node; the blue line is always increasing, so nodes of higher degree are on the right side of each figure. Orange bars show the frequencies of the nodes in the collection of random walks. It is easy to observe that nodes of high degree appear far more often than nodes of low degree.

Figure 2: The x-axis denotes the nodes (sorted by degree), and the blue line denotes the degree of each node. The y-axis denotes the frequency of the nodes in the collection of random walks. (a) DeepWalk, movieId; (b) DeepWalk, userId; (c) rooted div2vec, movieId; (d) rooted div2vec, userId; (e) div2vec, movieId; (f) div2vec, userId.

Since the collection of random walks is the training set of the skip-gram model, DeepWalk and node2vec might be trained with a bias toward nodes of high degree. This may not be a problem for tasks focused on accuracy. For example, in link prediction, high-degree nodes in the original graph might have a higher probability of being linked with other nodes than low-degree nodes; in terms of movies, if a movie is popular, then many people will like it. However, it is a problem when we want to focus on personalization and diversity. Unpopular movies might not be trained enough, so they are not well-represented.
Thus, even if a person actually prefers some unpopular movie to some popular movie, the recommender system tends to recommend the popular movie to be safe.

Motivated by Figure 3, which shows the difference between reality and equity, we decided to weight the next candidate nodes inversely by their degree. The main idea is 'low degree, choose more.' We propose a simple but creative method, formally described in the next subsection, which yields Figure 2e and Figure 2f. Compared to Figure 2a and Figure 2b, the nodes in Figure 2e and Figure 2f are evenly distributed regardless of their degree.

Figure 3: There are three people of different heights. The concept of equality is to give the same number of boxes to all. However, in reality, the rich get richer and the poor get poorer. From the perspective of equity, the short person gets more boxes than the tall person.

Now, we introduce the diversity-emphasized node embedding method. Similarly to DeepWalk and node2vec, we first produce many random walks and then train a skip-gram model. We apply an easy but bright idea when generating random walks so that our model can capture the diversity of the nodes in their embedding vectors. Suppose a node v is the last node in an incomplete random walk and we are going to choose the next node among the neighbors of v.

• DeepWalk picks the next node in N(v) uniformly at random.
• In node2vec, if w is the node added to the random walk just before v, then there are three types of probability, depending on whether a node u ∈ N(v) is adjacent to w or not, or u = w.
• Our method chooses the next node according to the degrees of the neighbors.

Formally, our method chooses the next node u ∈ N(v) with probability

    f(deg(u)) / Σ_{w ∈ N(v)} f(deg(w))

for some function f. For example, when f(x) = 1/x, if v has two neighbors y and z whose degrees are d_y and d_z with d_y < d_z, then y is chosen with probability (1/d_y) / (1/d_y + 1/d_z) and z is chosen with probability (1/d_z) / (1/d_y + 1/d_z). That is, since the degree of y is smaller than the degree of z, the probability that y is chosen is larger than the probability that z is chosen. In Section 4, we set f to the inverse of the identity function, f(x) = 1/x, and the inverse of the square root function, f(x) = 1/√x. We call this method div2vec when f(x) = 1/x and rooted div2vec when f(x) = 1/√x.

Intuitively, DeepWalk chooses the next node without considering the past or future nodes, node2vec chooses the next node according to the past node, and div2vec chooses the next node with respect to the future node. Note that it is possible to combine node2vec and div2vec by first dividing the neighbors into the three types and then considering their degrees. Since this introduces too many hyperparameters to control, we leave it to future work.

Figure 2 shows the results of generating random walks with the several methods; details of the dataset are in Subsection 4.1. If we use DeepWalk, then Figure 2a and Figure 2b show that high-degree nodes appear far more often than low-degree nodes. The problem is that, if the result is too skewed, the skip-gram model might not train some parts of the data well. For example, a popular movie will appear many times in the collection of random walks, so the model may overfit to the popular movie. On the other hand, an unpopular movie will appear only a few times in the collection of random walks, so the model may underfit to the unpopular movie.

This problem is solved by our method. Using div2vec, the nodes appear evenly in the collection of random walks: Figure 2e and Figure 2f show that the nodes are chosen equally regardless of their degree. Normally, popular movies are consumed more than unpopular movies, so div2vec may decrease accuracy. Our experiments show that even though we emphasize diversity, the accuracy decreases very little. Furthermore, we suggest the variant rooted div2vec; Figure 2c and Figure 2d can be seen as a combination of DeepWalk and div2vec.
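The degree-weighted sampling described above can be sketched as follows. This is a minimal illustration under a toy-graph assumption; `GRAPH` is placeholder data, not the paper's code.

```python
import math
import random

# Toy undirected graph as an adjacency map (placeholder data).
GRAPH = {
    "a": {"b"},
    "b": {"a", "c", "d"},
    "c": {"b", "d"},
    "d": {"b", "c"},
}

def transition_probs(cur, f):
    """Probability of each neighbor u of `cur` under the rule
    P(u) = f(deg(u)) / sum over w in N(cur) of f(deg(w))."""
    neighbors = sorted(GRAPH[cur])
    weights = [f(len(GRAPH[u])) for u in neighbors]
    total = sum(weights)
    return {u: w / total for u, w in zip(neighbors, weights)}

def next_node(cur, f):
    """Sample the next node of the walk according to the probabilities above."""
    probs = transition_probs(cur, f)
    return random.choices(list(probs), weights=list(probs.values()), k=1)[0]

inverse = lambda d: 1.0 / d                    # div2vec: f(x) = 1/x
rooted_inverse = lambda d: 1.0 / math.sqrt(d)  # rooted div2vec: f(x) = 1/sqrt(x)

# From node "b", neighbor "a" has degree 1 while "c" and "d" have degree 2,
# so under f(x) = 1/x the low-degree neighbor "a" gets probability
# (1/1) / (1/1 + 1/2 + 1/2) = 0.5, twice that of "c" or "d".
```

With `rooted_inverse` the same preference for low-degree neighbors appears, but less aggressively, which matches the paper's accuracy/diversity trade-off.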
In Subsection 4.4, our experiments show that, compared to DeepWalk and node2vec, rooted div2vec records better scores on every metric.

4 OFFLINE TEST
4.1 Dataset
We used the famous MovieLens dataset [11] for an offline test. We used MovieLens-25M because it is the newest version, and we used only the most recent 5 years of the dataset. Ratings are given on a 10-step scale, but we need binary data, meaning watched/not-watched or satisfied/unsatisfied, in order to train a model and compute the AUC score. We set ratings of more than 4 stars to be positive and less than 3 stars to be negative. To reduce noise, we removed the movies having fewer than 10 records and the users having fewer than 10 or more than 1000 records. In the end, there are 2,009,593 records with 16,002 users and 5,298 movies. For the test set, 20% of the data are used.

To compute intra-list similarity, which is described in Subsection 4.3, we use the 'Tag Genome' [29] from MovieLens-25M. It contains data in 'movieId, tagId, relevance' format for every pair of movies and tags. Relevance values are real numbers between 0 and 1, so the data can be treated as a dense matrix in which each row represents one movie as a vector of tag information.

4.2 Link prediction model
In movie recommender systems, a model recommends a list of movies to each user. In other words, a model needs to find out which movies will be connected with an individual user; that is, our task is link prediction. However, the methods we have discussed so far compute only node embeddings: we have an embedding vector for movies and users, but not for their interactions. Grover and Leskovec [8] introduced four operators to obtain edge embeddings from node embeddings, as follows. Let u and v be two nodes and f(u) and f(v) be their embedding vectors.

(1) Average: (f(u) + f(v)) / 2
(2) Hadamard: f(u) * f(v) (element-wise product)
(3) Weighted-L1: |f(u) - f(v)|
(4) Weighted-L2: |f(u) - f(v)|^2

For each edge, we obtain a 64-dimensional vector from the graph induced by positive edges and a 64-dimensional vector from the graph induced by negative edges, and then we concatenate the positive edge embedding vector and the negative edge embedding vector to represent the edge embedding vector. To avoid the prediction model disrupting the comparison, we use a simple deep neural network with one hidden layer of size 128.
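The four edge-embedding operators can be sketched as follows. This is a minimal sketch; the function names and the concatenation helper are our own labels, not the paper's code.

```python
def edge_embedding(fu, fv, op):
    """Combine node embeddings f(u) and f(v) into one edge embedding."""
    pairs = list(zip(fu, fv))
    if op == "average":
        return [(a + b) / 2 for a, b in pairs]   # (1) Average
    if op == "hadamard":
        return [a * b for a, b in pairs]         # (2) element-wise product
    if op == "weighted_l1":
        return [abs(a - b) for a, b in pairs]    # (3) |f(u) - f(v)|
    if op == "weighted_l2":
        return [(a - b) ** 2 for a, b in pairs]  # (4) |f(u) - f(v)|^2
    raise ValueError("unknown operator: " + op)

def concat_edge_embedding(pos_u, pos_v, neg_u, neg_v, op):
    """Concatenate the positive-graph and negative-graph edge embeddings,
    mirroring the concatenation step described in the text."""
    return edge_embedding(pos_u, pos_v, op) + edge_embedding(neg_u, neg_v, op)
```

With 64-dimensional node embeddings from each of the two induced graphs, the concatenated edge embedding is 128-dimensional, matching the hidden layer size mentioned above.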
4.3 Metrics
For each embedding and each operator, we compute four metrics: one for accuracy and the others for diversity. A larger score means better performance.

[Table 1 consists of four sub-tables, one per operator. Each sub-table lists, for the methods DeepWalk, n2v-(1,2), n2v-(2,1), div2vec, and rooted div2vec, the metrics AUC, CO(1), ED(1), CO(10), ED(10), ILS(10), CO(50), ED(50), and ILS(50); the numeric entries are omitted here.]
(a) The results with the operator Weighted-L1. (b) The results with the operator Weighted-L2. (c) The results with the operator Hadamard. (d) The results with the operator Average.
Table 1: The results of the offline test.

AUC SCORE
The AUC score is the area under the Receiver Operating Characteristic curve, which plots the True Positive Rate (TPR) against the False Positive Rate (FPR) at various thresholds. TPR and FPR are defined as follows:

    TPR = (the number of true positives) / ((the number of true positives) + (the number of false negatives)),

    FPR = (the number of false positives) / ((the number of false positives) + (the number of true negatives)).

The AUC score ranges from 0 to 1; the closer it is to 1, the closer the model is to a perfect prediction. The AUC score is a useful evaluation metric for comparing multiple prediction models because it is scale-invariant and classification-threshold-invariant.

COVERAGE
Coverage measures how many items appear in the recommended results. Formally, we define

    coverage(M) = | ∪_u R_{M,k}(u) |,

where M is a model and R_{M,k}(u) is the set of top-k items recommended to a user u by M. Many papers [1, 7, 12, 16] discuss the importance of coverage. If the coverage of a model is large, then the model recommends a broad range of items, which implicitly means that different users can receive different items. Furthermore, if the accuracy is also competitive, then we may say that the model is good at personalization.

ENTROPY-DIVERSITY
Adomavicius and Kwon [1] introduced the entropy-based diversity measure Entropy-Diversity. Let U be the set of all users, and let rec_{M,k}(i) be the number of users u such that i ∈ R_{M,k}(u), for a model M, an integer k, and an item i. Then

    ENTROPY-DIVERSITY(M) = − Σ_i ( rec_{M,k}(i) / (k × |U|) ) ln( rec_{M,k}(i) / (k × |U|) ).

Note that if ENTROPY-DIVERSITY(M1) < ENTROPY-DIVERSITY(M2), then M2 recommends more diverse items than M1. Here is an example. For an item set I = {i1, i2, ..., i9} and a user set U = {user1, user2, user3}, suppose a model M1 gives R_{M1,3}(u) = {i1, i2, i3} for every user u, and a model M2 gives R_{M2,3}(user1) = {i1, i2, i3}, R_{M2,3}(user2) = {i4, i5, i6}, and R_{M2,3}(user3) = {i7, i8, i9}. Then

    ENTROPY-DIVERSITY(M1) = −(1/3) ln(1/3) × 3 = ln 3, and
    ENTROPY-DIVERSITY(M2) = −(1/9) ln(1/9) × 9 = ln 9.

AVERAGE INTRA-LIST SIMILARITY
From the recommendation model, every user receives a list of items.
Intra-List Similarity (ILS) measures how similar or dissimilar the items in the list are. Formally,

    ILS(L) = Σ_{i ∈ L} Σ_{j ∈ L, i < j} s(i, j) / ( |L| (|L| − 1) / 2 ),

where L is the recommended item list and s(i, j) is the similarity measure between the tag-genome vectors of i and j, which are given by the MovieLens dataset [11, 29]. We set s(i, j) = (v_i · v_j) / (||v_i|| · ||v_j||), where v_i and v_j are the corresponding tag-genome vectors of i and j. By definition, if the value is small, then the items in the list are dissimilar; otherwise, they are similar. Note that Bradley and Smyth [4] and Meymandpour and Davis [19] use the same definition in terms of 'diversity'. For every user u, we compute ILS(R_{M,k}(u)); their average, which we call Average Intra-List Similarity, measures how diverse a model is.

Table 2: Percentage of improvements (%) of div2vec over node2vec.

Week            | clicks | plays | watch time
the first week  | 66.19  | 62.07 | 3.58
the second week | 39.69  | 28.52 | 4.19

4.4 Offline test
Table 1 summarizes the results of our offline experiments on the MovieLens dataset. We ran experiments under various conditions:

• five methods: DeepWalk [24], node2vec [8] with two different hyperparameter settings, div2vec, and its variant rooted div2vec
• four operators: Weighted-L1, Weighted-L2, Hadamard, Average
• four metrics: AUC score, coverage, entropy-diversity, and average intra-list similarity
• three sizes of recommendation lists: 1, 10, 50

AUC means the AUC score. CO(k), ED(k), and ILS(k) denote the coverage, entropy-diversity, and average intra-list similarity of the top-k recommended items, respectively. n2v-(p,q) is node2vec with hyperparameters p and q.

Our proposed methods div2vec and rooted div2vec record the highest scores on all metrics in Table 1a and Table 1b. In Table 1c and Table 1d, the average intra-list similarity is not the best but the second best, with tiny gaps. Overall, it is easy to see that div2vec and rooted div2vec obtained better scores than DeepWalk and node2vec on the diversity metrics. Furthermore, rooted div2vec obtained the best scores in the accuracy metric.
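The diversity metrics defined in this section can be sketched as follows. This is a minimal illustration; the dictionary-based input format is our own assumption, not the authors' evaluation code.

```python
import math

def coverage(rec_lists):
    """coverage(M): number of distinct items over all users' top-k lists.
    `rec_lists` maps each user to their recommended item list."""
    return len(set().union(*map(set, rec_lists.values())))

def entropy_diversity(rec_lists, k):
    """ENTROPY-DIVERSITY(M) = -sum_i p_i ln p_i with p_i = rec(i) / (k * |U|)."""
    counts = {}
    for items in rec_lists.values():
        for item in items:
            counts[item] = counts.get(item, 0) + 1
    total = k * len(rec_lists)
    return -sum((c / total) * math.log(c / total) for c in counts.values())

def intra_list_similarity(items, vectors):
    """ILS(L): average pairwise cosine similarity of tag-genome vectors."""
    def cos(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        return dot / (math.sqrt(sum(x * x for x in a)) *
                      math.sqrt(sum(y * y for y in b)))
    pairs = [(i, j) for a, i in enumerate(items) for j in items[a + 1:]]
    return sum(cos(vectors[i], vectors[j]) for i, j in pairs) / len(pairs)

# Reproducing the entropy-diversity example from the text: M1 recommends the
# same three items to all three users (ln 3), M2 recommends disjoint lists (ln 9).
m1 = {u: ["i1", "i2", "i3"] for u in ("user1", "user2", "user3")}
m2 = {"user1": ["i1", "i2", "i3"],
      "user2": ["i4", "i5", "i6"],
      "user3": ["i7", "i8", "i9"]}
```

Running the example reproduces coverage 3 vs. 9 and entropy-diversity ln 3 vs. ln 9 for M1 and M2, consistent with the worked example above.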
Thus, we can conclude that our proposed methods increase the diversity of recommender systems.

5 ONLINE TEST
5.1 WATCHA
In the previous experiments, we verified that our methods, div2vec and rooted div2vec, clearly increase the diversity of recommended results. The remaining job is to show that div2vec actually increases user satisfaction in real-world recommender systems. To do so, we conducted an A/B test in the commercial video streaming service WATCHA, and measured and compared various user activity statistics related to user satisfaction.

We collected four months of watch-complete logs starting from January 2020, where watch-complete means a user completing a video. We filtered out users who had no watch-complete logs in the last few days, as well as extreme-case users (too many or too few logs), resulting in 21,620 users. Two methods, node2vec and div2vec, were trained with these filtered logs. For node2vec, we set the parameters p and q to equal values.

The A/B test was conducted for two weeks, where the 21,620 users were randomly and evenly partitioned into two groups, and each group received either node2vec or div2vec recommendation results. In more detail, WATCHA has a list-wise recommendation home screen whose lists consist of several videos, and our list was inserted into the fifth row. To make the list, we sorted all available videos by final score and picked the top k of them (k varies with the device), and we also applied random shuffling of the top k videos to avoid always showing the same recommendation.

We compare clicks and plays of the node2vec and div2vec lists by week (the first two columns in Table 2). In the first week, the div2vec list received more than 60% more actions than the node2vec list in both clicks and plays, and 39.69% more clicks and 28.52% more plays in the second week. As we can see, div2vec beats node2vec in clicks and plays by a significant margin.

One may argue that the above improvement does not imply improved actual user satisfaction, if the users who received the node2vec list were satisfied with other recommended lists.
To see whether this actually happens, we compare the total watch time of each group during the test (the last column in Table 2). In the first week, div2vec achieved 3.58% more watch time than node2vec, and 4.19% more in the second week. We note that in watch-time comparisons, even a 1% improvement is hard to achieve [5]; thus, our improvement is quite impressive.

5.2 LINE Wallet Coupon
To further demonstrate the effectiveness of the div2vec embedding in other real-world recommender systems, we ran an A/B test in the LINE Wallet Coupon service and evaluated the online performance for two weeks in the spring of 2020. The system consists of over six million users and over five hundred unique items. We constructed the user-item bipartite graph by defining each user and item as an individual node and connecting nodes that have interacted with each other. Using the graph, we obtained div2vec embedding vectors for each node. In this experiment, we compared the number of unique users who clicked (Click UU) and the click-through rate (CTR) of the neural network-based recommendation model using the precomputed div2vec embedding vectors as additional features against the model that did not. As side information, the model utilized gender, age, mobile OS type, and interest information for users, and brand, discount information, text, and image features for items. The details of the model architecture for the LINE Wallet Coupon recommender system are presented in [23, 25].

The online experiment results show that the overall Click UU and CTR improved by 6.55% and 2.90%, respectively. The relative performance for the two weeks is illustrated in Figure 4 by date.

(a) Relative Click UU. (b) Relative CTR.
Figure 4: Relative performance of the model applying the div2vec feature over the existing model in the LINE Wallet Coupon service, by date.

By applying the div2vec feature, a larger number of users become interested in the recommended coupon list, and the ratio of users reacting to the exposed items increases significantly. Considering that the online tests were conducted for a relatively long period, we conclude that the diversified recommendation based on the proposed method has led to positive consequences in user experience rather than merely attracting temporary curiosity from users.

6 CONCLUSION
We have introduced the diversity-emphasized node embedding div2vec. Several experiments showed the importance of our method. Compared to DeepWalk and node2vec, the recommendation model based on div2vec increased diversity metrics such as coverage, entropy-diversity, and average intra-list similarity on the MovieLens dataset. The main contribution of this paper is that we verified that users are satisfied with the recommendation model using div2vec in two different live services.

We remark that, as div2vec is an unsupervised learning method like word2vec, it can easily be combined with other studies and services to improve their performance. Also, by changing the function f, the distribution of nodes in the collection of random walks can be adjusted to each domain.

ACKNOWLEDGMENTS
Special thanks to those who lent their insight and technical supportfor this work, including Jaehun Kim, Taehyun Lee, Kyung-Min Kim,and Jung-Woo Ha.
REFERENCES
[1] Gediminas Adomavicius and YoungOk Kwon. 2012. Improving Aggregate Recommendation Diversity Using Ranking-Based Techniques. IEEE Trans. on Knowl. and Data Eng. 24, 5 (May 2012), 896–911. https://doi.org/10.1109/TKDE.2011.15
[2] Ashton Anderson, Lucas Maystre, Ian Anderson, Rishabh Mehrotra, and Mounia Lalmas. 2020. Algorithmic Effects on the Diversity of Consumption on Spotify. In Proceedings of The Web Conference 2020. 2155–2165.
[3] Eytan Bakshy, Solomon Messing, and Lada A. Adamic. 2015. Exposure to Ideologically Diverse News and Opinion on Facebook. Science (2015).
[4] Keith Bradley and Barry Smyth. 2001. Improving Recommendation Diversity. In Proceedings of the Twelfth Irish Conference on Artificial Intelligence and Cognitive Science (AICS 2001).
[5] Minmin Chen, Alex Beutel, Paul Covington, Sagar Jain, Francois Belletti, and Ed H. Chi. 2019. Top-K Off-Policy Correction for a REINFORCE Recommender System. In Proceedings of the Twelfth ACM International Conference on Web Search and Data Mining. 456–464.
[6] Heng-Tze Cheng, Levent Koc, Jeremiah Harmsen, Tal Shaked, Tushar Chandra, Hrishi Aradhye, Glen Anderson, Greg Corrado, Wei Chai, Mustafa Ispir, Rohan Anil, Zakaria Haque, Lichan Hong, Vihan Jain, Xiaobing Liu, and Hemal Shah. 2016. Wide & Deep Learning for Recommender Systems. In Proceedings of the 1st Workshop on Deep Learning for Recommender Systems (Boston, MA, USA) (DLRS 2016). Association for Computing Machinery, New York, NY, USA, 7–10. https://doi.org/10.1145/2988450.2988454
[7] Mouzhi Ge, Carla Delgado-Battenfeld, and Dietmar Jannach. 2010. Beyond Accuracy: Evaluating Recommender Systems by Coverage and Serendipity. In Proceedings of the Fourth ACM Conference on Recommender Systems (Barcelona, Spain) (RecSys '10). Association for Computing Machinery, New York, NY, USA, 257–260. https://doi.org/10.1145/1864708.1864761
[8] Aditya Grover and Jure Leskovec. 2016. node2vec: Scalable Feature Learning for Networks. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (San Francisco, California, USA) (KDD '16). Association for Computing Machinery, New York, NY, USA, 855–864. https://doi.org/10.1145/2939672.2939754
[9] Huifeng Guo, Ruiming Tang, Yunming Ye, Zhenguo Li, and Xiuqiang He. 2017. DeepFM: A Factorization-Machine Based Neural Network for CTR Prediction. In Proceedings of the 26th International Joint Conference on Artificial Intelligence (Melbourne, Australia) (IJCAI'17). AAAI Press, 1725–1731.
[10] William L. Hamilton, Rex Ying, and Jure Leskovec. 2017. Inductive Representation Learning on Large Graphs. In Proceedings of the 31st International Conference on Neural Information Processing Systems (Long Beach, California, USA) (NIPS'17). Curran Associates Inc., Red Hook, NY, USA, 1025–1035.
[11] F. Maxwell Harper and Joseph A. Konstan. 2015. The MovieLens Datasets: History and Context. ACM Trans. Interact. Intell. Syst. 5, 4, Article 19 (Dec. 2015), 19 pages. https://doi.org/10.1145/2827872
[12] Jonathan L. Herlocker, Joseph A. Konstan, Loren G. Terveen, and John T. Riedl. 2004. Evaluating Collaborative Filtering Recommender Systems. ACM Trans. Inf. Syst. 22, 1 (Jan. 2004), 5–53. https://doi.org/10.1145/963770.963772
[13] David Holtz, Ben Carterette, Praveen Chandar, Zahra Nazari, Henriette Cramer, and Sinan Aral. 2020. The Engagement-Diversity Connection: Evidence from a Field Experiment on Spotify. In Proceedings of the 21st ACM Conference on Economics and Computation (Virtual Event, Hungary) (EC '20). Association for Computing Machinery, New York, NY, USA, 75–76. https://doi.org/10.1145/3391403.3399532
[14] Xiaohua Liu, Yongbin Sun, Ziwei Liu, and Dahua Lin. 2020. Learning Diverse Fashion Collocation by Neural Graph Filtering. ArXiv abs/2003.04888 (2020).
[15] Kurt Jacobson, Vidhya Murali, Edward Newett, Brian Whitman, and Romain Yon. 2016. Music Personalization at Spotify. In Proceedings of the 10th ACM Conference on Recommender Systems (Boston, Massachusetts, USA) (RecSys '16). Association for Computing Machinery, New York, NY, USA, 373. https://doi.org/10.1145/2959100.2959120
[16] Marius Kaminskas and Derek Bridge. 2016. Diversity, Serendipity, Novelty, and Coverage: A Survey and Empirical Analysis of Beyond-Accuracy Objectives in Recommender Systems. ACM Trans. Interact. Intell. Syst. 7, 1, Article 2 (Dec. 2016), 42 pages. https://doi.org/10.1145/2926720
[17] Kyung-Min Kim, Donghyun Kwak, Hanock Kwak, Young-Jin Park, Sangkwon Sim, Jae-Han Cho, Minkyu Kim, Jihun Kwon, Nako Sung, and Jung-Woo Ha. 2019. Tripartite Heterogeneous Graph Propagation for Large-scale Social Recommendation. arXiv preprint arXiv:1908.02569 (2019).
[18] Dawen Liang, Rahul G. Krishnan, Matthew D. Hoffman, and Tony Jebara. 2018. Variational Autoencoders for Collaborative Filtering. In Proceedings of the 2018 World Wide Web Conference (Lyon, France) (WWW '18). International World Wide Web Conferences Steering Committee, Republic and Canton of Geneva, CHE, 689–698. https://doi.org/10.1145/3178876.3186150
[19] Rouzbeh Meymandpour and Joseph G. Davis. 2020. Measuring the Diversity of Recommendations: A Preference-Aware Approach for Evaluating and Adjusting Diversity. Knowledge and Information Systems 62, 2 (Feb. 2020), 787–811. https://doi.org/10.1007/s10115-019-01371-0
[20] Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient Estimation of Word Representations in Vector Space. In ICLR 2013, Workshop Track Proceedings, Yoshua Bengio and Yann LeCun (Eds.). http://arxiv.org/abs/1301.3781
[21] Tien T. Nguyen, Pik-Mai Hui, F. Maxwell Harper, Loren Terveen, and Joseph A. Konstan. 2014. Exploring the Filter Bubble: The Effect of Using Recommender Systems on Content Diversity. In Proceedings of the 23rd International Conference on World Wide Web (Seoul, Korea) (WWW '14). Association for Computing Machinery, New York, NY, USA, 677–686. https://doi.org/10.1145/2566486.2568012
[22] Eli Pariser. 2011. The Filter Bubble: What the Internet Is Hiding from You. The Penguin Group.
[23] Young-Jin Park, Kyuyong Shin, and Kyung-Min Kim. 2020. Hop Sampling: A Simple Regularized Graph Learning for Non-Stationary Environments. arXiv preprint arXiv:2006.14897 (2020).
[24] Bryan Perozzi, Rami Al-Rfou, and Steven Skiena. 2014. DeepWalk: Online Learning of Social Representations. In Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (New York, New York, USA) (KDD '14). Association for Computing Machinery, New York, NY, USA, 701–710. https://doi.org/10.1145/2623330.2623732
[25] Kyuyong Shin, Young-Jin Park, Kyung-Min Kim, and Sunyoung Kwon. 2020. Multi-Manifold Learning for Large-Scale Targeted Advertising System. arXiv preprint arXiv:2007.02334 (2020).
[26] Rianne van den Berg, Thomas N. Kipf, and Max Welling. 2017. Graph Convolutional Matrix Completion. arXiv preprint arXiv:1706.02263 (2017).
[27] Saúl Vargas. 2014. Novelty and Diversity Enhancement and Evaluation in Recommender Systems and Information Retrieval. In Proceedings of the 37th International ACM SIGIR Conference on Research & Development in Information Retrieval (Gold Coast, Queensland, Australia) (SIGIR '14). Association for Computing Machinery, New York, NY, USA, 1281. https://doi.org/10.1145/2600428.2610382
[28] Petar Veličković, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Liò, and Yoshua Bengio. 2018. Graph Attention Networks. International Conference on Learning Representations (2018). https://openreview.net/forum?id=rJXMpikCZ
[29] Jesse Vig, Shilad Sen, and John Riedl. 2012. The Tag Genome: Encoding Community Knowledge to Support Novel Interaction. ACM Trans. Interact. Intell. Syst. (2012).
[30] In The World Wide Web Conference (San Francisco, CA, USA) (WWW '19). Association for Computing Machinery, New York, NY, USA, 3307–3313. https://doi.org/10.1145/3308558.3313417
[31] Guanjie Zheng, Fuzheng Zhang, Zihan Zheng, Yang Xiang, Nicholas Jing Yuan, Xing Xie, and Zhenhui Li. 2018. DRN: A Deep Reinforcement Learning Framework for News Recommendation. In Proceedings of the 2018 World Wide Web Conference (Lyon, France) (WWW '18). International World Wide Web Conferences Steering Committee, Republic and Canton of Geneva, CHE, 167–176. https://doi.org/10.1145/3178876.3185994
[32] Guorui Zhou, Xiaoqiang Zhu, Chenru Song, Ying Fan, Han Zhu, Xiao Ma, Yanghui Yan, Junqi Jin, Han Li, and Kun Gai. 2018. Deep Interest Network for Click-Through Rate Prediction. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining (London, United Kingdom) (KDD '18). Association for Computing Machinery, New York, NY, USA.