RewardRating: A Mechanism Design Approach to Improve Rating Systems
Iman Vakilinia, University of North Florida, FL, USA ([email protected])
Peyman Faizian, Florida State University, FL, USA ([email protected])
Mohammad Mahdi Khalili, University of Delaware, DE, USA ([email protected])
Abstract.
Nowadays, rating systems play a crucial role in the attraction of customers for different services. However, as it is difficult to detect a fake rating, attackers can potentially impact the rating's aggregated score unfairly. This malicious behavior can negatively affect users and businesses. To overcome this problem, we take a mechanism-design approach to increase the cost of fake ratings while providing incentives for honest ratings. Our proposed mechanism, RewardRating, is inspired by the stock market model, in which users can invest in their ratings for services and receive a reward based on future ratings. First, we formally model the problem and discuss budget-balanced and incentive-compatibility specifications. Then, we suggest a profit-sharing scheme to cover the rating system's requirements. Finally, we analyze the performance of our proposed mechanism.
Keywords:
Mechanism Design · Fake Rating · Sybil Attack · Profit Sharing
1 Introduction

Recently, online rating systems have become a significant part of potential customers' decisions. According to a survey [12], 90% of consumers used the Internet to find a local business in 2019, businesses without 5-star ratings risk losing 12% of their customers, and only 53% of people would consider using a business with less than 4-star ratings. Due to the importance of such ratings, malicious users attempt to impact the rating of a service by submitting fake scores. For example, a malicious service owner may submit fake 5-star ratings to increase his service's aggregated rating, as depicted in Figure 1. On the other hand, a malicious competitor may submit fake low ratings to subvert a rival's reputation. Moreover, rating systems are vulnerable to sybil attacks, in which a malicious entity can forge fake users and utilize them for fake ratings.¹ Such a vulnerability caused the advent of companies offering on-demand fake reviews and ratings [13].

¹ A rating system can be applied to different entities such as a service, product, community, or business. For the rest of the paper, we refer to such an entity as a service.

Fig. 1. An example of a sybil attack on a rating system.

Considering the lack of an appropriate safeguard to prevent fake ratings, in this paper we take a novel mechanism-design approach to increase the malicious users' cost for submitting fake ratings while providing incentives for honest raters. Our proposed mechanism, RewardRating, is inspired by the stock market, in which reviewers can invest in their ratings for services and receive a reward based on their investments. RewardRating is equipped with a profit-sharing scheme to satisfy the incentive-compatibility and budget-balanced properties, as we discuss in the next sections. The main contributions of this paper are the two parts described below:

(1) We propose a new mechanism to disincentivize fake ratings while stimulating honest ratings.
(2) We investigate the design of an incentive-compatible, self-sustained profit-sharing model to be applied to a rating system.

The rest of the paper is organized as follows. In the next section, we review related work. The system model is presented in Section 3. The details of our proposed mechanism are described in Section 4. We analyze our proposed mechanism in Section 5. Finally, we conclude the paper in Section 6.
2 Related Work

To support consumers and services, the US Federal Trade Commission (FTC) takes legal action against fake reviewers [5]. Furthermore, rating platform providers such as Amazon, Google, and Yelp have strict policies against fake reviews and ban incentivized reviews, in which a service owner provides incentives in return for positive reviews [2, 6, 16]. Different safeguards can be implemented to reduce the number of fake reviews. For instance, a review platform can verify the identity of a reviewer before allowing the review submission, requiring a valid email address and phone number. Furthermore, review platforms can use machine-learning tools to detect and remove fake reviews [17]. Although filters for checking the authenticity of reviews are necessary, studies show that many reviews and ratings are still fake, and filters cannot prevent them [3, 4]. On the other hand, despite the fact that sybil attacks have been widely studied in various networks [9, 10, 18], studies specifically targeted toward preventing sybil attacks on rating systems are few and far between.

Detection of fake reviews using machine-learning techniques has been studied widely in the literature [1, 7, 8, 14, 15]. These methods use the linguistic features of a review, the meta-data associated with a review, and the service history to check the validity of reviews. However, the task of detecting fake ratings is more challenging, mainly because users might have different preferences or expectations for a service. For instance, two authentic and independent users might receive the same service and truthfully rate it quite differently (1-star vs. 5-star). It is difficult to detect whether one of these ratings is fake, or whether they simply refer to different aspects of a service (e.g., quality of food vs. air-conditioning in a restaurant). On the other hand, Monaro et al. [11] studied the detection of fake 5-star ratings using mouse movements. This study discovered that users spend more time and produce wider mouse trajectories when submitting false ratings.

In contrast with previous works, in this study we propose a novel mechanism-design approach to improve the quality of the aggregated ratings for services by increasing the cost of fake ratings while incentivizing honest ratings. Note that our proposed mechanism's goal is not to detect fake ratings but to incentivize honest ratings, as discussed in the following.
3 System Model

Let R = {r_1, ..., r_n} be the strictly totally ordered set which represents the rating scores that reviewers can assign to a service. We say a rating r_j is higher than r_i (r_i < r_j) if and only if i < j. For example, in the Google review system, n is 5, and r_5 indicates a 5-star rating, which is a higher rating than r_2, which represents a 2-star rating. For the sake of simplicity and without loss of generality, we consider n = 5 for the examples we present in the rest of the paper. Let U = {u_1, u_2, ...} represent the set of reviewers who submit ratings for a service. There is no limitation on the number of reviewers. Moreover, a reviewer can submit multiple ratings under different identifiers for the same service. In other words, the system is vulnerable to the sybil attack. For example, a service owner can submit unlimited 5-star ratings for himself, or a competitor can submit unlimited 1-star ratings for a rival service. This can be achieved by recruiting fake reviewers.

Reviewers submit ratings to obtain their desired outcomes. We assume there are three types of reviewers based on their intents:

– Attacker: Submits false ratings to change the aggregated rating score to his preference. For example, a restaurant owner wants to increase his restaurant's rating score by submitting fake 5-star ratings.
– Honest: Submits the rating truthfully based on his evaluation of a service's quality.
– Strategic: Submits the rating to increase his payoff from the system.

The main objective of the mechanism is to decrease the number of fake ratings by increasing the cost for attackers. On the other hand, we want to provide incentives for honest reviewers and motivate strategic players to invest in an honest rating. We aim to achieve these goals by making a market where reviewers invest in their ratings, and the rating system rewards the reviewers based on future investments.
4 RewardRating Mechanism

The main idea of RewardRating is to create a market for ratings in which reviewers invest in their ratings. This idea is inspired by the stock market, where investors invest in a business based on their prediction of the business's performance at the future time when they want to sell their stocks. Mapping this to the rating system, in RewardRating, reviewers invest in a service's performance. To make this clearer, let us continue with an example:

Assume Alice goes to a restaurant, and she is happy with the restaurant's service. Alice thinks this restaurant deserves a 5-star rating. However, the current aggregated rating for this restaurant is 3 stars. Using the RewardRating mechanism, Alice can invest in a 5-star score for the restaurant's service. If future reviewers agree with Alice and rate the restaurant higher than 3 stars, then Alice has made a successful investment, and as a result, she receives a reward from the system.
RewardRating should satisfy the following requirements:

– Budget-Balanced: The system should be self-sustained, as there are no external financial subsidies. In other words, the total assets in the system should be supported by the reviewers.
– Incentive-Compatibility: The system should provide incentives for truthful reviewers. On the other hand, the system should increase the cost for attackers.
Having specified the requirements, let us now study the design of a mechanism that satisfies these properties. The design objective is to place a set of rules on the rating system's game to meet the aforementioned requirements. A mechanism can be specified by a game g : M → X, where M is the set of possible input messages and X is the set of possible outputs of the mechanism. In the rating system model, the players are reviewers. A player chooses his strategy to increase his utility. A player's strategy (i.e., the mechanism's input) consists of the rating, the corresponding investment, and the time of the investment. Finally, the mechanism's output is the aggregated rating of the service and the profit, which is shared among stakeholders.

In RewardRating, each rating is accompanied by a corresponding stock value. Reviewers can invest in a rating for a service by buying the stock associated with it. Let us use the term coin to represent the smallest unit of such rating stocks. A new coin is minted for a rating's stock when a reviewer requests to buy such a coin from the rating system. The price of buying a coin from the system is fixed. Users can sell their coins back to the rating system. The selling price of a coin is also fixed; however, the price of buying a coin from the system is higher than the price of selling a coin to the system. The difference between the buying price and the selling price is a fund that is shared among stakeholders as profit. We discuss the details of, and the reasoning behind, these design decisions in the next sections. Figures 2 and 3 demonstrate the overall picture of buying/selling coins from/to the rating system.
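The buy/sell flow above can be sketched as a simple ledger. This is a minimal illustration, not part of the paper's specification: the class name and the parameter values α = 1, β = 0.9 are our own assumptions.

```python
from collections import defaultdict

class RatingLedger:
    """Toy ledger for RewardRating's coins: buying mints a coin at the fixed
    price alpha, selling burns it at the fixed price beta, and the spread
    gamma = alpha - beta is the profit shared among stakeholders."""

    def __init__(self, alpha=1.0, beta=0.9, n=5):
        assert alpha > beta > 0           # buying must cost more than selling
        self.alpha, self.beta, self.n = alpha, beta, n
        self.gamma = alpha - beta
        self.holdings = defaultdict(int)  # (user, rating) -> coins held

    def buy(self, user, rating):
        """User pays alpha; a new c_rating coin is minted for him.
        Returns the profit gamma that the mechanism then distributes."""
        assert 1 <= rating <= self.n
        self.holdings[(user, rating)] += 1
        return self.gamma

    def sell(self, user, rating):
        """User returns a coin; the system burns it and pays back beta."""
        assert self.holdings[(user, rating)] > 0
        self.holdings[(user, rating)] -= 1
        return self.beta
```

For instance, with α = 1 and β = 0.9, each minted coin leaves γ = 0.1 in the system to be distributed as rewards.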
In this section, we formally define the components of the RewardRating mechanism. Let C = {c_1, ..., c_n} represent the set of n types of coins for the ratings available in the rating system, such that c_j represents coins for the score r_j ∈ R. Let α ∈ R^+ represent the price of buying a coin from the system. Let x^t_{i,j} ∈ R^+ represent the number of c_j coins that user u_i owns in the rating system at time t. Let S^t = <x^t_{i,j}, ..., x^t_{k,l}> represent the stakeholders in the rating system at time t. Once a user u_i pays α to the rating system to buy a new coin c_j ∈ C, the system mints a new coin and updates the stakeholder list accordingly. (Note that coins here are virtual assets; minting a new coin means that the system adds to the total value of a rating's stock.)

On the other hand, users can sell their coins to the rating system. Let β ∈ R^+ represent the price that the rating system pays to a user in return for a coin. Then, we have α = β + γ in RewardRating. Here, γ ∈ R^+ is the profit that the system earns from selling a new coin. Such a profit is distributed among stakeholders as a reward. Once a user sells a coin to the system, the system removes that coin from the corresponding rating stock and updates the stakeholder list. For example, assume u_i buys a new coin of the 4-star rating at the price of $1 (i.e., α = 1), and assume u_i later decides to sell his coin to the rating system. If the selling price of a coin is, say, $0.9 (β = 0.9), then u_i receives $0.9, and the system keeps γ = 0.1 as profit.

Let |c^t_j| represent the number of c_j coins minted in the rating system at time t, and let σ^t be the aggregated score of a service at time t. The system calculates σ^t
Fig. 2. Buying a coin from the rating system. We have used different colors to demonstrate the difference in the coins of the ratings' stocks.
Fig. 3. Selling a coin to the rating system. We have used different colors to demonstrate the difference in the coins of the ratings' stocks.

based on the total investments in the rating stocks for a given service as follows:

\sigma^t = \frac{\sum_{c_j \in C} (|c^t_j| \times j)}{\sum_{c_j \in C} |c^t_j|} \quad (1)

In other words, the aggregated score of a service is calculated based on the total investments in the ratings' stocks.

Profit Sharing
RewardRating collects a profit from minting every new coin. This profit is the difference between the price at which a user buys a new coin from the rating system and the price at which that coin is sold back to the rating system, which can be calculated as γ = α − β. The mechanism distributes this profit among stakeholders strategically to satisfy the mechanism's requirements.

Fig. 4. Sharing profit among stakeholders.

The main challenge here is to minimize the profit that attackers can earn from the system, since if attackers earn profit from the system, then RewardRating encourages attackers instead of honest reviewers. This is a big challenge, as it is hard to distinguish between honest users and attackers in the system. To solve this problem, we use the fact that if a service's rating is affected by fake ratings, then the aggregated rating is biased in one direction. If attackers submitted fake 5-star ratings, then the aggregated rating is biased toward 5 stars, and if attackers submitted fake 1-star ratings, then the aggregated score is biased toward 1 star. Considering this fact, the system shares the profit of a newly minted coin as follows:

– If the newly minted coin is for a rating higher than the aggregated score, it will in turn increase the aggregated score. In this case, the profit of the newly minted coin is shared among those stakeholders holding coins of ratings higher than the current aggregated rating.
– If the new coin is for a rating lower than the aggregated score, it will in turn decrease the aggregated score. In this case, the profit of the newly minted coin is shared among those stakeholders holding coins of ratings lower than the current aggregated rating.
– If the new coin matches the aggregated rating, then the aggregated score does not change, and the profit is shared among all of the rating system's stakeholders. Note that this case is rare, as the coin types (i.e., the options for ratings) do not necessarily match the number of possible values for the aggregated score. For example, in the Google review system, users have 5 options for selecting a rating (1-star, ..., 5-star); however, the aggregated rating has one decimal point, which makes 50 possible options for the aggregated score.

To model such a profit sharing, first, we need to define the set of stakeholders who earn profit from the system once there is a new investment. Let c_j represent the type of a newly minted coin at time t. Let W^t_j ⊆ {1, 2, ..., n} represent the set of indices of the ratings' stocks whose stakeholders are rewarded for the newly minted coin c_j. In other words, a user does not receive a reward if he does not own a rating coin in the set W^t_j when a new coin c_j is minted at time t. Then, we model W^t_j as:

W^t_j = \begin{cases} \{ i \in \mathbb{N} : \sigma^t < i \leq n \} & j > \sigma^t \\ \{ i \in \mathbb{N} : 1 \leq i < \sigma^t \} & j < \sigma^t \\ \{ i \in \mathbb{N} : 1 \leq i \leq n \} & j = \sigma^t \end{cases} \quad (2)

RewardRating distributes the profit in such a way that stakeholders earn more profit if they hold rating coins closer to the rating of the newly minted coin. In other words, as the distance between j and w ∈ W^t_j grows, the profit of a stakeholder of a coin c_w decreases. To model this, let f(w, j) ∈ R^+ represent a function such that ∂f(w, j)/∂(|w − j|) < 0, which indicates that as the distance between w and j increases, the value of f(w, j) decreases. The functions f_1(w, j) and f_2(w, j) are two candidates for f(w, j):

f_1(w, j) = 2^{-(|w-j|+1)} \quad (3)

f_2(w, j) = (2 + |w - j|)^{-1} \quad (4)

Figure 5 shows the profit sharing of a new coin using the candidate functions f_1(w, j) and f_2(w, j).

Fig. 5. Profit-sharing sample functions.

As can be seen, as the distance between a new coin's rating and the rating of a stakeholder's coin increases, the share of profit decreases. Let p^t_w represent the profit a user earns from staking a coin c_w at time t. Then RewardRating calculates p^t_w as follows:

p^t_w = \frac{\gamma \times f(w, j)}{\sum_{q \in W^t_j} \sum_{u_k \in U} x^t_{k,q} \times f(q, j)} \quad (5)

To keep the system budget-balanced and simple, we assume there is a defined number of decimal points for the received profit, and the remainder of the profit is received by the rating system's owner. For example, we can set 2 decimal points for the reward; in this case, if we need to divide $1 among three stakeholders with equal shares, each stakeholder receives $0.33, and the rating system's owner receives $0.01 as profit.

This design provides incentives for the reviewers who correctly predict the future investments in ratings, the same as in the stock market. We analyze the benefits of such a profit-sharing model in the next section. The following example is given to clarify the profit-sharing scheme.
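Equations (2), (3), and (5) can be combined into a short sketch. The function names are ours, and `coin_counts[q - 1]` stands for |c_q|:

```python
def reward_set(j, sigma, n=5):
    """Equation (2): indices of the rating stocks whose stakeholders share
    the profit of a newly minted c_j coin, given aggregated score sigma."""
    if j > sigma:
        return [i for i in range(1, n + 1) if i > sigma]
    if j < sigma:
        return [i for i in range(1, n + 1) if i < sigma]
    return list(range(1, n + 1))

def f1(w, j):
    """Candidate weight f_1(w, j) = 2^-(|w-j|+1) from equation (3)."""
    return 2.0 ** -(abs(w - j) + 1)

def per_coin_profit(w, j, sigma, coin_counts, gamma=1.0, n=5):
    """Equation (5): the profit p_w paid per staked c_w coin when a new
    c_j coin is minted."""
    W = reward_set(j, sigma, n)
    if w not in W:
        return 0.0
    denom = sum(coin_counts[q - 1] * f1(q, j) for q in W)
    if denom == 0:  # no eligible stakeholders yet: profit stays with the system
        return 0.0
    return gamma * f1(w, j) / denom
```

For instance, with holdings |c| = (4, 4, 2, 1, 0) and σ = 2.0, minting a new c_5 coin pays `per_coin_profit(4, 5, 2.0, [4, 4, 2, 1, 0])` = 0.5 to the holder of the c_4 coin.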
Assume RewardRating's parameters have been set for a restaurant as α = 2, β = 1, γ = 1, n = 5, and f(w, j) = 2^{-(|w-j|+1)}. The profit of selling the first coins is given to the rating system. Reviewers buy coins based on their prediction of future reviewers' investments. Assume at time t we have |c_1| = 4, |c_2| = 4, |c_3| = 2, |c_4| = 1, and |c_5| = 0. In this case, the aggregated score is:

\sigma^t = \frac{(4 \times 1) + (4 \times 2) + (2 \times 3) + (1 \times 4) + (0 \times 5)}{4 + 4 + 2 + 1 + 0} = 2.00

The restaurant owner is malicious and wants to improve the restaurant's rating; therefore, he buys a c_5 coin. For the purchase of a c_5 coin, the γ = 1 profit is shared among the stakeholders of coins c_3, c_4, and c_5, following definition (2). Therefore, the owner of the c_4 coin receives 0.5, and the owner of a c_3 coin receives 0.25 as a reward, following equation (5). Then, the aggregated score updates to σ^{t'} = 2.25. Later on, an honest user predicts that the aggregated score of the service will decrease, and she buys a c_1 coin. At this point, the reward is shared among the stakeholders of coins c_1 and c_2, following definition (2). In this case, the owner of a c_1 coin receives 0.16, the owner of a c_2 coin receives 0.08, and the rating system receives the remaining $0.04 as profit. Then, the aggregated score updates to σ^{t''} = 28/13 ≈ 2.15.

5 Analysis

In this section, we analyze RewardRating. We check the budget-balanced and incentive-compatibility features. Afterward, we investigate the increase in attackers' cost. Finally, we discuss the limitations.
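Before the formal analysis, the aggregated-score updates from the preceding example can be checked numerically against Equation (1). This is a minimal sketch; the helper name is ours:

```python
def sigma(counts):
    """Equation (1): investment-weighted average rating, counts[j-1] = |c_j|."""
    return sum(c * (i + 1) for i, c in enumerate(counts)) / sum(counts)

counts = [4, 4, 2, 1, 0]        # |c_1| ... |c_5| from the example
print(round(sigma(counts), 2))  # 2.0
counts[4] += 1                  # the malicious owner buys a c_5 coin
print(round(sigma(counts), 2))  # 2.25
counts[0] += 1                  # an honest user buys a c_1 coin
print(round(sigma(counts), 2))  # 2.15
```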
Proposition 1. RewardRating satisfies the budget-balanced property.

Proof. We need to show that the total assets input into the system are equal to the total output. The input assets are the total coins the system sells to users, and the total output is the money that the system pays users to buy back their coins, in addition to the profit, which is shared among users and the reward system. For every coin, we have α = β + γ; thus, if we show that the total profit shared among all of the stakeholders for each newly minted coin is equal to γ, then we can conclude that the system is budget-balanced. Note that the profit of selling the first coin is received by the reward system. For every newly minted coin c_j ∈ C, the total profit shared among stakeholders is as follows:

\sum_{w \in W^t_j} \sum_{u_l \in U} x^t_{l,w} \times p^t_w = \sum_{w \in W^t_j} \sum_{u_l \in U} x^t_{l,w} \times \frac{\gamma \times f(w,j)}{\sum_{q \in W^t_j} \sum_{u_k \in U} x^t_{k,q} \times f(q,j)} = \gamma \times \frac{\sum_{w \in W^t_j} \sum_{u_l \in U} x^t_{l,w} \times f(w,j)}{\sum_{q \in W^t_j} \sum_{u_k \in U} x^t_{k,q} \times f(q,j)} = \gamma
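Proposition 1 can also be checked numerically: for any holdings, the per-coin shares of Equation (5) sum to exactly γ. The sketch below uses the f_1 weight; the function names are ours:

```python
def f1(w, j):
    # candidate weight f_1(w, j) = 2^-(|w-j|+1) from equation (3)
    return 2.0 ** -(abs(w - j) + 1)

def total_distributed(counts, j, sigma, gamma=1.0, n=5):
    """Sum of all stakeholders' rewards for one newly minted c_j coin;
    by Proposition 1 this equals gamma. counts[q-1] holds |c_q|."""
    if j > sigma:                                 # equation (2), first case
        W = [i for i in range(1, n + 1) if i > sigma]
    elif j < sigma:                               # equation (2), second case
        W = [i for i in range(1, n + 1) if i < sigma]
    else:                                         # equation (2), third case
        W = list(range(1, n + 1))
    denom = sum(counts[q - 1] * f1(q, j) for q in W)
    return sum(counts[w - 1] * gamma * f1(w, j) / denom for w in W)

print(total_distributed([4, 4, 2, 1, 0], j=5, sigma=2.0))  # 1.0
```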
RewardRating provides incen-tives for honest reviewers and increases the cost for attackers. Same as the stockmarket, in
RewardRating , players’ profits depend on how well they can predictthe other investors’ decisions in the future. This is due to the fact that the profitthat a player earns from
RewardRating depends on the future investment in therating system.More specifically, the
RewardRating game can be classified as follows: – Sequential : As players choose their actions consecutively. – Perfect : As players are aware of the previous players’ actions. – Non-cooperative : As players compete to earn more profit. – Incomplete-information : As players do not have complete knowledge aboutthe number of players and future ratings.As the number of players, their types (i.e., honest , attackers , and strategic ), andtheir rating strategies are unknown, this game is classified as an Incomplete-information game. Therefore, we analyze the strategic player’s best responsestrategy considering different estimations for future investments.
Proposition 2. RewardRating satisfies the incentive-compatibility property.

Proof. As there are three types of players in our system model (honest, attacker, and strategic), we sketch the proof by analyzing the payoff for each type of player. For honest users, RewardRating generates profit as long as other honest users participate in the rating process. This is because honest users are rewarded by new honest ratings. On the other hand, RewardRating increases the cost of an attack, as attackers have to buy coins from the system. However, attackers do not gain any profit from honest users. This is because when attackers invest in fake scores, the aggregated score is pushed toward a lower or a higher score. As a result, honest reviewers will invest in the opposite direction, and attackers do not profit from such investments under the profit-sharing model. Finally, for strategic users, the best-response strategy is to invest in the coin that will yield more profit in the future. Therefore, as long as the profit earned from choosing the honest rating is higher than that of other ratings, the best-response strategy is to rate the service honestly. In this case, the incentive-compatibility requirement is met.

Now, assume a strategic user estimates that the profit of investing in a fake rating is higher than that of an honest rating. In this case, the strategic user would invest on the attackers' side to achieve more profit. However, the profit that the system earns from selling coins to attackers will be shared among strategic users as well, following the profit-sharing model, which means that the system increases the cost for attackers. In this case, attackers endure more cost. On the other hand, this does not negatively affect honest users, as they receive their profit from future honest users; such profit is not shared with attackers or with strategic users who chose the attackers' side. Therefore, although the valid estimation of the aggregated rating is not fulfilled, the system still satisfies the incentive-compatibility feature while imposing extra cost on the attacker. We thus conclude that the proposed mechanism satisfies the incentive-compatibility feature.
In this section, we analyze the cost for an attacker to increase the aggregated rating. In this experiment, the initial honest aggregated score is set to 1, with three different settings of 100, 200, and 500 honest rating coins. We assume an attacker invests in the highest possible rating, which is 5 in our example. Figure 6 depicts the attacker's cost for increasing the aggregated rating. As can be seen, when an attacker wants to increase the aggregated rating by investing in the highest possible rating, the attacker's cost grows exponentially. Therefore, RewardRating is more effective for services with a higher number of reviewers, since in that case an attacker has to invest more to compensate for the impact of honest ratings.
Fig. 6. Cost for an attacker to increase an aggregated score of 1, with 100, 200, and 500 honest coins.
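The shape of this curve follows from Equation (1): to move the aggregated score from 1 (with N honest 1-star coins) up to a target T, an attacker must buy k = N(T − 1)/(5 − T) five-star coins at price α each. This derivation and the function name are ours; α = 1 is assumed for illustration:

```python
import math

def attacker_cost(n_honest, target, alpha=1.0, top=5):
    """Cost for an attacker to lift the aggregated score from 1.0
    (n_honest one-star coins) up to `target` by buying top-rated coins.
    Solving (n_honest + top*k) / (n_honest + k) = target for k gives
    k = n_honest * (target - 1) / (top - target)."""
    assert 1.0 <= target < top
    k = n_honest * (target - 1.0) / (top - target)
    return math.ceil(k) * alpha

print(attacker_cost(100, 2.0))  # 34.0  -> 34 five-star coins needed
print(attacker_cost(100, 4.0))  # 300.0 -> the cost blows up near the top rating
```

As the target score approaches the top rating, the denominator shrinks and the required investment diverges, consistent with the sharp growth shown in Figure 6.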
Although RewardRating can potentially increase the cost of malicious ratings while stimulating honest ratings, there are two main obstacles which can preclude its adoption for micro-services. First, RewardRating requires reviewers to invest in ratings, and the aggregated rating is calculated based on the amount of investment in ratings. In this case, reviewers who do not want to invest in ratings cannot participate in the rating process. Another concern is the rate of profit reviewers can earn from future investments. As the number of stakeholders increases, the profit earned from each new investment decreases. Thus, strategic users will not participate in the rating process when the profit they can earn is less than what they can earn in other markets. Likewise, new honest users are reluctant to invest in ratings if the profit they can make is not substantial. One solution to this problem is to set a defined value for the total profit that can be earned from staking a coin. Once this profit is achieved, the system automatically buys the coin back from the corresponding stakeholder. In this case, the aggregated score should be calculated based on the number of coins minted in a specified time window, as the number of coins is limited and every rating can reach a maximum number of coins. With this feature, the total number of coins is fixed, and as a result, the profit a stakeholder can earn from a new investment is stabilized. Moreover, the ratings become more dynamic, as the stocks are updated more quickly.

It is also worth mentioning that, although RewardRating can increase the cost for attackers, it cannot prevent fake ratings. Therefore, other countermeasures, such as strong authentication mechanisms, should be applied to reduce fake ratings. In the case that the rating system's policy limits each user to one rating, a reviewer should not be able to purchase more than one coin from the system.
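The capped-profit buy-back idea sketched above can be illustrated as follows. This is a speculative sketch of the proposed extension, not part of the mechanism's formal definition: the cap value, β, and all names are our assumptions.

```python
def apply_reward(stakes, coin_id, reward, cap=0.5, beta=0.9):
    """Credit `reward` to one staked coin; once the coin's accumulated
    profit reaches `cap`, the system automatically buys the coin back
    at the selling price beta. stakes maps coin_id -> accumulated profit."""
    stakes[coin_id] = stakes.get(coin_id, 0.0) + reward
    if stakes[coin_id] >= cap:
        # payout = accumulated profit so far + buy-back price beta
        payout = round(stakes.pop(coin_id) + beta, 2)
        return ("bought_back", payout)
    return ("staked", stakes[coin_id])

stakes = {}
print(apply_reward(stakes, "u1_c4", 0.3))  # ('staked', 0.3)
print(apply_reward(stakes, "u1_c4", 0.3))  # ('bought_back', 1.5)
```

Once a coin is bought back, it can be re-minted by a new investor, which keeps the total number of coins fixed and stabilizes the per-investment reward, as discussed above.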
6 Conclusion

In this paper, we have studied the challenge of designing a mechanism to preclude fake ratings while incentivizing honest ratings. First, we formally modeled the requirements for creating a market for the rating system. Then, we proposed RewardRating to increase the cost for attackers while providing incentives for honest users. Our analysis shows that RewardRating satisfies the budget-balanced and incentive-compatibility requirements, and that our proposed mechanism can potentially increase the cost for malicious raters while stimulating honest ratings. For future work, we plan to implement RewardRating using smart-contract technology. This platform will be equipped with a browser add-on where users can use their cryptocurrency assets to invest in the ratings of a service. Using smart contracts, no trusted third party is required to handle the rating scores and their corresponding assets.
References
1. Ahmed, H., Traore, I., Saad, S.: Detecting opinion spams and fake news using text classification. Security and Privacy 1(1), e9 (2018)
2. Amazon: Anti-manipulation policy for customer reviews
3. Birchall, G.: One in three TripAdvisor reviews are fake, with venues buying glowing reviews, investigation finds
4. Crockett, Z.: 5-star phonies: Inside the fake Amazon review complex (2019), https://thehustle.co/amazon-fake-reviews
5. FTC: FTC brings first case challenging fake paid reviews on an independent retail website
6. Google: Prohibited and restricted content, https://support.google.com/local-guides/answer/7400114?hl=en
7. Hajek, P., Barushka, A., Munk, M.: Fake consumer review detection using deep neural networks integrating word embeddings and emotion mining. Neural Computing and Applications, pp. 1–16 (2020)
8. Heydari, A., ali Tavakoli, M., Salim, N., Heydari, Z.: Detection of review spam: A survey. Expert Systems with Applications 42(7), 3634–3642 (2015)
9. Kumar, B., Bhuyan, B.: Game theoretical defense mechanism against reputation based sybil attacks. Procedia Computer Science, 2465–2477 (2020)
10. Levine, B.N., Shields, C., Margolin, N.B.: A survey of solutions to the sybil attack. University of Massachusetts Amherst, Amherst, MA, 224 (2006)
11. Monaro, M., Cannonito, E., Gamberini, L., Sartori, G.: Spotting faked 5 stars ratings in e-commerce using mouse dynamics. Computers in Human Behavior, 106348 (2020)
12. Murphy, R.: Local consumer review survey (2019)
13. Streitfeld, D.: Give yourself 5 stars? Online, it might cost you (2013)
14. Wu, Y., Ngai, E.W., Wu, P., Wu, C.: Fake online reviews: Literature review, synthesis, and directions for future research. Decision Support Systems, 113280 (2020)
15. Yao, Y., Viswanath, B., Cryan, J., Zheng, H., Zhao, B.Y.: Automated crowdturfing attacks and defenses in online review systems. In: Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security, pp. 1143–1158 (2017)
16. Yelp: Content guidelines
17. Yelp: Yelp's recommendation software explained, https://blog.yelp.com/2010/03/yelp-review-filter-explained
18. Yu, H.: Sybil defenses via social networks: a tutorial and survey. ACM SIGACT News 42