SybilFence: Improving Social-Graph-Based Sybil Defenses with User Negative Feedback
Qiang Cao, Xiaowei Yang
Duke University
{qiangcao, xwy}@cs.duke.edu

ABSTRACT
Detecting and suspending fake accounts (Sybils) in online social networking (OSN) services protects both OSN operators and OSN users from illegal exploitation. Existing social-graph-based defense schemes effectively bound the number of accepted Sybils by the total number of social connections between Sybils and non-Sybil users. However, Sybils may still evade the defenses by soliciting many social connections to real users. We propose SybilFence, a system that improves on social-graph-based Sybil defenses to further thwart Sybils. SybilFence is based on the observation that even well-maintained fake accounts inevitably receive a significant amount of negative user feedback, such as rejections of their friend requests. Our key idea is to discount the social edges of users that have received negative feedback, thereby limiting the impact of Sybils' social edges. Preliminary simulation results show that our proposal is more resilient to attacks in which fake accounts continuously solicit social connections over time.
1. INTRODUCTION
The popularity of online social networking (OSN) services such as Facebook and LinkedIn has attracted attacks and exploitation. In particular, OSNs are vulnerable to Sybil attacks, where attackers create many fake accounts, called
Sybils, to send spam [10], manipulate online voting [17], crawl users' personal information [6], etc. There have been several proposals that leverage the underlying social graph to defend against Sybils [7, 8, 17, 18, 20, 21]. Social-graph-based Sybil defenses are proactive approaches, as Sybils can be uncovered before they interact with real users. These proposals have been extensively discussed in the research community due to their simplicity and reliability. They rely on the assumption that the social edges connecting Sybils and non-Sybil users, called attack edges, are strictly limited. Most of them bound the number of undetectable Sybils, called accepted Sybils, by the number of attack edges [7, 20], i.e., O(log n) Sybils per attack edge.

Although social-graph-based Sybil defenses can provide theoretical guarantees on accepted Sybils, the upper bound still depends on the total number of attack edges. Therefore, Sybils have an incentive to solicit social connections from real users in order to increase the number of attack edges and evade detection. Furthermore, well-maintained Sybils can continuously solicit social edges at a speed similar to real users. As a result, they may accumulate many attack edges from promiscuous real users, who are open to befriending even strangers. Under current social-graph-based Sybil defenses, the fake accounts behind those well-maintained Sybils are indistinguishable from non-Sybil users, because this entire set of Sybils has adequate connectivity to non-Sybil users.

Fortunately, we observe that the attack edges from Sybils are usually accompanied by negative feedback from cautious real users, who are resistant to abusive communication. Negative feedback can be a rejection of a friend request or a report of unwanted communication. We have been conducting a study on live fake Facebook accounts in the wild (§2), and find that even well-maintained fake accounts receive a significant amount of such negative feedback. Our key idea is to use this feedback to build a defense graph with reduced weights on attack edges.
In this paper, we use SybilRank, a state-of-the-art social-graph-based Sybil defense scheme [7], as a proof of concept, and adapt it to the weighted defense graph. As shown in simulations (§4), SybilFence improves over the original SybilRank by up to ∼20% in terms of the probability of ranking non-Sybil users higher than Sybils. SybilFence is also more resilient to attacks in which well-maintained Sybils keep soliciting social connections over time. We conjecture that SybilFence can also improve other Sybil detection schemes such as SybilLimit [20] and Sybil tolerance schemes such as Bazaar [16].
2. SYBIL ACTIVITIES AND NEGATIVE FEEDBACK
At a high level, user negative feedback is triggered by abusive activities in OSNs and reflects user distrust of the perpetrator of the abuse. In practice, user negative feedback includes rejections of friend requests and flags on inappropriate incoming communication such as spam, phishing, pornography, and extreme violence. Such negative feedback already exists in OSNs: services like Facebook collect and store it, although some of it may currently be under-utilized or ignored by OSN operators. Therefore, we do not introduce any change to current OSNs' usage model, and our proposal is completely transparent to users. We next take the negative feedback triggered by abusive befriending as an example and explain why this feedback occurs and how it is associated with Sybil activity.
Befriending real users is the first step fake accounts take to infiltrate an OSN after their creation. During this stage, fake accounts attempt to establish many social connections to real users. However, aggressively befriending strangers may trigger negative feedback from resistant users. For instance, a bilateral social connection requires reciprocal agreement from both users, so negative feedback during befriending can take the form of a rejected or ignored friend request.
Study on fake Facebook accounts in the black market.
To better understand the rejection of fake accounts' friend requests in the real world, we have been conducting a study of fake Facebook accounts in the black market. These accounts are an example of well-maintained live fake accounts in the wild. They are priced based on their age, number of friends, number of pictures, etc. According to our purchase experience, a fake account with 50∼100 friends costs $2∼$6, depending on the vendor. The accounts look real, and their friends also have rich content in their profiles, walls, etc. We have purchased accounts from different vendors via Freelancer [2] and BlackHatWorld [1]. Apart from pictures, emails, and security Q/A sets, we explicitly require in our purchases that the accounts have ">50 real US friends". The vendors always ask for a relatively long period to deliver accounts after we order, e.g., one week or one month. Our work toward a large-scale study of live fake accounts in the black market is still in progress.

Figure 1: A sample of purchased accounts.

Figure 2: Friends and pending friend requests on purchased accounts.
Fake accounts look real.
In this study, we use 12 fake accounts with 837 total friends, purchased from different vendors. All of them are at least one year old. Figure 1 shows the profile of a sample account. The profile is crafted as that of a college student, with pictures and posts on the wall. This account has 84 friends and has interacted with some of them through messages and comments.
Fake accounts receive rejections.
On Facebook, a pending request implies a rejection, as Facebook does not provide an explicit rejection option. Therefore, we examine the pending friend requests on each account. Facebook does not expose this statistic to users directly, but provides APIs to access the pending friend requests. Figure 2 shows the number of friends and pending requests on each account. As we can see, although those well-maintained accounts have many social connections to users that may be real, each of them still has a significant number of pending requests. This result also indicates that the friend requests from those accounts have a non-trivial acceptance rate.

Fake accounts can befriend real users.
To estimate how many real users the fake accounts have befriended, we chose 2 accounts and sent a message to each of their friends. In the message, we informed the friends that the account is fake, suggested that they remove the connection, and asked them to message us back if they had established the connection by mistake. In total, we sent the message to 174 friends. Within 48 hours, we received 6 messages clarifying that the connection was established by mistake. We also observed a significant decrease in the number of friends of the test accounts: in total, 16 friends out of 174 disconnected from the fake accounts. This number is a conservative estimate of the real users those fake accounts have connected to, because some real users may have simply ignored our messages.
Non-manipulability of negative feedback.
In our proposal, a user can register negative feedback only if she received unwanted communication such as an unexpected friend request, a spam message, or a spam post on her wall. This means that negative feedback can be generated only if a user has been directly annoyed or harmed. This is in contrast to the negative ratings in online services such as YouTube and Flickr, where users may rate arbitrarily based on their preferences. In our proposal, a real user will not receive negative feedback if she never sends out unwanted communication. We choose abuse-triggered negative feedback because it is non-manipulable under collusion: without the trigger of abusive activities, a group of malicious users cannot collude to direct arbitrary negative feedback at a victim.
Why not use negative feedback to directly detect Sybils?
Negative feedback could be used to directly detect Sybils with machine-learning (ML) techniques. However, ML-based techniques [19] require extensive calibration effort due to the abundance of possible legitimate and malicious behaviors in OSNs. More importantly, these approaches are based on individual user features, so the resulting alarms or alerts apply only to individual users. Such techniques may therefore miss the Sybils behind the active entrance Sybils, or behind entrance Sybils that are currently silent. Instead, SybilFence employs social-graph-based schemes and considers Sybils as groups. By aggregating negative feedback, SybilFence leverages the aggressive behaviors of the entrance Sybils to uncover a much larger set of Sybils behind them.
3. SYSTEM DESIGN
In the previous section, we discussed what user negative feedback is and how it occurs. We now discuss how to incorporate it into social-graph-based Sybil defenses, starting with our system model and threat model.
System model.
We model an OSN with two graphs: 1) the underlying social graph, an undirected graph G+ = (V, E+), where V is the set of users in the OSN and E+ represents the social relationships among users; and 2) the negative feedback graph, a directed graph G− = (V, E−), where V is the same user set as in G+ and E− is the set of directed edges representing negative feedback between users. For each direction between a pair of users, we consider at most one negative feedback edge. Figure 3 shows both the social graph and the negative feedback graph sharing the same user set in an OSN. A node v has a social degree deg+(v) in G+ and an in-degree deg−(v) in G−.

Figure 3: The social graph and the negative feedback graph in an OSN.

Threat model.
Malicious users may launch Sybil attacks by creating many fake accounts. We divide the user set V into two disjoint subsets: non-Sybil users and Sybils, as shown in Figure 3. The non-Sybil region is the collection of the non-Sybil users together with the social edges and negative feedback links among them; the Sybil region is defined similarly with respect to the Sybils. We refer to the social edges between the non-Sybil region and the Sybil region as attack edges, to the Sybils adjacent to attack edges as entrance Sybils, and to the other Sybils as latent Sybils.

SybilFence aims to improve social-graph-based Sybil defenses using negative feedback. Our observation is that Sybils' attack edges are always accompanied by negative feedback. If we discount the trustworthiness of the social edges that come with negative feedback, we can limit the impact of the excessive attack edges on well-maintained entrance Sybils and enable social-graph-based Sybil defenses to uncover the Sybils behind them. SybilFence comprises two major modules: 1) a negative feedback combiner, which incorporates the negative feedback graph into the social graph and generates a defense graph in which the social edges of users that have received negative feedback are discounted; and 2) an adapted social-graph-based defense scheme that detects Sybils on the defense graph with improved accuracy.
Social-graph-based Sybil defenses bound the number of accepted Sybils by the number of attack edges [7, 20], regardless of how much negative feedback the Sybils have received. SybilFence improves on these defenses by reducing the impact of attack edges through the collected user negative feedback. In principle, we aim to build a weighted defense graph from the social graph and the negative feedback graph: we reduce the weights of the social edges of users that have received negative feedback, so that the aggregate weight on attack edges is substantially limited.
Defense graph.
Our key idea is to discount a user's social relationships by the negative feedback that the user has received. By canceling out the social edges that come with negative feedback from other users, we mitigate the impact of the entrance Sybils' attack edges. We define the net social degree of a node v as net(v) = deg+(v) − α × deg−(v) (we require net(v) ≥ 0), where the offset factor α is a positive parameter. A large α imposes a substantial penalty for each incoming negative feedback edge. Thus, a node whose social edges are accompanied by negative feedback has a net social degree net(v) smaller than its social degree deg+(v).

We define the weight of a node v as its net social degree divided by its social degree: w(v) = net(v) / deg+(v). The node weight is essentially the discount rate of the node. With this definition, a real user who never receives negative feedback has a weight of 1, while a user who has received negative feedback is assigned a discounted weight due to the discounted social degree. A node's weight can be read as the extent to which the node can be trusted.

We derive the weight of a social edge from the weights of its endpoints: w(u, v) = min(w(u), w(v)). The weight of an edge is determined by the lower weight of its two endpoints, because either endpoint having triggered negative feedback should discount the edge's quality. Therefore, an edge weight is always at most 1, and low-weight social edges are always adjacent to incoming negative feedback links.

We can thus build a weighted, undirected defense graph G = (V, E+, w), weighting each edge (u, v) with w(u, v) to discount the quality of social edges. Compared to the social graph, where every social edge is treated equally, our defense graph strictly limits the aggregate weight on attack edges using negative feedback.
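The weighting scheme above can be sketched in a few lines. The adjacency-set encoding of the graphs and the toy example below are our own illustration, not from the paper.

```python
# Sketch of SybilFence's defense-graph weighting: net(v) = max(deg+(v) - alpha * deg-(v), 0),
# w(v) = net(v) / deg+(v), and w(u, v) = min(w(u), w(v)).

def node_weight(v, G_plus, neg_in, alpha):
    """w(v) = net(v) / deg+(v); neg_in maps a node to its negative-feedback in-degree."""
    deg_plus = len(G_plus[v])
    if deg_plus == 0:
        return 0.0
    net = max(deg_plus - alpha * neg_in.get(v, 0), 0)
    return net / deg_plus

def edge_weight(u, v, G_plus, neg_in, alpha):
    """w(u, v) = min(w(u), w(v)): either endpoint's feedback discounts the edge."""
    return min(node_weight(u, G_plus, neg_in, alpha),
               node_weight(v, G_plus, neg_in, alpha))

# Toy example: node 'c' has 2 friends but 1 incoming negative feedback; alpha = 1.
G_plus = {'a': {'b', 'c'}, 'b': {'a'}, 'c': {'a', 'd'}, 'd': {'c'}}
neg_in = {'c': 1}
print(node_weight('a', G_plus, neg_in, alpha=1.0))       # 1.0: no feedback received
print(node_weight('c', G_plus, neg_in, alpha=1.0))       # 0.5: (2 - 1) / 2
print(edge_weight('a', 'c', G_plus, neg_in, alpha=1.0))  # 0.5: min of the two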
In our initial design, we take SybilRank [7] as a proof of concept and adapt it to the weighted defense graph. SybilRank comprises three steps: trust propagation, trust normalization, and ranking. We adapt SybilRank to the defense graph as follows. In the first stage, SybilRank propagates trust from the trust seeds via O(log |V|) power iterations; in each iteration, the trust distributed along each edge is proportional to the edge weight. T^(i)(v) denotes the amount of trust on node v in the i-th iteration. We assume that the non-Sybil region of the defense graph remains well connected after the social edges are discounted, and we validate this empirically with simulations (§4).

Algorithm: Adapted SybilRank(G = (V, E+, w), seed set S)

Stage I: O(log |V|)-step trust propagation. Initially, seed the total trust evenly among the trust seeds in S. In step i (0 < i ≤ h, h = O(log |V|)), node u updates its trust as:

T^(i)(u) = Σ_{(u,v)∈E+} T^(i−1)(v) · w(u,v) / Σ_{(k,v)∈E+} w(k,v)

Stage II: Normalize node trust by social degree: T̂(u) = T^(h)(u) / deg+(u).

Stage III: Rank users by their degree-normalized trust T̂, and return the ranked list L.
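The three stages above can be sketched as follows. This is a minimal illustration under our own encoding (edges as frozensets, weights precomputed in a dict); the paper's deployment details are not specified here.

```python
import math

def adapted_sybilrank(nodes, edges, w, seeds, h=None):
    """Sketch of SybilRank adapted to a weighted defense graph.

    nodes: list of node ids; edges: set of frozenset({u, v}) pairs;
    w: dict mapping frozenset({u, v}) -> edge weight; seeds: trusted nodes.
    """
    if h is None:
        h = max(1, int(math.ceil(math.log2(len(nodes)))))  # O(log |V|) steps
    adj = {u: set() for u in nodes}
    for e in edges:
        u, v = tuple(e)
        adj[u].add(v)
        adj[v].add(u)
    # Total edge weight incident to each node (denominator of the update rule).
    wsum = {u: sum(w[frozenset((u, v))] for v in adj[u]) for u in nodes}

    # Stage I: propagate trust from the seeds for h power iterations;
    # each node spreads its trust proportionally to edge weights.
    T = {u: (1.0 / len(seeds) if u in seeds else 0.0) for u in nodes}
    for _ in range(h):
        T_next = {u: 0.0 for u in nodes}
        for v in nodes:
            if wsum[v] == 0:
                continue
            for u in adj[v]:
                T_next[u] += T[v] * w[frozenset((u, v))] / wsum[v]
        T = T_next

    # Stage II: normalize by social degree; Stage III: rank descending.
    T_hat = {u: (T[u] / len(adj[u]) if adj[u] else 0.0) for u in nodes}
    return sorted(nodes, key=lambda u: T_hat[u], reverse=True)

# Toy graph: trusted seed 'a'; Sybil 'x' attached via a discounted edge.
w = {frozenset(('a', 'b')): 1.0, frozenset(('b', 'c')): 1.0,
     frozenset(('a', 'c')): 1.0, frozenset(('c', 'x')): 0.2}
ranked = adapted_sybilrank(['a', 'b', 'c', 'x'], set(w), w, {'a'})
print(ranked)  # ['a', 'b', 'c', 'x']: the low-weight node ranks last
```

The discounted edge to 'x' both limits the trust it receives and, after degree normalization, pushes it below the well-connected non-Sybil nodes.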
4. SIMULATION STUDY
To better understand how our initial design improves detection accuracy, we evaluate SybilFence against the original SybilRank. We simulate user friend requests on social graphs and use the request rejections as negative feedback.
We simulate Sybil attacks on four social graphs (Table 1). The Facebook graph is sampled via the "forest fire" sampling method [14]. The synthetic graph is generated with the scale-free model [4]. We connect a Sybil region of 5,000 Sybils to each social graph, and simulate the social connections among Sybils by establishing social edges from each arriving Sybil to 5 other random Sybils.
Social Network    Nodes    Edges    Clustering Coefficient    Diameter
Facebook          …,000    40,013   0.…                       …
ca-AstroPh [3]    …,772    198,080  0.…                       …
ca-HepTh [3]      …,877    25,985   0.…                       …
Synthetic         …,000    39,399   0.…                       …

Table 1: Social graphs used in our simulation.
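The Sybil-region construction described above (each arriving Sybil connects to 5 existing Sybils chosen at random) can be sketched as follows; the integer node ids and edge encoding are our own choices for illustration.

```python
import random

def build_sybil_region(n_sybils, k=5, seed=0):
    """Attach each arriving Sybil to min(k, #existing) random earlier Sybils,
    mirroring the simulation setup. Returns the intra-Sybil edge set."""
    rng = random.Random(seed)
    edges = set()
    for new in range(1, n_sybils):
        # The first few arrivals connect to all existing Sybils; later ones to k.
        for target in rng.sample(range(new), min(k, new)):
            edges.add(frozenset((new, target)))
    return edges

edges = build_sybil_region(5000)
# Arrivals 1..4 add 1+2+3+4 edges; arrivals 5..4999 add 5 each.
print(len(edges))  # 10 + 5 * 4995 = 24985
```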
Similar to [7] and [18], we use the area under the Receiver Operating Characteristic (ROC) curve [12] as the metric to compare the quality of the rankings that social-graph-based schemes use to uncover Sybils. The area under the ROC curve measures the probability that a random Sybil is ranked lower than a random non-Sybil user; it ranges from 0 to 1.
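The area-under-ROC metric can be computed directly as this pairwise probability. The sketch below is our own implementation of that definition, with ties counted as half a win; the scores are illustrative.

```python
def auc_ranking_quality(trust, non_sybils, sybils):
    """Probability that a random non-Sybil outranks a random Sybil
    (ties count as 0.5). This equals the area under the ROC curve."""
    wins = 0.0
    for u in non_sybils:
        for s in sybils:
            if trust[u] > trust[s]:
                wins += 1.0
            elif trust[u] == trust[s]:
                wins += 0.5
    return wins / (len(non_sybils) * len(sybils))

# Toy scores: one Sybil ('s1') sneaks above one non-Sybil ('u2').
trust = {'u1': 0.9, 'u2': 0.4, 's1': 0.6, 's2': 0.1}
print(auc_ranking_quality(trust, ['u1', 'u2'], ['s1', 's2']))  # 0.75
```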
Figure 4: Impact of the offset factor of negative feedback.
Users can send friend requests to others. An accepted request yields a social edge, while a rejection produces a negative feedback edge.
Rejections to Sybil users.
We simulate the process by which entrance Sybils solicit social edges from non-Sybil users. In the Sybil region, we designate 200 nodes as entrance Sybils and the remaining 4,800 nodes as latent Sybils. The entrance Sybils represent well-maintained fake accounts, which continuously send friend requests and, due to their better maintenance, have lower rejection rates than latent Sybils. By default, we set the rejection rate of entrance Sybils by non-Sybil users to 60%, and that of latent Sybils to 98%.
Rejections to non-Sybil users.
We simulate rejections to non-Sybil users based on the social graph. In particular, given a rejection rate for non-Sybil users and the number of friends a non-Sybil user has in the social graph, we can infer the number of rejections this user has received. We then add this many rejections to the non-Sybil user by randomly selecting non-friend users and simulating a rejection from each of them. We set the rejection rate for non-Sybil users to 1%, and study how SybilFence's performance varies with this rate later in this section. We now present the simulation results on the Facebook graph; the results on the other graphs are similar (see Appendix).
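The inference step above can be sketched as follows. The formula k = f · r / (1 − r) (rejections inferred from f accepted friends at rejection rate r) is our own reading of the setup, since the paper does not spell out the exact computation, and the names below are illustrative.

```python
import random

def add_rejections(user, friends_count, reject_rate, all_users, friend_set, rng):
    """Infer a non-Sybil user's rejection count from their friend count and a
    global rejection rate, then attribute each rejection to a random non-friend.
    Returns directed negative-feedback edges (rejector -> user).
    ASSUMPTION: if a fraction r of requests are rejected, a user with f accepted
    requests has roughly f * r / (1 - r) rejections."""
    n_reject = round(friends_count * reject_rate / (1.0 - reject_rate))
    candidates = [u for u in all_users if u != user and u not in friend_set]
    rejectors = rng.sample(candidates, min(n_reject, len(candidates)))
    return [(src, user) for src in rejectors]

rng = random.Random(1)
users = list(range(100))
# A non-Sybil user (id 0) with 50 friends at the default 1% rejection rate.
fb = add_rejections(0, 50, 0.01, users, set(range(1, 51)), rng)
print(len(fb))  # 1: round(50 * 0.01 / 0.99)
```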
Impact of the negative-feedback offset factor.
The offset factor α (§3) determines how aggressively SybilFence can leverage rejections to cancel out the attack edges on entrance Sybils. Because the entrance Sybils also have social edges from other Sybils, a larger offset factor can yield further improvement. Figure 4 shows that the improvement keeps increasing until the offset factor reaches a sufficiently large value, i.e., 3.0. With this value, SybilFence cancels out most of the entrance Sybils' social edges from both non-Sybil users and Sybils.
Figure 5: Resilience to the number of requests from entrance Sybils.
Figure 6: Detection accuracy as a function of the rejection rate of the Sybils' requests.
Resilience to Sybils’ flooding requests.
Fake accounts can solicit social edges by flooding friend requests, steadily increasing the number of attack edges. We study SybilFence's resilience to request flooding: we set the offset factor to 1 and vary the number of requests each entrance Sybil sends from 4 to 36, while each latent Sybil sends 2 requests to random non-Sybil users. Consequently, the number of attack edges increases from ∼500 to ∼3,000.

Impact of the rejection rate to Sybils' requests.
At a high level, both SybilFence and social-graph-based schemes rely on non-Sybil users to defend against Sybils. We therefore investigate the impact of the rejection rate applied to Sybils' requests. In this simulation, each entrance Sybil sends 25 requests to random non-Sybil users, and we vary the rejection rate for these requests from 0.5 to 0.95; the number of attack edges decreases accordingly.

Rejections among non-Sybil users.
SybilFence is based on the assumption that non-Sybil users are less likely than Sybils to receive negative feedback from others. We study SybilFence's performance under a varying rejection rate for non-Sybil users' requests, simulating the rejections among non-Sybil users as described earlier in this section. Figure 7 shows the results.

Figure 7: Detection accuracy as a function of the rejection rate of non-Sybil users' requests.
5. RELATED WORK
This work is mainly related to social-graph-based Sybil defenses [7, 8, 17, 18, 20]. These proposals rely on social graph properties to distinguish Sybils from non-Sybil users, i.e., on the assumption that attack edges are strictly limited. Existing schemes bound the number of accepted Sybils by the number of attack edges; thus, fake accounts benefit from soliciting social connections. SybilFence improves on existing Sybil defenses by leveraging negative feedback from users, and can further uncover fake accounts that have obtained social edges to real users but have inevitably received negative feedback from resistant users.

There have also been proposals to propagate distrust in social graphs [5, 9, 11, 13, 22]. However, those approaches are proposed as general techniques for social network analysis rather than as Sybil defenses. Furthermore, the distrust in those proposals is not securely defined: it can include arbitrary negative information and is not resilient to user collusion.
6. CONCLUSION AND FUTURE WORK
The detection of fake accounts in OSNs has become increasingly urgent, as both OSN operators and users suffer from illegal exploitation. We observe that even well-maintained fake accounts inevitably receive negative feedback from others, because their controllers have only limited knowledge of users' security awareness. Thus, user negative feedback can be used to strengthen existing Sybil defenses. We propose SybilFence, which incorporates user negative feedback into social-graph-based Sybil defenses. Fake accounts can evade SybilFence only if they connect to real users while receiving little negative feedback from others. SybilFence therefore advances Sybil defense by raising the cost for Sybils to evade detection.

In future work, we plan to continue our study of live fake accounts in the black market to further quantify the negative feedback they receive. With this study, we plan to complete and extend our preliminary SybilFence design, implement the complete SybilFence system, and possibly deploy it in real OSNs.
7. REFERENCES
[1] BlackHatWorld.
[2] Freelancer.
[3] Stanford Large Network Dataset Collection. http://snap.stanford.edu/data/index.html.
[4] A.-L. Barabási and R. Albert. Emergence of Scaling in Random Networks. Science, 286:509–512, 1999.
[5] C. Borgs, J. Chayes, A. T. Kalai, A. Malekian, and M. Tennenholtz. A Novel Approach to Propagating Distrust. In WINE, 2010.
[6] Y. Boshmaf, I. Muslukhov, K. Beznosov, and M. Ripeanu. The Socialbot Network: When Bots Socialize for Fame and Money. In ACSAC, 2011.
[7] Q. Cao, M. Sirivianos, X. Yang, and T. Pregueiro. Aiding the Detection of Fake Accounts in Large Scale Social Online Services. In NSDI, 2012.
[8] G. Danezis and P. Mittal. SybilInfer: Detecting Sybil Nodes using Social Networks. In NDSS, 2009.
[9] C. de Kerchove and P. Van Dooren. The PageTrust Algorithm: How to Rank Web Pages When Negative Links are Allowed? In SDM, 2008.
[10] H. Gao, J. Hu, C. Wilson, Z. Li, Y. Chen, and B. Y. Zhao. Detecting and Characterizing Social Spam Campaigns. In IMC, 2010.
[11] R. Guha, R. Kumar, P. Raghavan, and A. Tomkins. Propagation of Trust and Distrust. In WWW, 2004.
[12] J. A. Hanley and B. J. McNeil. The Meaning and Use of the Area under a Receiver Operating Characteristic (ROC) Curve. Radiology, 143, 1982.
[13] J. Kunegis, A. Lommatzsch, and C. Bauckhage. The Slashdot Zoo: Mining a Social Network with Negative Edges. In WWW, 2009.
[14] J. Leskovec and C. Faloutsos. Sampling from Large Graphs. In SIGKDD, 2006.
[15] S. Panagiotopoulos, Q. Cao, M. Sirivianos, A. Stavrou, C. Liang, and X. Yang. Quantifying the Cost of Sybil Attacks in Online Social Networks. http://tinyurl.com/quantifying-sybil-cost, 2011.
[16] A. Post, V. Shah, and A. Mislove. Bazaar: Strengthening User Reputations in Online Marketplaces. In NSDI, 2011.
[17] D. N. Tran, B. Min, J. Li, and L. Subramanian. Sybil-Resilient Online Content Rating. In NSDI, 2009.
[18] B. Viswanath, A. Post, K. P. Gummadi, and A. Mislove. An Analysis of Social Network-based Sybil Defenses. In SIGCOMM, 2010.
[19] Z. Yang, C. Wilson, X. Wang, T. Gao, B. Y. Zhao, and Y. Dai. Uncovering Social Network Sybils in the Wild. In IMC, 2011.
[20] H. Yu, P. B. Gibbons, M. Kaminsky, and F. Xiao. SybilLimit: A Near-Optimal Social Network Defense Against Sybil Attacks. In IEEE S&P, 2008.
[21] H. Yu, M. Kaminsky, P. B. Gibbons, and A. Flaxman. SybilGuard: Defending Against Sybil Attacks via Social Networks. In SIGCOMM, 2006.
[22] C.-N. Ziegler and G. Lausen. Propagation Models for Trust and Distrust in Social Networks. Information Systems Frontiers, 7, 2005.
APPENDIX
Figure 8, Figure 9, and Figure 10 show the simulation results on the ca-AstroPh, ca-HepTh, and synthetic graphs (Table 1). On these graphs, SybilFence achieves improvements over SybilRank similar to those described in §4. Each figure plots the area under the ROC curve for the improved and original schemes as a function of (a) the offset factor, (b) the number of flooding requests, (c) the rejection rate to Sybil requests, and (d) the rejection rate to non-Sybil users' requests.

Figure 8: Simulation results on ca-AstroPh.

Figure 9: Simulation results on ca-HepTh.

Figure 10: Simulation results on the synthetic graph.