Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Yafei Yang is active.

Publication


Featured research published by Yafei Yang.


ACM Symposium on Applied Computing | 2009

Defending online reputation systems against collaborative unfair raters through signal modeling and trust

Yafei Yang; Yan Lindsay Sun; Steven Kay; Qing Yang

Online feedback-based rating systems are gaining popularity. Dealing with collaborative unfair ratings in such systems has been recognized as an important but difficult problem. This problem is especially challenging when the number of honest ratings is relatively small and unfair ratings can contribute a significant portion of the overall ratings. In addition, the lack of unfair rating data from real human users is another obstacle to realistic evaluation of defense mechanisms. In this paper, we propose a set of methods that jointly detect smart and collaborative unfair ratings based on signal modeling. Based on this detection, a trust-assisted rating aggregation framework is developed. Furthermore, we design and launch a Rating Challenge to collect unfair rating data from real human users. The proposed system is evaluated through simulations as well as experiments using real attack data. Compared with existing schemes, the proposed system can significantly reduce the impact of collaborative unfair ratings.
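The trust-assisted aggregation idea from this abstract can be sketched roughly as follows. This is an illustrative simplification, not the authors' actual method: it assumes each rater carries a trust score in [0, 1], that a separate detector supplies the set of suspected unfair raters, and that the `penalty` and `min_trust` parameters are hypothetical.

```python
# Illustrative sketch of trust-assisted rating aggregation.
# Assumption: suspected unfair raters (flagged by some detector) have their
# trust discounted, and the final score is a trust-weighted average.

def aggregate(ratings, trust, flagged, penalty=0.5, min_trust=0.1):
    """ratings: {rater: value}, trust: {rater: score in [0,1]}, flagged: set of raters."""
    adjusted = {
        r: max(min_trust, trust[r] * (penalty if r in flagged else 1.0))
        for r in ratings
    }
    total = sum(adjusted.values())
    # Trust-weighted average: flagged raters still count, but with less weight.
    return sum(ratings[r] * adjusted[r] for r in ratings) / total

ratings = {"a": 5, "b": 5, "c": 1, "d": 1, "e": 1}
trust = {r: 1.0 for r in ratings}
flagged = {"c", "d", "e"}  # e.g. detected as collaborative unfair raters
print(round(aggregate(ratings, trust, flagged), 2))  # → 3.29 (plain mean would be 2.6)
```

With no detection, the three coordinated low ratings drag the mean down to 2.6; once flagged and down-weighted, the aggregate moves back toward the honest ratings.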


International Conference on Communications | 2007

Trust Establishment in Distributed Networks: Analysis and Modeling

Yan Lindsay Sun; Yafei Yang

Recently, trust establishment has been recognized as an important approach to defending distributed networks, such as mobile ad hoc networks and sensor networks, against malicious attacks. Trust establishment mechanisms can stimulate collaboration among distributed computing and communication entities, facilitate the detection of untrustworthy entities, and assist decision-making in various protocols. In the current literature, the methods proposed for trust establishment are almost always evaluated through simulation, and theoretical analysis is extremely rare. In this paper, we present a suite of approaches to analyze the trust establishment process. These approaches provide an in-depth understanding of the trust establishment process and a quantitative comparison among trust establishment methods. The proposed analysis methods are validated through simulations.


International Workshop on Security | 2008

RepTrap: a novel attack on feedback-based reputation systems

Yafei Yang; Qinyuan Feng; Yan Lindsay Sun; Yafei Dai

Reputation systems are playing critical roles in securing today's distributed computing and communication systems. Like other security mechanisms, reputation systems can come under attack. In this paper, we report the discovery of a new attack, named RepTrap (Reputation Trap), against feedback-based reputation systems, such as those used in P2P file-sharing systems and e-commerce websites (e.g., Amazon.com). We conduct an in-depth investigation of this new attack, including analysis, case study, and performance evaluation based on real data and realistic user behavior models. We discover that RepTrap is a strong and destructive attack that can manipulate the reputation scores of users and objects, and even undermine the entire reputation system. Compared with other known attacks that achieve similar goals, RepTrap requires less effort from the attackers and causes multi-dimensional damage to the reputation systems.


Asilomar Conference on Signals, Systems and Computers | 2008

Detection of collusion behaviors in online reputation systems

Yuhong Liu; Yafei Yang; Yan Lindsay Sun

Online reputation systems are gaining popularity. Dealing with collaborative unfair ratings in such systems has been recognized as an important but difficult problem. Current defense mechanisms focus on analyzing rating values for individual products. In this paper, we propose a scheme that detects collaborative unfair raters based on similarity in their rating behaviors. The proposed scheme integrates anomaly detection in both the rating-value domain and the user domain. To evaluate the proposed scheme in realistic scenarios, we design and launch a cyber competition in which attack data from real human users are collected. The proposed system is evaluated through experiments using real attack data. The proposed scheme can accurately detect collusion behaviors and therefore significantly reduce the damage caused by collaborative dishonest users.
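The core idea of behavioral similarity can be sketched in a few lines. This is a hypothetical toy version only: the paper's detector combines rating-value and user-domain analysis, whereas here we merely flag pairs of raters whose ratings agree on an unusually high fraction of co-rated products (the `threshold` is an assumed parameter).

```python
# Sketch of detecting collusion via similarity in rating behavior.
# Raters who give near-identical ratings across many products are
# flagged as suspected colluders.

from itertools import combinations

def suspected_colluders(profiles, threshold=0.9):
    """profiles: {rater: {product: rating}}. Returns suspicious rater pairs."""
    pairs = []
    for a, b in combinations(profiles, 2):
        shared = profiles[a].keys() & profiles[b].keys()
        if len(shared) < 2:
            continue
        # Fraction of co-rated products where the two ratings agree exactly.
        agree = sum(profiles[a][p] == profiles[b][p] for p in shared) / len(shared)
        if agree >= threshold:
            pairs.append((a, b))
    return pairs

profiles = {
    "u1": {"p1": 1, "p2": 1, "p3": 1},  # colluders: identical downvoting
    "u2": {"p1": 1, "p2": 1, "p3": 1},
    "u3": {"p1": 5, "p2": 4, "p3": 3},  # honest rater
}
print(suspected_colluders(profiles))  # → [('u1', 'u2')]
```

A value-only detector sees three plausible 1-star ratings per product; the user-domain view exposes that the same pair of accounts always moves together.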


International Conference on Distributed Computing Systems Workshops | 2007

Building Trust in Online Rating Systems Through Signal Modeling

Yafei Yang; Yan Lindsay Sun; Jin Ren; Qing Yang

Online feedback-based rating systems are gaining popularity. Dealing with unfair ratings in such systems has been recognized as an important but difficult problem. This problem is especially challenging when the number of regular ratings is relatively small and the unfair ratings contribute a significant portion of the overall ratings. In this paper, we propose a novel algorithm to detect the unfair ratings that cannot be effectively prevented by existing state-of-the-art techniques. Our algorithm is particularly effective at detecting malicious raters that collaboratively manipulate the ratings of one or several products. The main idea of our algorithm is to use an autoregressive signal modeling technique combined with trust-enhanced rating aggregation, which allows us to detect and filter out unfair ratings very accurately. Extensive experiments through simulations and real-world data have been performed to validate the proposed algorithm. The experimental results show significant improvements in detecting collaborative unfair raters over existing techniques.
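The autoregressive idea can be illustrated with a rough sketch: treat the rating stream for a product as a time series, fit a low-order AR model, and flag samples whose prediction error is abnormally large. The AR(1) order, the least-squares fit, and the 2-sigma threshold below are all assumed for illustration; the paper's detector is more elaborate.

```python
# Rough sketch of AR-model-based unfair-rating detection (illustrative only).

def ar1_residuals(series):
    """Fit AR(1) by least squares on the mean-removed series; return residuals."""
    mean = sum(series) / len(series)
    x = [v - mean for v in series]
    num = sum(x[t] * x[t - 1] for t in range(1, len(x)))
    den = sum(x[t - 1] ** 2 for t in range(1, len(x))) or 1.0
    phi = num / den
    return [x[t] - phi * x[t - 1] for t in range(1, len(x))]

def flag_anomalies(series, k=2.0):
    res = ar1_residuals(series)
    sigma = (sum(r * r for r in res) / len(res)) ** 0.5
    # Series indices (residuals start at t=1) where the prediction
    # error exceeds k standard deviations.
    return [t + 1 for t, r in enumerate(res) if abs(r) > k * sigma]

# A steady stream of ~4-star ratings with a burst of coordinated 1-star ratings:
ratings = [4, 4, 5, 4, 4, 4, 1, 1, 4, 4]
print(flag_anomalies(ratings))  # → [6]: the onset of the 1-star burst
```

The model learns the normal fluctuation of honest ratings, so the sudden coordinated drop produces a residual the honest dynamics cannot explain.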


IEEE Transactions on Knowledge and Data Engineering | 2010

Voting Systems with Trust Mechanisms in Cyberspace: Vulnerabilities and Defenses

Qinyuan Feng; Yan Lindsay Sun; Ling Liu; Yafei Yang; Yafei Dai

With the popularity of voting systems in cyberspace, there is growing evidence that current voting systems can be manipulated by fake votes. This problem has attracted many researchers working on guarding voting systems in two areas: relieving the effect of dishonest votes by evaluating the trust of voters, and limiting the resources that can be used by attackers, such as the number of voters and the number of votes. In this paper, we argue that powering voting systems with trust and limiting attack resources are not enough. We present a novel attack named Reputation Trap (RepTrap). Our case study and experiments show that this new attack requires far fewer resources to manipulate voting systems and achieves a much higher success rate than existing attacks. We further identify the reasons behind this attack and propose two defense schemes accordingly. In the first scheme, we hide correlation knowledge from attackers to reduce their chances of affecting honest voters. In the second scheme, we introduce a new metric, robustness-of-evidence, into the trust calculation to reduce attackers' effect on honest voters. We conduct extensive experiments to validate our approach. The results show that our defense schemes not only reduce the success rate of attacks but also significantly increase the amount of resources an adversary needs to launch a successful attack.


IEEE Transactions on Information Forensics and Security | 2009

Securing Rating Aggregation Systems Using Statistical Detectors and Trust

Yafei Yang; Yan Sun; Steven Kay; Qing Yang

Online feedback-based rating systems are gaining popularity. Dealing with unfair ratings in such systems has been recognized as an important but difficult problem. This problem is especially challenging when the number of regular ratings is relatively small and unfair ratings can contribute a significant portion of the overall ratings. Furthermore, the lack of unfair rating data from real human users is another obstacle to realistic evaluation of defense mechanisms. In this paper, we propose a set of statistical methods to jointly detect collaborative unfair ratings in product-rating-type online rating systems. Based on this detection, a trust-assisted rating aggregation framework is developed. Furthermore, we collect unfair rating data from real human users through a rating challenge. The proposed system is evaluated through simulations as well as experiments using real attack data. Compared with existing schemes, the proposed system can significantly reduce the negative impact of unfair ratings.


Journal of Computer Science and Technology | 2009

Dishonest behaviors in online rating systems: cyber competition, attack models, and attack generator

Yafei Yang; Qinyuan Feng; Yan Lindsay Sun; Yafei Dai

Online rating systems have recently been gaining popularity. Dealing with unfair ratings in such systems has been recognized as an important but challenging problem. Many unfair rating detection approaches have been developed and evaluated against simple attack models. However, the lack of unfair rating data from real human users and realistic attack behavior models has become an obstacle to developing reliable rating systems. To solve this problem, we design and launch a rating challenge to collect unfair rating data from real human users. In order to broaden the scope of the data collection, we also develop a comprehensive signal-based unfair rating detection system. Based on the analysis of real attack data, we discover important features in unfair ratings, build attack models, and develop an unfair rating generator. The models and generator developed in this paper can be directly used to test current rating aggregation systems, as well as to assist the design of future rating systems.


International Conference on Distributed Computing Systems Workshops | 2008

Modeling Attack Behaviors in Rating Systems

Qinyuan Feng; Yafei Yang; Yan Lindsay Sun; Yafei Dai

Online feedback-based rating systems are gaining popularity. Dealing with unfair ratings in such systems has been recognized as an important problem, and many unfair rating detection approaches have been developed. Currently, these approaches are evaluated against simple attack models, but attackers in the real world can use far more complicated strategies. The lack of unfair rating data from real human users and realistic attack behavior models has become an obstacle to developing reliable rating systems. To solve this problem, we design and launch a rating challenge to collect unfair rating data from real human users. In order to broaden the scope of the data collection, we also develop a comprehensive signal-based unfair rating detection system. Based on the analysis of real attack data, we discover important features in unfair ratings, build attack models, and develop an unfair rating generator. The models and generator developed in this paper can be directly used to evaluate current rating aggregation systems, as well as to assist the design of future rating systems.


Global Communications Conference | 2008

Securing Time-Synchronization Protocols in Sensor Networks: Attack Detection and Self-Healing

Yafei Yang; Yan Sun

Many time synchronization protocols have been proposed for sensor networks. However, the security problems related to time synchronization have not yet been fully solved. If malicious entities manipulate time synchronization, many functionalities of the sensor network would fail. In this paper, we identify various attacks against time synchronization and then develop a detection and self-healing scheme to defeat those attacks. The proposed scheme has three phases: (1) abnormality detection performed by individual sensors, (2) trust-based malicious node detection performed by the base station, and (3) self-healing through changing the topology of the synchronization tree. Simulations are performed to demonstrate the effectiveness of the proposed scheme as well as its implementation overhead.
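Phase (1) of such a scheme can be sketched with a simple plausibility check: a sensor flags a synchronization message as abnormal when the implied clock correction exceeds what its worst-case oscillator drift could explain. The drift bound and thresholds below are hypothetical illustrations, not values from the paper.

```python
# Minimal sketch of per-sensor abnormality detection for time synchronization.
# Assumption: an honest clock drifts at most MAX_DRIFT_PPM, so any larger
# implied correction suggests a manipulated synchronization message.

MAX_DRIFT_PPM = 50  # assumed worst-case oscillator drift (parts per million)

def correction_is_abnormal(offset_ms, elapsed_since_last_sync_ms):
    """True if the requested clock correction is too large to be honest drift."""
    max_plausible = MAX_DRIFT_PPM * 1e-6 * elapsed_since_last_sync_ms
    return abs(offset_ms) > max_plausible

# 60 s since the last sync: drift can explain at most 3 ms of offset.
print(correction_is_abnormal(2.0, 60_000))    # → False: within the drift bound
print(correction_is_abnormal(250.0, 60_000))  # → True: likely manipulated
```

In the full scheme, such local flags would feed the base station's trust-based detection (phase 2), which in turn drives the synchronization-tree reconfiguration (phase 3).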

Collaboration


Dive into Yafei Yang's collaborations.

Top Co-Authors

Yan Lindsay Sun — University of Rhode Island

Qing Yang — University of Rhode Island

Yan Sun — University of Rhode Island

Steven Kay — University of Rhode Island