Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Huseyin Polat is active.

Publication


Featured research published by Huseyin Polat.


Artificial Intelligence Review | 2014

Shilling attacks against recommender systems: a comprehensive survey

Ihsan Gunes; Cihan Kaleli; Alper Bilge; Huseyin Polat

Online vendors employ collaborative filtering algorithms to provide recommendations to their customers and thereby increase their sales and profits. Although recommendation schemes are successful in e-commerce sites, they are vulnerable to shilling or profile injection attacks. On the one hand, online shopping sites utilize collaborative filtering schemes to enhance their competitive edge over other companies. On the other hand, malicious users and/or competing vendors might decide to insert fake profiles into the user-item matrices in such a way that they bias the predicted ratings to their advantage. In the past decade, various studies have scrutinized different shilling attack strategies, profile injection attack types, shilling attack detection schemes, and robust algorithms proposed to overcome such attacks, and have evaluated them with respect to accuracy, cost/benefit, and overall performance. Due to their popularity and importance, we survey shilling attacks against collaborative filtering algorithms. Giving an overall picture of various shilling attack types by introducing new classification attributes is imperative for further research. Explaining shilling attack detection schemes and the robust algorithms proposed so far in detail might pave the way for developing new detection schemes, enhancing existing robust algorithms, or even proposing new ones. Thus, we describe various attack types and introduce new dimensions for attack classification. Detailed descriptions of the proposed detection and robust recommendation algorithms are given. Moreover, we briefly explain how the proposed schemes are evaluated. We conclude the paper by discussing various open questions.
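As an illustration of the profile injection idea the survey describes, the sketch below constructs one fake profile in the style of an "average attack": the target item gets the maximum rating, while randomly chosen filler items are rated at their observed per-item means so the profile resembles a genuine user. Function and parameter names are hypothetical, not from the paper.

```python
import numpy as np

def make_average_attack_profile(ratings, target_item, n_filler, rng):
    """Build one fake 'average attack' profile (illustrative sketch).

    ratings: (users x items) matrix with np.nan for missing entries.
    The target item is pushed with the maximum rating; filler items
    are rated at their per-item mean to mimic genuine users.
    """
    n_items = ratings.shape[1]
    profile = np.full(n_items, np.nan)
    # Push the target item with the highest possible rating.
    profile[target_item] = 5.0
    # Pick filler items at random, excluding the target.
    candidates = [i for i in range(n_items) if i != target_item]
    fillers = rng.choice(candidates, size=n_filler, replace=False)
    item_means = np.nanmean(ratings, axis=0)
    profile[fillers] = item_means[fillers]
    return profile

rng = np.random.default_rng(0)
ratings = np.array([[4.0, np.nan, 2.0, 5.0],
                    [3.0, 1.0, np.nan, 4.0],
                    [np.nan, 2.0, 3.0, 5.0]])
fake = make_average_attack_profile(ratings, target_item=1, n_filler=2, rng=rng)
```

Injecting many such profiles shifts the target item's predicted ratings upward for users who appear similar to the fakes, which is exactly the effect the detection schemes in the survey try to identify.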


International Journal of Electronic Commerce | 2006

Privacy-preserving collaborative filtering

Wenliang Du; Huseyin Polat

Collaborative filtering (CF) techniques are becoming very popular on the Internet and are widely used in several domains to cope with information overload. E-commerce sites use filtering systems to recommend products to customers based on the preferences of like-minded customers, but their systems do not protect user privacy. Because users concerned about privacy may give false information, it is not easy to collect high-quality user data for collaborative filtering, and recommendation systems using poor data produce inaccurate recommendations. This means that privacy measures are key to collecting high-quality data and providing accurate recommendations. This article discusses collaborative filtering with privacy based on both correlation and singular-value decomposition (SVD) and proposes the use of randomized perturbation techniques to protect user privacy while producing reasonably accurate recommendations. Such techniques add randomness to the original data, preventing the data collector (the server) from learning private user data, yet the scheme can still provide accurate recommendations. Experiments were conducted with real datasets to evaluate the overall performance of the proposed scheme. The results were used to analyze how different parameters affect accuracy. Collaborative filtering systems using randomized perturbation techniques were found to provide accurate recommendations while preserving user privacy.
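The core intuition behind randomized perturbation can be sketched in a few lines: each user adds zero-mean noise to their own ratings before sending them, so the server never sees true values, yet aggregate statistics (such as per-item means used by CF) are nearly preserved because the noise averages out. The noise range and data sizes below are illustrative choices, not the paper's parameters.

```python
import numpy as np

rng = np.random.default_rng(42)
n_users, n_items = 5000, 3
true_ratings = rng.integers(1, 6, size=(n_users, n_items)).astype(float)

# Each user perturbs their own ratings with zero-mean uniform noise
# before sending them to the server (the range [-1, 1] is a
# hypothetical choice; wider noise means more privacy, less accuracy).
noise = rng.uniform(-1.0, 1.0, size=true_ratings.shape)
disguised = true_ratings + noise

# The server only ever sees `disguised`, yet per-item means are
# nearly preserved because the zero-mean noise cancels in aggregate.
true_means = true_ratings.mean(axis=0)
disguised_means = disguised.mean(axis=0)
max_err = np.abs(true_means - disguised_means).max()
```

This privacy/accuracy trade-off, controlled by the noise magnitude, is precisely what the article's parameter analysis explores.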


European Conference on Machine Learning | 2005

Privacy-preserving collaborative filtering on vertically partitioned data

Huseyin Polat; Wenliang Du

Collaborative filtering (CF) systems are widely used by e-commerce sites to provide predictions using existing databases comprised of ratings recorded from groups of people evaluating various items. Sometimes, however, such systems' ratings are split among different parties. To provide better filtering services, such parties may wish to share their data. However, due to privacy concerns, data owners do not want to disclose their data. This paper presents a privacy-preserving protocol for CF based on vertically partitioned data. We conducted various experiments to evaluate the overall performance of our scheme.


Knowledge and Information Systems | 2012

Privacy-preserving hybrid collaborative filtering on cross distributed data

Ibrahim Yakut; Huseyin Polat

Data collected for collaborative filtering (CF) purposes might be cross distributed between two online vendors, even competing companies. Such corporations might want to integrate their data to provide more precise and reliable recommendations. However, due to privacy, legal, and financial concerns, they do not desire to disclose their private data to each other. If privacy-preserving measures are introduced, they might decide to generate predictions based on their distributed data collaboratively. In this study, we investigate how to offer hybrid CF-based referrals with decent accuracy on cross distributed data (CDD) between two e-commerce sites while maintaining their privacy. Our proposed schemes should prevent data holders from learning true ratings and rated items held by each other while still allowing them to provide accurate CF services efficiently. We perform real data-based experiments to evaluate our proposals in terms of accuracy. The results show that the proposed methods are able to provide precise predictions. Moreover, we analyze our schemes in terms of privacy and supplementary costs. We demonstrate that our schemes are secure, and online overhead costs due to privacy concerns are insignificant.


European Conference on Principles of Data Mining and Knowledge Discovery | 2007

Providing Naïve Bayesian Classifier-Based Private Recommendations on Partitioned Data

Cihan Kaleli; Huseyin Polat

Data collected for collaborative filtering (CF) purposes might be split between various parties. Integrating such data is helpful for both e-companies and customers due to the mutual advantages. However, for privacy reasons, data owners do not want to disclose their data. We hypothesize that if privacy measures are provided, data holders might decide to integrate their data to perform richer CF services. In this paper, we investigate how to achieve naive Bayesian classifier (NBC)-based CF tasks on partitioned data with privacy. We perform experiments on real data, analyze our outcomes, and provide some suggestions.


Atlantic Web Intelligence Conference | 2007

Providing Private Recommendations Using Naïve Bayesian Classifier

Cihan Kaleli; Huseyin Polat

Today’s CF systems fail to protect users’ privacy. Without privacy protection, it becomes a challenge to collect sufficient, high-quality data for CF. With privacy protection, users feel comfortable providing more truthful and dependable data. In this paper, we propose to employ randomized response techniques (RRT) to protect users’ privacy while producing accurate referrals using the naive Bayesian classifier (NBC), one of the most successful learning algorithms. We perform various experiments using real data sets to evaluate our privacy-preserving schemes.
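The randomized response idea mentioned above can be sketched for a binary rating (like/dislike): each user tells the truth only with probability theta and flips their answer otherwise, so no individual report is trustworthy, yet the server can unbias the aggregate. The value of theta and the data are illustrative, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(7)
theta = 0.8          # probability of answering truthfully (hypothetical choice)
n_users = 20000
true_likes = rng.random(n_users) < 0.3   # 30% of users truly like the item

# Each user independently flips their binary answer with prob 1 - theta,
# so the server cannot tell whether any single report is genuine.
tell_truth = rng.random(n_users) < theta
reported = np.where(tell_truth, true_likes, ~true_likes)

# The server can still unbias the aggregate:
# P(report = 1) = theta * p + (1 - theta) * (1 - p), solve for p.
p_obs = reported.mean()
p_est = (p_obs - (1 - theta)) / (2 * theta - 1)
```

Estimates of class proportions recovered this way are what a learner such as an NBC would consume in place of the raw, privacy-sensitive counts.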


Artificial Intelligence Review | 2010

Private predictions on hidden Markov models

Huseyin Polat; Wenliang Du; Sahin Renckes; Yusuf Oysal

Hidden Markov models (HMMs) are widely used in practice to make predictions. They are becoming increasingly popular models as part of prediction systems in finance, marketing, bio-informatics, speech recognition, signal processing, and so on. However, traditional HMMs do not allow people and model owners to generate predictions without disclosing their private information to each other. To address the increasing need for privacy, this work identifies and studies the private prediction problem; it is demonstrated with the following scenario: Bob has a private HMM, while Alice has a private input, and she wants to use Bob’s model to make a prediction based on her input. However, Alice does not want to disclose her private input to Bob, while Bob wants to prevent Alice from deriving information about his model. How can Alice and Bob perform HMM-based predictions without violating their privacy? We propose privacy-preserving protocols to produce predictions on HMMs without greatly exposing Bob’s and Alice’s privacy. We then analyze our schemes in terms of accuracy, privacy, and performance. Since these are conflicting goals, it is expected that accuracy or performance might degrade due to privacy concerns. However, our schemes make it possible for Bob and Alice to produce the same predictions efficiently while preserving their privacy.
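For context, the (non-private) computation Alice and Bob would jointly protect is the standard HMM forward pass: filter the state distribution given the observations, then predict the next observation. The sketch below shows that baseline only; the matrices are toy values, and the private protocol itself is not reproduced here.

```python
import numpy as np

def forward_filter(A, B, pi, obs):
    """Plain (non-private) HMM forward pass: returns the filtered state
    distribution after the observation sequence.
    A: state transition matrix, B: emission matrix,
    pi: initial state distribution, obs: observation indices."""
    alpha = pi * B[:, obs[0]]
    alpha /= alpha.sum()                 # normalize to avoid underflow
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]    # propagate, then weight by emission
        alpha /= alpha.sum()
    return alpha

# Toy 2-state, 2-symbol model (illustrative values only).
A = np.array([[0.7, 0.3],
              [0.4, 0.6]])
B = np.array([[0.9, 0.1],
              [0.2, 0.8]])
pi = np.array([0.5, 0.5])

state_belief = forward_filter(A, B, pi, obs=[0, 0, 1])
# Predicted distribution over the next observation symbol.
next_obs_dist = (state_belief @ A) @ B
```

In the private setting, each matrix-vector product above would be computed jointly so that Alice never sees A, B, or pi and Bob never sees the observation sequence.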


Procedia Computer Science | 2014

A Novel Shilling Attack Detection Method

Alper Bilge; Zeynep Ozdemir; Huseyin Polat

Recommender systems provide an impressive way to overcome the information overload problem. However, they are vulnerable to profile injection or shilling attacks. Malicious users and/or parties might construct fake profiles and inject them into user-item databases to increase or decrease the popularity of some target products. Hence, they may have a significant impact on the produced predictions. To eliminate such malicious impact, detecting shilling profiles becomes imperative. In this work, we propose a novel shilling attack detection method for specific attacks based on a bisecting k-means clustering approach, which ensures that attack profiles are gathered in a leaf node of a binary decision tree. We perform experiments using a benchmark data set to analyze the method with respect to the success of attack detection. Our empirical outcomes show that the method is extremely successful at detecting specific attack profiles such as bandwagon, segment, and average attacks.
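The clustering intuition can be sketched with a single 2-means split, the building block that bisecting k-means applies recursively: because injected profiles are generated from a common template, they are far more similar to one another than genuine profiles are, so one split already isolates them in a single cluster. All names, data, and the deterministic center initialization below are illustrative, not the paper's algorithm.

```python
import numpy as np

def two_means(X, n_iter=20):
    """Minimal 2-means split, the step bisecting k-means repeats
    recursively on the chosen cluster (illustrative sketch)."""
    # Seed the split with the first and last rows (a farthest-point
    # heuristic would be another reasonable choice).
    centers = X[[0, -1]].copy()
    labels = np.zeros(len(X), dtype=int)
    for _ in range(n_iter):
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for k in (0, 1):
            if np.any(labels == k):      # guard against an empty cluster
                centers[k] = X[labels == k].mean(axis=0)
    return labels

rng = np.random.default_rng(1)
# 40 genuine profiles (diverse) vs. 8 near-identical attack profiles
# pushing item 0 (template [5, 3, 3, ...] plus tiny noise).
genuine = rng.uniform(1.0, 5.0, size=(40, 10))
attack = np.tile([5.0, 3, 3, 3, 3, 3, 3, 3, 3, 3], (8, 1))
attack += rng.normal(0.0, 0.05, size=attack.shape)
X = np.vstack([genuine, attack])
labels = two_means(X)
```

Because the attack profiles are nearly identical, they all land in the same cluster; the full method keeps bisecting until such tight groups sit alone in a leaf of the resulting binary tree.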


Wiley Interdisciplinary Reviews-Data Mining and Knowledge Discovery | 2015

From existing trends to future trends in privacy-preserving collaborative filtering

Adem Ozturk; Huseyin Polat

The information overload problem, also known as infobesity, forces online vendors to utilize collaborative filtering algorithms. Although various recommendation methods are widely used by many electronic commerce sites, they still have substantial problems, including but not limited to privacy, accuracy, online performance, scalability, cold start, coverage, grey sheep, robustness, being subject to shilling attacks, diversity, data sparsity, and synonymy. Privacy‐preserving collaborative filtering methods have been proposed to handle the privacy problem. Due to the increasing popularity of privacy protection and recommendation estimation over the Internet, prediction schemes with privacy are still receiving increasing attention. Because research trends might change over time, it is critical for researchers to observe future trends. In this study, we determine the existing trends in the privacy‐preserving collaborative filtering field by examining the related papers published mainly in the last few years. Comprehensive examinations of the most up‐to‐date related studies are described. By scrutinizing the contemporary inclinations, we present the most promising possible research trends in the near future. Our proposals can help interested researchers direct their research toward better outcomes and might open new ways to enrich privacy‐preserving collaborative filtering studies. WIREs Data Mining Knowl Discov 2015, 5:276–291. doi: 10.1002/widm.1163


Expert Systems With Applications | 2014

Robustness analysis of privacy-preserving model-based recommendation schemes

Alper Bilge; Ihsan Gunes; Huseyin Polat

Privacy-preserving model-based recommendation methods are preferable over privacy-preserving memory-based schemes due to their online efficiency. Model-based prediction algorithms without privacy concerns have been investigated with respect to shilling attacks. Similarly, various privacy-preserving model-based recommendation techniques have been proposed to handle privacy issues. However, privacy-preserving model-based collaborative filtering schemes might also be subject to shilling or profile injection attacks. Therefore, their robustness against such attacks should be scrutinized. In this paper, we investigate the robustness of four well-known privacy-preserving model-based recommendation methods against six shilling attacks. We first apply masked data-based profile injection attacks to privacy-preserving k-means-, discrete wavelet transform-, singular value decomposition-, and item-based prediction algorithms. We then perform comprehensive experiments using real data to evaluate their robustness against profile injection attacks. Next, we compare non-private model-based methods with their privacy-preserving counterparts in terms of robustness. Moreover, well-known privacy-preserving memory- and model-based prediction methods are compared with respect to robustness against shilling attacks. Our empirical analysis shows that a couple of the model-based schemes with privacy are very robust.

Collaboration


Dive into Huseyin Polat's collaboration.

Top Co-Authors


Mehmet Koc

Bilecik Şeyh Edebali University
