Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval | 2021

A Study of Defensive Methods to Protect Visual Recommendation Against Adversarial Manipulation of Images

 
 
 
 
 

Abstract


Visual-based recommender systems (VRSs) enhance recommendation performance by integrating users' feedback with the visual features of item images. Recently, human-imperceptible image perturbations, known as adversarial samples, have been shown to alter VRSs' performance, for example by pushing (promoting) or nuking (demoting) specific categories of products. One of the most effective adversarial defense methods is adversarial training (AT), which enhances the robustness of the model by incorporating adversarial samples into the training process and minimizing an adversarial risk. The effectiveness of AT has been verified in defending DNNs on supervised learning tasks such as image classification. However, the extent to which AT can protect deep VRSs against adversarial perturbations of images remains largely under-investigated. This work focuses on the defensive side of VRSs and provides general insights that could be further exploited to broaden the frontier of the field. First, we introduce a suite of adversarial attacks against the DNNs on top of VRSs, along with defense strategies to counteract them. Next, we present an evaluation framework, named Visual Adversarial Recommender (VAR), to empirically investigate the performance of defended and undefended DNNs in various visually-aware item recommendation tasks. The results of large-scale experiments indicate alarming risks in protecting a VRS through DNN robustification. Source code and data are available at https://github.com/sisinflab/Visual-Adversarial-Recommendation.
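
As an illustration of the adversarial-training defense mentioned above, the following is a minimal sketch of one common instantiation: generating FGSM-perturbed images and minimizing the loss on them. The model, loss, and hyper-parameters (epsilon, the optimizer) are illustrative assumptions and not the specific setup evaluated in the paper.

```python
# Hedged sketch: FGSM-based adversarial training for an image model.
# All names and hyper-parameters here are assumptions for illustration only.
import torch
import torch.nn.functional as F


def fgsm_perturb(model, images, labels, epsilon):
    """Generate FGSM adversarial samples: x' = x + epsilon * sign(grad_x loss)."""
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    grad = torch.autograd.grad(loss, images)[0]
    return (images + epsilon * grad.sign()).clamp(0, 1).detach()


def adversarial_training_step(model, optimizer, images, labels, epsilon=8 / 255):
    """One AT step: minimize the empirical risk on adversarially perturbed inputs."""
    model.train()
    adv_images = fgsm_perturb(model, images, labels, epsilon)
    optimizer.zero_grad()
    loss = F.cross_entropy(model(adv_images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```

In a VRS pipeline, such a robustified image network would typically supply the visual item features consumed by the recommender; whether this robustification actually protects the downstream recommendation task is the question the paper investigates.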

DOI 10.1145/3404835.3462848
Language English
Journal Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval
