IEEE Transactions on Information Forensics and Security | 2021

Optimal Adversarial Policies in the Multiplicative Learning System With a Malicious Expert


Abstract


We consider a learning system based on the conventional multiplicative weight (MW) rule that combines experts’ advice to predict a sequence of true outcomes. It is assumed that one of the experts is malicious and aims to impose the maximum loss on the system. The system’s loss is naturally defined as the aggregate absolute difference between the sequence of predicted outcomes and the true outcomes. We consider this problem under both offline and online settings. In the offline setting, where the malicious expert must choose its entire sequence of decisions a priori, we show, somewhat surprisingly, that the simple greedy policy of always reporting a false prediction is asymptotically optimal, with an approximation ratio of $1+O\left(\sqrt{\frac{\ln N}{N}}\right)$, where $N$ is the total number of prediction stages. In particular, we describe a policy that closely resembles the structure of the optimal offline policy. For the online setting, where the malicious expert can adaptively make its decisions, we show that the optimal online policy can be efficiently computed by solving a dynamic program in $O(N^{3})$ time. We also discuss a generalization of our model to multi-expert settings. Our results provide a new direction for vulnerability assessment of commonly used learning algorithms against internal adversarial attacks.
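To make the setting concrete, the following is a minimal Python simulation sketch of the system the abstract describes: an MW learner aggregating binary-outcome advice from several experts, one of which follows the greedy adversarial policy of always reporting the opposite of the true outcome. The learning rate eta, the number of honest experts, and their fixed 0.9 accuracy are illustrative assumptions, not the paper’s exact formulation.

    import numpy as np

    rng = np.random.default_rng(0)

    N = 1000   # number of prediction stages
    K = 5      # number of experts; expert 0 is malicious
    eta = np.sqrt(np.log(K) / N)  # a standard MW learning-rate choice (assumption)

    weights = np.ones(K)
    system_loss = 0.0

    for t in range(N):
        y = rng.integers(0, 2)  # true binary outcome at stage t

        advice = np.empty(K)
        advice[0] = 1 - y       # greedy malicious expert: always report falsely
        # Honest experts predict the truth with probability 0.9 (assumption).
        advice[1:] = np.where(rng.random(K - 1) < 0.9, y, 1 - y)

        # MW prediction: weighted average of the experts' advice.
        p = weights @ advice / weights.sum()
        system_loss += abs(p - y)  # aggregate absolute difference

        # Multiplicative update: penalize each expert by its absolute error.
        weights *= np.exp(-eta * np.abs(advice - y))

    print(f"aggregate loss over {N} stages: {system_loss:.2f}")

Under the greedy policy the malicious expert’s weight decays quickly, so its per-stage influence shrinks; the paper’s result is that, despite this, no offline policy can do more than a $1+O(\sqrt{\ln N / N})$ factor better.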

Volume: 16
Pages: 2276-2287
DOI: 10.1109/TIFS.2021.3052360
Language: English
Journal: IEEE Transactions on Information Forensics and Security
