Adversarial Factorization Machine: Towards accurate, robust, and unbiased recommenders


Model-based collaborative filtering (CF) methods such as the factorization machine (FM) were proposed to approach recommendation as a representation learning problem. Although FM takes rich features, including user profiles and item attributes, into consideration, the high sparsity of data in recommender systems limits the precision of individual feature embeddings and leaves them vulnerable to adversarial perturbations. Moreover, real-world data are largely imbalanced, leading to biased recommendation results for long-tailed groups. Adversarial training, which regularizes model parameters with small, intentional perturbations, has been claimed in previous work to improve a model's generalization ability and robustness. This project aims to build an end-to-end adversarial recommendation architecture, based on the factorization machine, that automatically perturbs recommender parameters toward a more robust, unbiased, and accurate state.
In this project, we propose the Adversarial Factorization Machine (AdvFM), a novel method that learns adversarial perturbation levels from the signal-strength distribution of user and item attributes during adversarial training. We conduct extensive experiments on three public datasets, constructed from Yelp, Pinterest, and MovieLens-100K, that represent a variety of item recommendation scenarios.
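To make the core idea concrete, the sketch below shows a second-order FM prediction and an FGSM-style adversarial step that perturbs the embedding matrix in the direction that increases the squared loss, scaled to an epsilon-ball. This is a minimal illustration of adversarial perturbation on FM embeddings, not the paper's AdvFM algorithm; the function names and the fixed perturbation budget `eps` are illustrative assumptions (AdvFM learns the perturbation levels).

```python
import numpy as np

def fm_predict(x, w0, w, V):
    """Second-order factorization machine: bias + linear term +
    pairwise interactions computed from the factor matrix V (n x k)."""
    s = V.T @ x  # per-factor sums: s_f = sum_i V[i,f] * x[i]
    interactions = 0.5 * (s @ s - ((V ** 2).T @ (x ** 2)).sum())
    return w0 + w @ x + interactions

def adv_perturb_V(x, target, w0, w, V, eps):
    """One FGSM-style inner step (illustrative, not AdvFM itself):
    move V along the gradient of the squared loss, normalized to
    an eps-ball, so the perturbed embeddings maximize the loss
    to first order."""
    y = fm_predict(x, w0, w, V)
    s = V.T @ x
    # Analytic gradient of L = (y - target)^2 w.r.t. V:
    # dL/dV[i,f] = 2*(y - target) * (x[i]*s[f] - V[i,f]*x[i]**2)
    grad = 2.0 * (y - target) * (np.outer(x, s) - V * (x[:, None] ** 2))
    norm = np.linalg.norm(grad) + 1e-12  # avoid division by zero
    return V + eps * grad / norm
```

In adversarial training, the model would then be optimized against predictions made with the perturbed embeddings, encouraging parameters that remain accurate inside the epsilon-ball.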