Learning Preferences and Demands in Visual Recommendation

Visual information is an important factor in recommender systems, in which users' selections consist of two components: \emph{preferences} and \emph{demands}. Some studies have been done on modeling users' preferences in visual recommendation. However, conventional methods model items in a common visual feature space, which may fail to capture the \emph{styles} of items. We propose a DeepStyle method for learning style features of items. DeepStyle eliminates the categorical information of items, which is dominant in the original visual feature space, based on a Convolutional Neural Network (CNN) architecture. For modeling users' demands on different categories of items, the problem can be formulated as recommendation with contextual and sequential information. To solve this problem, we propose a Context-Aware Gated Recurrent Unit (CA-GRU) method, which can capture sequential and contextual information simultaneously. Furthermore, aggregating the predictions on preferences and demands, i.e., the predictions generated by DeepStyle and CA-GRU, models users' selection behaviors more completely. Experiments conducted on real-world datasets illustrate the effectiveness of our proposed methods in visual recommendation.
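The abstract describes two components but gives no equations, so the following is a minimal NumPy sketch of the two ideas under stated assumptions: DeepStyle's style feature is taken to be the CNN visual feature minus a learned category embedding (subtracting the dominant categorical component), CA-GRU is taken to be a standard GRU cell whose gates additionally receive a context vector, and the final score is a simple additive aggregation. All dimensions, weight shapes, and the scoring functions are illustrative, not the paper's actual formulation.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 8  # latent/visual dimensionality (illustrative)

# --- DeepStyle idea: remove the dominant categorical component from the
# CNN visual feature, leaving a "style" representation of the item ---
def style_feature(visual_feat, category_embed):
    return visual_feat - category_embed

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# --- CA-GRU idea: a GRU cell whose gates are also driven by a context vector ---
class CAGRUCell:
    def __init__(self, dim, ctx_dim, rng):
        w = lambda *shape: rng.normal(scale=0.1, size=shape)
        self.Wz, self.Uz, self.Cz = w(dim, dim), w(dim, dim), w(dim, ctx_dim)
        self.Wr, self.Ur, self.Cr = w(dim, dim), w(dim, dim), w(dim, ctx_dim)
        self.Wh, self.Uh, self.Ch = w(dim, dim), w(dim, dim), w(dim, ctx_dim)

    def step(self, x, h, ctx):
        z = sigmoid(self.Wz @ x + self.Uz @ h + self.Cz @ ctx)  # update gate
        r = sigmoid(self.Wr @ x + self.Ur @ h + self.Cr @ ctx)  # reset gate
        h_tilde = np.tanh(self.Wh @ x + self.Uh @ (r * h) + self.Ch @ ctx)
        return (1 - z) * h + z * h_tilde

# Preference score from style features (hypothetical dot-product scoring)
user = rng.normal(size=D)
visual = rng.normal(size=D)          # stand-in for a CNN visual feature
cat_embed = rng.normal(size=D)       # stand-in for a category embedding
style = style_feature(visual, cat_embed)
preference_score = user @ style

# Demand score from a context-aware sequence model over past interactions
cell = CAGRUCell(D, ctx_dim=4, rng=rng)
h = np.zeros(D)
for _ in range(3):
    x, ctx = rng.normal(size=D), rng.normal(size=4)
    h = cell.step(x, h, ctx)
demand_score = user @ h

# Aggregate the two predictions (simple additive aggregation, an assumption)
final_score = preference_score + demand_score
```

The subtraction in `style_feature` mirrors the abstract's claim that categorical information dominates the raw visual space; the extra `C* @ ctx` terms in each gate are one plausible way to inject context into a GRU's sequential dynamics.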
