ReXPlug: Explainable Recommendation using Plug and Play Language Model

Explainable recommendations provide the reasons why an item is recommended to a user, which often increases user satisfaction and persuasiveness. An intuitive way to explain a recommendation is to generate a synthetic, personalized natural-language review for the user-item pair. Although some approaches in the literature explain recommendations by generating reviews, the quality of those reviews is questionable. Moreover, these methods usually take considerable time to train the underlying language model responsible for generating the text.

In this work, we propose ReXPlug, an end-to-end framework that explains recommendations in a plug-and-play manner. ReXPlug predicts accurate ratings and exploits a Plug and Play Language Model to generate high-quality reviews. We train a simple sentiment classifier to control a pre-trained language model during generation, avoiding having to train the language model from scratch. Such a simple and neat model is much easier to implement and train, and hence very efficient for generating reviews. We personalize the reviews by leveraging a jointly trained cross-attention network. Our detailed experiments show that ReXPlug outperforms many recent models on rating prediction across various datasets by utilizing textual reviews as a regularizer. Quantitative analysis shows that the reviews generated by ReXPlug are semantically close to the ground-truth reviews, while qualitative analysis demonstrates the high quality of the generated reviews from both empirical and analytical viewpoints.
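The plug-and-play idea of steering a frozen language model with a small attribute classifier can be illustrated with a minimal sketch. The toy below is an assumption-laden illustration, not the authors' implementation: all vocabulary, probabilities, and function names are hypothetical, and a tiny hand-written bigram table stands in for the pre-trained language model. A separate sentiment scorer re-weights the language model's next-token distribution (following the Bayesian decomposition p(x|a) ∝ p(a|x) · p(x)) so that generation drifts toward positive sentiment without retraining the language model itself.

```python
# Hypothetical toy bigram "language model": P(next token | current token).
LM = {
    "the":     {"food": 0.5, "service": 0.5},
    "food":    {"was": 1.0},
    "service": {"was": 1.0},
    "was":     {"great": 0.4, "terrible": 0.4, "okay": 0.2},
}

# Hypothetical attribute model: P(positive sentiment | token).
SENTIMENT = {"great": 0.9, "terrible": 0.05, "okay": 0.5}

def steered_next(token):
    """Pick the next token by combining LM and classifier scores:
    p(x | positive) ∝ p(positive | x) * p(x)."""
    scores = {}
    for word, p_lm in LM[token].items():
        p_attr = SENTIMENT.get(word, 0.5)  # neutral prior for unscored words
        scores[word] = p_lm * p_attr
    total = sum(scores.values())
    dist = {w: s / total for w, s in scores.items()}
    return max(dist, key=dist.get), dist

def generate(start, steps):
    """Greedy steered decoding from a start token."""
    out, tok = [start], start
    for _ in range(steps):
        if tok not in LM:
            break
        tok, _ = steered_next(tok)
        out.append(tok)
    return out

print(" ".join(generate("the", 3)))  # → "the food was great"
```

Without the sentiment re-weighting, "great" and "terrible" would be equally likely after "was"; the classifier's scores break the tie toward the desired attribute, which is the essence of plug-and-play control.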

References:

Deepesh Hada, Vijaikumar M, and Shirish Shevade. ReXPlug: Explainable Recommendation using Plug and Play Language Model. In Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 2021), 2021.

Faculty: Shirish Shevade, CSA