Bingqian Li, Bowen Zheng, Xiaolei Wang, Long Zhang, Jinpeng Wang, Sheng Chen, Wayne Xin Zhao, Ji-Rong Wen
ILRec improves LLM-based recommendation systems by using self-hard negatives from intermediate layers for better preference learning.
This study introduces ILRec, a new approach to enhancing recommendation systems built on large language models (LLMs). Negative examples are crucial for fine-tuning recommenders, yet traditional methods often struggle to incorporate them into training effectively. ILRec addresses this by using "self-hard negatives"—signals drawn from the model's own intermediate layers that provide more nuanced and dynamic negative feedback—helping the model learn preferences more effectively and improving its recommendation accuracy.
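The abstract does not specify how self-hard negatives enter the training objective. As a minimal illustrative sketch (not the paper's actual method; the function names, the choice of a BPR-style pairwise loss, and the use of raw item logits are all assumptions), one way to realize the idea is to take the item an intermediate layer scores highest—when it is not the ground-truth item—as a hard negative, and penalize the final layer for not preferring the target over it:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def self_hard_negative_loss(final_logits, mid_logits, target):
    """Pairwise preference loss using a self-hard negative.

    final_logits: item scores from the model's final layer
    mid_logits:   item scores decoded from an intermediate layer
                  (e.g. via the shared output head -- an assumption here)
    target:       index of the ground-truth next item

    The intermediate layer's top-scoring *wrong* item serves as the
    self-hard negative; the loss pushes the final layer to rank the
    target above it (BPR-style, an assumed loss form).
    """
    # Mask out the target, then pick the intermediate layer's top item.
    masked = mid_logits.copy()
    masked[target] = -np.inf
    neg = int(np.argmax(masked))

    # -log sigmoid(score_target - score_negative)
    diff = final_logits[target] - final_logits[neg]
    loss = -np.log(sigmoid(diff))
    return loss, neg

# Toy example: the intermediate layer over-scores item 1,
# so item 1 becomes the self-hard negative for target item 0.
final = np.array([2.0, 0.5, 1.0])
mid = np.array([0.2, 1.5, 0.1])
loss, neg = self_hard_negative_loss(final, mid, target=0)
```

Because the negatives come from the model's own intermediate computations, they shift as training progresses, which is what makes them "dynamic" compared to a fixed pool of sampled negatives.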