
Improving LLM-based Recommendation with Self-Hard Negatives from Intermediate Layers

Source: arXiv

Bingqian Li, Bowen Zheng, Xiaolei Wang, Long Zhang, Jinpeng Wang, Sheng Chen, Wayne Xin Zhao, Ji-rong Wen

cs.IR · cs.AI | Feb 19, 2026

One-line Summary

ILRec improves LLM-based recommendation systems by using self-hard negatives from intermediate layers for better preference learning.

Plain-language Overview

This study introduces ILRec, a new approach for improving recommendation systems built on large language models (LLMs). Traditional methods often struggle to incorporate effective negative examples (items a user would not prefer) into training, even though such examples are crucial for preference learning. ILRec addresses this by using 'self-hard negatives': signals drawn from the model's own intermediate layers, which provide more nuanced and dynamically generated negative feedback. This helps the model learn user preferences more effectively and improves its recommendation accuracy.
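The paper's actual objective is not shown on this page, so the following is only a rough sketch of the general idea: a pairwise preference loss in which the "rejected" score for the ground-truth item comes from one of the model's own intermediate layers rather than from a separately sampled negative item. All function and variable names here are hypothetical, and the scalar log-probabilities stand in for real model outputs.

```python
import math

def self_hard_negative_loss(pos_logp_final: float,
                            neg_logp_intermediate: float,
                            beta: float = 1.0) -> float:
    """Pairwise preference loss (hypothetical simplification).

    pos_logp_final:        log-prob of the ground-truth item from the
                           model's final layer (the "chosen" signal).
    neg_logp_intermediate: log-prob of the same item computed from an
                           intermediate layer, used as a self-hard
                           negative (the "rejected" signal).
    beta:                  temperature on the preference margin.
    """
    margin = beta * (pos_logp_final - neg_logp_intermediate)
    # -log(sigmoid(margin)): small when the final layer clearly
    # outscores the intermediate-layer negative, large otherwise.
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# A wide margin over the intermediate-layer score yields a small loss;
# a near-tie (a genuinely "hard" negative) yields a larger one.
loss_easy = self_hard_negative_loss(-1.0, -3.0)   # clear preference
loss_hard = self_hard_negative_loss(-1.0, -1.2)   # hard negative
```

Because the negative comes from the model itself, its difficulty adapts as training progresses, which is the intuition behind calling these negatives "self-hard".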

Technical Details