
Correctness-Optimized Residual Activation Lens (CORAL): Transferrable and Calibration-Aware Inference-Time Steering

arXiv | Source

Miranda Muqing Miao, Young-Min Cho, Lyle Ungar

cs.LG | Feb 5, 2026

One-line Summary

CORAL is a method that improves the accuracy and calibration of large language models during inference without retraining.

Plain-language Overview

Large language models are often poorly calibrated: their confidence in an answer can be higher or lower than its actual chance of being correct, especially after certain post-training procedures. Retraining these models to improve calibration can be very costly. CORAL is a technique that improves both accuracy and calibration without changing the model's weights or original training. It works by steering the model's residual activations at inference time, leading to better performance on a range of question-answering benchmarks.
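The summary above does not spell out CORAL's exact mechanism, but the general idea of inference-time residual steering can be illustrated with a toy sketch: a fixed steering vector is added to a block's residual activation at inference, while the model weights stay frozen. All names here (`toy_transformer_block`, the steering vector `v`, the scale `alpha`) are hypothetical illustrations, not CORAL's actual implementation.

```python
import numpy as np

def toy_transformer_block(x, W):
    # Toy residual block: x + nonlinearity(x @ W)
    return x + np.tanh(x @ W)

def steered_block(x, W, steering_vec, alpha=1.0):
    # Inference-time steering: add a fixed direction to the
    # block's residual output; the weights W are untouched.
    return toy_transformer_block(x, W) + alpha * steering_vec

rng = np.random.default_rng(0)
d = 8
x = rng.normal(size=(1, d))     # one token's hidden state
W = rng.normal(size=(d, d)) * 0.1
v = rng.normal(size=(d,))       # hypothetical steering direction

base = toy_transformer_block(x, W)
steered = steered_block(x, W, v, alpha=0.5)

# Only the activation moved; the difference is exactly the scaled vector.
print(np.allclose(steered - base, 0.5 * v))  # True
```

In a real model the steering vector would be applied inside the network (e.g. via a forward hook on a chosen layer), and CORAL's contribution, per the title, is choosing that intervention to be correctness-optimized and calibration-aware.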

Technical Details