
Detecting Misbehaviors of Large Vision-Language Models by Evidential Uncertainty Quantification

ArXiv Source

Tao Huang, Rui Wang, Xiaofei Liu, Yi Qin, Li Duan, Liping Jing

cs.LG | Feb 5, 2026 | 258 views

One-line Summary

The paper introduces Evidential Uncertainty Quantification (EUQ), a method that detects misbehaviors in large vision-language models by assessing internal conflicts and knowledge gaps, and that outperforms existing approaches at identifying issues such as hallucinations and adversarial vulnerabilities.

Plain-language Overview

Large vision-language models (LVLMs), which understand and generate content from both images and text, sometimes produce unreliable or harmful outputs, especially when given tricky inputs. This can be dangerous in critical applications. The paper proposes a new method called Evidential Uncertainty Quantification (EUQ) to better detect these issues by identifying internal conflicts and gaps in the model's knowledge. The researchers tested EUQ on various problems like hallucinations and adversarial attacks, finding it more effective than current techniques.
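
For readers who want a concrete sense of how "knowledge gaps" and "internal conflicts" can be separated, the sketch below computes the standard vacuity and dissonance measures from Dirichlet evidence, as commonly done in evidential deep learning. This is a generic, illustrative formulation; the function name, the example evidence values, and the exact formulas are assumptions, not the paper's EUQ implementation.

```python
import numpy as np

def evidential_uncertainty(evidence):
    """Compute vacuity and dissonance from non-negative per-class evidence.

    Generic evidential-deep-learning formulation (Dirichlet evidence);
    not necessarily the paper's exact EUQ definitions.
    """
    evidence = np.asarray(evidence, dtype=float)
    num_classes = evidence.shape[0]
    alpha = evidence + 1.0              # Dirichlet concentration parameters
    strength = alpha.sum()              # total Dirichlet strength

    belief = evidence / strength        # belief mass assigned to each class
    vacuity = num_classes / strength    # unassigned mass: a "knowledge gap"

    # Dissonance: how much the assigned belief masses conflict with each other.
    dissonance = 0.0
    for k in range(num_classes):
        others = np.delete(belief, k)
        if others.sum() > 0:
            balance = 1.0 - np.abs(belief[k] - others) / (belief[k] + others + 1e-12)
            dissonance += belief[k] * (others * balance).sum() / others.sum()

    return vacuity, dissonance

# Little evidence for any class -> high vacuity (the model simply does not know).
print(evidential_uncertainty([0.1, 0.1, 0.1]))
# Strong but conflicting evidence -> low vacuity, high dissonance.
print(evidential_uncertainty([20.0, 20.0, 0.0]))
```

Under this generic reading, high vacuity flags inputs the model has too little evidence about (knowledge gaps), while high dissonance flags inputs where the model's evidence points in contradictory directions (internal conflicts), which is the kind of signal the overview above attributes to EUQ.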

Technical Details