Evaluating Compliance with Visualization Guidelines in Diagrams for Scientific Publications Using Large Vision Language Models

Source: arXiv

Johannes Rückert, Louise Bloch, Christoph M. Friedrich

cs.AI, cs.CL | Jun 24, 2025

One-line Summary

Large Vision Language Models (VLMs) can effectively analyze scientific diagrams for compliance with visualization guidelines, although they remain less reliable on aspects such as image quality and tick marks.

Plain-language Overview

In scientific publications, diagrams are crucial for conveying data, but they often don't follow established visualization guidelines, potentially leading to misinformation. This study uses advanced AI models, known as Vision Language Models, to examine diagrams and identify where they might violate these guidelines. The models were found to be quite effective at spotting issues like missing labels and unnecessary 3D effects, although they were less reliable at assessing image quality and tick marks. This research suggests that such AI tools could help improve the accuracy and clarity of data visualizations in scientific literature.
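As a rough illustration of this kind of workflow, the sketch below shows how one might prompt a general-purpose VLM to check a single diagram against a small checklist of visualization guidelines. This is a minimal sketch only: the hosted-API client, model name, checklist items, and prompt wording are illustrative assumptions and are not taken from the paper.

```python
# Minimal sketch (not the paper's code): ask a hosted VLM whether a diagram
# satisfies a few common visualization guidelines. The model name, checklist,
# and prompt wording below are illustrative assumptions.
import base64
from openai import OpenAI

GUIDELINES = [
    "Axes are labeled with quantity and unit",
    "No unnecessary 3D effects are used",
    "A legend is present when multiple data series are plotted",
    "Tick marks and tick labels are present and readable",
]


def check_diagram(image_path: str, model: str = "gpt-4o-mini") -> str:
    """Return the model's per-guideline compliance assessment as plain text."""
    with open(image_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode("utf-8")

    checklist = "\n".join(f"- {g}" for g in GUIDELINES)
    prompt = (
        "For the attached scientific diagram, state for each guideline "
        "whether it is satisfied (yes/no) and give a one-sentence reason:\n"
        f"{checklist}"
    )

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model=model,
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": prompt},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
            ],
        }],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    print(check_diagram("figure1.png"))
```

In practice, the checklist would be expanded to cover the full set of guidelines under study, and the free-text answers would need to be parsed or constrained (for example, by requesting a structured yes/no format) before they could be scored against human annotations.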

Technical Details