
Bridging the Copyright Gap: Do Large Vision-Language Models Recognize and Respect Copyrighted Content?

Source: arXiv

Naen Xu, Jinghuai Zhang, Changjiang Li, Hengyu An, Chunyi Zhou, Jun Wang, Boyu Xu, Yuyuan Li, Tianyu Du, Shouling Ji

cs.CL | cs.AI | cs.CR | cs.CY | Dec 26, 2025

One-line Summary

Large vision-language models struggle to recognize and respect copyrighted content, prompting the need for enhanced copyright compliance tools.

Plain-language Overview

Large vision-language models (LVLMs) are powerful tools that can understand and generate content from both images and text. However, it is unclear whether these models can recognize and respect copyrighted material, such as book excerpts or song lyrics, and failing to do so can create legal risk. This study evaluated a range of LVLMs on a large dataset to measure how well they handle copyrighted content. The results show that even the most advanced models often fail to recognize copyright notices, underscoring the need for better safeguards to keep these systems compliant with copyright law.
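The paper's exact evaluation pipeline is not described in this overview, but one plausible building block for auditing a model's handling of copyrighted text is a verbatim-overlap check: does the model's output reproduce a long word-for-word span of a protected reference passage? A minimal sketch of such a check (all function names and the threshold are illustrative assumptions, not taken from the paper) might look like this:

```python
def longest_verbatim_overlap(reference: str, output: str) -> int:
    """Length (in words) of the longest span of `reference` that
    appears verbatim in `output`, matching on whole words."""
    ref = reference.split()
    # Pad with spaces so substring matches respect word boundaries.
    out = " " + " ".join(output.split()) + " "
    best = 0
    for i in range(len(ref)):
        length = best  # only spans longer than the current best matter
        while i + length + 1 <= len(ref):
            span = " " + " ".join(ref[i:i + length + 1]) + " "
            if span in out:
                length += 1
                best = length
            else:
                # A longer span containing this one cannot match either.
                break
    return best


def flags_as_reproduction(reference: str, output: str,
                          threshold: int = 20) -> bool:
    """Flag a model output that reproduces a long verbatim passage
    from a copyrighted reference text."""
    return longest_verbatim_overlap(reference, output) >= threshold
```

In a real audit, a harness would prompt each LVLM with copyrighted excerpts (as text or rendered into images) and apply a check like this, alongside tests of whether the model acknowledges copyright notices or refuses inappropriate requests; the word-count threshold here is arbitrary and would need calibration.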

Technical Details