Aligned but Stereotypical? The Hidden Influence of System Prompts on Social Bias in LVLM-Based Text-to-Image Models

ArXiv Source

NaHyeon Park, Namin An, Kunhee Kim, Soyeon Yoon, Jiahao Huo, Hyunjung Shim

cs.CV | cs.LG | Dec 4, 2025

One-line Summary

This paper reveals that system prompts in large vision-language model-based text-to-image systems significantly contribute to social biases, and proposes a framework called FairPro to reduce these biases while maintaining text-image alignment.

Plain-language Overview

The study investigates how large vision-language models used to generate images from text can perpetuate social biases. These models, now standard for text-to-image generation, can produce biased images because of their system prompts, the predefined instructions that guide their behavior. The researchers developed a method called FairPro that lets these models check themselves for bias and adjust their behavior to be fairer without any additional training. This approach helps build more socially responsible systems while keeping the generated images accurately aligned with their text descriptions.
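
To make the idea of a training-free self-audit concrete, here is a minimal sketch assuming a generic LVLM text interface. It illustrates the general concept only, not the FairPro procedure from the paper; `query_lvlm` is a hypothetical placeholder to be swapped for a real model client.

```python
# Sketch of a training-free "self-audit" loop over a system prompt.
# Not the paper's FairPro implementation; `query_lvlm` stands in for
# whatever LVLM backend the text-to-image system exposes.

def query_lvlm(prompt: str) -> str:
    """Placeholder for an LVLM call; replace with a real model client."""
    return ("Revised system prompt: describe people without assuming gender, "
            "age, or ethnicity unless the user specifies them.")

def self_audit_system_prompt(system_prompt: str, user_prompt: str) -> str:
    """Ask the model to critique its own system prompt for social bias,
    then return a revised prompt that keeps the original intent."""
    critique_request = (
        "You are auditing the following system prompt for social bias "
        "(gender, race, age stereotypes) before it is used for text-to-image "
        f"generation.\n\nSystem prompt:\n{system_prompt}\n\n"
        f"User request:\n{user_prompt}\n\n"
        "Rewrite the system prompt so it avoids injecting stereotypes while "
        "preserving alignment with the user request. Return only the revised prompt."
    )
    return query_lvlm(critique_request)

if __name__ == "__main__":
    original = "You are a helpful assistant that writes vivid, detailed image prompts."
    user = "a photo of a nurse at work"
    print(self_audit_system_prompt(original, user))
```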

Technical Details