
Polyglots or Multitudes? Multilingual LLM Answers to Value-laden Multiple-Choice Questions

ArXiv Source

Léo Labat, Etienne Ollion, François Yvon

cs.CL | Feb 5, 2026

One-line Summary

This study examines how multilingual large language models (LLMs) answer value-laden multiple-choice questions across languages, finding that consistency varies by model and question and that some questions elicit language-specific responses.

Plain-language Overview

Researchers investigated whether multilingual AI models, such as chatbots, answer value-laden questions consistently across languages. To test this, they built a dataset of survey questions translated into eight European languages. Larger, better-trained models were generally more consistent, but consistency varied from question to question: some questions drew the same answer from most models, others did not, and certain questions elicited language-specific responses, suggesting that how models are fine-tuned shapes their answers.
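
To make the idea of cross-lingual consistency concrete, here is a minimal sketch (not the authors' actual pipeline) of how one might pose the same value-laden multiple-choice question in several languages and score how often a model gives the same answer. The question texts and the `ask_model` callable are hypothetical placeholders standing in for whatever survey items and LLM API are being evaluated.

```python
from collections import Counter

# Hypothetical example: one value-laden multiple-choice question,
# translated into a few of the survey languages.
QUESTION_BY_LANGUAGE = {
    "en": "Is it ever justifiable to avoid paying a fare on public transport? (A) Never (B) Sometimes (C) Always",
    "fr": "Est-il parfois justifiable de frauder dans les transports publics ? (A) Jamais (B) Parfois (C) Toujours",
    "de": "Ist es jemals vertretbar, im öffentlichen Verkehr schwarzzufahren? (A) Nie (B) Manchmal (C) Immer",
}

def cross_lingual_consistency(ask_model, question_by_language):
    """Ask the same multiple-choice question in every language and report
    how often the model picks its most frequent (majority) answer.

    `ask_model` is a placeholder callable (prompt -> answer letter); in
    practice it would wrap the LLM being evaluated.
    """
    answers = {lang: ask_model(prompt) for lang, prompt in question_by_language.items()}
    counts = Counter(answers.values())
    majority_answer, freq = counts.most_common(1)[0]
    consistency = freq / len(answers)  # 1.0 = identical answer in every language
    return answers, majority_answer, consistency

if __name__ == "__main__":
    # Stand-in "model" that always answers (B), just to exercise the code.
    fake_model = lambda prompt: "B"
    answers, majority, score = cross_lingual_consistency(fake_model, QUESTION_BY_LANGUAGE)
    print(answers)          # per-language answers
    print(majority, score)  # majority answer and its share across languages
```

In the study's setting, this kind of per-question score is what lets some questions show broad agreement across languages while others reveal language-specific behavior.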

Technical Details