
Ready to Translate, Not to Represent? Bias and Performance Gaps in Multilingual LLMs Across Language Families and Domains

Source: arXiv

Md. Faiyaz Abdullah Sayeedi, Md. Mahbub Alam, Subhey Sadi Rahman, Md. Adnanul Islam, Jannatul Ferdous Deepti, Tasnim Mohiuddin, Md Mofijul Islam, Swakkhar Shatabda

cs.CL | Oct 9, 2025

One-line Summary

This study introduces Translation Tangles, a framework to evaluate translation quality and fairness in multilingual LLMs, highlighting performance and bias issues across languages and domains.

Plain-language Overview

Large Language Models (LLMs) have significantly improved machine translation, producing fluent output across many languages and domains. However, their performance is often inconsistent, particularly for less common languages and specialized topics, and they can perpetuate biases present in their training data, raising fairness concerns. To address these issues, the researchers developed Translation Tangles, a framework for assessing translation quality and fairness, and released a bias-annotated dataset intended to help improve both the performance and the equity of multilingual LLMs.
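The overview does not specify which metrics or pipeline Translation Tangles uses, so the following is only a minimal sketch of how automatic translation-quality scoring is commonly done, using the sacrebleu library with hypothetical data and an illustrative function name (`score_translations`). It is not the paper's actual evaluation code.

```python
# Minimal sketch: corpus-level BLEU and chrF scoring with sacrebleu.
# This illustrates the kind of automatic quality measurement a translation
# benchmark can build on; requires `pip install sacrebleu`.
import sacrebleu


def score_translations(hypotheses: list[str], references: list[str]) -> dict:
    """Score one language pair / domain: model outputs vs. human references."""
    # sacrebleu expects a list of reference streams, hence the extra brackets.
    bleu = sacrebleu.corpus_bleu(hypotheses, [references])
    chrf = sacrebleu.corpus_chrf(hypotheses, [references])
    return {"bleu": bleu.score, "chrf": chrf.score}


if __name__ == "__main__":
    # Toy, hypothetical example data.
    hyps = ["The cat sits on the mat.", "He went to the market yesterday."]
    refs = ["The cat is sitting on the mat.", "He went to the market yesterday."]
    print(score_translations(hyps, refs))
```

Comparing scores like these across language families and domains is, in general terms, how the kinds of performance gaps described in this summary would be surfaced.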

Technical Details