
What Do LLMs Associate with Your Name? A Human-Centered Black-Box Audit of Personal Data

Source: arXiv

Dimitri Staufer, Kirsten Morehouse

cs.HC
cs.AI
cs.CL
cs.CY
Feb 19, 2026

One-line Summary

This study presents a human-centered black-box audit of how large language models (LLMs) associate personal data with individuals' names, finding that the models can accurately generate personal details and raising privacy concerns.

Plain-language Overview

Researchers investigated how large language models, like those behind popular chatbots, associate personal information with people's names. They built a tool to audit these associations and found that the models could accurately generate details such as gender and languages spoken for well-known individuals, and to some extent even for everyday users. Most study participants wanted more control over what information the models associate with their names, highlighting concerns about privacy and data protection. This raises important questions about how these models handle personal data and whether privacy laws should apply to them.

Technical Details