
Guarding the Middle: Protecting Intermediate Representations in Federated Split Learning

Source: arXiv

Obaidullah Zaland, Sajib Mistry, Monowar Bhuyan

cs.LG | cs.DC | Feb 19, 2026

One-line Summary

The paper introduces KD-UFSL, a method to enhance privacy in federated split learning by protecting intermediate data representations using k-anonymity and differential privacy techniques.

Plain-language Overview

When large amounts of data are spread across many users, machine learning models must be trained without compromising privacy. Federated learning trains models without gathering the data in one place, but running full models can strain users' devices. U-shaped federated split learning (UFSL) eases this burden by moving some of the computation to a central server, at the cost of sharing intermediate 'smashed data' that can leak private information. This paper introduces KD-UFSL, which applies k-anonymity and differential privacy to that smashed data so it reveals less about individual users, increasing privacy while preserving the model's utility.
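To make the differential-privacy idea concrete, the sketch below shows one common way to protect intermediate activations: clip each smashed-data vector to a bounded norm, then add Gaussian noise before sending it to the server. This is a generic DP mechanism for illustration only; the function name and the `clip_norm`/`sigma` parameters are assumptions, not the paper's actual KD-UFSL design.

```python
import numpy as np

def privatize_smashed(smashed, clip_norm=1.0, sigma=0.5, seed=None):
    """Clip per-example smashed-data vectors and add Gaussian noise.

    smashed   : array of shape (num_examples, feature_dim), the
                intermediate representations a client would send.
    clip_norm : bound on each row's L2 norm (limits sensitivity).
    sigma     : noise multiplier; actual noise std is sigma * clip_norm.

    Illustrative only -- KD-UFSL's concrete mechanism may differ.
    """
    rng = np.random.default_rng(seed)
    # Scale each row so its L2 norm is at most clip_norm.
    norms = np.linalg.norm(smashed, axis=1, keepdims=True)
    clipped = smashed * np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    # Add zero-mean Gaussian noise calibrated to the clipping bound.
    noisy = clipped + rng.normal(0.0, sigma * clip_norm, size=clipped.shape)
    return noisy
```

With `sigma=0` the function reduces to pure norm clipping, which makes the sensitivity bound easy to check; increasing `sigma` trades model utility for stronger privacy, which is the tension the paper's evaluation is concerned with.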

Technical Details