Obaidullah Zaland, Sajib Mistry, Monowar Bhuyan
The paper introduces KD-UFSL, a method to enhance privacy in federated split learning by protecting intermediate data representations using k-anonymity and differential privacy techniques.
When large amounts of data are distributed across many users, machine learning models must be trained without compromising privacy. Federated learning avoids gathering raw data in one place, but it places a heavy computational burden on users' devices. U-shaped federated split learning (UFSL) alleviates this by offloading part of the model to a central server; however, it requires clients to share intermediate activations ('smashed data') that can leak private information. This paper introduces KD-UFSL, which applies k-anonymity and differential privacy to protect this shared data from revealing private information. The approach substantially increases privacy while preserving the utility of the trained model.
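To make the differential-privacy step concrete, the sketch below shows one common way to perturb smashed data before it leaves the client: clip the activation vector's L2 norm and add Gaussian noise calibrated to (epsilon, delta). This is a generic Gaussian-mechanism illustration, not the paper's exact KD-UFSL procedure; the function names, clipping bound, and privacy parameters are assumptions for demonstration only.

```python
import math
import random

def clip_l2(vec, clip_norm):
    """Scale vec so its L2 norm is at most clip_norm (bounds sensitivity)."""
    norm = math.sqrt(sum(x * x for x in vec))
    if norm > clip_norm:
        return [x * clip_norm / norm for x in vec]
    return list(vec)

def privatize_smashed(activations, clip_norm=1.0, epsilon=1.0, delta=1e-5):
    """Hypothetical client-side step: clip smashed data, then add
    Gaussian noise per the standard Gaussian mechanism."""
    # Noise scale for the Gaussian mechanism with L2 sensitivity = clip_norm.
    sigma = clip_norm * math.sqrt(2.0 * math.log(1.25 / delta)) / epsilon
    clipped = clip_l2(activations, clip_norm)
    return [x + random.gauss(0.0, sigma) for x in clipped]

# A client would send privatize_smashed(layer_output) to the server
# instead of the raw smashed data.
noisy = privatize_smashed([0.8, -0.3, 1.9, 0.4])
```

The clipping step is what makes the noise calibration valid: without a bound on the activation norm, the mechanism's sensitivity, and hence its privacy guarantee, would be unbounded.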