
Catastrophic Forgetting Resilient One-Shot Incremental Federated Learning

Source: arXiv

Obaidullah Zaland, Zulfiqar Ahmad Khan, Monowar Bhuyan

cs.LG | cs.DC | Feb 19, 2026

One-line Summary

This paper introduces One-Shot Incremental Federated Learning (OSI-FL), a framework that addresses communication overhead and catastrophic forgetting in federated learning by using category-specific embeddings and selective sample retention.

Plain-language Overview

In today's data-driven world, managing large and diverse data streams while maintaining privacy is a big challenge. Federated learning is a popular approach because it allows for collaborative model training without sharing raw data. However, it struggles with incremental data and communication limits. This research presents a new method called One-Shot Incremental Federated Learning (OSI-FL), which reduces communication needs and prevents the model from forgetting previously learned tasks. It uses specialized data embeddings and a technique to keep the most useful data samples for training. This approach has shown better performance than existing methods in various scenarios.
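To make the two ingredients above concrete, here is a minimal sketch of what "specialized data embeddings" and "keeping the most useful samples" could look like in practice. This is an illustrative stand-in, not the paper's actual algorithm: the prototype construction (per-class mean embedding) and the retention rule (keep the samples nearest each class prototype) are assumptions, and the function names `class_prototypes` and `select_exemplars` are hypothetical.

```python
import numpy as np

def class_prototypes(embeddings, labels):
    """Compute one mean embedding per class -- a simple hypothetical
    form of 'category-specific embeddings'; OSI-FL's exact
    construction may differ."""
    return {c: embeddings[labels == c].mean(axis=0)
            for c in np.unique(labels)}

def select_exemplars(embeddings, labels, protos, k=2):
    """Selective sample retention sketch: keep the k samples per class
    whose embeddings lie closest to that class's prototype, so a small
    memory can stand in for old tasks when new data arrives."""
    keep = []
    for c, proto in protos.items():
        idx = np.where(labels == c)[0]
        dists = np.linalg.norm(embeddings[idx] - proto, axis=1)
        keep.extend(idx[np.argsort(dists)[:k]].tolist())
    return sorted(keep)
```

Retaining a few prototype-near samples per class is one common way incremental-learning methods combat catastrophic forgetting: the retained samples are replayed alongside new data so the model is never trained exclusively on the latest task.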

Technical Details