
Convergence Rate of the Last Iterate of Stochastic Proximal Algorithms

Source: arXiv

Kevin Kurian Thomas Vaidyan, Michael P. Friedlander, Ahmet Alacaoglu

math.OC | cs.LG | stat.ML
Feb 5, 2026

One-line Summary

This paper establishes optimal convergence rates for the last iterate of stochastic proximal algorithms without assuming bounded variance, applicable to problems in multi-task and federated learning.
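To make the setting concrete, here is a minimal sketch of the problem class in question, with the notation assumed here rather than quoted from the paper:

\[
\min_{x \in \mathbb{R}^d} \; F(x) := f(x) + g(x),
\]

where \(f\) is smooth and accessed only through stochastic gradients (for instance \(f(x) = \mathbb{E}_{\xi}[f(x;\xi)]\)), and \(g\) is nonsmooth but admits an inexpensive proximal operator. A last-iterate guarantee bounds the suboptimality of the final point \(x_T\) itself, rather than of an average of the iterates; in this stochastic setting, the optimal rates are typically on the order of \(O(1/\sqrt{T})\) for convex problems and \(O(1/T)\) under strong convexity.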

Plain-language Overview

The study investigates two algorithms for solving composite optimization problems, where the goal is to minimize a function built from a smooth component plus a nonsmooth one. These algorithms are particularly useful in settings such as multi-task learning and federated learning, where related tasks are coupled through shared structure. The authors show that the algorithms achieve optimal convergence rates for the last iterate without the usual bounded-variance assumption on the stochastic gradients, which makes the guarantees more broadly applicable. This advance improves the reliability of these methods in practical applications involving complex, interconnected data.
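As a generic illustration of this algorithm family, the sketch below runs stochastic proximal gradient descent on an l1-regularized least-squares problem, where soft-thresholding is the proximal operator of the nonsmooth term. This is not the authors' exact method; the problem data, step-size schedule, and regularization weight are all assumed for the example.

```python
# Minimal sketch of stochastic proximal gradient descent (prox-SGD) on a
# composite objective f(x) + g(x), with g(x) = lam * ||x||_1.
# Illustrative only: data, step sizes, and lam are assumed, not from the paper.
import numpy as np

def soft_threshold(v, tau):
    # Proximal operator of tau * ||.||_1: shrinks each coordinate toward zero.
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

rng = np.random.default_rng(0)
n, d = 500, 20
A = rng.standard_normal((n, d))
x_true = np.zeros(d)
x_true[:5] = 1.0
b = A @ x_true + 0.1 * rng.standard_normal(n)

lam = 0.1        # l1 regularization weight (assumed)
x = np.zeros(d)  # the final x after the loop is the "last iterate"
T = 5000
for t in range(1, T + 1):
    i = rng.integers(n)                    # sample one data point
    grad = (A[i] @ x - b[i]) * A[i]        # stochastic gradient of the smooth part
    eta = 1.0 / np.sqrt(t)                 # decaying step size (illustrative choice)
    x = soft_threshold(x - eta * grad, eta * lam)  # proximal step

print("final-iterate error:", np.linalg.norm(x - x_true))
```

Note that the quantity of interest here is the final x after the loop, i.e., the last iterate, rather than a running average of the iterates, which is exactly the point the paper's guarantees address.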

Technical Details