
Convergence Analysis of Two-Layer Neural Networks under Gaussian Input Masking

Source: arXiv

Afroditi Kolomvaki, Fangshuo Liao, Evan Dramko, Ziyun Guang, Anastasios Kyrillidis

cs.LG | cs.AI | cs.DS | math.OC | Feb 19, 2026

One-line Summary

The paper analyzes the convergence of two-layer neural networks trained on Gaussian-masked inputs, proving linear convergence up to an error floor determined by the variance of the mask.
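To make "linear convergence up to an error floor" concrete, a guarantee of this kind typically has the following general shape; the exact constants and their dependence on the network width and the data are specific to the paper and are not reproduced here:

$$L(\theta_t) \;\le\; (1-\rho)^t\, L(\theta_0) \;+\; C\,\sigma^2,$$

where $L(\theta_t)$ is the training loss after $t$ steps, $\rho \in (0,1)$ is a per-step contraction factor, $\sigma^2$ is the variance of the Gaussian mask, and $C$ is a problem-dependent constant. The loss shrinks geometrically until it reaches a residual level set by the masking noise.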

Plain-language Overview

This research examines how two-layer neural networks behave when trained on inputs that have been randomly masked with Gaussian noise. This setting is relevant to scenarios such as privacy-preserving training or federated learning, where data may be incomplete or perturbed. The study shows that these networks can still converge effectively: the training loss decreases at a linear (geometric) rate down to a residual error whose size is governed by the amount of noise the masking introduces. The work also tackles a key technical difficulty, namely characterizing how this input randomness propagates through the network's non-linear activations.
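The masking mechanism itself is easy to illustrate. Below is a minimal NumPy sketch of training the first layer of a two-layer ReLU network on Gaussian-masked inputs; the multiplicative masking model, the fixed sign-valued second layer, and all hyperparameters are illustrative assumptions rather than the paper's exact setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative problem sizes and mask noise level.
d, m, n = 10, 64, 256        # input dim, hidden width, number of samples
sigma = 0.1                  # standard deviation of the Gaussian mask

# Synthetic data: inputs X and targets y from a fixed teacher function.
X = rng.normal(size=(n, d))
y = np.tanh(X @ rng.normal(size=(d, 1))).ravel()

# Two-layer ReLU network with a trained first layer and a fixed,
# sign-valued second layer (a common setting in convergence analyses).
W = rng.normal(size=(d, m)) / np.sqrt(d)
a = rng.choice([-1.0, 1.0], size=m) / np.sqrt(m)

def forward(X_in, W):
    """Output of the two-layer ReLU network on a batch of inputs."""
    return np.maximum(X_in @ W, 0.0) @ a

lr = 0.1
for step in range(300):
    # Gaussian input masking: every step sees a freshly perturbed copy of X.
    X_masked = X * (1.0 + sigma * rng.normal(size=X.shape))

    # Squared loss on the masked batch.
    err = forward(X_masked, W) - y
    loss = 0.5 * np.mean(err ** 2)

    # Gradient with respect to W (using the ReLU subgradient).
    active = (X_masked @ W > 0).astype(X.dtype)          # shape (n, m)
    grad_W = X_masked.T @ (err[:, None] * a[None, :] * active) / n

    W -= lr * grad_W

    if step % 50 == 0:
        print(f"step {step:3d}  loss {loss:.4f}")
```

Plotting the printed losses, one would expect a roughly geometric decrease followed by a plateau whose height grows with sigma, which is the qualitative behavior the convergence guarantee describes.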

Technical Details