Afroditi Kolomvaki, Fangshuo Liao, Evan Dramko, Ziyun Guang, Anastasios Kyrillidis
The paper analyzes the convergence of two-layer neural networks trained with Gaussian-masked inputs, finding linear convergence up to an error determined by the mask's variance.
This research examines how two-layer neural networks behave when trained on inputs corrupted by random Gaussian masks. This setting is relevant to privacy-preserving training and federated learning, where data may be incomplete or noisy. The study shows that, despite the masking, such networks still converge at a linear rate, up to an error floor determined by the variance of the masking noise. The analysis also resolves a key technical difficulty: characterizing how the masking randomness propagates through the network's non-linear activations.
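As a rough illustration of the training setup, the sketch below trains a two-layer ReLU network on inputs perturbed by a multiplicative Gaussian mask. This is an assumed toy instantiation, not the paper's exact model: the mask form (mean 1, standard deviation `sigma`), the width `m`, the squared loss, and all hyperparameters are illustrative choices.

```python
# Minimal sketch (assumed setup): two-layer ReLU net trained on
# Gaussian-masked inputs; fresh mask drawn at every step.
import torch

torch.manual_seed(0)
n, d, m = 256, 20, 128        # samples, input dim, hidden width (assumed)
sigma = 0.1                   # std of the Gaussian mask noise (assumed)

X = torch.randn(n, d)
y = torch.randn(n, 1)         # synthetic targets, for illustration only

W1 = torch.nn.Parameter(torch.randn(d, m) / d ** 0.5)
W2 = torch.nn.Parameter(torch.randn(m, 1) / m ** 0.5)
opt = torch.optim.SGD([W1, W2], lr=1e-2)

for step in range(500):
    # Multiplicative Gaussian mask: each coordinate scaled by 1 + sigma*noise.
    mask = 1.0 + sigma * torch.randn_like(X)
    X_masked = X * mask

    pred = torch.relu(X_masked @ W1) @ W2   # two-layer ReLU network
    loss = 0.5 * ((pred - y) ** 2).mean()

    opt.zero_grad()
    loss.backward()
    opt.step()
```

In line with the result summarized above, one would expect the loss in such a setup to decrease rapidly at first and then plateau at a level that grows with `sigma`, reflecting the error floor induced by the mask's variance.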