
Scaling Adversarial Training via Data Selection

Source: arXiv

Youran Ye, Dejin Wang, Ajinkya Bhandare

cs.LG | Dec 26, 2025

One-line Summary

Selective Adversarial Training reduces computational costs by perturbing only critical samples, achieving comparable or better robustness than full PGD adversarial training.

Plain-language Overview

Adversarial training makes machine learning models more robust against attacks, but it is computationally expensive. This paper introduces Selective Adversarial Training, which applies adversarial perturbation only to the samples most likely to affect the model's robustness, rather than processing every sample equally. By selecting samples that lie near the decision boundary or whose gradients align with the main optimization direction, the method significantly reduces the computational burden. Experiments show that this approach achieves similar or even better robustness than traditional methods while cutting computation time in half.
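To make the selection idea concrete, here is a minimal PyTorch sketch of one plausible variant: it uses the logit margin (the gap between the top two logits) as a cheap stand-in for distance to the decision boundary and runs PGD only on the low-margin fraction of each batch. The `pgd_attack` helper, the `select_frac` parameter, and the margin criterion are illustrative assumptions, not the paper's exact procedure (which also considers gradient alignment with the main optimization direction).

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Standard L-infinity PGD: ascend the loss, project back to the eps-ball."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        # Project back into the eps-ball around x and into valid pixel range.
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()

def selective_adv_batch(model, x, y, select_frac=0.5):
    """Perturb only the samples closest to the decision boundary.

    A small gap between the top two logits suggests the sample sits near
    the boundary, so it is the most valuable target for perturbation.
    """
    with torch.no_grad():
        top2 = model(x).topk(2, dim=1).values
        margin = top2[:, 0] - top2[:, 1]
    k = max(1, int(select_frac * x.size(0)))
    idx = margin.argsort()[:k]          # k smallest margins = nearest boundary
    x_mixed = x.clone()
    x_mixed[idx] = pgd_attack(model, x[idx], y[idx])
    return x_mixed

# Toy usage with random data (model and shapes are assumptions):
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
x, y = torch.rand(16, 3, 32, 32), torch.randint(0, 10, (16,))
loss = F.cross_entropy(model(selective_adv_batch(model, x, y)), y)
loss.backward()                         # one step of the outer training loop
```

In a full training loop the margins would presumably be recomputed each batch, since the set of boundary-adjacent samples shifts as the model trains; only the selected fraction pays the multi-step PGD cost, which is where the reported savings come from.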

Technical Details