
A Set of Generalized Components to Achieve Effective Poison-only Clean-label Backdoor Attacks with Collaborative Sample Selection and Triggers

Source: arXiv

Zhixiao Wu, Yao Lu, Jie Wen, Hao Sun, Qi Zhou, Guangming Lu

cs.AI | Sep 24, 2025

One-line Summary

The paper proposes a set of components to improve the effectiveness and stealthiness of poison-only clean-label backdoor attacks by collaboratively optimizing sample selection and trigger design.

Plain-language Overview

This research looks at how to make backdoor attacks on deep neural networks both more effective and harder to detect. In a poison-only clean-label attack, the attacker secretly alters a small portion of the training data while leaving its labels untouched, so that the trained model behaves in an attacker-chosen way whenever a hidden trigger appears, yet acts normally otherwise. The study introduces new methods for choosing which samples to alter and for designing the trigger itself, optimizing the two together so the attack succeeds more often while remaining inconspicuous, as sketched below.
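To make the setting concrete, here is a minimal, hypothetical sketch of poison-only clean-label poisoning, not the paper's actual components: the trigger is blended only into training images that already carry the target label, so no labels are ever changed. The function names, the uniform-random sample selection, and the fixed corner-patch trigger are all illustrative stand-ins for the paper's collaborative sample selection and optimized triggers.

```python
import numpy as np

def apply_trigger(images, trigger, mask):
    # Blend the trigger into each image wherever the mask is nonzero.
    return images * (1 - mask) + trigger * mask

def poison_clean_label(images, labels, target_class, poison_rate,
                       trigger, mask, rng):
    # Clean-label constraint: only samples that ALREADY carry the target
    # label are modified, so every poisoned label remains correct.
    target_idx = np.flatnonzero(labels == target_class)
    n_poison = int(poison_rate * len(target_idx))
    # Stand-in for the paper's sample selection: pick uniformly at random.
    chosen = rng.choice(target_idx, size=n_poison, replace=False)
    poisoned = images.copy()
    poisoned[chosen] = apply_trigger(poisoned[chosen], trigger, mask)
    return poisoned, chosen

# Toy usage: a 3x3 white patch in one corner of 32x32 grayscale images.
rng = np.random.default_rng(0)
images = rng.random((1000, 32, 32))
labels = rng.integers(0, 10, size=1000)
trigger = np.ones((32, 32))
mask = np.zeros((32, 32))
mask[-3:, -3:] = 1.0
poisoned, idx = poison_clean_label(images, labels, target_class=0,
                                   poison_rate=0.1, trigger=trigger,
                                   mask=mask, rng=rng)
```

At test time, a model trained on such data may classify any input carrying the trigger as the target class while behaving normally on clean inputs; the paper's contribution lies in choosing the poisoned samples and the trigger jointly rather than independently.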

Technical Details