
MISLEADER: Defending against Model Extraction with Ensembles of Distilled Models

Source: arXiv

Xueqi Cheng, Minxing Zheng, Shixiang Zhu, Yushun Dong

cs.AI | Jun 3, 2025

One-line Summary

MISLEADER is a defense against model extraction attacks that uses ensembles of distilled models to preserve utility for legitimate users while reducing extractability, without assuming that attacker queries are out-of-distribution.

Plain-language Overview

Model extraction attacks threaten machine learning services: by repeatedly querying a deployed model, an attacker can replicate its functionality, jeopardizing the intellectual property of companies offering machine-learning-as-a-service. Traditional defenses assume attacker queries come from outside the model's training distribution, an assumption that does not always hold. MISLEADER is a defense that protects models without relying on it. It combines data augmentation with an ensemble of diverse distilled models, making the served predictions harder for an attacker to clone while still performing well for legitimate users.
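The overview does not give the paper's exact architecture, but the ensemble idea can be illustrated with a toy sketch. The assumption here is that several small "student" models (in MISLEADER, networks distilled from the protected teacher on augmented data; here, random linear scorers as stand-ins) are served jointly, and the returned prediction is their averaged probability distribution, which matches no single student exactly:

```python
import math
import random

def softmax(logits):
    """Convert raw scores to a probability distribution."""
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def make_student(seed, dim=4, classes=3):
    """Stand-in for a distilled student: a random linear scorer.
    In MISLEADER each student would be distilled from the teacher."""
    rng = random.Random(seed)
    W = [[rng.uniform(-1, 1) for _ in range(dim)] for _ in range(classes)]
    def student(x):
        return [sum(w * xi for w, xi in zip(row, x)) for row in W]
    return student

students = [make_student(s) for s in range(5)]

def ensemble_predict(x):
    """Serve the average of the students' softmax outputs.
    The averaged distribution exposes no single model's decision
    surface, which is what hinders query-based cloning."""
    probs = [softmax(s(x)) for s in students]
    k = len(probs[0])
    return [sum(p[i] for p in probs) / len(probs) for i in range(k)]

p = ensemble_predict([0.5, -1.0, 2.0, 0.3])
```

The served output is still a valid probability distribution, so accuracy for legitimate users is preserved, while the attacker's extracted copy is fit to an averaged surface rather than any one model.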

Technical Details