
Assessing Risk of Stealing Proprietary Models for Medical Imaging Tasks

Source: arXiv

Ankita Raj, Harsh Swaika, Deepankar Varma, Chetan Arora

cs.CV | Jun 24, 2025

One-line Summary

The study shows that proprietary medical imaging models are vulnerable to model stealing attacks even by attackers with limited resources, and introduces QueryWise, a new method that makes such attacks more effective under tight query budgets.

Plain-language Overview

This research explores how deep learning models used in medical imaging, such as those for diagnosing diseases, can be copied or 'stolen' by outsiders. Even when companies keep a model's internal workings hidden, attackers can recreate it by submitting their own inputs and learning from the model's responses. The study focuses on a realistic setting in which attackers have no access to the original training data and only limited opportunities to query the model. It introduces a new method, QueryWise, that improves the effectiveness of these attacks using only publicly available data, and experiments show it works well against models for detecting gallbladder cancer and COVID-19.
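To make the attack mechanism concrete, here is a minimal sketch of generic black-box model extraction, not the paper's QueryWise method: a hypothetical "victim" classifier is queried on attacker-chosen public data, and a surrogate model is trained to imitate the returned labels. All names and the toy linear victim are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Victim: a fixed linear decision rule whose weights the attacker cannot see.
w_victim = np.array([2.0, -1.0])

def query_victim(x):
    """Black-box API: returns only predicted labels, never the weights."""
    return (x @ w_victim > 0).astype(float)

# Attacker side: unlabeled "public" data, labeled by querying the victim.
X_pub = rng.normal(size=(500, 2))
y_stolen = query_victim(X_pub)

# Train a surrogate via plain logistic-regression gradient descent
# on the stolen labels.
w_sur = np.zeros(2)
for _ in range(200):
    p = 1.0 / (1.0 + np.exp(-(X_pub @ w_sur)))       # surrogate's probabilities
    w_sur -= 0.5 * X_pub.T @ (p - y_stolen) / len(X_pub)  # log-loss gradient step

# The surrogate now mimics the victim on fresh inputs.
X_test = rng.normal(size=(200, 2))
agreement = np.mean((X_test @ w_sur > 0) == (query_victim(X_test) > 0.5))
print(f"surrogate/victim agreement: {agreement:.2f}")
```

The point of the sketch is that the attacker never sees the victim's parameters or training data; label access alone is enough to recover a functionally similar model, which is the vulnerability the paper studies in the medical imaging setting.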

Technical Details