Weiqi Wang, Zhiyi Tian, Chenhan Zhang, Luoyu Chen, Shui Yu
EVE introduces an efficient method for verifying machine unlearning using customized perturbations, eliminating the need to intervene in the model's initial training while improving verification accuracy and speed.
The paper presents a new method, EVE, for checking whether data has been successfully 'unlearned' from a machine learning model. Unlike previous methods, which require modifying the model during its initial training so that unlearning can be verified later, EVE works without any such early intervention. Instead, it applies customized perturbations to the data slated for removal and checks whether the model's predictions on those perturbed samples change after unlearning, treating the change as evidence that unlearning occurred. This approach is both more efficient and more accurate than existing methods.
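As a rough illustration of this perturb-and-compare idea (not EVE's actual algorithm, whose customized perturbation design and decision rule are specific to the paper), the sketch below adds a small perturbation to the samples slated for removal and compares the model's predictions on them before and after unlearning. The `perturb` and `verify_unlearning` helpers, the `predict` model interface, and the `epsilon`/`threshold` parameters are all hypothetical stand-ins.

```python
import numpy as np


def perturb(samples, epsilon=0.05, rng=None):
    """Apply a small perturbation to the samples slated for unlearning.

    EVE customizes its perturbations per sample; simple additive noise
    stands in for that here purely for illustration.
    """
    rng = rng or np.random.default_rng(0)
    return samples + epsilon * rng.standard_normal(samples.shape)


def verify_unlearning(model_before, model_after, forget_samples, threshold=0.5):
    """Compare predictions on perturbed forget-set samples before and after
    unlearning; a high disagreement rate is taken as evidence that the
    samples' influence was removed.

    Both models are assumed to expose a `predict` method returning class
    labels (a hypothetical interface).
    """
    probes = perturb(forget_samples)
    preds_before = model_before.predict(probes)
    preds_after = model_after.predict(probes)
    disagreement = float(np.mean(preds_before != preds_after))
    return disagreement >= threshold, disagreement
```

Because the check only queries the deployed models on perturbed inputs, it requires no hooks placed during the original training run, which is the property the summary highlights.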