Posts by Collection

portfolio

publications

LoTUS: Large-Scale Machine Unlearning with a Taste of Uncertainty

Published in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2025

We present LoTUS, a novel Machine Unlearning (MU) method that eliminates the influence of training samples from pre-trained models, avoiding retraining from scratch. LoTUS smooths the prediction probabilities of the model up to an information-theoretic bound, mitigating its over-confidence stemming from data memorization. We evaluate LoTUS on Transformer and ResNet18 models against eight baselines across five public datasets. Beyond established MU benchmarks, we evaluate unlearning on ImageNet1k, a large-scale dataset, where retraining is impractical, simulating real-world conditions. Moreover, we introduce the novel Retrain-Free Jensen-Shannon Divergence (RF-JSD) metric to enable evaluation under real-world conditions. The experimental results show that LoTUS outperforms state-of-the-art methods in terms of both efficiency and effectiveness. Code: https://github.com/cspartalis/LoTUS.
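The RF-JSD metric is designed to enable evaluation when no retrained reference model is available. As a rough illustration of the quantity it builds on, here is a minimal sketch of a symmetric Jensen-Shannon divergence between the softmax outputs of two models on the same inputs; the model names and the choice of reference distribution are assumptions for illustration, not the paper's exact definition of RF-JSD.

```python
import torch
import torch.nn.functional as F

def js_divergence(p_logits: torch.Tensor, q_logits: torch.Tensor) -> torch.Tensor:
    """Symmetric Jensen-Shannon divergence between two batches of class logits.

    Illustrative sketch only: the exact reference distribution and
    normalization used by RF-JSD are defined in the LoTUS paper; this
    simply compares the softmax outputs of two models on the same inputs.
    """
    p = F.softmax(p_logits, dim=-1)
    q = F.softmax(q_logits, dim=-1)
    m = 0.5 * (p + q)          # mixture distribution
    eps = 1e-12                # numerical floor to avoid log(0)
    kl_pm = (p * (p.clamp_min(eps).log() - m.clamp_min(eps).log())).sum(dim=-1)
    kl_qm = (q * (q.clamp_min(eps).log() - m.clamp_min(eps).log())).sum(dim=-1)
    return 0.5 * (kl_pm + kl_qm).mean()   # average over the batch

# Hypothetical usage: compare an unlearned model against a reference model
# on a batch of inputs x (all three names are placeholders).
# jsd = js_divergence(unlearned_model(x), reference_model(x))
```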

Recommended citation: Spartalis et al. "LoTUS: Large-Scale Machine Unlearning with a Taste of Uncertainty." In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2025.

Unleashing Uncertainty: Efficient Machine Unlearning for Generative AI

Published in ICML 2025 Workshop on Machine Unlearning for Generative AI, 2025

We introduce SAFEMax, a novel method for Machine Unlearning in diffusion models. Grounded in information-theoretic principles, SAFEMax maximizes the entropy of generated images, causing the model to produce Gaussian noise when conditioned on impermissible classes, ultimately halting its denoising process. Moreover, our method controls the balance between forgetting and retention by selectively focusing on the early diffusion steps, where class-specific information is prominent. Our results demonstrate the effectiveness of SAFEMax and highlight its substantial efficiency gains over state-of-the-art methods.
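The link between entropy maximization and Gaussian outputs rests on a standard information-theoretic fact rather than anything specific to the paper: among continuous random vectors with a fixed covariance, the Gaussian maximizes differential entropy, so pushing generated images toward maximal entropy pushes them toward Gaussian noise. For a d-dimensional X with covariance Σ,

$$ h(X) \;\le\; \tfrac{1}{2}\log\!\bigl((2\pi e)^{d}\det\Sigma\bigr), $$

with equality if and only if $X \sim \mathcal{N}(\mu, \Sigma)$.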

Recommended citation: Spartalis et al. "Unleashing Uncertainty: Efficient Machine Unlearning for Generative AI." ICML 2025 Workshop on Machine Unlearning for Generative AI. 2025.

talks

teaching

Teaching experience 1

Undergraduate course, University 1, Department, 2014

This is a description of a teaching experience. You can use markdown like any other post.

Teaching experience 2

Workshop, University 1, Department, 2015

This is a description of a teaching experience. You can use markdown like any other post.