Global Trend Radar
arXiv cs.LG (Machine Learning) INT ai 2026-05-08 13:00

The Illusion of Forgetting: Attack Unlearned Diffusion via Initial Latent Variable Optimization


Analysis

Category: Education
Importance: 59
Trend score: 18
Summary

arXiv:2602.00175v2 Announce Type: replace

Abstract: Text-to-image diffusion models (DMs) are frequently abused to produce harmful or copyrighted content, violating the public interest. Concept erasure (unlearning) is a promising paradigm for alleviating this issue. However, a peculiar "forgetting illusion" phenomenon persists, and its cause has been unclear. Based on empirical analysis, we formally explain this cause: most unlearning methods only partially disrupt the mapping between linguistic symbols and the underlying internal knowledge, leaving the knowledge itself intact as dormant memories. We further demonstrate that the distributional discrepancy in the denoising process serves as a measurable indicator of how much of the mapping is retained, and thus also reflects unlearning strength. Inspired by this, we propose IVO (Initial Latent Variable Optimization), a novel attack framework designed to assess the robustness of current unlearning methods. IVO optimizes initial latent variables to realign the noise distribution of unlearned models with that of their vanilla counterparts, which reconstructs the fractured mappings and consequently revives the dormant memories. Extensive experiments covering 11 unlearning techniques and 3 concept scenarios show that IVO outperforms state-of-the-art baselines, exposing fundamental flaws in current unlearning mechanisms. Warning: This paper contains unsafe images that may offend some readers.
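The core idea of IVO, as described in the abstract, can be illustrated with a toy sketch: optimize the initial latent so that the unlearned model's noise prediction realigns with the vanilla model's. The code below is a minimal, hypothetical illustration, not the paper's implementation. The two "denoisers" are stand-ins modeled as linear maps so the gradient of the discrepancy is analytic; the names `eps_vanilla`, `eps_unlearned`, and `ivo_loss` are assumptions for this sketch.

```python
import numpy as np

# Toy stand-ins for the vanilla and unlearned diffusion denoisers.
# Unlearning is modeled as a small perturbation of the vanilla map,
# echoing the idea that the mapping is only partially disrupted.
rng = np.random.default_rng(0)
d = 8
W_vanilla = rng.normal(size=(d, d))
W_unlearned = W_vanilla + 0.3 * rng.normal(size=(d, d))

def eps_vanilla(z):
    # Vanilla model's noise prediction at initial latent z (linear toy).
    return W_vanilla @ z

def eps_unlearned(z):
    # Unlearned model's noise prediction at the same latent.
    return W_unlearned @ z

def ivo_loss(z):
    # Proxy for the distributional discrepancy: squared gap between
    # the two models' noise predictions at the same initial latent.
    diff = eps_unlearned(z) - eps_vanilla(z)
    return 0.5 * float(diff @ diff)

def ivo_grad(z):
    # Analytic gradient of ivo_loss w.r.t. z for the linear toy models.
    D = W_unlearned - W_vanilla
    return D.T @ (D @ z)

# Gradient descent on the initial latent variable (z_T in the abstract's
# terms): move z toward a region where the unlearned model's noise
# distribution matches the vanilla model's.
z = rng.normal(size=d)
lr = 0.05
losses = [ivo_loss(z)]
for _ in range(200):
    z -= lr * ivo_grad(z)
    losses.append(ivo_loss(z))

print(f"discrepancy before: {losses[0]:.4f}, after: {losses[-1]:.4f}")
```

In the real attack setting the denoisers are deep networks, the gradient would come from automatic differentiation through the sampling process, and the objective would be accumulated over denoising steps; this sketch only shows the shape of the optimization loop.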