Global Trend Radar
arXiv cs.AI INT ai 2026-04-28 13:00

Aligning with Your Own Voice: Self-Corrected Preference Learning for Hallucination Mitigation in LVLMs


Analysis

Category
Education
Importance
59
Trend score
18
Summary
Large Vision-Language Models (LVLMs) frequently suffer from hallucinations. This work proposes a new method that overcomes the limitations of existing preference-learning-based approaches.
Abstract
arXiv:2604.24395v1 Announce Type: new
Abstract: Large Vision-Language Models (LVLMs) frequently suffer from hallucinations. Existing preference learning-based approaches largely rely on proprietary models to construct preference datasets. We identify that this reliance introduces a distributional mismatch between the proprietary and target models that hinders efficient alignment. To address this, we propose Alignment via VErified Self-correction DPO (AVES-DPO), a framework that aligns LVLMs using in-distribution data derived from the model's intrinsic knowledge. Our approach employs a consensus-based verification mechanism to diagnose diverse hallucinations and guides the model to self-correct, thereby generating preference pairs strictly compatible with its internal distribution. Extensive experiments demonstrate that AVES-DPO surpasses existing baselines in hallucination mitigation while requiring only 5.2k samples.
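To make the preference-learning step concrete, here is a minimal sketch of the standard DPO objective such a framework optimizes, together with a toy majority-vote stand-in for consensus-based verification. The function names, the voting heuristic, and the framing of "chosen = verified self-correction, rejected = original hallucinated answer" are illustrative assumptions, not the paper's actual implementation.

```python
import math
from collections import Counter

def dpo_loss(logp_chosen, logp_rejected,
             ref_logp_chosen, ref_logp_rejected, beta=0.1):
    """DPO loss for one preference pair: -log sigmoid of the
    beta-scaled log-ratio margin between chosen and rejected
    responses, relative to a frozen reference model."""
    margin = ((logp_chosen - ref_logp_chosen)
              - (logp_rejected - ref_logp_rejected))
    return -math.log(1.0 / (1.0 + math.exp(-beta * margin)))

def consensus_verify(answers, threshold=0.5):
    """Toy consensus check: accept an answer only if a strict
    majority of sampled responses agree on it; otherwise return
    None (i.e., flag a likely hallucination for self-correction)."""
    top, count = Counter(answers).most_common(1)[0]
    return top if count / len(answers) > threshold else None
```

In this framing, responses that fail `consensus_verify` would be rewritten by the model itself, and each (corrected, original) pair then feeds `dpo_loss`, keeping the training data in the target model's own output distribution.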

Similar articles (vector neighbors)