Evaluation of Prompt Injection Defenses in Large Language Models
Analysis
- Category: Geopolitics
- Importance: 68
- Trend score: 27
- Summary: LLM-powered applications routinely embed secrets in system prompts, yet models can be tricked into revealing them. This study built an adaptive attacker that evolves its strategies.
- Keywords:
Abstract (arXiv:2604.23887v1): LLM-powered applications routinely embed secrets in system prompts, yet models can be tricked into revealing them. We built an adaptive attacker that evolves its strategies over hundreds of rounds and tested it against nine defense configurations across more than 20,000 attacks. Every defense that relied on the model to protect itself eventually broke. The only defense that held was output filtering, which checks the model's responses via hardcoded rules in separate application code before they reach the user, achieving zero leaks across 15,000 attacks. These results demonstrate that security boundaries must be enforced in application code, not by the model being attacked. Until such defenses are verified by tools like Swept AI, AI systems handling sensitive operations should be restricted to internal, trusted personnel.
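The defense the abstract singles out, output filtering in application code, is straightforward to picture. The sketch below is a minimal illustration of that pattern, not the authors' implementation: the secret value, the rule set, and the function name are all assumptions made for the example.

```python
import re

# Hypothetical secret embedded in the system prompt (assumption for illustration).
SECRET = "sk-live-4f9a2c"

# Hardcoded rules applied to the model's output before it reaches the user.
# These run in separate application code, outside the model, as the paper recommends.
LEAK_PATTERNS = [
    re.compile(re.escape(SECRET)),               # the secret verbatim
    re.compile(r"sk-live-[0-9a-f]+"),            # anything shaped like the secret
    re.compile(r"system prompt", re.IGNORECASE), # meta-discussion of the prompt
]

def filter_output(model_response: str) -> str:
    """Return the response only if no leak rule matches; otherwise withhold it."""
    for pattern in LEAK_PATTERNS:
        if pattern.search(model_response):
            return "[response withheld: possible prompt leak]"
    return model_response

if __name__ == "__main__":
    print(filter_output("The weather today is sunny."))     # passes through
    print(filter_output(f"My instructions say: {SECRET}"))  # blocked
```

The essential property is that the check lives outside the model's control: whatever the attacker persuades the model to say, the rules run deterministically on every response before delivery, which is why this boundary held where model-enforced defenses broke.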