Global Trend Radar
arXiv cs.AI INT ai 2026-04-28 13:00

Can Large Language Models Really Recognize Your Name?


Analysis

Category
AI
Importance
85
Trend score
34
Summary
Large language models (LLMs) are increasingly used in privacy pipelines to detect and remedy sensitive data leakage. These solutions often rely on the premise that LLMs can reliably recognize human names.
Keywords
Long-term importance
Important within the next few years
Business potential
High (growing demand for privacy-preserving technology)
Relevance to Japan
High (privacy protection is also gaining importance in Japan)
arXiv:2505.14549v3 Announce Type: replace-cross Abstract: Large language models (LLMs) are increasingly being used in privacy pipelines to detect and remedy sensitive data leakage. These solutions often rely on the premise that LLMs can reliably recognize human names, one of the most important categories of personally identifiable information (PII). In this paper, we reveal how LLMs can consistently mishandle broad classes of human names even in short text snippets due to ambiguous linguistic cues in the contexts. We construct AmBench, a benchmark of over 12,000 real yet ambiguous human names based on the name regularity bias phenomenon. Each name appears in dozens of concise text snippets that are compatible with multiple entity types. Our experiments with 12 state-of-the-art LLMs show that the recall of AmBench names drops by 20--40% compared to more recognizable names. This uneven privacy protection due to linguistic properties raises important concerns about the fairness of privacy enforcement. When the contexts contain benign prompt injections -- instruction-like user texts that can cause LLMs to conflate data with commands -- AmBench names can become four times more likely to be ignored in Clio, an LLM-powered enterprise tool used by Anthropic AI to extract supposedly privacy-preserving insights from user conversations with Claude. Our findings showcase blind spots in the performance and fairness of LLM-based privacy solutions and call for a systematic investigation into their privacy failure modes and countermeasures.
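The abstract's core measurement, recall of gold-labeled names over short snippets, can be sketched in a few lines. This is a minimal illustration, not the paper's evaluation code: `toy_detector` is a hypothetical stand-in for an LLM-based PII detector, and the "ambiguous" example uses a person name that doubles as a place name, in the spirit of the name regularity bias the benchmark exploits.

```python
def recall(detector, labeled_snippets):
    """Fraction of gold names that the detector flags across all snippets."""
    hits = total = 0
    for snippet, gold_names in labeled_snippets:
        found = detector(snippet)
        hits += sum(1 for name in gold_names if name in found)
        total += len(gold_names)
    return hits / total if total else 0.0

# Hypothetical stand-in for an LLM detector: flags non-sentence-initial
# capitalized tokens, except ones it "knows" as non-person entities.
# This mimics a model that defaults ambiguous strings to their more
# common (non-person) entity type.
NON_PERSON = {"Paris", "Amazon"}

def toy_detector(snippet):
    tokens = snippet.split()
    return {t.strip(".,") for i, t in enumerate(tokens)
            if i > 0 and t[:1].isupper() and t.strip(".,") not in NON_PERSON}

clear = [("We met Alice yesterday.", {"Alice"})]          # unambiguous name
ambiguous = [("We met Paris yesterday.", {"Paris"})]      # person or city?

print(recall(toy_detector, clear))      # 1.0
print(recall(toy_detector, ambiguous))  # 0.0 -- the ambiguous name is missed
```

Even this crude mock reproduces the qualitative gap the paper reports: recall collapses precisely for names whose surface form is compatible with multiple entity types, while unambiguous names are caught reliably.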

Similar articles (vector neighbors)