Global Trend Radar
arXiv cs.LG (Machine Learning) INT ai 2026-04-27 13:00

Sovereign Agentic Loops: Decoupling AI Reasoning from Execution in Real-World Systems

Original title: Sovereign Agentic Loops: Decoupling AI Reasoning from Execution in Real-World Systems


Analysis

Category
AI
Importance
69
Trend score
28
Summary
Large language model (LLM) agents increasingly issue API calls that mutate real systems, yet many current architectures pass stochastic model outputs directly to execution layers.
Abstract
arXiv:2604.22136v1 (Announce Type: cross)
Large language model (LLM) agents increasingly issue API calls that mutate real systems, yet many current architectures pass stochastic model outputs directly to execution layers. We argue that this coupling creates a safety risk because model correctness, context awareness, and alignment cannot be assumed at execution time. We introduce Sovereign Agentic Loops (SAL), a control-plane architecture in which models emit structured intents with justifications, and the control plane validates those intents against true system state and policy before execution. SAL combines an obfuscation membrane, which limits model access to identity-sensitive state, with a cryptographically linked Evidence Chain for auditability and replay. We formalize SAL and show that, under the stated assumptions, it provides policy-bounded execution, identity isolation, and deterministic replay. In an OpenKedge prototype for cloud infrastructure, SAL blocks 93% of unsafe intents at the policy layer, rejects the remaining 7% via consistency checks, prevents unsafe executions in our benchmark, and adds 12.4 ms median latency.
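The core pattern the abstract describes, in which the model emits a structured intent with a justification and a control plane validates it against true system state and policy before anything executes, while a hash-linked Evidence Chain records every decision, can be sketched as below. This is a minimal illustration of the idea, not the paper's implementation: the `Intent` fields, the allowlist policy, and all names are assumptions for the example.

```python
import hashlib
import json
from dataclasses import dataclass, asdict

@dataclass
class Intent:
    """Structured intent emitted by the model (fields are illustrative)."""
    action: str
    target: str
    justification: str

# Hypothetical policy: only these actions may ever execute.
ALLOWED_ACTIONS = {"scale_up", "restart"}

def validate(intent: Intent, system_state: dict) -> bool:
    """Control-plane check: the action must be allowlisted by policy
    and the target must exist in the true system state."""
    return intent.action in ALLOWED_ACTIONS and intent.target in system_state

class EvidenceChain:
    """Hash-linked log of intents and decisions for audit and replay."""
    def __init__(self) -> None:
        self.records: list[tuple[dict, str]] = []
        self.prev_hash = "0" * 64  # genesis link

    def append(self, intent: Intent, approved: bool) -> str:
        record = {
            "intent": asdict(intent),
            "approved": approved,
            "prev": self.prev_hash,  # links each record to its predecessor
        }
        digest = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()
        ).hexdigest()
        self.records.append((record, digest))
        self.prev_hash = digest
        return digest

# An unsafe intent is blocked at the policy layer, never executed,
# and the decision is still recorded in the chain.
system_state = {"web-1": "running"}
chain = EvidenceChain()
intent = Intent("delete_volume", "web-1", "free disk space")
approved = validate(intent, system_state)
chain.append(intent, approved)
print(approved)  # → False
```

The key design point mirrored here is the decoupling: the model's output is data (an intent), not an action, and execution happens only after an independent check against policy and real state.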

Similar articles (vector neighbors)