Global Trend Radar
VentureBeat US tech 2026-05-08 16:00

Governance, not gatekeeping: How SAP brings enterprise-grade safety to AI connectivity


Analysis

Category: IT
Importance: 68
Trend score: 30
Summary

SAP is responding to a fundamental shift in the enterprise software industry by adapting its approach to better protect customers. The global platform vendor is emphasizing governance and building customer trust in order to deliver enterprise-grade safety for AI connectivity.
Full text
Presented by SAP

The enterprise software industry has undergone a fundamental shift, and vendors are adapting their approaches to better protect the customers who rely on them.

For years, every global platform vendor running multi-tenant cloud infrastructure has maintained documented rate limits, usage controls, and restrictions on the use of undocumented internal interfaces. CRM platforms impose daily API call limits per organization, enforce platform-layer limits, and maintain a strict separation between bulk data APIs and transactional REST surfaces. Productivity and collaboration suites throttle their graph APIs and redirect bulk workloads to purpose-built data access channels designed for that load. HR and workforce management platforms enforce concurrent request limits and per-session data retrieval caps. IT service management platforms enforce per-user rate limits and instance-level throttling. Hyperscalers publish per-service quotas, enforce them at the infrastructure layer, and explicitly prohibit applications from calling non-SDK or non-published interfaces.

These are not controversial measures. They are baseline hygiene for enterprise-grade software platforms operating shared infrastructure at scale, and they have been in place for more than a decade without serious objection.

As SAP has taken responsibility for securing customers' mission-critical workloads in the cloud, a unified API policy with clarified usage controls is not a restriction but an expression of enterprise-grade stewardship. Some have read the policy as a new restriction. It is not: it names and unifies controls that have existed across individual SAP products for years. SAP is not introducing API governance as a novel concept. SAP SuccessFactors, SAP Ariba, SAP LeanIX, and several other SAP solutions have long enforced documented rate limits and usage controls, and SAP Notes and SAP's documentation have defined API usage in the past.
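The platform-layer controls described above, such as per-organization quotas and request throttling, are typically implemented with token-bucket or fixed-window counters at the API gateway. The following is a minimal illustrative sketch of the token-bucket pattern; the class name and limits are hypothetical, not any vendor's published quota or SAP's actual implementation:

```python
import time


class OrgRateLimiter:
    """Illustrative per-organization token-bucket limiter, of the kind
    platform vendors enforce at the API gateway layer.
    Limits here are hypothetical, not any vendor's published quota."""

    def __init__(self, rate_per_sec: float, burst: int):
        self.rate = rate_per_sec       # tokens refilled per second
        self.burst = burst             # maximum bucket size (burst capacity)
        self.tokens = float(burst)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False                   # caller should back off (e.g. HTTP 429)


limiter = OrgRateLimiter(rate_per_sec=10, burst=5)
results = [limiter.allow() for _ in range(8)]
# The first `burst` calls pass; subsequent calls are throttled
# until enough time has elapsed for tokens to refill.
```

The same shape generalizes from per-second throttling to daily quotas by widening the refill window; the essential property is that enforcement happens in the platform layer, before a request ever reaches the application.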
What the recent policy does is unify that existing practice into a single cross-portfolio standard, a step made urgent by the arrival of autonomous agentic harnesses that SAP is fully committed to enabling, but which place a categorically different performance, stability, and security load on API surfaces that were never designed for autonomous orchestration and data extraction at scale.

Custom interfaces: What SAP's API policy does and does not restrict

Custom APIs built by customers in their own namespace for their own extensibility, integration, and migration purposes are customer-developed interfaces. If you have spent years building custom data services, custom RFCs, and ABAP interfaces to connect your SAP system to the world around it, the policy's restriction on non-published APIs might read, on first encounter, like a demolition order. It is not. The restriction targets SAP's own internal unreleased objects. It does not reach into the Z namespace and condemn two decades of ABAP engineering.

SAP's Private Cloud customers are in a distinctly privileged position compared with much of the enterprise world, because they have long been able to build in their own namespace and to shape an environment they were free to modify and extend, and that freedom is not being revoked. The policy is focused on something narrower: SAP's own internal interfaces that were never published, never documented for customer use, and never offered as a dependable foundation for integration. Most custom code never touches these internals and will continue untouched; where it does, the risk for customers has always been present, and the policy merely names it rather than inventing it.

Within that set, however, there is a smaller class of interfaces whose use is not a matter for debate but for prohibition.
ODP-RFC belongs in that class: it sits in SAP's namespace as an internal, non-released interface that SAP explicitly classifies as "unpermitted" for customer or third-party application use, as documented in SAP Note 3255746. These are precisely the kinds of interfaces SAP will flag as prohibited in notes and automated tooling, so that such usage can be identified early through tooling and guidance rather than discovered late in a deployment or operational context.

Clean Core is distinct from the API policy but points in the same direction, and it bears noting that customers did not merely accept it but asked for it repeatedly, having lived through the upgrade costs of the alternative. In the agentic era, where SAP runs mission-critical ERP as a service, both the Clean Core recommendations and the API policy are conditions of the enterprise-grade reliability that cloud operations make possible.

How AI agents change API usage patterns in SAP systems

While some commentators have argued this policy is primarily a commercial move, the technical evidence tells a different story. AI has changed everything about the traditional view of transactional interfaces. The APIs that enterprises have used for decades to integrate SAP systems with third-party applications are request-response interfaces built for transactional workloads. They were designed to fetch a sales order, post a goods receipt, or trigger a payment run. They were designed to be called mostly by human-authored integration flows, at a predictable frequency, for a defined business purpose. They were not designed to have an autonomous AI orchestration harness run thousands of sequential calls against them in pursuit of semantic context about the business model encoded within. That is not a clean core integration pattern.

Much of the debate misses a core architectural distinction. A traditional integration tool reads a sales order from SAP, converts it into the format a target schema needs, and moves it on.
SAP's data model plays no role beyond being a transient interpretation step. An AI agent does something categorically different. It does not merely retrieve a value. It reads the sales order header data and learns that this structure represents a customer commitment to buy. It reads the line item data and learns how individual items relate to that order. It reads the net value and learns that this number is meaningful only when paired with the document currency. It traces the path a sales order takes through delivery, billing, and finally into the accounting ledger, and internalizes how SAP reconciles operations and finance within its business object model.

The agent is not only consuming a customer's transactional data. It is consuming the semantic ontology: the business object definitions, the relationships between entities, the conceptual architecture that SAP has built and refined over five decades of enterprise knowledge encoding. SAP has long distinguished between enabling transactional access to customer data and the broader extraction or replication of the underlying ontology. The policy does not create this boundary; it already existed. Autonomous agents must continue to respect that boundary rather than redefine it.

Security risks in third-party MCP implementations

Then there is a security angle, and it is not abstract. The same week this policy was published, a supply chain attack named Mini Shai-Hulud, a variant of the npm worm, quietly compromised hundreds of software packages. SAP-ecosystem npm packages were among those compromised, and SAP addressed this with a security note for customers. This is not a theoretical threat model. This is the active threat environment in which community-built MCP servers are being connected to productive SAP systems running mission-critical business processes.
The OWASP MCP Top 10 documents the vulnerability classes systematically: tool poisoning, prompt injection, privilege escalation via scope creep, token mismanagement, and supply chain compromise. Recent research across thousands of analyzed MCP implementations shows that a majority operate with static long-lived credentials or carry identifiable security findings, and a single compromised package in the MCP ecosystem can cascade into hundreds of thousands of exposed development environments. VentureBeat reported just last week on a serious command execution flaw that left up to 200,000 MCP servers vulnerable.

Consider what that means in practice. An AI agent that has just internalized the semantic structure of your SAP data model, and is operating through a community MCP server, moves beyond a productivity tool and into an elevated risk category: one that combines broad system access with an attack surface that is still evolving.

Why MCP alone cannot run SAP business processes

The MCP debate has also obscured a technical reality that enterprise architects need to confront directly. The Model Context Protocol is plumbing. It specifies how an AI model calls a tool. It says nothing about whether the model understands what the tool does in a business context, in what sequence tools must be called, what side effects a given API invocation will trigger, or what the consequences of an incorrect parameter will be. A naive MCP implementation connecting to SAP OData services can call a tool. It cannot run a business process.

The token consumption data from production agentic deployments is instructive. For illustration, a query asking for an employee's manager and traversing the list of peers in an SAP SuccessFactors system consumed 565,000 tokens under a standard MCP implementation. The same query under a context-aware implementation consumed 80,000 tokens.
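Those token counts translate directly into per-query cost. Assuming a blended price of $3 per million tokens (an illustrative rate; the article does not state the model or its pricing), the arithmetic works out as follows:

```python
PRICE_PER_MILLION_TOKENS = 3.00  # assumed blended USD rate; illustrative, not from the article


def query_cost(tokens: int) -> float:
    """USD cost of one query at the assumed per-token rate."""
    return tokens / 1_000_000 * PRICE_PER_MILLION_TOKENS


naive_mcp = query_cost(565_000)      # standard MCP implementation -> ~$1.70
context_aware = query_cost(80_000)   # context-aware implementation -> ~$0.24

# Across thousands of daily queries the roughly 7x gap compounds:
daily_queries = 10_000               # hypothetical volume
daily_savings = (naive_mcp - context_aware) * daily_queries
```

The absolute dollar figures depend on the assumed rate, but the ratio between the two implementations does not: a seven-fold token overhead on every query is an architectural cost, not a pricing detail.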
That is the difference between a query costing $1.70 and a query costing $0.24 on a single operation, repeated across thousands of daily transactions. The standard MCP implementation is not automation. It is an expensive approximation of automation that fails on complex queries while loading the API surface with traffic it was not designed to carry.

SAP's architecture for open third-party AI integration via A2A

SAP's response to these challenges is not to