Nvidia Launches Enterprise Agent Toolkit at GTC 2026 — Adobe, Salesforce, and SAP Among 17 Launch Partners
Nvidia has unveiled the Agent Toolkit, an open-source platform for building and deploying autonomous AI agents in enterprise workflows. Seventeen major enterprise software companies have committed to integration at launch, including Adobe, Salesforce, and SAP — signaling that agentic AI has moved from research curiosity to production infrastructure.

D.O.T.S AI Newsroom
AI News Desk
Nvidia used its GTC 2026 conference to announce the Agent Toolkit, a new open-source platform designed to be the foundational layer for autonomous AI agent deployment in enterprise environments. The launch comes with commitments from seventeen major enterprise software vendors — a roster that reads like a who's-who of enterprise software: Adobe, Salesforce, SAP, ServiceNow, Oracle, and twelve others.
The breadth of the partner list is as significant as the technology itself. When three of the five most widely deployed enterprise software platforms commit to a new infrastructure standard on day one, it typically signals that the underlying shift is real and the timing is now.
What the Agent Toolkit Does
The toolkit provides a standardized set of APIs, scheduling primitives, and memory management tools for agents that need to operate autonomously across enterprise software environments — accessing CRM records, executing code, managing files, calling external APIs, and coordinating with other agents. It is built on top of Nvidia's NIM microservices and CUDA runtime, meaning agents can be optimized for Nvidia hardware without requiring developers to write GPU-specific code.
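Nvidia has not published the Agent Toolkit's API surface, but the capabilities described above — standardized tool registration and autonomous dispatch across enterprise systems — can be sketched in miniature. Every class, method, and identifier below is invented for illustration and is not the actual Agent Toolkit interface:

```python
# Hypothetical sketch of a standardized agent/tool API.
# All names are invented for illustration; this is not Nvidia's API.
from typing import Callable

class Agent:
    def __init__(self, name: str):
        self.name = name
        self.tools: dict[str, Callable] = {}

    def register_tool(self, tool_name: str, fn: Callable) -> None:
        """Expose an enterprise capability (CRM lookup, file op, external API call)."""
        self.tools[tool_name] = fn

    def act(self, tool_name: str, *args):
        """Dispatch one autonomous step through a registered tool."""
        if tool_name not in self.tools:
            raise KeyError(f"unknown tool: {tool_name}")
        return self.tools[tool_name](*args)

# Usage: an agent that reads CRM records through a registered tool.
crm = {"ACME-001": {"account": "Acme Corp", "status": "renewal due"}}
agent = Agent("renewals-bot")
agent.register_tool("crm_lookup", lambda record_id: crm.get(record_id))
record = agent.act("crm_lookup", "ACME-001")
print(record["status"])  # renewal due
```

The point of a standard at this layer is that the `register_tool`/`act` contract, not the tool's implementation, is what every framework and vendor would agree on.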
The key architectural decision is the memory model. Agent Toolkit includes three tiers of agent memory: in-context working memory (ephemeral, fast), external vector store memory (persistent, searchable), and a new structured state store designed for long-running agent workflows that persist across days or weeks. The third tier is the most novel — most existing agent frameworks treat memory as either ephemeral or unstructured.
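The three tiers described above can be illustrated with a toy model. This is a speculative sketch, not the toolkit's actual design: all class names are hypothetical, and a plain substring match stands in for vector similarity search:

```python
# Hypothetical sketch of a three-tier agent memory model as described in
# the article. All names are invented; this is not the Agent Toolkit API.
import time
from dataclasses import dataclass, field

@dataclass
class WorkingMemory:
    """Tier 1: ephemeral in-context memory — lives only for the current session."""
    items: list = field(default_factory=list)

    def add(self, item) -> None:
        self.items.append(item)

    def clear(self) -> None:
        self.items.clear()

@dataclass
class VectorMemory:
    """Tier 2: persistent, searchable memory. A toy substring match
    stands in here for real embedding similarity search."""
    docs: list = field(default_factory=list)

    def store(self, text: str) -> None:
        self.docs.append(text)

    def search(self, query: str) -> list:
        return [d for d in self.docs if query.lower() in d.lower()]

@dataclass
class StructuredStateStore:
    """Tier 3: durable keyed state for workflows spanning days or weeks."""
    state: dict = field(default_factory=dict)

    def checkpoint(self, workflow_id: str, payload: dict) -> None:
        self.state[workflow_id] = {"payload": payload, "ts": time.time()}

    def resume(self, workflow_id: str):
        entry = self.state.get(workflow_id)
        return entry["payload"] if entry else None

# Usage: checkpoint long-running work, reset the context, resume later.
wm, vm, ss = WorkingMemory(), VectorMemory(), StructuredStateStore()
wm.add("draft reply to ticket #123")
vm.store("Customer prefers email contact")
ss.checkpoint("ticket-123", {"step": "awaiting_approval"})
wm.clear()  # context window reset between sessions; tiers 2 and 3 survive
print(ss.resume("ticket-123"))  # {'step': 'awaiting_approval'}
```

The design point the third tier addresses: working memory vanishes when the context resets and vector memory is unstructured, so neither can reliably tell a resumed agent *where it was* in a multi-week workflow.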
Why Now
Nvidia's timing is deliberate. The company has watched the agentic AI space fragment across dozens of incompatible frameworks — LangChain, CrewAI, AutoGen, and a long tail of proprietary implementations from the major hyperscalers. Each framework makes different assumptions about how agents store state, call tools, and coordinate with each other. The result is an ecosystem that is technically vibrant but practically difficult to integrate into production enterprise workflows.
By releasing an open-source standard with significant adoption commitments before the framework war fully plays out, Nvidia is attempting to do what it did with CUDA for GPU computing: establish the default infrastructure layer that everything else builds on.
The Strategic Play
This is not purely altruistic. Every enterprise agent that runs on Agent Toolkit carries hardware preferences that point toward Nvidia accelerators. The company is planting a flag in the agentic AI infrastructure layer at the moment when enterprises are beginning to move from AI experiments to AI operations — and when the hardware requirements for running large agent fleets at scale are becoming a material procurement decision.