On Monday, the Linux Foundation announced the Agentic AI Foundation, and the press releases hit all the right notes: "neutral governance," "open standards," "community-driven innovation." Anthropic donated its Model Context Protocol. OpenAI contributed AGENTS.md. Block threw in goose. Nearly every major tech company signed on as a member. The stated goal is to ensure AI agents "evolve transparently and collaboratively."
It sounds like the industry coming together in the spirit of open source. But look closer at the timing, the membership, and what's actually being standardized, and a different picture emerges. This isn't cooperation breaking out; it's consolidation. What's being poured, like a building's foundation, is the set of assumptions that will define what tomorrow's AI systems are allowed to be.
Why Agents Are the Battleground
To understand why this matters, you need to understand the shift that's coming. AI is moving from systems that respond to prompts to systems that take independent action — browsing the web, writing and executing code, managing files, booking travel, coordinating with other AI systems. The marketing term is "agentic AI," and it represents a fundamental change in how humans interact with software. Instead of clicking through interfaces designed for human eyes, you delegate tasks to an agent that decides how to accomplish them.
If this vision materializes, agents become the new interface layer between humans and the digital world. And whoever controls the protocols agents use — how they connect to services, how they understand context, how they coordinate — controls the plumbing of that future. That's what's being standardized now.
What's Actually Being Standardized
The three donated projects form a complete stack, each addressing a different layer of the agent problem.
MCP (Model Context Protocol) defines how AI agents connect to tools, databases, and external services. Think of it as the USB-C of the agent era: a universal connector that lets any agent talk to any service that implements the protocol. Before MCP, connecting an AI model to, say, a database or a code repository meant writing custom integration code. MCP standardizes that interface, making it theoretically possible to build once and connect anywhere.
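To make the "USB-C" analogy concrete, here is a minimal sketch of the wire format involved. MCP is built on JSON-RPC 2.0, and `tools/call` is the method name the published spec uses for tool invocation; the tool name and arguments below are hypothetical, and a real client would also perform an initialization handshake first.

```python
import json

def make_tool_call(request_id, tool_name, arguments):
    """Build a JSON-RPC 2.0 request of the kind an MCP client sends to
    invoke a tool on a server. The "tools/call" method comes from the MCP
    spec; the specific tool and arguments are invented for illustration."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    }

# Hypothetical example: asking a database-backed MCP server to run a query.
req = make_tool_call(1, "query_database", {"sql": "SELECT 1"})
print(json.dumps(req, indent=2))
```

The point of the standard is that this same envelope works against any conforming server, which is what replaces the per-service integration code.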
MCP isn't the only approach to this problem. LangChain has its own tool specification; the open-source Ollama project has tool conventions for local models; E2B offers sandboxed code execution environments with their own API patterns. But MCP has achieved escape velocity: over 10,000 published MCP servers, adoption by Claude, ChatGPT, Gemini, Cursor, VS Code, and Microsoft Copilot. Alternatives aren't disappearing, but they're now defining themselves relative to MCP instead of competing head-to-head.
AGENTS.md is deceptively simple: a markdown file that sits alongside README.md in a code repository, telling AI coding tools how to behave in that specific codebase. It specifies coding conventions, build steps, testing requirements, and other project-specific guidance that helps agents work safely and predictably. Developers embraced it quickly — over 60,000 projects in four months — because it aligns with how AI coding tools already infer context. AGENTS.md just gives them a predictable, canonical place to look.
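The format imposes almost no structure: it's ordinary markdown, and tools simply read whatever guidance is there. A hypothetical example, with all section names and project details invented:

```markdown
# AGENTS.md

## Build
- Install dependencies with `npm install`; build with `npm run build`.

## Testing
- Run `npm test` before committing; all tests must pass.

## Conventions
- TypeScript strict mode; avoid `any`.
- Never edit files under `generated/` — they are produced by the build.
```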
goose is Block's open-source agent framework, and it's the most concrete of the three contributions. Where MCP defines how agents connect and AGENTS.md defines how they understand context, goose provides an opinionated architecture for how agents actually run: managing sessions, orchestrating tool calls, handling multi-step workflows with error recovery. Thousands of Block engineers use it weekly for coding, data analysis, and documentation, according to Block. By contributing it to AAIF, Block is offering a working reference implementation — proof that the standards aren't just theoretical.
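goose itself is written in Rust, and nothing below reproduces its actual API. As a generic illustration of the layer it occupies, here is a hypothetical sketch of the session-and-tool-call loop such a framework manages, including the error recovery the architecture is responsible for:

```python
def run_session(model, tools, task, max_steps=10):
    """Generic agent loop, illustrative only (not goose's real API):
    ask the model for the next action, execute the named tool, feed the
    result back, and recover from tool failures by reporting them to the
    model rather than crashing the session."""
    history = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        action = model(history)  # hypothetical: returns a dict describing the next step
        if action["type"] == "finish":
            return action["answer"]
        tool = tools[action["tool"]]
        try:
            result = tool(**action["arguments"])
        except Exception as exc:  # error recovery: surface the failure as context
            result = f"tool error: {exc}"
        history.append({"role": "tool", "content": str(result)})
    return "step limit reached"
```

Everything a framework like goose adds on top — session persistence, parallel tool orchestration, safety checks — hangs off a loop shaped roughly like this one.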
Together, these three projects answer the fundamental questions: How do agents talk to services? How do agents understand project context? And how do you actually build one? It's not a complete answer to the agentic future, but it's enough to establish the rules of the road.
The HTTP Comparison Is Both Apt and Misleading
Block's press release explicitly invokes the W3C, expressing hope that AAIF becomes "what the W3C is for the Web: a set of standards and protocols that guarantee interoperability, open access, and freedom of choice." Jim Zemlin, the Linux Foundation's executive director, frames it as avoiding a future of "closed wall proprietary stacks."
The analogy to HTTP and the early web is instructive, but not quite in the way the foundation intends. HTTP emerged from academic research at CERN, was refined through IETF processes over years, and became a standard through broad, sometimes contentious consensus before commercial interests dominated the web. The path wasn't always smooth — anyone who remembers the browser wars or the SOAP-versus-REST debates knows that standardization can be ugly — but the process was at least notionally open to participants beyond the market leaders.
MCP and AGENTS.md, by contrast, were created by commercial AI labs, adopted rapidly by those labs' products and partners, and are now being institutionalized at the moment they achieve market dominance. The standards bodies of the 1990s web had their flaws, but they weren't founded by the same firms that already controlled the vast majority of the stack they were standardizing.
The difference matters. HTTP became the standard because it was the best available option in an open field. MCP is becoming the standard not because it emerged organically, but because the companies with the market power to define reality decided to align around it. One is consensus; the other is coordination. The outcome, interoperability, might look similar, but the power dynamics are inverted.
History offers cautionary examples of premature or poorly governed standardization. Remember WAP, the wireless protocol that was supposed to bring the web to mobile phones a decade before the iPhone made it irrelevant? Or OpenDoc, Apple's ambitious component architecture that IBM and others joined before it collapsed under its own complexity? Or CORBA, the "universal" middleware that became a case study in standards-by-committee dysfunction? Standards that arrive too early, or that reflect political compromises rather than technical consensus, can entrench flawed designs as easily as they enable interoperability.
The Timing Tells the Real Story
Why now? MCP is twelve months old. It has known issues with its OAuth implementation — not conceptual gaps, but incomplete specifications that leave security-critical details to individual implementers. There are active proposals to address this, but by normal open-source foundation standards, the protocol is premature for institutionalization. Kubernetes ran in production for years before the CNCF took stewardship. PyTorch had extensive real-world deployment before its foundation transition. MCP skipped that phase.
The day before the AAIF announcement, Zemlin gave a keynote in Tokyo where he reportedly cited analysis suggesting Chinese open-weight models now trail frontier American labs by only three to six months. The underlying analysis wasn't published, but the implication was hard to miss: for most commercial applications, that gap is negligible. If American labs wait for agentic protocols to mature organically, standards from elsewhere might emerge first.
Look at the membership list: AWS, Anthropic, Block, Bloomberg, Cloudflare, Google, Microsoft, OpenAI, Cisco, IBM, Oracle, Salesforce, SAP, Snowflake. The founding platinum members — those paying $350,000 for governance privileges — are uniformly American. This isn't just open governance; it's a perimeter. The pattern echoes earlier standards-body maneuvers — think of how mobile carriers coordinated around GSM, or how early browser vendors battled for influence over HTML extensions. AAIF is establishing American-controlled standards for agent interoperability before alternatives can gain traction.
This doesn't make the standards bad. MCP is technically sound and genuinely useful. But framing this move as purely about "transparency" obscures the strategic calculus behind it.
What's Not Being Standardized
The AAIF standardizes the plumbing: how agents connect to services, how they understand project context, how they coordinate. What it doesn't standardize is equally telling.
There's no economic layer. No protocol for how agents pay for services they invoke, handle micropayments for API calls, or negotiate access terms. If agents are going to transact autonomously — booking flights, making purchases, paying for compute — someone will need to define how that works. AAIF punts on this for now.
There's no identity framework. How does a service verify that an agent is authorized to act on behalf of a particular user? How do agents prove their own identity to other agents? MCP has some OAuth integration, but it's incomplete. There's no standard credential format for agent identity, no protocol for capability scoping ("this agent can read my calendar but not send emails"), no mechanism for revoking agent access across services.
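No such credential format exists yet. As a thought experiment only, a scoped agent credential might look something like the following; every field name here is invented, and nothing like it appears in MCP or any current standard:

```python
import time

# Hypothetical scoped credential for an agent acting on a user's behalf.
# All field names are invented for illustration; no standard defines this.
credential = {
    "agent_id": "agent-7f3a",
    "acts_for": "user@example.com",
    "capabilities": [
        {"resource": "calendar", "actions": ["read"]},  # read-only access
        {"resource": "email", "actions": []},           # explicitly denied
    ],
    "expires_at": int(time.time()) + 3600,              # short-lived by default
    "revocation_url": "https://idp.example.com/revoke", # cross-service revocation
}

def allows(cred, resource, action):
    """Default-deny capability check: grant only what is listed."""
    for cap in cred["capabilities"]:
        if cap["resource"] == resource:
            return action in cap["actions"]
    return False

print(allows(credential, "calendar", "read"))  # True
print(allows(credential, "email", "send"))     # False
```

The hard parts — verifying the chain from user to agent, and honoring revocation across unrelated services — are exactly what no one has standardized.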
There's no safety standard beyond whatever security properties are baked into MCP's auth flow. Nothing about rate limiting to prevent agents from overwhelming services. Nothing about audit trails for agent actions. Nothing about kill switches or containment when an agent behaves unexpectedly.
These gaps aren't oversights — they're scope limitations. The foundation is establishing interoperability at the technical layer while leaving the economic, identity, and safety layers to be fought over later. Whether that's wisdom or deferral depends on whether you think those battles will be easier or harder once the plumbing is set.
The Skeptic's Counterargument
Not everyone is celebrating. Gartner analysts have publicly suggested that many agent-based enterprise projects will be cancelled for lack of business value; their prediction is that by 2028, fewer than 40% of enterprises that deploy AI agents will see productivity gains. Microsoft reportedly reduced growth targets for its Azure Foundry agent product, though the company disputed characterizations that overall AI quotas were lowered. The infrastructure may be maturing faster than the demand for it.
There's also the "logo alliance" concern: will AAIF become real infrastructure, or just another industry consortium that issues press releases while member companies pursue proprietary advantages? The Linux Foundation's track record is strong — it successfully stewards Linux, Kubernetes, Node.js, and dozens of critical projects — but foundation membership doesn't guarantee companies will actually cede control. Google's participation in open standards historically hasn't stopped it from prioritizing proprietary advantages in its own products.
And there's a deeper question: is standardization at this stage even possible? LLM capabilities are advancing so rapidly that protocols designed for today's agents might be obsolete within eighteen months. If the next generation of frontier models introduces capabilities that don't fit MCP's model of tool invocation, does the standard adapt or become a constraint? Locking in standards now could entrench suboptimal designs as easily as it enables interoperability.
What's Actually at Stake
The real significance of the AAIF isn't about any individual protocol. It's that the industry has collectively decided that the agentic era is close enough to warrant coordination. Companies that were direct competitors last year on everything from model architecture to safety approaches are now agreeing to share infrastructure. That doesn't happen unless everyone believes the transition is imminent and the cost of fragmentation is higher than the cost of cooperation.
For developers, this is probably good news. Building once and running across platforms is better than building vendor-specific integrations. For enterprises evaluating agent adoption, foundation governance provides assurance that these protocols won't be deprecated arbitrarily.
For everyone else — the users who will live with these systems but never configure them — the implications are murkier. If AI agents become the primary interface to the internet — browsing, purchasing, communicating on our behalf — then whoever controls the protocols agents use has enormous power. The AAIF's "neutral governance" is better than outright corporate control, but it's still a small club making decisions that will affect billions of users who never signed up for the agentic future in the first place.
Concrete Sets Fast
The metaphor of "pouring concrete" is deliberate. Foundation work happens before the building goes up, and it constrains everything that follows. You can renovate the upper floors, but you're stuck with where the load-bearing walls go.
The AAIF is laying those foundations now. Maybe the timing is right — AI agents are developing fast, and coordination before fragmentation is genuinely valuable. Maybe it's premature — locking in protocols before the use cases mature could entrench suboptimal designs. Probably it's both, in different ways for different pieces.
What's certain is that this is a declaration of intent. The companies building AI have decided the agentic era is coming soon enough to standardize, and they've chosen to do it together, under their collective control, before the field gets more crowded. Whether you call that open governance or coordinated positioning depends on your priors.
Either way, the concrete is setting. We'll spend the next decade arguing about the upper floors of the agentic stack, but the load-bearing walls are being chosen right now — and not by the people who will have to live inside them.