In mid-2024, McDonald's made a quiet announcement: after three years of testing AI-powered drive-through ordering across more than 100 U.S. restaurants, the company was pulling the plug on its partnership with IBM. The technology would be removed by July 26.
The official explanation was polished corporate-speak about "exploring voice ordering solutions more broadly." Franchisees told a different story. BTIG analyst Peter Saleh reported that accuracy remained "in the low-to-mid 80% range," well below the roughly 95% needed to compete with human workers. The system struggled with accents and dialects. Operating costs ran high. And according to the same report, at McDonald's Worldwide Convention in Barcelona "all of the orders witnessed there were incorrect."
TikTok documented the chaos: the AI piling 260 McNuggets onto one confused customer's order, adding butter packets instead of water, interpreting simple requests as something entirely different. McDonald's had spent $300 million acquiring the companies that became this technology. The pilot lasted three years. It still couldn't reliably take a drive-through order.
This wasn't a technology failure. The AI worked — just not inside McDonald's. The drive-through touched operations (workflow redesign), HR (staffing models), legal (privacy lawsuits emerged), franchisee relations (who pays for failures?), customer experience (brand reputation), and IT (integration with legacy systems). No single department owned it. No single executive was accountable when it broke.
McDonald's isn't an outlier. It's a preview.
The Paradox: AI Disappearing from Job Descriptions as It Becomes Essential
In late 2025, career platform Ladders revealed a counterintuitive trend: explicit mentions of "AI skills" in job listings are falling across nearly every job category — even as employers expect fluency. Design and UX roles saw AI mentions drop from 56.7% in 2021 to 44.6% in 2025. Software engineering followed.
"It will be mentioned less and less in the same way that Microsoft Office isn't mentioned in job postings anymore," Ladders CEO Marc Cenedella told Business Insider.
The expectation hasn't disappeared. It's become assumed. Pipedrive CTO Agur Jõgi put it bluntly: "It's just like a ticket to the game."
This paradox — AI vanishing from formal requirements while becoming informally mandatory — signals something deeper than a hiring trend. AI doesn't just automate tasks. It dissolves the boundaries that define how companies organize work.
The Shadow AI Economy
In July 2025, MIT's Project NANDA released one of the most comprehensive studies of enterprise AI adoption. Based on 52 executive interviews, 153 leadership surveys, and analysis of over 300 public deployments, the findings were stark: according to the researchers, 95% of enterprise AI pilots delivered no measurable impact on profit and loss. Only 5% created significant value.
But the more interesting finding wasn't about corporate failure. It was about individual success.
While only 40% of companies have official LLM subscriptions, workers at over 90% report regular use of personal AI tools — ChatGPT, Claude, Copilot — for daily tasks. Most of this happens without IT approval. A "shadow AI economy" has emerged.
MIT captured the dynamic through interviews. A corporate lawyer at a mid-sized firm, whose organization had invested $50,000 in a specialized contract analysis tool, consistently defaulted to her personal ChatGPT subscription. "Our purchased AI tool provided rigid summaries with limited customization options," she told researchers. "With ChatGPT, I can guide the conversation and iterate until I get exactly what I need."
A $20-per-month consumer tool outperforming a $50,000 enterprise system. It's not that enterprise AI is technically worse. It's that organizations haven't figured out how to make AI work organizationally.
The Accountability Gap
Traditional automation replaced manual labor by encoding fixed, repeatable procedures into scripts. LLMs do something more disruptive: they amplify cognitive work across organizational boundaries. A single employee can now prototype tools, automate workflows, and redesign processes that previously required coordinated specialist teams.
This should be liberating. Instead, it's paralyzing.
Consider what McDonald's actually faced. The AI sat at the intersection of responsibilities the org chart assumed would never overlap. When accuracy dropped, who fixed it? When costs overran, who owned the budget? When a franchisee's customer got 260 McNuggets, who apologized, and who paid?
According to Harvard Business Review research, most AI initiatives fail because "organizations aren't built to sustain them." A 2025 S&P Global survey of enterprise leaders found that 42% of companies had abandoned most of their AI initiatives, up from 17% in 2024. The average enterprise scrapped 46% of proofs-of-concept before they reached production.
The pattern repeats: AI works in isolation; AI fails at organizational interfaces.
Human-in-the-Loop Without a Human in Charge
Even as AI takes on more tasks, organizations still need humans to oversee, validate, and contextualize its output. Human-in-the-loop (HITL) oversight is now a legal requirement for high-risk systems under the EU AI Act. But most companies have no idea how to structure it.
A U.S. health insurer automated claims processing with machine learning. The system was systematically rejecting out-of-network emergency claims due to misclassified provider types in training data. No one caught it — because no one's job was to catch it. Human adjudicators eventually identified the pattern, corrected the labels, and recalibrated the model. By then, the organization faced potential litigation.
The problem wasn't that humans were unavailable. It was that no one had defined whose job it was to watch the machine. The org chart assumed AI outputs were trustworthy by default.
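Structuring HITL is less about adding reviewers than about making review an explicit, owned step in the pipeline. As a rough sketch only (not the insurer's actual system, and with hypothetical names and thresholds), a claims workflow might route low-confidence or high-impact model decisions to a queue that a specific team owns, so "watching the machine" is a defined job rather than an afterthought:

```python
from dataclasses import dataclass

# Hypothetical thresholds and queue names -- illustrative only.
CONFIDENCE_FLOOR = 0.85                                  # below this, a human must decide
HIGH_IMPACT_CLAIM_TYPES = {"out_of_network_emergency"}   # auto-denial here is never silent

@dataclass
class ClaimDecision:
    claim_id: str
    claim_type: str
    model_verdict: str   # "approve" or "deny"
    confidence: float

def route(decision: ClaimDecision) -> str:
    """Decide whether a model verdict ships automatically or goes to a named human queue."""
    needs_review = (
        decision.confidence < CONFIDENCE_FLOOR
        or (decision.model_verdict == "deny"
            and decision.claim_type in HIGH_IMPACT_CLAIM_TYPES)
    )
    if needs_review:
        # Ownership is explicit: this queue belongs to the claims-adjudication team,
        # and its backlog is a tracked operational metric, not a courtesy.
        return "claims_adjudication_review_queue"
    return "auto_finalize"

if __name__ == "__main__":
    borderline = ClaimDecision("C-1042", "out_of_network_emergency", "deny", 0.91)
    print(route(borderline))  # -> claims_adjudication_review_queue
```

The point of a pattern like this isn't the threshold value; it's that the routing rule names an accountable owner for the cases the model should never decide alone.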
Who Wins, Who Loses
The structural transformation AI demands will not be evenly distributed.
CIOs are losing relevance. When employees deploy AI faster than IT can evaluate it, the gatekeeper role breaks down. Microsoft found that 78% of AI-using workers bring their own tools — outside IT oversight. This isn't a security problem to be solved; it's a power shift to be managed.
One security team at a financial services firm discovered employees had been using ChatGPT for client communications for months — only after a compliance audit flagged unusual data patterns. IT hadn't approved it. IT hadn't even known. The governance framework assumed IT controlled what tools entered the workflow. That assumption was already obsolete.
Middle management is being compressed. AI synthesizes information, drafts communications, coordinates tasks — exactly what supervisors traditionally did. Research tracking 62 million workers found junior positions "shrinking at companies integrating AI." The effect is cascading upward.
Procurement and legal will become power centers. As AI vendor relationships become strategic, the teams that negotiate contracts and define acceptable use accumulate influence. MIT found organizations partnering with specialized vendors succeed twice as often as those building internally. Someone has to manage those partnerships.
Individual contributors with AI fluency gain leverage. The lawyer using ChatGPT instead of her company's $50,000 tool isn't violating policy — she's demonstrating that the org chart has become fiction. People who deliver results by orchestrating AI across boundaries will become indispensable, regardless of title.
The Right Question
The standard question — How many jobs will AI eliminate? — misses the point. Jobs aren't disappearing. They're being re-architected around boundaries the org chart doesn't recognize. The right question:
Can your organization handle a world where execution is democratized but accountability isn't?
McDonald's couldn't. The health insurer almost couldn't. According to MIT's data, most enterprises still can't. The shadow AI economy isn't a policy violation to police. It's a signal. Employees have already figured out how to cross the "GenAI divide" — with personal subscriptions and borrowed time, outside the org chart, because the org chart became an obstacle.
Organizations that can't redesign accountability will keep blaming AI for failures that are entirely human.