Workslop

The AI slop bucket overflow: “Workslop” is the hidden productivity drain no one’s measuring

There's a new term making the rounds in corporate America, and it perfectly captures a frustration that's been building since ChatGPT entered the workplace: workslop. It's the AI-generated equivalent of that colleague who forwards you a 47-slide PowerPoint deck that somehow says nothing at all, except now it's happening at machine speed, in every department, and often with management's blessing.

The phenomenon is so pervasive that it has its own economics. According to new research from BetterUp Labs and Stanford's Social Media Lab, 40 percent of US workers have received workslop from their peers in the past month. Each instance takes an average of nearly two hours to untangle—time spent identifying errors, filling gaps, and generally doing the work that should have been done in the first place. For a 10,000-person company, researchers estimate that adds up to roughly $9 million in lost productivity annually.
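To see how a number like that comes together, here is a rough back-of-envelope sketch in Python. The 40 percent prevalence and the roughly two-hour cleanup time are the study figures quoted above; the one-incident-per-month frequency and the $93 loaded hourly cost are assumptions made purely to show the shape of the arithmetic, not the researchers' actual model.

```python
# Back-of-envelope reconstruction of a workslop cost estimate (illustrative only).
# The 40% prevalence and ~2 hours per incident come from the article above;
# the once-a-month frequency and the $93/hour loaded cost are assumptions.

headcount = 10_000
share_receiving_workslop = 0.40   # from the BetterUp Labs / Stanford survey
hours_to_untangle = 2             # ~ per incident, per the article
incidents_per_month = 1           # assumption
loaded_hourly_cost = 93           # assumption (USD, salary plus overhead)

affected = headcount * share_receiving_workslop
annual_hours = affected * incidents_per_month * hours_to_untangle * 12
annual_cost = annual_hours * loaded_hourly_cost

print(f"{annual_hours:,.0f} hours ≈ ${annual_cost:,.0f} per year")
# -> 96,000 hours ≈ $8,928,000 per year, in the same ballpark as the study
```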

But the financial toll may be the least of it. The real damage is happening at the level of trust and collaboration.

The trust tax of invisible AI

Here's the uncomfortable truth that global research from Melbourne Business School has uncovered: Two-thirds of employees who use AI at work have relied on its output without actually evaluating it. They've hit "generate," skimmed the results, and clicked send.

Just this month, consulting giant Deloitte Australia issued a formal apology after delivering a $440,000 report to the federal government riddled with AI-generated errors. It's the corporate equivalent of turning in a term paper you didn't read—except the stakes involve taxpayer money and public policy.

The Melbourne Business School study, which surveyed over 32,000 workers across 47 countries, reveals that workslop damages more than schedules. Recipients of AI slop don't just waste time fixing errors—they update their professional assessment of the sender. People who deliver workslop are perceived as less reliable, less creative, and less trustworthy. In essence, AI becomes not a productivity enhancer but a credibility hazard.

Perhaps most troubling: the majority of workers are hiding their AI use. Sixty-one percent avoid revealing when they've used AI, and 55 percent pass off AI-generated material as their own work. When errors slip through, there's no paper trail showing where the human verification should have happened.

The root cause: We're using AI for the wrong things

At its core, the workslop problem stems from a fundamental misalignment between what AI is good at and what we're asking it to do. Large language models excel at pattern matching, summarization of existing information, and generating variations on themes. They're remarkably fluent but fundamentally derivative.

The technical reality makes this worse. LLMs don't "know" anything—they predict statistically likely next tokens based on training data. This architecture creates specific failure modes: hallucinated citations, confidently incorrect assertions, and outputs that sound authoritative while being substantively hollow. The models are essentially sophisticated autocomplete engines optimized for plausibility, not accuracy.
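A toy sketch makes that concrete. The vocabulary and probabilities below are invented for illustration; a real model works over tens of thousands of tokens and billions of parameters, but the mechanism is the same: pick a likely continuation, not a verified fact.

```python
import random

# Toy illustration of next-token prediction. These probabilities are
# invented for the example; they come from no real model.
next_token_probs = {
    "The report was reviewed by": [
        ("the", 0.35), ("senior", 0.25), ("an", 0.20), ("three", 0.20),
    ],
}

def sample_next(context: str) -> str:
    # Weighted random choice over whatever continuations look plausible.
    tokens, weights = zip(*next_token_probs[context])
    return random.choices(tokens, weights=weights, k=1)[0]

# Each call returns a plausible-sounding continuation; nothing anywhere
# checks whether the resulting sentence is true.
print(sample_next("The report was reviewed by"))
```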

Add to this the growing trend of prompt inflation—users stuffing prompts with instructions to make AI sound "more professional" or "more detailed"—and you get outputs that are simultaneously longer and less meaningful. More words, less signal.

The problem arises when we deploy these tools for tasks requiring original thinking, contextual judgment, or domain expertise—then fail to critically evaluate the results. AI doesn't "understand" your project's constraints, your audience's needs, or the subtle political dynamics of your organization. It produces statistically plausible text, which is not the same as meaningful content.

The Melbourne researchers found that employees often skip a crucial first step: asking whether AI is actually the best tool for a given task. Instead, there's a rush to automation, a sense that using AI is inherently innovative or efficient, regardless of context.

"AI isn't replacing your job—it's replacing your standards."

Fixing the workslop problem

The good news is that workslop isn't an inevitable consequence of AI tools—it's a consequence of how people use them. That means the problem is solvable, though it requires effort at both individual and organizational levels.

For individual workers, the fix involves three straightforward steps. First, interrogate whether AI is actually appropriate for the task at hand. If you can't explain or defend the output it generates, don't use it. Second, treat AI output as a draft requiring editorial review. Check facts, test code, and tailor content to your specific context and audience. Third, when stakes are high, be transparent about your AI use and what you verified—signal rigor rather than trying to hide the seams.

For organizations, the solution requires investment in what researchers call "governance, AI literacy, and human-AI collaboration skills." That's consultant-speak for: figure out your AI strategy, train your people properly, and stop treating AI adoption as a free-for-all.

Some companies are already setting precedents. Microsoft now requires explicit human sign-offs for any AI-generated customer communications. Others have established "AI review boards" that audit high-stakes outputs before they leave the building. These aren't bureaucratic hurdles—they're accountability mechanisms designed to catch workslop before it metastasizes.
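In engineering terms, a sign-off gate can be as blunt as refusing to release unreviewed AI output. The sketch below is hypothetical and not based on any particular company's workflow; it simply shows how that accountability can be made structural rather than voluntary.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical sign-off gate: AI-drafted content cannot be released
# until a named human takes responsibility for it.

@dataclass
class Draft:
    body: str
    ai_generated: bool
    reviewed_by: Optional[str] = None  # name of the human who signed off

def release(draft: Draft) -> str:
    # Block any AI-generated draft that lacks a human reviewer on record.
    if draft.ai_generated and draft.reviewed_by is None:
        raise PermissionError("AI-generated draft requires a human sign-off")
    return draft.body

# A reviewed draft goes out; an unreviewed one raises before it can be sent.
print(release(Draft(body="Dear customer...", ai_generated=True, reviewed_by="J. Chen")))
```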

The Melbourne Business School study found that AI literacy and training are associated with more critical engagement and fewer errors. Yet less than half of employees report receiving any training or policy guidance. Companies are effectively handing workers powerful new tools without instruction manuals, then wondering why things go sideways.

Effective governance means spelling out when AI is and isn't appropriate, establishing accountability for outputs, and tracking outcomes. It means identifying high-value use cases rather than applying AI indiscriminately. And critically, it means building AI literacy alongside policies—because rules without understanding just create creative workarounds.

The collaboration crisis

Perhaps the most concerning finding in the research is that half of surveyed workers use AI instead of collaborating with colleagues. This isn't just about efficiency—it's about the slow erosion of organizational capability.

When you ask ChatGPT instead of checking with your teammate, you lose the opportunity for dialogue, refinement, and collective problem-solving. You also lose the relationship-building that happens through working together. The knowledge that gets encoded through collaborative work—the shared context, the institutional memory, the trust networks—doesn't transfer to AI interactions.

"If AI adoption means less human collaboration, we may be optimizing for short-term individual productivity at the expense of long-term organizational capability."

This is where workslop reveals itself as more than an annoyance—it's a symptom of a deeper shift in workplace dynamics. We're trading human coordination for algorithmic isolation, then acting surprised when outputs lack the coherence that emerges from genuine collaboration.

The path forward

The workslop crisis is ultimately a maturity problem. We're in the awkward adolescence of workplace AI adoption, where the technology is widely available but best practices haven't caught up. Companies rushed to integrate AI tools without the supporting infrastructure of training, governance, and cultural adaptation.

The solution isn't to abandon AI; research shows that, used appropriately and with proper oversight, it can genuinely enhance performance. Rather, it's to develop what we might call "AI discernment": the ability to recognize which tasks benefit from AI assistance and which require human judgment.

That means treating AI as a collaborative tool requiring active human partnership, not a "set it and forget it" automation solution. It means being honest about the limitations of current systems, even as we explore their possibilities. And it means acknowledging that verification isn't optional overhead—it's the core of the job.

The current trajectory is unsustainable. Either we develop more rigorous practices around AI use, or we'll continue drowning in AI-generated busywork while wondering why our productivity gains never materialized.

AI isn't drowning us in slop because it's powerful—it's drowning us because we're lazy editors. Until that changes, the overflow will continue.

