
AI Slop Is Drowning Your Company. Nate B Jones Has a Mop.

If your workplace feels like a dumping ground of mediocre AI-written memos, you’re not imagining it.

by Mr Moonlight

Nate B Jones has been sounding the alarm about “AI slop,” the flood of shallow, directionless content that fills business systems when teams can’t define what good looks like.

His message is simple and a little uncomfortable: the problem isn’t the model. The problem is your standards. In his video and newsletter, Jones argues that businesses are failing not because AI is dumb, but because their instructions are.

The Real Bottleneck: Clarity, Not Capability

AI has dropped the cost of business writing to almost zero, but that’s not the win it sounds like. As Nate points out in his full story and prompt breakdown, companies are drowning in AI-generated reports because they’ve mistaken volume for quality.

The real bottleneck is no longer how fast you can write. It’s how clearly you can articulate what you need.

Every vague instruction gets amplified by AI. If a spec or brief leaves room for interpretation, the model will fill it with confident nonsense. Nate calls this the specification bottleneck, where teams rely on instinct and “I’ll know it when I see it” judgment instead of explicit criteria.

His fix is practical: define concrete, testable quality criteria. Every piece of business writing should have standards that can be verified, not vibes that can be debated.

The Solution: Intent-Driven Writing

In Nate’s framework, “intent-driven” writing means every document must serve a goal. A report should enable a decision. A memo should clarify trade-offs. If a reader can’t tell what the document is supposed to accomplish, it’s useless, no matter how polished it sounds.

He argues that AI forces teams to externalise their “tacit knowledge”: the unspoken expectations that used to live in people’s heads. Once those expectations are written down as rules or prompts, AI can work with them. If they aren’t, you get generic filler text that sounds smart but says nothing.

Nate compares the new discipline of AI writing to product management: you’re not just creating words, you’re creating specifications for decisions. Each paragraph is part of the logic of the business, not a slot in a template.

Scale Evaluation, Not Just Generation

Most organisations, Nate says, have embraced AI for writing but ignored its potential for evaluation. Everyone’s generating drafts, but almost no one’s checking them. He recommends using AI to run first-pass evaluations based on explicit quality checks before a human review.

In his prompt example, every decision must have a name, every action item an owner, and every open question a next step. If any requirement fails, the AI revises before sending the draft onward.

This structure turns “make it better” feedback into measurable, testable criteria. The result isn’t perfection—it’s consistency and sanity.
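To make that concrete, the criteria Nate describes (every decision named, every action item owned, every open question given a next step) can be checked mechanically before a human ever reads the draft. The sketch below is illustrative, not Nate’s actual prompt: it assumes a draft whose decisions, action items, and open questions have already been extracted into simple records, and the field names (`name`, `owner`, `next_step`) are our own.

```python
# A minimal sketch of a rule-based first-pass evaluation, assuming the draft's
# structured elements are captured as dicts. Field names are illustrative.

def evaluate_draft(draft: dict) -> list[str]:
    """Return a list of failed criteria; an empty list means the draft passes."""
    failures = []
    for i, d in enumerate(draft.get("decisions", [])):
        if not d.get("name"):
            failures.append(f"decision {i} has no name")
    for i, a in enumerate(draft.get("action_items", [])):
        if not a.get("owner"):
            failures.append(f"action item {i} has no owner")
    for i, q in enumerate(draft.get("open_questions", [])):
        if not q.get("next_step"):
            failures.append(f"open question {i} has no next step")
    return failures

draft = {
    "decisions": [{"name": "Adopt weekly review"}],
    "action_items": [{"task": "Draft rollout plan"}],  # missing an owner
    "open_questions": [{"question": "Budget?", "next_step": "Ask finance"}],
}
print(evaluate_draft(draft))  # → ['action item 0 has no owner']
```

The point of the exercise is that each failure message names a specific, fixable gap, which is exactly what turns “make it better” into a revise step an AI (or a person) can act on.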

Failure Examples: Teach AI What Bad Looks Like

One of Nate’s smartest insights is to teach AI with examples of failure, not just success. If your press releases are too hyped, your memos too vague, or your technical docs too prescriptive, show those examples.

In his view, every team should maintain a “failure file” of bad documents to illustrate what to avoid. He insists that clarity about what bad looks like is often more useful than vague ideals of “good writing.”
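A failure file can be wired directly into prompts. The sketch below assembles anti-examples into an instruction block; the `(text, why_it_fails)` structure and the sample entries are our own assumptions about what such a file might contain, not Nate’s format.

```python
# An illustrative sketch of building a prompt from a "failure file" of bad
# writing, stored here as (example_text, why_it_fails) pairs.

FAILURE_FILE = [
    ("We're thrilled to announce a game-changing paradigm shift!",
     "too hyped: no concrete claim a reader could verify"),
    ("We should circle back on alignment going forward.",
     "too vague: no decision, no owner, no next step"),
]

def build_prompt(task: str) -> str:
    lines = [task, "", "Avoid writing like these examples:"]
    for text, why in FAILURE_FILE:
        lines.append(f'- "{text}" ({why})')
    return "\n".join(lines)

print(build_prompt("Draft a one-paragraph product update memo."))
```

Because each entry explains *why* the example fails, the model gets a rule it can generalise, not just a string to avoid.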

The Voice Problem: Why AI Sounds Like Oatmeal

Nate also calls out the bland, “corporate oatmeal” tone that AI defaults to. The neutral, pseudo-professional voice flattens nuance and strips away conviction. Good writing, he says, must show range: clearly marking what’s certain, what’s speculative, and what’s risky. Without that, AI writing becomes diplomatic sludge.

He recommends prompting AI to explicitly tag uncertainty, label assumptions, and differentiate confidence levels. Otherwise, your documents become summaries of summaries—technically correct but strategically useless.
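One way to operationalise that advice is to append a confidence rubric to every writing task. The rubric wording and tags below are a sketch of our own, not a quoted Nate prompt:

```python
# An illustrative confidence rubric appended to a writing task, so the model
# labels certainty levels instead of flattening everything into one tone.

CONFIDENCE_RUBRIC = """\
Tag every claim with one of:
[CERTAIN]     backed by data we have
[ASSUMPTION]  something we believe but have not verified
[SPECULATIVE] a guess; say what would confirm or refute it
"""

def with_confidence_tags(task: str) -> str:
    return f"{task}\n\n{CONFIDENCE_RUBRIC}"

print(with_confidence_tags("Write a one-page memo on the Q3 churn spike."))
```

A reader of the resulting draft can then skim for `[ASSUMPTION]` and `[SPECULATIVE]` tags instead of guessing where the confident-sounding prose is actually uncertain.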

How to Stop Drowning in AI Slop

Nate’s prescription for escaping the AI slop trap is both simple and demanding:

  1. Define intent for every document.
  2. Specify quality criteria that can be tested.
  3. Include examples of failure alongside success.
  4. Use AI for evaluation, not just generation.
  5. Make sure every output helps someone decide something.

He argues that this shift will make teams think more, not less. Instead of guessing what “good” means, they’ll have to decide—and document—it. That discipline, not the AI itself, is what separates productive organisations from the ones drowning in unread reports.

The Big Picture: AI Didn’t Break Business Writing, It Exposed It

As Nate puts it, “AI slop at work is killing businesses.” But the real story is that AI just surfaced problems that were already there. The difference now is that you can’t hide vague thinking behind “best effort” prose.

The machines will write exactly what you ask for... and if you don’t know what you’re asking for, that’s your problem.

For leaders, his warning lands hard: “The alternative to clarity is not what we had before. The alternative is AI slop forever.”

You can explore his frameworks, prompts, and playbooks on NateBJones.com or dive deeper into his Substack. If you’re tired of swimming in AI-generated mush, his advice is clear. Get specific, define intent, and start writing like you mean it.
