Here is where this started.

A satirical post was reported as AI-generated slop. A LinkedIn employee thanked the reporter for the tip — without having read the post themselves. The reporter later admitted the post was not what they thought it was.

That exchange says more about the "AI slop" discourse than any definition could.

The problem with the term

I don't like the term "AI slop."

Not because the problem isn't real. It is. Feeds are full of low-effort, generic, mass-produced content.

But calling it "AI slop" puts the blame in the wrong place.

AI does not decide to publish. People do.

If someone uses AI to generate a shallow post, does not edit it, does not add context, does not check whether it actually says anything, and then publishes it for attention — that is not AI slop.

That is user slop.

Where the responsibility actually sits

The issue is not the tool. The issue is the absence of judgment, context, and ownership.

AI will not save you from yourself. A generator does not decide what is worth saying. It does not know whether the output serves anyone. It does not care whether it gets published.

The person who hits publish does.

When content fails — when it is hollow, generic, or manipulative — that failure belongs to the person who chose to release it. Blaming the tool is a way of avoiding that accountability.

What the dictionary says

Merriam-Webster recently added a new sense of "slop": "digital content of low quality, produced usually in quantity by means of artificial intelligence."

By means of. Not by.

Even the definition locates the agency correctly. AI is the means. The decision to produce and distribute is human.

This is part one of two. Part two looks at what happens when platforms — not users — are expected to solve the problem. Read: The Network Hygiene Problem →