AI slop is annoying. But not every annoying post needs to become a moderation problem.
When a satirical post gets reported as AI-generated content — and a platform employee thanks the reporter without reading it — that is not a content quality problem. That is a judgment problem. And platforms cannot fix judgment.
Your feed is not neutral
If your feed is full of low-effort posts, the first question may not be "why doesn't LinkedIn remove this?" It may be "who did I invite into my network?"
LinkedIn is not a neutral stream. It is shaped by your connections, follows, reactions, and the people your network amplifies.
If you are connected to 3,000 people you do not know, your feed will reflect the judgment of 3,000 people you do not know. Some of them will like shallow posts. Some will repost engagement bait. Some will amplify AI-generated filler.
That is not only a platform safety issue. It is also a network hygiene issue.
The hunt for suspected tools
The problem with "AI slop" discourse is that it can quickly become a hunt for suspected tools instead of a discussion about quality, trust, and distribution.
Bad posts existed before generative AI. AI just made them cheaper to produce.
Soon, most content will be AI-assisted in some way. The useful distinction will not be AI vs. human. It will be judgment vs. no judgment.
An AI-assisted post written with care, edited for clarity, and published because it has something to say is not slop. A human-written post that is hollow, manipulative, and published for attention is.
What actually helps
The answer is not to turn users into AI police.
It is to look at behavior, distribution, and incentives. And on the user side, it is to be more intentional about who gets access to your feed.
Your feed is a reflection of your choices. Curate accordingly.