There's a specific kind of frustration that comes from modern AI support. It feels less like customer service and more like being dropped into an old 80s adventure game — the kind where progress only happens if you guess the one exact action the system will accept.
You try OPEN DOOR. Nothing happens.
You try LOOK AROUND. Nothing happens.
You repeat the same action, hoping the game will respond differently this time.
I experienced this firsthand when OpenAI mistakenly locked my ChatGPT Plus account.
30 messages. No human.
At first, every reply was identical: "If you continue to experience issues or believe there's been a mistake, please reach out directly, and we'll gladly assist further." I did. Repeatedly. The response never changed.
After many attempts, I changed the subject line and opened a new case. This time, the tone shifted: "You don't need to take further action at this time — a support specialist will review your case and follow up as soon as possible." That sounded like escalation. Like the case had finally reached a human.
I waited. Nothing happened. No follow-up. No human. No explanation.
Each message was signed with a friendly-sounding name like "Mark" or "Jaime," but the replies were clearly automated. No disclosure that this was AI. No option to reach a human. After more than 30 messages across multiple threads, I gave up and canceled my account.
This wasn't just poor UX. It was misleading by design. The system didn't merely fail to resolve the issue. It simulated progress — changing tone, implying escalation — without actually providing it.
Only a new email subject advanced the case. Replies reset the puzzle. The bot never acknowledged it was a bot. A human existed somewhere in the system, but only appeared if you triggered the right hidden condition. This isn't support. It's a puzzle game with an invisible flag.
The resolution myth
Some companies proudly say: "Our AI resolves 75% of support cases." But how many of those "resolved" cases are actually people who simply stopped trying?
That isn't resolution. It's abandonment.
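To see why the framing matters, here's a minimal sketch, with purely hypothetical numbers, of how the same hundred tickets can produce a 75% "resolution rate" or a 30% one, depending on whether silence counts as success:

```python
# Hypothetical breakdown of 100 closed support tickets (illustrative numbers only).
confirmed_fixed = 30  # user explicitly confirmed the fix worked
went_silent = 45      # user stopped replying; the bot auto-closed the ticket
escalated = 25        # reached a human, or still open

total = confirmed_fixed + went_silent + escalated

# The headline metric counts silence as success.
headline_rate = (confirmed_fixed + went_silent) / total
# An honest metric counts only confirmed fixes.
honest_rate = confirmed_fixed / total

print(f"Headline 'AI resolution rate': {headline_rate:.0%}")  # 75%
print(f"Confirmed resolutions: {honest_rate:.0%}")            # 30%
```

Same tickets; the only difference is whether abandonment gets counted as a win.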
Companies introduce AI support to reduce human workload — but the UX of the support system itself often breaks user trust long before a human ever gets involved.
A single clear line could have prevented the entire mess: "A human specialist will review your case. Response time is usually around seven days. Thank you for your patience." That's all users need: clarity, expectations, accountability.
Three things AI support must get right
1. Human support must exist. Even if delayed. Even if limited. AI should assist, not replace the endpoint.
2. AI must be a bridge, not a wall. If an issue can't be solved after a few attempts, the system should clearly state when and how a human will review it (see the sketch after this list).
3. Don't pretend to be human. If it's AI, say so. Don't use human names like "Mark" or "Jaime." Don't promise follow-ups that never happen.
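To make the "bridge, not wall" rule concrete, here is a minimal sketch of that escalation logic. Everything in it is an assumption for illustration (the `MAX_BOT_ATTEMPTS` threshold, the seven-day window, the wording); it is not any vendor's actual system:

```python
from datetime import datetime, timedelta

MAX_BOT_ATTEMPTS = 3           # assumed threshold: the bot gets three tries
HUMAN_SLA = timedelta(days=7)  # assumed window promised for a human reply

def next_reply(bot_attempts: int, resolved: bool) -> str:
    """Decide what the support system says next.

    Sketch of the rule above: the bot may try a few times, but past the
    threshold it must disclose itself, hand off to a human, and set a
    concrete expectation instead of looping.
    """
    if resolved:
        return "Glad that fixed it. (This reply was generated by an automated assistant.)"
    if bot_attempts < MAX_BOT_ATTEMPTS:
        return ("I'm an automated assistant. Here's another suggestion to try. "
                "If it doesn't help, reply and I'll escalate to a person.")
    # Past the threshold: stop looping, commit to a human and a date.
    deadline = (datetime.now() + HUMAN_SLA).strftime("%B %d")
    return ("I'm an automated assistant and I haven't been able to solve this. "
            f"A human specialist will review your case and reply by {deadline}.")

# A fourth failed attempt triggers the honest handoff message.
print(next_reply(bot_attempts=4, resolved=False))
```

A few lines of policy, and every complaint in this story disappears: the bot discloses itself, escalation is guaranteed rather than hidden behind a subject-line trick, and the user gets a date instead of a loop.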
The lesson is broader than OpenAI. Any company using AI support needs to remember that automation does not remove the need for accountability.
Support isn't where trust begins. It's where trust is tested — or lost.
The company that taught the world how to talk to AI still hasn't figured out how to talk back to humans.