Your Japanese localization is quietly killing user onboarding and retention. And it's not the translator's fault. It's the process most SaaS companies still use.

I was reading a Japanese article about a Toyota campaign powered by ElevenLabs. But it didn't make any sense. The sentences felt chopped up. Key context was missing. I had no idea who "Brock" was or what the campaign actually did.

So I checked the original English article. And suddenly everything clicked.

The Japanese was machine-translated. A whole paragraph was dropped, probably because of a hyperlink tag. Some lines were duplicated. And no human had touched it.

That's not just a translation issue. That's a broken user experience. And the sad part is: this wasn't an exception. It's happening everywhere.

What NMT actually does to your content

Neural Machine Translation is fast and cheap, which is exactly why most companies use it. But that speed comes at a cost: NMT is fragile.

If tags or dashes appear in the wrong place, NMT can misread structure and lose entire sentences. Negations, idioms, and long noun phrases get mistranslated or dropped. Users have to work to understand what should be obvious. The tone feels off — too stiff, too casual, or just weird. Key terms are translated differently each time. And you still need people to fix all this, meaning slower workflows and more expensive reviews.

A real example: ElevenLabs × Toyota

Here is the original English:

"Toyota's Northern California Dealers Association and creative agency H/L launched a new kind of branded experience — a dynamic voice-driven activation hosted by an AI-powered version of 49ers quarterback Brock Purdy."

And here is what the machine-translated Japanese becomes when back-translated into English:

"Toyota's Northern California Dealer Association and [sentence cut off] — A conversational fan experience built on ElevenLabs' Agents platform, this experience offers fans a natural and interactive conversation with Brock."

The subject was cut off. Sentences were duplicated. Brock's context disappeared entirely. "ElevenLabs" and "platform" were mashed into one word in the Portuguese version.

If a machine can't reproduce the original meaning after translation, what are your users supposed to do with it?

What LLMs make possible instead

Large Language Models can do more — if you use them right. Instead of translating the original, you can have an LLM write the target-language content directly.

This lets you focus on intent rather than word-by-word fidelity, adapt to the target audience and their cultural context, stay true to your brand voice and tone, and reduce or eliminate the need for post-editing.

Feed the LLM your original text. Add a prompt with clear instructions: what the content is for, who it's for, what tone it should have. Review the output lightly. Done well, this produces better content faster than MT + post-editing.
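The workflow above can be sketched as a small prompt builder. Everything here is illustrative: the function name, the field names, and the example brief are assumptions for the sketch, not a prescribed API. The resulting prompt would be sent to whichever LLM you use.

```python
def build_localization_prompt(source_text: str, purpose: str,
                              audience: str, tone: str,
                              language: str = "Japanese") -> str:
    """Assemble instructions that ask an LLM to *write* the
    target-language content directly, not translate word by word."""
    return (
        f"You are a native {language} content writer.\n"
        f"Purpose of the content: {purpose}\n"
        f"Target audience: {audience}\n"
        f"Tone: {tone}\n\n"
        f"Using the English source below only as a brief, write the "
        f"{language} version directly. Preserve intent, product names, "
        f"and facts; do not translate sentence by sentence.\n\n"
        f"--- SOURCE ---\n{source_text}"
    )

# Hypothetical brief for the Toyota example discussed above.
prompt = build_localization_prompt(
    source_text="Toyota's Northern California Dealers Association and "
                "creative agency H/L launched a new kind of branded "
                "experience hosted by an AI-powered version of Brock Purdy.",
    purpose="announce a voice-driven fan experience",
    audience="Japanese automotive and tech readers",
    tone="professional but approachable",
)
```

The point of the structure: purpose, audience, and tone travel with every request, so key terms and voice stay consistent across pieces instead of drifting the way raw MT output does.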

How to decide

Does the reader need to understand and act on the content? If yes, use an LLM. If no — for example, compliance-driven text where completeness matters more than readability — MT may be sufficient, but only with expert human post-editing.

Do you care about the user experience? If users need to grasp intent, nuance, or tone, MT won't deliver.

What's the actual cost? Compare NMT plus expensive human post-editing against LLM plus light-touch QA. In many cases, the second option is faster, cheaper, and more reliable for user-facing content.
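To make that comparison concrete, here is a back-of-the-envelope cost model. Every rate below is an invented placeholder, not a market figure; swap in your own vendor quotes before drawing conclusions.

```python
WORDS = 10_000  # size of the content batch

# All per-word rates are illustrative assumptions, not real quotes.
nmt_cost_per_word = 0.001      # raw machine translation
post_edit_per_word = 0.08      # expert human post-editing of NMT output
llm_cost_per_word = 0.005      # LLM generation (API usage)
light_qa_per_word = 0.02       # light human review of LLM output

pipeline_a = WORDS * (nmt_cost_per_word + post_edit_per_word)  # NMT + PE
pipeline_b = WORDS * (llm_cost_per_word + light_qa_per_word)   # LLM + QA

print(f"NMT + post-editing: ${pipeline_a:,.2f}")
print(f"LLM + light QA:     ${pipeline_b:,.2f}")
```

Under these made-up numbers the LLM pipeline comes out at $250 versus $810, because expert post-editing dominates the NMT total. Your real rates will differ; the exercise is to check whether the expensive human step sits before or after the machine.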

Stop translating. Start writing.

If you're using NMT for high-stakes, user-facing content, you're not automating. You're just creating more cleanup work.

Maybe it's time to stop translating — and start writing.