I've watched three "this-will-replace-translators" waves: statistical machine translation in the 2000s, neural machine translation in 2016, and large language models since 2022. Each one has moved the line; none has erased it.
The question I keep getting asked is whether AI has finally closed the gap on cultural nuance in global marketing. It hasn't. Instead, AI has replaced the work that didn't need cultural judgement in the first place. That makes cultural expertise more valuable, not less, and getting it wrong more expensive than ever.
In July 2024, Apple released a film in its "Underdogs" series depicting Bangkok through a dated lens: sepia-graded streets, a rundown hotel, a chaotic cab ride. Thai viewers saw their country reduced to a quirky backdrop for the antics of westerners. The hashtag campaign escalated to parliament within days, with an MP calling on Thai consumers to switch away from Apple products. Apple pulled the film and apologised within a week, admitting it had failed to "fully capture the vibrancy of Thailand today."
In August 2025, Swatch ran an image of an Asian model in a pose evoking a racial stereotype, which went live across its international channels before anyone with local cultural knowledge had reviewed it. I calculated the financial damage to Swatch at approximately CHF 470 million, accounting for the market cap hit, lost Chinese sales, brand equity erosion and remediation costs.
Neither of these was an AI failure; both were human misjudgements. But by enabling more output across more markets with less oversight, AI raises the odds of cultural missteps like these happening at scale. And it's exactly these risks, and their potential financial damage, that nobody puts on the AI cost-saving slide.
For lower-context, repeatable content like internal documentation, newsletters, functional emails and product specs, AI translation is now strong and predictable, particularly in Latin-script languages. An Appen study published in 2026, testing seven leading language models across 15 language-locale combinations, confirmed what we see in practice at Freedman: fluency is no longer the problem. Cultural nuance still is.
What hasn't changed is the crucial work that senior transcreators do: market-entry brand voice, judgement calls on sensitive topics, the cultural sub-text that decides whether a campaign resonates, offends, or gets ignored. It is also exactly where AI fails, not because the models aren't improving, but because training data can't move at the speed of culture.
Cultural nuance in global marketing is more than just "AI got an idiom wrong."
Cultural nuance is what stops a wellness brand from using imagery that reads as aspirational in California and clinical in Seoul. It's the judgement call that tells you a Nordic consumer would find the word "exclusive" off-putting whereas an Italian consumer would respond positively. It's knowing that the humour anchoring your UK campaign reads as flippant in Germany, and the wry tone your US script relied on reads as overfamiliar in Japan.
These judgements require specialist knowledge: the expertise senior transcreators build over decades of experience in a specific market. Their read on a cultural shift that emerged last quarter, the dialect call that signals authenticity in one city and condescension in the next.
There's evidence that AI can't close this gap on its own. Models will keep improving and prompts will keep getting better, but human culture evolves faster than any training set.
At a recent roundtable we hosted with senior marketers from Monzo, Meta, Twitch and Experian, the Coca-Cola AI Christmas ad came up. Industry criticism was sharp but consumer testing was positive. A century of brand love fills in the gaps. Consumers saw the missteps but nostalgia overrode them.
A fast-growing brand entering its fourth or fifth market has no such reserve. The mistake Coca-Cola absorbs in one news cycle, an emerging brand pays for in trust, customer acquisition cost and, increasingly, regulatory exposure. This is not brand polishing: you're not paying to make the good campaigns slightly better; you're paying to stop the bad campaigns from happening at all.
The teams getting this right aren't defaulting to AI or defaulting to humans. They're asking better questions about what belongs where.
A working framework:
The mistake is treating this decision as static. Your answer shifts as you enter new markets, as brand equity accumulates, and as regulatory frameworks tighten. Even the question of where to use AI isn't something AI can answer for you.
Cultural nuance in global marketing is no longer a brand-polishing question. It's a risk-management one. AI has replaced the bottom end of localisation, and that's fine. What it hasn't replaced, and structurally can't, is the judgement that determines whether a campaign lands or damages a market you don't know well. For brands with a century of goodwill, cultural missteps are survivable. For scale-ups in their fourth or fifth market, they are not. Treat cultural expertise as insurance, not ornament, and budget accordingly.
Freedman's Global Campaign Health Check identifies the structural gaps before they cost you. Start here.
AI replicates patterns it has seen in training data. Cultural nuance lives in the patterns that aren't stable: shifts emerging this month, regional sensitivities that became charged last quarter, dialect choices that signal authenticity in one city and condescension in the next. Models will keep improving and prompts will keep getting better. What won't close is the gap between static training data and a culture that moves every month. MIT Sloan's 2025 research confirms that English-trained models default to individualistic, analytic outputs regardless of the target market. Fixing that requires retraining the models, not reprompting.
Use transcreation when establishing brand voice in a new market, operating in a regulated sector where errors carry legal exposure, entering a high-context culture where sub-text matters more than copy, or handling sensitive topics where tone-deaf output damages trust. Use AI translation for high-volume, low-stakes work where your brand voice is already established and the cost of an error is small. Most multi-market campaigns need both. The mistake is deciding once and applying everywhere, instead of making the call deliberately for each piece of work.
Three risks matter most. First, brand-voice drift: AI models aren't consistent across languages, so your brand sounds different in every market. Second, cultural blind spots: models default to the cultural assumptions of their training language, producing subtly Western output regardless of the target. Third, legal and regulatory exposure: the Moffatt v. Air Canada ruling in 2024 established that brands are liable for AI output, and the EU AI Act adds further compliance weight. None of these risks are new. AI has made them easier to trigger at scale.