Tags: Events

Twelve senior marketers. One table at the National Gallery. No slides.
In March 2026, we brought together VP, Director, and CMO-level marketers from brands including Monzo, Meta, Twitch, Experian and more, for a conversation about what AI in global marketing is actually doing to their work. Not the headlines or hype. What they are dealing with day to day.
The session ran under the Chatham House Rule. Nothing is attributed. But the tensions are real.
Five themes emerged. If you are running international campaigns right now, most of them will land.
If you are interested in joining our next roundtable, you can register here.
Theme 1: AI is generating the content. AI is consuming it. So who is it actually for?
One of the most striking moments in the room came from someone running marketing at a major consumer brand. Their team has trained an AI agent on ten years of brand content, campaigns, and tone of voice guidelines. Most content is now AI-generated first, with writers and editors refining it afterwards.
The uncomfortable observation: that content is being produced primarily to show up in agentic search. Which means AI is generating content for AI to surface to humans who may or may not ever actually read it.
The question they asked out loud, and nobody had a clean answer to, was: who is the audience? If AI generates the content and AI agents discover and serve it, at what point does the human disappear from the loop entirely?
It was not a rhetorical point. It has direct implications for how brands justify the investment in content at scale. Volume has always been rationalised by reach. But if the primary consumer of that content is an algorithm rather than a person, the entire logic of content marketing starts to shift underneath your feet.
This connects to a broader observation from the room: we are already in a world where AI is writing for AI, and the metrics we use to measure success were built for a different world entirely.
Theme 2: Being transparent about AI can backfire. And brands that stay quiet are outperforming.
The room heard something from the advertising industry that runs directly against received wisdom on AI and consumer trust.
Brands that have been as transparent as possible about their use of AI, labelling it, flagging it, being upfront with consumers, have in several documented cases faced immediate backlash. The mere mention of AI in a campaign context is enough to trigger negative consumer response, even when the work itself is strong.
Meanwhile, brands that have used AI without labelling it, staying true to their brand values and consumer expectations rather than announcing their process, have run genuinely effective campaigns.
The implication is uncomfortable: transparency about AI may actually reduce consumer trust rather than build it in certain contexts. Saying nothing and doing it well outperforms saying something and doing it well.
This creates a real strategic dilemma. The regulatory direction is toward more labelling, not less. UK and EU consultations on AI content disclosure are already underway. But the consumer data, at least right now, suggests that labelling can be a liability. The brands working through that tension now, before the frameworks are mandated, will be better placed than those who wait.
Theme 3: The default question has flipped. And it is making people uncomfortable.
A moment came up that resonated across the room. In more and more organisations, the question is no longer what should we use AI for. It is: why are you not using AI for this? Human involvement now has to be justified. That is a significant cultural shift, and an unsettling one, because the tools are not ready for everything yet.
The problem is what that framing does to the conversation, putting human judgement on trial rather than keeping it in the room. People start defending instincts they should not have to defend. AI gets applied in places where it is genuinely not the right tool, not because anyone decided it was, but because nobody felt they could push back.
The reframe that changed the dynamic came from someone running campaigns across 27 countries. When asked to justify a human-led workstream, their response was to flip the question: stop asking why AI cannot do it. Ask how far AI can take you, and then identify the specific point where human judgement has to come back in.
That question, how far can AI take us, is a better starting point. It keeps human expertise in the room as a deliberate choice rather than an obstacle. It creates space for honest answers about where AI adds genuine value and where it does not. The teams doing this well are not defaulting to AI or defaulting to humans. They are asking the better question, and building the confidence to answer it honestly.
Theme 4: The operating model has not caught up. That is where the risk lives.
Every organisation represented in the room has adopted AI tools. Not one has fundamentally changed how it works around them.
The bottleneck used to be budgets, skills, and leadership buy-in. Those have largely been solved. The constraint has moved inside the operating model itself, into governance, decision rights, and the question of who is actually checking what goes out the door.
Someone described it well: new tools. Old workflows. Faster production. Same quality controls. They called it "the messy middle." Everyone recognised it immediately.
The problem compounds in global organisations. Local teams and agencies are already using AI independently because the tools are accessible and the pressure is real. But when they do that without shared standards or central visibility, brand consistency erodes quietly. Not through one visible failure, but through hundreds of small compromises across markets. By the time it shows up, it is already everywhere.
AI does not create complexity. It amplifies whatever system sits underneath it. If the system is solid, AI makes it more effective. If the system has gaps, AI makes those gaps bigger, faster, and harder to spot. Governance is not a brake on AI adoption. It is what makes AI adoption safe to scale.
Theme 5: Everyone is producing more. Nobody is sure it is better.
One person in the room put it plainly: running a single Meta campaign across multiple markets means roughly a hundred assets a week to feed the algorithm. Multiply that by products, audiences, and countries. The volume is not a creative choice. It is a tax.
AI has made that volume achievable. More assets, more campaigns, more content, faster and at lower cost. Nobody in the room was dismissing that.
But a more uncomfortable question surfaced when people looked back honestly at their output: how much of this is actually useful? The productivity is real. The signal-to-noise ratio has not kept pace.
The skill most at risk is curation. Anyone can generate. Not everyone can decide what should exist, what serves the audience, and what is just filling space. As someone put it: we are in a sweet shop grabbing everything, without working out which sweets we need.
The teams navigating this well are not the ones producing the most. They are the ones being precise about what actually needs to be made, and what does not. That discipline is harder than it sounds when every platform is rewarding volume and every leadership team is asking why AI cannot just do more.
Key takeaway
AI in global marketing is not a technology problem. It is an organisational one. The brands that will perform best internationally are not the ones adopting AI fastest. They are the ones enabling global campaign delivery by building the governance frameworks, operating standards, and human oversight structures that make AI work properly at scale. Volume without those foundations does not produce better campaigns. It produces more of the same, faster, with the errors harder to spot.