The deliverables involved in international campaign production have skyrocketed. A single 30-market campaign now generates around 9.2 million possible variations: every combination of market, content type, channel, format, language and approval stage.
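The scale of that number comes from simple multiplication: every dimension multiplies against every other. A minimal sketch, using illustrative counts per dimension (assumptions for demonstration, not Freedman's actual figures), shows how 30 markets alone can push the combination space past nine million:

```python
from math import prod

# Illustrative counts per dimension (assumed, not real campaign data):
# every dimension multiplies against every other, so the space of
# possible deliverable variations grows multiplicatively.
dimensions = {
    "markets": 30,
    "content_types": 8,
    "channels": 10,
    "formats": 16,
    "languages": 40,   # some markets carry more than one language
    "approval_stages": 6,
}

total_variations = prod(dimensions.values())
print(f"{total_variations:,}")  # 9,216,000 with these assumed counts
```

The point is not the exact figure: adding one channel or one approval stage multiplies, rather than adds to, the total.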
Over 35 years of running international campaigns at Freedman, I can tell you that the key to scaling international campaign production without losing quality is not effort, it’s structure. The brands that hold quality at scale have redesigned their operating system to handle the volume.
For any organisation in growth mode, the challenge is building capability at pace to deliver a high volume of quality assets across rapid market expansion. Meta campaigns run to 1,000+ assets a year across markets, channels and formats. Brand campaigns for Fitbit hit around 500 assets. Klarna scaled from a handful of core EMEA markets to 25+ regions in two years. EA Games ships well over 10,000 assets a year across 34 markets.
At these volume levels, quality failures are costly. In my experience, these are the three that hit hardest:
The assets run and the media spend goes out, but the work was not built for the market receiving it. Brand equity erodes unevenly across regions, campaign effectiveness drops, and the gap between what the brand means at HQ and what it means in-market widens with every campaign cycle.
Quality failures caught late mean corrections, and corrections mean delays. Campaigns that should have launched in sync across markets go out staggered, or not at all in some territories. The cost: wasted media placements, sales opportunities missed because launches fall out of sync with retail campaigns, and brand awareness that never compounds across markets.
A campaign that has not been cleared for the legal and regulatory requirements of a specific market is a liability. At best it gets pulled. At worst it attracts a fine, a public correction, or a regulatory investigation.
The root cause is always the same: lack of structure. When the operating system was not designed for the volume, things fall through the gaps. Here are some of the ways I’ve seen this show up across global campaigns of every size.
The master brief gets built for the lead market and retrofitted for fifteen others. The assumptions baked into the original (casting, cultural references, regulatory requirements) do not travel. By the time anyone flags it, the master is locked.
Creative gets signed off centrally before anyone has checked whether it translates, clears regulation in priority markets, or reads correctly outside its country of origin. The cost of changing it at that stage is often the reason it does not change.
When in-market cultural review happens too late in the chain, or not at all, the question of whether the work feels right in market only gets answered after the campaign runs.
It is not unusual to find a regional manager reviewing content in a language they do not read fluently, for a market they cover but do not live in. That is not a failure of the individual but a structural gap presenting itself as a people problem.
Multiple versions of assets circulate. The wrong file gets used in-market. Nobody finds out until it is live.
AI handles volume and repetition well. It does not know whether a joke lands, whether a visual reference carries the right associations, or whether a financial services claim will clear compliance. When it is asked to do that work, the gaps show up in-market.
Without a structured debrief, the same problems repeat. The evidence is there in every campaign cycle. Most organisations do not use it.
Every campaign is different. The markets, the creative, the stakeholders, the regulatory landscape, the timeline. There is no single playbook that maps cleanly onto every scenario. But there is a set of structural conditions that have to be in place for quality to hold as volume grows. The brands that scale without losing quality treat their operating system as something to be deliberately designed, not something that accumulates by default. The conditions that follow are baked into Freedman’s operating system.
Quality is largely determined by the decisions made before the brief is locked.
Does the concept travel? Before the master is signed off, test whether the creative works across your priority markets. Does it translate? Does the casting read correctly outside the country of origin? Does the tone land in markets with a different cultural register?
Does it clear regulation in the markets you are running in? Regulatory requirements vary significantly across territories. Financial services, healthcare, food and drink, alcohol: each carries specific rules that differ by market. Map the exposure before production starts, not during legal review.
What are the asset specifications for every local market, channel, and format? Adaptation cannot begin properly if the specifications are not agreed upfront. Version proliferation and incorrect assets are almost always traceable to this decision being deferred.
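One way to make that decision concrete is to treat the specifications as a matrix agreed before adaptation begins, where producing an asset for an unagreed combination fails loudly. A minimal sketch, with illustrative fields and values (assumed, not any real brand's specs):

```python
from dataclasses import dataclass

# A minimal sketch of an upfront asset-specification matrix.
# Field names, markets, and values are illustrative assumptions.

@dataclass(frozen=True)
class AssetSpec:
    market: str
    channel: str
    fmt: str           # aspect ratio, e.g. "9x16" or "16x9"
    language: str
    max_duration_s: int

# Agreed upfront: one entry per market/channel/format combination.
SPEC_MATRIX = {
    ("DE", "social", "9x16"): AssetSpec("DE", "social", "9x16", "de-DE", 15),
    ("DE", "olv", "16x9"):    AssetSpec("DE", "olv", "16x9", "de-DE", 30),
    ("FR", "social", "9x16"): AssetSpec("FR", "social", "9x16", "fr-FR", 15),
}

def lookup_spec(market: str, channel: str, fmt: str) -> AssetSpec:
    """Fail loudly when production is attempted for an unagreed combination."""
    key = (market, channel, fmt)
    if key not in SPEC_MATRIX:
        raise KeyError(f"No agreed spec for {key}: agree it before producing")
    return SPEC_MATRIX[key]

print(lookup_spec("DE", "social", "9x16").language)  # de-DE
```

The design point is the failure mode: version proliferation usually starts when an unagreed combination gets produced anyway, so the matrix should refuse rather than guess.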
Is the brief built for multiple markets from the start? A brief written for one market and retrofitted for fifteen carries the assumptions of the lead market into every adaptation.
This is the structural layer: decision rights, workflows, escalation paths, file standards, approval routing. The unglamorous plumbing of getting work through 20 markets at pace without quality falling through the gaps.
In practice this means a single delivery lead with authority across the full chain. Clear approval routing across markets, so quality issues surface and get resolved before they compound. Translation memories and brand voice documentation that enforce consistency and reduce the risk of brand drift. A real-time dashboard so the central team can see where every asset is without chasing emails.
The aim is for governance to travel with the work, not wait for the work to come back to a central reviewer.
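The timeline case for parallel approval routing is simple arithmetic. A rough sketch, using assumed review windows (illustrative numbers, not real campaign data):

```python
# Illustrative arithmetic under assumed review times: why routing markets
# in parallel rather than sequentially changes launch timelines.
markets = 20
review_days = 3          # assumed in-market review window
escalation_rounds = 1    # assume one round of issues needs resolving

# Sequential: each market waits for the previous one to finish.
sequential_days = markets * review_days * (1 + escalation_rounds)

# Parallel: all markets review at once; only the rounds stack.
parallel_days = review_days * (1 + escalation_rounds)

print(sequential_days, parallel_days)  # 120 vs 6 calendar days
```

Sequential routing scales with the number of markets; parallel routing scales only with the number of review rounds, which is what the infrastructure above is built to keep low.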
This is also where the technology question gets answered honestly. AI and automation handle volume, repetition, version control, and first-pass translation well. They do not handle cultural nuance, regulated content judgement, or hero creative. The brands that get the most from technology are the ones that are clear about where it earns its place and where it does not.
In our experience, cultural review earns its place at two points in the chain: early, when the concept is being tested for whether it travels across your markets, and again after adaptation, before the work reaches the regional approver or client for sign-off.
Translation handles vocabulary and linguistic QA handles accuracy, but neither tells you whether the work actually feels right in the market, and that gap is where cultural quality failures live. Closing it requires people who live in the market and know the brand reviewing assets at both stages. Not as a check on the linguists, but as a separate read on whether the work meets the standard the market requires.
This means regional approvers see work that has already been pressure-tested against local context. Their feedback narrows from “this is not right” to “I would prefer this phrasing,” which changes both the speed and the temperature of the whole approval process, because the quality issues have already been caught.
The fix is not to ask regional managers to do more. It is to put cultural depth into the production chain itself, so the work that reaches them has already been read by someone who could catch what they cannot.
Every campaign cycle generates evidence about where quality held and where it did not. What landed well in-market, what consistently needed correcting, which markets needed earlier involvement to avoid quality issues downstream, which formats kept producing work that missed the mark.
Teams that improve year on year take that evidence seriously. A structured washup after each campaign feeds directly into updating the translation memory, training AI models to catch recurring problems, sharpening the brief template, and carrying specific quality fixes into the next planning cycle. Treat it as a planning session rather than a formality, and the quality problems that showed up this cycle stop showing up in the next one.
From 2022 to 2024, Klarna expanded EMEA campaign delivery from a handful of core markets to 25+ regions. The pace of that growth meant the localisation operation supporting the campaigns had to mature quickly to keep up.
Freedman took ownership of global campaign delivery across 8 EMEA markets, working as the centrally accountable team between Klarna’s HQ marketing teams, in-house writers, regional marketers, legal stakeholders and external production partners. Local market input was integrated earlier into campaign development. Translation memories and glossaries reduced duplicated work. Brand validation and financial sector clearance ran in parallel with Klarna’s legal teams, reducing regulatory rejections. Asset production across TV, OLV, cinema and social ran end-to-end through us.
Brand voice consistency across EMEA improved. Duplication dropped. Regulatory rejections fell. Internal Klarna teams were freed up to focus on specialist product copy. As Klarna’s Production Lead at the Global Brand and Creative Hub put it: “Since working with Freedman, we’ve seen a 180 degree turnaround.”
The reason it worked is not complicated. The operating system was rebuilt to handle the volume Klarna was actually doing.
When the operating system was not designed for volume, international campaign quality slips in predictable ways. Campaigns that do not work in-market. Missed media dates. Regulatory failures. The causes are consistent too: briefs written for one market, concepts approved before they have been tested, approvals that run slower than production, cultural review that happens too late or not at all.
The brands that hold quality as they scale do four things. They front-load the decisions that determine quality before the brief is locked. They build infrastructure that moves at the speed of production. They embed cultural review at the points that matter. And they make every campaign teach them something. None of those four things is hard on its own. The work is in keeping all four in step as the volume grows.
Freedman’s Global Campaign Health Check identifies the structural gaps before they cost you campaign time and budget. Start here.
Most global campaigns fail at execution because the operating system was built for a smaller scale than the campaign now demands. Local market input is not built into the brief. Approval infrastructure runs slower than production. Quality control defaults to people who do not have the linguistic or cultural depth needed for every market they cover. And there is no structured way of feeding what went wrong into the next campaign cycle. The volume scales but the operating system does not, and quality drops in the gap between the two.
Brand consistency at scale requires four things working in step. The decisive work done upfront, with cultural and regulatory input into the brief before creative is locked. Workflows and approval infrastructure built to move at the pace of production. Cultural review built into the production process itself, so assets are pressure-tested against local context before they reach the client team. And a structured washup after each campaign to feed learnings into the next one. Consistency is achieved through clearer infrastructure and unambiguous accountability at each stage, not by adding more approval layers.
Good governance means clear accountability at each stage of production, not just at sign-off. A single delivery lead with authority across global, regional and local stakeholders. Decision rights documented and understood. Approval routing that runs markets in parallel rather than sequentially. And quality failures from one campaign tracked and used to improve the next. The aim is for governance to travel with the work, not for the work to wait for governance.
A global campaign operating system needs four things working in step. Decisions about concept viability, regulatory exposure, and asset specifications made before the brief is locked. Infrastructure that moves at the speed of production: a single delivery lead, parallel approval routing, translation memories, and real-time asset tracking. Cultural review embedded at two points in the chain, when the concept is being tested for travel, and again after adaptation before client sign-off. And a structured washup after each campaign that feeds specific improvements into the next cycle.