
October 28, 2025
Creative testing is one of the most misunderstood parts of Meta advertising.
Everyone knows they should be testing, but few do it in a way that produces results they can trust. The difference between scaling profitably and wasting budget often comes down to whether your structure creates clarity, or just noise. What is the best way to handle creative testing on Meta to get meaningful results?
A key principle most brands miss: every ad set is a theory. You have to know why you are testing the ads within it. From there, build 3-6 variations to further “cover” the theory, giving Meta multiple options to allocate spend across.
That “theory” or “angle” can take many forms.
Without a clear hypothesis, you’re just throwing assets into the machine and hoping Meta tells you the answer. With a clear angle, every dollar spent contributes to insights, even if the ads “lose.”
Deciding your testing structure isn’t about testing more or spending less. It’s about using the right account structure for your level of conviction and budget.
Use ABO when you have strong conviction in an offer.
That level of conviction usually means you’ve generated proven winners in a scaling campaign (or have historical data on a similar, winning offer) and can afford to give each ad set meaningful daily spend (roughly 1-2x your target CPA or AOV per day). ABO is more management-heavy, but it gives precise signals and the ability to scale aggressively.
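To make the budget math concrete, here is a minimal sketch of the 1-2x sizing rule. The $60 target CPA and the four concurrent ad sets are hypothetical assumptions for illustration, not recommendations:

```python
# Rough sketch of the 1-2x CPA/AOV sizing rule for ABO testing.
# The numbers are hypothetical; plug in your own targets.

target_cpa = 60          # target CPA (or AOV), in dollars
ad_sets_in_test = 4      # number of concurrent ad set "theories"

min_per_ad_set = 1 * target_cpa   # lower bound: 1x per day
max_per_ad_set = 2 * target_cpa   # upper bound: 2x per day

print(f"Per ad set: ${min_per_ad_set}-${max_per_ad_set}/day")
print(f"Total testing spend: ${min_per_ad_set * ad_sets_in_test}-"
      f"${max_per_ad_set * ad_sets_in_test}/day")
```

With those placeholder numbers, each ad set needs $60-$120 per day, or $240-$480 per day across the whole test; if that total isn’t sustainable, ABO is probably the wrong structure for now.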
Ways to Scale: Winners can be scaled directly in their original ad set by increasing the daily budget, or duplicated into a scaling campaign while you simultaneously increase the original test budgets.
Use CBO/ASC when conviction or budget isn’t there yet.
If you don’t have multiple winners or can’t allocate 1-2x CPA/AOV daily to each ad set, CBO/ASC is the better option. Meta’s algorithm distributes spend efficiently across all ad sets, and minimum spend can be set at the ad set level to prevent new ads from being buried by historical ones.
Ways to Scale:
From there, the goal should be increasing scaling budgets until they reach about 80% of total spend (roughly 4x the testing budget). Then scale both in tandem each day that performance allows, holding as close as you can to an 80/20 split between scaling and testing.
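As a quick sanity check on the ratio (an 80/20 split is the same as scaling spend running about 4x testing spend), here is a minimal sketch with a hypothetical $1,000/day account:

```python
# Illustrative only: how an 80/20 scaling/testing split maps onto total spend.
# The $1,000/day account size is a hypothetical assumption.

total_daily_budget = 1000
scaling_share = 0.80                 # ~80% of spend to scaling

scaling_budget = total_daily_budget * scaling_share        # $800/day
testing_budget = total_daily_budget - scaling_budget       # $200/day

print(f"Scaling: ${scaling_budget:.0f}/day | Testing: ${testing_budget:.0f}/day")
print(f"Scaling runs at {scaling_budget / testing_budget:.0f}x the testing budget")
```

The point of holding the ratio is that every increase to the testing budget earns a proportional increase to scaling, so growth stays tied to proven winners rather than untested spend.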
Even the best structure falls apart without discipline. “Kill rules” take emotion out of the process and protect efficiency.
Start each day by sorting all active ads in your testing campaign by amount spent, then make decisions based on the last 7 days (or a “Maximum” lookback if you are aggressively testing a high volume of concepts).
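The specific thresholds belong to your own kill rules, but the daily workflow itself (sort by spend, evaluate over the lookback window, decide) is easy to sketch. Everything below is illustrative: the ad names, spend figures, and the “2x target CPA with zero purchases” cutoff are made-up placeholders, not rules from this article.

```python
# Sketch of a daily kill-rule pass over a testing campaign.
# The ad data and the "2x target CPA with zero purchases" rule are
# hypothetical placeholders; pull real numbers from Ads Manager and
# substitute whatever kill rules fit your margins.

TARGET_CPA = 60

ads = [
    # spend and purchases over the last 7 days (or "Maximum" lookback)
    {"name": "hook-A / ugc", "spend": 140, "purchases": 0},
    {"name": "hook-B / static", "spend": 95, "purchases": 2},
    {"name": "hook-C / ugc", "spend": 30, "purchases": 0},
]

# Step 1: sort active ads by amount spent, highest first.
ads.sort(key=lambda ad: ad["spend"], reverse=True)

# Step 2: apply the kill rule to each ad.
for ad in ads:
    cut = ad["spend"] >= 2 * TARGET_CPA and ad["purchases"] == 0
    decision = "cut" if cut else "keep"
    print(f"{ad['name']}: ${ad['spend']} spent, {ad['purchases']} purchases -> {decision}")
```

Codifying the pass like this is the whole point of kill rules: the decision is made by the threshold, not by how you feel about the creative that morning.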
Testing doesn’t end with the decision to scale or cut; the learnings matter just as much.
Each losing creative should also be studied and documented so you avoid the same pitfalls in the future. However, not every “loss” will mean the same thing for every campaign.
Iterations keep performance steady; net-new keeps the creative pipeline fresh with new angles, personas, and formats.
The best way to test creatives on Meta is to treat it as a system. Every ad set should start with a clear hypothesis. Use ABO when you have strong conviction in an offer and can give each test a meaningful budget. Use CBO/ASC when you’re earlier in the journey or need the algorithm to allocate spend across a large volume of available creatives. Enforce kill rules, document results, and keep a disciplined balance of iterations and net-new concepts.
Do this consistently, and testing becomes a system that shows you exactly what to scale, what to cut, and how to grow your daily budgets profitably.
Most brands struggle with creative testing, but the ones who master it treat every ad set as a theory to be proven or disproven.
Creative testing is one of the main levers our agency focuses on with clients, and we’d be happy to help you hone your strategy.