Asset Group Split-Test Framework
When to split, when to consolidate, and when to ignore the Google rep.
Who this is for
- PMax buyers between $20K and $300K/mo
- Operators arguing with a Google rep about structure
- In-house teams formalising a build process
Why this exists
Asset group splitting is the single most-debated PMax decision. Splits that look smart on paper starve the algorithm of learning data; consolidations that feel safe trap unrelated intent in the same bucket. This framework is the deterministic answer.
Read this first
The wrong split looks smart on paper because the audiences seem distinct, but in practice each new asset group needs its own conversion volume to escape Smart Bidding's learning phase. Split too aggressively and every group is starved. Consolidate too far and unrelated intent (a $20 accessory and a $400 hero SKU) shares a bidding signal that suits neither. The decision is structural, not aesthetic, and this framework is how we make it the same way every time.
Run the split decision
Score signal density per candidate split
Signal density is conversions per asset group per 14-day rolling window. Below 30, Smart Bidding is in extended learning and a split makes the problem worse. 30-100 is workable. 100+ is where structure pays off because the algorithm can actually differentiate within an asset group. If a candidate split would drop either side below 30, don't split.
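For teams that want this check scripted, here's a minimal sketch. The 30 and 100 cut-offs are the ones stated above; the in-between 0-3 banding, the function names, and the input names are our own illustrative assumptions, not anything Google defines.

```python
def score_density(conversions_14d: int) -> int:
    """0-3 signal-density score from rolling 14-day conversions.
    The 30 and 100 cut-offs come from the framework above; the
    in-between banding is an illustrative assumption."""
    if conversions_14d < 30:
        return 0   # extended learning: a split makes the problem worse
    if conversions_14d < 65:
        return 1   # workable, lower half of the 30-100 band
    if conversions_14d < 100:
        return 2   # workable, upper half of the 30-100 band
    return 3       # 100+: structure pays off


def split_viable(side_a_14d: int, side_b_14d: int) -> bool:
    """Hard stop: both sides of a candidate split must clear 30 conversions / 14 days."""
    return min(side_a_14d, side_b_14d) >= 30
```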
Score learning history
How long has the existing asset group been running with stable spend? Under 4 weeks means it's still learning; splitting resets the learning phase for both halves. 4-12 weeks is the productive window for splits. Over 12 weeks of stable performance is when you split for clarity, not for performance lift.
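Same idea for the history dimension, continuing the sketch above. The 4- and 12-week boundaries are the framework's; the exact 0-3 mapping is an assumption we'd tune per account.

```python
def score_history(stable_spend_weeks: int) -> int:
    """0-3 learning-history score from weeks of stable spend.
    The 4- and 12-week boundaries come from the framework; the
    exact 0-3 mapping is an illustrative assumption."""
    if stable_spend_weeks < 4:
        return 0   # still learning: a split resets the phase for both halves
    if stable_spend_weeks <= 12:
        return 3   # the productive window for performance-driven splits
    return 2       # 12+ weeks of stability: split for clarity, not for lift
```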
Score intent purity
Intent purity is whether the products in the asset group serve a single search-intent cluster. Mixed brand + non-brand + competitor intent in one group dilutes the audience signal and the bidding decision. If the candidate split aligns the asset group around one cluster (brand-led, accessory-led, hero-SKU-led, refill-led), purity rises and the split earns its keep.
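Purity is a judgment call, so any scoring code is only a rough proxy. This sketch assumes you count how many distinct intent clusters a candidate group would contain; the 0-3 mapping is our assumption.

```python
def score_purity(intent_clusters_in_group: int) -> int:
    """0-3 intent-purity score. The input is how many distinct intent
    clusters (brand, accessory, hero-SKU, refill, ...) a candidate
    group would contain; the mapping itself is an assumption."""
    if intent_clusters_in_group <= 1:
        return 3   # one cluster per group: the split earns its keep
    if intent_clusters_in_group == 2:
        return 1   # partial overlap: audience signal still diluted
    return 0       # mixed brand + non-brand + competitor intent
```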
Apply the rule of three across signals
Sum the three scores: signal density, learning history, intent purity. Each on a 0-3 scale. Total under 5 means consolidate. 5-7 means hold position. 8-9 means split, with a written hypothesis for why each new group will outperform the parent.
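The rule of three reduces to a few lines; this sketch only encodes the thresholds stated above and assumes the scoring helpers from the earlier sketches.

```python
def split_decision(density: int, history: int, purity: int) -> str:
    """Rule of three: sum the 0-3 scores and map the total to an outcome."""
    total = density + history + purity
    if total < 5:
        return "consolidate"
    if total <= 7:
        return "hold"   # hold position, revisit with a written hypothesis
    return "split"      # 8-9: split, with a hypothesis for each new group

# Example: split_decision(score_density(120), score_history(9), score_purity(1)) -> "split"
```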
Five split patterns that consistently outperform consolidated asset groups
These are the structures we deploy on the engagements that compound: the patterns that keep working.
| Split pattern | When to deploy | Expected lift |
|---|---|---|
| Hero SKUs vs catalog tail | Hero set has 5-30 products with 3x+ the per-SKU conversion rate of the rest of the catalog. | 10-25% blended ROAS lift. Hero group bids efficiently, tail group runs at lower target. |
| Always-on vs seasonal vs clearance | Catalog has clear seasonality buckets that swap quarterly. | Stops the algorithm from chasing dying-season inventory at the wrong target. |
| Margin tier (high / mid / low) | Contribution margin per unit varies meaningfully across the catalog (over 20 percentage points). | POAS targets can be set per group, surfacing margin discipline at the asset-group level. |
| Brand vs non-brand vs accessory | Account has both branded and unbranded discovery flows in the same campaign. | Brand exclusions enforce isolation. Non-brand group bids against incremental traffic only. |
| Geo-tier (primary market vs expansion) | Multi-country brand with one mature market and 1-3 expansion markets. | Avoids the expansion market eating budget from the proven-economics market. |
Decision flowchart in plain text
Walk this top to bottom for every candidate split. The framework returns one of three outcomes: split, hold, consolidate.
1. Signal density check
   IF candidate split would drop either side below 30 conversions / 14-day window
   OUTCOME = consolidate, do not split
2. Learning history check
   IF existing asset group has stable spend < 4 weeks
   OUTCOME = hold, revisit at week 5
3. Intent purity check
   IF the new groups would each align around a single intent cluster (hero, accessory, brand, etc.)
   intent_purity = high
   ELSE intent_purity = low
4. Score
   density (0-3) based on conversions per 14-day window
   history (0-3) based on stable-spend weeks
   purity (0-3) based on intent alignment
   total = density + history + purity
5. Decision
   total < 5    OUTCOME = consolidate
   total 5-7    OUTCOME = hold + write a hypothesis
   total 8-9    OUTCOME = split + write the hypothesis
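If you'd rather run the flowchart as a script, here's a sketch that assumes the scoring helpers from the sections above and, since the framework doesn't specify which side to score, scores density on the weaker side of the candidate split (our assumption).

```python
def run_split_framework(side_a_14d: int, side_b_14d: int,
                        stable_spend_weeks: int, clusters_in_group: int) -> str:
    """Walk the flowchart top to bottom. Thresholds are the framework's;
    helper names and the weaker-side density choice are our assumptions."""
    # 1. Signal density check: both sides must clear 30 conversions / 14 days
    if not split_viable(side_a_14d, side_b_14d):
        return "consolidate"
    # 2. Learning history check: under 4 weeks of stable spend, hold
    if stable_spend_weeks < 4:
        return "hold (revisit at week 5)"
    # 3-4. Score density on the weaker side, then history and purity
    total = (score_density(min(side_a_14d, side_b_14d))
             + score_history(stable_spend_weeks)
             + score_purity(clusters_in_group))
    # 5. Decision
    if total < 5:
        return "consolidate"
    if total <= 7:
        return "hold + write a hypothesis"
    return "split + write the hypothesis"
```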
Consolidation triggers (when splitting hurt and you need to roll back)
- One side of the split has below 30 conversions in any 14-day window post-split
- Smart Bidding is showing 'limited by data' on either group three weeks after launch
- Blended ROAS dropped 10%+ post-split and stayed there past week 3
- Asset rotation is uneven: one group's assets are getting 4x+ the impressions of the other
- Search-themes drift report shows the same theme firing on both groups
- Margin lens collapses post-split: one group runs at 0.6x POAS while the other runs at 1.8x
- Audience signal density on either group is below the threshold (no customer match, no custom segment)
- Operations cost: the team can't keep up with creative refresh on both groups, so one stagnates
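A sketch of how the quantitative triggers could be checked on a weekly schedule. Every field name on the snapshot type is illustrative, the POAS-spread threshold is our own reading of the 0.6x vs 1.8x example, and the qualitative triggers (audience signal coverage, creative-refresh capacity) stay manual.

```python
from dataclasses import dataclass

@dataclass
class PostSplitSnapshot:
    """Weekly metrics for a split pair. All field names are illustrative."""
    min_side_conversions_14d: int   # weaker side's rolling 14-day conversions
    limited_by_data: bool           # 'limited by data' on either group at week 3+
    blended_roas_delta_pct: float   # blended ROAS vs pre-split baseline, past week 3
    impression_ratio: float         # larger group's impressions / smaller group's
    shared_search_theme: bool       # same search theme firing on both groups
    poas_spread: float              # high-group POAS minus low-group POAS

def should_roll_back(s: PostSplitSnapshot) -> bool:
    """True if any of the quantitative consolidation triggers above fires."""
    return (s.min_side_conversions_14d < 30
            or s.limited_by_data
            or s.blended_roas_delta_pct <= -10
            or s.impression_ratio >= 4
            or s.shared_search_theme
            or s.poas_spread >= 1.0)   # e.g. 0.6x vs 1.8x POAS across the pair
```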
The Google rep arguments to ignore (and the two to take seriously)
Reps tend to recommend consolidation. The pitch is usually 'PMax learns better with more conversion volume in one group'. That's true at the margin but it ignores intent purity entirely. A consolidated asset group with mixed brand + accessory + hero converts well in aggregate and badly per cluster, which means the campaign is propped up by the easiest-to-acquire conversions and starves the harder ones. Ignore.
The two arguments worth taking seriously: first, 'your asset group has under 30 conversions in 14 days', which is the signal-density threshold this framework already enforces. If the rep flags that one, listen. Second, 'your asset rotation is below 80% of the recommended ad strength', which means the algorithm doesn't have enough creative to test against the audience signal and the bidding suffers regardless of structure. Listen and ship more creative; the structure work doesn't compound on a starved creative library.
Everything else (Final URL expansion suggestions, audience signal expansion to the broadest in-market segments, automatic asset suggestions for cross-listing) is rep-recommended optimisation that improves Google's reported performance, not yours. Decline politely.
What good looks like after the framework runs
Every asset group has scored 8+/9 on the split rubric or sits intentionally consolidated with a documented reason. Each group runs above the 30-conversion threshold per 14-day window. Asset rotation is balanced. Smart Bidding is out of learning on every group. The team can name the hypothesis behind every existing split and point to the data that proves it.
External resources
Authoritative references we link to alongside the template. Read them before running the audit.
- Google Ads, asset groups in Performance Max. Authoritative spec on asset group structure and learning behaviour.
- Google Ads, Smart Bidding learning phase. Reference for the 30-conversion threshold the framework enforces.
- Google Ads, ad strength inside Performance Max. What 'good' creative coverage looks like at the asset-group level.
- Google Skillshop, Performance Max certification. Free training. Worth running through before defending a split decision.
- Search Engine Journal, Performance Max coverage. Topic index for ongoing PMax structure debates and case studies.
Want this run for you?
