Bidet Australia

- Blended ROAS: 4.2x
- Wasted Spend: Lower
- Budget Allocation: Clearer
- Monthly Stack Cost: 6 Months
Confidentiality Note
Due to client confidentiality, we are not sharing ad account screenshots or revenue-level platform data for this project. However, the setup, attribution problem, operating changes, and performance shift shared below reflect the actual work and outcomes.
Overview
A lot of paid accounts do not look broken at first.
That is exactly why they become dangerous.
The dashboards look healthy enough to keep spending. Meta shows conversions. Google shows conversions. Retargeting appears efficient. Brand search looks strong. The reported ROAS is not bad, so the team keeps moving forward. And yet, beneath all that, something still feels off. Scaling decisions feel shakier than they should. The account looks profitable on paper, but the confidence behind those numbers is thinner than anyone wants to admit.
That was the real situation here.
This project was for a leading Australia-based premium bidet DTC brand. The business was around 18 months old, spending roughly $5,000 to $7,000 per month across Meta Ads and Google Ads, and reporting a blended ROAS of 2.21x. On the surface, that does not sound terrible. It sounds like the kind of account many founders would keep feeding, hoping the next round of creative tests or budget changes would unlock the next stage of growth.
But that was not the actual problem.
The real issue was that the account was being judged through a reporting view that was flattering in some places, misleading in others, and not clean enough to make serious scaling decisions with confidence.
The Challenge
The challenge here was not dramatic underperformance. It was distorted clarity.
And honestly, that is harder to catch.
If an account is obviously failing, everyone sees it. Spend is wasted, sales are weak, and the team knows something major has to change. But when an account is sitting in the “fine enough” zone, it often gets protected by its own dashboards. People hesitate to question the structure because the numbers do not look broken enough to trigger urgency.
That was the trap.
The brand was already scaling. Both platforms appeared to be contributing. Reported returns made the account look workable. But the deeper question was not, “Are we getting results?” It was, “Are we reading these results correctly enough to scale the right things?”
That is a very different question.
Because if the answer is no, then the business starts rewarding the wrong campaigns, overtrusting the wrong metrics, and slowly building a paid growth system around a version of reality that is incomplete.
That is expensive.
What We Found
Once we got into the account properly, it became clear that this was not really an ad problem first. It was a credit-assignment problem hiding inside the ad account.
There were six quiet issues shaping the whole system.
The first was classic last-click bias. Both Meta and Google were often claiming more credit than they truly deserved, and that made the final touchpoint look more important than the actual journey.
The second was brand search inflation. Users were often seeing the brand through Meta first, then later searching the brand name on Google before converting. That made Google’s reported performance look cleaner and stronger than it really was.
The third was retargeting inflation. Retargeting looked like a hero because it was catching already-warm demand, not because it was necessarily creating fresh demand.
The fourth was that each platform was being treated too much like its own source of truth. Meta had its story. Google had its story. Both looked logical in-platform, but neither should have been the final basis for budget allocation.
The fifth issue was that the funnel stages were not separated clearly enough. Prospecting, retargeting, brand, and direct were blending into one messy performance view. Once that happens, it becomes much harder to understand what is truly driving growth and what is simply collecting credit at the end.
And the sixth issue was the most dangerous one: awareness-driving campaigns were at risk of being undervalued. Campaigns doing the hard work at the top of the funnel can look weaker than they really are when the reporting model is distorted. Once that happens, teams start pausing or reducing the exact campaigns feeding the rest of the system.
That is how scaling quietly breaks.
What We Changed
1. We changed the reporting source of truth
The first step was shifting the account to a cleaner last-non-brand-click view.
That mattered because it reduced the artificial inflation caused by brand search and made it easier to see which paid activity was actually helping create demand rather than just catching it at the end. We did not need a prettier dashboard. We needed a more reliable operating lens.
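As a rough illustration of what a last-non-brand-click view does differently, here is a minimal sketch. The channel names and the `Touch` structure are hypothetical, not the client's actual data model; the point is only that brand-capture touches are skipped when assigning credit.

```python
from dataclasses import dataclass

@dataclass
class Touch:
    channel: str    # e.g. "meta_prospecting", "google_brand" (illustrative names)
    is_brand: bool  # brand search or direct-navigation touches capture, not create, demand

def last_non_brand_click(journey: list[Touch]) -> str:
    """Credit the last touch that is NOT a brand-capture touch.

    Falls back to the true last touch if the whole journey
    consists of brand/direct touches (nothing else to credit).
    """
    for touch in reversed(journey):
        if not touch.is_brand:
            return touch.channel
    return journey[-1].channel

# The pattern this brand kept seeing: Meta creates the demand,
# a Google brand search collects the click at the end.
journey = [
    Touch("meta_prospecting", is_brand=False),
    Touch("google_brand", is_brand=True),
]
print(last_non_brand_click(journey))  # meta_prospecting
```

Under plain last-click, the same journey would credit `google_brand`, which is exactly the inflation described above.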
2. We separated the funnel properly
Next, we stopped treating the account like one blended machine and separated it into the stages that actually mattered:
- Prospecting
- Retargeting
- Brand
- Direct
That immediately changed the quality of the conversation.
Instead of asking vague questions like “Which campaign is working best?”, the team could start asking more useful questions like “Which stage is doing which job, and where is the real contribution happening?”
That is where paid growth starts becoming more intelligent.
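In practice, stage separation often starts with something as simple as classifying campaigns consistently. The naming convention below is an assumed example, not the client's actual setup, but it shows the idea: every campaign maps to exactly one stage, so reporting can be grouped by stage rather than blended.

```python
# Hypothetical naming convention: the stage keyword appears in the campaign name.
STAGE_KEYWORDS = {
    "prospecting": "Prospecting",
    "retargeting": "Retargeting",
    "brand": "Brand",
}

def funnel_stage(campaign_name: str) -> str:
    """Map a campaign to its funnel stage by keyword match.

    Anything that does not match is flagged for manual review rather
    than silently blended into the performance view.
    """
    name = campaign_name.lower()
    for keyword, stage in STAGE_KEYWORDS.items():
        if keyword in name:
            return stage
    return "Unclassified"

print(funnel_stage("AU_Meta_Prospecting_Video"))   # Prospecting
print(funnel_stage("AU_Search_Brand_Exact"))       # Brand
```

Direct traffic has no campaign name at all, so it becomes its own bucket in reporting rather than a keyword match.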
3. We introduced a more useful decision model
We did not go down the path of an overbuilt multi-touch framework.
Instead, we used simple weighted logic that gave more credit to the first paid touchpoint within a 90-day window, and we shifted decision-making toward incremental contribution rather than flattering platform-reported ROAS.
That sounds technical, but the actual impact was practical.
The team stopped treating Meta’s and Google’s native ROAS columns as the final basis for scaling decisions. Those numbers were still observed, but they were no longer the authority. The account started being managed based on what was actually moving the system when campaigns were increased, reduced, or turned off, not just what each platform wanted to claim.
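The weighted logic can be sketched roughly as follows. The 60/40 split, the channel names, and the journey format are illustrative assumptions; only the shape of the rule (extra credit to the first paid touch inside a 90-day window, the remainder spread across later paid touches) comes from the approach described above.

```python
from datetime import datetime, timedelta

def weighted_credit(journey, conversion_time, first_touch_weight=0.6, window_days=90):
    """Split one conversion's credit across paid touchpoints.

    `journey` is a list of (channel, timestamp) pairs, oldest first.
    The first paid touch inside the lookback window gets a fixed share;
    the remainder is spread evenly over the later touches.
    Weights are illustrative, not the client's actual values.
    """
    window_start = conversion_time - timedelta(days=window_days)
    paid = [(ch, ts) for ch, ts in journey if ts >= window_start]
    if not paid:
        return {}
    credit = {ch: 0.0 for ch, _ in paid}
    first_channel = paid[0][0]
    if len(paid) == 1:
        credit[first_channel] = 1.0
        return credit
    credit[first_channel] += first_touch_weight
    remainder = (1.0 - first_touch_weight) / (len(paid) - 1)
    for ch, _ in paid[1:]:
        credit[ch] += remainder
    return credit

conv = datetime(2024, 6, 1)
journey = [
    ("meta_prospecting", datetime(2024, 4, 1)),   # created the demand
    ("google_brand", datetime(2024, 5, 30)),      # caught it at the end
]
print(weighted_credit(journey, conv))  # {'meta_prospecting': 0.6, 'google_brand': 0.4}
```

Contrast this with the platforms' own views, where each would report the full conversion for itself; the weighted view is what makes prospecting's contribution visible.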
4. We refreshed creatives, but kept the real focus where it belonged
Yes, creatives were refreshed as part of the project.
But they were not the main hero of the story.
We aligned creative angles with the now stage-separated funnel, so prospecting and retargeting were no longer being asked to do the same job with the same messaging. That helped. It made the account healthier. But the real unlock still came from attribution clarity, because even strong creative gets misjudged when the reporting model itself is lying.
The Results
Within 90 days, the blended ROAS moved from 2.21x to 4.2x.
That was the headline shift.
But the more meaningful result was what happened beneath that number.
Wasted spend on low-quality retargeting dropped. Budget allocation became clearer, and the decisions behind it far more confident. The campaigns driving real top-of-funnel contribution became easier to identify and protect. And the entire account stopped being governed by whichever platform told the most flattering story.
That changed the quality of scaling.
The team could finally see which campaigns were genuinely driving profitable growth and which ones were simply showing up at the end to collect credit. And once that became clearer, the account became far easier to manage intelligently.
That is the kind of result that matters.
Because a better number is useful. But a better decision-making system is what gives that number a chance to keep holding up.
Why It Worked
This worked because we did not chase prettier reporting. We chased cleaner truth.
That is the real difference.
The uplift did not come from discovering some hidden trick inside the ad platforms. It came from finally seeing the account honestly enough to stop funding the wrong winners and start backing the campaigns that were actually creating demand.
Once that happened, everything improved in a more rational way.
Budgeting improved because spend could move with more confidence. Campaign prioritization improved because the team could see which stages were doing real work. Creative became easier to evaluate because it was no longer being judged through a distorted funnel view. And scaling became more stable because the business was no longer relying on inflated signals to decide where to lean harder.
At PreCrux, this is how we think about paid growth. Not as a game of platform screenshots and vanity ROAS, but as a decision system. If that decision system is distorted, even decent campaigns get mismanaged. If that decision system becomes cleaner, the whole account starts making more sense.
Honest Limitation
It is important to say this clearly: attribution clarity did not magically create the uplift by itself.
It removed blind spots.
That is what it did.
The result still depended on the account having real potential in the first place. It depended on the offer being strong enough, the funnel being healthy enough, conversion tracking being reliable enough, and the team being disciplined enough to act on the cleaner view once it was available.
So no, the takeaway is not that every brand can copy one attribution model and jump to 4.2x ROAS.
The real takeaway is much more useful than that.
When reporting becomes honest, everything else becomes easier to improve intelligently.
Final Takeaway
Most paid campaigns do not fail because the ads are terrible.
They fail because the team is optimizing against a version of reality that is incomplete, inflated, or strategically misleading.
That was the real problem here, and once the attribution model became cleaner, the path forward became much clearer too. The business could finally distinguish between what was actually driving profitable growth and what was simply taking credit for it at the end.
That is where serious scaling starts.
If your Meta and Google numbers look profitable, but the account still feels harder to scale than it should, the issue may not be the ads alone. It may be the reporting model shaping every decision underneath them.
That is exactly the kind of growth clarity we help build at PreCrux.





