
“How We Increased Ad Profitability 100% Year Over Year” - Diogo Castro, Performance Marketing Manager at Dogo



From predicting LTV within days to running thousands of structured creative tests, ahead of his appearance at Appsforum Lisbon, Diogo Castro, Performance Marketing Manager at Dogo, breaks down the practical frameworks behind doubling ad profitability year-over-year - and why modern growth is as much about operational rigor as media buying.


You scaled paid acquisition across Meta, Google, TikTok, and Apple Ads. How do you decide budget allocation today as platform performance and attribution reliability diverge?


Budget allocation starts with having a realistic view of ROAS and how much you can trust attribution on each channel. In some cases, platform-reported ROAS is still useful. In others, especially on iOS, I rely more on blended data, probabilistic attribution, and incrementality testing to understand true impact. Also, not every platform is meant to deliver the same efficiency. Some channels drive discovery and demand, while others, like Apple Ads, capture high-intent users and naturally show stronger ROAS.


I set clear baselines per channel/market based on historical performance or benchmarks for newer channels, then build realistic monthly forecasts around what’s achievable. From there, budget allocation is dynamic. I scale spend where marginal ROAS or CPA remains strong and pull back when performance deteriorates. I also reserve budget for testing, accepting short-term inefficiency when the learning potential is high. Over time, those insights reduce uncertainty and allow me to allocate budgets more confidently.
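
As a rough sketch of that kind of dynamic allocation rule (the budgets, baselines, scaling step, and testing reserve below are hypothetical illustrations, not Dogo’s actual numbers):

```python
# Illustrative sketch of a dynamic budget-allocation rule. Channel budgets,
# ROAS baselines, the scaling step, and the testing reserve are all made up.

CHANNELS = {
    "meta":      {"budget": 5000, "baseline_roas": 1.2},
    "google":    {"budget": 3000, "baseline_roas": 1.1},
    "tiktok":    {"budget": 2000, "baseline_roas": 1.0},
    "apple_ads": {"budget": 1500, "baseline_roas": 1.5},  # high-intent channel
}

TEST_RESERVE = 0.10  # share of total spend held back for testing

def reallocate(observed_marginal_roas: dict, step: float = 0.15) -> dict:
    """Scale spend where marginal ROAS beats the channel baseline,
    pull back where it falls short, and keep a fixed testing reserve."""
    new_budgets = {}
    for channel, cfg in CHANNELS.items():
        if observed_marginal_roas[channel] >= cfg["baseline_roas"]:
            new_budgets[channel] = cfg["budget"] * (1 + step)  # scale up
        else:
            new_budgets[channel] = cfg["budget"] * (1 - step)  # pull back
    total = sum(new_budgets.values())
    return {"test_reserve": total * TEST_RESERVE,
            **{c: b * (1 - TEST_RESERVE) for c, b in new_budgets.items()}}

print(reallocate({"meta": 1.4, "google": 0.9, "tiktok": 1.1, "apple_ads": 1.6}))
```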


You achieved a +100% year-over-year increase in ad profitability. What were the specific decision frameworks or metrics that mattered most in driving that outcome?


The biggest driver was building early and reliable profitability signals. We focused on predicting LTV from early lifecycle data so we could make confident decisions quickly. For a subscription app like Dogo, signals like “trials started” combined with recent conversion and retention data allowed us to predict long-term LTV within days of acquisition. That predicted LTV became the north-star metric for all performance decisions.


With that in place, we used very clear decision rules. Anything beating profit targets was scaled aggressively, and anything consistently underperforming was paused just as quickly. This applied at both the campaign and market level.
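
To make the framework concrete, here is a minimal sketch of a predicted-LTV signal feeding a scale/pause rule; the formula, thresholds, and example figures are illustrative assumptions, not Dogo’s actual model:

```python
# Hypothetical sketch: predict LTV from early lifecycle signals ("trials
# started" plus recent conversion and retention data), then apply a simple
# scale / hold / pause rule against a profit target. All numbers are made up.

def predict_ltv_per_install(trials_started: int, trial_to_paid_rate: float,
                            avg_revenue_per_subscriber: float,
                            installs: int) -> float:
    """Predicted long-term revenue per install for a days-old cohort."""
    expected_subscribers = trials_started * trial_to_paid_rate
    return (expected_subscribers * avg_revenue_per_subscriber) / installs

def decide(predicted_ltv: float, cpi: float, target_margin: float = 1.3) -> str:
    """Scale aggressively above the profit target, pause consistent misses."""
    ratio = predicted_ltv / cpi
    if ratio >= target_margin:
        return "scale"
    if ratio < 1.0:
        return "pause"
    return "hold"

# Example: a campaign cohort a few days after acquisition
ltv = predict_ltv_per_install(trials_started=120, trial_to_paid_rate=0.35,
                              avg_revenue_per_subscriber=60.0, installs=1000)
print(round(ltv, 2), decide(predicted_ltv=ltv, cpi=1.8))  # 2.52 scale
```

The same rule applies at both the campaign and market level; only the cohort being fed in changes.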


Testing was another major lever. We continuously tested new campaign structures, markets, and especially creatives on Meta and TikTok. Strong patterns emerged quickly, and once we saw repeatable winners, we scaled them hard. Finally, we treated profitability as a full-funnel problem. Paid acquisition was tightly aligned with ASO, CPPs, paywalls, pricing, and CRM. Iterating across the entire funnel with a shared profitability view compounded over time and ultimately led to an increase in ad profitability.


With over 3,000 creative tests run, how do you structure creative experimentation to balance speed with statistical confidence in app marketing?


We structure creative testing around concepts, not individual ads. Each test focuses on one clear idea and is supported by 4–5 variations, which lets us validate or invalidate concepts quickly while accounting for creative variance. We use a clear taxonomy for the assets (hook, format, length, language, audio, visuals, colors, and tactics) so we can analyze results across thousands of tests and spot repeatable patterns at scale. To balance speed with confidence, we lean on early indicators that reliably predict downstream performance.
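
A simple way to picture that taxonomy and the concept-level rollup; the field names and schema below are assumed for illustration rather than Dogo’s internal tooling:

```python
# Sketch of a tagged creative asset and a concept-level rollup, so a concept
# is validated or invalidated as a whole rather than ad by ad. Schema assumed.
from dataclasses import dataclass
from collections import defaultdict

@dataclass
class CreativeAsset:
    concept: str   # the single idea under test
    hook: str
    format: str    # e.g. static, video, ugc
    length_s: int
    language: str
    audio: str
    visuals: str
    colors: str
    tactic: str
    spend: float
    trials: int

def concept_results(assets: list[CreativeAsset]) -> dict[str, dict]:
    """Aggregate the 4-5 variations of each concept into one result."""
    rollup: dict[str, dict] = defaultdict(lambda: {"spend": 0.0, "trials": 0})
    for a in assets:
        rollup[a.concept]["spend"] += a.spend
        rollup[a.concept]["trials"] += a.trials
    return {c: {**r, "cost_per_trial": (r["spend"] / r["trials"]) if r["trials"] else None}
            for c, r in rollup.items()}
```

Because every asset carries the same tags, the same rollup can later be cut by hook, format, or any other attribute to surface repeatable patterns across thousands of tests.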


For trial subscription apps, trials and cost per trial are the primary signals, supported by metrics like CPI, CTR, and hook rate. At scale, we don’t wait for full statistical significance. Often, a few trials combined with strong engagement are enough to classify a concept as a likely winner or loser. Winning concepts are quickly pushed into main campaigns for scale, where results either confirm or disprove the initial signal. From there, we systematically iterate on what works.
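
A hedged sketch of that early read; every threshold below is a placeholder, since the interview doesn’t give exact cut-offs:

```python
# Hypothetical early-signal classifier: a few trials plus strong engagement
# are treated as enough to call a likely winner or loser, without waiting
# for full statistical significance. Thresholds are illustrative only.

def classify_concept(trials: int, cost_per_trial: float, ctr: float,
                     hook_rate: float, target_cost_per_trial: float) -> str:
    if trials >= 5 and cost_per_trial <= target_cost_per_trial and hook_rate >= 0.25:
        return "likely_winner"   # push into main campaigns to confirm at scale
    if trials == 0 and ctr < 0.005:
        return "likely_loser"    # no engagement signal, cut early
    return "needs_more_data"

print(classify_concept(trials=7, cost_per_trial=9.0, ctr=0.012,
                       hook_rate=0.31, target_cost_per_trial=12.0))  # likely_winner
```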


You reduced cost per winning creative by 90%. What operational or process changes enabled that level of efficiency?


The biggest change was moving from isolated ad production to a structured concept-and-variant approach. Instead of testing many unrelated creatives, we focused on fewer, well-defined concepts with multiple variations, which significantly increased the hit rate and reduced the cost per winning creative. A clear creative taxonomy was key. Every asset was tagged across attributes like hook, format, visuals, colors, length, audio, creator, and so on. This allowed us to analyze results at scale and understand why concepts worked or failed. That made it easier to spot repeatable patterns.


For example, we quickly identified which creator and dog-breed combinations performed best and then iterated aggressively on those patterns. We also improved efficiency by validating concepts in the cheapest formats first. Ideas were tested with statics, then scaled into video, and only moved into higher-cost formats like UGC once performance was proven. AI-supported image and copy generation further reduced production costs and increased testing speed.
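
The staged validation could look roughly like this; the format ladder and gating rule are assumptions for illustration:

```python
# Sketch of validating a concept in the cheapest format first and only
# promoting proven ideas into costlier production. Ladder and gate assumed.

FORMAT_LADDER = ["static", "video", "ugc"]  # cheapest to most expensive

def next_step(current_format: str, cost_per_trial: float,
              target_cost_per_trial: float) -> str:
    """Promote up the ladder only when the concept beats target in the
    cheaper format; otherwise kill it before spending on production."""
    if cost_per_trial > target_cost_per_trial:
        return "kill"
    idx = FORMAT_LADDER.index(current_format)
    return FORMAT_LADDER[idx + 1] if idx + 1 < len(FORMAT_LADDER) else "scale"

print(next_step("static", cost_per_trial=8.0, target_cost_per_trial=10.0))  # video
```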


How do you evaluate when a creative is a short-term performance spike versus a scalable, durable growth asset?


Early on, it’s genuinely hard to tell. Short-term spikes and durable creatives often look the same at launch. The difference usually shows up as spending increases. Scalable creatives tend to hold efficiency as budgets grow, while spike winners become volatile, breaking down as CPMs rise, audiences saturate, or creative fatigue sets in.


One of the strongest durability signals we’ve seen is alignment with real user pain points and core product features. Creatives that clearly frame a problem, show how the product solves it, and set accurate expectations tend to scale more sustainably.


In contrast, ads driven by trends or loosely grounded messaging often deliver quick wins but struggle to last. That said, spike winners still matter. Any creative that unlocks efficient incremental scale has value, and many short-term performers can be turned into longer-term assets by refining the message and anchoring it more firmly in product value.


Given increasing privacy constraints, how has your approach to tracking, attribution, and optimization evolved in practice - not theory?


In practice, our approach has become much simpler. Platform-side probabilistic modeling has improved significantly (Meta’s AEM is a good example), and SKAN 4.0 was a major step forward. Multiple postbacks and lower privacy thresholds have brought back more usable signals and made iOS optimization far more reliable than it was a few years ago. At the same time, our mindset around attribution has shifted. Instead of chasing perfect user-level accuracy, we combine multiple signals: platform-reported performance, SKAN data, blended business metrics, geo-level analysis, and selective incrementality tests.


Together, these inputs are far more useful in practice than relying too heavily on any single framework. Custom Product Pages also help, both by improving conversion rates and by allowing some level of tracking on the App Store.
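
One simplified way to picture combining those signals into a single working estimate; the weights and figures are invented for illustration, and in practice the mix is a judgment call validated with geo-level analysis and incrementality tests:

```python
# Illustrative only: blend platform-reported, SKAN, and top-down blended
# ROAS into one working number. Weights and example values are made up.

def working_roas(platform_roas: float, skan_roas: float, blended_roas: float,
                 weights: tuple = (0.3, 0.3, 0.4)) -> float:
    w_platform, w_skan, w_blended = weights
    return (platform_roas * w_platform
            + skan_roas * w_skan
            + blended_roas * w_blended)

# Example: platform over-reports, SKAN under-reports, blended sits between
print(round(working_roas(platform_roas=1.6, skan_roas=0.9, blended_roas=1.2), 2))
```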


As a result, our optimization now focuses less on targeting mechanics and more on what we can directly control: creative quality, product messaging, and overall funnel performance.


How do you collaborate with product, ASO, and retention teams to ensure paid acquisition actually improves lifetime value rather than just installs?


Everything starts with a shared focus on lifetime value, not installs.


On the paid side, we optimized toward predicted LTV, which helped align everyone around the same goal: bringing in users who would actually engage, retain, and monetize over time.


Having one common metric made collaboration easier, because decisions were judged on real outcomes, not short-term volume. With ASO, we aligned store messaging and visuals with the ad angles that were already performing well, and built Custom Product Pages around those themes by market. That improved conversion rates and set clearer expectations before install. Product and retention identified which features and user actions were most closely linked to strong retention.


We then highlighted those features in ads to attract users more likely to engage in the same way. Pricing and paywall strategy were also aligned by market, based on the most-recent performance data. So acquisition, conversion, and retention worked as one connected system rather than separate silos.


Finally, what are you most looking forward to on the Marketing and Acquisition panel at Appsforum Lisbon?


I’m looking forward to comparing notes on what’s working, what’s breaking, and what everyone is testing next. It’s a great chance to exchange real, hands-on learnings with people who are actively building and scaling today. User acquisition is moving fast, and hearing how others are adapting in real time is always valuable. I’m sure I’ll walk away with plenty of new ideas and tests to run.





