💎 Growth Gems #129 - Creatives, Paid UA, and AI
Gems from Thomas Petit and Eric Seufert in Season 6, Episode 1: MDM Mailbag #3 on the Mobile Dev Memo Podcast
Hi, fellow growth practitioner!
This week, I'm bringing you insights on creatives, paid UA, and AI.
I hope these insights will be valuable!
🔥 TOP GEM OF THE WEEK
It's pretty simple: I read, listen, or watch everything growth-related that Thomas is involved in.
Podcast episodes with both Thomas and Eric?
They instantly go up the mining queue (no offense to everyone else).
Once again, this one didn't disappoint.
Enjoy!
SPONSORED INSIGHT & RESOURCE
The Epic v. Apple ruling allowing iOS apps in the U.S. to offer external checkout was a dream come true for many.
You can now add a web purchase button to any RevenueCat Paywall and send US iOS users to a hosted web checkout.
No code, no store fees, fully compliant.
You can even have one smart button that seamlessly routes each user to the best path (IAP, web checkout, web product selection page), without hard-coding a thing.
I'm excited to see how developers leverage this feature for new revenue wins.
And here is my prediction 👇
The way I see this play out is that the first test allows you to identify which segments are the best fit for web payments. Then, you use RevenueCatβs Attributes to push web payments to these segments.
💎 Ads that look promising in creative testing (spend traction + decent CAC) but don't get much delivery once added to the main ad set alongside winners might:
Need their own dedicated ad set (especially if the content suggests they target a specific/niche/different audience)
Be kept in the backlog for when the big winner of the main ad set starts showing fatigue
(02:27) by Thomas
The first point helps mitigate the fact that Meta's winner-takes-all ad delivery can prevent audience expansion.
💎 When statics don't get spend vs. videos, consider restricting placements for specific creative types (e.g., Feed for statics and Reels for videos).
(02:40) by Thomas
stage: growth / scaled
💎 Implement a two-step creative testing process:
First, validate on which ads Meta wants to spend. Use a minimal budget and a lot of creatives together (e.g., $100 with 50 creatives). Separate statics and videos.
Then, test conversion performance on the top ones (e.g., the top 5) with more budget.
This approach prevents wasting budget on ads that won't get delivery regardless of their conversion potential.
(04:35) by Thomas
This means that in the initial creative test, you focus on delivery potential rather than conversion (or even top-of-funnel) metrics.
Thomas uses the same optimization event for both steps: the same one as what the main campaigns use.
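A minimal sketch of how this two-step flow could look, with made-up spend numbers (the ad names, budgets, and cutoffs are illustrative, not from the episode, and this is not a real Meta Ads API integration):

```python
# Hypothetical two-step creative test: delivery filter first, CAC ranking second.

def step_one_delivery_filter(creatives, keep=5):
    """Step 1: many creatives, tiny budget. Keep only the ones Meta
    actually chooses to spend on: delivery, not conversion, is the signal."""
    ranked = sorted(creatives, key=lambda c: c["spend"], reverse=True)
    return ranked[:keep]

def rank_by_cac(finalists):
    """Step 2: give the delivery winners real budget, then rank on CAC
    (using the same optimization event as the main campaigns)."""
    measured = [{**c, "cac": c["spend"] / c["conversions"]}
                for c in finalists if c["conversions"] > 0]
    return sorted(measured, key=lambda c: c["cac"])

# e.g., 50 creatives sharing ~$100: a handful absorb almost all delivery.
creatives = [{"name": f"ad_{i}", "spend": spend, "conversions": 0}
             for i, spend in enumerate([31, 24, 18, 9, 7] + [0.2] * 45)]
finalists = step_one_delivery_filter(creatives, keep=5)
# finalists (ad_0 .. ad_4) move on to the higher-budget conversion test
```

The point the code makes explicit: step 1 never looks at conversions at all, only at where the platform chose to put spend.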
💎 Reframe creative testing as rapid loser elimination rather than winner identification. The fastest path to finding winning creatives is systematically and quickly removing the ones that don't work (i.e., not spending more on them), allowing you to focus resources on promising candidates.
(05:57) by Eric
💎 Trust Meta's delivery algorithms as a proxy for conversion potential. The platform's decision to show your ads is already a signal that they have conversion probability; use this as a filter before investing more in a specific ad.
(06:29) by Eric
Eric mentioned that he would even pause an ad within an hour or two to kill non-performers as soon as possible, and keep doing this as you look at conversions further down the funnel.
💎 The worst thing you can do is force spend on every single ad. This approach wastes significant budget on ads that platforms won't naturally deliver to. Let the platform's delivery preferences guide your testing allocation.
(07:23) by Thomas
Thomas doesn't even look at the ads before launching the creative test. That way he anonymizes the ads and removes the emotional component. He only looks at them after the test.
I guess that works if you're not involved in creating them :)
Both Thomas and Eric's point is that once ads are in the BAU (Business As Usual) campaigns (your scaled campaigns), you won't be able to force spend anyway.
stage: early
💎 Establish a minimum threshold of 10 conversion events per day for any campaign to ensure sufficient data for machine learning optimization. If you can't reach this volume, increase budget, change the conversion event, or don't run the campaign.
(11:51) by Thomas
As he puts it: "No machine learning is going to happen with two data points a day".
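The rule is easy to sanity-check with simple arithmetic; the budget and CPA figures below are invented for illustration:

```python
# Quick check of the "10 conversion events per day" rule of thumb.

def daily_conversions(daily_budget, expected_cpa):
    """Expected optimization events per day at a given budget and CPA."""
    return daily_budget / expected_cpa

def campaign_is_viable(daily_budget, expected_cpa, min_events=10):
    """If the budget can't buy ~10 events a day, either raise the budget,
    pick a higher-funnel (cheaper) conversion event, or don't run it."""
    return daily_conversions(daily_budget, expected_cpa) >= min_events

# $500/day at a $40 CPA -> 12.5 events/day: enough signal.
# $200/day at a $40 CPA -> 5 events/day: switch events or raise budget.
```

For example, a campaign optimizing on purchases at a $40 CPA needs roughly $400/day; below that, optimizing on a cheaper event like trial starts gets you back above the threshold.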
💎 ASO has limited scalability as a primary growth driver. While store optimization remains important for conversion, it cannot systematically drive large-scale growth. Focus ASO efforts on conversion optimization rather than expecting it to be a major acquisition channel.
(13:58) by Thomas
All successful large-scale apps rely on paid acquisition or other non-ASO growth methods. ASO should be viewed as a necessary conversion optimization tool rather than a primary growth strategy, with clear scaling limitations.
💎 Never, ever rely on view-through for Apple Ads (and don't run Today Tab ads).
(18:58) by Thomas
Thomas did share a case where looking at view-through was useful. They changed the product page. For their brand keyword (where they also ranked first organically), they got a lower CTR and initially thought it was a loss. But by looking at the sum of view-throughs + taps, they realized they were actually bringing more people to the page organically (and therefore paying less).
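With invented numbers, the check Thomas describes looks like this: paid taps fall after the page change, but taps + view-throughs rise, so total reach went up while cost went down.

```python
# Illustrative numbers only; the real case used Apple Ads reporting data.

def total_page_reach(taps, view_throughs):
    # taps: billed ad clicks; view_throughs here: searchers who saw the
    # brand ad but tapped the organic result below it instead (not billed)
    return taps + view_throughs

before = total_page_reach(taps=1_000, view_throughs=400)  # 1,400 visitors
after = total_page_reach(taps=800, view_throughs=900)     # 1,700 visitors

# CTR (taps) dropped 20%, yet total visitors rose: a win, not a loss.
```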
💎 Demographic targeting is largely irrelevant, besides age. But even then, it's best to use a multiplier/value rule rather than a full exclusion, and instead:
Use creatives for targeting
Filter out the signal you're sending back to the network (e.g., Google) if you want to get less of a specific age range
(21:41) by Thomas
Thomas did mention that in consumer subscription, trial conversion is highly correlated with age.
💎 Within 6 months, it will be clear to everyone that AI video is the way to produce creatives.
(27:00) by Thomas
💎 Producing more, faster is more important than producing cheaper. Cost savings are not the main driver behind using AI to produce creatives. People who were testing 5 ads are now testing 50, and people who were testing 50 are testing 2,000.
(28:28) by Thomas
💎 AI can be used to form test hypotheses and craft differentiated creative concepts, not just for the actual production. There might even be an advantage in not always producing via AI (because AI-produced videos might not stand out).
(30:20) by Eric
💎 There is a lot of potential for AI beyond creative production:
Predicting creative winners without testing
Analyzing performance/product data (MMP, product analytics, etc.) to identify blindspots and opportunities
(32:15) by Thomas
I've been surprised not to have seen an "Ask Amplitude AI" feature yet, or something to spot patterns and opportunities in data. Turns out, they've been working on AI Agents (in beta for now).
I like something else Eric mentioned: "the real value of AI will be for the stuff you don't see".
stage: growth / scaled
💎 It's not uncommon for non-gaming and subscription apps to have 20-25% of their budget going to AppLovin. But there are opportunities beyond AppLovin on "gaming" ad networks.
(39:56) by Thomas
You want to assess if it's more efficient to go after a new channel vs. spending on where you're already advertising. Thomas referred to this article from MobileDevMemo: Opportunity cost and diminishing returns in user acquisition.
Here are also a couple of LinkedIn posts on the topic:
By Shamanth Rao (with a focus on early stage)
By Daphne Tideman (a framework to decide)
By Eric Seufert (on understanding ROAS curve differences)
See you next time.
Stay curious!
– Sylvain
Chief Insights Miner at Growth Gems ⛏️
(Fractional) Head of Growth at Reading.com
Growth Consultant/Advisor for high-potential subscription apps
🎧 Source: