💎 17 Gems from "Past, Present & Future of SKAN"

Hi there,

Today's gems are mined ⛏️  from the first Level UP UA podcast episode with Adam Smart (Director of Product at AppsFlyer) and Piyush Mishra (Lead Growth Marketing at Product Madness).

They go in-depth on SKAN with David Phillippson (Founder & CEO at DataSeat) and Tim Koschella (Founder & CEO at Kayzen).

This is my favorite recent discussion on the topic, with an interesting look into DSPs.

Past, Present & Future of SKAN

💎 #1

On Android, Google allows a parameter to be pulled from the app store called RefTag/ReferID. It’s a simple way of attributing deterministically without a device ID that doesn’t let you build profiles of users (or collect long-term data on users).


💎 #2

What’s puzzling is that Apple has invented a completely new, extremely complex and still immature system for managing attribution that departs from every existing industry standard.


💎 #3

There are different privacy thresholds: 
- The privacy thresholds that can prevent you from knowing which publisher served the ad in the postback
- The privacy thresholds associated with the conversion values that advertisers have set
For any ad network, knowing which publisher drove the install is the biggest variable. But the way the privacy thresholds have been designed (probably to affect Google/Facebook), they are far too high.


💎 #4

DataSeat believes the privacy threshold is 10 installs or more per publisher, per region (Apple’s regions: Europe, North America, South America, APAC), per campaign ID, within a 24h rolling window. Above that threshold, installs from that publisher start being populated in postbacks.
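The threshold logic described above can be sketched in a few lines. This is a hypothetical model: the ~10-install threshold, the (publisher, region, campaign ID) keying, and the 24h window are DataSeat's educated guesses from the podcast, not Apple-documented behavior.

```python
from collections import defaultdict

THRESHOLD = 10  # assumed value from the discussion, not an Apple-documented number

def publishers_above_threshold(installs, threshold=THRESHOLD):
    """installs: iterable of (publisher, region, campaign_id) tuples observed
    in the rolling 24h window. Returns the keys whose install counts have
    crossed the threshold, i.e. would start appearing in postbacks."""
    counts = defaultdict(int)
    for key in installs:
        counts[key] += 1
    return {key for key, n in counts.items() if n >= threshold}
```

Note how a smaller publisher that only drives a handful of installs per campaign per day never crosses the bar, which is exactly the skew discussed in the next gem.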


💎 #5

The way the thresholds work creates an incentive to use fewer campaign IDs. It also skews things towards bigger publishers/networks like Facebook, Google, Snapchat and TikTok, which is ironic because SKAN was probably designed to hurt them.


💎 #6

Ad networks were not transparent about app bundles because they were worried that advertisers could buy traffic directly from the source apps. For some, it was also because they blend high-quality traffic with low-quality traffic.


💎 #7

SRNs certainly don’t want to report what is click-through and what is view-through. The biggest example is YouTube: it drives huge performance, but it also skews towards views, whereas the rest of the industry is judged on clicks. This is something advertisers should now be able to see if they receive the postbacks.


💎 #8

Product teams might need to build games/apps with ATT in mind in order to get the right signals and be able to acquire users.


💎 #9

There are 2 main variables to think about when designing the ML algorithm of a DSP:
- Time delay of the event: there is always a tradeoff between receiving an event early and its quality as a signal
- Frequency at which the event happens: even without taking SKAN into account, some events are so infrequent that you can’t really optimize for them
In the end, a lot of it comes down to scale.


💎 #10

No ad network in the industry is really there yet in terms of optimization, so you need to simplify. For now, optimize for 1. Earlier events 2. More frequently occurring events.
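The "earlier and more frequent" rule of thumb can be made concrete with a toy ranking. Everything here is illustrative: the event names, the numbers, and the scoring heuristic are assumptions for the sketch, not an industry formula.

```python
events = [
    # (name, median delay after install in hours, share of users who trigger it)
    ("tutorial_complete", 1, 0.60),
    ("day1_retained", 24, 0.35),
    ("first_purchase", 48, 0.03),
]

def rank_events(events):
    # Favor frequent events and penalize late ones (assumed heuristic).
    return sorted(events, key=lambda e: e[2] / (1 + e[1]), reverse=True)
```

Under this kind of scoring, an early, common event like a tutorial completion beats a rare, late one like a first purchase, even though the purchase is a far stronger quality signal, which is exactly the tradeoff described above.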


💎 #11

Before SKAN, when Facebook was optimizing for AEO or VO, a lot of it was based not on data from the actual campaign but on pre-existing data sets that Facebook connected back to your campaign goals (e.g. knowing that a user has paid in another game). This goes away with SKAN, so you won’t be able to feed that to the ML anymore.


💎 #12

The impact depends on what kind of monetization model you have:
- Ad-monetized (e.g. hypercasual game): the delta between high-value and low-value users is relatively low, so you “just” need to recalibrate to the fact that you’re getting lower CPMs (but you’ll probably have lower CPIs too). 
- IAP-monetized (e.g. casino game at the extreme): you rely on high-paying users, so the impact on both monetization and UA is much bigger.


💎 #13

You’ll have to rely on a mix of data to maximize the outcome of the new paradigm:
1. Maximize ATT opt-in so you have deterministic data to work with (e.g. 20/30/40% of users) and can extrapolate.
2. Work closely with your MMP to make sure that SKAN data is gathered for all the networks, including the SRNs (it’s worth much less if Facebook and Google are not included).
3. Have sophisticated pLTV buckets instead of just binary events.
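Point 3 can be sketched concretely: instead of a single binary "payer" event, map predicted LTV into graded buckets that fit SKAN's 6-bit conversion value (0–63). The dollar edges below are hypothetical, just to show the shape.

```python
import bisect

# Hypothetical pLTV bucket edges in USD; a real setup would fit these
# to the app's own revenue distribution.
PLTV_EDGES = [0.5, 2.0, 5.0, 10.0, 25.0, 50.0]

def pltv_to_conversion_value(pltv):
    """Return the bucket index (0-6 here) to encode as the conversion value."""
    return bisect.bisect_right(PLTV_EDGES, pltv)
```

A non-payer prediction lands in bucket 0, a mid-value user somewhere in the middle, and a whale at the top, which gives the ad network's ML a graded signal rather than a yes/no.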


💎 #14

If you want to understand the actual impact of ATT on your business, measure opt-in against the overall number of users (not just those who saw the prompt), because you can’t track any of the others (including those opted out by default). If you want to understand users’ reactions to privacy measures, measure how many users confronted with the prompt made the conscious choice to opt in or out.
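The two rates above divide the same numerator by different denominators, which is why they answer different questions. A minimal sketch (the function name and inputs are illustrative):

```python
def att_opt_in_rates(total_users, prompted_users, opted_in):
    """Two distinct ATT opt-in rates, matching the distinction above.

    total_users:    everyone in the cohort, including users never shown the
                    prompt (e.g. opted out by default at the OS level)
    prompted_users: users who actually saw the ATT prompt
    opted_in:       users who chose "Allow"
    """
    business_rate = opted_in / total_users       # share of users you can actually track
    behavioral_rate = opted_in / prompted_users  # conscious reaction to the prompt
    return business_rate, behavioral_rate
```

For example, 200 opt-ins out of 1,000 total users but only 800 prompted users gives a 20% business rate and a 25% behavioral rate, so quoting "opt-in rate" without the denominator is ambiguous.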


💎 #15

If you look at ATT opt-in data based on ad requests, it’s yet another data set: you can’t track at the user level, so you measure at the aggregate level how many users have opted in. But users who opt in may be under- or over-represented in terms of ad frequency.


💎 #16

UA managers now have much more complexity to deal with. They have to make qualitative judgements because a lot of the data they have is uncertain: which data set do you trust more, what is the underlying source, etc. It’s become more similar to what an investment analyst does.


💎 #17

With view-through and some level of multi-touch attribution already in SKAN, there isn’t much more it can evolve into besides allowing real-time postbacks and lowering privacy thresholds... and that’s not going to happen, because Apple is not trying to compete on attribution: they’re happy with a limited attribution system that helps their own advertising products.