Something that is often overlooked in the world of attribution is model validation. It can be hard for organizations to validate their attribution models because there is often no “ground truth” to compare against. For example, your data has no way of saying: “Yes, display deserves credit for 30% of the conversions, and since your model gives display credit for 29% of the conversions, that’s a pretty good model.” We don’t know the true credit a channel deserves, and this is why model validation in attribution can be so tricky.
That said, I still believe that for an attribution model to be trustworthy, and for marketers to extract actionable insights from it, it is imperative that we provide some measure of model validity. Without that accountability, the attribution model will go unused, and the high cost of building it may go to waste.
Here are some ways to assess how accurate your attribution model is:
Before selecting an attribution vendor, it is a good idea to ask for case studies and other proof points showing that their attribution model delivers impactful results. This will give you an idea of how accurate and helpful the model is.
Things to look for: have past clients been able to implement the results from their model? If so, what kind of lift in ROI did they see? Ideally, overall revenue increased after implementing those results, rather than increasing for some channels while decreasing for others because of a shift in media budget. For an attribution model to be effective, it has to shift spend in a way that increases ROI overall.
For more information on assessing attribution vendors, check out my previous post, which compares different attribution solutions and offers questions to ask when selecting one for your organization’s unique needs.
Another way to determine whether an attribution provider can demonstrate model accuracy is to check whether they use a methodology that allows the model fit to be assessed. Not all algorithms support this, but when they do, you get a quantitative measure of how well the model fits the data.
Common examples include R² (the coefficient of determination), root mean squared error (RMSE), and mean absolute percentage error (MAPE), to name just a few. Which error metric is appropriate depends on the algorithm used and on the attribution question itself.
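To make these metrics concrete, here is a minimal sketch in Python (using NumPy) of how each could be computed for a model that predicts conversions per week. The numbers are made up purely for illustration, and your vendor’s tooling may well report these for you.

```python
import numpy as np

# Illustrative example: actual vs. model-predicted conversions per week.
# These figures are invented for demonstration purposes only.
actual = np.array([120.0, 135.0, 150.0, 110.0, 160.0, 145.0])
predicted = np.array([125.0, 130.0, 155.0, 105.0, 150.0, 148.0])

residuals = actual - predicted

# R^2: proportion of variance in the actuals explained by the model.
ss_res = np.sum(residuals ** 2)
ss_tot = np.sum((actual - actual.mean()) ** 2)
r_squared = 1 - ss_res / ss_tot

# RMSE: typical prediction error, in the same units as conversions.
rmse = np.sqrt(np.mean(residuals ** 2))

# MAPE: average error as a percentage of the actual value.
mape = np.mean(np.abs(residuals / actual)) * 100

print(f"R^2:  {r_squared:.3f}")
print(f"RMSE: {rmse:.1f} conversions")
print(f"MAPE: {mape:.1f}%")
```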
That said, measures of model fit only show how well a model performs on the data you already have. They won’t tell you whether the model will generalize to future data, or whether reallocating your media budget according to its recommendations will actually increase your ROI.
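One lightweight way to probe the first of those questions is a time-based holdout: fit the model only on earlier periods and score it on later periods it never saw. The sketch below is an illustration under assumptions; a simple linear trend stands in for the attribution model, since the real fitting step depends entirely on your vendor or in-house methodology, and the weekly figures are invented.

```python
import numpy as np

# Weekly conversions (illustrative numbers). In practice this would be
# your real time series, and the model would be your attribution model;
# a simple linear trend stands in for it here.
weeks = np.arange(12)
conversions = np.array([100, 104, 109, 113, 118, 121,
                        126, 131, 128, 140, 143, 149], dtype=float)

# Fit only on the first 9 weeks; hold out the last 3 for validation.
train_weeks, test_weeks = weeks[:9], weeks[9:]
train_y, test_y = conversions[:9], conversions[9:]

# Stand-in "model": a first-degree polynomial (linear trend).
slope, intercept = np.polyfit(train_weeks, train_y, deg=1)
forecast = slope * test_weeks + intercept

# Out-of-sample error: how well the model generalizes to unseen weeks.
holdout_rmse = np.sqrt(np.mean((test_y - forecast) ** 2))
print(f"Holdout RMSE: {holdout_rmse:.1f} conversions")
```

Even a strong holdout score, however, only shows that the model generalizes to new data; it says nothing about whether acting on the model’s recommendations will actually lift ROI. That brings us to our final method of assessment…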
In my opinion, lift analysis and testing is the gold standard for assessing an attribution model. It is similar to the proof of concept I outlined in my first point (where a vendor shows how much incremental ROI the model produced), but instead of resting on others’ data, it demonstrates that the attribution model delivered real impact on yours.
Lift analysis and testing requires a high degree of trust in the model. You have to take the recommendations the attribution model produced, implement them to the best of your ability, and measure revenue before and after implementation. Controlling for seasonality and other confounding changes, such as updates to your website, is also essential to designing a solid test. Comparing ROI before and after implementing the model’s recommendations then gives you a measure of how valid the attribution model really is.
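As a rough numerical illustration of that before/after comparison, the sketch below computes a simple pre/post revenue lift and adjusts it with a year-over-year baseline as a crude seasonality control. All of the figures and the adjustment approach are assumptions for demonstration; a rigorous test would use a more careful design, such as a geo holdout or a matched control group.

```python
# Revenue per period (illustrative figures).
pre_revenue = 500_000.0   # period before implementing the recommendations
post_revenue = 560_000.0  # same-length period after implementation

# Same two calendar windows one year earlier, as a crude seasonality baseline.
pre_revenue_last_year = 480_000.0
post_revenue_last_year = 490_000.0

# Raw pre/post lift, ignoring seasonality.
raw_lift = post_revenue / pre_revenue - 1

# Seasonal drift observed last year over the same calendar windows.
seasonal_drift = post_revenue_last_year / pre_revenue_last_year - 1

# Lift adjusted for that drift: the change beyond the seasonal pattern.
adjusted_lift = (1 + raw_lift) / (1 + seasonal_drift) - 1

print(f"Raw lift:       {raw_lift:+.1%}")        # +12.0%
print(f"Seasonal drift: {seasonal_drift:+.1%}")  # +2.1%
print(f"Adjusted lift:  {adjusted_lift:+.1%}")   # +9.7%
```

The adjusted figure is what you would credit to the model’s recommendations rather than to the seasonal pattern.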
Sound risky? Sure. But after spending so many resources to get results from your attribution model, you’d better trust them enough to test if they actually work. That way, if they don’t, you can refit the model until it is valid. This means working towards increasing your ROI with your attribution model, which should have been the goal in the first place.
The methods outlined in this post can be used together or independently. I highly recommend the third option: running a lift analysis and testing your model, as it provides the most direct measure of real-world performance. Whichever method you use, always question your model’s accuracy and seek some concrete measure of its validity.
If you’d like to learn more about attribution, join our upcoming webinar on Tuesday, April 4th at 1 p.m. ET / 10 a.m. PT, titled “Content Attribution: Identifying content that converts”. In this webinar, you’ll learn how content attribution helps marketers identify and double down on their best-performing assets to generate a lift in conversion rates. You will also hear how Cardinal Path helped Intel influence content in real time for unparalleled conversion. Click here to register!