How to use machine learning to analyze attribution windows with Esa Tiusanen

In this episode, Esa Tiusanen, Senior Consultant at Columbia Road, shows you how to use machine learning to analyze your attribution windows.

You'll learn

  • How to use machine learning to improve your attribution models
  • How to evaluate different attribution models
  • How to analyze your models’ performance


Anna Shutko:

Hello Esa, and welcome to the show.

Esa Tiusanen:

Anna, thanks for having me.

Anna Shutko:

It’s awesome to have you here, and today we’re going to be talking about attribution windows. So to start off, could you please tell us what an attribution window is, just so we’re on the same page and the audience understands your definition of an attribution window better?

Esa Tiusanen:

Yes. Sure thing. So, an attribution window is basically the time period during which a specific conversion or an action on a website is said to be caused by a specific channel or a campaign that the user is interacting with. So, essentially, it’s the time period during which we can say that you clicking on this specific ad actually caused you to buy our product.
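
To make that definition concrete, here is a minimal sketch, not from the episode, of what an attribution window means in code: a conversion is credited to an ad click only if it happens within the window. The 7-day click window is an assumed example, not a value mentioned by Esa.

```python
from datetime import datetime, timedelta

# Assumed 7-day click-through window, purely for illustration.
ATTRIBUTION_WINDOW = timedelta(days=7)

def is_attributed(click_time: datetime, conversion_time: datetime,
                  window: timedelta = ATTRIBUTION_WINDOW) -> bool:
    """Return True if the conversion happened within the attribution window after the click."""
    return timedelta(0) <= (conversion_time - click_time) <= window

# A purchase three days after the click is credited to the ad; one ten days later is not.
click = datetime(2021, 3, 1, 12, 0)
print(is_attributed(click, datetime(2021, 3, 4, 9, 30)))   # True
print(is_attributed(click, datetime(2021, 3, 11, 9, 30)))  # False
```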

Anna Shutko:

I really like this definition and I also really like how you pointed out the fact that it’s meant to assign value. So, now we’re going to go to the very topic of this episode, which is your attribution windows analysis project. Could you please tell us more about the project, the challenge you had, and what were the goals you were trying to achieve there?

Esa Tiusanen:

Yes. So, the basic challenge that I was trying to solve was defining the proper value that we were getting from each of the campaigns, or groups of campaigns, that the client was running. They had really complicated, complex campaign structures. They had lots and lots of different conversion points that were, to an extent, competing against each other, and we were basically seeing how different campaigns were pulling conversions away from each other. That also showed up as large amounts of cross-selling that we really shouldn’t have been expecting, like campaigns not actually driving the sales they were intended to drive, but instead getting credit for a completely different product they shouldn’t have been selling.

So, we did some analysis on that, and we essentially found that the more ad impressions and reach we were getting for different campaigns, the bigger the issue of misattributed cross-selling was. Essentially, just because some ad campaigns were shown to lots of people, they were getting credit for lots and lots of sales that they probably weren’t the main drivers behind. So, because of the complex campaign structure, we couldn’t actually tell how well each campaign was selling. We wanted to validate our numbers, get an idea of how well each campaign or group of campaigns was actually performing, and be able to optimize our campaigns based on that.

Anna Shutko:

Yes. That definitely did sound like a very complicated task, and I was actually surprised to hear that there were campaigns pulling conversions from each other. I think that’s the first time I’m hearing a case like this. So, within this project, how was the data flow structured? Maybe you could try to simplify it and explain how you tried to solve this. Another question here would be: what tools did you use to create reports and, overall, reduce the complexity of the challenge you were trying to solve?

Esa Tiusanen:

Yes. So, just to start off, we ran several different kinds of analysis on how we would actually attribute conversions based on the raw data that we had, and we tended to come up a little bit short. Different ways of analyzing the actual results came out inconclusive, or they were pulling in opposite directions depending on whether we were looking at Facebook or Google Analytics multi-channel funnel data. So, we had mixed and varying data in different sources, and they came out slightly different and didn’t really line up. We needed to figure out another way to get closer to an official truth for this question. The way we ended up trying to do it was essentially to create a model taking into account all the different ads that were running in different channels, bringing in data from all of them.

The idea was to predict the number of actual sales that we would be getting for each day, and then to drop off single campaigns or different groups of campaigns, so that we could essentially estimate what the model would predict us to have if we weren’t running a specific campaign on a specific platform. Then, comparing that to the actual results, we got an estimate from each different model that we tried of what the impact of that specific campaign was. So, our approach here was basically to try and avoid some of the tricky issues that you might run into with data quality based on ad impressions and mixing and matching different sources, and to get to a rough estimate that just takes the overall performance into account.
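
In broad strokes, that is a leave-one-out style of analysis: train a model on daily ad data across all campaigns, then "switch off" one campaign and compare the counterfactual prediction with the baseline. Here is a hedged sketch of that idea; the file name, column names, and choice of ElasticNet are illustrative assumptions, not the client's actual setup.

```python
import pandas as pd
from sklearn.linear_model import ElasticNet

# Hypothetical daily data: one row per day, one spend/impressions column per campaign,
# plus the actual sales for that day. Column names are made up for illustration.
df = pd.read_csv("daily_campaign_data.csv")
campaign_cols = [c for c in df.columns if c.startswith("campaign_")]

X, y = df[campaign_cols].values, df["sales"].values
model = ElasticNet(alpha=1.0).fit(X, y)

baseline = model.predict(X).sum()

# Estimate each campaign's contribution by predicting sales with that campaign switched off.
for i, col in enumerate(campaign_cols):
    X_without = X.copy()
    X_without[:, i] = 0.0  # pretend this campaign never ran
    counterfactual = model.predict(X_without).sum()
    print(f"{col}: estimated contribution ≈ {baseline - counterfactual:.0f} sales")
```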

The tools that we used were relatively simple. So, we used Supermetrics for Google Sheets. I don’t know what you actually call it, but the…

Anna Shutko:

Supermetrics for Google Sheets.

Esa Tiusanen:

Yes. Supermetrics for Google Sheets. So, we used that to actually extract the data. We had several different media sources that we were pulling data from. Essentially, we were just pulling one table at a time into a large spreadsheet. We actually ran out of space at some point, so we then had to select specific sets of data into a collection sheet later on. So, we worked out how to clean, filter, and collate all the data into one table where we could actually do the analysis. We did that with Supermetrics. Once we had the output sheet in the form we wanted, I used a tool called Orange, from Biolab, which is a machine learning platform, where I used a custom Python script to read the Google Sheets data table into the platform. Then I could create the models there, run the analysis, and do some visualization on that side.
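
As a rough illustration of that hand-off, and not Esa's actual script, a Google Sheet can be pulled into Python via its CSV export URL and then fed into whatever modeling tool you use. The sheet ID, tab ID, and date column here are placeholders.

```python
import pandas as pd

# Placeholder sheet ID and tab (gid); in practice these come from the spreadsheet's URL,
# and the sheet needs to be accessible to whoever runs the script.
SHEET_ID = "your-sheet-id"
GID = "0"

csv_url = (
    f"https://docs.google.com/spreadsheets/d/{SHEET_ID}"
    f"/export?format=csv&gid={GID}"
)

# Read the collated Supermetrics output straight into a DataFrame
# (assumes the collection sheet has a "date" column).
df = pd.read_csv(csv_url, parse_dates=["date"])
print(df.head())
```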

Yes. I did actually also run a quick output prediction model that I pushed into a Google Sheet as well. So, I created a small feedback loop there, but that didn’t end up being too useful, so we didn’t use it much. But I did build it back into Google Sheets so that it could also read the output of what we did in Orange. So, it’s a pretty simple two-tool stack, with Google Sheets and the machine learning platform talking to each other.
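
The write-back side of that feedback loop could look something like this sketch using the gspread library. The spreadsheet name, worksheet name, and sample rows are placeholders, and it assumes a Google service-account credential has already been set up and shared with the sheet.

```python
import gspread

# Authenticate with a service account whose key file sits in gspread's default location
# (~/.config/gspread/service_account.json); the target sheet must be shared with that account.
gc = gspread.service_account()
sh = gc.open("Attribution model output")   # placeholder spreadsheet name
ws = sh.worksheet("predictions")           # placeholder worksheet name

# `rows` would hold the model's daily predictions; these values are illustrative.
rows = [["date", "predicted_sales"], ["2021-03-01", 123.4]]
ws.update(range_name="A1", values=rows)  # write header plus values starting at cell A1
```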

Anna Shutko:

All right. Awesome. So, my next question here would be, could you please describe on a very high level which machine learning models you’ve tried and which of them worked, which didn’t, and why?

Esa Tiusanen:

Yes. So, I tried several different algorithms to create the models before I ended up with the best-performing ones, after doing some fine-tuning on the four main approaches I was trying. I used a linear regression model, an elastic net regression, that was the eventual winner and performed the best of all the models I tried. I also worked with a basic decision tree model and then fine-tuned that into a random forest. The random forest was eventually the second best. It was almost as good as the linear regression model, but fell just a little bit short. The next one was k-nearest neighbors. That wasn’t at a level that I would actually be happy using in any kind of real modeling case. So, it was pretty clearly underperforming, and that was the one I dropped really quickly.
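
A minimal sketch of how you might line up those four approaches with scikit-learn and cross-validation, assuming the same kind of hypothetical daily campaign table as above; the column names, hyperparameters, and R² scoring are assumptions, since the episode doesn't specify them.

```python
import pandas as pd
from sklearn.linear_model import ElasticNet
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import RandomForestRegressor
from sklearn.neighbors import KNeighborsRegressor
from sklearn.model_selection import cross_val_score

df = pd.read_csv("daily_campaign_data.csv")      # hypothetical collated dataset
X = df.drop(columns=["date", "sales"])           # assumed column names
y = df["sales"]

# Regressor counterparts of the four approaches mentioned in the episode.
models = {
    "elastic net": ElasticNet(alpha=1.0),
    "decision tree": DecisionTreeRegressor(max_depth=5),
    "random forest": RandomForestRegressor(n_estimators=200, random_state=0),
    "k-nearest neighbors": KNeighborsRegressor(n_neighbors=7),
}

for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="r2")
    print(f"{name:>20}: mean R² = {scores.mean():.3f}")
```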

Esa Tiusanen:

The results that we got from all the different models were relatively good, except for the k-nearest neighbors algorithm. That one wasn’t performing, and I didn’t really even try to fine-tune it too much. My guess is that it may not have worked that well because of the large seasonal variations we have in the industry the client is working in. If it was expecting too many days similar to ones that are really outliers from the general trend, that might be part of the reason why it doesn’t perform as well. My assumption is that the linear model worked best because it tends to smooth out some of the variation, so it doesn’t try to be too confident, which might become a problem with the other algorithms.

Anna Shutko:

So, you mentioned large seasonal variations in the data, which I think is a very important thing to consider. Could you please tell us a bit more about what, in your opinion, analysts should pay attention to when they’re trying to structure their attribution model analysis process? Maybe there are some things, like the large seasonal variation we mentioned, or other data anomalies they should take into consideration.

Esa Tiusanen:

Yes. So, seasonal variation tends to be one of the bigger things. I did end up trying the underperforming models again with some of the seasonal variation cleaned out of the data, just removing some outliers. The performance didn’t improve drastically, but it did improve. So, I’m assuming there’s just more variation than I was able to clean out based on the seasonalities. I think my main tip for anyone who’s looking at these kinds of projects and trying to create attribution models is just to look at a broad range of ideas that could work. In this field, you don’t have any single source of truth that’s going to be completely accurate. There are no 100% accurate attribution models, and your expectations shouldn’t be set too high. I think the idea is to get actionable results out of your models.
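
As one simple way to do the kind of outlier cleaning Esa mentions, here is a small sketch using a z-score filter on daily sales; this is my illustration under assumed column names, not the method used in the project.

```python
import pandas as pd

df = pd.read_csv("daily_campaign_data.csv", parse_dates=["date"])  # hypothetical dataset

# Flag days whose sales sit more than three standard deviations from the mean
# (for example, big seasonal peaks) and drop them before modeling.
z = (df["sales"] - df["sales"].mean()) / df["sales"].std()
cleaned = df[z.abs() <= 3]

print(f"Removed {len(df) - len(cleaned)} outlier days out of {len(df)}")
```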

Esa Tiusanen:

So, try several different approaches and see how you can improve on the results that you get from the models. Eventually, you’re going to find some models that tend to point to the same broad conclusions. As long as you don’t expect too-specific predictions, I think you’re going to get good, actionable insights into which campaigns are performing and which aren’t. So, don’t look for silver bullets, just work your way towards the model that’s most suitable for the case you’re working on.

Anna Shutko:

All right. Yes. These sound like awesome tips. You mentioned that there is no silver bullet, so the analyst should try and test different models to see which one works. Are there any typical mistakes people could run into while they’re trying to find the model that works best for them?

Esa Tiusanen:

Yes. That’s a great question. I think one of the key issues I ran into while building my model was that I started off looking at too short a time period. When I’m looking at attribution windows, I’m of course looking at a longer time period over which the conversion might actually happen. That’s the entire point of looking at longer attribution windows. But when I started analyzing the data and trying to build the models, I was focusing on day-to-day sales, which obviously have lots of background noise, just because it’s a shorter time frame and the attribution actually works over a slightly longer time period. So, being mindful of what you’re actually trying to accomplish, and what level of data you want your outputs at, is definitely something you should look at. Other than that, I think it’s really about looking as broadly as you can and weighing up the solutions that seem to be giving a similar picture to each other. I think that’s really the biggest key issue when you’re working with attribution modeling.
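
One way to act on that tip is to aggregate the daily data to a level closer to your attribution window before modeling. A small sketch, assuming the same hypothetical dataset and a 7-day grouping; pick whatever bucket matches your own window.

```python
import pandas as pd

df = pd.read_csv("daily_campaign_data.csv", parse_dates=["date"])  # hypothetical dataset

# Summing daily rows into 7-day buckets smooths out day-to-day noise and lines the
# target up better with a week-long attribution window.
weekly = df.set_index("date").select_dtypes("number").resample("7D").sum()
print(weekly.head())
```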

Anna Shutko:

All right. Sounds awesome. Thank you for the awesome chat. Now, if there are listeners who would love to learn more about you, where can they find you?

Esa Tiusanen:

Yes. So, I recommend everyone go to columbiaroad.com and have a look at our company. We’re an e-commerce consultancy working in Finland and Sweden at the moment, as well as some other European countries. My contact information can be found there, so please send me a message if you want. I’m also available on Facebook, Twitter, and LinkedIn, as is Columbia Road as a company. So, please follow us, like, and send us messages if there’s anything you want to know more about.

Anna Shutko:

Awesome, Esa. Thank you so much for coming to the show today.

Esa Tiusanen:

My pleasure being here. Thanks for inviting me.

Anna Shutko:

That’s the end of today’s episode. Thanks for tuning in. Before we go, make sure to hit the subscribe button and leave us a review or rating on Apple Podcasts, Spotify, or wherever you’re listening. If you’d like to kickstart your own marketing analytics, check out the 14-day free trial at supermetrics.com. See you in the next episode of “The Marketing Analytics Show.”
