The next generation of marketing mix modelling

In this episode, our Global Brand Director, Kate Gleeson, speaks with Zach Bricker, Lead Solutions Engineer at Supermetrics, and Igor Skokan, Marketing Science Director at Meta, to discuss marketing mix modeling and how you can use it to measure the effectiveness of your marketing inputs.

You'll learn

  • What MMM is
  • How it’s changed over time
  • What types of data are used
  • How to validate for accuracy and reliability
  • Future trends in MMM
  • Advice for companies looking to implement MMM

Subscribe to the Marketing Intelligence Show

Learn from Supermetrics' experts how to use data to fuel growth and maximize the ROI of your marketing spend.

Transcript

Kate Gleeson:

Hello and welcome to our session on the next generation of MMM. We're going to be discussing the latest tools, innovations, and methodologies that are revolutionizing marketing mix modeling, providing marketers with deeper insights, more accurate decision-making capabilities, and better return on investment.

Today, I'm joined by Igor Skokan, who's the Marketing Science Director at Meta, and Zach Bricker, who is the Lead Solutions Engineer at Supermetrics. Iggy, Zach, it is great to see you. Thank you so much for joining me. I'm going to get straight into it. Zach, question for you. What is MMM?

Zach Bricker:

So MMM stands for marketing mix modeling, and it's a statistical analysis technique that aims to quantify the impact of a bunch of different marketing inputs, and in some cases external inputs, so you can examine historical data, deconstruct it, and then measure the effectiveness of your different marketing campaigns and activities.

So whether the marketing activity is a campaign I'm running or a creative that's changing, it helps to quantify that and gives you the information you need to improve your ROI.

Igor Skokan:

I would just add that really, for me and for us at Meta, we've seen that it's a really powerful tool to measure the business holistically. So not only the media or the marketing, but also the macro factors, external factors, things that are sometimes even out of your control. It really helps you understand business performance overall, and then perform what-if analyses, budget allocations, and other things on top of it.

So sometimes people talk about MMM as media mix modeling. For us, it has always been marketing mix modeling and, really, business modeling.

Kate Gleeson:

So we know that MMM has changed over the years, and Iggy, I'd love to hear from you about the key ways in which MMM has changed.

Igor Skokan:

MMM is a very old technique. It has been around for maybe 50 years, maybe even more, so it way predates the internet. And it's actually interesting because now it's sort of reemerging as a privacy-first, aggregated type of analysis, and that's because it never actually relied on the kind of data that is available now, or that was only available for a short period of time.

So the old ways of MMM were very much handmade: a lot of craft, a lot of building by the analysts. It was slow, and it was validated only statistically. And consequently, because it was such a hard, artisanal experience, all of this made it very expensive.

But whenever there are challenges like this, I'm sure you would agree, and people at the SuperSummit would agree, that digital skills have a track record of disruption. So MMM is changing. It's being digitally enabled, and it's faster, shorter, calibrated, and increasingly using automation with machine learning. It's like the next generation of MMM.

So beyond what was hygiene in the old days, we're now discussing data granularity, high-precision and high-accuracy analysis, and real methodological improvements to the underlying regression. I know that, at its heart, it's statistics. But combine this with modern machine learning and cloud processing and it really transforms. And this is where we are today.
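
At its core, that underlying regression can be sketched in a few lines. The snippet below is a minimal, illustrative example only, not Meta's or Supermetrics' implementation; the column names, decay rates, and saturation points are made-up assumptions.

```python
# Minimal sketch of an MMM-style regression with adstock (carryover) and
# saturation (diminishing returns) transforms. All names and values are hypothetical.
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

def adstock(spend, decay=0.5):
    """Carry a share of each period's spend into the next period (geometric adstock)."""
    out = np.zeros(len(spend))
    for t, x in enumerate(spend):
        out[t] = x + (decay * out[t - 1] if t > 0 else 0.0)
    return out

def saturate(x, half_saturation):
    """Simple diminishing-returns curve: response flattens as spend grows."""
    return x / (x + half_saturation)

df = pd.read_csv("weekly_data.csv")  # assumed: weekly sales, media spend, price index
X = pd.DataFrame({
    "tv": saturate(adstock(df["tv_spend"].to_numpy(), decay=0.6), 50_000),
    "search": saturate(adstock(df["search_spend"].to_numpy(), decay=0.2), 20_000),
    "price_index": df["price_index"],        # a non-media business driver
})
model = LinearRegression().fit(X, df["sales"])
print(dict(zip(X.columns, model.coef_)))     # rough per-channel contribution estimates
```

Modern tools replace the plain least-squares fit with Bayesian or machine-learning estimation and tune the decay and saturation parameters automatically, but the structure of the problem stays the same.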

Kate Gleeson:

So let's talk data there. What types of data are typically used in MMM and how do you gather this data? Zach, I'm going to put that question to you.

Zach Bricker:

So one of the primary things that you look at is your sales data and your media spend. Those are the two base information points that you want to start bringing in when you think about an MMM. And from there it's about how you go from starting small, with at least something, up to those larger MMMs, where you're taking into account things like macroeconomic indicators, which Iggy mentioned previously, competitive data, and then any additional digital metrics and promotion data you might be bringing in.
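
As a rough illustration of what that starting dataset can look like, the sketch below joins sales, media spend, and a macro indicator into one table keyed by date; the file and column names are placeholders for whatever your own sources provide.

```python
# Hypothetical sketch: combine sales, media spend, and a macro indicator into
# a single modeling table. File and column names are placeholders.
import pandas as pd

sales = pd.read_csv("sales.csv", parse_dates=["date"])        # date, revenue
spend = pd.read_csv("media_spend.csv", parse_dates=["date"])  # date, channel, spend
macro = pd.read_csv("macro.csv", parse_dates=["date"])        # date, consumer_confidence

# One column per channel, then join everything on the date index
spend_wide = spend.pivot_table(index="date", columns="channel", values="spend", aggfunc="sum")
table = (
    sales.set_index("date")
         .join(spend_wide, how="left")
         .join(macro.set_index("date"), how="left")
         .fillna(0)
)
print(table.head())
```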

And the way that you gather this data now is primarily through APIs. Previously, you would have to wait 30 or 60 days for your television commercial, your radio ad spot, or your billboards to report data back to you and say, okay, this is what you spent, this is how many people passed your billboard, or this is how many we've estimated have heard your ad on the radio. Well, now we can get that data much faster through APIs. We can get it in a much more verifiable way, a way where we can trust our providers and say, hey, this data is going to be accurate.

And one of the real benefits that you start to see when you work with these APIs is you can also speed up the rate at which you tune or refactor your MMMs to account for these rapid changes. And the way that you want to ingest that data, coming from those APIs is preferably with a strong platform, either a third party or that you build yourself.

It will help alleviate some of those issues with trust as well. In previous years, you were relying on the analyst, and you were relying on the data coming from those older platforms. And now that we have data coming from these newer platforms that suffer less from bias, we can also start to trust that data more versus the previous iterations.

Kate Gleeson:

So obviously, when it comes to data, accuracy and reliability are central. Are there methods you can use to validate the accuracy and reliability of MMM models? Iggy, any thoughts there?

Igor Skokan:

Data is the most important thing; having the right granular, correct, and high-powered data is really important. And at Meta, we've actually developed our MMM feed very recently. The feed existed for a while, but this is data that is designed specifically for high-grade, high-cadence, granular, fast modeling.

Since June, this is now available via the MMM API, the MMM endpoint of the Insights API. So it really enables that kind of fast modeling that Zach was talking about. But beyond the data, the second, or probably the ultimate most important, thing is the validation of the MMM's results.

So how this is done, and how these results are calibrated, is via real-world incrementality experiments. These are especially easy to do on digital; for Meta, we have our conversion lift. Zach mentioned trust, and I described the MMM of old as a handmade, crafty thing. As a consequence, it carried a lot of analyst bias; a lot of the analyst's personality came through in the models. Craft in MMM is not good, you don't necessarily want it there. Complete removal of bias is not yet achievable, we haven't quite got there yet. But it is possible to minimize it through validation and calibration with real-world experiments.

To ensure validity, the models have to be statistically robust. But this is where the old models finished. They would tell you the R squared is this much or the mean error is this, and people would take that to mean it's a good model, but that alone is actually not enough. To ensure accuracy, statistical validity must be there.

There are even deeper metrics than R squared, but models have to be calibrated and validated using real world experiments and tests. That, I think, helps us to triangulate the data coming out of MMM, because at the end, it's a model triangulated with the real-world experiments, and this is how it's done.
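
As a concrete, entirely made-up illustration of that triangulation, the comparison can be as simple as checking the sales the model attributes to a channel against what a lift experiment measured over the same period.

```python
# Hedged sketch of calibrating an MMM against an incrementality experiment.
# The numbers are placeholders, not real results.
mmm_incremental_sales = 120_000         # sales the model credits to the channel
lift_test_incremental_sales = 90_000    # incremental sales measured by the experiment

calibration_factor = lift_test_incremental_sales / mmm_incremental_sales
print(f"calibration factor: {calibration_factor:.2f}")
# A factor far from 1 suggests the model over- or under-credits the channel;
# the channel's priors or coefficients are then adjusted and the model re-fit.
```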

Kate Gleeson:

So we've talked a little bit about this big shift in MMM and we know that there's been some seismic shifts that are happening in the marketing landscape, and in technology. What do you see as the future trends and developments in MMM, especially with regards to those new shifts happening in the marketing landscape? Zach, any thoughts on what those future trends might be?

Zach Bricker:

I see a lot of the future trends coming with what you see every day. Everybody, it seems, is well aware of the different LLMs that are out there – ChatGPT, Bard. All the large tech organizations are coming out with a version of an LLM.

What we're really seeing is the evolution of AI, and we're seeing it happen very quickly. As with most things in computing, we plateau, and then it accelerates almost exponentially. And one of the cool things you see with that especially is with the creative aspects of it, so being able to generate images. 

I see going into the future that AI and deep learning are going to start to inform and really take off, and as Iggy mentioned, start to help remove more and more of the bias in these different AI and different statistical models, not just MMMs, but other ones that exist as well.

You have ones that measure Bayesian causal factors, and as you integrate those, you're going to see more of that coming through. As I mentioned as well, there's the real-time nature of it, and Iggy mentioned the feed. That's going to be a really interesting endpoint to have, because previously, as Iggy talked about crafting one, you would generate an MMM and, if you had a really strong computational center back in the day, you could maybe retune it once every three months or longer, depending on how far back you went.

But now that you're going to have data flowing in, you can set up an entire pipeline and really have those MMMs running on a fairly continual basis as you ingest new data, and respond not just to the large shifts that happen over a month or a quarter, but to the ones that happen every single day.

So as with most things, we tend to be getting faster and faster at producing those results. And hopefully, we've started to tune our MMMs for actionability versus just information. One of the big things I've seen from MMMs of the past was that they were really giving you information, and if you had someone who could interpret it well enough, you got a little bit of insight and some actionability from the data. Whereas, as we mature, actionability and insight should be the primary things, because data is great to have, but it's hard to really utilize on its own.

You have to turn it into information and these models help do that. You're always going to need that person on the other end to really lend that insight of this is my model. This is what information I have, and then how do I turn that into a net positive for our organization? How do I take these descriptive and prescriptive approaches and really apply that to get to an ROI that I'm looking for? 

Igor Skokan:

I would just add a couple of things. So first one, a major trend that we see is that this is a technique now being used by a much wider range of businesses. In the past, it was CPGs, retail and so on. And now we see the contemporary modeling being used by gaming advertisers, app first advertisers, digital natives, omnichannel, and many, many more.

And actually, this is a message to all of you listening, sometimes we see the CMOs saying, we've seen MMM, we've done MMM in the past. I urge you to revisit this and try to see the progress that has been made in the last 18 to 24 months in MMM. It’s been tremendous and it's a completely new kind of experience that you can get.

As I was mentioning, with these automated, semi-automated, ongoing, experimentally calibrated machine learning models, it's a very, very different thing. So we also see companies having much clearer data strategies and infrastructure built intentionally for these kinds of models, be it APIs, variable creation, or ongoing experimental calibration: systems in place that allow them to power the kind of technology that does this. So, solutions now exist, for example one by Supermetrics, but look around. We predict that this space is growing and will continue to grow in the near future, and we see many more semi-automated modeling solutions that can be run, calibrated, and operated on a continuous basis. That's the kind of trend we see.

Kate Gleeson:

So Iggy, you mentioned there that CMOs might have a perception that MMM isn't for them. Why do you think that perception exists?

Igor Skokan:

Some of the CMOs may have experienced MMM in the past, and that experience was as we described earlier: it had a lot of challenges, it was slow, it wasn't actionable, it wasn't giving the kind of information that's needed, or it was coming every 6 or 12 months as some kind of high-level stats.

So it may be just that they have not experienced the modern MMM. And then the more contemporary CMOs may have not come across MMM because they, in the past, were using attribution, MTA, or identity-based models that are becoming less and less viable, if not completely unviable going forward.

Kate Gleeson:

Okay, so there's definitely been some significant changes and a lot of opportunity there. I think Zach, you touched on the magic three letters, which is ROI. But I would put this question to both of you because I think a lot of people attending and listening are going to be very curious about how businesses calculate the ROI for specific channels based on MMM findings.

Zach Bricker:

Yeah, so your ROI is actually pretty straightforward. You look at the incremental sales for a channel, apply your profit margin, subtract the channel's cost, and then divide by the cost of the channel. So the basic ROI is very, very easy to describe and very easy to do. But then there are all the externalities that happen.
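
To make that arithmetic concrete, here is a minimal worked example; all of the figures are made up.

```python
# Basic channel-ROI calculation as Zach describes it; all figures are hypothetical.
incremental_sales = 500_000   # sales the MMM attributes to the channel
profit_margin = 0.30          # gross margin applied to those sales
channel_cost = 100_000        # spend on the channel

incremental_profit = incremental_sales * profit_margin
roi = (incremental_profit - channel_cost) / channel_cost
print(f"channel ROI: {roi:.2f}")  # 0.50 here, i.e. 50 cents of net profit per dollar spent
```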

We're talking about MMMs and historically MMMs have not been cheap. Now with new SaaS platforms, faster ways to do things, Iggy mentioned the semi-automated, automated platforms, we can bring those costs down. But you still want to layer in the cost of how much this takes up of my time and my team's time. And then how much are we spending on this to really find the right platform in the right way in which you want to implement this?

Because the last thing any business wants to do is spend an exorbitant amount of money on a research project that ultimately costs more than they're getting back. It raised sales by 1%, but it cost you 8% of your total budget. So you want to avoid things like that, but that's how they would start to calculate the ROI for those findings.

Igor Skokan:

We say that any MMM model investment is only worth it as much as the positive actions that you can take as a result of this model. The best models have actionability as the purpose of the model and they ensure forecasting and simulations are run often and positive actions are taken out of this.

The very best models that we've seen operationalized are those built in partnership with the CFO. So, when CFO and CMO come together and they share the model idea, the model goals and they co-own, they co-create, and have a mutual understanding of the methodology and what we are going to do with these results.

We have seen the transformation and growth and, eventually, the marketing channels' ROI increase. So I would say that when you're thinking about the ROI of the models, think about actionability, and request simulations and an ongoing flow of information back into the business so you can take positive actions from it.

Kate Gleeson:

Makes complete sense. So you have talked a lot about the shift that's happening and the opportunity with MMM. Let's say a business is looking to implement MMM for the first time, what advice would you both give them? Iggy, I'll start with you. Any words of wisdom?

Igor Skokan:

A lot of companies ask us this. Can we build this in-house and what's the most cost-effective way? Zach mentioned it a couple of times. MMM has this reputation for high costs but is in-housing any cheaper? It really depends on the goals, the vision and the current talent that the organization has.

In-housing is a totally feasible journey, but to be clear, building in-house requires expertise. You need statistical, business science, and data science expertise. That needs investment in people, tools, and a culture to retain that kind of talent. But it's also true that analysts and data scientists are much more available, there are more people with that skill set than five, and definitely ten, years ago.

So, in-housing MMM, like many things, can lead to more flexibility and more control over the data. You can add a KPI, remove a KPI, test this, test that, have a nested structure, whatnot. It keeps you closer to the proprietary, sensitive data about your business, which is also good, as is having a deep understanding of the model.

But on the other hand, there are some incredible specialist companies and consultancies out there with the latest and the best contemporary MMMs. They are transparent, validated, incrementally calibrated, they know what they're doing, high cadence of high-quality insights. So really just look around. 

You could also go hybrid. We've seen this: partnering with some of these consultancies or companies on a SaaS basis. We've seen this with ad tech, right? Businesses have their own ad tech relationships, but it doesn't mean they don't use a specialist agency or a specialist partner; they just use them in different ways. At Meta, we have also built open-source MMM code. We call it Robyn, and it was released on GitHub to the data science community.

But you still need a lot of skills to operate it. The advice I give people is: set yourself a three-year plan, regardless of where you are today. Start with a collaboration with a trusted MMM provider, look at your vision, look at your goals, look at your talent. And it may be that you end up with an in-house model in three years, built with the support of that partner. Or you might just like their product and tool and decide to keep it that way. So really, it's all feasible.

Kate Gleeson:

Awesome. Zach, any advice?

Zach Bricker:

What Iggy mentioned covers the steps I would also take to start out with MMM. One of the primary things he mentioned was starting small, because, just as with any other software tool or platform, you don't want to make a huge investment without having a goal in mind. So start small, make sure that what you're doing is going to produce results for you, or, if it's not, that you're going to be able to exit without too much of an impact on your business. The reasons you would exit would be that you can't ensure clean data for some reason, you're not able to stay updated, or you don't have the resources to manage even a SaaS platform. That happens. Resourcing is always important and always a challenge that we face.

The next thing I would do is ensure that your data is clean and comprehensive, because it's going to go one of two ways. You may be working with a consultancy in that hybrid model, where you're bringing them data. If you're bringing them data, their results are only going to be as good as the data you provide. It's the same in-house: your data scientists and analysts are only going to be able to build a model as good as the data they're given.

Or you have the other method, where you go fully automated with a platform and they're pulling your data. Then you're making sure that the way they pull your data in, once you've given them access, is clean and comprehensive, and that there are plenty of checks and validations so they're not pulling in data incorrectly and they have the necessary information to validate your data.

So that could be the campaign naming schemas and all the different dimensions they're going to need in order to break your data down into the categorical and explanatory variables they need. Those are the first two big things that I would recommend.
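
For illustration, a few of those checks can be automated before any modeling starts; the column names and the naming schema below are hypothetical assumptions, not a standard.

```python
# Illustrative pre-modeling data checks; columns and naming schema are assumptions.
import pandas as pd

df = pd.read_csv("media_spend.csv", parse_dates=["date"])

issues = []
if df["spend"].lt(0).any():
    issues.append("negative spend values found")
expected = pd.date_range(df["date"].min(), df["date"].max(), freq="D")
if len(expected.difference(df["date"].unique())) > 0:
    issues.append("gaps in the daily date range")
# Example schema: region_channel_objective, e.g. "us_search_prospecting"
if (~df["campaign_name"].str.match(r"^[a-z]+_[a-z]+_[a-z]+$")).any():
    issues.append("campaign names that don't follow the naming schema")

print(issues if issues else "data looks ready for modeling")
```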

Iggy mentioned the experts in the field. Whether you're hiring an expert, hiring an agency, or just partnering with one person, have an expert help you map out your strategy, go in, and analyze where your weaknesses are and what you need to do before you even start doing an MMM, in order to give yourself the highest probability of success.

And then, obviously, the big things are staying updated within the space and making sure that, as the landscape evolves, you're constantly revisiting and recalibrating your models. You should be able to do that in real time, or near real time. Depending on the volume of your data, it may not be necessary to do it every day. The advice I give to a lot of our customers when they ask about real-time data is: depending on my dataset, how often should I be looking at my data?

For me, reasonably, it's the speed at which you make decisions. If you're not making decisions on your spend every day, it's not really necessary for you to be worried about a real-time model. If you're making decisions every week, look at it every three days because you also don't want to overwhelm yourself.

There's a very famous saying in statistics: paralysis by analysis. So if you focus on the real-time nature of things, you don't want to get so involved that you're just waiting for a number to tick up by 0.1, right? But if you are in that space, you have that amount of data, and you can benefit from building an automated system, do it.

And then, lastly is integrating insights. I'm a huge proponent of software and platforms as a service, of automating things, but from my data science background, there is a lot of value from my perspective in being able to integrate insights from subject matter experts.

While I may be an excellent data scientist, and I might have a great team that can incorporate a lot of what we learn, at the end of the day, I am not a performance marketer. I am not the creative director who can analyze what this MMM or what any other model has given me and look for those situations where it's not incorrect, but it needs more time to mature based on what they've seen in the past.

Sometimes creative takes longer to trigger, so make sure that, while you are trusting your data sources and hopefully you've got a trusted partner, your in-house team, or if you're going with the hybrid model, that you also make sure not to discount the human element of it.

You want to make sure that you are bringing in someone who is an expert in the field that you're measuring. 

I know Iggy mentioned that they don't look at it as strictly media, but as all of marketing and the business, so make sure that people who are experts in those areas are involved and that they're looking at it. Because they're the ones who are going to be able to really take that model, look at the insights and actions it's recommending, and apply that to your strategy in the best way they can.

Kate Gleeson:

Awesome. I think that's some pretty good advice in terms of next steps for businesses, and I think you both certainly highlighted that the challenge of MMM in the past presents a really, really great opportunity for today. 

So yeah, fantastic. Iggy, Zach, thank you so much for this session. I think if an organization is ready to move past gut intuition and into a more future-focused marketing strategy, then I think the next generation of MMM might be the right solution for them. So I really appreciate your time. Thank you so much.
