Marketing attribution in a privacy-first world: MMM, incrementality testing, and triangulation with Andrew Covato
In today’s episode of the Marketing Intelligence Show, we delve into the world of marketing measurement in a privacy-first environment. Evan Kaeding chats with Andrew Covato, Founder and Managing Director at Growth by Science, to expose the limitations of traditional methods like last-click attribution and retargeting.
You'll learn
The limitations of conventional attribution models and their potential to overstate channel effectiveness.
The power of Marketing Mix Modelling (MMM) for uncovering the true impact of marketing across all channels.
How to supercharge MMM with incrementality testing for unparalleled precision.
The "triangulation" methodology for a comprehensive and actionable view of marketing performance.
Evan Kaeding: And here we go. Thank you so much, Andrew, for joining the show. Welcome. Would you like to introduce yourself to our audience?
Andrew Covato: Yeah. Great to be here. Thanks for having me. My name is Andrew Covato. I'm a measurement and growth consultant and advisor to ad tech platforms on both the buy side and the sell side. I've been in ad tech for about 15 years and worked at a number of big tech companies: Google, eBay, Facebook, Netflix, Snap. Always looking at marketing analytics, marketing science. And I've really found a passion and a niche in helping advertisers and helping platforms figure out how to assess the ROI of advertising, digital advertising investment specifically. I think there's a lot of work that the industry has done over the years, but I think there's still a ton of work that needs to be done in that space. That's what I'm here to do: try to help some of those initiatives along.
Evan Kaeding: Really exciting to have you on the podcast just based on the breadth of experience that you've had on, let's call it, the ad tech vendor side. So being on the publisher side with Meta, with Google, with a couple of others. It sounds like you've done a tour of big tech, if you will, and probably seen a lot of the different ways that media can be bought and sold. But before we dive into some of the specifics that you might want to get into, I would love if you could just fill us in on some of the work that you're doing today and maybe the background that's led you there?
Andrew Covato: Yeah. Yeah. For sure. I've recently founded a growth consultancy that we're calling Growth by Science. And a lot of what I do today is really help demystify ads measurement and contextualize it with respect to some of the privacy changes that have recently come about. Obviously those changes, and I'm sure we'll get into them in excruciating depth here, have really thrown the ad tech industry for a loop, and I think a lot of folks, both on the platform and provider side and on the growth marketing side, are struggling to find an appropriate path forward for how to assess their marketing measurement. And so a lot of the work that I do at Growth by Science with our clients is to help stand up what I call scientific growth programs for growth marketers. And then on the flip side, we help ad platforms and solutions with their go-to-market: how to present a more scientific approach to assessing the ads that run on those platforms for their advertising clients.
So a lot of math, a lot of statistics, and definitely a lot of strategy on how to navigate that, because I think there's reticence on both sides, the buy side and the sell side, to make big changes to the way that things have been done for decades in digital marketing, even if there's a catalyst like some of the privacy changes that forces people to rethink some of those old-school paradigms.
Evan Kaeding: Yeah. Like you said, there's certainly a lot of inertia just based on the way that things have been done historically. But when you take a look at the environment with Growth By Science and you take a look at the established methodology for reporting on conversions or ROI, I think it's probably easy to understand that some of those methods are tried and true but may not necessarily be as robust to future changes as maybe many marketers think. Could you tell us about some of the most common methods that marketers are using that either today aren't quite as reliable as they may think they are or might be even less reliable in the future?
Andrew Covato: Yeah. Yeah. It's a great point. I will say one thing that I maybe don't fully agree with in your intro to this question is that some of these are tried and true and maybe privacy is the only thing that is causing them to be less robust. I would say that while that might have been true at the start, the nascency of digital marketing, over the past number of years, call it even a decade, a lot of the original ways that we measured ad efficacy have become less and less accurate. To explain what I mean by that, let's go back to the beginning and think about the first ads that were out there. Search ads, right? Banner ads potentially, but really it was one of the few places that, when you're on your dial-up modem and it takes five seconds or even longer for a web page to load, you're going to see a little banner ad there. If it caught your eye, you would've clicked on it and maybe gone and converted somewhere afterwards. You weren't at that time being bombarded with a plethora of ad platforms on multiple devices. That just wasn't the case.
So I think if you go back to those really, really simple times, it made sense that if you clicked on an ad and then subsequently did something, like put in your email address or bought something online, you could reasonably credit that ad for having elicited that action from you. And so that's the whole rubric of post-exposure attribution, which is the term I like to use for this idea of crediting ads that had some interaction with the user that preceded that user's conversion.
And that paradigm got more sophisticated over the years. You had multi-touch attribution, MTA, that played in there. People realized that, hey, users are clicking on multiple different ads, so maybe it makes sense to credit them in some way. And you can see where the logic was: let's try to give more credit to some ads that maybe introduced a brand to a person, and so on. But that whole process entrenched us in this idea that after you see an ad, you can reasonably assume that that ad caused you to convert. And that created a feedback loop with the ad platforms, which started using those post-exposure signals as objective functions to optimize their ad delivery. So when you are measuring and optimizing on the same signal type, it's going to give you this rosy picture, this self-fulfilling prophecy, which is the term I like. And that's the situation we're at today, where post-exposure attribution is feeding delivery optimization on all the major ad platforms. They're getting really good at predicting somebody's behavior and then showing ads to people who are already predicted to perform the action that the advertisers care about. So you're not really understanding the true causality of those ads. It's really, again, this self-fulfilling prophecy.
Now let's talk about privacy. Obviously MTA, multi-touch attribution, requires this idea of a path to purchase, which, if you think of what that is in the modern ads ecosystem, is a series of touchpoints that the user has had with various ads, potentially across multiple devices. So in order to really have that, you need an identifier that can stitch all of those ad exposures together. Privacy changes, ATT, GDPR to a certain extent, CPRA, and the various other privacy regulations that are coming up in different states in the US, are essentially making that process really, really hard. And the death knell was ATT, Apple's App Tracking Transparency protocol, where you can no longer access the IDFA, the mobile identifier, by default. Advertisers can't access it without explicit consent on both sides, the source and the destination of that data. Long story short, that means you can't have reliable path-to-conversion data sets anymore, and therefore MTA does not work anymore. So let me summarize where we're at. MTA was a non-causal, bad methodology even in the context of perfect data. Now it's a non-causal methodology with crappy data. If you combine those two things, it's a compounding of errors and uncertainty, and that really should make most marketers extremely dubious of anything that is post-exposure, and that includes even platform metrics that are post-exposure. And there's a whole slew of other things we could go on about for another hour, about why those are now in a shoddy state. But I'll pause there because I know that was a lot. Feel free to trim that down later on; I figured I'd give you more to work with.
Evan Kaeding: Yeah. I think you made a couple of really good points there, and there are two that I want to highlight and emphasize, and maybe we can go deeper on both. Number one, you've pointed out that post-exposure attribution, post-exposure conversions, is one element that we just have problems with. So the actual conversions that are reported from the ad platforms, we have some problems with those for a variety of different reasons. I'm interested to dive deeper on that. But then simultaneously, as you note, causality is a really important piece, because if someone was going to come to your website and buy anyway, well, that ad is then getting credit for that conversion. So maybe first let's talk about what some of the problems are with post-exposure conversion reporting on the platforms themselves. And you've actually had the privilege of working at some of these companies as well. So what's actually happening behind the scenes at Meta, at Google, for example, in ways that you can describe for our audience?
Andrew Covato: Yeah. I think the easiest way to understand this is to look at what ATT, and similar protocols for other OSes, is preventing. It's essentially preventing conversion signals that happen off the platform, so on an advertiser's website, from being linked to an exposure that happened on the platform, so an ad impression on Instagram or whatever. Because that linkage has been broken, you no longer have this high-fidelity exposure-to-conversion data set. So a lot of platforms, and I won't necessarily call any of them out by name, but most of them, have this new idea of, hey, we're going to estimate which impressions have had a subsequent conversion with some probability. They're essentially modeling out conversions that they can't explicitly see from a deterministic connection anymore; they're estimating whether or not those would've happened. So you combine that with the fact that post-exposure is already an inherently bad methodology, or we can say at best inaccurate, at worst flat-out wrong. You combine that with some modeled behavior. And I've used this analogy before, but it's like you're making up a question, writing an answer to it, and then grading yourself on the question. That's the best analogy to explain the situation that's happening here.
So if I'm a marketer right now, I would not at all look at platform first-party signals as any objective measure of success of the campaign. I think there is some utility to those signals, and once we talk about where we should go and what are some of the solutions to this problem, we can explain how they might fit in. But anybody that's looking at their CPA on any platform and saying, okay, it's high, therefore it's bad or it's low, therefore it's good, is missing a ton of context. And probably their campaigns are very far from optimal.
Evan Kaeding: That might come as a shock to some of our listeners, that the conversions in platforms shouldn't always be trusted, or many might actually have been dubious of them for quite some time as well. So depending on which side of the fence you're on, you're probably naturally going to be looking for some alternatives. And I know based on my experience in this space, some of the alternatives might be looking at your analytics tool, for example: a Google Analytics, a Piano, a Matomo, an Adobe Analytics, something like this. Or you might be looking at your CRM or your e-commerce receipts, for example, or your orders. Are these tools better, for example, for tracking what your conversions are, especially if you're able to get down to perhaps a last-click level?
Andrew Covato: I would say generally not better. Again, there's some utility to the first-party conversions in the platform, and there's some utility to what you're getting out of those third-party analytics tools. But if you just look at those numbers as absolute values, try to associate them back to some post-exposure attribution, whether it's last click or otherwise, and gauge the success of your campaign on that, I think that's really going to lead a lot of folks astray. What those third-party analytics tools can be good at is determining a relative mix of traffic sources. That's a data point that you can use if you have them set up correctly. And by the way, 90% of folks out there don't. But if you do have them set up correctly, they can be useful to look at the overall volume of conversions.
But ultimately you want to go to the most trustworthy source, which would be, in my opinion, your e-commerce store: hard dollars that you're receiving and that you can see showing up in your bank account. That's the ultimate source of truth. And if you start there and then work backwards into where you're willing to sacrifice some trust or insert some fuzziness, I think you'll find that by the time you get to those third-party tools or first-party metrics in ad platforms, you've lost a ton in between. And so there are, I think, other paths that can be taken to really understand your marketing performance that don't require a lot of that. They require a mindset shift, but they don't require really granular data and intricate data matching.
Evan Kaeding: Yeah. And that might be a hard truth for some marketers and some businesses where, for example, if you're in the manufacturing space or, say, food and beverage, you're making products, you're running advertisements, and well, ultimately you don't necessarily own that transaction data. That's owned by a retailer or potentially a distributor. So for your success metrics, well, you're either going to have to have really good relationships and partnerships, or realistically look at something else in the platform to gauge the success of your campaigns in many cases.
Andrew Covato: Yep. That's a good point. I would say, ironically, the folks that are in those shoes have been ahead of the curve, because they've been forced to look at methodologies that aren't so data-obsessed. Digitally native brands, over the last few years, have had this obscene luxury of being able to look at so much granular data. And I think they've gone down these rabbit holes of analyzing it and creating business logic around those path-to-purchase data sets that they've been able to acquire, which I think has actually divorced them from the reality of the true efficacy of those campaigns.
Whereas CPGs, I think, have been doing things like MMM, which I'm sure we'll get into, for decades. And that's been an extremely scientific way to understand their marketing programs. And that's precisely because of the point you brought up, Evan, which is that they don't have access to first-party sales. They have to look at potentially getting data back from some of their partners, or look at panel-based solutions, or some combination thereof. So you could argue that those folks have at least had the right scientific approach to looking at marketing efficacy. And I think now is the time for some of the more modern brands, the folks that have had these data capabilities and this data access, to start learning what is tried, tested, and true, and seeing if there are modern ways to apply that to their businesses.
Evan Kaeding: Well. Yeah. And I think somewhat counter-intuitively, it makes sense because as we know, constraints tend to breed innovation, and if you're constrained by not having that first-party sales data, then you're really forced to look into other methods for assessing the effectiveness of your advertising.
Andrew, I'd love to dive into the other half of what you had mentioned. So we know that conversions from platforms, and based on what you said, certain analytics tools and a variety of other sources may or may not be as reliable as maybe we had previously thought. The other side of that, which you brought up, which I think is extremely important, is we don't necessarily know, even if there are conversions that are measurable that are real, were they actually causally influenced by the marketing that we're putting in front of our customers? Let's dive into that a little bit as well. Could you give us a sense of what some of the ways that marketers can start to ask questions around the causality of their marketing might be?
Andrew Covato: Yeah. So I think I'll mention an example or a couple of examples. They're out there, and again, I won't mention these companies by name, but anybody is welcome to go research this. It's easily findable. There have been over the years a number of massive, massive advertisers that have questioned whether or not massive amounts ... I'm talking about hundreds of millions or even billions of dollars of spend on major ad platforms if they are accretive to their business. And there have been a number of cases over the last 10 years. And I've even had a part of some of those where these advertisers have undergone testing, actual holdout testing. Hey, should we turn these ads off and see what happens? And more often than not, nothing happens.
And so you get these scenarios where huge swaths of advertising are just shut off, because advertisers are realizing, in the process of doing experimentation, hopefully done in a scientific way (I know I can speak for at least some of these), that you don't get incrementality all the time, or the incrementality that you get out of the ad platforms is not always what you expect it to be. And it's certainly never, I can say that with pretty significant confidence, it's almost never what is being displayed in the first-party metrics, or even in any other post-exposure metrics. There's a lot of data around this. I've conducted I don't know how many thousands of tests that have proven the exact same thing: there's a disparity between post-exposure and true incremental outcomes. So advertisers need to understand that, for the reason I explained earlier, ad delivery being optimized on post-exposure and measured by post-exposure is its own alternate reality. In most cases, in 90% of cases, especially with larger advertisers, that is not a true reflection of what the ad campaigns are contributing from an incremental perspective; in some cases it can be. So let's look at a field that requires the most rigorous understanding of causality: medicine. People's lives are literally at stake. So what do they use? They use designed experiments. People have probably heard of double-blinded, randomized controlled trials, placebo-controlled trials. You can do something similar in an ads context. You can't do it on a user level anymore; you could in the pre-ATT days, and we can talk about ad platform incrementality testing in a bit. But for now, let's just say that that type of testing paradigm is no longer functional. But you can in most cases, especially if you're a larger advertiser, run a geo test, a matched market test, where you're controlling ad spend in certain geographies, holding it back in others, and looking at the relative difference in behavior between those two. Doing something like that is, by the way, a little guarded secret of some of the biggest advertisers out there, not the platforms. I've worked with most of them, and I can tell you with a hundred percent certainty that the biggest advertisers are all using geotesting in some way, shape, or form.
The advertisers are using geotesting as the foundation of their growth programs. And there are differing capabilities: depending on how much you're spending, you can get varying levels of granularity. But I would say, out of most of the advertisers that I talk to, and I'm talking folks that have six-figure-a-year budgets up to some that have nine-figure-a-year budgets, everyone in that range can apply geotesting in some way that gives them a reasonable understanding of the causality of their program. So to answer your question in a long, roundabout way: what every advertiser needs to be thinking about is, how can I incorporate some type of ground-truth experimentation, probably geotesting, as a fundamental measure of the efficacy of our ad spend?
Evan Kaeding: And Andrew, can you maybe walk us through an example of what you might set up in a very naive experimental design? For example, if I give you an example and say, I am an E-commerce company operating in the US and I'm selling subscriptions to let's say pencils. I'm selling to college students, they need pencils, they need to refresh them frequently, and I need to make sure that I'm getting them out there. I'm advertising online, I'm acquiring customers digitally. What's a good way for me to get started on understanding how effective my advertising is and how causal it might be as well?
Andrew Covato: Yeah. I think one of the most powerful tests that any advertiser can run is to look at the overall incrementality of their entire ad program. Obviously it's much easier if it's purely digital; it can get a little bit more complicated if there's a different mix and there's some out-of-home or TV or something like that. But let's assume it's mostly digital, mostly addressable digital platforms. So in the example that you described, say it's a national brand operating all through the US. You'll need to do an analysis of trends and sales by different regions. And there's actually a really great open-source tool, ironically released by Meta, called GeoLift. That's a great starting point. But basically what you'll identify, or what this tool can help you identify, is some markets that you're going to designate as test and some markets you'll designate as control. You'll essentially use some linear combination of the control markets to create a modeled version of the test market.
And at a certain point in time, you'll do something different to the test market: either you will light up ads in the test market or you'll turn off ads in the test market. And you will use that control, that modeled control, we call it a synthetic counterfactual, to predict the behavior that should have happened in the test market, all things being equal. And if you've done something that truly makes an impact on your sales, say you've turned your advertising off and kept it on in the control markets, what you'll end up seeing is a dip: a dip relative to the predicted, modeled-out test market that the control is generating. And that dip is your incrementality. Or conversely, if you turn ads on in the test market and keep them off elsewhere, you'll see a jump, or something like that. So I think something like that is the easiest first step to take. And I say easiest because it's the one that requires the least amount of math and it's the most easily visualizable. It's still challenging, because you still have to do geo-targeting across all of your channels and coordinate that. So that can be a pretty hectic experience if you're not using a tool that helps you manage some of that.
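To make that synthetic-counterfactual logic concrete, here is a minimal sketch of the idea, assuming a simple sales table indexed by date with one column per market. The function name, column handling, and the non-negative least squares fit are illustrative choices, not GeoLift's actual implementation, which is considerably more rigorous:

```python
import pandas as pd
from scipy.optimize import nnls

def estimate_geo_lift(sales: pd.DataFrame, test_market: str,
                      control_markets: list[str], intervention_date: str) -> float:
    """Incremental sales in the test market after the intervention date.

    `sales` is assumed to have a DatetimeIndex (one row per day) and one column
    of sales per market. Ads change only in `test_market` on `intervention_date`,
    while the control markets run business as usual.
    """
    cutoff = pd.Timestamp(intervention_date)
    pre = sales.loc[sales.index < cutoff]    # calibration window
    post = sales.loc[sales.index >= cutoff]  # evaluation window

    # Fit non-negative weights so a blend of the control markets tracks the
    # test market as closely as possible during the pre-period.
    weights, _ = nnls(pre[control_markets].values, pre[test_market].values)

    # The synthetic counterfactual: what the test market "should" have done,
    # all things being equal, given how the controls actually behaved.
    counterfactual = post[control_markets].values @ weights

    # If ads were switched off in the test market, this sum comes out negative
    # (the dip); its magnitude is the incrementality of the ads that were cut.
    return float((post[test_market].values - counterfactual).sum())
```

In practice you would also want confidence intervals, placebo checks on markets where nothing changed, and a long enough pre-period for the weights to be stable, which is exactly the kind of work purpose-built tools handle for you.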
But in general, it's worth the effort. Even if you don't decide to invest in a tool to help you with this, it is worth the effort to do this test at least once a year, and just see: what is the incrementality of your overall program? You can get so much value out of that, too. You can calibrate some of your other models, even if they're not perfect. Let's say you're using some post-exposure model. You can at least calibrate the total incrementality to be pegged at what you've calculated from that geo test, and then use some of the post-exposure correlative metrics as a relative comparison of channels. It's not perfect, but it's way better than just looking at that in a black box or in a box by itself.
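As a toy illustration of that calibration step, with entirely hypothetical numbers: the platform-reported conversions are used only for the relative mix across channels, while the total is pegged to what the geo test measured as truly incremental.

```python
# Post-exposure conversions reported by the platforms (hypothetical numbers).
platform_reported = {"paid_social": 4_000, "paid_search": 3_000, "display": 1_000}
spend = {"paid_social": 120_000, "paid_search": 60_000, "display": 30_000}

# Total incremental conversions measured by the all-channels geo holdout.
geo_test_incremental_total = 2_400

reported_total = sum(platform_reported.values())
for channel, reported in platform_reported.items():
    # Keep the platforms' relative shares, but rescale to the experimental truth.
    calibrated = reported / reported_total * geo_test_incremental_total
    incremental_cpa = spend[channel] / calibrated
    print(f"{channel}: ~{calibrated:,.0f} incremental conversions, "
          f"incremental CPA ~${incremental_cpa:,.0f}")
```

The channel split still inherits the biases of post-exposure attribution, so it's a rough relative read rather than channel-level truth, but the total is anchored to the experiment.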
Evan Kaeding: Yeah. It makes sense. And I think setting up that initial experiment and getting that experimental design nailed, to the point where you have results that you can trust as a business and that you can take forward, is probably going to be challenging for some marketers, but it really ends up becoming a paradigm shift in the way that you think about how to measure your ads. So you might start with, say, testing your entire program at once, turning all channels dark, seeing what that dip ends up being. And once you've seen that, hey, actually it seems like our advertising is incremental to X degree, then you can start to conduct additional tests, testing things like platform-level or even creative- or message-level incrementality. Is there a reason that you suggest starting with an entire-program test rather than a channel-by-channel analysis?
Andrew Covato: Yeah. I think it answers the basic question: is my marketing working or is it not? And you have two very different roadmaps depending on the outcome of that test. If it's working, and working within some margin that feels good or that aligns with the minimum returns that finance is expecting from your marketing, then great. You know that you don't have to make radical shifts to what's out there. You don't have to slam channels off or drastically change mix. All you have to do is tweak. And the tests that you would design in a case where the incrementality is there would be, I would say, pretty different and maybe less aggressive than the ones you would design if it wasn't there. So as an example, instead of turning channels off, you could potentially realign mix a little bit and do a test versus control with that type of experiment. You could have a business as usual, which is your existing marketing program, and have challenger-type setups that look at something totally different, and see if there is a step change higher or lower than that business as usual. But you always have something that you know is working that you can refer back to.
And again, you should retest that, because the market is so dynamic. What's working today might not work in six months. So the more frequently you can test these things, the better. But obviously there's a trade-off in doing too much testing; we can talk about that in a bit. But let's talk quickly about what happens if your marketing is not working, if it's really, really bad. You have the option to shut it all off. If it's not incremental, you can turn it off, save money, and try to reassess, try to bring it back to life in a different capacity. Obviously, if it's a massive budget, the ability to do that is dependent on the size of the business. If it's smaller, maybe it's easier to do that. If it's bigger, you might consider doing just more aggressive media mix shifts: promoting some channels, turning others off. You'll probably want to dive into a more granular channel-by-channel assessment at that point.
But typically spend is concentrated in a few channels for most advertisers, and if, overall, your digital marketing program is not working, probably those main channels are just not being incremental for you. And that's not necessarily to say those channels don't work at all. It's just that maybe you haven't found the right setup to generate incrementality, which is very different from the right setup to generate low CPMs or low CPAs on the platform. That's a totally different setup, and that's why, when advertisers make the shift to looking at incrementality, oftentimes they will see CPAs spike super high on the platform. It's a real brain trip to see that, but you've got to ignore it and focus on incrementality being the north star.
Evan Kaeding: Right. Yeah. It's probably a very sobering metric, to say the least, for marketers and even for finance teams who might find themselves looking down the barrel of really an effectiveness crisis in many cases. Historically you thought maybe your CPA for, back to this e-commerce example, acquiring a new customer might be $20, for example. And then all of a sudden, when you factor in incrementality, well, it turns out that if there are significant amounts of latent demand or strong word of mouth, then perhaps your media is actually closer to pushing a $200 CPA, for example.
Andrew Covato: Very common. Very common.
Evan Kaeding: Yeah. Any real world examples that you can reference either anonymously or named based on consultations that you've had with customers in Growth By Science where you've helped them uncover the fact that maybe their CPA or their incremental CPA was far higher than what they had originally anticipated?
Andrew Covato: Yeah. I work with a subscription product right now where we had this exact situation. Everything was assessed by last click, just platform metrics, and everything was rosy when you looked at the reports that were being circulated and raised up to finance. Those CPAs were modest, certainly lower than the subscription price, so it seemed like customers were being acquired at a positive ROI. It took us a while, a number of months, to shift the paradigm from last click to incrementality. And ultimately I convinced them: hey, let's just try this. Let's just turn off the ads in one of the states and see what happens, and here's the math that we're going to use to figure it out. It's almost exactly what you described. It went from a moderate, mid-double-digit CPA to, I think, actually close to a thousand dollars.
And so that caused, to say the least, a lot of consternation. But we ended up, and we're still not done with this yet, I'm still working with them to keep chipping away at it, but we were able to bring it down from an incrementality perspective so far, and it continues to go down. If you look at the platform CPAs now, those have tripled or quadrupled in some cases, yet the incrementality-based, what I would call the true, CPA is going down. And so now we've got everybody aligned to that, and we're understanding what types of things we need to focus on. Hint: retargeting is not a typical driver of incrementality, and this particular advertiser was doing a lot of retargeting. Retargeting is a common incrementality damper in many, many cases. Not in all cases, some cases it can work, but typically it doesn't really work. If folks out there that are listening want to experiment with paring back certain tactics, I would say retargeting and branded search are probably two of the best places to look at as possible places to shift budget away from.
Evan Kaeding: Right. And based on my experience, branded search and retargeting tend to be two of the, let's call them, highest-ROI activities when you look at post-exposure, click-based attribution in many cases.
Andrew Covato: 100%.
Evan Kaeding: Yeah. I think that could be a very interesting revelation for marketers, for example, who are looking at their CRM or their bank account or their e-commerce site rather than platform-reported conversions and seeing realistically that the numbers don't necessarily change perhaps as much as they would've anticipated.
Andrew Covato: Yeah. And with those two, if you think about why that's the case for tactics like branded search and retargeting: retargeting is explicitly out there searching for people who have put something in the basket, who have expressed intent. They already have a super high likelihood to buy. So maybe some of them genuinely do need a little push over the edge, but the way that retargeting works, it's not, I would say, precise enough to determine who out of those people that have expressed intent just needs a little push versus who is going to buy anyway. And so it blankets them all, peppers them with ads. And sure, all of us have experienced this: you put something in the cart and boom, boom, boom, buy the shoes, buy the shoes, buy the shoes.
It kills you with that frequency. And so that's why the CPA looks really, really, really good from a last-click or post-exposure perspective, and really, really, really bad from an incrementality perspective. Branded search is almost the same thing. Typically, when you're looking up a specific brand, you already have some intent, a higher-than-average intent, to visit the site and possibly buy. So again, it looks really good from a last-click, post-exposure perspective, but really bad from an incrementality perspective.
And again, I keep caveating all these things with, hey, it's not always the case. Just like everything in incrementality measurement, which is based on statistics, there are outliers; there are situations outside of the mean where things don't operate that way. And that's the same with some of the generalizations that I'm making. So that's why I would encourage everybody that's listening to this: test it out. You might have a different situation, and for your particular business there could be some nuances that don't show up in the average marketer's setup. So it's very, very important to test things, and I keep saying it again, but test things, test them often, and repeat the same tests sometimes if you can.
Evan Kaeding: I can imagine that our listeners are going to have a very high degree of urgency to start running some tests after listening to this, especially if this is new information for them as well. For those who have been across the marketing measurement side of things for a while, and particularly those who are working on brands that may not be as fortunate as those who do have access to their first party sales data, MMM has been a popular technique for years. You could even say decades for measuring the effectiveness of marketing. And one of the things you touched on earlier was when you are running these incrementality tests, you can actually use that in conjunction with your MMM program. Could you walk us through what that looks like in your experience?
Andrew Covato: For sure. MMM is definitely having a moment, and increasingly so through 2023, maybe a little before. I think 2024 could be the year of MMM. It really could. There are a lot of folks out there, a lot of measurement providers, that have begun to incorporate MMM into their offerings. And the reason for that is that MMM does not require anything except aggregate data, broken out potentially by some dimensions depending on how the model is set up. You don't need any complex data exchanges. It's essentially immune from privacy regulations in the sense of not needing that granular user-level data. And there have also been a lot of advances. Google and Facebook, frankly, have done a lot of that, and other companies too, Uber as well. A lot of folks are open-sourcing the innovations that they've made with respect to MMM methodologies, which have made those methodologies a lot leaner and nimbler. You can update the models more frequently. You can actually feed in some calibration points that let the model operate in a better way.
And this is all stuff that hasn't really been around in some of the old-school MMM models, which are extremely complex. They take a ton of data, the data they get is often delayed, and typically they're run once a year, maybe once a quarter, and a quarterly run was considered a lean one. And this is more the CPG brands I'm talking about, where you can imagine just the volume of data and how disparate it is. It just takes a while to amass it, let alone to crunch out the numbers. So it's very retrospective and looks back at performance over the last year, whereas these newer models can be more forward-looking, they can be predictive, and they can really help you with more granular optimizations.
And I mentioned the ability to calibrate these models; that's a relatively new innovation as well. It uses more of a Bayesian model, which allows you to, for lack of a better word, peg the outputs of the MMM to, ideally, a causal experiment. So you've run a ground-truth experiment, you've got some degree of causality, and depending on how you ran that experiment, you can plug that into the MMM and make sure that the MMM is taking it into account when it's generating its outputs. That can be extremely powerful, because it means that you don't have to run continuous testing to measure that causality. You can have the MMM refer back to that experimental result and make sure that, as the model's running, it's aligning its outputs to that result. And then of course, the idea is you would do more testing, regular testing, and continue to feed the model additional incrementality points and update the ones that you previously gave it. That way it continues to evolve and stays current with the latest incrementality.
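A minimal sketch of that calibration idea, assuming a toy weekly dataset and made-up priors (this is not a production MMM; open-source packages like Meta's Robyn or Google's LightweightMMM implement the full machinery): the lift measured in a geo experiment for one channel is encoded as an informative prior on that channel's coefficient, so the model's output stays pegged to the experimental result while it explains the rest of the data.

```python
import numpy as np
import pymc as pm

# Weekly aggregate data only: sales plus spend per channel (no user-level data).
weeks = 104
rng = np.random.default_rng(0)
spend_social = rng.gamma(5.0, 2_000.0, weeks)
spend_search = rng.gamma(5.0, 1_000.0, weeks)
sales = 50_000 + 0.8 * spend_social + 1.5 * spend_search + rng.normal(0, 5_000, weeks)

# Suppose a geo holdout measured roughly 0.8 incremental sales dollars per
# dollar of social spend, with some uncertainty around that estimate.
experiment_lift, experiment_se = 0.8, 0.2

with pm.Model() as mmm:
    baseline = pm.Normal("baseline", mu=sales.mean(), sigma=sales.std())
    # Calibrated channel: prior centered on the experimental lift estimate.
    beta_social = pm.Normal("beta_social", mu=experiment_lift, sigma=experiment_se)
    # Uncalibrated channel: weakly informative prior, learned mostly from the data.
    beta_search = pm.HalfNormal("beta_search", sigma=2.0)
    noise = pm.HalfNormal("noise", sigma=10_000)

    expected_sales = baseline + beta_social * spend_social + beta_search * spend_search
    pm.Normal("observed_sales", mu=expected_sales, sigma=noise, observed=sales)

    # The posterior now respects both the observed data and the experiment.
    trace = pm.sample(1_000, tune=1_000, chains=2)
```

When a new experiment lands, the prior is updated and the model re-run, which is the "keep feeding it incrementality points" loop described here; a real MMM would also add adstock, saturation, and seasonality terms.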
So in my view, this is the best of both worlds, where you have the ability to have causal measurement, or reasonably close to causal measurement, on an ongoing basis through the model. You are still running the experiments, which is the best way to do it, and the results tend to be more actionable. I'll say there's one other dimension to this, which is that if you layer on some post-exposure information, again not at the absolute-value level but as a relative comparison, you can incorporate that into the MMM plus the experimental design and the ground-truth experiments, and you can start to get results that are broken out at a very, very granular level but are, at the end of the day, rooted in causality. So this is a way to not throw the baby out with the bathwater when it comes to some of those post-exposure metrics, but you're not looking at them in the wrong way; you're actually extracting probably the only piece of utility in them, which is potentially a relative comparison.
I like to call that process triangulation. You're triangulating the ground truth experiment, the MMM, and the attribution data, and putting that together in a unified package which gives you the best of everything. And I would say the best measurement platforms out there are really leaning into this approach and are offering this to their clients. And I really think that 2024 is the year people wake up to that and understand that, hey, it's worth making the shift to this approach. I can't rely on post-exposure exclusively anymore. It's just leading me astray.
Evan Kaeding: Well, and I think importantly, like you mentioned, it's really three different sides of a comprehensive measurement plan, essentially. And most importantly, it is durable in a privacy-forward environment. I know when I talk to marketers, they have questions today about, well, what's going to happen when cookies go away in Google Chrome, for example, if that does end up happening in 2024. I suppose we'll be wiser maybe next year and see if that ends up being the case. But based on what you said, to a large extent, we're already living in a post-attribution world anyway. Is that a fair assessment?
Andrew Covato: Yeah. Totally. I feel like we have been for quite some time now, to be honest. But yeah, that's the beauty of this triangulation approach: exactly as you said, it's robust against anything else that happens. Say all identifiers vanish, IP vanishes, all of this stuff. You're still going to be able to use these types of methodologies, still going to be able to apply this triangulation approach and get some reasonably good results out of it. So it's a lot more robust to the privacy changes that are current and other ones that are possibly upcoming.
Evan Kaeding: And Andrew, maybe one last question to wrap things up. I know many marketers will be listening to this thinking about what their next steps are for conducting incrementality tests. If they are doing MMM, how can they start factoring these into their MMM, either predictions or evaluations? What size of brands should be thinking about this? Is this a big company problem with multi-million dollar budgets, or is this a problem for small mom and pop shops as well who are selling online and really need to be thinking about incrementality as they go to market and spend marketing dollars on customer acquisition?
Andrew Covato: Yeah. I really do think all brands, regardless of size, should be considering incrementality. Starting with the smallest brands: I think it's not reasonable to say do a geo-holdout if you're, say, a local flower shop. That's just not reasonable. And so in those cases, you really have no choice but to rely on post-exposure metrics. The only thing I would take into consideration if you're at that size is to look at the outputs with a grain of salt and understand that they're not reflective of your true cost of acquiring a customer. You could possibly experiment with doing on-off-type experiments and see what happens to your overall sales, but you'll have to be a little bit more hacky if you're really on the smaller side. If you're mid-level, that's actually the sweet spot for a lot of this methodology: call it low millions to mid-to-high tens of millions, or maybe a little bit higher. That, I think, is the tier of advertisers that are really most affected by this.
And I find that, for whatever reason, that cohort tends to get really wrapped up in last click and platform CPA just because it's easy. They maybe don't always have the resources to build something else up. And so they're the ones that I think are paying this tax, I call it the ad tech tax, for acquiring users that they would've acquired anyway. So if you're in that cohort, really see if you can commit in 2024 to doing a test across all of your media. See what is working and what isn't, or rather, see if your whole media program is working or if it isn't. This is something that we at Growth By Science are here to help with. Certainly we're not the only ones out there; a lot of folks can go to some of the open-source programs from the different platforms that I mentioned and apply some of that.
But I would really strongly recommend that that is something that folks prioritize. And then I think when you're a really, really large brand, typically you're already thinking about this to some extent, although I've been surprised where maybe that isn't always the case, but at that point, you have the luxury of scale where you can be very nuanced with your testing. You're typically aware of the overall incrementality. Hopefully you have run a test like that once or twice, or hopefully it's on your regular roadmap. And you can start to get extremely granular about turning off channels in particular countries and things like that to understand from a very nuanced perspective, what is driving your incrementality? And again, the MMM is like that bridge between the tests that lets you continue to operate and continues to be updated as you're adding more incrementality data points to your data set.
Evan Kaeding: Andrew's call to action for 2024: start turning stuff off and see what happens, is what it sounds like. Andrew, anything else that you'd love for our audience to take away before we depart here?
Andrew Covato: Yeah. The last thing I will say is that the stuff that has worked in the past is the stuff that still works now, no matter what media platform you're on. When I recommend media plans for clients of all shapes and sizes, they're always based in reach, in frequency, in creative, and in first-party targeting. None of this targeting that requires combining data sets. If you look at those as the cornerstones of your ad program, I think you're already well on the path to incrementality. And as you test and apply some of the techniques we talked about here today, you'll be able to tune those. Stay away from anything that is post-exposure optimized; that tends to not be as incremental as it appears. So test, test, test, test again, and then test once more, and you'll be able to see the truth. Take that red pill. Step outside the matrix of the self-fulfilling prophecy.
Evan Kaeding: Thank you, Andrew. Everyone. This has been Andrew and Evan. Thanks a bunch.