How to run marketing experiments: practical lessons from four marketing leaders

14-MINUTE READ | By Pinja Virtanen

[ Updated Oct 8, 2020 ]

“Marketing is one big experiment. Some experiments just have a longer shelf life than others.”

That’s the first thing Andy Culligan, CMO of Leadfeeder, said to me when I asked him about experimentation in marketing.

And to help you understand how to run those experiments, I interviewed Andy and three other seasoned marketing leaders. 

After reading this article, you’ll know exactly:

  • Why more marketers should run experiments
  • What’s stopping marketers from experimenting
  • The 3 things all good marketing experiments have in common
  • How to run a marketing experiment, step by step
  • What successful marketing experiments look like in practice

Ready? Let’s go!

Should more marketers run experiments? And why?

It’ll probably come as no surprise to you that when asked whether more marketers should run experiments, all four of our experts came back with some variation of “hell yes”.

But why?

Mari Luukkainen, Head of Growth at Icebreaker.vc, explains that because of her background in affiliate marketing, data-driven iteration with the goal of business growth is simply the only type of marketing that makes sense to her.

She says, “To figure out what works and what doesn’t to grow your business, you need experimentation. There’s no point in having a marketing team, or any business function, that isn’t running experiments to find a better, faster, or more optimized way to grow their area of the business.”

Both Michael Hanson, Founder & Sales Consultant at Growth Genie, and Andy from Leadfeeder bank on experimentation because the market is a moving target.

Michael explains, “If you don’t run tests in marketing, you’re always going to fail. What worked last year won’t necessarily work so well this year. Take organic Facebook, for example. Ten years ago, you could get good reach just by posting from your company account. But it doesn’t work like that anymore. And so if you don’t constantly measure performance and try to improve, you’re definitely going to fail.”

Andy adds, “Everything you do in marketing is a test anyway. Some tests work longer than others, but the point is, you need to help your marketing evolve.”

And finally, Mikko Piippo, Founder & Digital Analytics & Optimization Consultant at Hopkins, argues that experiments are great for reducing bias.

Mikko says, “Everyone has an opinion, and sometimes expert opinion is not worth much more than tossing a coin. Experiments are the best way to systematically create new knowledge, to learn more about your audience and your customers. They force you to question your own ideas, beliefs, and the best practices you’ve read so much about. This can be somewhat uncomfortable if the data doesn’t support your own ideas.”

And now that the jury has reached a unanimous verdict, let’s move on to the next big question: what’s stopping marketers from experimenting?

So why isn’t everyone and their grandma already running experiments?

According to Mari, the biggest problem isn’t that marketers don’t want to experiment. It’s that they’re working towards the wrong goals, lack a routine, and/or are afraid of failing.

She explains, “Far too many marketing teams are still struggling to set goals that directly correlate with business performance. But as soon as you set goals that make sense for the business, you’ll start systematically working towards them. And that’s when you’ll need to start experimenting.”

Working with larger corporations, Mari has also noticed that sometimes the company culture works against a fundamental part of experimentation: failure.

Mari says, “The other big issue can be that marketers are so afraid of failing that they won’t feel comfortable trying anything new. In some corporations, failing means that you’ll get fired. That’s when there’s no incentive for marketers to run experiments.”

If you recognize any of these problems in your organization, Mari offers three ways to overcome them:

  • Get buy-in for experimentation from as high up the organizational ladder as possible (investors, the board of directors, or the management team)
  • Start small and prove the value of experimentation with small wins (the problem with this approach is that it can be painfully slow)
  • Replace your team with people who know exactly how to run experiments (this is quick but can be a painful process)

And now, with any possible obstacles out of the way, let’s look at what a good experiment looks like.

The 3 things all good marketing experiments have in common

According to our four experts, all marketing experiments should have these three things in common.

1. They’re systematic and measured with data

The first rule of experimentation is that you have to stick to a process and make sure to use data to determine how successful the experiment was.

Mari says, “All good experiments are systematic and measured with data.”

Mikko follows with, “Good experiments follow a plan or a process. In a bad experiment, for example, a marketer would set a goal only after seeing the metrics.”

To summarize, a systematic process and a healthy relationship with data are what ultimately make or break an experiment.

Like Michael says, “If you’re going to test something, you need to measure its success. Otherwise it’s not really an experiment, is it?”

2. They’re big enough (but not too big)

The second, perhaps more controversial, requirement for a good experiment is that it’s big enough. In other words, yes, you should absolutely forget about the button-color A/B tests of yesteryear.

Mikko says, “Be bold. Test complete landing page redesigns instead of button colors, experiment with product pricing instead of call-to-action microcopy, experiment with different automated advertising strategies instead of tweaking single ads, experiment with budget allocation over different advertising platforms instead of micromanaging individual platforms.”

Why be bold? Because even when small tests succeed, they yield only small results.

Andy explains, “If you only test one small thing at a time, you’re never going to get a big enough uplift. So if you do run tests, you need to try something completely different. Throw the existing landing page out the window and try something new instead. If the new version works better, use that as your benchmark going forward.”

Michael says, “Obviously you don’t want to change the company name or logo every 5 minutes. But beyond that, you have to be flexible with the scope of the experiment.” 

He continues, “I had an ex-colleague who had all these wacky ideas and when we tried them, they always came through. The point is, even though ideally you’d want to test one variable at a time, you also have to realize that the impact of a small change will be small. And if you want quick results, you have to think bigger. So I’m all for experimenting with wacky ideas.”

And with that, the verdict is in: think big when you’re experimenting.

3. They’re run as split tests

The final precondition for a good experiment is that it’s run as a split test. This means you’re testing one variant against another, with the audience divided between them.

Mikko explains, “Good marketing tests are usually split tests. You split the audience (website visitors, advertising audience) into two or more groups. Then you offer different treatments to different groups — and you keep some percentage of the audience separately as a control group. This way, you can really compare the effectiveness of different treatments.”

Mikko also emphasizes that even if you don’t have a ton of media budget or website traffic, you can still run experiments. “With low website traffic, the methods just aren’t as scientific as with high traffic.”

The point is: don’t let constraints like low traffic stop you from experimenting.
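If you want to see what that audience split looks like in practice, here’s a minimal sketch in Python. Everything in it (the group names, the 20/40/40 split, and the hashing approach) is an illustrative assumption, not a prescribed setup; in reality, most ad platforms and A/B testing tools handle this assignment for you.

```python
import hashlib

# Illustrative split: 20% control, 40% treatment A, 40% treatment B.
# These group names and ratios are assumptions for the example.
GROUPS = [("control", 0.2), ("treatment_a", 0.4), ("treatment_b", 0.4)]

def assign_group(user_id: str, experiment: str) -> str:
    """Deterministically assign a user to a group.

    Hashing the user ID together with the experiment name means the
    same user always sees the same treatment within one experiment,
    while different experiments get independent splits.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # uniform float in [0, 1]
    cumulative = 0.0
    for name, share in GROUPS:
        cumulative += share
        if bucket <= cumulative:
            return name
    return GROUPS[-1][0]  # guard against floating-point rounding

# Example: route a visitor to a landing page variant
print(assign_group("visitor-42", "landing-page-redesign"))
```

The design point to take away: each user should land in exactly one group and stay there, otherwise the groups stop being comparable.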

Bonus: They may or may not have a hypothesis — depending on who you ask

As a bonus criterion for running good experiments, let’s look at the one word that always comes up when we talk about experiments: the hypothesis.

So, do you need one?

Mikko and his team at Hopkins, for one, are strong believers in setting a hypothesis before running an experiment.

Mikko says, “Good marketing experiments start from a hypothesis you try to validate or refute. Actually, without a hypothesis I wouldn’t even call something an experiment. For example, it’s easy to add a couple more ad versions to an ad set or ad group. Most marketers don’t follow any logic here; they just add some random ad versions. Doing this might improve the results, but they wouldn’t know why.”

He continues, “A hypothesis forces you to think about the experiment: Why do I expect something to change for the better if I change something else? Why would people behave differently?”

Andy, on the other hand, would go easy on the hypothesis setting. He explains, “In my opinion, really analytical marketers like to make experimentation into rocket science and it doesn’t have to be that. I’m data-driven but purely from a revenue perspective. I don’t tend to get too deep into the grass, the weeds, and the bushes. You’re only going to end up in a rabbit hole. If it’s working, it’s working — and that’s all I care about.”

And that’s why, rather than spending a lot of time forming hypotheses, Andy likes to tie the Leadfeeder marketing team’s experiments into their quarterly OKRs. 

For example, if a key result is to increase Leadfeeder’s tracker install rate by 10%, the team will simply come up with a number of changes to get there.

To conclude, whether or not you should set a hypothesis for your experiment depends on this question: will you benefit from knowing the contribution of each individual change?

If the answer is no, you’re in team Andy.

And if the answer is yes, well… Welcome to team Mikko.

And now that we’ve got that out of our systems, let’s look at the steps you need to take to actually run an experiment.

How to run a marketing experiment: step-by-step instructions

Even though there are clearly some things our expert panelists disagree about, the actual experimentation process all four of them follow is pretty uniform.

Step 1: Start by setting (or checking) your goal

The very first step in the experimentation process comes down to understanding what KPI you’re trying to influence. 

For example, if, like Andy’s team at Leadfeeder, you’re using OKRs, you can use your key results as the goals for your experiments.

So for him, a goal would look something like “increase our tracker installation rate by 10% within the next 3 months.”

Like Andy, you’ll want to make your goal unambiguous and give it a clear timeframe.

Step 2: Analyze historical data

Once you understand what needle you’re trying to move, it’s time to analyze your existing data. Mari suggests that at this point, you “analyze where you are and how you got there”. 

Similarly, Mikko says that for his team, this step involves “looking at existing data from our ad platforms and web analytics tools.”

Step 3: Come up with ideas

Equipped with your analysis of historical performance, you can probably list a dozen (or more) things that may or may not move the metric you’re trying to influence.

At this point, your only job is to write those ideas down.

Step 4: Prioritize your ideas

Mari suggests that you prioritize the ideas you came up with based on “resource efficiency, success probability, and scalability.”

Alternatively, you can use a scoring system like ICE, which stands for impact, confidence, and ease: you rate each idea on all three dimensions (typically from 1 to 10) and rank the ideas by their combined score.

After this step, you should have a clear idea of which experiments to go after.
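If you’d rather see ICE as code than as a spreadsheet, here’s a minimal sketch in Python. The ideas and their 1-10 scores are invented for illustration, and multiplying the three scores is just one common convention (averaging them works too).

```python
# A minimal ICE prioritization sketch. The ideas and their
# 1-10 scores below are invented for illustration.
ideas = [
    # (idea, impact, confidence, ease)
    ("Redesign the landing page", 8, 5, 4),
    ("Show pricing in ads",       6, 7, 9),
    ("Launch a webinar series",   7, 6, 5),
]

def ice_score(impact: int, confidence: int, ease: int) -> int:
    # One common convention: multiply the three scores
    return impact * confidence * ease

# Sort the ideas from highest to lowest combined score
ranked = sorted(ideas, key=lambda idea: ice_score(*idea[1:]), reverse=True)

for name, impact, confidence, ease in ranked:
    print(f"{ice_score(impact, confidence, ease):>4}  {name}")
```

In this made-up example, “Show pricing in ads” wins (378 points) because it’s easy to ship and high-confidence, even though its expected impact is lower than the redesign’s.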

Step 5: Run the experiment(s)

This one’s a bit of a no-brainer. Now that you know the expected impact of your experiments, it’s time to run the one(s) you think will have the biggest impact.

But can you run multiple experiments at once? Yes and no.

I’ll refer you back to the discussion we had earlier about hypotheses: if you absolutely need to know which tactic was the most successful, you should isolate your experiments and run one at a time.

If, on the other hand, you’re just trying to move the needle as quickly as possible and don’t need to attribute results to any single change, feel free to run multiple experiments at the same time.

Step 6: Measure success

Whether it’s while your experiment is still running or after it has ended, it’s time to look at the data. Did the experiment drive the expected results?

Is there anything you can do to optimize it (if it’s still running)?
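If your experiment was a split test, “measuring success” usually boils down to comparing conversion rates between the groups. Here’s a minimal sketch of a two-proportion z-test using only Python’s standard library; the visitor and conversion counts are invented, and with low traffic (as Mikko noted earlier) you’d want a longer test or a more careful method than this normal approximation.

```python
from math import erf, sqrt

def conversion_significance(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-sided two-proportion z-test (normal approximation).

    Returns the p-value for the difference between the conversion
    rates of groups A and B.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Convert |z| to a two-sided p-value via the standard normal CDF
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# Invented example: 120/4000 control conversions vs. 161/4000 variant
p_value = conversion_significance(120, 4000, 161, 4000)
print(f"p-value: {p_value:.3f}")  # roughly 0.013, i.e. likely a real difference
```

A p-value below your chosen threshold (0.05 is the usual convention) suggests the difference between the groups probably isn’t just noise.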

Psst! This is where Supermetrics for Google Sheets comes in handy: you can automate data refreshes and email alerts to cut down the time you would otherwise have to spend on manually collecting data from multiple platforms.

Step 7: Rinse and repeat

Depending on the scope and results of your experiment, you might want to start from the very beginning, or simply go back to Step 4 and choose new experiments to run off the back of your results.

And finally, if you need any inspiration for your upcoming experiments, keep reading. In the next section, Mari, Michael, and Andy share examples of successful experiments they’ve run.

4 examples of successful marketing experiments

To get you in the mood for planning your own experiments, here are quick examples from our experts.

Freska: experimenting both offline and online

When I asked Mari about the most memorable experiments she ran at Freska, a modern home cleaning service, she came back with two examples.

Mari starts with an offline experiment: “At Freska, our hypothesis was that people who have expensive, time-consuming hobbies would buy home cleaning services. We tested this by going to a boat expo instead of the usual ‘baby expos’ cleaning companies go to. And we ended up getting surprisingly good results.”

The second experiment that has stayed with her was of the online variety. 

Mari says, “At Freska, our original hypothesis was that people are afraid of seeing the price of home cleaning services in ads because it feels kind of expensive. But when we conducted a deeper analysis with offline surveys, we found that people actually thought home cleaning services were way more expensive than they are (over 1,000 €/month), so our price (150 €/month for biweekly cleaning of a 2-3-room apartment) was a pleasant surprise. We then tested showing the price in ads, and it actually increased conversions.”

Growth Genie: building the perfect outbound cadence one iteration at a time

Michael’s favorite experiment to date is of a more persistent kind. Over time, his team has perfected the art and science of cold outreach, one iteration at a time.

Michael says, “It’s all about cadences here at Growth Genie; how many calls, how many emails, and how many LinkedIn messages does it take — and in which order — to book a call with a prospect who’s never heard of you before.”

He continues, “We’ve learned what works simply by experimenting and iterating. For example, instead of asking for a meeting in the first few touchpoints, we quickly noticed that we can get much better results by giving away these valuable content snippets and only then asking for a meeting.”

Psst! If you’re interested in the TL;DR version of Michael’s learnings, check out his recent LinkedIn post.

And if you want to swipe Growth Genie’s “ultimate outbound sales cadence”, you can access it here.

Leadfeeder: pivoting lead generation for the “new normal”

When COVID-19 hit Europe and the US in March 2020, the Leadfeeder team needed to quickly cut their marketing budget by a third to increase runway.

For Andy, that meant figuring out a channel that would quickly generate pipeline without taking a ton of time or budget upfront.

Andy says, “We started pushing up webinars and those exploded. We quickly got 11,000 leads by only spending something like $1,000. But as everyone started doing webinars, the numbers began to drop. And that’s why we decided to start recycling the webinars into short 10-15 minute videos, rebranded as ‘The B2B Rebellion’ series on YouTube.”

He continues, “The webinars were a test and because they were working, we doubled down, and eventually moved into the video concept. The video concept has been working nicely, and now we’re experimenting with new speakers and distribution channels.”

Overall, this constantly evolving experiment has allowed the Leadfeeder team to maintain their pre-pandemic lead volume at a third of the cost.

Andy says, “Without constant experimentation, you’re not going to win and your marketing will go stale.”

Over to you!

What are some of the most successful (or surprising) marketing experiments you’ve run? 

Let me know on Twitter or LinkedIn!
