How to get started with conversion rate optimization with Todd Chambers

In this episode, we catch up with Todd Chambers, Founder & Director at Upraw Media, a SaaS PPC agency, to learn about conversion rate optimization (CRO).

You'll learn

  • What data you should collect to analyze your website conversions
  • How to set up a CRO experiment
  • How to validate your experiments
  • What reports you should build to keep track of your experiments

Subscribe to the Marketing Intelligence Show

Learn from Supermetrics' experts how to use data to fuel growth and maximize the ROI of your marketing spend.

Transcript

Anna Shutko:

Hello Todd, and welcome to the show.

Todd:

Thank you so much for having me. Looking forward to it.

Anna Shutko:

It’s awesome to have you here, and we have a very, very interesting topic for our listeners today, which is conversion rate optimization. Let’s start with the first question: what qualitative data do you collect, and how would you analyze this data to optimize the conversion rate on your website?

Todd Chambers:

Yeah. Great question. I like the fact that we start with qualitative data, because it often gets overlooked. For people who are unfamiliar, qualitative is the non-numerical data, and its aim is to uncover the why behind the numbers. It’s great to know that your free trial conversion rate is really, really bad, or your demo conversion rate is really, really poor, but what you want to uncover is the why behind those numbers. Especially if you’re an early-stage company, you probably don’t have a lot of quantitative data, the hard numbers, so qualitative data can really give you those killer insights, and it can have massive, massive leverage. So, let’s think about the goal of qualitative data first of all, before we think about what you can actually collect.

And the goal really is to get as close to the customers as you possibly can, because marketers are really, really bad at remembering that there are humans behind the numbers. In my experience, most marketers lean on the numbers more than on the qualitative side. So, the overarching goal when you’re thinking about this is: what are the motivations of my potential customers? What are the pain points? What is the perceived value they would get from a product or a service like yours? What are the potential anxieties, like it’s too expensive, can you guarantee results, do you operate in my country? All of these things. And then there’s also the voice of the customer. In collecting qualitative insights, you can actually steal the words from your customers.

If you can get into the mind of your customer and be involved in the conversation that’s going on in their head, ultimately that will lead to better insights, and you’ll be able to create more compelling copy, messaging, and a customer experience that speaks to the real needs of your customers. And lastly, I would say it’s super important to think about the jobs they’re trying to get done; the Jobs to Be Done framework is a good way of thinking about this. People aren’t looking for an enterprise collaboration platform with great support. What they really want is better productivity and better culture. So, don’t think about selling features. You’re trying to get at the emotions, the motivations, and the anxieties when thinking about qualitative insights.

And in terms of how you collect that, I think the holy grail is customer interviews: literally picking up the phone, jumping on a video call, and speaking to your customers. And that in and of itself is a science. An example might be: before you partnered with Upraw Media, what did your situation look like, and what was frustrating about it? That’s going to be really, really useful, and they’re going to tell you things like, well, we didn’t really know what we were doing. I’m a CMO. I’m spread really, really thin. I couldn’t give it the attention it needed. We needed some external validation. This type of information is incredibly useful when you’re thinking about your website copy and the whole experience. So, yeah. The holy grail, I think, is customer interviews.

Second, I would say, would be email surveys. You don’t want to tax your customers, so try and keep them short and sweet. But sending out questions using something like a Typeform can streamline that process. Then website polls. People know Hotjar; it’s very common. When you’re about to leave a site: did you find what you were looking for today? No, I didn’t. Okay. What was missing? It’s incredibly useful. Another good way would be recruiting panels of people. I know you had Peep Laja from CXL and Wynter on the podcast recently. You’re recruiting a panel of people that fit your ideal customer profile and asking them for real feedback. Look at our website: did you understand it? Did you know what to do next? And so on. That’s incredibly powerful. Then usability testing and screen recordings. Again, using Hotjar, watching how people interact with your website. Once you’ve watched 20 or 30 people navigate through your website, you get a good indication of what types of content are resonating with people, where they navigate, where they go.

And then lastly would maybe be something like a preference test, where you show a panel of people a certain design or landing page for a few seconds and ask them which one they preferred. So, ultimately, with the qualitative insights you’re looking for signals or patterns of behavior, and these things will just keep coming up over and over again. You also asked how you can analyze the data. Well, it will become quite clear what the common threads are, but we use Excel or a Google Sheet, and you can literally just think of it like this: one column is what exactly did the customer say? Another column would be: was it a motivation, anxiety, or a pain point?

And then you can categorize those into themes. It might be that it was too expensive, whatever those things are. And then you can just use a simple pivot table and rank them: what are the most common things that keep coming up over and over again? So, qualitative is less scientific than quantitative, but super, super important, and really, really powerful when it comes to conversion rate optimization.
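
As a minimal sketch of the spreadsheet workflow Todd describes, here is how the categorize-then-pivot step might look in Python with pandas. The quotes, categories, and themes are illustrative assumptions, not real interview data:

```python
import pandas as pd

# One row per piece of feedback, mirroring the columns Todd describes:
# what the customer said, whether it was a motivation, anxiety, or
# pain point, plus a theme tag for grouping.
responses = pd.DataFrame(
    [
        ("Can you guarantee results?", "anxiety", "proof"),
        ("It felt too expensive for our stage", "anxiety", "pricing"),
        ("We needed some external validation", "motivation", "expertise"),
        ("I'm a CMO and I'm spread really thin", "pain point", "bandwidth"),
        ("Pricing wasn't clear for small teams", "anxiety", "pricing"),
    ],
    columns=["quote", "category", "theme"],
)

# The pivot-table step: count how often each theme comes up per
# category, then rank themes by total frequency.
ranked = (
    responses.pivot_table(index="theme", columns="category",
                          values="quote", aggfunc="count", fill_value=0)
    .assign(total=lambda df: df.sum(axis=1))
    .sort_values("total", ascending=False)
)
print(ranked)  # pricing comes out on top in this toy data
```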

Anna Shutko:

I really love the focus on the customer there, and also the structured approach with spreadsheets and all the data organization you’re using. Now let’s talk about quantitative data a bit more. First of all, what quantitative data should you collect and analyze, and how can you use it together with the qualitative data you’ve just described?

Todd Chambers:

Yeah. Great question. So, quantitative data is numerical. It’s the numbers. And what you’re really trying to uncover there is: what is happening, where is it happening, when is it happening, and for whom? Typically, this type of information is found in Google Analytics and your CRM. The goal of the quantitative side is to measure the customer journey so you can uncover the bottlenecks. Okay, my demo conversion rate is bad, but at which step in that journey are the biggest drop-offs? But it’s also about measuring the positive moments in the customer journey. Where are people interacting with my content? And so on and so forth. And those customer journeys can be simple, and they can be complex, so maybe I’ll give you a simple example.

An e-commerce example might be: somebody clicks on a Google Shopping ad. They land on a category page. They browse a few products. They find one they like, they add it to their cart, they go through the checkout process, and they buy it. That’s relatively simple. You need to measure all of those interactions along the way, and Google Analytics does a pretty good job of that. A more complex one would be a SaaS example, where the user comes from a paid ad, they navigate through the site, they watch a video, they look at the pricing page, and they sign up for a free trial. You send them an email and say they need to verify their account, so you’re taking them off-site. Then once they’ve verified their account, maybe they use the tool, and maybe they don’t. They’ve got 14 days to use it. Maybe that user never comes back again. Maybe they come back 16 times.

So, the customer journey can be very, very different. It can be simple, and it can be complex. But as a baseline, the metrics you want to track would be simple things like the number of users, page views per session, and the bounce rate, but critically it’s the conversion rate and the source and medium: where are these conversions actually coming from? And then it would be making sure that information is sent into your CRM, closing the gap between offline and online. This is super important for lead generation, for example. If you’re selling an enterprise solution, you’re probably going to have a phone call with that person. Maybe there are multiple revisions of contracts and all of those things.

So, make sure you have those baseline conversion rates and the source/medium information sent into the CRM. You’d be surprised, actually; lots of companies don’t do this well. But if you want to take it much further, and this is really what I recommend, it’s getting those micro-level interactions. You’re trying to measure as much as you possibly can. These can be things as simple as button clicks, both internal and external, or how long people are watching your video for. Maybe I can give you a really simple example. We had a client recently, and they had a video right in the [inaudible 00:10:25] section of their site, and we had tracking in place to see how long people were actually watching that video. And we uncovered that 80% of people dropped off in the first 10 seconds.
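
Here is a minimal sketch of how that kind of watch-time analysis might look, assuming you can export one watch-time value per viewer from your tracking tool; the numbers are invented:

```python
# Hypothetical export: seconds watched per viewer of the page's video.
watch_times = [3, 5, 8, 2, 45, 7, 9, 4, 60, 6, 1, 8, 90, 5, 3, 7, 2, 9, 4, 6]

threshold = 10  # "dropped off in the first 10 seconds"
dropped = sum(1 for seconds in watch_times if seconds < threshold)
drop_off_rate = dropped / len(watch_times)

print(f"{drop_off_rate:.0%} of viewers dropped off before {threshold}s")
# A number like Todd's 80% tells you the opening of the video is the
# problem: change the script, shorten it, or test removing it.
```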

So, that’s incredibly powerful, right? We need to change the video. We need to remove it maybe. We need to try a different script. Another example we had was with an enterprise client that had a demo. So, you click on get a demo on the site, then you’d be taken to another page. Okay, this is what the demo is. This is what you can expect, and it gives your email. They give the email; then they go to another step. Okay, please give me your title and so on and so forth. If you’re not tracking all of those micro-level interactions, and I always say, the devil is in the detail, you’re going to have gaps in your knowledge when it comes to thinking about designing CRO experiments and everything else. So, yeah. That’s just a general how I think about quantitative tracking.

Anna Shutko:

That sounds awesome, and I really love how you’ve broken it down very specifically with a few examples. Now that we have insights into qualitative and quantitative data, how do you go about designing the actual experiments? You mentioned they depend on the amount of data you have, and that smaller companies might not have as much quantitative data, so they would lean more on the qualitative side. What kinds of scenarios would there be, depending on data volume? Maybe you could provide a few examples there.

Todd Chambers:

Yeah. 100%. I think you nailed it. The data is a key consideration. There’s a bit of a misconception in the marketing world that we should be testing everything, that we should be A/B testing. Just to give the listeners some context: best practice, when you’re thinking about A/B testing, is to have around 1,000 conversions. That’s actually a lot. For the majority of companies, especially if your goal is to get demos, you’re probably not getting a thousand demos a month. Or maybe even 1,000 leads or 1,000 free signups, whatever it is, to reach statistical significance. So, you can’t always be data-driven. Slack can be data-driven. Uber can be data-driven, because they have so much data, they have data warehousing, they have data scientists.

So, for the vast majority of us, I think the best advice is that you need to be data-informed. And maybe I could give you a couple of scenarios, like you asked. Take Slack, for example. Let’s say, and I’m making these numbers up, they get 1,000 signups a day, 30,000 signups per month. For those guys, through an A/B test, they can get to statistical significance really, really fast. That means they can do a lot of testing, and they can test small things. They may say, “Hey, I wonder if we have the testimonials, the logo bar in the [inaudible 00:13:10] section of our site, what happens if we just change one of those logos to something else? I wonder if we change this button CTA to something else.” Well, they can do that because they have the data.
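
To make the statistical significance point concrete, here is a minimal two-proportion z-test in Python with scipy. The traffic numbers are made up, in the spirit of the Slack example, and contrast with the low-traffic case Todd turns to next:

```python
from math import sqrt
from scipy.stats import norm

def ab_p_value(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-sided p-value for variant B vs. control A (two-proportion z-test)."""
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (conv_b / n_b - conv_a / n_a) / se
    return 2 * (1 - norm.cdf(abs(z)))

# High volume: a 10% relative lift on a 10% baseline is detectable fast.
print(ab_p_value(1500, 15000, 1650, 15000))  # ~0.005, significant

# Low volume: a similar relative lift on thin demo traffic is just noise.
print(ab_p_value(25, 2500, 28, 2500))        # ~0.68, nowhere near significant
```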

But for the vast majority of us, let’s take the other extreme. Say you’re a SaaS company selling to the enterprise, you just launched, and your main goal is to get people to book a demo. A demo is high friction, it’s a big ask, and you might only get 50 demos a month. In the beginning, you might only get one a week. So, you can’t do A/B testing. In those situations, when you’re designing experiments without much data, you need to be designing what I would call a moonshot experiment, which combines multiple insights and ideas and really, really tries to push the needle. So, if your conversion rate in the SaaS example is 1%, you’d probably be shooting to get it to 2%.

And the way you would do that would be with a sequential test. You can’t do A/B testing, so a sequential test is: for two weeks, we collect data on the original, then we make the change and run that for another two weeks, and we compare the two side by side. Of course, it’s not truly scientific, because of seasonality and things that are outside of your control, but generally speaking, that’s how I think about it. And some of the most impactful experiments I’ve seen in B2B SaaS, which is our wheelhouse at Upraw Media, and where I would focus my experimentation when you don’t have tons of data, are things that are above the fold. It’s the first thing users see when they hit your site, so you need to open really, really strong. You really need to stand out, because the goal is to get them inquisitive and get them to continue reading and consuming your content.

So, you need to stand out. You need to be different. You need to be very, very explicit and clear. People should know exactly what you do, who it’s for, and what you want them to do next. And is this believable? Are they credible? Because a bold claim proves absolutely nothing; you have to prove it. Just saying we’re the world’s best PPC agency for SaaS is meaningless. You have to prove it. And then another thing I would say is having a very solid call to action. Users need to know exactly what they should be doing next, with as little friction as possible. So, generally speaking, those moonshot big wins come from things that are above the fold.
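
For reference, a minimal sketch of the sequential test described above, with invented numbers: two weeks on the original page, then two weeks on the moonshot redesign, compared side by side:

```python
# Period 1: original page, two weeks of traffic (hypothetical numbers).
visitors_before, demos_before = 4200, 42   # 1.0% conversion
# Period 2: moonshot redesign, the next two weeks.
visitors_after, demos_after = 4350, 83     # ~1.9% conversion

rate_before = demos_before / visitors_before
rate_after = demos_after / visitors_after
lift = (rate_after - rate_before) / rate_before

print(f"before: {rate_before:.2%}, after: {rate_after:.2%}, lift: {lift:+.0%}")
# Caveat, as Todd notes: this isn't a randomized A/B test. Seasonality,
# campaigns, and anything else outside your control can move these
# numbers, so treat a lift like this as a strong signal, not proof.
```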

Anna Shutko:

Great. I really love how you’ve broken it down into different types of experiments. I actually didn’t know about the sequential ones and the moonshot ones, so it’s great that you shared them. You’ve mentioned a couple of things you can do on your website: you can improve the copy and add a strong CTA. My next question would be, how do you actually prioritize all the potential actions that come out of these experiments? You previously mentioned that there are two different ranking systems, for on-page CRO and off-site CRO. How are these different from each other, and what does each system consist of?

Todd Chambers:

Yeah. Great question. So, think about where you are at this point: you’ve collected all of your qualitative data, you really understand the why, the emotions, the pain points of your customer, and you’re tracking all those micro-level quantitative insights. If you spend some time making sure you have that measurement in place and you really dig deep to find insights, you’ll probably have a ton of ideas. Together you’ll say, “Hey, the demo conversion rate is really, really bad, and we’ve uncovered an 80% drop-off on the second step. Great. Well, I have an idea. Why don’t we improve the copy on that second step?” So, the idea of having a ranking system is that you put all of your ideas together, ideally bringing in multiple people from the team, and you basically list them.

And instead of the highest-paid person in the room basically saying, “Hey, I think we should do this,” a ranking system is a way of prioritizing using a slightly more pragmatic and scientific approach. A common ranking system is called ICE, which stands for impact, confidence, and ease. The person who puts the idea in the spreadsheet comes up with a hypothesis: this is what I think will happen, and why. That person explains the experiment in the meeting. Then everybody around the room gives their two cents on what they think the impact is, how confident they are that the experiment will win, and how easy it is to implement. But the problem with this is that it’s quite subjective. If you’re not a developer, and the experiment requires a developer, you might not know how easy it is to implement.

So, we then found two other ranking systems. One of them is called BRAS, which is for your off-site work. We’ve broken the two ranking systems down into ones that happen off-site, which might be an email campaign, Google ads, Facebook, cold email outreach, whatever, and the on-site ones, which are the website changes. BRAS, to give you a quick overview, is: blink, so don’t overthink it, just say in a blink how impactful you think this experiment will be; relevance, so ask yourself, is this channel really where my target audience hangs out?; availability, essentially how easy it is to execute and the cost of execution; and scalability, how scalable it is. That’s what we use for the off-page. Once everything has a score, you just sum up the scores, and the one with the highest score is the one you should start with.

And then the next one we use is called PXL, and that’s for the on-page. We literally took this from Peep Laja at CXL again, so you can check that out, but let me give you a flavor; there are probably too many criteria to list here, but it’s a much less subjective way of ranking. One of the questions is: is the change you’re going to make above the fold on the website? If it is, it gets one point. Is the experiment noticeable within five seconds? Yes? You get two points. Are you adding or removing an element? Yes? Two points. There are, I think, around 10 or 11 different questions, and at the end of it you end up with a rank, and that’s basically how you should prioritize your experiments.
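
As a minimal sketch of how a scored backlog might work, here is the ICE version in Python; PXL works the same way, just with its fixed checklist of point-scoring questions instead of free scores. The ideas and scores are placeholders:

```python
# Each idea gets impact, confidence, and ease scores (say, 1-10 each),
# ideally gathered from several team members to dilute individual bias.
ideas = [
    {"idea": "Rewrite copy on demo step 2", "impact": 8, "confidence": 7, "ease": 9},
    {"idea": "Replace the hero video",      "impact": 7, "confidence": 5, "ease": 4},
    {"idea": "Shorten the signup form",     "impact": 6, "confidence": 8, "ease": 8},
]

for item in ideas:
    item["score"] = item["impact"] + item["confidence"] + item["ease"]

# The idea with the highest total score is the experiment to start with.
for item in sorted(ideas, key=lambda i: i["score"], reverse=True):
    print(f'{item["score"]:>3}  {item["idea"]}')
```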

Anna Shutko:

I really love this scientific approach, and I think it’s very important to do it this way, because I agree, team members can be subjective, and these frameworks help you move quickly while providing insight into what to focus on first and what to focus on next, without adding biases. Now, can we talk a bit more about common mistakes when it comes to CRO? Because I’m sure, even with this fantastic framework, there are still certain things marketers should watch out for.

Todd Chambers:

Yeah. 100%. And these are mistakes I’ve personally made and observed, so I’m happy to share. One I already alluded to earlier is thinking you can A/B test everything, that we all need to be data-driven. For the vast majority of people, that just isn’t the case; being data-informed is a much better way to think about it. Another one is just not using research. This, I think, is crazy. You really should be collecting as much research as possible, both qualitative and quantitative. Often, when you first start out with CRO experiments, you’re just making assumptions, and context is absolutely everything. So, be diligent when you collect all that information, because when you have the right research, you have the right insights, and then you’ll be able to design and build much better experiments.

Another one is overlooking messaging and copy. The copy on the page is so important, I can’t stress that enough, and so is the positioning of the messaging. People will spend a lot of time on design, but research and data show that the copy, the words on the page, are incredibly important. So, definitely don’t overlook the messaging. Another one would be not designing the experiment upfront. With the sheet we use, whenever an idea goes in, you really have to think it through and design it properly. What is the metric we’re trying to move? What percentage increase are we shooting for? What other supporting metrics could we look at? If this doesn’t move the needle, what are the secondary metrics we could monitor as well? So, be diligent in designing things upfront and making sure you have the measuring and tracking in place.
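
One way to force that upfront design is to give every idea a fixed set of fields before it enters the sheet. A minimal sketch, with fields mirroring the questions Todd lists; the specific experiment is entirely hypothetical:

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List, Optional

@dataclass
class Experiment:
    hypothesis: str                 # what you think will happen, and why
    primary_metric: str             # the one number this should move
    target_lift: float              # the percentage increase you're shooting for
    secondary_metrics: List[str] = field(default_factory=list)
    start: Optional[date] = None    # set a timeframe...
    end: Optional[date] = None      # ...and always set an end date
    result: str = ""                # win or lose, and crucially, why

demo_copy_test = Experiment(
    hypothesis="Clearer copy on demo step 2 will cut the 80% drop-off",
    primary_metric="step 2 -> step 3 completion rate",
    target_lift=0.25,
    secondary_metrics=["overall demo conversion rate", "time on step 2"],
    start=date(2021, 6, 1),
    end=date(2021, 6, 28),
)
```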

And then lastly, and maybe this is the most important one: not zooming out and understanding where the biggest opportunities are. Let me give a simple example with a SaaS company again. Imagine you have a 10% conversion rate to free trial signup, but only 5% of those free trials are actually activating and turning into paying customers. If you have limited resources, it probably doesn’t make sense to spend all of your effort on the landing page and acquisition, trying to get that 10% up to 11%. You’d see much bigger revenue gains and growth if you focused on the activation rate. So, zoom out, measure the entire customer journey, and think of it like a leaky bucket: where are the biggest leaks in my customer journey? That will help you know where to focus.
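
Todd’s example is worth running the numbers on. A quick sketch, assuming 10,000 monthly visitors:

```python
visitors = 10_000
trial_rate, activation_rate = 0.10, 0.05

baseline = visitors * trial_rate * activation_rate   # 50 paying customers

# Option A: grind the landing page from a 10% to an 11% trial rate.
option_a = visitors * 0.11 * activation_rate         # 55 paying customers

# Option B: lift activation by the same one point, from 5% to 6%.
option_b = visitors * trial_rate * 0.06              # 60 paying customers

print(baseline, option_a, option_b)  # 50.0 55.0 60.0
# The same one-point improvement is worth more at the leakier step,
# because 5% -> 6% is a much larger relative gain than 10% -> 11%.
```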

Anna Shutko:

I love that you mentioned the copy. I definitely agree that copy is super important, and it gets overlooked a lot. Shout out to our amazing content managers right there. Now, if you could share a bit more about the reporting side, that would be amazing, because marketers now have all this data, whether it’s a lot, like the bigger companies have, or a small amount. How should they summarize it all? What kind of reports do you need in place to track your progress successfully? What metrics should you focus on? Maybe you could also mention how marketers can analyze these reports to arrive at action points.

Todd Chambers:

Yeah. So, there’s not a one-size-fits-all report, of course. I mean, it depends on the type of experiment. If you’re running an email campaign and you want more people to open up your welcome email series, then maybe you want to test the subject line. So, the metrics for that would be very different to we’re trying to push the needle on our Facebook ads, or whether you’re trying to improve the conversion rate of your demo steps. So, the metrics do really, really vary depending on the type of experiment you’ve designed. But what I would say is, it’s really just diligently tracking everything, and we have it in Google Sheets, and maybe you have more complex reports and data that you’re looking at for that specific experiment. But I think generally speaking, as long as you design all those experiments upfront, you have them in one place and you have regular meetings.

For example, for Upraw Media’s own growth, not for our clients, we have a bi-weekly meeting. We’ll join that meeting, rank the new ideas, and know what we need to launch. Then we’ll go back in and periodically check the progress of the existing experiments. One of the key things here is making sure you set a timeframe for the length of the experiment and set an end date, so when you have those meetings you can ask, “Okay, which experiments have finished now?” And once they finish, you need to analyze the results. If you failed, why did you fail? If you won, why did you win? And document those learnings, because in the beginning, you will definitely fail far more than you will win.

So, the game is really about ramping up the velocity of the experiments so you can learn faster. There’s no one-size-fits-all in terms of what metrics you should track. The best way I can put it is: make sure you always have the right plumbing in place. That means having those micro-level interactions and macro-level conversions tracked, diligently keeping everything in a spreadsheet, making sure you have a hypothesis upfront, and then revisiting, really digging in, learning, and rinsing and repeating until you get better at the process.

Anna Shutko:

Todd, thank you so much for sharing all these useful insights. I really love the frameworks, and my table is now covered in notes. If the audience would like to learn more about you, and I’m sure they would, where can they find you?

Todd Chambers:

Yeah. So, they can go to uprawmedia.com, or they can contact me on LinkedIn: Todd Chambers. I don’t think there are too many Todd Chambers in the world.

Anna Shutko:

Awesome. And do you want to mention a couple of words about your podcast?

Todd Chambers:

Yeah, sure. So, I also host The Masters of SaaS Podcast, where we interview top performers from the SaaS industry, so check it out if you’re in SaaS. We had Mikael, the CEO of Supermetrics, on, for example. Sometimes we talk about founder stories, how they started the company, the challenges, funny stories, and learnings. And sometimes we do deeper tactical case studies, where we dig into a topic like CRO or messaging. So, yeah. We’d love to get any feedback on the podcast too.

Anna Shutko:

Yes. Everyone feel free to check out Todd’s podcast. I really enjoyed the episode with Mikael. Todd, thank you so much for coming to the show today.

Todd Chambers:

Yeah. Thank you so much for having me. Pleasure and speak soon.
