Marketing has been around for quite some time, but over the last couple of decades, it has been completely revolutionized. There’s a lot less guesswork, and we can much more easily discern which marketing tactics succeed and which ones flop.
These days, every marketing operation has to be able to attribute its efforts to results. What this means is more accountability for marketing teams. Not enough leads created? We’ve got a problem. Contact list too small? Problem.
The good news is, while marketers are being held more accountable, they now have better ways to measure which of their tactics work and which don’t. In the past, the answer to “why aren’t we generating enough leads?” may have been “I don’t really know.” Now the reason behind lackluster results can often be pinpointed.
One of the ways for marketers to best optimize their work is with A/B testing.
What is A/B testing: A high-level overview
A/B testing is a simple concept. Sometimes called “split testing,” it’s the practice of putting out two versions of a user interface—like a web page or product design—to see which one performs better.
Half of your visitors will see one version and the other half will see the other. You can then use the data from all those visits to determine which version of your UI performs better.
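Under the hood, most tools make that split deterministic, so a returning visitor keeps seeing the same version. Here’s a minimal sketch of the idea in Python, assuming you identify visitors with a stable ID like a cookie (the function name and the 50/50 split are illustrative, not any particular product’s API):

```python
import hashlib

def assign_variant(visitor_id: str, experiment: str) -> str:
    """Deterministically assign a visitor to variant A or B."""
    # Hash visitor + experiment together so the same visitor always sees
    # the same variant, and different experiments split independently.
    digest = hashlib.sha256(f"{experiment}:{visitor_id}".encode()).hexdigest()
    return "A" if int(digest, 16) % 100 < 50 else "B"

print(assign_variant("visitor-123", "homepage-headline"))
```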
What to test
One of the tough parts when setting out to A/B test is determining what you should test. There are tons of interfaces you can test, and within those interfaces, there are countless variations you can try. So, when starting out, what do you test?
Establish goals
In order to make sure you’re testing something worthwhile, you need to figure out what it is you want to accomplish with your split tests. Before you test anything, you need to be able to answer the question “why are we testing this specific component versus another?”
So, your tests should be inspired by high-level goals within your organization. If you’re a marketer looking to build up your company’s contact lists, test different versions of an email sign-up form. Building a robust contact list folds into marketing’s higher-level goal of generating highly qualified leads.
It’s crucial that your tests have some sort of hard number behind them for evaluation. If you run a test and then base your decision on personal preference, you’re doing something wrong. The results of your test should be backed by hard data and, assuming more than marginal differences, should do most of the decision-making for you.
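“Hard data” in practice usually means a significance test on the two conversion rates. Here’s a rough sketch, assuming you’ve collected visitor and conversion counts for each variant; it’s a standard two-proportion z-test, not the method of any particular tool, and the numbers are hypothetical:

```python
from statistics import NormalDist

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Return the z-score and two-sided p-value for two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)          # combined conversion rate
    se = (pooled * (1 - pooled) * (1 / n_a + 1 / n_b)) ** 0.5
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))      # two-sided
    return z, p_value

# Hypothetical results: 120/2,400 conversions for A, 160/2,400 for B.
z, p = two_proportion_z_test(120, 2400, 160, 2400)
print(f"z = {z:.2f}, p = {p:.4f}")  # p below 0.05 suggests a real difference
```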
A table in Dan Siroker and Pete Koomen’s book A/B Testing: The Most Powerful Way to Turn Clicks into Customers serves as a nice guide for what your goals might be depending on the type of site you have.
How to prioritize
You might be thinking: “we have so many goals, you haven’t really narrowed it down much.” That is a valid point.
It’s smart to prioritize based on ROI. Testing high-ROI factors will, obviously, provide you the best return on your testing and serve as a justification for further testing on other factors.
For example, if you run a blog, two of your success metrics might be total page views and free trial sign-ups. Now, page views are great, but a bump in page views carries no direct ROI (unless you run ads on your blog or promote products as an ambassador).
Trial sign-ups tie more directly to revenue than page views do, so a successful test on your trial button will be more valuable than a successful test on, say, different variations of a blog post headline.
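To make that concrete, here’s a back-of-the-envelope comparison. Every number below is made up purely for illustration:

```python
# Hypothetical monthly figures; swap in your own analytics numbers.
monthly_page_views = 50_000
value_per_page_view = 0.01   # ad revenue per view, if you run ads at all
monthly_trial_signups = 500
value_per_signup = 40.0      # average revenue a trial eventually produces

lift = 0.10  # assume either test wins by 10%

headline_gain = monthly_page_views * lift * value_per_page_view
button_gain = monthly_trial_signups * lift * value_per_signup

print(f"Headline test: ~${headline_gain:,.0f}/month")  # ~$50/month
print(f"Button test:   ~${button_gain:,.0f}/month")    # ~$2,000/month
```

Even with a generous value per page view, the trial-button test dwarfs the headline test, which is exactly why it should run first.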
You’ll want to run the more impactful test first, not only because it’s best for your business, but also because you can use it as a proof of concept for anyone in your organization who questions the value of A/B testing.
Avoid the HiPPO syndrome
It’s crucial you determine your goals and how to prioritize them before you get started. The reason: people are rarely able to predict which design will be effective, and if you don’t have goals established, you risk too much subjectivity in your decision-making process.
HiPPO stands for “Highest Paid Person’s Opinion,” and as an organization, you want to be very wary of it. Organizations that lack objectivity in their testing often default to going with the HiPPO, which is not a good strategy.
However, if you have clear goals that are based on hard metrics before beginning, you’ll have a much easier time staying objective. It’s hard for anybody, no matter how much money they make, to overrule clear numbers (though I’m sure someone can tell me a story about that one boss).
Principles for A/B testing
Avoid refinement too early
When A/B testing, you are eventually going to get to the point of refining a design to the very best option, but you need to make sure not to do this too soon. Rather than refining what you already have when you get going, think big and experiment with a broad range of ideas.
Let’s say you have a web page laid out one way. If you were to start by refining it, you’d eventually get to the best version of that layout. Good, right? Wrong.
There’s a distinct possibility that the layout, even at its best, can’t match the potential of a completely different layout. In optimization terms, refining too early climbs you to a local maximum while a higher peak may sit somewhere else entirely.
When Isaac Newton was a child, he jousted. Imagine if he spent his whole life refining his jousting ability, never trying out different activities. He might’ve eventually become a pretty good jouster, maybe even great.
Thankfully, he didn’t spend his whole life with his head buried in a jousting book. Instead, he tried some wildly different activities, like mathematics, and became known as one of the greatest minds to ever live.
Okay, I don’t know if Newton ever jousted. I made that part up to illustrate a point: if you start refining too early, you might miss out on a high-potential option because you didn’t spend enough time trying radically different things at the start.
Not always about addition
When coming up with different variations to test, you may be tempted to add, add, add. This is not always the best idea. Simplicity goes a long way, and by making additions, you might just be creating complexity.
If you’re coming up with variations, think “what can I subtract from the original?” before you get to “what can I add?”
In A/B Testing: The Most Powerful Way to Turn Clicks into Customers, Siroker and Koomen outline a case study perfectly illustrating this concept. They tell the story of a test run by Cost Plus World Market in which the retailer hid the promo code and shipping options form fields from the last page in the checkout funnel. By hiding those fields and turning them into expandable links, they saw a 15.6% increase in revenue per visitor. Conversions went up by 5.2% as well. Those are some big numbers.
The book includes an image of the test. It’s amazing how a little change can have such an impact.
Failure is not always a bad thing
When you’re split testing, every failure should inform you of something. With rose-colored glasses on, you might get the idea that every variation is going to result in a good outcome. The truth is, a lot of them don’t. When they don’t, you shouldn’t pack up your bags and head home, but rather dig in and look for answers.
Let’s say you run a test altering your website’s homepage. Your primary goal with the change is to increase conversions from first-time visitors. It doesn’t take long for you to see that the proposed change isn’t working: conversions are down on the new homepage.
However, upon some digging, you find out that returning visitors are staying on your website longer. Something about the new UI is having this effect.
Now you’ve learned that first-time visitors and returning visitors interact with your site differently, and you have the tools to maximize the experience for both segments. Sure, this test didn’t bring about the desired results, but it wasn’t fruitless.
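This kind of digging is often just a group-by over the raw visit log. A quick illustration in Python, with invented column names and toy data:

```python
import pandas as pd

# A toy per-visit log from the test; columns are invented for illustration.
visits = pd.DataFrame({
    "variant":         ["A", "A", "B", "B", "B", "A"],
    "visitor_type":    ["new", "returning", "new", "returning", "new", "new"],
    "converted":       [1, 0, 0, 0, 0, 1],
    "seconds_on_site": [35, 80, 30, 210, 25, 40],
})

# Break the headline numbers down by segment: in this toy data, variant B
# hurts new-visitor conversions but keeps returning visitors around longer.
summary = visits.groupby(["variant", "visitor_type"]).agg(
    conversion_rate=("converted", "mean"),
    avg_seconds=("seconds_on_site", "mean"),
)
print(summary)
```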
However, not all tests are going to provide such an obvious silver lining. Let’s say after digging in, you’ve found nothing about the new design that’s working. You might have to scrap the new design altogether, but use the experience as a justification for further testing.
And, when you conduct further testing, use past results to inform those tests. You can always use lessons learned from previous tests to generate hypotheses for future tests.
Without the benefit of testing, your organization might have made the switch blindly, never knowing it was hurting results. That would’ve been a killer for business.
How to A/B test
So you know some split-testing rules to live by, and you have a testing plan.
But what about the logistics? How do you actually do it?
Well, there are several ways. You can create your own tool, you can purchase one from a software vendor, or you can hire an agency that takes care of everything for you.
Let’s go over those three options and see which one is best for you.
Create your own
The truth is, I can’t be of much help here. Creating your own A/B testing tool is an engineering-intensive endeavor. So, if you are considering this option, here’s step one: go to your head of engineering and ask how feasible it would be to build an A/B testing tool in house.
Don’t have an engineering team? Then we’re done here. Either buy a tool or hire an agency.
Purchase a tool
There is no shortage of A/B testing tools available for you to purchase. These tools will vary in usability, but even if you’re not a whiz, you can probably find something simple enough to use.
There are two main questions that you should ask yourself before buying a tool.
Does it work well with the software you already use?
You need to make sure the tool you purchase will integrate with your content management system, any other analytics tools you use, your e-commerce platform, and really anything else you think might be relevant.
An A/B testing tool doesn’t stand alone, so do your research before making a decision.
Does it fit in your budget?
This one is kind of obvious, but it needs asking. Tools will vary in their cost, so you need to make sure the one you get is worth it. Just as you prioritize what to test by ROI, this decision should also be based on ROI. Given the features offered, your implementation of them, and the cost of the tool, will the benefit outweigh the cost?
You’d be surprised at how little that cost is, by the way. It’s not hard to get your hands on a tool with A/B testing functionality for cheap. Our own email campaigns, for example, include split testing that’s incredibly easy to use.
Hire an agency
This is probably the easiest way to go about split testing, but it also comes with risk. When you hire somebody to take the reins, you are, more or less, surrendering them. You’re going to need to ask yourself the same questions regarding budget as you would if you were buying a tool, but this method also means asking a lot of questions about the provider you plan to hire.
Here are some good questions to ask before making a decision:
- How much oversight will you have during the process?
- What is the agency’s track record?
- Do they have proof of success in past conversion optimization projects?
- What kind of reporting and analytics do they provide after the test is complete?
Communication
This might not be filed under “logistical,” but communication is crucial when split testing for two reasons.
Reason #1: It keeps you organized
Depending on the size of your organization, you may be testing several things at once. And it’s possible that some of those tests will impact each other.
For example, let’s say you’re testing the email sign-up form on your homepage while also testing a different meta description for that same page. You might see a boost in email subscriptions, but that boost could come either from the new form or from increased clickthrough driven by the new meta description.
The best way to solve this is by appointing a testing lead. This person’s responsibility is to oversee and manage all the testing that you are doing to your site. Not only does this prevent the potential of multiple tests impacting each other, but it also gives your organization a go-to person for any testing-related questions.
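On the technical side, one way a testing lead might keep concurrent tests from colliding is mutually exclusive bucketing: split visitors into non-overlapping experiment groups first, then assign a variant within each group. A hypothetical sketch, extending the earlier assignment idea:

```python
import hashlib

EXPERIMENTS = ["signup-form", "meta-description"]

def _bucket(key: str, buckets: int) -> int:
    return int(hashlib.sha256(key.encode()).hexdigest(), 16) % buckets

def assign(visitor_id: str) -> tuple[str, str]:
    # Each visitor lands in exactly one experiment, so a lift in
    # sign-ups can't be caused by the other, unseen test.
    experiment = EXPERIMENTS[_bucket("layer:" + visitor_id, len(EXPERIMENTS))]
    variant = "A" if _bucket(f"{experiment}:{visitor_id}", 2) == 0 else "B"
    return experiment, variant

print(assign("visitor-123"))
```

The trade-off is that each test only sees half your traffic, so it takes longer to reach significance.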
Reason #2: Clarifies the value of testing
Some organizations don’t have 100% buy-in when it comes to testing. There’s no doubt it has value, but someone unfamiliar with it may see it as unnecessary. By clearly communicating the results of your tests, you can back up their value with hard numbers.
You may not be able to sell someone in your organization on designing a page multiple times, but there’s no argument against better results.
Never stop testing
Chances are, you’ll never find the best possible outcome. There are countless things you can test, and in almost all cases, something can be improved.
This being the case, it’s crucial to realize that A/B testing is a never-ending journey. Something that works great one year might see diminishing returns the next year.
So, develop a long-term testing plan that helps your business see continued results. As my corny high school basketball coach used to say: “The biggest room in the world is the room for improvement.”