Conversion Rate Optimization Tips: How to Prioritize Tests

You know your business goals. You’ve run the audience research and developed great hypotheses. You’re all set to test, but you have too many tests to choose from. How do you decide which ones to prioritize?

When prioritizing, it helps to consider:

  • Impact
  • Confidence
  • Scope
  • Cost
  • Timeline
  • Dependencies
  • Risk
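
Each of these factors is explained below. If you want to turn them into a ranked backlog, one common approach is a simple weighted score, in the spirit of ICE- or PIE-style frameworks. The sketch below is a hypothetical illustration of that idea; the 1-10 scales, the factor groupings, and the formula are my assumptions, not a standard, so tune them for your own team.

```python
# A minimal, hypothetical prioritization score in the spirit of
# ICE-style frameworks. The 1-10 scales and the formula are
# assumptions; adjust them for your own team.
from dataclasses import dataclass

@dataclass
class TestIdea:
    name: str
    impact: int      # 1-10: estimated lift if the test wins
    confidence: int  # 1-10: how strong the supporting data is
    effort: int      # 1-10: scope, cost, and timeline rolled together
    risk: int        # 1-10: brand, UX, and political downside

    def score(self) -> float:
        # Reward impact and confidence; penalize effort and risk.
        return (self.impact * self.confidence) / (self.effort + self.risk)

ideas = [
    TestIdea("Headline rewrite", impact=3, confidence=7, effort=1, risk=1),
    TestIdea("Product page redesign", impact=8, confidence=5, effort=8, risk=4),
    TestIdea("Purchase flow rework", impact=9, confidence=8, effort=9, risk=5),
]

for idea in sorted(ideas, key=lambda i: i.score(), reverse=True):
    print(f"{idea.score():5.2f}  {idea.name}")
```

A score like this is a tiebreaker, not an oracle. In particular, dependencies (covered below) can override the ranking.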

Impact

Impact is an estimate of how much a winning test will increase your conversions. It’s usually what people think of first when prioritizing tests.

You won’t know the impact of a test before you run it. If you knew that, then you wouldn’t need to run tests. But you can make an educated guess based on what you’re testing and its influence on conversions.

High-impact tests are more attractive than low-impact ones because they move the business forward faster.
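
As a back-of-the-envelope illustration, you can rough out impact from the traffic the page gets, its current conversion rate, and the lift you guess the change might produce. All the numbers below are hypothetical:

```python
# Hypothetical back-of-the-envelope impact estimate.
# Every number here is a made-up assumption, not measured data.
monthly_visitors = 50_000      # traffic reaching the tested page
baseline_conversion = 0.03     # current conversion rate (3%)
assumed_relative_lift = 0.10   # guess: the variant wins by 10%

extra = monthly_visitors * baseline_conversion * assumed_relative_lift
print(f"Estimated extra conversions per month: {extra:.0f}")  # -> 150
```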

Confidence

Confidence is a measure of how certain you are that the test will succeed.

Confidence is based on the amount of data you have to support the test idea.

Maybe you have a lot of data suggesting the change is going to work, in which case you’d have high confidence in the test.

On the other hand, maybe it’s a hunch. The data suggests this might be a good idea, but it’s weak data and you don’t really know.

Depending on where you work, it can be okay (and sometimes necessary) to run low-confidence tests. But generally speaking, it’s a good idea to mix in high-confidence tests so you avoid a string of poor outcomes.

Scope

Scope is an estimate of how much work is needed to complete a project.

A headline test is usually a low-scope test. You might be able to write the new headline and set up the test by yourself.

At the other extreme, a total product page redesign is likely to have a high scope because you need a designer, a copywriter, a developer, and QA. Likewise, completely reworking a purchase flow is often a very high-scope project.

For most conversion rate optimizers, resources are limited. It’s a good idea to preserve resources for the tests that really matter. A scope estimate can help you do that.

Cost

One way to measure cost is by estimating how much money is required for a project. Costs can include purchasing software or hiring specialist consultants.

Cost can also refer to opportunity cost: if you run this test, what other projects will be delayed, and for how long? There are usually more projects than resources, so this is a common problem.

Timeline

How long will it take to get the project live?

If it takes a very long time to get the project live, it may be worth adapting or abandoning it. 

With a very long timeline, business needs, requirements, and the competitive landscape might shift enough that your test is no longer relevant by the time it launches.

Dependencies

Dependencies between pages and sections on the site can also influence priority lists.

Suppose you were optimizing a site with poor product pages and a terrible purchase flow. Which do you fix first?

Your best strategy would probably be to fix the purchase flow first. 

If you improve the product page first, then the test results are likely to be depressed by the poor purchase flow. Some percentage of users will simply give up when confronted with a terrible purchase flow. 

In this case, it will be harder to get an accurate read on the impact of the product page changes. In extreme cases, it might even appear that there was no improvement.

But if you improve the purchase flow first, then you can be confident that the purchase flow isn’t artificially depressing your results.

In this case, the product page changes are dependent on fixing the purchase flow.

While I’ve simplified the situation in this example, the logic holds true. A site is a collection of pages that work together. Consider dependencies between pages and site sections when building your priority list.
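
One way to make the “depressed results” problem concrete is statistical power. When the downstream purchase flow is broken, end-to-end conversions are rare, and rare conversions make the same relative lift much harder to detect. Here’s a rough sketch using the standard two-proportion sample-size approximation; the conversion rates are made up for illustration:

```python
# Rough per-variant sample size needed to detect a 10% relative lift
# at 5% significance and 80% power, using the standard two-proportion
# approximation. The conversion rates below are hypothetical.
from statistics import NormalDist

def sample_size_per_variant(p1: float, relative_lift: float,
                            alpha: float = 0.05, power: float = 0.80) -> int:
    p2 = p1 * (1 + relative_lift)
    z = NormalDist().inv_cdf(1 - alpha / 2) + NormalDist().inv_cdf(power)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return round(z ** 2 * variance / (p2 - p1) ** 2)

# Broken checkout: only 2% of visitors convert end to end.
print(sample_size_per_variant(0.02))  # ~80,700 visitors per variant
# Fixed checkout: 10% of visitors convert end to end.
print(sample_size_per_variant(0.10))  # ~14,750 visitors per variant
```

With the broken flow you’d need roughly five times as much traffic to detect the same relative improvement, which is one more reason to fix the purchase flow first.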

Risk

Every site change has risks. Depending on the change, the risk can be anything from minor to serious.

Before you launch any change, consider the possible negative impacts. For instance, could the test versions negatively impact your audience’s perception of the company?

Also consider any internal political risks. If the test fails, will that be a problem? If it is going to be a problem, is it worth running a few “safer” tests first?

Bonus: What will you learn?

I’ve recently added “What will I learn?” to my list when thinking about prioritization.

Testing teaches you about your audience. If a test can teach you something valuable about your audience’s motivations or goals, it’s worth considering that when prioritizing, especially if it answers a key question that could alter your testing roadmap.