
Simplified Personalization: 6 Ways to Make Your Testing Achievable & Effective

Optimization testing can be complex – with multiple stakeholders and activities in the mix, you need to know a lot about a lot when it comes to online experiments. Complexity is a common barrier for many organizations wanting to move forward with testing activities. This is a real problem, because without testing & optimization, businesses are leaving a lot on the table: revenue, brand loyalty, increased lifetime value, and a deeper understanding of their customers that helps serve them better.

Here are some ways to simplify your testing activities, leave your business stakeholders wanting more, and keep it achievable.

1. START SMALL

If you are new to testing, start small and grow as you become more comfortable. It will take time to become familiar with how testing tools work, solidify internal processes and observe what kinds of tests are going to have a significant impact.

The standard recommendation when starting out is to perform a simple split test whereby you create a variant of an existing page and change only one element (e.g. a benefit statement, call to action text, hero image). This exposes you and your stakeholders to all the different activities involved with testing, while still offering the potential to achieve meaningful results.

For instance, in one test for a client, we tested the call to action ‘Learn More’ versus ‘Buy Now’ for a new product on the company’s Home Page. Without getting too specific, if the company had left ‘Buy Now’ on the Home Page, it would have lost hundreds of thousands of dollars in sales. It turned out that visitors to the Home Page wanted to learn before buying; being asked to buy immediately was seen as too aggressive.

2. AVOID TOO MANY SIMULTANEOUS TESTS

Once you start to see the value of testing, it is not uncommon for excitement to become a key factor in decision making. This can often result in multiple tests running simultaneously, which means that tests may interact with each other. This can make analysis and deriving confident conclusions difficult.

There is no “perfect” number of tests to be running at once. The key is to avoid conflicts – situations where a test in one area could impact the performance of a test in another area. A simple example of this is if you were conducting two tests – one on your Home Page and one on a Product Page. If the Home Page test begins to drive more traffic to the Product Page, your test results for the Product Page may be impacted. Another great example (and this does happen) is when people want to perform simultaneous split tests on the same page.
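One common mitigation, sketched below under assumed details, is to assign each visitor deterministically to at most one concurrent test, so the audiences never overlap. The function names, hashing scheme, and 50/50 layer split here are illustrative, not a particular testing tool’s API:

```python
import hashlib

def bucket(visitor_id: str, layer: str, num_buckets: int = 100) -> int:
    """Deterministically map a visitor to one of num_buckets buckets.
    Hashing the layer name together with the ID keeps each visitor's
    assignment stable across sessions."""
    digest = hashlib.md5(f"{layer}:{visitor_id}".encode()).hexdigest()
    return int(digest, 16) % num_buckets

def assign_test(visitor_id: str) -> str:
    """Mutually exclusive assignment: each visitor joins at most one test,
    so the Home Page test cannot contaminate the Product Page test."""
    b = bucket(visitor_id, layer="site-tests")
    return "home_page_test" if b < 50 else "product_page_test"

print(assign_test("visitor-12345"))
```

Because the bucketing is deterministic, the same visitor always lands in the same test, and each test reads results from an audience the other test never touched.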

Conflict mitigation and avoidance in the testing world is such an important and discussion-worthy topic that it will be addressed in detail in a future article.

3. DOCUMENT YOUR TESTING STRATEGY

We always recommend that you document your testing strategy (or any strategy for that matter). This helps to establish and continually affirm that everyone is working towards a shared goal.

A useful framework comprises the following steps:

Business Problem
State the business problem your test is designed to resolve. A good starting point is to consider the business or brand goals that the test aims to improve upon.

For instance, how are we going to reduce cart abandonment rates in our checkout process?  

Research Question
Articulate what concern or specific issue you are trying to address with your test. This should be guided by your business problem. Using the example above, you may form the research question:

Why are people leaving our checkout process at the shipping detail page?

Hypothesis
People familiar with traditional market research will know the difference between the null hypothesis (no significant difference) and the alternative hypothesis (a significant difference). To simplify this stage, you can state both or just one – whatever works for your situation. The key is that you clearly state the difference you expect the test to reveal. For instance:

There is no difference between showing people all shipping options at once versus showing them the recommended method on the shipping detail page.  
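For readers who prefer the formal notation, the same statement can be written as a null and alternative hypothesis pair, where $p_{\text{control}}$ and $p_{\text{variant}}$ (symbols introduced here for illustration) are the conversion rates of the two shipping-page experiences:

$$H_0:\; p_{\text{variant}} = p_{\text{control}} \qquad H_1:\; p_{\text{variant}} \neq p_{\text{control}}$$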

Test Type
Write down your preferred testing method – for example, split test or multivariate test. This is critical for the next step.

Stimuli
Briefly describe the stimuli you plan on using, including the control page (original page) and test variants. For instance:

We will use the current version of the shipping detail page as the control and introduce a new variant that displays a preferred shipping method with a call to action for the user to click to see more shipping options.

Sample Size & Time Estimation
Properly calculating the appropriate sample size for your test (so you can feel confident about the results) traditionally required some background in statistics. However, a number of statistical tools will now perform the calculations for you. You simply need to gather a few data points, such as your existing conversion rate, preferred minimum detectable difference, and preferred level of confidence (typically 95%).

Once you have your calculated sample size for each variant, you can then calculate the estimated time duration. For instance, if you know you need 14,000 Visits per variation on the shipping detail page to see significant observable differences and that page currently receives 1,000 visits per day, it will take approximately 28 days to complete the test (assuming traffic is split 50% between each variant).  
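For readers who want to see the mechanics, here is a minimal Python sketch of that calculation using the standard normal-approximation formula for comparing two proportions. The function names and example inputs (a 5% baseline conversion rate, a one-percentage-point minimum detectable difference, 80% power) are assumptions for illustration, not figures from a specific tool:

```python
from math import ceil
from statistics import NormalDist  # standard library, Python 3.8+

def sample_size_per_variant(baseline_rate, min_detectable_diff,
                            alpha=0.05, power=0.80):
    """Approximate per-variant sample size for a two-proportion test
    (normal approximation). alpha=0.05 corresponds to 95% confidence."""
    p1 = baseline_rate
    p2 = baseline_rate + min_detectable_diff
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # two-sided test
    z_beta = NormalDist().inv_cdf(power)
    pooled = (p1 + p2) / 2
    numerator = (z_alpha * (2 * pooled * (1 - pooled)) ** 0.5
                 + z_beta * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
    return ceil(numerator / (p2 - p1) ** 2)

def days_to_complete(n_per_variant, daily_visits, num_variants=2):
    """Days needed when traffic is split evenly across variants."""
    visits_per_variant_per_day = daily_visits / num_variants
    return ceil(n_per_variant / visits_per_variant_per_day)

n = sample_size_per_variant(baseline_rate=0.05, min_detectable_diff=0.01)
print(n, "visits per variant,", days_to_complete(n, daily_visits=1000), "days")
```

Dedicated calculators will produce the same numbers; the point is that only a handful of inputs are required.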

4. USE A TESTING SCORECARD

Testing ideas often come from multiple sources, and this is especially true once you are able to demonstrate the value of your online experiments. We recommend developing a scorecard to determine the priority of different test ideas. Some useful variables to score against include:

  • Impact – What is the proposed test’s likelihood of impacting key business or brand goals?
  • Feasibility – How easy is it to implement a proposed test?
  • Extrapolation – Can the results of the test be applied to other aspects of the user experience?
  • Urgency – Are there internal or external pressures to get to market with a test (e.g. seasonality)?
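One way to make the scorecard concrete is a simple weighted score, sketched below. The weights, ratings, and idea names are illustrative assumptions, not a prescribed rubric:

```python
from dataclasses import dataclass

# Weights are illustrative; tune them to your organization's priorities.
WEIGHTS = {"impact": 0.40, "feasibility": 0.25, "extrapolation": 0.15, "urgency": 0.20}

@dataclass
class TestIdea:
    name: str
    impact: int         # each variable rated 1-5
    feasibility: int
    extrapolation: int
    urgency: int

    def score(self) -> float:
        # Weighted sum of the four ratings
        return sum(weight * getattr(self, field)
                   for field, weight in WEIGHTS.items())

ideas = [
    TestIdea("CTA copy on Home Page", impact=4, feasibility=5, extrapolation=3, urgency=2),
    TestIdea("Shipping options layout", impact=5, feasibility=3, extrapolation=4, urgency=4),
]

# Highest-priority ideas first
for idea in sorted(ideas, key=TestIdea.score, reverse=True):
    print(f"{idea.name}: {idea.score():.2f}")
```

Scoring every incoming idea against the same rubric keeps prioritization transparent when requests come from multiple stakeholders.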

5. DEVELOP A KNOWLEDGE SHARING PROCESS

To keep stakeholders engaged with and supportive of testing activities, it is useful to establish a process for sharing test results. This can include:

  • Internal email summaries sharing test results
  • A repository that houses all testing activities and outcomes
  • Retrospective meetings to discuss what went well and what needs improvement

Not only will this approach help to provide clarity on the value of testing activities, it will also encourage stakeholders to become more involved in future tests.

6. MEASURE YOUR TESTING PERFORMANCE

This does not refer to measuring test results; rather, it is about measuring your company’s testing activities themselves. Example metrics for measuring a team’s performance include:

  • Velocity – How many tests are launched within a specific time period?
  • Turnaround – How long does it take to launch a test once a hypothesis has been determined?
  • Impact – What percentage of your tests resulted in a positive difference being observed?

Use whatever metrics your team determines to be relevant, and track them to measure and optimize performance over time.
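As a minimal sketch, assuming a simple in-house test log (the record structure and dates below are invented for illustration), all three metrics can be computed in a few lines:

```python
from datetime import date

# Illustrative test log: (hypothesis ready, test launched, positive result?)
test_log = [
    (date(2024, 1, 5), date(2024, 1, 12), True),
    (date(2024, 1, 20), date(2024, 2, 2), False),
    (date(2024, 2, 1), date(2024, 2, 10), True),
]

velocity = len(test_log)  # tests launched in the reporting period
turnaround = sum((launched - ready).days
                 for ready, launched, _ in test_log) / len(test_log)
impact = sum(1 for *_, positive in test_log if positive) / len(test_log)

print(f"Velocity: {velocity} tests this period")
print(f"Average turnaround: {turnaround:.1f} days")
print(f"Impact (win rate): {impact:.0%}")
```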

What kind of steps does your organization take to simplify their testing activities?

Learn more with this on-demand webinar: Five Personalization & Testing Strategies to Upgrade your Customer Experience

Clayton Mitchell
