The goal of web analytics is to gain insight into website performance. To achieve this, we invest time and resources into analyzing KPIs and answering questions about user experience: Is the homepage performing well? How quickly are users exiting the site? How do we get them to stay longer? These data-driven insights have the potential to make a significant impact on the on-site experience and even on a company’s bottom line. However, it is easy to get caught up trying to improve a particular key metric rather than improving the customer experience as a whole. KPIs are an extremely useful tool for analyzing website performance, but they need periodic evaluation to ensure they actually represent customer satisfaction.
Voice of Customer (VOC) surveys are a great way to receive direct feedback from customers and understand what drives satisfaction. These surveys are used across business units, but in this case we are talking about VOC surveys that are served to website visitors and collect information about the on-site experience. In this post, I will go over how to analyze VOC data in your web analytics tool & how it can provide a better understanding of your KPIs.
Laying the Groundwork
The first step of this analysis is setting up a VOC survey on your website that allows users to rate their experience on a numeric scale and add comments. Next is implementing a third-party data integration in Google Analytics or Adobe Analytics. For simplicity’s sake, I won’t go into detail on these topics, but there are plenty of other blog posts with tips on these implementation steps.
Once the VOC data is integrated with your web analytics tool, you can start formulating the questions you want answered. A good place to begin is asking:
- What attributes are correlated with high satisfaction? What attributes are correlated with low satisfaction?
  - Looking at breakouts by last touch channel, device type, new vs. returning, geo (of page), and geo (of user) will provide insight into which types of users are most satisfied & reveal areas that need improvement.
- What actions are correlated with high satisfaction? What actions are correlated with low satisfaction?
  - Analyzing on-site behaviors allows you to see how specific actions impact customer satisfaction. Some examples are search, download, watch video, login, add to cart, and visiting a particular page or section. (A rough sketch of both kinds of breakouts, using exported data, follows this list.)
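To make these breakouts concrete, here is a minimal sketch in Python/pandas. It assumes you have exported visit-level data (for example via a data feed or report download) into a hypothetical file named survey_visits.csv, with one row per surveyed visit; the column names (survey_score, device_type, last_touch_channel, new_vs_returning, did_search, did_download) are illustrative, not your analytics tool’s actual field names.

```python
import pandas as pd

# Hypothetical visit-level export: one row per visit that submitted a survey.
visits = pd.read_csv("survey_visits.csv")

# Attribute breakouts: average score and response count per segment.
for dim in ["device_type", "last_touch_channel", "new_vs_returning"]:
    print(visits.groupby(dim)["survey_score"].agg(["mean", "count"]), "\n")

# Action breakouts: did visits that included a key action rate differently?
for action in ["did_search", "did_download"]:
    print(visits.groupby(action)["survey_score"].mean(), "\n")
```

Large gaps in the mean score across a breakout are your candidates for further investigation; small response counts are a reminder to treat those differences cautiously.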
Analyzing the Data
This type of analysis can be done in Google Analytics or Adobe Analytics, but the examples & screenshots in this post use Adobe (specifically Adobe Workspace, Segments, and Calculated Metrics).
Before you begin, you will want to create a few segments & calculated metrics (a rough code sketch of each follows this list):
(1) A segment that filters for visits where a survey response exists. The exact segment will depend on what method you used for implementation, but here is the one I used:
Visit where survey response exists
(2) Segments that filter for positive, neutral, and negative survey responses. Again, these will vary based on the setup of your survey, but an example is:
Visit where survey response = “5-Very Satisfied” or survey response = “4-Satisfied”
(3) A calculated metric that gives you an average customer satisfaction score.
((Survey Response = “5-Very Satisfied” X 5) + (Survey Response = “4-Satisfied” X 4) + (Survey Response = “3-Neutral” X 3) + (Survey Response = “2-Dissatisfied” X 2) + (Survey Response = “1-Very Dissatisfied” X 1)) / (Visits where survey response exists)
(4) Segments that filter for the key actions you want to analyze.
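If you want to prototype these definitions outside of your analytics tool, here is a minimal sketch of the same segments and calculated metric in pandas, again assuming the hypothetical survey_visits.csv export from above, with a survey_response column holding the raw labels (“5-Very Satisfied” through “1-Very Dissatisfied”) and an illustrative did_search flag:

```python
import pandas as pd

visits = pd.read_csv("survey_visits.csv")  # hypothetical visit-level export

# (1) Visits where a survey response exists.
responded = visits[visits["survey_response"].notna()]

# (2) Positive, neutral, and negative response segments.
positive = responded[responded["survey_response"].isin(["5-Very Satisfied", "4-Satisfied"])]
neutral = responded[responded["survey_response"] == "3-Neutral"]
negative = responded[responded["survey_response"].isin(["2-Dissatisfied", "1-Very Dissatisfied"])]

# (3) Average customer satisfaction score: weighted sum / responding visits.
weights = {
    "5-Very Satisfied": 5, "4-Satisfied": 4, "3-Neutral": 3,
    "2-Dissatisfied": 2, "1-Very Dissatisfied": 1,
}
avg_satisfaction = responded["survey_response"].map(weights).sum() / len(responded)
print(f"Average satisfaction: {avg_satisfaction:.2f}")

# (4) A segment for a key action, e.g. visits that used site search.
searched = responded[responded["did_search"] == 1]
```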
Next, create tables to see how the key attributes & behaviors impact customer satisfaction. For a high-level look, use the average customer satisfaction calculated metric.
For a more detailed look, use the positive, neutral, and negative response segments. This will allow you to make more informative data visualizations.
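As a rough illustration of what such a table contains, the same breakdown can be approximated on the hypothetical export with a cross-tab, mapping raw responses into positive/neutral/negative buckets:

```python
import pandas as pd

visits = pd.read_csv("survey_visits.csv")
responded = visits[visits["survey_response"].notna()]

# Illustrative mapping of raw survey labels into three buckets.
bucket_map = {
    "5-Very Satisfied": "positive", "4-Satisfied": "positive",
    "3-Neutral": "neutral",
    "2-Dissatisfied": "negative", "1-Very Dissatisfied": "negative",
}
responded = responded.assign(bucket=responded["survey_response"].map(bucket_map))

# Share of positive / neutral / negative responses per device type.
table = pd.crosstab(responded["device_type"], responded["bucket"], normalize="index")
print(table.round(2))
```

A stacked bar chart built from a table like this is usually easier for stakeholders to digest than a single averaged score.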
A Big Picture View of Customer Satisfaction
After building out tables and visualizations, you will have some interesting takeaways about what is driving customer satisfaction. At this point, it is important to take a step back and consider whether these findings should change how you currently measure performance.
For example, let’s say your analysis revealed that time on site was correlated with customer dissatisfaction. If we circle back to one of our initial questions – how long are users staying on site and how do we get them to stay longer – it is now clear this was not the right question to be asking. If a user is spending more time on the site, it may be because they are confused rather than because they are extremely interested in the content; therefore, time on site is not the best success metric for your site.
It is also important to think through causation. Correlation does not always mean causation, so instead of using your findings to draw firm conclusions, use them to surface more questions to ask. Let’s say we found that searching was correlated with dissatisfaction. Your first instinct may be to revamp the search results or search UX. While it’s possible that the search interface is the issue, searching could also be a symptom of an already unsatisfied customer. You can dig into this further by looking at what keywords users are searching for and at what point in the visit they are executing these searches, as in the sketch below. This will help clarify the question of causation.
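One way to pressure-test the “search is a symptom” hypothesis is to check where in the visit the first search happens and whether late-visit searches skew negative. Here is a hedged sketch over a hypothetical hit-level export (survey_hits.csv with visit_id, hit_number, event, and survey_score columns, all illustrative names):

```python
import pandas as pd

hits = pd.read_csv("survey_hits.csv")  # hypothetical hit-level export

# How far into the visit did the first search occur, as a fraction of total hits?
visit_len = hits.groupby("visit_id")["hit_number"].max().rename("total_hits")
first_search = (
    hits[hits["event"] == "search"]
    .groupby("visit_id")["hit_number"].min()
    .rename("first_search_hit")
)
summary = pd.concat([visit_len, first_search], axis=1).dropna()
summary["search_position"] = summary["first_search_hit"] / summary["total_hits"]

# Compare satisfaction for visits that searched early vs. late in the visit.
scores = hits.groupby("visit_id")["survey_score"].first()
summary = summary.join(scores)
print(summary.groupby(summary["search_position"] > 0.5)["survey_score"].mean())
```

If late searches rate markedly worse than early ones, that supports the symptom theory; if early searches rate just as poorly, the search experience itself deserves a closer look.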
Next Steps
After this initial deep-dive analysis, you will walk away with some valuable insights as well as many actionable questions. The next step is making optimizations in areas that are clear-cut and running tests in areas that are more ambiguous.
Continue to use the custom segments & calculated metrics that you created for this analysis in your ongoing reporting to keep an eye on customer satisfaction.