Experience Score is a metric available across EdgeTier. The score aggregates different signals from your interactions, such as the amount of frustration, praise, and gratitude detected in customer messages.
<aside>
🗣️
An Experience Score quantifies the overall quality of a user or customer experience. In this case, it quantifies the overall quality and ease of your customer interactions on EdgeTier, allowing you to visualise the score and changes in it over time.
</aside>
Key Information
- Experience Score is based on qualitative data as well as quantitative, so its calculation is not a simple formula. The final "score" comes from a machine-learning model. The model is trained on past interactions where the true NPS (or a labelled tag like good/neutral/bad) is known. The model identifies which signals in those interactions are most predictive of satisfaction and then uses that to estimate an NPS for new interactions. Unlike a formula with fixed weights, it automatically learns the relative importance of different signals within each interaction.
- Net Promoter Score (NPS) is another metric stored on your WatchTower, and it is typically used in these calculations where available. NPS is taken from your interactions, and is usually the response to a survey question such as "How likely are you to recommend [your business] to a friend?".
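As a sketch of how surveyed and estimated NPS can sit side by side, the snippet below prefers a real survey response and falls back to a model estimate otherwise. The `predict_nps` stub, the placeholder value it returns, and the field names are hypothetical, invented for illustration only:

```python
# Hypothetical sketch: prefer a real survey response where one exists,
# otherwise fall back to the model's estimated NPS for the interaction.

def predict_nps(interaction: dict) -> float:
    """Stand-in for the trained model; returns an estimated NPS (0-10).
    A real model would score the interaction's detected signals; here we
    return a fixed placeholder purely for illustration."""
    return 7.5

def nps_for(interaction: dict) -> float:
    """Use the surveyed NPS when the customer answered, else the estimate."""
    surveyed = interaction.get("survey_nps")
    return surveyed if surveyed is not None else predict_nps(interaction)

print(nps_for({"survey_nps": 9}))     # surveyed answer wins
print(nps_for({"survey_nps": None}))  # no survey: model estimate
```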
Calculation
- Not a formula: The Experience Score isn't calculated from a simple set of weights ("x% sentiment + y% resolution"). Instead, it's a machine-learning model trained to recognise patterns in thousands of past interactions.
- Predictive, not prescriptive: The model predicts a customer's likely satisfaction based on signals in the interaction. It fills the gaps where surveys aren't completed, effectively giving its own "estimated" NPS for every interaction.
- Automatically learned weights: Rather than a preset importance of each metric, the algorithm works out which signals best predict the true NPS, and by how much.
- Labelled data: Labelled data means interactions with a real survey score attached, or manual "good/neutral/bad" tagging. That's what lets us train a custom model to learn what a "good experience" means per account.
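The steps above can be sketched in miniature: train on labelled interactions where the outcome is known, then estimate a score for a new, unsurveyed one. This uses scikit-learn as a stand-in for the real model; the signal names and toy labelled data are invented for illustration, not EdgeTier's actual features:

```python
# Illustrative sketch only: learn which interaction signals predict a
# known outcome, then score an unlabelled interaction. Assumes scikit-learn.
from sklearn.ensemble import RandomForestRegressor

# Each row holds hypothetical signals [frustration, praise, gratitude]
# detected in one past interaction; y_train is the labelled outcome
# (a real NPS response rescaled to 0-1).
X_train = [
    [0.9, 0.0, 0.0],  # frustrated customer -> poor experience
    [0.1, 0.8, 0.7],  # praise and gratitude -> good experience
    [0.5, 0.2, 0.1],
    [0.0, 0.9, 0.9],
]
y_train = [0.1, 0.9, 0.4, 1.0]

model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# A new interaction with no survey: the model fills the gap with an estimate.
estimate = model.predict([[0.2, 0.6, 0.5]])[0]
print(f"estimated score (0-1 scale): {estimate:.2f}")
```

A forest's prediction is an average over the labelled outcomes it saw, so the estimate stays on the same 0–1 scale as the training labels.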
Not all metrics are weighted equally. For example, signals like frustration typically carry a higher weighting in the model. If you decide to train a custom model, we can analyse which features are the most important based on your data.
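As a hedged sketch of how that analysis might look, tree-based models in scikit-learn expose a `feature_importances_` attribute that ranks how much each signal contributed; the signal names and toy data below are invented for illustration:

```python
# Illustrative only: inspect which signals a fitted model leans on most.
from sklearn.ensemble import RandomForestRegressor

signals = ["frustration", "praise", "gratitude"]  # hypothetical feature names
X = [[0.9, 0.0, 0.0], [0.1, 0.8, 0.7], [0.5, 0.2, 0.1], [0.0, 0.9, 0.9]]
y = [0.1, 0.9, 0.4, 1.0]  # labelled outcomes on a 0-1 scale

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# feature_importances_ sums to 1.0; higher means more predictive of the label.
ranked = sorted(zip(signals, model.feature_importances_),
                key=lambda pair: pair[1], reverse=True)
for name, importance in ranked:
    print(f"{name}: {importance:.2f}")
```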
The model uses a 0–1 output scale. To map the final result to the 0–10 Experience Score, just multiply by 10.
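That mapping is a simple rescaling, for example (the function name and rounding to one decimal place are illustrative choices, not EdgeTier's exact implementation):

```python
def experience_score(model_output: float) -> float:
    """Map the model's 0-1 output onto the 0-10 Experience Score scale."""
    return round(model_output * 10, 1)

print(experience_score(0.73))  # -> 7.3
```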