How is this data collected?
All questions and responses are tied to training content your learners have completed. Only questions that meet the response thresholds will appear in score calculations.
Why are all my scores reported as percentiles?
All scores shown in your analytics dashboard are benchmarked percentiles versus the other clients with data for that question/Culture Skill/Competency during the previous 365-day period. Emtrain uses percentiles in lieu of raw scores because they provide a standardized way of comparing performance across different groups or populations. This gives an organization more context, so it can better understand its overall health. For example:
- Consider Client A, where 90% of learners chose "Slightly Agree" for the question I feel respected in my workplace. Without additional context, that client might assume their organization is in a very good place, since most people somewhat agreed with the sentiment.
- However, after benchmarking against other clients responding to the same question, this client may actually be in the 26th percentile, since other clients had a larger percentage of learners responding "Agree" or "Strongly Agree" to I feel respected in my workplace.
- In this case, while Client A may have a healthy workplace – after all, 90% of people in slight agreement is no small feat – their percentile shows they still have room to grow and improve, given the experiences of learners at other organizations.
Percentiles are commonly used in education, employment, and other areas where comparisons of performance or achievement are necessary.
How do I interpret a percentile?
A percentile represents the percentage of individuals who scored below a particular score or achievement level in a given group or population. For example, if a client scored in the 80th percentile for a particular question, it means their learners' responses were healthier than 80% of the clients in Emtrain's portfolio that answered this question during the previous 12-month period.
A higher percentile indicates a higher level of performance or achievement relative to the group or population being compared.
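The percentile interpretation above can be sketched in a few lines of Python. This is an illustrative calculation only (the function name and the strictly-below counting rule are assumptions based on the definition given, not Emtrain's actual scoring code):

```python
def percentile_rank(client_score, all_scores):
    """Percentage of clients whose score falls strictly below this client's score."""
    below = sum(1 for s in all_scores if s < client_score)
    return 100 * below / len(all_scores)

# Hypothetical healthy-response rates for five clients on one question
scores = [0.40, 0.55, 0.60, 0.70, 0.85]
print(percentile_rank(0.70, scores))  # 60.0 -> healthier than 60% of clients
```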
How much data do I need to show my scores?
For a question to populate, each threshold must be met during a 12-month period:
- At least 50+ learners (if the organization has fewer than 200 total learners) or 100+ learners (if the organization has 200 or more total learners) have answered a particular question
- 30+ companies meeting the first criterion have answered that question (in order to benchmark)
For a Culture Skill to populate, each threshold must be met during a 12-month period:
- 2 or more questions associated with that Culture Skill must be populating
- These questions must have enough responses to meet the question thresholds above
For a Competency to populate, each threshold must be met during a 12-month period:
- 4 or more questions associated with that Competency must be populating
- These questions must have enough responses to meet the question thresholds above
- 2 or more Culture Skills must be represented, even if they are not fully populating (e.g., if the Culture Skill "Mitigating Bias" only has 1 question populating, it will not show as a Culture Skill; however, it will still contribute to the required threshold of 2 Culture Skills being represented for populating the Respect Competency).
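The thresholds above can be summarized as simple boolean checks. This sketch is illustrative only; the function names and signatures are assumptions, not part of Emtrain's product:

```python
def question_populates(learner_responses: int, org_size: int, clients_with_data: int) -> bool:
    """A question needs 50+ responses (<200-learner orgs) or 100+ (200+), plus 30+ benchmark clients."""
    min_learners = 50 if org_size < 200 else 100
    return learner_responses >= min_learners and clients_with_data >= 30

def culture_skill_populates(populating_questions: int) -> bool:
    """A Culture Skill needs 2+ populating questions."""
    return populating_questions >= 2

def competency_populates(populating_questions: int, culture_skills_represented: int) -> bool:
    """A Competency needs 4+ populating questions across 2+ represented Culture Skills."""
    return populating_questions >= 4 and culture_skills_represented >= 2

# A 150-person org with 50 responses and 30 benchmark clients: the question shows
print(question_populates(50, 150, 30))   # True
# The same response count at a 250-person org falls short of the 100-learner bar
print(question_populates(50, 250, 30))   # False
```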
Wait, so if Emtrain doesn't have enough other client responses, I can't see my scores for certain things?
Unfortunately this is the tradeoff with benchmarking. In order to be able to make comparisons, we need other clients with which to compare. As a result, scores do not show unless 30 clients have qualifying data. The good news is – as we grow our client set and identify our priority question set, this should become less of a problem in the future.
What do you do about those negatively worded questions? It makes your percentiles confusing!
You're right, our questions are not always framed the same way. And we recognize that percentiles can be a little counterintuitive when the question is negatively-phrased. So how do we treat the response data? We flip the responses for negatively worded questions before scoring them.
- For positively-worded questions (e.g., I feel respected in my workplace), a response of Strongly Agree (7), Agree (6), or Slightly Agree (5) is considered "Healthy."
- For negatively-worded questions (e.g., In-group/out-group dynamics cause conflict in my workplace), a response of Strongly Disagree (1), Disagree (2), or Slightly Disagree (3) is considered "Healthy."
For example, with In-group/out-group dynamics cause conflict in my workplace, we would want to see a large volume of learners selecting Strongly Disagree and Disagree, and NOT Strongly Agree or Agree.
Regardless of how the question is worded, the 100th percentile will always represent the best score. Clients in the top percentiles have a high percentage of learners responding with the healthiest answer option.
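The response-flipping described above amounts to reversing the 1-7 agreement scale for negatively worded questions before applying the same "Healthy" cutoff. A minimal sketch, assuming a 1-7 Likert scale where 5+ counts as healthy (the function and threshold names are illustrative, not Emtrain's actual code):

```python
HEALTHY_THRESHOLD = 5  # Slightly Agree (5), Agree (6), Strongly Agree (7)

def is_healthy(response: int, negatively_worded: bool) -> bool:
    """Flip negatively worded responses on the 1-7 scale, then apply one cutoff."""
    score = 8 - response if negatively_worded else response
    return score >= HEALTHY_THRESHOLD

# "I feel respected in my workplace" (positive): Agree (6) is healthy
print(is_healthy(6, negatively_worded=False))  # True
# "In-group/out-group dynamics cause conflict..." (negative): Disagree (2) is healthy
print(is_healthy(2, negatively_worded=True))   # True
# Agreeing with the negative statement is not healthy
print(is_healthy(6, negatively_worded=True))   # False
```

Flipping with `8 - response` maps Strongly Disagree (1) to 7, Disagree (2) to 6, and Slightly Disagree (3) to 5, so the disagreement side of a negative question lands in the same "Healthy" band as the agreement side of a positive one.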
But I have so many new hires, so this data isn't very helpful.
Your new hires perceive more than you know, based on both their early interactions and their interview process. Just as you would include new hires in your routine employee engagement surveys, their voices matter in the data collected here because they contribute to a holistic understanding of your organizational health. That said, we also expect scores for some groups to change after 90 days. To understand the data better and identify next steps, please reach out about adding segmentation through our Premium Analytics offering.
Can we see our industry-specific benchmarks?
Yes! Take a look at the HR & People Risks or Business Compliance Risks tabs. These pages are laid out differently, but they allow for industry-specific comparisons.
How does this over time graph work?
Glad you asked. Each point on the line graph represents one year's worth of data - i.e., all the responses collected from your learners in the 365-day period preceding that specific point on the line. These responses are aggregated and benchmarked against all other clients with enough data for those same questions/Culture Skills/Competencies.
In this way, the benchmark does change on a daily basis - as new learners and clients are added, and old learners and clients fall out of having enough data to meet the required thresholds. That said, many of our questions have a large enough client-base that the benchmark remains fairly stable. In these cases, large changes in scores can generally be attributed to changes in responses from the individual client and not changes to the overall benchmark.