Clients that have purchased 'Standard' or 'Premium' Analytics will see the Analytics feature on the left-hand panel. Under the Analytics section, you will see the Overview tab. Please contact support@emtrain.com for any questions or to learn more about our analytics packages.
Where does the Attrition Barometer come from?
Historical client and learner data are used to predict 6-month attrition rates. Rates are based on social dynamics in the workplace as reported by learners who have taken Emtrain programs and lessons. It is powered by a machine-learning model, hereafter referred to as the Attrition Model. For best results, continuously deploy lessons.
What is this Attrition Barometer actually predicting and how?
Let’s start with a very high-level overview of what is happening:
- An Emtrain learner takes a lesson and answers a question. Any lesson, any question… so long as the question is one of the many Likert questions currently meeting minimum historical response thresholds.
- Historical learner responses are aggregated to create a series of model features designed to quantify a learner’s experience of the workplace.
- These model features are used by the Attrition Model to predict how likely the responding learner is to exit the company within the next 6 months.
- Scored responses from the latest 6 months are aggregated and the percent of learners at risk of attrition is reported in the client’s Attrition Risk Barometer.
But some of your questions are positively-phrased and some are negatively-phrased...what do you do about that?
You're right, our questions are not always framed the same way. So how do we treat the response data?
For positively-worded questions (e.g., I feel respected in my workplace), a response of Strongly Agree (7), Agree (6), or Slightly Agree (5) is considered "Healthy."
- In these cases, Strongly Agree would be a very healthy response and Slightly Agree would be a slightly healthy response.
For negatively-worded questions (e.g., In-group/out-group dynamics cause conflict in my workplace), a response of Strongly Disagree (1), Disagree (2), or Slightly Disagree (3) is considered "Healthy."
- In these cases, Strongly Disagree would be a very healthy response and Slightly Disagree would be a slightly healthy response.
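As a minimal sketch, the healthy/unhealthy split above could be coded like this. It assumes a 7-point Likert scale coded 1 (Strongly Disagree) through 7 (Strongly Agree) and a known per-question polarity; the function name and structure are illustrative, not Emtrain’s actual implementation:

```python
def is_healthy(response: int, polarity: str) -> bool:
    """Return True if a 1-7 Likert response counts as 'Healthy'."""
    if polarity == "positive":    # e.g., "I feel respected in my workplace"
        return response >= 5      # Slightly Agree (5) through Strongly Agree (7)
    if polarity == "negative":    # e.g., "In-group/out-group dynamics cause conflict"
        return response <= 3      # Strongly Disagree (1) through Slightly Disagree (3)
    raise ValueError(f"unknown polarity: {polarity}")

# Note: a neutral response of 4 is healthy under neither polarity.
```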
What inputs does the Attrition Model use?
The Attrition Model currently comprises 12 inputs. These inputs primarily revolve around a learner's response pattern over the previous 6-month period, using the definitions of "Healthy" responses described above.
The Attrition Model determines the frequency with which individuals responded with very healthy, slightly healthy, slightly unhealthy, or very unhealthy responses. It uses these frequencies to create several probabilities, which become inputs for the Attrition Model's predictions.
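To make the frequency idea concrete, here is a hypothetical sketch of turning a learner’s categorized response history into frequency-based features. The category names and the four-way binning are illustrative; they are not Emtrain’s actual 12 feature definitions:

```python
from collections import Counter

CATEGORIES = ("very_healthy", "slightly_healthy",
              "slightly_unhealthy", "very_unhealthy")

def response_frequencies(responses: list) -> dict:
    """Map a list of categorized responses to the share of each category."""
    counts = Counter(responses)
    total = len(responses)
    return {cat: counts.get(cat, 0) / total for cat in CATEGORIES}

# Example: four historical responses from one learner.
history = ["very_healthy", "slightly_unhealthy", "very_healthy", "very_unhealthy"]
feats = response_frequencies(history)
# feats["very_healthy"] is 0.5 (2 of 4 responses)
```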
In addition to the learner’s own response patterns, the model also considers client-level response patterns.
Can you explain why you are calculating inputs at both the learner and client-level?
Each of these levels gives us different, valuable information about a learner’s experience in the workplace, which helps the Attrition Model find patterns in learner experiences that tend to lead to attrition risk. Of course, we start with a learner’s response history to learn about their experience. But that alone is not always informative. Consider two examples:
- For some industries, a large number of people have pretty negative experiences but remain in their jobs for a variety of reasons. For a learner in an industry like this, the learner’s history of negative response patterns may not indicate attrition risk.
- Conversely, some companies are super awesome and everyone is very happy. At a place like this, a neutral response may be more indicative of a problem than a negative response was in the prior example.
In each of these cases, the experience across the company as a whole gives us valuable context for interpreting the learner’s individual experience. These are just two scenarios in which the Attrition Model can improve predictions by having both learner- and client-level features.
In the real human world, social interactions involve multiple people - if a machine learning model makes predictions based on social dynamics using information from only one learner, the predictions would probably not be very good because the model would be missing key information about the interactions. By using client and learner-level features, the Attrition Model can make better predictions because the picture of reality we’ve modeled for it is closer to the human reality of social interactions.
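One simple, hypothetical way client-level context can recontextualize a learner’s responses is a relative score. The helper and the numbers below are invented for illustration and are not Emtrain’s actual features; scores here are average "healthiness" on a 0-1 scale:

```python
def contextualized_score(learner_avg: float, client_avg: float) -> float:
    """How a learner's average response compares to their company's baseline."""
    return learner_avg - client_avg

# The same neutral learner (0.5) reads very differently in context:
at_happy_company = contextualized_score(0.5, 0.9)       # well below the norm: concerning
at_struggling_company = contextualized_score(0.5, 0.3)  # above the norm: less concerning
```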
Can you go more into depth about how more or less healthy responses inform the model?
Let's use this example question:
In the above question, selecting ‘very often’ indicates a really unhealthy workplace. Conversely, selecting ‘very rarely’ would indicate a really healthy workplace. While that’s often true, it can get a bit more complicated than that in the real human world. Some people are optimistic by nature, so they might always respond pretty positively. For them, a neutral response may be quite negative relative to their usual behavior and indicate concern. The opposite is also true though, some folks have a more ‘glass half empty’ view of things.
A lot of people just end up responding neutrally to survey questions, regardless of how they actually feel. This can happen for a lot of reasons; for instance, some folks don’t like to commit to extreme statements, so they answer more neutrally. The tendency to respond centrally is so common, in fact, that researchers gave it a fancy name: Central Tendency Bias!
All of these patterns - glass-half-full folks, glass-half-empty folks, the ‘glass is exactly half’ folks, Central Tendency Bias - are why we consider the intensity of responses in our model. By comparing how often a learner responds negatively, positively, and neutrally, the Attrition Model can build an understanding of how the responding learner experiences their environment.
Does this model consider other factors that might impact attrition?
Attrition can be caused by a whole bunch of stuff, but the Attrition Model predicts only the subset that stems from social dynamics in the workplace. Money, location, and career growth are all examples of things that could also lead to attrition but that our model doesn’t capture. We only measure the risk of attrition related to things Emtrain can measure and help improve. That doesn’t make our model wrong in those cases; it simply means the cause of attrition most likely wasn’t related to workplace social dynamics.
That’s great. Can I get a recap to tie all this stuff together?
Sure thing.
- A learner goes through a lesson and responds to Likert questions.
- When a learner responds to one of the questions included in the Attrition Model, we go through historical data to determine the probability a learner will respond in various ways.
- We calculate a total of 12 different model features so we are sure to cover how learner and client experiences differ and relate!
- Those model features are then fed as input into the Attrition Model, which gives us a prediction of whether the responding learner is likely to attrit within the next 6 months.
- All of a client’s responses scored by the Attrition Model in the last 6 months are aggregated. The percent of a client’s learners at risk of attrition within the coming 6 months is reported in the Attrition Barometer.
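The final aggregation step in the recap above could be sketched like this. The function and the at-risk flags are assumptions for illustration; the actual pipeline and thresholds are Emtrain-internal:

```python
def attrition_barometer(at_risk_flags: dict) -> float:
    """Percent of a client's scored learners flagged as attrition risks.

    at_risk_flags maps each learner to a boolean: was their latest scored
    response in the trailing 6-month window flagged as at-risk?
    """
    if not at_risk_flags:
        return 0.0
    n_at_risk = sum(at_risk_flags.values())  # True counts as 1
    return 100.0 * n_at_risk / len(at_risk_flags)

# Example: 2 of 4 learners flagged -> barometer reads 50.0 (percent).
flags = {"learner_a": True, "learner_b": False,
         "learner_c": False, "learner_d": True}
```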
What about new hires? How do they affect attrition risk?
There is no evidence to suggest that new hires are more or less likely to attrit within the coming 6 months than learners with longer tenures. While a learner is unlikely to quit within their first 90 days of employment relative to more tenured learners, attrition rates among learners employed at least 6 months are no lower than those of more tenured learners.
The performance of the Attrition Model does not differ between new and tenured learners. This means the Attrition Model is equally good at predicting attrition risk in new hires and tenured employees. While it can be debated how well a new learner can immediately evaluate a company’s culture, model performance shows that their responses can be relied upon to accurately predict attrition outcomes.
If a learner responds unhealthily to an “attrition question” are they more likely to attrit?
No, not necessarily! Hopefully, walking through the different levels of model features helped you understand how complex modeling workplace social interactions is. The problem is, computers are not very intelligent - they know nothing about the complexity of human social interactions. So we need 12 different model features to give the Attrition Model enough understanding of social interactions to make predictions.
That said, while computers are not very intelligent in a social sense, they are quite ‘smart’ when it comes to numbers and computations. Because of that, the Attrition Model can find relationships between model features that humans probably wouldn’t ever think to look for. So while it’s tempting to make assumptions about how the Attrition Model uses each feature to make predictions, the actual predictions are based on interactions that may make no intuitive sense to our human brains. Isn’t it fun how we combined the power of our human brains with the power of the computer brain to solve problems? Yay Machine Learning!