Two things often get confused, yet they don’t tell the same story.
The first is simple: someone completed a module.
The second is far more important: can that person now do what we expect them to do?
Put like that, the gap seems obvious. In practice, it’s much less so. And that small slip, quietly, can be costly. Not always in a visible budget line. Sometimes the bill shows up somewhere else: in the field, in quality gaps, in a tough audit, in HR decisions based on data that’s too flattering to be truly useful.
So no, the question isn’t just: which KPI should you track?
The real question is more: what exactly do you need to prove?
If the goal is to demonstrate that mandatory training was made available and completed, the completion rate is perfectly suitable. But if you’re trying to verify genuine mastery (the ability to apply, to decide correctly, to avoid mistakes, to adopt the right behavior at the right time), then you need to change focus. You’re no longer managing distribution. You’re managing competence, and that requires truly usable competency e‑learning KPIs (or mastery indicators).
Completion: a useful e‑learning KPI when we’re really talking about distribution
The completion rate measures, roughly, the share of learners who finished a module according to the LMS rules: last step reached, pathway validated, sometimes a final quiz cleared. It’s readable, quick to use—and yes, it’s useful.
But useful for what? For knowing who finished.
Not for knowing who actually masters the topic.
The nuance seems small on paper. In reality, it changes everything. In some contexts, you don’t need more. When the objective is traceability, completion does its job very well.
It’s the preferred choice in fairly classic cases:
- regulatory or compliance training: GDPR, anti‑corruption, basic safety, harassment prevention;
- administrative onboarding, where you need to ensure that internal rules, procedures, and reference points were properly distributed;
- change communications, for example when rolling out a new process or tool, when the priority is to reach everyone.
In those cases, a good completion rate has real value. It documents coverage. It helps identify stragglers. It supports follow‑ups.
Where it goes off the rails is when you start making it say something else. A high completion rate does not, by itself, prove that learning occurred. Even less transfer.
Finishing a module is not proof of competence
A module can show 100% completion and produce, in the field, roughly zero visible effect. That’s not rare. It’s actually pretty common.
We know the scenario. The module runs in the background between two meetings. The learner clicks fast, answers a bit at random, passes the quiz because they spot the right answers by elimination, or because they’ve seen that type of question ten times already. The LMS reports pristine data. The reporting does too. On the business side, though, the picture stays shaky.
Three frequent (and very concrete) traps
- The automatic pathway: when a module is too linear, too predictable, it gets consumed without real engagement.
- Fragmented attention: being connected to the module doesn’t mean being mentally available.
- The SCORM confusion: in SCORM, completed and passed are not the same thing.
In SCORM 2004, ADL clearly distinguishes completion_status and success_status. Depending on the LMS and how the module was configured, the two statuses can live separately.
To go to the source: SCORM, official ADL documentation.
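To make the gap tangible on the reporting side, here is a minimal sketch. The two status values follow the SCORM 2004 data model (completion_status and success_status); the flat record layout is a hypothetical LMS export, not any particular vendor’s API.

```python
# Sketch: separating "completed" from "passed" in a hypothetical LMS export.
# The status values follow the SCORM 2004 data model
# (cmi.completion_status / cmi.success_status).

records = [
    {"learner": "a.martin", "completion_status": "completed",  "success_status": "passed"},
    {"learner": "b.chen",   "completion_status": "completed",  "success_status": "failed"},
    {"learner": "c.silva",  "completion_status": "incomplete", "success_status": "unknown"},
]

# Completion alone is a distribution metric; crossing it with the
# success status is what starts to say something about competence.
completed_not_passed = [
    r["learner"] for r in records
    if r["completion_status"] == "completed" and r["success_status"] != "passed"
]

print(completed_not_passed)  # ['b.chen']: finished the module, did not pass
```

That second list is exactly the population a pure completion rate hides.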
A very concrete consequence: you can have 90% completion and, a few weeks later, see that critical errors haven’t moved an inch.
It’s not necessarily that the training is bad. Often, it’s simpler—and a bit more annoying: you weren’t managing the right thing.
Competency e‑learning KPIs: how to measure skill growth (without fooling yourself)
The term gets used for anything and everything. Yet skill growth is neither a pleasant feeling nor a nice exit score that reassures everyone for the duration of a committee meeting.
It’s an observable progression in the ability to act.
Not just restate a rule. Not just recognize the right answer. Act. Choose. Prioritize. Diagnose. Execute correctly. Adjust when the situation gets a bit more complex, when the context is no longer exactly the one from the module.
A skill only really shows in action. The rest, let’s be blunt, is often just an intermediate signal.
For a training manager, the strongest evidence looks more like this:
- success on a contextualized task;
- improvement across multiple attempts;
- a reduction in critical errors;
- stable performance across close variants of the same situation.
That stability matters a lot. Not a one‑off success on a single well‑framed question.
Examples: what really proves competence
- In sales: knowing a discovery script doesn’t mean knowing how to run a sales conversation. What matters is the quality of the questions, real listening, how an objection is handled without breaking momentum.
- In management: memorizing the principles of feedback guarantees nothing. Everything hinges on timing, phrasing, the right level of firmness, the ability to reset expectations without humiliating.
- In safety: reciting a procedure is one thing. Applying it correctly under pressure, with ambiguity or constraints, is another.
Competency e‑learning KPIs: the indicators that actually help you manage
An overstuffed dashboard often creates an illusion of control. You stack numbers, color cells, reassure yourself. But fundamentally, you don’t really understand what to do next.
Better to have fewer indicators—but good ones.
Pass, fail: the bare minimum
The first one, simply: pass or fail. A passed/failed status, or a clearly defined mastery level, already says far more than a simple “completed”. It lets you answer a very concrete question: who can act correctly, and who still can’t?
Scores by skill (rather than an overall score)
The overall score has a major drawback: it smooths everything out. It can hide a clear weakness on a point that is nevertheless critical. And a learner who is “average overall” can remain at risk on a gesture, a rule, or an essential decision.
Tracking certain dimensions separately (diagnosis, compliance, relational stance, listening, prioritization, objection handling) gives a far more usable reading. That’s typically where a competency e‑learning KPI becomes actionable.
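To make that concrete, here is a minimal sketch; the skill names, scores, and threshold are invented for the example.

```python
# Sketch: why per-skill scores beat a single overall score.
# Skill names, values, and the threshold are illustrative.

scores = {"diagnosis": 85, "compliance": 40, "listening": 80, "prioritization": 75}
CRITICAL_SKILLS = {"compliance"}  # dimensions where weakness is unacceptable
MASTERY_THRESHOLD = 60

overall = sum(scores.values()) / len(scores)
print(f"overall: {overall:.0f}%")  # overall: 70%, looks reassuring

# The per-skill view surfaces what the average smooths out.
at_risk = {skill: value for skill, value in scores.items()
           if value < MASTERY_THRESHOLD and skill in CRITICAL_SKILLS}
print(at_risk)  # {'compliance': 40}: "average overall", at risk where it matters
```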
Progress between attempts: the most meaningful signal
When someone goes from 42% to 74%, then confirms that level on a close variant, we’re no longer talking about luck. We’re observing a learning dynamic.
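As a sketch, here is one way to encode that reading; the thresholds are illustrative, not a standard.

```python
# Sketch: a learning dynamic is a trend confirmed on a variant,
# not a one-off score. Thresholds are illustrative.

attempts = [42, 74]   # successive scores on the same scenario
variant_score = 71    # score on a close variant of the situation

improved = attempts[-1] - attempts[0] >= 20           # clear progression
confirmed = abs(variant_score - attempts[-1]) <= 10   # stable on a variant

if improved and confirmed:
    print("learning dynamic: progression confirmed on a variant")
```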
Time spent: useful only if you cross it with performance
On its own, it doesn’t tell you much that’s reliable. A long time can signal real involvement, or scattered attention. A very short time can indicate solid mastery, or a quick skim.
But crossed with performance, it becomes interesting again, as the sketch after this list shows:
- a lot of time + many failures on the same step: there’s probably a sticking point in the learning design;
- fast, stable success: that may point to solid acquisition.
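Here is that crossing as a minimal sketch; the time and failure thresholds are invented for illustration and would need calibrating per module.

```python
# Sketch: crossing time spent with performance. Neither number
# means much alone; together they suggest a diagnosis.
# Thresholds are illustrative, to be calibrated per module.

def diagnose(minutes_on_step: float, failures: int, passed: bool) -> str:
    if minutes_on_step > 20 and failures >= 3:
        return "probable sticking point in the learning design"
    if minutes_on_step < 5 and passed and failures == 0:
        return "fast, stable success: likely solid acquisition"
    if minutes_on_step < 5 and not passed:
        return "quick skim: flag for follow-up"
    return "no clear signal: check the attempt history"

print(diagnose(minutes_on_step=27, failures=4, passed=False))
```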
Two simple frameworks to structure evaluation
- the Kirkpatrick model, which distinguishes reaction, learning, transfer, and results;
- xAPI, more relevant than SCORM as soon as you want to track fine‑grained actions, decisions, or activity traces, and not just the end of a module (an example statement is sketched below).
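For the xAPI point, here is a minimal sketch of a statement recording a decision rather than a completion. The actor/verb/object/result structure and the ADL verb URI follow the xAPI specification; the learner, activity ID, and scores are invented for the example.

```python
# Sketch: an xAPI statement capturing a choice inside a scenario.
# Structure follows the xAPI spec; identifiers are illustrative.

statement = {
    "actor": {"mbox": "mailto:b.chen@example.com", "name": "B. Chen"},
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/answered",
        "display": {"en-US": "answered"},
    },
    "object": {
        "id": "https://example.com/scenarios/objection-handling/step-3",
        "definition": {"name": {"en-US": "Handle the price objection"}},
    },
    "result": {"success": False, "score": {"scaled": 0.4}},
}

# Stored in a Learning Record Store, a stream of such statements lets you
# reconstruct choices and trajectories, not just module endings.
```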
For a solid research reference, you can also consult:
- Sitzmann, T. (2011). A meta-analytic examination of the instructional effectiveness of computer-based simulation games (Personnel Psychology)
- Baldwin, T.T., Ford, J.K. (1988). Transfer of training: A review and directions for future research (Personnel Psychology)
- Hattie, J., Timperley, H. (2007). The Power of Feedback (Review of Educational Research)
To decide quickly, you first need to clarify the intent (and the associated competency e‑learning KPIs)
At the end of the day, asking “what’s the best indicator?” is often a bad entry point.
It’s better to ask a more straightforward question: are you trying to inform, certify, or train?
From there, things become clearer. And very often, four variables are enough to frame management:
- the objective: inform, certify, train;
- the audience profile: novice, experienced, or mixed;
- the level of business risk: low, medium, critical;
- the target horizon: immediate need or durable transfer.
If the main objective is to inform (compliance, cultural reference points, general rules), completion remains a coherent indicator.
If the objective is to certify (authorization, safety, quality, execution of a job-specific gesture), then pass rates, validation thresholds, and identification of critical errors become non‑negotiable.
And if the objective is to train a behavior, a stance, or decision‑making, you need to make action visible. So you must accept less linear devices, closer to reality, that allow you to observe choices—not just clicks.
The rule is fairly simple: the higher the business risk, the less completion is enough.
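As an illustration, that framing can even be written down as a small decision helper; the mapping below simply mirrors the logic above and is not a standard.

```python
# Sketch: from the framing variables to a default indicator set.
# The mapping mirrors the article's logic and is illustrative only.

def default_kpis(objective: str, risk: str) -> list[str]:
    if objective == "inform":
        kpis = ["coverage", "completion rate"]
    elif objective == "certify":
        kpis = ["pass rate", "validation thresholds", "critical errors"]
    else:  # "train": a behavior, a stance, decision-making
        kpis = ["per-skill scores", "progress between attempts", "critical errors"]
    if risk == "critical":
        # the higher the business risk, the less completion is enough
        kpis.append("performance under constraint")
    return kpis

print(default_kpis("certify", "critical"))
```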
Depending on the type of training, you’re not managing the same reality
Not all e‑learning programs have the same purpose. Evaluating them with the same dashboards is comfortable—and often misleading.
Compliance: completion + targeted verification
Completion remains essential. However, it’s wise to add minimal validation focused on the real risk points. Not a useless quiz on details. A few short, targeted questions that verify sensitive elements.
Onboarding: operational milestones, not just “completed”
Settling for completion is often too thin. A new hire must know where to find information, complete certain steps, use the right channels. Concrete milestones are more telling: successfully complete a key procedure, route a request correctly, mobilize the right resource at the right time.
Sales and customer relations: choices and progress
Here, competence shows in decisions. Good programs create situations, then look at what happens: which questions are asked, whether listening is real, how an objection is handled, whether the close is brought in appropriately. Skill‑based scores—and above all their progression—are worth far more than a flattering completion rate.
Management: measure stance, not memory
What matters is not only knowing the principles, but consistency of stance. Clarifying an expectation, resetting boundaries without aggressiveness, listening without dodging, deciding without skirting the problem: these are behaviors. Interactive scenarios, with contextualized feedback, are often more discriminating than a multiple‑choice knowledge quiz.
Safety and quality: isolate critical errors
You need to be able to identify unacceptable errors. Overall success can mask a critical fault that remains disqualifying. In that case, you primarily monitor genuine passes, blocking failures, critical errors and, where relevant, performance under constraint.
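A minimal sketch of that rule; the error labels and pass threshold are invented for the example.

```python
# Sketch: a critical error stays disqualifying, whatever the overall score.
# Error labels and the pass threshold are illustrative.

CRITICAL_ERRORS = {"skipped lockout step", "ignored alarm", "wrong dosage"}
PASS_THRESHOLD = 80

def evaluate(score: float, errors: set[str]) -> str:
    blocking = errors & CRITICAL_ERRORS
    if blocking:
        return "failed (blocking): " + ", ".join(sorted(blocking))
    return "passed" if score >= PASS_THRESHOLD else "failed"

# 92% overall, but one unacceptable error: the attempt must not pass.
print(evaluate(92, {"skipped lockout step"}))
```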
Moving beyond the false “module completed / not completed” duel with competency e‑learning KPIs
In many LMS environments, the most visible data remains completion. That’s normal: it comes back easily, it feeds dashboards, it reassures quickly. But it has a clear limit: it says the module was gone through—not that something was truly acquired.
The approach proposed by Serious Factory is precisely aimed at going beyond this binary reading.
With VTS Editor, it becomes possible to design realistic scenarios, gamified modules, or serious games without deep technical expertise, using varied interaction blocks: choices, quizzes, clickable zones, conditions, scores, badges. The stakes aren’t just ergonomic. It’s what makes it possible to observe a decision, a line of reasoning, a stance.
Discover the authoring tool: Design software for gamified E‑Learning modules made easy with AI.
Then, with VTS Perform, analysis is no longer limited to “completed / not completed.” You can track milestones, pass/fail, skill‑based scores, learning trajectories across multiple attempts. In other words, you build management centered on competency e‑learning KPIs.
See the LMS platform: Deploy your e‑learning courses with our LMS platform.
And then, inevitably, the questions change.
- Where do we observe real progress?
- Which skills remain fragile?
- Which learners need targeted support?
- Which part of the program deserves a redesign or strengthening?
Along the way, the conversation with managers becomes more useful. You move beyond a simple distribution logic. You talk about operational mastery.
FAQ
Is a good completion rate enough to prove a training program’s effectiveness?
No. It mainly shows that content was gone through or distributed. To talk about effectiveness, you need at least a performance indicator: success on an assessment, progression between attempts, reduction in critical errors, or a score by skill.
Which indicators should you track in a compliance audit context?
Start with population coverage and completion. Then add minimal validation on sensitive points. If the risk is high, it’s better to also track pass status—not just the end of the module.
How do you measure skill growth without making the program heavier?
No need to turn the dashboard into a Christmas tree. Two to four well‑chosen indicators are often enough: pass rate, score on a few key skills, progression between two attempts, and tracking of critical errors.
What’s the difference between “completed” and “passed” in a SCORM LMS?
In SCORM, “completed” means the pathway is finished, while “passed” indicates the expected level has been reached. The two statuses can be distinct depending on the module and LMS settings. This distinction is provided for in ADL’s official documentation.
When should you leave the linear module behind and move toward simulation or serious games?
As soon as the objective is no longer just to inform, but to get people to act correctly in a concrete context. In sales, management, safety, quality, or customer relations, as soon as stance, judgment, or decision matter as much as knowledge, simulation becomes clearly more relevant.