When training isn’t enough: build validation into your training and eliminate the guesswork

This post was first published on my Medium blog—follow me there for the most up-to-date entries!
The assumption has always been simple: Training = Performance. Healthcare has never been short on training. Every nurse, physician, and other licensed professional can point to the endless stream of mandatory modules, in-services, and annual checkoffs that fill the calendar. If people complete the training, performance will improve, or so we tell ourselves. But history shows otherwise. That’s why we need to talk about validation in medical device training.
For medical device companies, the assumption is even stronger: once clinicians are trained — if they are trained at all — the device will be used correctly. That assumption feels safe, but it is rarely tested.
The reality: mixed results in training outcomes
But the evidence doesn’t always line up neatly with that assumption. Some studies show that training improves short-term knowledge scores, but those gains often fade quickly without reinforcement. Other research finds little to no change in clinical behavior after mandatory training sessions — especially when the training is passive, rushed, or disconnected from real-world practice. In a few cases, training even creates a false sense of confidence: staff believe they know what to do, but can’t perform the task under pressure.
So, does training “work”? The honest answer is: sometimes yes, sometimes no. Training can be effective, but it is not automatically effective.
The research bears this out:
- McCluskey and Lovarini (2005) found that education improved allied health professionals’ knowledge but not their behavior.
- Ecker et al. (2022) showed that provider training alone did not lead to sustained uptake of evidence-based practices.
- Simulation studies paint a more hopeful picture: Alanazi, Nicholson, and Thomas (2017) found improved knowledge, skills, and confidence among students.
- But systematic reviews (Ryall, Judd, & Gordon, 2016; Wittig et al., 2024) caution that even simulation doesn’t guarantee transfer into real-world performance without ongoing validation and reinforcement.
This is the pattern: training helps, but it doesn’t always stick. And “doesn’t stick” can mean safety risks for patients, liability for companies, frustration for clinicians, and added strain on hospitals striving to meet national standards.
The missing piece: why validation in medical device training matters
That gap between training and practice is exactly why validation in medical device training matters. Training tells us what people were exposed to. Validation tells us whether they can actually do it.
Without validation, training is just a presentation. With validation, it becomes a process that confirms competence and protects everyone involved.
Three domains of competence
True competence isn’t just one thing. It lives in three domains, all of which matter when we’re talking about medical device use:
- Cognition (knowing). Usually tested with quizzes, case questions, or the old “paper-and-pencil” exams. Useful, but limited. (I’m not the only nurse who can ace a test and still stumble at the bedside.)
- Psychomotor (doing). Hands-on skill with the device. Often “checked off” on a skills list — sometimes after demonstrating it once, sometimes after just nodding when asked. Shortcuts and bad habits are easy to miss.
- Affective (valuing). Motivation and attitude are the hardest to measure, but they matter. Think consistency (do people stick with correct use?), self-reflection (do they see the device as worth using?), and peer insight (what do others observe about their attitudes in practice?).
Effective validation in medical device training doesn’t assume competence in any one domain. It looks for proof across all three. That might mean testing knowledge with questions, observing skill with a return demonstration, and checking whether motivation shows up in day-to-day practice.
Validation vs. “just training”
It’s tempting for employers or companies to believe that rolling out a training module or holding a product demo is enough. But a program without validation is like an exam without a grade. You know someone showed up, but you don’t know what they took away — or whether they can act on it when it counts.
Validation in medical device training closes that loop. It doesn’t just say, “We trained them.” It demonstrates, “They can use this device safely, effectively, and consistently.” That difference is huge for patient safety, clinician confidence, and company defensibility.
Training ownership and accountability: where does your training stand?
With mixed results about training effectiveness, employers and medical device companies can’t afford to assume their programs are working. The real questions are:
- Does your training actually exist, or is it still “in the works”?
- Who owns the responsibility for auditing or monitoring its effectiveness? Or does it drift along with no clear accountability?
- If you had to describe your training today, which word would fit: Effective? Ineffective? Proven? Patchy? Uncertain? Not even on the radar?
Every organization has to face those questions eventually. And the sooner you ask them, the safer, stronger, and more defensible your workforce and your product will be.
Validation in medical device training isn’t just an add-on. It’s the difference between hoping your program works and knowing it does.