Implementation fidelity: it's another of those academically impressive terms that isn't nearly as profound as it sounds. It refers to whether a strategy or approach is being implemented as it was originally designed and used. Most often it comes up in the context of replicating research, but it has important implications for practice.
Let's say you're considering a new strategy, one you haven't used before. Almost any strategy will qualify as an example, though it might help to think of one you've actually implemented so you can explore how implementation fidelity relates to your practice.
I first encountered the term in a terribly impressive review and critique of how we study and think about active learning (more about this resource in future columns). Whenever a strategy is implemented as part of an empirical inquiry, especially when the strategy is being studied in the dynamic milieu of the classroom, the event is unique. It involves the following set of circumstances:
- It starts with the design details of the strategy, meaning the way it was done in the study isn't the only way the strategy can be executed.
- The content used in the strategy has characteristics unique to that field. Physics and philosophy don't configure knowledge in the same ways.
- The students who used the strategy in the study were a unique cohort, with age, gender, background experiences, ethnicity, and academic ability making them like no other group.
- The course is at a certain level and may be required or an elective, formatted as a seminar or a large lecture, and offered face-to-face or online.
- The teacher implements the strategy with some degree of involvement, has planned and prepared according to prescribed processes, communicates about it in a particular way, and executes it in a manner consistent with their teaching style.
Each of these factors needs to be thought of as a variable with the potential to affect outcomes.
One implication for teachers grows out of the interest in evidence-based practice. What makes that interest laudable is that it was absent for so long. It used to be the teacher who determined whether something worked, often with dubious criteria such as whether the students liked it. Now teachers want evidence. Are there systematically collected data verifying that a certain strategy improves learning outcomes? It's the right question to ask. The problem is that the research answer in any given study is based on the design of the strategy, the content it used, the students who did it, the course they did it in, and the teacher who used it. This is why a single study, or even several of them, won't provide evidence that a strategy promotes learning across the board.
Now even though a study can never be replicated exactly, the case is not as hopeless as it might first appear. Researchers can exert some control over the differences, and if a strategy continues to generate positive outcomes, the evidence in support of it accumulates, growing into generic support for the approach.
But there's another important caveat for teachers. Just because numerous studies have reported learning gains doesn't guarantee that they'll accrue when you use the strategy. You can hope, keep your fingers crossed, and believe there's a good chance, but that's it. You'll have to find out how your students respond to the strategy and how it works with your content. Moreover, if the objective is an evidence-based practice, then you'll have to find out by collecting data and analyzing results. It doesn't have to be publishable research, but there needs to be evidence beyond what you think or feel happened: think test scores, observable skill development, and so on. Evidence establishes that a particular result occurred. If the circumstances unique to your teaching situation remain more or less the same, chances are good it will happen again, but still without guarantees.
Teachers can use a particular strategy multiple times, sometimes for years, and reliably get the same results. There's the lovely feeling that this strategy works dependably. Then a course comes along and what happens isn't at all like the usual outcomes. We're surprised; we can't believe it. But given how many variables impinge upon the success of a strategy, maybe we should be surprised when it works dependably and not at all surprised when it doesn't.