Clara Rangel, BS
University of Nebraska Omaha
University of Nebraska Medical Center’s Munroe-Meyer Institute
Sarah Frampton, PhD, BCBA-D, LBA
University of Nebraska Omaha
An individual can be prescribed the most effective, highly evidence-based medicine, but if the medicine is not taken as the doctor directed (i.e., right dose, right time, right frequency), it will not work as expected. In the same way, intervention effects can be negatively impacted if they are not implemented according to the recommended plan (Brand et al., 2019). Infidelity, or poor adherence to the treatment, may prevent an individual from achieving maximum results from the treatment in a clinical or educational setting (Vollmer et al., 2008).
Consider the following example: an eight-year-old boy with autism spectrum disorder (ASD) is being taught to brush his teeth at the clinic where he receives services, at his mother’s house, and at his father’s house. The clinician provided a step-by-step task analysis detailing the exact way that the child should be taught to brush his teeth in all environments. At a recent meeting, the clinician and caregivers discussed the low rate of progress on the task. To get to the bottom of the issue, the clinician conducts observations at both parents’ homes. The clinician notices that each caregiver is teaching the child to brush his teeth in a different manner. One caregiver is using model prompts (i.e., showing the child what to do for each step), which was the teaching approach recommended by the clinician. The other caregiver is using hand-over-hand prompting, which the child has been resisting, leading to disruptions within the routine. This infidelity likely explains the challenges the boy has experienced while learning the skill.
How Infidelity May Impact Research
Infidelity creates major challenges when interpreting the findings of research studies evaluating treatments (Ledford, 2018). Though research settings offer greater protection against infidelity, as the conditions are often carefully crafted and controlled, infidelity is always possible. If the accuracy of the implementation of treatment varies day to day, it will interfere with data-based decision-making. When reviewing data, it will be difficult to determine whether poor outcomes are due to the intervention itself or due to errors made by the implementers. This doubt introduces a threat to the internal validity of the study (for more information see Frampton, 2024), aptly referred to as infidelity (Ledford, 2018). To allay concerns regarding infidelity, authors are called to report on the procedures used to train the implementers of the study and to monitor their ongoing adherence to study procedures (see below for more details). Of note, the frequency of reporting measures of fidelity has gradually increased in recent years following multiple calls for attention to this critical component of internal validity (e.g., Bergmann et al., 2023; Ledford & Wolery, 2013; Preas et al., 2024).
How to Prevent and Monitor Infidelity in Research and Practice
Though it may be difficult to implement a treatment with perfect fidelity, efforts should be made to minimize errors in both research and practice (Vollmer et al., 2008). One common way to minimize infidelity is to provide an adequate, thorough description of all conditions in the treatment protocol (Ledford, 2018); if the steps are unclear to the implementer, infidelity is the likely result. Implementers should also be trained to ensure proper implementation of the treatment. Trainings that consist of role play and practice opportunities are optimal, as these allow the clinician to provide immediate feedback on the implementation (Vollmer et al., 2008). Appropriate feedback is critical, as it reassures the implementer that the treatment is being implemented correctly and provides guidance for correcting errors. If, however, fidelity remains low following adequate training, it may be wise to revisit any unclear areas. One area susceptible to ambiguity is the operational definitions (i.e., detailed descriptions) of the behaviors under study. These definitions must be clear to prevent confusion and implementation errors. For example, if one caregiver considers brushing teeth for 2 minutes to be the target and the other considers 1 minute good enough, teaching will be inconsistent. One way of combating this is to invite the implementer to review and make suggestions on the operational definitions or other unclear areas (Vollmer et al., 2008).
Another way to minimize the effects of infidelity is to keep track of how well the intervention is being conducted via data collection, referred to as collecting procedural fidelity data. To collect procedural fidelity data, the steps of the treatment protocol are written out, step by step, as a checklist (see Figure 1). A trained observer then watches the implementer’s performance and scores it against the procedural fidelity datasheet. For example, if the implementer provides hand-over-hand prompts when model prompts should be used, that step may receive an incorrect score, depicted by a minus sign. If the implementer delivers model prompts, as indicated in the teaching plan, that step may receive a correct score, depicted by a plus sign. Adherence to steps can also be coded with more nuanced, specific measures capturing exactly how the implementer deviated from the intended treatment step and how often (for an example, see Codding et al., 2005).
Figure 1.
Sample Procedural Fidelity Datasheet for Toothbrushing
| Step | Description | +/- |
|------|-------------|-----|
| 1 | Implementer has all materials ready (toothbrush, toothpaste). | |
| 2 | Implementer identifies a presently motivating item or activity. | |
| 3 | Implementer presents the instruction, “It’s time to brush your teeth.” | |
| 4 | Implementer prompts all steps in the correct order as they appear on the task analysis. | |
| 5 | Implementer provides model prompts and only uses hand-over-hand if an error occurs. | |
| 6 | Implementer provides praise and the motivating item or activity following best performances. | |
| 7 | Implementer says “You can try again” following performances below target level. | |
| 8 | Implementer collects data within 1 minute of completing the task. | |
In our example above, the clinician could use procedural fidelity data to pinpoint steps with infidelity and then facilitate a discussion regarding the steps in the task analysis. These data would reveal that one caregiver was using an incorrect prompting level, resulting in a minus score on Step 5. In the resulting discussion, the clinician could clarify that overly invasive prompting may lead a child to attempt to escape the task, disrupting learning. This data-based collaboration between the clinician and implementer leads to greater clarity of the protocol and higher procedural fidelity in the future. Ultimately, greater fidelity will result in more optimal outcomes for the child (Brand et al., 2019).
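Procedural fidelity from a checklist like Figure 1 is commonly summarized as the percentage of steps implemented correctly. As a minimal sketch of that calculation (the function name and the session scores below are hypothetical illustrations, not part of any published protocol):

```python
# Minimal sketch: summarize a procedural fidelity checklist as a percentage.
# "+" means the step was implemented correctly; "-" means an error occurred.

def fidelity_percentage(scores):
    """Return the percentage of checklist steps scored correct ("+")."""
    correct = sum(1 for s in scores if s == "+")
    return 100 * correct / len(scores)

# One hypothetical observation of the 8-step toothbrushing checklist, with an
# error on Step 5 (hand-over-hand prompting used where a model prompt was planned).
session_scores = ["+", "+", "+", "+", "-", "+", "+", "+"]
print(f"Procedural fidelity: {fidelity_percentage(session_scores):.1f}%")  # 87.5%
```

Tracking this percentage across sessions makes it easy to see whether fidelity improves after feedback is delivered.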
Conclusion
Providing excellent quality care to clients and their families is critical both in research and in real-life settings. One way of securing this is to ensure that interventions designed to help clients and their families are being implemented correctly (Vollmer et al., 2008). Failure to follow an intervention properly is known as infidelity, which can lead to less-than-optimal outcomes. Infidelity can be combated in several ways, such as providing detailed descriptions of the conditions, training implementers, and collecting procedural fidelity data. For researchers evaluating treatments for individuals with ASD, we echo the recommendation from Ledford and Wolery (2013) regarding the measurement of procedural fidelity: “It should be (a) measured, (b) measured broadly (across variables, conditions, participants, and levels of implementation), and (c) measured precisely (with counts derived from direct observation)” (p. 190). By taking proper precautionary measures, infidelity can be minimized and quickly detected, leading to better outcomes for clients and their families (Brand et al., 2019).
References
Bergmann, S., Long, B. P., St. Peter, C. C., Brand, D., Strum, M. D., Han, J. B., & Wallace, M. D. (2023). A detailed examination of reporting procedural fidelity in the Journal of Applied Behavior Analysis. Journal of Applied Behavior Analysis, 56(4), 708–719. https://doi.org/10.1002/jaba.1015
Brand, D., Henley, A. J., DiGennaro Reed, F. D., Gray, E., & Crabbs, B. (2019). A review of published studies involving parametric manipulations of treatment integrity. Journal of Behavioral Education, 28, 1–26. https://doi.org/10.1007/s10864-018-09311-8
Codding, R. S., Dunn, E. K., Feinberg, A. B., & Pace, G. M. (2005). Effects of immediate performance feedback on implementation of behavior support plans. Journal of Applied Behavior Analysis, 38, 205–217. https://doi.org/10.1901/jaba.2005.98-04
Frampton, S. (2024). An overview of internal validity: Was it really the treatment that made a difference? Science in Autism Treatment, 21(8).
Ledford, J. R. (2018). No randomization? No problem: Experimental control and random assignment in single case research. American Journal of Evaluation, 39(1), 71–90. https://doi.org/10.1177/1098214017723110
Ledford, J. R., & Wolery, M. (2013). Procedural fidelity: An analysis of measurement and reporting practices. Journal of Early Intervention, 35(2), 173–193. https://doi.org/10.1177/1053815113515908
Preas, E. J., Halbur, M. E., & Carroll, R. A. (2024). Procedural fidelity reporting in the analysis of verbal behavior from 2007–2021. The Analysis of Verbal Behavior, 40(1), 1–12. https://doi.org/10.1007/s40616-023-00197-w
Vollmer, T. R., Sloman, K. N., & St. Peter Pipkin, C. (2008). Practical implications of data reliability and treatment integrity monitoring. Behavior Analysis in Practice, 1(2), 4–11. https://doi.org/10.1007/BF03391722
Reference for this article:
Rangel, C., & Frampton, S. F. (2025). Science Corner: Infidelity as a threat to internal validity. Science in Autism Treatment, 22(3).
Science Corner Articles related to Internal Validity:
- An overview of internal validity
- Science Corner: Maturation as a threat to internal validity
- Science Corner: History as a threat to internal validity
- Science Corner: Multiple treatment interference as a threat to internal validity
- Science Corner: Instrumentation as a threat to internal validity
Other Science Corner Articles:
- Some cautions on the exclusive use of standardized assessments in recovery-oriented treatment
- Role of replication in scientific validation
- Retraction of published research
- Regression to the mean: Expand your science knowledge
- ASD Intervention: How do we measure effectiveness?
- Treatment Integrity: Why it is important regardless of discipline
- Evaluating research
- “Verification” and the peer review process
- Interventions for individuals on the autism spectrum and how best to evaluate their effectiveness
Other ASAT Articles:
- Explaining decisions to use science-based treatments
- Becoming a savvy consumer/educator
- Research Synopsis: Third time’s the charm or three strikes you’re out? An updated review of the efficacy of dolphin‐assisted therapy for autism and developmental disabilities.
- Resources for the implementation of evidence-based practice
#Researchers #SavvyConsumer #Educators #Parents