The Evidence-Based Practice Contradiction in Occupational Therapy

The term evidence-based practice (EBP) is now widely used, taught, and practiced. But should occupational therapy practitioners (OTPs) accept EBP as the ‘gold standard’ for how we choose OT interventions? What are the downsides to EBP for a health profession like OT? And what should we use instead?

The term evidence-based medicine appeared in the literature in 1991, and the health professions quickly adopted it, relabeling it evidence-based practice while keeping many of medicine’s definitions and perspectives. The medical version is defined as “the conscientious, explicit, and judicious use of current best evidence in making decisions about the care of individual patients”. Overall, EBP draws on the best available evidence from science, such as randomized controlled trials (RCTs).

You may have heard that OT combines both art and science.

So how did EBP make its way into OT? The idea of evidence-based OT gained popularity with Holm’s 2000 Eleanor Clarke Slagle lecture, titled “Our Mandate for the New Millennium: Evidence-Based Practice”. One notable part of the article version published in AJOT was Holm’s survey of the state of the evidence (Levels I-V) in OT at the time, from 1995 to 1999. Holm noted that of all the articles published in the OT literature during this period, only one met Level I criteria (systematic reviews, meta-analytic studies). Six were Level II (RCTs), 21 were Level III (trials without randomization), 11 were Level IV (nonexperimental), and 41 were Level V (expert opinion). I am not sure of the state of the research currently, but I would guess that there are now many more Level I and II studies than Level IV and V. But is this necessarily a good thing for the profession and our clients?

When I think of “good” evidence, one approach in EBP that seems to dominate and be considered very effective is Applied Behavior Analysis (ABA). Is it effective? I think it depends on who you ask. For parents, educators, and other adults: very effective. For the child, or the adult who has grown up and now faces anxiety, depression, PTSD, and suicidal ideation after receiving ABA against their will? Not effective. Many OTs have strong opinions against ABA, as does (more importantly) the Autistic community, because ABA does not promote neurodiversity but instead masks the Autistic person’s traits and personality so that they ‘fit in’ with society. In my opinion, this is a very good example (if you are not in favor of ABA) of why EBP is not the be-all and end-all for approaches in OT.

Research studies and journal articles, and therefore EBP itself, are not a perfect system. There are also behind-the-scenes factors, such as who funds the research. Larger organizations and interests that can afford to promote a practice likely drive more research being conducted and, therefore, published. Many biases, such as publication bias, are at play as well.

“An autism researcher [Johnny Matson] lost two dozen papers to retraction in January, eight years after the publisher was made aware of potentially troubling editorial practices. Elsevier, the publisher, cited undisclosed conflicts of interest, duplicated methodology and a “compromised” peer-review process as reasons for the retractions.” These retracted papers, by the way, were autism research.

Even the peer review process has its downsides. One notable case is the ‘vaccines cause autism’ article published by The Lancet and retracted 12 years later. Meanwhile, public perception linking vaccines to autism has harmed society and public health outcomes, even to this day.

In Holm’s Slagle lecture, one question they address is “how do I become an evidence-based practitioner?” They provide advice on everything from tracking down the best evidence (with examples of databases that were prominent at the time) to appraising that evidence before deciding to use it in practice. But is this in the best interest of OT?

What are some of the broader concerns of universally accepting EBP in OT?

Let’s say a new grad OT who learned about NDT in school wants to implement it in practice. They search the literature for NDT in OT and get hit with the evidence: ‘not evidence-based’, ‘ineffective’, ‘insufficient evidence’, and so on. For example, Kollen and colleagues (2009) concluded that there was no evidence that the Bobath method was superior, with only limited evidence for balance control. What does the new grad do then?

According to Hinojosa (2013), evidence-based medicine raises three concerns: (1) the adoption of a hierarchy of evidence from medicine (and the medical model), (2) the assumption that RCTs provide the best and only evidence for establishing credibility (ignoring, say, expert opinion), and (3) the assumption that internal validity is more important than external validity in a study.

Take RCTs, for example. Are they better than qualitative studies that draw meaningful themes from the lived experiences of participants, e.g., the Autistic adult who now has PTSD from ABA interventions? The research hierarchy says they are, and that we should not even factor in lower levels of research such as nonexperimental studies and expert opinion. In fact, there is no proof that one research design, e.g., the RCT, is inherently better than another.

An interesting alternative to the hierarchical model of research that might fit well with OT practice is the ‘research pyramid’ proposed by Tomlin and Borgetto (2011):

Tomlin, G., & Borgetto, B. (2011). Research pyramid: A new evidence-based practice model for occupational therapy. American Journal of Occupational Therapy, 65, 189–196. http://dx.doi.org/10.5014/ajot.2011.000828

Perhaps the most compelling reason not to universally accept EBP is that it does not translate well into real-world practice. Patients are different. Environments are different and dynamic. Occupations change and continue to evolve. And research, while it moves at a much faster pace these days, is always a little behind due to the peer review and publication processes.

Findings from RCTs do not provide the best approaches for clinical practice, where we have to make real-life decisions; instead, they provide answers and evidence for the scientific questions on which they were based.10 An example is generalizability based on race. Historically, research, such as that done on autism, did not include minorities in the United States, e.g., African Americans, Asians, and Hispanics.11 Even from a medical standpoint, this has contributed to many minority Autistic people being underdiagnosed by professionals working from the medical model (RCTs and science) in the first place.12

Even systematic reviews, rated as the highest ‘level of evidence’, can have their own biases. Who conducts the systematic review? What did they decide to include? Which population and which interventions did they look at? What did they exclude? How carefully did they scrutinize each study? Beyond the researchers themselves, a systematic review only looks at research that is actually published (not work currently in review or rejected by publishers despite being potentially good research; again, publication bias). What is even more interesting is that separate systematic reviews can draw different conclusions, raising the question of how objective the process really is and whether there is subjectivity at play.13 There certainly are biases that are hard to account for despite the best of intentions, e.g., a subconscious preference for an approach.

Another thing that systematic reviews and RCTs do not account for is external validity: factors such as the context influencing the study and possibly being the actual cause of the observed change. In the Occupational Therapy Practice Framework (OTPF), we place a high value on contextual factors beyond the client themselves, such as time and space. And in a profession like OT, where occupation and participation are so rich, they can be hard to capture in RCTs. An example is the therapeutic use of self. Are we measuring it? Most likely not (see my YouTube video). Yet we can assume that proper use of this approach can have immense positive benefits on our clients’ outcomes and goals.

What should we use instead of EBP? You may have heard of evidence-informed practice.14 This approach synthesizes and factors in all available levels of evidence, with equal weight and consideration for practice. I think this is the way, at least in terms of OT practice.

And I get it: it can be tough, such as being a new grad without much experience of what works or doesn’t, and without networking and mentorship. This is why I created resources on this website to help you approach and practice occupational therapy from an evidence-informed standpoint. Researching takes skill and time, and not everyone is interested in it or has the time for it. That time could be spent treating clients or eating your lunch. By joining the OT Dude Club, you will have access to these evidence-informed resources across three major practice settings: pediatrics, adults, and mental health.


Sources

  1. Montori, V. M., & Guyatt, G. H. (2008). Progress in evidence-based medicine. JAMA, 300, 1814–1816. http://dx.doi.org/10.1001/jama.300.15.1814
  2. Sackett, D. L., Rosenberg, W. M., Gray, J. A., Haynes, R. B., & Richardson, W. S. (1996). Evidence based medicine: What it is and what it isn’t. BMJ, 312, 71–72. http://dx.doi.org/10.1136/bmj.312.7023.71
  3. Holm, M. B. (2000). Our mandate for the new millennium: Evidence-based practice (Eleanor Clarke Slagle Lecture). American Journal of Occupational Therapy, 54, 575–585. http://dx.doi.org/10.5014/ajot.54.6.575
  4. https://www.spectrumnews.org/news/prolific-autism-researcher-has-two-dozen-papers-retracted
  5. Eggertson, L. (2010). Lancet retracts 12-year-old article linking autism to MMR vaccines. Canadian Medical Association Journal, 182(4), E199.
  6. Kollen, B. J., Lennon, S., Lyons, B., Wheatley-Smith, L., Scheper, M., Buurke, J. H., … Kwakkel, G. (2009). The effectiveness of the Bobath concept in stroke rehabilitation: What is the evidence? Stroke, 40, e89–e97. http://dx.doi.org/10.1161/STROKEAHA.108.533828
  7. Hinojosa, J. (2013). The evidence-based paradox. American Journal of Occupational Therapy, 67(2), e18–e23.
  8. Goldenberg, M. J. (2009). Iconoclast or creed? Objectivism, pragmatism, and the hierarchy of evidence. Perspectives in Biology and Medicine, 52, 168–187. http://dx.doi.org/10.1353/pbm.0.0080
  9. Worrall, J. (2007). Evidence in medicine and evidence-based medicine. Philosophy Compass, 2, 981–1022. http://dx.doi.org/10.1111/j.1747-9991.2007.00106.x
  10. Pedersen, K. M. (2004). Randomised controlled trials in drug policies: Can the best be the enemy of the good? In I. S. Kristiansen & G. H. Mooney (Eds.), Evidence based medicine: In its place (pp. 124–140). London: Routledge.
  11. Mandell, D. S., Wiggins, L. D., Carpenter, L. A., Daniels, J., DiGuiseppi, C., Durkin, M. S., … Kirby, R. S. (2009). Racial/ethnic disparities in the identification of children with autism spectrum disorders. American Journal of Public Health, 99(3), 493–498.
  12. Begeer, S., Bouk, S. E., Boussaid, W., Terwogt, M. M., & Koot, H. M. (2009). Underdiagnosis and referral bias of autism in ethnic minorities. Journal of Autism and Developmental Disorders, 39, 142–148.
  13. Shrier, I., Boivin, J. F., Platt, R. W., Steele, R. J., Brophy, J. M., Carnevale, F., … Rossignol, M. (2008). The interpretation of systematic reviews with meta-analyses: An objective or subjective process? BMC Medical Informatics and Decision Making, 8, 19. http://dx.doi.org/10.1186/1472-6947-8-19
  14. Glasziou, P. (2005). Evidence based medicine: Does it make a difference? Make it evidence informed practice with a little wisdom. BMJ, 330, 92, discussion 94. http://dx.doi.org/10.1136/bmj.330.7482.92-a