Learning Disability Interventions: Making Sense of the Evidence

Introduction

Effective, individualized treatment is the prescription for any child diagnosed with a learning disability (LD). However, choosing the right treatment can be a daunting and confusing process. Controversies about the efficacy of many LD interventions abound. How does an “intervention consumer” make sense of the vast array of treatments available? As with any potential purchase, it is wise to investigate before buying. Being an informed LD treatment consumer means evaluating the scientific validity of a treatment before accepting claims of efficacy.

The importance of being an informed LD intervention consumer

Frustrated parents of children with untreated LD are especially vulnerable to empty promises of miracle cures and treatment breakthroughs. Desperate for solutions, some may impulsively choose controversial, untested treatments. Uninformed choices not only waste time, energy, and money, but can also subject already overburdened children to unnecessary frustration and failure. Although there will always be uncertainties associated with any treatment, carefully weighed choices reduce the risk of wasted resources, disappointment, and learning setbacks.

The efficacy of available LD treatments

Swanson (2000) points out that intervention research is biased by the publication of only positive outcomes. This practice creates the impression that all treatments work and are equally effective. Unfortunately, the fact that an LD intervention is available to the public does not mean that it has been proven or even tested. Likewise, popularity and even widespread use are not valid indicators of efficacy. In the absence of any formal regulations monitoring the value of available LD treatments, even unsubstantiated treatments can be openly promoted and sold to the public.

Understanding claims of proof

Consumers should not be expected to grasp the notion of scientific proof intuitively. The requirements for the designation “evidence-based” are far more involved and stringent than is generally assumed. Further, the procedures and criteria of the scientific method, which form the basis of proof, are simply not common knowledge. Recognizing this is the first step toward learning to distinguish valid from unfounded LD treatments.

It is not surprising, then, that false or misleading claims about LD treatments are regularly and successfully marketed to the general public. Those who make invalid claims depend on consumers’ lack of research expertise for their success. The less consumers understand about scientific validity, the easier it is to sell unsubstantiated treatments as proven interventions. Unless consumers make a deliberate effort to become informed, they will be ill-equipped to judge the validity of LD interventions and will have no basis on which to make sound treatment choices.

Legitimately promoting a treatment as effective requires proof. If there is no mention of testing, research, or evidence, it is highly unlikely that the intervention in question has been subjected to any kind of scientific inquiry. Without research support, claims of treatment validity remain unsubstantiated and should be viewed with caution, even skepticism. This is not to say that interventions without an evidence base are necessarily ineffective. It simply means that claims of treatment efficacy should be reserved for interventions that have been subjected to proper scientific investigation. Unfortunately, this is often not the case: treatment promotions are regularly made in the absence of proof.

Beware of subjective reports

Testimonials, anecdotes, and personal accounts, although sometimes compelling, do not constitute scientific evidence. Even if accurate, subjective reports are based on individual cases that do not generalize to other situations. Stories of treatment success have value when they provide hope and prompt consumers to investigate new interventions, but they do not qualify as proof and should never be treated as such.

How to determine if an intervention has research support

The terms “research,” “evidence,” and “support” tend to be used loosely and sometimes haphazardly. In reality, there is good research and bad research. More often than not, research does not meet the standards of proper scientific investigation. Alleged evidence might be scientific or anecdotal, systematically determined or casually gathered. Even among valid research studies, only a small percentage provide decisive information about treatment efficacy.

The first step in evaluating any claim of research support is to locate the source of the alleged evidence. By whom, when, and how was the information obtained? If there truly is evidence supporting the effectiveness of an intervention, it should be made available to the consumer. More often than not, simply locating the source of the research (or finding that it does not exist) will be enough to determine whether claims of support are justified. If there is systematic research underlying a claim of proof, reference will be made to a particular study or studies. Research published in academic journals is identified by a reference that lists the author(s), date, article title, journal title, volume, and page numbers of the study (see, for example, the references listed at the end of this article).

Publication of research in a peer-reviewed journal is one indication of its quality and means that the research has been reviewed and scrutinized by a panel of experts in the field. While publication in a peer-reviewed journal does not guarantee scientific rigor, an absence of peer-reviewed research is a very good indication that any claims of proof are unfounded.

Not all research findings qualify as proof

Unfortunately, the majority of published intervention studies lack scientific rigor. In a comprehensive synthesis of 30 years of learning disabilities intervention research, Swanson and colleagues examined evidence from 900 different LD intervention studies. Of these 900 studies, only 25% met the authors’ criteria for inclusion in the analysis. Further, of the 25% included in the synthesis, only 5% — roughly one percent of the original 900 studies — met the high standards of proper research methodology (Swanson et al., 1999). The results of this review highlight the complexities of scientific research and the difficulties associated with establishing proof.

Clearly, treatments should not be regarded as valid simply because published studies have been cited. Second-hand accounts of research findings are only interpretations of actual results and are frequently biased, misleading, or altogether incorrect. In the process of interpretation, results can be inadvertently or intentionally misrepresented. To determine the actual outcomes of an intervention study, it is advisable to consult the original source of the cited research whenever possible.

The original research source, although more accurate and reliable than secondary interpretations, is often more difficult to understand. All experimental studies use some form of statistical analysis, which can be incomprehensible to non-experts. Indeed, researchers themselves spend years learning about the statistical analysis of data. It is neither necessary nor recommended to become an expert in statistical analysis in order to understand claims of intervention efficacy. A review of the introduction and discussion sections of a research report will be sufficient to get a general sense of any significant findings and the authors’ interpretation of them. Because the research has been subjected to peer review, definitive claims of treatment efficacy will only be made if they are justified by the results.

The scientific method

Investigators use several kinds of research to further our understanding of LD interventions. Three common approaches are descriptive analyses, large-scale field studies, and experimental designs. All of these approaches contribute to our understanding of LD interventions, but not all can provide proof of treatment efficacy. Evidence for treatment validity can only be obtained through experimental designs that follow the scientific method.

When intervention research adheres to the standards of the scientific method, valid claims of efficacy can be made with a minimum of bias. Using the scientific method, researchers first form a hypothesis or idea, which is then formulated as a prediction (e.g., “treatment X will help children with LD learn to read”). An experiment is then designed to test this prediction. The nature of the treatment, how it will be implemented, and the means for evaluating treatment efficacy are all objectively defined and described in detail before the intervention is conducted. Performance is assessed before and after the intervention using objective measures.

The most credible intervention studies always control for alternative explanations of the research findings. A control group is composed of individuals who are similar to participants in the treatment group on important characteristics such as age and type of disability, but who do not receive the treatment. Without a comparison control group, there would be no way of knowing whether the treatment or some other factor caused observed changes in behavior or performance.

Once an experiment has been conducted, statistical tests are carried out to determine whether any treatment effects are meaningful or simply due to chance. If statistically significant results are found, the research must then withstand scrutiny by experts in the field before being accepted for publication in a peer-reviewed academic journal. Finally, for a finding to be considered well established, it must be confirmed through replication by independent researchers in the field.
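To make the idea of “due to chance” concrete, the sketch below compares made-up reading-gain scores for a hypothetical treatment group and control group and estimates how often a difference as large as the observed one would arise by chance alone. The scores, group sizes, and the permutation test used here are illustrative assumptions, not data or methods from any study cited in this article.

```python
# Minimal illustrative sketch (hypothetical data): does the treatment group's
# average reading gain differ from the control group's by more than chance?
import random

# Hypothetical post-minus-pre reading gains for each child (made-up numbers).
treatment_gains = [8, 12, 7, 10, 9, 11, 6, 13]
control_gains = [5, 7, 4, 8, 6, 5, 7, 6]

observed_diff = (sum(treatment_gains) / len(treatment_gains)
                 - sum(control_gains) / len(control_gains))

# Permutation test: repeatedly shuffle all scores into two arbitrary groups and
# count how often a difference at least as large appears purely by chance.
pooled = treatment_gains + control_gains
n_treat = len(treatment_gains)
n_permutations = 10_000
count_extreme = 0
random.seed(0)
for _ in range(n_permutations):
    random.shuffle(pooled)
    diff = (sum(pooled[:n_treat]) / n_treat
            - sum(pooled[n_treat:]) / (len(pooled) - n_treat))
    if diff >= observed_diff:
        count_extreme += 1

p_value = count_extreme / n_permutations
print(f"Observed mean difference in gains: {observed_diff:.2f}")
print(f"Estimated probability of a difference this large by chance: {p_value:.4f}")
```

A small estimated probability (conventionally below 0.05) is what researchers mean by a statistically significant result; a large one means the apparent treatment effect could easily be chance, which is why the control group and the statistical test matter.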

Successful research does not equal successful implementation

Once a particular intervention is shown to be effective through properly controlled experimentation, the process of implementation can begin. Implementation involves transferring what has been established in a controlled research setting to the everyday environment. The conditions of carefully controlled experimentation can be quite different from real-life circumstances. The very things that are controlled for during intervention studies form a critical part of real life and cannot be ignored during treatment implementation. The challenges of transferring research findings to the real world make implementation perhaps as daunting as the process of proving treatment validity.

Conclusions

Obtaining scientific proof of LD treatment efficacy, replicating valid findings, and finally implementing proven interventions is an extremely lengthy, arduous, and costly process. This fact, coupled with the intense demand for effective LD treatments, has led to the proliferation of myriad unsubstantiated LD interventions.

To be an informed LD intervention consumer means learning to distinguish evidence-based treatments from unsubstantiated claims of treatment efficacy. Fortunately, there are clearly defined steps that can be taken to verify any claim of proof. The general recommendation for the LD intervention consumer is to proceed with caution, become informed, and scrutinize any claims of efficacy. An awareness of the complexities of intervention research may also encourage consumers to be patient when making important decisions about LD treatments.

References

Hoagwood, K., Burns, B. J., Kiser, L., Ringeisen, H., & Schoenwald, S. K. (2001). Evidence-based practice in child and adolescent mental health services. Psychiatric Services, 52(9), 1179-1189.

Swanson, H. L., Hoskyn, M., & Lee, C. M. (1999). Interventions for students with learning disabilities: A meta-analysis of treatment outcomes. New York: Guilford Press.

Swanson, H. L. (2000). Issues facing the field of learning disabilities. Learning Disability Quarterly, 23, 37-50.

Reprinted with permission from Community Health Systems Resource Group
Hospital for Sick Children. April 15th, 2002

Copyright © 2002 The Hospital for Sick Children, All rights reserved. You are free to duplicate this document but we request that you acknowledge The Hospital for Sick Children copyright.