Understanding ESSA: What does Evidence-Based Really Mean?

One clear shift in thinking between the 2015 Every Student Succeeds Act (ESSA) and its predecessor, No Child Left Behind, is this: The ESSA prompts schools to use evidence-based interventions with students, particularly if those students qualify for Title I funding (intended, under federal guidelines, to provide extra support to students living in poverty). While No Child Left Behind, the federal law first enacted in 2001, asked that schools use “scientifically based” interventions, the ESSA has taken it a step further by encouraging schools and districts to put money and effort into programs and strategies that are backed by actual evidence, not just theory.


This may seem like a fine point of distinction, but it comes down to a newly expressed hope that anyone affiliated with students—including schools, educators, districts, and partner organizations—will adopt proven strategies for providing academic support or interventions. (The government, even under ESSA, does not mandate any specific approach or program.) To explain this further, the federal Department of Education has provided a detailed, four-tiered evaluation guide designed to inform evidence-based decision-making.


So, what exactly do "evidence-based" and "proven" results mean when it comes to the ESSA? Here is a brief overview, according to the four-tiered method of evaluation as outlined in the Department of Education’s 2016 guide:
 

  1. Strong evidence 
    This is the highest ranking for any intervention, product, or program, and is dependent upon the inclusion of “at least one well-designed and well-implemented experimental study.” This rigorous framework requires a randomized controlled trial, meaning participants are randomly assigned to treatment and control groups so the results are not driven by a pre-selected sample. The Department of Education also emphasizes that any intervention or strategy that meets this standard should have “statistically significant and positive” outcomes for students.
     

  2. Moderate evidence
    This is the second-highest bar for any intervention, product, or program, according to the government’s ESSA guide. This evaluation category requires that any intervention first be subject to “at least one well-designed and well-implemented quasi-experimental study.” Quasi-experimental means participants are not randomly assigned to treatment and control groups, unlike in the “strong evidence” category. (The ESSA guide states that the department uses evidence standards taken from the What Works Clearinghouse, an education research initiative run by the department’s Institute of Education Sciences.)
     

  3. Promising evidence 
    This category requires the use of “at least one well-designed and well-implemented correlational study with statistical controls for selection bias.” The requirements for this standard are less rigorous and do not mandate the use of a large sample group, although the expectation is still that any study will result in “statistically significant and positive” outcomes for students.
     

  4. Demonstrates a rationale 
    This category applies to theories of action or “well-defined logic models” that education professionals are in the process of supporting with further research. The premise of this category is that any such theory of action or intervention is undergoing, or will undergo, evaluation that could eventually yield “promising evidence” or better.


For educators hoping to abide by the ESSA’s new framework, this four-tiered evaluation system may be just a starting point. There are websites out there—such as the Evidence for ESSA site run by Johns Hopkins University’s Center for Research and Reform in Education—that attempt to walk school leaders through the evaluation process. However, the Evidence for ESSA guide uses evaluation standards that differ from those provided by the federal government, which means some studies that meet the ESSA’s rigorous guidelines do not meet the Johns Hopkins criteria. (It should be noted that the site also includes reviews of interventions created at Johns Hopkins.)

There are other comprehensive guides out there, such as this one from Florida State University’s Florida Center for Reading Research. This extensive document is geared toward local education agencies (LEAs) and state education agencies (SEAs), the two main decision-making bodies that tend to control what programs and strategies get recommended for schools. Although the guide is long, it includes five clear categories to consider when trying to meet the ESSA’s preference for proven interventions.


Here is a quick look at the Florida State approach:
 

  • Identify local needs. This is a reminder to conduct a needs assessment first (as the ESSA advises) in order to identify the most pressing needs or issues on a local level.
     

  • Select relevant, evidence-based interventions. Once local needs have been identified, it’s time to find the right interventions. Under the ESSA’s guidelines, the strongest interventions will have been put to the test already through “at least one well-designed and well-implemented” study that results in statistically significant, positive outcomes for students. Finding what works and what meets these expectations may take time and research, using guides like Florida State University's or this one from the University of Connecticut. The Connecticut guide offers a list of where to look for worthy interventions, including the National Institutes of Health and the National Institute for Literacy, among others.
     

  • Plan for intervention. This involves identifying how progress will be measured after an intervention has been selected. Of course, states do have to adhere to standardized test-based accountability systems, but these should be paired with frequent, site-specific assessments. On a basic level, it comes down to this: Is the intervention working? Why or why not?
     

  • Implement. The plan for intervention should also include implementation specifics, such as whether a school’s infrastructure is adequate or whether technology needs are up to date. Then, guiding questions about how well the intervention is working are necessary to provide that “local” look at whether the strategies or programs are a good fit for the targeted need or audience.
     

  • Examine and reflect. This step is part of the continuous improvement and evaluation process that underpins any strong professional development program. Making time to stop and assess, to ask questions and revisit goals, or perhaps to adjust goals should be part of any intervention plan.

While it is encouraging to know that the ESSA has put a new emphasis on the importance of using evidence-based strategies, it is largely up to individual states, schools, and districts to figure out which interventions meet the ESSA criteria and which do not. Fortunately, there are quite a few worthwhile guides out there to help educators navigate the changing intervention landscape.
 


LEXIA LEARNING RESEARCH
Evidence-Based, Research-Proven

Lexia programs are proven to improve learning outcomes required by federal mandates under the ESSA. Lexia’s rigorous research portfolio of studies published over the past 15 years meets the highest levels of evidence under the ESSA needed to evaluate instructional programs. Lexia now has 15 externally reviewed research studies that meet the ESSA’s standards of evidence: 8 strong, 3 moderate, and 4 promising.



 

