Efficacy in Educational Technology: Guidelines for Evaluating What Really Works
Many educators know the struggle of helping students identify reliable sources of information. At a time when virtually any information can be found online—whether true or not—seeking out verified, credible sources can feel like a Herculean task. Literacy educators in particular work hard to help their students pinpoint reliable and up-to-date resources.
It's ironic, then, that educators frequently find themselves without the necessary resources to evaluate their teaching tools. When shopping for educational technology (edtech), educators want to know whether the technology is grounded in up-to-date research, whether it's proven to be an effective teaching tool, and whether it will address their students' unique needs. Unfortunately, it can be difficult to find data that address all of these concerns. For example, edtech may be touted as "research-based," but what does that really mean? Often, this terminology is used to indicate that the edtech was developed to address key skills identified in current education research. However, educators also want to know whether the edtech is actually an effective teaching tool. After all, they won't want to spend money and time on classroom implementation without evidence to suggest that their students' learning and performance will likely improve.
By contrast, the "research-proven" label gives educators and administrators a scientific basis for trusting that an edtech product is effective. Edtech proven to work well in the classroom will have objective, independent reviews and studies showing that it makes a difference in student learning. Including control groups, comparing pre- and post-test results, using standardized and norm-referenced assessments, and analyzing the data statistically are all key components of demonstrating that edtech is research-proven.
Even after identifying research-proven edtech, educators may still have concerns about whether it will be a good fit for their students' particular needs. For example, a school with a high population of English Language Learners (ELLs) may wonder whether a literacy program is effective for ELL students, or a special educator may be unsure whether the edtech can be adapted for students with special needs.
Educators and researchers have already begun working to address these school- and district-specific concerns. In a 2015 Brookings report titled Using Research to Improve Education Under the Every Student Succeeds Act, author Mark Dynarski proposed a two-pronged approach to evaluating education tools. In the first stage, educators look for efficacy research to determine which programs have been shown to be research-proven. In the second stage, educators and districts work closely with researchers to identify how to implement proven approaches in a way that meets their schools' unique needs. This "implementation science" aims to assist schools that have fallen short of targeted goals, or that aren't satisfied with their outcomes using research-proven edtech, by helping them work directly with researchers to make a plan for implementation. According to Dynarski, "Using a two-stage model for generating evidence on effective, implementable interventions will help put experiments and improvement science into balance."
It is certainly possible that individual schools and districts might not have the resources to consult an implementation scientist—at least, not yet. Fortunately, there are many ways that educators can use their own data to determine how research-proven edtech can be implemented effectively for their students. In What is Scientifically-Based Research?, the National Institute for Literacy affirmed that educators often use scientific thinking throughout their work in the classroom. "Teachers use experimental logic when they plan for instruction: they evaluate their students' previous knowledge, construct hypotheses about the best methods for teaching, develop teaching plans based on those hypotheses, observe the results, and base further instruction on the evidence collected," the institute noted. Let's look at how educators can use their own research skills and scientific thinking to choose effective, research-proven edtech that meets the particular needs of their students.
First, when reviewing a prospective edtech, determine whether it is research-proven by asking the following questions:
Has the edtech been reviewed by other educators?
Were efficacy studies conducted? Did the studies use experimental methods, such as including a control group?
Were standardized assessments used to evaluate whether the edtech helped students improve their skills? Were pre- and post-tests administered?
Were statistical tests conducted to show that the edtech is effective?
Did the efficacy studies undergo external, peer-reviewed evaluations?
Were the studies published in scientific journals?
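To make the quantitative checks above more concrete, here is a minimal sketch of the kind of analysis an efficacy study might report: comparing pre- to post-test gains for a treatment group (students using the edtech) against a control group, and computing an effect size. All scores below are hypothetical illustrative data, not drawn from any real study, and a real efficacy study would use far larger samples and formal significance testing.

```python
# Hedged sketch of a pre/post gain comparison with a control group.
# All student scores are hypothetical, for illustration only.
from statistics import mean, stdev

# (pre-test, post-test) standardized scores for each student
treatment = [(62, 71), (55, 62), (70, 79), (58, 64), (64, 73)]  # used the edtech
control   = [(60, 66), (57, 61), (68, 73), (59, 63), (66, 71)]  # did not

def gains(pairs):
    """Gain score for each student: post-test minus pre-test."""
    return [post - pre for pre, post in pairs]

t_gain, c_gain = gains(treatment), gains(control)

# Cohen's d effect size on the gain scores, using the pooled
# standard deviation of the two groups.
n_t, n_c = len(t_gain), len(c_gain)
pooled_sd = (((n_t - 1) * stdev(t_gain) ** 2 +
              (n_c - 1) * stdev(c_gain) ** 2) / (n_t + n_c - 2)) ** 0.5
cohens_d = (mean(t_gain) - mean(c_gain)) / pooled_sd

print(f"Mean gain (treatment): {mean(t_gain):.1f}")
print(f"Mean gain (control):   {mean(c_gain):.1f}")
print(f"Cohen's d:             {cohens_d:.2f}")
```

The point of the sketch is simply that a credible efficacy claim rests on a comparison: how much students improved with the edtech relative to how much similar students improved without it, summarized by an effect size rather than a single before/after number.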
After determining that a particular edtech is research-proven, consider whether it meets the needs of your students:
What literacy skills are my students developing? Where do they need support?
What is the time commitment and monetary cost of the edtech? Does our school have the resources to implement this effectively?
Does the edtech provider offer sufficient training services for effective implementation?
How can I expect student performance to improve using the edtech? What measuring tools will show me that performance is improving?
Literacy educators know the importance of identifying reliable sources and applying their own critical thinking. After all, educators are no strangers to scientific thinking, as nearly everything about their jobs could be described in very scientific terms: assessing, hypothesizing, implementing, and assessing again. When deciding upon edtech, educators can use their research skills to identify research-proven approaches, and their critical thinking skills to evaluate how the edtech could be implemented most effectively.