This blog was written by Mary Burns, Education Development Center.
If you’ve spent any time in the world of technology in education, particularly in donor-funded work in developing countries, you probably know the general lament about the many deficiencies of educational technology research: its paucity, its poverty and its lack of purity.
Yet after four decades of technology use in schools, the “ed tech” research field has been diligently building an evidence base of what works. This ed tech research is increasingly abundant and of increasingly high quality. The creation of global research partnerships like the EdTech Hub and the Abdul Latif Jameel Poverty Action Lab, and the research being generated around the 2023 Global Education Monitoring Report, which focuses on technology in education, bode well for the further development and dissemination of high-quality research on educational technology, particularly in the Global South.
In fact, though problems and challenges remain in the quality of ed tech research (and practice), the world of educational technology research is dynamic, complex and nuanced. For someone who uses research to design programmes, this is an exciting time, and it is time to update, and add some nuance to, the dominant narrative around ed tech and ed tech research. This article attempts to do that by examining and unpacking many of the discrete assertions that constitute much of that narrative.
“We don’t need educational technology research”
Of course, governments and donors need rigorous, impartial research and accurate information to guide technology investments. Education systems need to incorporate data and evidence into planning and decision-making on a larger scale so that they can disseminate and scale evidence-based methods. Finally, implementing agencies and teachers need to know which technologies, and what practices with technology, hold the most promise in terms of reach and learning, particularly for traditionally underserved populations.
Now, few of us in the ed tech world publicly state that ed tech research is unnecessary. However, the actions of donors, implementers and governments suggest that we often ignore or eschew research even when we have access to it. To wit: Ministries of Education determined to buy a suite of technology products to “reform” their education systems yet unwilling to spend money on improving teacher quality; the continuous funding of technology integration training that contravenes research and evidence (the cascade approach, anyone?); the development of programmes based on very thin evidence (such as “learning styles”); and the avoidance of evidence-based technology practices (using technology in concert with more directive, traditional instruction) that offend our worldview on instruction.
“The research that exists is from the ‘Global North’; findings don’t apply in the Global South”
It’s certainly true that most educational technology research comes from the Global North. It’s also true that within donor-funded programmes the research base is often so limited that what we do not know about effective technology use far outweighs what we do know.
However, “best practices” from the Global North are often universal regardless of context: good teaching is good teaching. Additionally, the research landscape has become far more diversified, thanks to initiatives like J-PAL and the EdTech Hub, whose research focuses on the Global South. The internet, particularly the open access movement, has given universities in the Global South access to content and resources via open access research journals and subscription services. The development of these online databases and journals means that university faculty in the Global South are not simply consuming more research; they are increasingly producing more of it, too.
“Technology does not improve student learning”
We are often quick to dismiss or exaggerate the benefits of ed tech. However, measuring its impact on learning is particularly problematic for a variety of reasons, three of which are noted here.
First, the convergence of technologies makes it difficult to untangle their effects. In a one laptop per child programme, for example, are we looking at the hardware, the software on that hardware, or the way the computers are used as part of learning?
Second, technology programmes can be initiated for vastly different purposes and intended student outcomes. One programme might equip all students with tablets for tutoring or test preparation. Another might use computers to promote workforce skills such as digital literacy. A third might provide students with an Internet-connected laptop to ensure uninterrupted learning in the event of a pandemic or conflict. In each of these examples, the relevant study outcomes are different, and may be difficult to measure. Even though technology use may not always lend itself to empirical measurement of student learning, it may still have direct and indirect educational and personal benefits that make it no less valuable.
Finally, technology’s impact on learning is difficult to measure because learning itself is difficult to measure. Research has consistently inveighed against student test scores as an adequate measure of learning in general, and has pointed to the “inherent difficulties” of attributing cause-effect relationships to technology or of attempting to isolate the effects of technology as a distinct independent variable (pp. 93-94).
“There’s no good research on (name your hardware, software)”
There is often no rigorous body of research on newer applications (e.g. personalised learning applications). Even where research on a particular technology intervention exists, much of it may be of poor quality, or funded and sponsored by the technology vendor that created the product; studies may be small-scale and short-term. Such poorly designed studies make it difficult to detect any effects of the technology.
Yet there is abundant high-quality research on “older” technologies, such as interactive radio instruction, educational television for children, intelligent tutoring systems (ITS) and virtual tutoring, and on their relationship to student learning.
In the case of tutoring, for example, there is a small but growing body of literature, based on randomised studies in Europe and the US, examining the positive impact of online tutoring on student learning. And ITS, which have been around for decades, enjoy a robust body of research supporting their use for tutoring. A meta-review of 50 studies on ITS concludes that they can ‘match the success’ of human tutoring (p. 67), while other research has found ‘a significant advantage of ITS over teacher-led classroom instruction and non-ITS computer-based instruction’.
“But it’s not a Randomised Control Trial or a meta-evaluation…”
Though RCTs are the gold standard for producing reliable evidence, there’s a growing realisation that they are not perfect, that many questions can’t be studied using an RCT, and that they are time-consuming and expensive. Even meta-evaluations, which are particularly powerful and provide valuable information, have their limits: ‘no single one is capable of answering the overarching question of the overall impact of technology use on student achievement’ (p. 5).
Just because there isn’t a randomised trial or a meta-evaluation supporting a given technology does not mean that the technology is not effective. There may be no randomised trials of a particular technology, for example, but there may be overwhelming observational evidence; studies using secondary data analysis; or rigorous qualitative studies, like ethnographies and case studies, showing the technology’s effectiveness compared with other types of interventions (an example of the latter would be studies of the effects of telenovelas on attitudes and behaviours).
Studies using secondary data analysis and sophisticated statistical techniques may not be as powerful as RCTs for providing evidence of effectiveness, but these methodologies can answer important questions that RCTs cannot test. Rigorous qualitative studies, like ethnographies and case studies (the latter are deemed acceptable evidence by the US Department of Education’s What Works Clearinghouse), can provide a wealth of information about teacher and student experiences with technology, and the contexts in which they work.
“Research is the responsibility of researchers”
In fact, educational technology research is the responsibility of all of us working in donor-funded programmes, and we need to keep updating the narrative on ed tech research, particularly in the Global South. All of us in the world of education development have a role to play:
- Donors need to fund research that continues beyond the life of a project and that concentrates not just on the technology itself but also on the systems, stakeholders and conditions that influence its effective use.
- Practitioners need to respect research, become fluent in interpreting it and follow it, even when it contravenes our political views.
- Researchers should ensure that they clearly explain methodologies in understandable terms and provide utilisation-focused information that is practical rather than purely theoretical; accessible, both in the cost of access and in the quality of the writing; and inclusive of the experiences of teachers and practitioners.
- Finally, technology companies should share the extensive data they collect on users. These data can be combined with self-reported data from technology users and with other external and internal research. Such data sharing would go far in enhancing our knowledge of the associations between technology and learning.