How do you know whether an edtech product is effective in delivering its intended outcomes? As the number of edtech products has ballooned in the past five years, educators—and parents—seek information to help them make the best decision. Companies, unsurprisingly, are happy to help "prove" their effectiveness by publishing their own studies, sometimes in partnership with third-party research groups, to validate the impact of a product or service.
But oftentimes, that research draws incorrect conclusions or is “complicated and messy,” as Alpha Public Schools’ Personalized Learning Manager Jin-Soo Huh describes it. With a new school year starting, and many kids about to try new tools for the first time, now is a timely moment for educators to look carefully at studies: scrutinize the marketing language, check the data for accuracy, and ask whether it shows causation or mere correlation. “[Educators] need to look beyond the flash of marketing language and bold claims, and dig into the methodology,” Huh says. But it’s also up to companies and startups to question their own commissioned research.
To help educators and companies alike become better critics, here are a few pieces of advice from administrators and researchers to consider when reviewing efficacy studies, and when deciding whether the products are worth your time or attention.
For Educators
#1: Look for the “caveat statements,” because they might discredit the study.
According to Erin Mote, co-founder of Brooklyn Lab Charter School in New York City, one thing she and her team look for in studies is “caveat statements,” in which a study essentially admits that it cannot fully draw a link between the product and an outcome.
“[There are] company studies that can't draw a definitive causal link between their product and gains. The headline is positive, but when you dig down, buried in three paragraphs are statements like this,” she tells EdSurge, pointing to a Digital Learning Now study about math program Teach to One (TtO):
Mote also describes her frustration with companies that invoke research as a marketing tactic, mentioning a study and a product side by side in a brief, 140-character Tweet or a Facebook post even though the study is not about the product itself, as in the Zearn Tweet below. “I think there is danger in linking studies to products which don't even talk about the efficacy of that product,” Mote says, noting that companies that do this effectively co-opt research that is unrelated to their products.
#2: Be wary of studies that report “huge growth” without running a proper experiment or revealing complexities in the data.
Methodology matters. According to Aubrey Francisco, Director of Research at Digital Promise, something consumers should look for is “whether or not the study is rigorous,” specifically by asking questions like the following four:
- Is the sample size large enough? (One way to make “large enough” concrete is a power calculation; see the sketch after this list.)
- Is the sample spread across multiple contexts?
- Are the treatment and control groups well matched?
- Is the study actually relevant to my school, grade level, or subject area?
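On the first of those questions, “large enough” can be made concrete with a power calculation. Here is a minimal sketch in Python, using a standard normal-approximation formula with illustrative thresholds (the effect size and cutoffs are assumptions, not figures from any study discussed here):

```python
import math

from scipy.stats import norm


def min_sample_per_group(effect_size, alpha=0.05, power=0.8):
    """Approximate per-group n for a two-sample comparison of means,
    via the normal approximation: n = 2 * ((z_{1-a/2} + z_power) / d)^2."""
    z_alpha = norm.ppf(1 - alpha / 2)  # critical value for a two-sided test
    z_beta = norm.ppf(power)           # quantile for the desired power
    return math.ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)


# Detecting a modest effect (Cohen's d = 0.3) at conventional thresholds
# takes roughly 175 students per group; a 20-student pilot cannot do it.
print(min_sample_per_group(effect_size=0.3))  # -> 175
```

In other words, a study touting dramatic gains from a few dozen students, with no well-matched comparison group, can't carry much evidential weight.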
Additionally, what if a company claims massive growth as indicated by a study, but the data in the report doesn’t support those claims?
Back in the early 2000s, John Pane and his team at the RAND Corporation set out to determine the effectiveness of Carnegie Learning’s Cognitive Tutor Algebra. Justin Reich, an edtech researcher at Harvard University, wrote at length about the study, conceding that the team “did a lovely job with the study.”
However, Reich pointed out that users should be wary of claims made by Carnegie Learning marketers that the product “doubles math learning in one year” when, as Reich describes, “middle school students using Cognitive Tutor performed no better than students in a regular algebra class.” He continues:
Here’s another example: In a third-party study released by writing and grammar platform NoRedInk, involving students at Shadow Ridge Middle School in Thornton, CO, the company claims that every student who used NoRedInk grew at least 3.9 language RIT (student growth) points on the widely used MAP exam, the equivalent of at least one grade level, as demonstrated in a graph (shown below) on the company’s website. But upon further investigation, there are a few issues with the bar graph, says Alpha administrator Jin-Soo Huh.
While the graph shows that roughly 3.9 RIT points equate to one grade level of growth, there’s more to the story, Huh says. That number is the growth expected for an average student at that grade level, but in reality the target varies from student to student: “One student may need to grow by 10 RIT points to achieve one year of typical growth, while another student may just need one point,” Huh says. The conclusion: the NoRedInk student users who grew 3.9 points “may or may not have hit their yearly growth expectation.”
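Huh’s point is easy to see with numbers. Here is a minimal sketch with entirely hypothetical students and growth norms (actual per-student norms come from NWEA’s published tables and are not reproduced here), comparing each student’s observed RIT gain to that student’s own expected growth rather than to the 3.9-point average:

```python
# Hypothetical students: each has an observed RIT gain and an individual
# expected-growth norm, which in practice varies by grade and starting score.
students = [
    {"name": "Student A", "observed_gain": 4.0, "expected_gain": 10.0},
    {"name": "Student B", "observed_gain": 4.0, "expected_gain": 1.0},
    {"name": "Student C", "observed_gain": 3.9, "expected_gain": 3.9},
]

for s in students:
    met = s["observed_gain"] >= s["expected_gain"]
    print(f'{s["name"]}: +{s["observed_gain"]} RIT '
          f'(needed {s["expected_gain"]}) -> '
          f'{"met" if met else "missed"} yearly growth expectation')
```

Every student here gained roughly the advertised 3.9 points, yet Student A still fell short of a year of typical growth, which is exactly the ambiguity Huh flags.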
Additionally, one will find another “caveat” statement on Page 4 of the report, which reads: “Although answering more questions is generally positively correlated with MAP improvement, in this sample, there was not a statistically significant correlation with the total number of questions answered.”
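That caveat is worth unpacking, because it is checkable. Here is a minimal sketch of how such a correlation is tested, with made-up numbers standing in for the report’s per-student data (which is not public):

```python
from scipy.stats import pearsonr

# Made-up per-student data: total questions answered vs. MAP RIT improvement.
questions_answered = [120, 340, 95, 410, 210, 180, 300, 60]
rit_improvement = [3.0, 5.1, 4.2, 2.8, 6.0, 3.5, 4.0, 2.1]

r, p = pearsonr(questions_answered, rit_improvement)
print(f"Pearson r = {r:.2f}, p-value = {p:.2f}")
# With this toy data, r is weakly positive but p is far above 0.05: the same
# pattern the report describes, where the trend is positive but not
# statistically significant, so it cannot support a claim that more practice
# caused more growth.
```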
When it comes to MAP testing, Jean Fleming, NWEA’s VP of Communications, says “NWEA does not vet product efficacy studies and cannot offer insight into the methodologies used on studies run outside our organization.” All the more reason, then, for users to be aware of potential snags.
For Companies
#1: Consider getting your “study” or “research” reviewed.
No one is perfect, but according to Alpha administrator Jin-Soo Huh, “Edtech companies have a responsibility when putting out studies to understand data clearly and present it accurately.”
To help, Digital Promise launched an effort on Aug. 9 to evaluate whether or not a research study meets its standard of quality. (Here are a few studies that the nonprofit says pass muster, listed on Digital Promise’s “Research Map.”) Digital Promise and researchers from Columbia Teachers College welcome research submissions from edtech companies between now and September in three categories:
- Learning Sciences: How developers use scientific research to justify why a product might work
- User Research: Rapid turnaround-type studies, where developers collect and use information (both quantitative and qualitative) about how people are interacting with their product
- Evaluation Research or Efficacy Studies: How developers determine whether a product has a direct impact on learning outcomes
#2: Continue conducting or commissioning research studies.
Jennifer Carolan, a teacher-turned-venture capitalist, says both of her roles have required her to be skeptical of product efficacy studies. But Carolan is also the first to admit that measuring efficacy is hard, and that it needs to keep happening:
When asked about the state of edtech research, Francisco says it’s progressing, but there’s work to be done. “We still have a long way to go in terms of being able to understand product impact in a lot of different settings,” she writes. She agrees with Carolan, though, adding that the risk of research missteps shouldn’t deter companies from conducting or commissioning studies.
“There’s a lot of interest across the community in conducting better studies of products to see how they impact learning in different contexts,” Francisco says.