Teachers need to be scientists themselves, experimenting and measuring the impact of powerful AI products on education.

American technologists have been telling educators to rapidly adopt their new inventions for over a century. In 1922, Thomas Edison declared that in the near future all school textbooks would be replaced by film strips, because text was only 2% efficient while film was 100% efficient. Those bogus statistics are a good reminder that people can be brilliant technologists while also being inept education reformers.

I think of Edison whenever I hear technologists insisting that educators have to adopt artificial intelligence as rapidly as possible to get ahead of the transformation that’s about to wash over schools and society.

At MIT, I study the history and future of education technology, and I have never encountered an example of a school system – a country, state or municipality – that rapidly adopted a new digital technology and saw durable benefits for its students. The first districts to encourage students to bring mobile phones to class did not prepare youth for the future any better than districts that took a more cautious approach. There is no evidence that the first countries to connect their classrooms to the internet stand apart in economic growth, educational attainment or citizen well-being.

New education technologies are only as powerful as the communities that guide their use. Opening a new browser tab is easy; creating the conditions for good learning is hard.

It takes years for educators to develop new practices and norms, for students to adopt new routines, and for families to identify new support mechanisms in order for a novel invention to reliably improve learning. But as AI spreads through schools, both historical analysis and new research conducted with K-12 teachers and students offer some guidance on navigating uncertainties and minimizing harm.

We’ve been wrong and overconfident before

I started teaching high school history students to search the web in 2003. At the time, experts in library and information science developed a pedagogy for web evaluation that encouraged students to closely read websites looking for markers of credibility: citations, proper formatting, and an “about” page. We gave students checklists like the CRAAP test – currency, relevance, authority, accuracy and purpose – to guide their evaluation. We taught students to avoid Wikipedia and to trust websites with .org or .edu domains over .com domains. It all seemed reasonable and evidence-informed at the time.

The first peer-reviewed article demonstrating effective methods for teaching students how to search the web was published in 2019. It showed that novices who used these commonly taught techniques performed miserably in tests evaluating their ability to sort truth from fiction on the web. It also showed that experts in online information evaluation used a completely different approach: quickly leaving a page to see how other sources characterize it. That method, now called lateral reading, resulted in faster, more accurate searching. The work was a gut punch for an old teacher like me. We’d spent nearly two decades teaching millions of students demonstrably ineffective ways of searching.

Today, there is a cottage industry of consultants, keynoters and “thought leaders” traveling the country purporting to train educators on how to use AI in schools. National and international organizations publish AI literacy frameworks claiming to know what skills students need for their future. Technologists invent apps that encourage teachers and students to use generative AI as tutors, as lesson planners, as writing editors, or as conversation partners. These approaches have about as much evidential support today as the CRAAP test did when it was invented.

There is a better approach than making overconfident guesses: rigorously testing new practices and strategies and only widely advocating for the ones that have robust evidence of effectiveness. As with web literacy, that evidence will take a decade or more to emerge.

But there’s a difference this time. AI is what I have called an “arrival technology.” AI is not invited into schools through a process of adoption, like buying a desktop computer or smartboard – it crashes the party and then starts rearranging the furniture. That means schools have to do something. Teachers feel this urgently. Yet they also need support: Over the past two years, my team has interviewed nearly 100 educators from across the U.S., and one widespread refrain is “don’t make us go it alone.”

3 strategies for a prudent path forward

While waiting for better answers from the education science community, which will take years, teachers will have to be scientists themselves. I recommend three guideposts for moving forward with AI under conditions of uncertainty: humility, experimentation and assessment.

First, regularly remind students and teachers that anything schools try – literacy frameworks, teaching practices, new assessments – is a best guess. In four years, students might hear that what they were first taught about using AI has since proved to be quite wrong. We all need to be ready to revise our thinking.

Second, schools need to examine their students and curriculum, and decide what kinds of experiments they’d like to conduct with AI. Some parts of your curriculum might invite playfulness and bold new efforts, while others deserve more caution.

In our podcast “The Homework Machine,” we interviewed Eric Timmons, a teacher in Santa Ana, California, who teaches elective filmmaking courses. His students’ final assessments are complex movies that require multiple technical and artistic skills to produce. An AI enthusiast, Timmons uses AI to develop his curriculum, and he encourages students to use AI tools to solve filmmaking problems, from scripting to technical design. He’s not worried about AI doing everything for students: As he says, “My students love to make movies. … So why would they replace that with AI?”

It’s among the best, most thoughtful examples of an “all in” approach that I’ve encountered. Yet I can’t imagine recommending a similar approach for a course like ninth grade English, where the pivotal introduction to secondary school writing probably calls for more caution.

Third, when teachers do launch new experiments, they should recognize that local assessment will happen much faster than rigorous science. Every time schools launch a new AI policy or teaching practice, educators should collect a pile of related student work produced before AI entered the classroom. If you let students use AI tools for formative feedback on science labs, grab a pile of circa-2022 lab reports. Then, collect the new lab reports. Review whether the post-AI lab reports show an improvement on the outcomes you care about, and revise practices accordingly.

Between local educators and the international community of education scientists, people will learn a lot by 2035 about AI in schools. We might find that AI is like the web: a place with some risks, but ultimately so full of important, useful resources that we continue to invite it into schools. Or we might find that AI is like cellphones, where the negative effects on well-being and learning ultimately outweigh the potential gains, and that it is best handled with more aggressive restrictions.

Everyone in education feels an urgency to resolve the uncertainty around generative AI. But we don’t need a race to generate answers first – we need a race to be right.

This article is republished from The Conversation, a nonprofit, independent news organization bringing you facts and trustworthy analysis to help you make sense of our complex world. It was written by: Justin Reich, Massachusetts Institute of Technology (MIT)


Justin Reich has received funding from Google, Microsoft, Apple, the Bill & Melinda Gates Foundation, the Chan Zuckerberg Initiative, the Hewlett Foundation, education publishers, and other organizations that are involved in technology and schools.