Brookings warns AI risks to students now outweigh benefits, urges immediate action
The Brookings Institution’s Center for Universal Education has published a new global research report warning that the current use of generative artificial intelligence in education is putting students at risk, with harms now outweighing benefits.
The report, A new direction for students in an AI world: Prosper, prepare, protect, draws on a yearlong “premortem” study designed to anticipate risks before they become embedded in education systems. The authors argue that decisions made now will determine whether AI strengthens learning or erodes it.
“At this point in its trajectory, the risks of utilizing AI in children’s education overshadow its benefits,” the authors write.
The study was led by the center, which focuses on global education systems and skills development. Rather than assessing AI after widespread adoption, the research was designed to identify potential failure points early.
The authors conducted interviews, focus groups, and consultations with 505 students, teachers, parents, education leaders, and technologists across 50 countries. The research also included a review of more than 400 academic and policy studies, alongside a Delphi panel of 21 experts.
Brookings describes the effort as a “premortem” intended to surface risks before they become systemic. The report states that feedback leaned negative overall, with 56 percent of responses highlighting harms compared with 44 percent citing benefits.
Foundational learning is where risks are emerging first
The report distinguishes between two possible paths: AI-enriched learning and AI-diminished learning.
Well-designed tools, used as part of a pedagogically sound approach, can support learning. However, Brookings finds that overreliance on generative AI is already affecting students’ capacity to learn independently, build relationships, and develop social and emotional skills.
“These risks undermine children’s foundational development,” the authors write, noting that weakened trust between students and teachers can prevent potential benefits from being realized.
The report emphasizes that the nature of the risks differs from the nature of the benefits. While benefits tend to be additive or supplemental, risks can disrupt the conditions required for learning to occur at all.
Prosper, Prepare, Protect frames Brookings’ recommendations
To address these challenges, Brookings proposes three pillars for action: Prosper, Prepare, and Protect. The framework is intended for governments, education systems, families, and technology companies.
Prosper focuses on how AI is used inside learning environments. The report calls for AI tools that support practice, feedback, and exploration rather than replacing thinking. It stresses that AI should “teach, not tell,” and be deployed as part of intentional instructional design rather than as a shortcut.
Prepare centers on system readiness. Brookings highlights the need for educator professional learning, student AI literacy, and long-term planning. The report argues that without preparation, AI adoption will continue to be uneven, reactive, and misaligned with learning goals.
Protect addresses safeguards. This pillar focuses on student safety, privacy, emotional well-being, and cognitive development. The authors call for ethical AI design, stronger governance, and clearer roles for adults in guiding AI use.
Across all three pillars, the report urges action within a defined timeframe. “We urge all relevant actors to identify at least one recommendation to advance over the next three years,” the authors write.
Brookings is explicit that the trajectory of AI in education is not fixed. The report argues that AI has the potential to either enrich or diminish learning depending on choices made now.
“The actions we take—or fail to take—now will determine whether AI becomes education’s greatest asset for student learning and wellbeing or its most profound threat to student flourishing,” the authors write.