Stanford expands AI research repository for K–12 as evidence base grows
Updated database now tracks more than 1,200 studies, as schools, policymakers, and developers look for clearer signals on AI use, impact, and risk in classrooms.
[Image: Visualization of neural networks and data pathways, reflecting the growing complexity of AI research and its application in K–12 education]
Stanford University’s AI Hub for Education has expanded its Research Study Repository, adding 133 new papers and bringing the total to 1,278 pre-print and peer-reviewed studies focused on generative AI in U.S. K–12 education.
The update highlights the pace of research activity: the repository grew 11.5 percent in the latest cycle, continuing a monthly growth rate of 10 to 15 percent.
The repository is designed to give education leaders, policymakers, researchers, and technology companies a centralized view of how generative AI is being applied, tested, and reviewed in K–12 settings. It organizes studies into three categories: descriptive research on how AI is used in classrooms and systems, impact studies including randomized controlled trials and quasi-experimental designs, and review papers that synthesize broader evidence.
Chris Agnew, Managing Director of the AI Hub for Education at Stanford University’s SCALE Initiative, shared details of the update in a LinkedIn post, highlighting both the scale of growth and the need for critical evaluation as more research becomes available.
Two-track approach to AI evidence in education
Alongside the repository, Stanford’s AI Hub for Education maintains a second resource, The Evidence Base on AI in K–12: A 2026 Review, which focuses on assessing research quality and identifying trends. While the repository prioritizes coverage and timeliness, the annual review is designed to evaluate rigor and provide a clearer interpretation of findings.
The repository includes pre-print and emerging research, increasing access but also requiring users to assess methodology, relevance, and quality. The annual review aims to address this by identifying stronger evidence and drawing out key patterns for education leaders and decision-makers.
Focus areas include collaboration, skills, and assessment bias
Recent additions to the repository include studies examining how AI can support collaboration and measure student skills, as well as how bias may affect AI-driven assessment.
One study from the University of California, Irvine explores the use of AI to measure inclusion in collaborative problem solving, using both simulated conversations and a human-AI dataset based on a NASA task. Another project involving researchers from the Massachusetts Institute of Technology and the Instituto Politécnico de Bragança investigates how AI tools can support group collaboration, based on feedback from 33 K–12 teachers.
A separate study from the University of Georgia, ETS (Educational Testing Service) Canada, Inc., and The Hong Kong Polytechnic University examines bias in automated scoring for English language learner students, testing a framework to reduce bias in grade eight science assessments.
Volume increases pressure on interpretation
The repository aims to include all relevant research on generative AI in U.S. K–12 education, with inclusion guided by relevance to audiences including district leaders, government bodies, education organizations, and product teams.
As the volume of research continues to expand, the challenge is shifting from access to interpretation. The combination of rapid updates and varying levels of evidence quality means schools and policymakers are likely to rely more on synthesis tools and structured reviews when making decisions about AI adoption, procurement, and classroom use.