Amazon unveils 63 new AI research projects as universities push deeper into agentic AI and security
Amazon has announced 63 recipients for its spring 2025 Amazon Research Awards, selecting researchers from 41 universities in eight countries.
The program provides unrestricted funds, AWS Promotional Credits, access to more than 700 Amazon public datasets, and direct consultation with Amazon research staff.
Amazon Research Awards supports academic work across multiple fields. This cycle covers five calls for proposals: AI for Information Security, Amazon Ads, AWS AI Agentic AI, Build on Trainium, and Think Big. Amazon reviewed proposals for scientific merit and potential societal impact, with the company encouraging code releases, research publications, and participation in Amazon-hosted workshops.
Christine Silvers, AWS Principal Healthcare Advisor, says “Amazon Research Awards are enabling incredibly impactful work to improve human health—from revolutionizing and democratizing structural biology tools, which can accelerate discovery of candidate molecules for new drugs to help patients, to predicting the etiology of a stroke in order to start the appropriate therapies, or interpreting digital phenotyping data to help with mental health services.
“These are just three examples of projects that recipients have received Amazon Research Awards for. The potential for improving healthcare amongst all of the spring 2025 plus past and future awardees is staggering and inspiring.”
Researchers selected for AI security and agentic AI
Several projects in the AI for Information Security call examine emerging risks in machine learning systems, including interpretable vulnerability detection, secure key management for confidential computing, and safe API discovery for agentic AI. Work at Northeastern University, the University of California, Berkeley, and the University of Southern California focuses on understanding how large language models can be exploited and how to design more secure interfaces.
In the Amazon Ads call, researchers at the University of Illinois at Urbana–Champaign and the University of Virginia are studying “Adversarial Misuse of Large Language Models in Digital Advertising: Benchmarking and Mitigation.”
The AWS AI Agentic AI call covers more than 30 funded projects, with topics that include:
Fine-grained planning evaluation for web agents at Carnegie Mellon University
Safety measurement and mitigation for LLM-based agent interactions at Carnegie Mellon University
A retrieval-and-reasoning agent for cross-modality search at the University of California, San Diego
Digital phenotyping analysis using LLM-based assistants at Harvard University
Web automation frameworks using multimodal models at The Chinese University of Hong Kong
Studies on artificial collective intelligence and LLM community dynamics at Cornell University
The projects reflect increasing research attention on long-horizon reasoning, multi-agent coordination, automated computer use, and safety constraints for autonomous AI systems.
Infrastructure, genomics, and health research funded through Trainium and Think Big
Amazon has also funded 20 projects through its Build on Trainium call, supporting work on compiler reliability, multimodal robotic perception, sparse training, kernel optimization, and large-scale model training on AWS hardware. Research teams at Cornell University, the University of California, Irvine, the University of British Columbia, and the University of Illinois Urbana-Champaign are among those investigating new training techniques and efficiency improvements for foundation models.
Other funded projects apply Trainium to autonomous driving, genomic variant interpretation, and chemical modeling. Work at Waseda University and Kingston University London focuses on accelerating vision-language tasks and using language models for non-coding DNA variant analysis.
The Think Big call includes health-focused research such as “AI-powered prediction of ischemic stroke etiologies using multi-modal data” at Yale School of Medicine, “Leveraging Molecular Dynamics to Empower Protein AI Models” at the University of North Carolina at Chapel Hill, and SBCloud at Harvard Medical School, described as “A Transformative Model for Scalable Structural Biology Research.”
Yida Wang, AWS AI Principal Applied Scientist, says “Academic AI researchers face a fundamental challenge: advancing machine learning research and educating the next generation requires access to cutting-edge infrastructure that's both powerful and affordable. The Build on Trainium program directly addresses this barrier. We are working with leading AI research universities such as UC Berkeley, Stanford, CMU, MIT, UIUC, UCLA, and many others.
“At CMU, researchers achieved significant improvements over state-of-the-art FlashAttention in just one week. At MIT, researchers trained 3D medical imaging models with 50% higher throughput and lower cost, reducing training time from months to weeks. Build on Trainium represents AWS's commitment to democratizing AI research through collaborative partnership with academia—fostering an environment where researchers experiment freely, students learn on production-scale infrastructure, and academic innovations shape the future of machine learning for everyone.”