Anthropic study finds AI use can weaken skill development in early learning tasks
Anthropic researchers have published new findings indicating that AI assistance can undermine skill formation when people are learning unfamiliar technical concepts, even without delivering meaningful productivity gains in return.
The study examines how developers learn a new Python library with and without AI support; the results point to lower conceptual understanding, weaker debugging skills, and reduced ability to read code among those who relied heavily on AI.
Productivity gains do not equal learning gains
The study, titled How AI Impacts Skill Formation, focuses on how AI tools affect learning processes rather than end outputs. Researchers ran randomized experiments with developers who were asked to complete programming tasks using an unfamiliar asynchronous Python library. One group had access to an AI assistant, while the control group worked without AI support.
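The article does not name the library or the specific tasks involved. As a purely illustrative sketch, written here with Python's standard asyncio module as an assumed stand-in, the code below shows the kind of asynchronous concepts such a task would require learners to grasp: defining coroutines, scheduling them concurrently, and awaiting their results.

```python
import asyncio

# Hypothetical illustration only: the study's actual library and tasks are not
# named in this article. This sketch uses the standard asyncio module to show
# the style of asynchronous code an unfamiliar async library would involve.

async def fetch_record(record_id: int) -> dict:
    # Simulate an I/O-bound operation, such as a network or database call.
    await asyncio.sleep(0.1)
    return {"id": record_id, "status": "ok"}

async def main() -> None:
    # Schedule several coroutines concurrently and gather their results;
    # misunderstanding this pattern is a typical source of async bugs.
    results = await asyncio.gather(*(fetch_record(i) for i in range(5)))
    print(results)

if __name__ == "__main__":
    asyncio.run(main())
```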
Participants using AI scored an average of 17 percent lower on assessments measuring conceptual understanding, code reading, and debugging. The researchers also found no statistically significant improvement in task completion time for the AI-assisted group overall, challenging the assumption that AI use automatically delivers efficiency gains, whatever its effect on learning.
The gap was most pronounced in debugging tasks. Developers who relied on AI encountered fewer errors during the task itself but were less able to identify and resolve issues independently afterward. In contrast, participants without AI support encountered more errors during task completion, which appeared to strengthen their understanding of how the library worked.
How people use AI matters
The study highlights that AI’s impact on learning depends heavily on how it is used. Researchers identified six distinct patterns of AI interaction, ranging from fully delegating tasks to the AI to limited, concept-focused questioning.
Participants who used AI primarily to generate code or repeatedly debug without deeper engagement showed the weakest learning outcomes. By contrast, those who asked conceptual questions, requested explanations alongside generated code, or used AI to clarify understanding rather than replace thinking performed significantly better on post-task assessments.
The researchers note that interaction styles that preserved learning required more cognitive effort and active engagement. Patterns that outsourced thinking entirely to AI produced faster outputs in some cases, but consistently weaker skill development.
Implications for education and workforce training
The findings have implications beyond software development, particularly for education systems and professional learning environments increasingly adopting AI tools. The researchers argue that AI-enhanced productivity should not be assumed to translate into long-term competence, especially in settings where individuals are expected to supervise, verify, or correct AI-generated work.
The study also raises concerns about overreliance on AI in safety-critical or high-stakes domains, where human oversight depends on strong foundational skills. Without intentional learning design, AI use may reduce the very expertise needed to manage automated systems effectively.
The authors emphasize that AI can support learning when used intentionally, but caution that widespread adoption without structured pedagogical approaches could weaken skill development over time. They conclude that organizations and educators should focus not only on what AI enables people to produce, but on how it shapes the process of learning itself.