King’s College London moves to formalize AI ethics as research use accelerates
New workshops and policy work highlight a widening gap between rapid AI adoption in research and limited confidence in how to use it responsibly.
King’s College London is moving to formalize how generative AI is used in research after internal findings showed widespread adoption among academics alongside uncertainty over ethical practice.
The university’s Institute for Artificial Intelligence and Research Ethics Office have been working with researchers across disciplines to understand how AI tools are already being used in day-to-day work, from drafting and data analysis to shaping research questions. The work reflects a broader shift across higher education, where AI is no longer experimental but embedded, often ahead of formal policy.
Survey data collected in 2025 from 275 respondents shows that 72% of researchers already use AI tools and 78% plan to use them in the future. At the same time, 81% rate their confidence in navigating the ethical implications as low to moderate.
Researchers move ahead of institutional guidance
The findings suggest that the debate within universities has shifted. Researchers are not asking whether to use generative AI, but how to use it in ways that align with research integrity, transparency, and existing ethical standards.
To respond, King’s structured its work around the UK Research and Innovation AREA framework (anticipate, reflect, engage, act), using it to guide both data collection and institutional response. An initial survey helped identify the scale of adoption and the types of concerns researchers are facing, from disclosure to data handling.
The results point to a clear gap: AI tools are already part of research workflows, but expectations around their use remain unclear or inconsistent.
Workshops surface practical tensions in AI use
Six workshops held in autumn 2025 brought researchers together to examine how AI is being applied in practice. The events were structured as open discussions rather than formal training sessions, allowing participants to share use cases, challenges, and uncertainties.
Researchers identified clear benefits, particularly in speeding up routine tasks and generating alternative approaches to problems. But these were balanced by concerns around confidentiality, uneven access to tools, and a lack of clarity about what constitutes acceptable use.
One method used in the sessions was “micronarratives,” where researchers worked with AI tools to co-create short scenarios based on real research situations. The approach was designed to expose assumptions and risks in context, rather than relying on abstract guidance.
The workshops also highlighted a cultural issue. Researchers indicated a need for shared expectations within departments, suggesting that without clearer norms, responsible use risks being either hidden or inconsistently applied.
Policy begins to follow practice
King’s is now using the findings to inform changes to its research governance processes, including updates to ethics review guidance and the development of training resources focused on practical scenarios.
The university also plans to continue engagement with sector partners, contributing its findings to wider discussions on AI use in research and higher education.
What emerges is less a question of adoption and more one of control. Researchers at King’s are already integrating AI into core academic work, often ahead of formal guidance. The institution’s response suggests universities are now in a catch-up phase, trying to define acceptable use after it has already become normalized. For leaders, the challenge is not introducing AI into research, but setting boundaries that are clear enough to follow and flexible enough to keep up.