Google DeepMind hires its first philosopher as machine consciousness moves up the agenda

Henry Shevlin brings a decade of AI ethics research from the University of Cambridge to one of the world's leading AI labs, as questions around machine consciousness move from academic theory to industry urgency.

Henry Shevlin, Associate Director at the Leverhulme Centre for the Future of Intelligence at the University of Cambridge, is joining Google DeepMind in May as a Philosopher, working on machine consciousness, human-AI relationships, and AGI readiness. Photo credit: Henry Shevlin

Henry Shevlin, Associate Director at the Leverhulme Centre for the Future of Intelligence at the University of Cambridge, is joining Google DeepMind in May 2026 as a Philosopher, a role focused on machine consciousness, human-AI relationships, and AGI readiness.

He will continue at Cambridge part-time, maintaining his research and teaching at the Leverhulme Centre.

Announcing the move on LinkedIn, Shevlin wrote: "It's a rare privilege to work on questions I've spent my career thinking about, now with the resources and urgency that come with being inside one of the world's leading AI labs."

The appointment signals a growing appetite among major AI labs to bring academic philosophers and ethicists directly into their research teams, rather than engaging them as external advisors.

From Cambridge to DeepMind

Shevlin has spent over nine years as a Senior Research Associate at Cambridge, working across cognitive science, AI ethics, animal minds, and consciousness. He holds a PhD in Philosophy from the City University of New York Graduate Center, along with a BPhil in Philosophy and a BA in Classics from the University of Oxford, and has published in journals including the Journal of Consciousness Studies.

At the Leverhulme Centre, he has served as a course lead for the MSt in AI Ethics and Society, a postgraduate program at the University of Cambridge, with direct responsibility for 120 graduate students. His research has focused on bridging philosophy, cognitive science, and AI to assess the ethical and societal impacts of both near- and long-term developments in the field.

His remit (machine consciousness, human-AI relationships, and AGI readiness) reflects how seriously DeepMind is now treating questions that were largely theoretical five years ago. Meanwhile, the Leverhulme Centre's MSt in AI Ethics and Society, which Shevlin co-leads, will need to absorb his reduced availability at a time when demand for AI ethics expertise in higher education is growing.
