Anthropic Interviewer captures workforce tensions as AI reshapes creative and scientific roles
Anthropic has used a new tool, Anthropic Interviewer, to carry out 1,250 in-depth interviews with workers, creatives, and scientists, revealing a workforce that is embracing AI for productivity while wrestling with stigma, questions of trust, and uncertainty about long-term impact.
Anthropic, the company behind Claude, has published new research into how people actually use AI at work, drawing on 1,250 interviews with professionals across the general workforce, creative industries, and scientific research.
The results show strong signs of optimism and productivity gains, but also consistent worries about job identity, peer judgment, and whether current systems can be trusted for critical tasks.
Across the full sample, most participants reported that AI saves them time and improves the quality of at least some of their work. Yet the same interviews highlight social stigma around AI use in offices and studios, limited trust in AI for core scientific decisions, and a split between how people say they use AI and what Anthropic has previously observed in real conversation data.
Anthropic is positioning this as more than a one-off survey. The interviews were all conducted using Anthropic Interviewer, a Claude-powered tool that automates qualitative interviews at scale and passes the transcripts back to human researchers. The company says it wants to use the system to track how AI affects work over time and to feed those findings into product development and policy recommendations.
New interview tool aims to fill the gap between usage logs and lived experience
Anthropic Interviewer sits at the center of the project. Rather than running a traditional study with small numbers of human interviewers, Anthropic built a system prompt and workflow that allows Claude to conduct structured, adaptive interviews with thousands of participants.
For this study, the tool was embedded into Claude.ai. Participants saw a dedicated module inviting them to take part in a 10–15 minute conversation about AI and their work. The same system will now appear as a pop-up for some Claude users as Anthropic begins a wider public pilot.
The company says earlier internal tools could only analyze what happened inside the chat window (prompts, responses, and task types). Anthropic Interviewer is designed to ask what happens after that: how outputs are used, which tasks people will not hand over, and what they want AI to look like in the future.
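Anthropic has not published the tool's implementation, but the description above suggests a fairly simple loop: a research goal embedded in a system prompt, Claude asking one adaptive question per turn, and the participant answering in text. The following sketch, written against the Anthropic Python SDK, is only an illustration under those assumptions; the prompt wording, turn limit, and end-of-interview marker are invented for the example.

# Illustrative sketch only: not Anthropic's published implementation.
# The research goal text, turn limit, and "[END]" marker are assumptions.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

RESEARCH_GOAL = (
    "Understand how individuals integrate AI tools into their professional "
    "workflows, exploring usage patterns, task preferences, and interaction styles."
)

SYSTEM_PROMPT = (
    "You are a qualitative research interviewer.\n"
    f"Research goal: {RESEARCH_GOAL}\n"
    "Ask one open-ended question at a time, follow up on interesting details, "
    "and keep the conversation to roughly 10-15 minutes. "
    "When the interview feels complete, thank the participant and write [END]."
)

def run_interview(max_turns: int = 20) -> list[dict]:
    """Alternate Claude's adaptive questions with the participant's typed answers."""
    transcript = [{"role": "user", "content": "I'm ready to begin the interview."}]
    for _ in range(max_turns):
        reply = client.messages.create(
            model="claude-sonnet-4-20250514",
            max_tokens=400,
            system=SYSTEM_PROMPT,
            messages=transcript,
        )
        question = reply.content[0].text
        transcript.append({"role": "assistant", "content": question})
        if "[END]" in question:
            break
        answer = input(f"\n{question}\n> ")  # participant's typed response
        transcript.append({"role": "user", "content": answer})
    return transcript

In the study itself, this kind of loop ran inside Claude.ai rather than on the command line, and the resulting transcripts were handed to human researchers rather than printed.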
Three groups, one research question
To stress-test the tool, Anthropic recruited 1,250 professionals through crowdworker platforms, limiting participation to people whose main occupation lay elsewhere. The sample was split into three groups:
1,000 professionals across a broad mix of jobs, with the largest clusters in education, computing, and arts and media
125 creatives, mostly writers, visual artists, and other creative roles
125 scientists, spanning more than fifty disciplines from physics and chemistry to data science and food science
Each group had its own research goal, embedded into Anthropic Interviewer’s instructions. For the general workforce, the focus was to “understand how individuals integrate AI tools into their professional workflows, exploring usage patterns, task preferences, and interaction styles.” For creatives, the aim was “to understand how creative professionals currently integrate AI into their creative processes, their experiences with AI’s impact on their work, and their vision for the future relationship between AI and human creativity.” For scientists, the tool was asked “to understand how AI systems integrate into scientists' daily research workflows, examining their current usage patterns, perceived value, trust levels, and barriers to adoption across different stages of the scientific process.”
Once interviews were complete, Anthropic Interviewer and a separate AI analysis tool clustered themes, estimated how often certain topics or emotions came up, and surfaced example quotes. Human researchers then interpreted those patterns and wrote the analysis.
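Anthropic has not detailed how that analysis tool works, but thematic clustering of transcripts can be approximated with standard text-mining techniques. The sketch below, which groups a handful of invented excerpts using TF-IDF vectors and k-means and then counts how often each theme appears, is only an illustration of the general approach, not Anthropic's pipeline.

# Illustration only: a rough stand-in for thematic clustering, not Anthropic's tool.
from collections import Counter
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

excerpts = [
    "AI saves me hours on drafting reports every week.",
    "I don't tell colleagues that I use AI because of the stigma.",
    "I have to double-check every answer, which eats the time saved.",
    "Routine admin is automated so I can focus on people.",
]  # in practice, thousands of transcript segments

# Embed excerpts as TF-IDF vectors and group them into candidate themes.
vectors = TfidfVectorizer(stop_words="english").fit_transform(excerpts)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

# Estimate how often each theme comes up and surface an example quote per theme.
for theme, count in Counter(labels).items():
    example = excerpts[list(labels).index(theme)]
    print(f"Theme {theme}: {count} excerpts, e.g. {example!r}")

In Anthropic's version, human researchers then interpreted the surfaced themes, frequencies, and quotes rather than taking the machine output at face value.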
General workforce: time savings, quiet use, and plans to move up the stack
In the broad workforce sample, participants largely described AI as a practical help rather than a threat—at least in the short term. According to survey responses, 86 percent said AI saves them time and 65 percent said they were satisfied with the role AI plays in their work.
Underneath that, the qualitative interviews show a workforce that is still cautious about how visible their AI use should be. Anthropic reports that 69 percent of professionals mentioned social stigma around AI tools. A fact-checker told Anthropic Interviewer: “A colleague recently said they hate AI and I just said nothing. I don’t tell anyone my process because I know how a lot of people feel about AI.”
Views on job security were mixed. Forty-one percent of respondents said they felt secure and believed human skills are irreplaceable, while 55 percent expressed anxiety about AI's impact on their future. Within that anxious group, a quarter said they deliberately set boundaries around AI use (for example, always writing lesson plans themselves), and another quarter said they were already reshaping their roles, taking on more specialized, strategic, or relationship-driven tasks.
Anthropic’s emotional analysis, visualized in radar charts by occupation, suggests that most major job categories share a similar profile: high levels of satisfaction combined with noticeable frustration. That combination reflects a familiar pattern in workplaces adopting new technology—people can see clear benefits, but systems, policies, and workflows have not caught up.
Augmentation dominates in people’s stories
One of the more revealing sections of the study compares what professionals say about their AI habits with what Anthropic has previously seen in its Economic Index, which analyzes how people use Claude in practice.
When asked directly, 65 percent of interviewees said AI primarily augments their work, helping them think, write, or analyze more effectively, while 35 percent said AI primarily automates tasks. In contrast, Anthropic’s earlier analysis of Claude usage showed 47 percent of tasks as augmentation and 49 percent as automation.
Anthropic offers several explanations: the study sample may be different from the user base in the earlier index; people may adapt Claude’s outputs after the chat ends in ways that are not visible in logs; they may use multiple AI tools in parallel; or self-descriptions may simply drift toward a more collaborative, acceptable narrative.
At the same time, the interviews do show a consistent vision of where people want AI to sit. Many professionals described a future where routine work is automated and their own role shifts to supervision and quality control. Anthropic notes that 48 percent of interviewees said they were considering a move toward roles centered on “managing and overseeing AI systems rather than performing direct technical work.”
A pastor in the study captured both the appeal and the limits of that vision, saying, “...if I use AI and up my skills with it, it can save me so much time on the admin side which will free me up to be with the people,” while also stressing the need for “good boundaries” and avoiding becoming “so dependent on AI that I can't live without [it] or do what I'm called to do.”
Creatives: faster output, emotional whiplash, and real economic pressure
Creative professionals in the study reported some of the strongest productivity gains. Ninety-seven percent said AI saves them time, and 68 percent said it increases the quality of their work. Interviewees described using AI to speed up research, unlock ideas, and take on more client work.
A novelist told Anthropic Interviewer, “I feel like I can write faster because the research isn’t as daunting,” while a web content writer said they had “gone from being able to produce 2,000 words of polished, professional content to well over 5,000 words each day.” A photographer described how AI-assisted editing had cut delivery time “from 12 weeks to about 3” and allowed them to “intentionally make edits and tweaks that I may have missed before or not had time for.”
Alongside those gains, the study surfaces ongoing stigma and financial risk. Seventy percent of creatives said they were actively managing how their use of AI is perceived. A map artist commented, “I don't want my brand and my business image to be so heavily tied to AI and the stigma that surrounds it.”
Economic anxiety is a recurring theme, especially in fields where synthetic content competes directly with human work. A voice actor said, “Certain sectors of voice acting have essentially died due to the rise of AI, such as industrial voice acting.” A composer worried that platforms could “leverage AI tech along with their publishing libraries [to] infinitely generate new music,” saturating markets with low-cost output. A creative director acknowledged that their own use of AI has upstream effects, noting, “I fully understand that my gain is another creative’s loss. That product photographer that I used to have to pay $2,000 per day is now not getting my business.”
All of the creatives in the sample said they wanted to stay in control of their work, but many admitted that the line between guidance and delegation can blur. One artist said, “The AI is driving a good bit of the concepts; I simply try to guide it… 60% AI, 40% my ideas.” A musician reflected, “I hate to admit it, but the plugin has most of the control when using this.”
Anthropic’s emotional breakdown shows that game developers and visual artists often report high satisfaction and high worry at the same time, while designers skew more toward frustration. Trust scores remain low across creative roles, suggesting that even heavy users are still unsure how far AI should be allowed into core creative decisions.
Scientists: low worry about jobs, high skepticism about reliability
The scientific sample presents a different pattern again. Researchers across physics, chemistry, biology, and data-heavy fields said they regularly use AI for literature review, coding help, and drafting, but are reluctant to rely on current systems for hypothesis generation or experimental design.
Trust and reliability concerns appeared in 79 percent of scientific interviews, with 27 percent specifically citing technical limitations. An information security researcher summarized the trade-off: “If I have to double check and confirm every single detail the [AI] agent is giving me to make sure there are no mistakes, that kind of defeats the purpose of having the agent do this work in the first place.” A mathematician echoed that sentiment, saying that after verification, “it basically ends up being the same [amount of] time.”
A chemical engineer highlighted another point of friction, saying, “AI tends to pander to [user] sensibilities and changes its answer depending on how they phrase a question. The inconsistency tends to make me skeptical of the AI response.”
Unlike many creatives, most scientists in the study did not see AI as an imminent threat to their roles. Some pointed to tacit lab knowledge that is hard to digitize. A microbiologist described working with a bacterial strain where crucial steps depended on subtle color changes, noting that such instructions “are seldom written down anywhere.” A bioengineer emphasized that “Experimentation and research is also… inherently up to me” and argued that “certain parts of the research process are unfortunately just not compatible with AI even though they are the part that would be most convenient to automate, like running experiments.”
Security and compliance rules were also limiting factors, particularly in classified or sensitive environments. One participant noted “a lot of ‘do's and don'ts’ with lots of security-oriented processes that must be put in place before the organization can allow us to use agentic frameworks, and even LLMs for example.”
Even so, demand for more capable tools is high. Anthropic reports that 91 percent of scientists said they would like greater AI assistance, especially for accessing and integrating large datasets and for generating new ideas. A medical scientist said, “I wish AI could… help generate or support hypotheses or look for novel interactions/relationships that are not immediately evident for humans.” Another researcher put it more broadly, saying, “I would love an AI which could feel like a valuable research partner… that could bring something new to the table.”
Anthropic’s next steps and the limits of the study
Anthropic is now using Anthropic Interviewer beyond this initial test. The company says it is working with cultural institutions such as the LAS Art Foundation, the Mori Art Museum, and Tate, and with creative communities including Rhizome and Socratica, to understand how AI augments creative work, and with AI for Science grantees to study how tools like Claude can support research. In education, Anthropic is partnering with the American Federation of Teachers on AI-focused training for hundreds of thousands of teachers, and using Anthropic Interviewer to explore how teachers and software engineers see AI changing their jobs.
The company is also candid about the limits of the current findings. Participants were recruited through crowdworker platforms and may not be representative of the wider workforce. Interviews were text-only, meaning emotional analysis is based solely on written responses. The study captures a single moment in time and cannot show how attitudes and usage evolve. And, as Anthropic itself notes, self-reported behavior does not always match objective usage data.
The team argues that releasing the full transcript dataset is one way to address those issues, allowing other researchers to interpret the material differently. It also sees Anthropic Interviewer as a tool for building a feedback loop between people’s real experiences and future models, rather than relying only on internal assumptions.
The report closes by emphasizing that workers are already renegotiating their relationship with AI, often in subtle ways. For many scientists in the study, the goal is not to step away from AI but to get something more than autocomplete. As one researcher puts it, “I would love an AI which could feel like a valuable research partner… that could bring something new to the table.”