
Anthropic just published what may be the largest qualitative research study in history. Not a survey. An interview study. 80,508 people. 159 countries. 70 languages. The previous record holder was the World Bank’s Voices of the Poor project at around 60,000 participants.
The scale is wild. The methodology is wilder.
The study found that hope and fear about AI are not opposing camps. They live in the same person. The world’s poorest countries see AI as an opportunity while the richest see it as a threat, and the single strongest predictor of negative AI sentiment is concern about economic disruption. Professional excellence was the top aspiration, though the sample skews toward early adopters.
How Anthropic Built an AI Interviewer
To pull this off, Anthropic built a tool called Anthropic Interviewer, a version of Claude designed to conduct real qualitative interviews at scale. It works in three stages: a planning phase where human researchers and Claude co-develop an interview rubric, a live interview phase where Claude adapts follow-up questions in real time based on what each person says, and an analysis phase where Claude-powered classifiers work through the transcripts to find patterns across the whole dataset.
Depth and volume at the same time. That has never really been possible before.
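The three stages above can be sketched, very loosely, as a pipeline. To be clear, everything in this sketch is invented for illustration: Anthropic has not published the implementation, and `ask_model` is a placeholder standing in for an actual Claude call. The keyword classifier in stage three is a toy stand-in for the model-powered classifiers the report describes.

```python
def ask_model(prompt: str) -> str:
    # Placeholder for a large-language-model call (hypothetical).
    # It echoes a canned response so the sketch runs without an API.
    return f"[model response to: {prompt[:40]}...]"

def plan_rubric(topic: str, seed_questions: list[str]) -> list[str]:
    """Stage 1: researchers and the model co-develop an interview rubric."""
    return [ask_model(f"Refine this question about {topic}: {q}")
            for q in seed_questions]

def run_interview(rubric: list[str], answer_fn, max_followups: int = 1):
    """Stage 2: a live interview where each answer can trigger
    an adaptive follow-up question."""
    transcript = []
    for question in rubric:
        answer = answer_fn(question)
        transcript.append((question, answer))
        for _ in range(max_followups):
            followup = ask_model(f"Given the answer '{answer}', ask one follow-up.")
            answer = answer_fn(followup)
            transcript.append((followup, answer))
    return transcript

def classify_transcripts(transcripts, labels: list[str]) -> dict[str, int]:
    """Stage 3: tag each exchange with themes and count them across
    the whole dataset. A real classifier would prompt the model;
    this toy version just keyword-matches."""
    counts = {label: 0 for label in labels}
    for transcript in transcripts:
        for _question, answer in transcript:
            for label in labels:
                if label in answer.lower():
                    counts[label] += 1
    return counts
```

The point of the structure, not the particulars: the rubric keeps every interview comparable (volume), the follow-up loop lets each conversation go where the respondent takes it (depth), and the classification pass turns tens of thousands of transcripts into countable patterns.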
Who Actually Wants “Professional Excellence”?
The biggest aspiration cluster, at nearly 19%, was professional excellence. People want AI to clear away the routine so they can do more meaningful work.
That finding is interesting. But the pool is Claude users. Early adopters. People with enough investment in AI to opt into a research interview on top of using it daily. This skews heavily toward high-conscientiousness, mastery-driven people. The ISTJs and INTJs of the world. People for whom professional identity is not just what they do but who they are.
For that type, AI is not a shortcut. It is a capability multiplier that removes friction between intention and execution. Of course professional excellence tops the list. The more interesting question is whether that holds as AI reaches people for whom work is less central to identity.
Why Poor Countries See Opportunity and Rich Countries See Threat
Here is the finding that should stop you cold. In the world’s poorest countries, AI is seen as an opportunity. In the richest, it is seen as a threat.
An entrepreneur in Cameroon described reaching professional-level skills in cybersecurity, UX design, and marketing simultaneously. “It’s an equalizer,” they said. Respondents in Sub-Saharan Africa were twice as likely as North Americans to say they had no AI concerns at all.
Meanwhile, concern about economic disruption was the single strongest predictor of negative AI sentiment across the entire study. The regions with the most to lose from disruption are the most worried.
The logic is simple: when you have professional infrastructure, credentials, and decades of hard-won expertise, AI looks like a threat to what you built. When you never had access to any of that, AI looks like a ladder. Both responses are completely rational. That is what makes this moment so complicated.
Hope and Fear Live in the Same Person
The report’s sharpest finding is this: optimists and pessimists are not different people. They are the same person. Someone excited about AI for emotional support is three times more likely to also fear becoming dependent on it. The freelancers gaining the most from AI are also the most exposed to being replaced by it.
The tool and the threat are the same thing.
What This Means for AI Strategy
I’ve spent the last two years helping organizations build AI strategies, and the pattern in this study maps exactly to what I see in boardrooms. The executives most resistant to AI adoption are almost always the ones with the deepest domain expertise. They built careers on knowing things that were hard to know. AI doesn’t just change their workflow. It threatens the scarcity that made them valuable.
The leaders who move fastest tend to be the ones who already had something to prove. Younger executives, people in emerging markets, leaders in organizations that were already behind. They have less to protect and more to gain. This is the same dynamic the Anthropic study found at a global scale, playing out in every AI governance conversation I’ve been part of.
Eighty-one thousand people just told us, clearly, that the tool and the threat are the same thing. In my experience, the organizations that succeed with AI are the ones honest enough to hold both of those truths at the same time. Investing aggressively while acknowledging that the disruption is real, personal, and not evenly distributed. The ones that pick a side, all-in enthusiasm or reflexive resistance, tend to get it wrong. If you’re building an AI adoption strategy, start by accepting that the people in the room are probably feeling both.