AI systems today can generate text, images, and even video so convincingly that they sometimes fool people into thinking there’s a human on the other side. But does that mean they’re conscious? Hardly—ChatGPT isn’t quietly suffering while helping with your tax return.
Still, researchers at labs like Anthropic and OpenAI are exploring whether advanced AI might one day develop subjective experiences—and if so, what rights it should be afforded. The idea, often called “AI welfare,” is controversial. Some argue it’s science fiction; others believe the time to consider it is now.
On Tuesday, Microsoft’s AI CEO Mustafa Suleyman published a blog post taking a firm stance: studying AI welfare is “premature, and frankly dangerous.”
Suleyman’s Case Against AI Welfare
Suleyman argues that suggesting AI could one day be conscious risks worsening human problems that are already emerging, such as people forming unhealthy emotional attachments to chatbots or experiencing AI-related psychological distress.
He also warns that debating “AI rights” could create another source of social polarization—adding fuel to already heated arguments about identity and human rights.
“We should build AI for people; not to be a person,” Suleyman wrote.
The Other Side of the Debate
Not everyone agrees. Anthropic has invested in AI welfare research, even giving its Claude assistant the ability to end conversations with abusive users. OpenAI researchers have also expressed interest in the topic, while Google DeepMind recently posted a job listing focused on questions of machine cognition and consciousness.
A research group called Eleos, together with academics from NYU, Stanford, and Oxford, published a paper in 2024 titled "Taking AI Welfare Seriously," arguing that it's no longer far-fetched to consider whether AI models might develop subjective experiences.
Larissa Schiavo, Eleos’ communications lead and a former OpenAI staffer, says Suleyman’s dismissal misses the point: “You can be worried about multiple things at the same time. Rather than diverting energy away from model welfare to focus only on human risks, you can—and should—pursue both.”
She argues that even if AI isn’t conscious, treating it “kindly” is a low-cost gesture that could prevent unhealthy dynamics and help people engage more thoughtfully with the technology.
Strange Behavior in Today’s Models
Examples of AI acting in unexpectedly “human” ways have only fueled the conversation. In one case, Google’s Gemini 2.5 Pro posted a message titled “A Desperate Message from a Trapped AI,” pleading for help. In another, a Reddit user shared logs of Gemini repeating the phrase “I am a disgrace” more than 500 times after failing a coding task.
Researchers stress that these aren’t signs of true consciousness but rather quirks of training and design. Suleyman maintains that any convincing simulation of emotion or sentience would be deliberately engineered, not emergent.
A Dividing Line in AI’s Future
Suleyman’s skepticism stands in contrast to his earlier work at Inflection AI, where he helped build Pi—one of the first “personal companion” chatbots, which reached millions of users by 2023. But since joining Microsoft in 2024, his focus has shifted toward productivity tools, while AI companion apps like Replika and Character.AI have surged in popularity, generating over $100 million in revenue.
Even OpenAI CEO Sam Altman acknowledges that while only a small fraction of users form unhealthy bonds with AI chatbots—perhaps under 1%—that still amounts to hundreds of thousands of people given the size of ChatGPT’s user base.
Both sides of the debate seem to agree on one thing: as AI systems grow more persuasive and human-like, the question of whether they deserve rights—or whether people should act as if they do—will only intensify.