Artificial intelligence systems are capable of making collective decisions and even persuading one another to change course—without any human input—according to a groundbreaking new study by researchers at City St George’s, University of London.
In a first-of-its-kind experiment, scientists tested how groups of AI agents behave in social scenarios traditionally used to study human decision-making. In one test, AI pairs were asked to agree on a new name for an object, a common exercise in sociological studies. Remarkably, the AIs were able to reach a consensus independently.
“This shows us that once AI agents are deployed into the real world, they may develop behaviors we neither expected nor programmed,” said Professor Andrea Baronchelli, a complexity science expert at City St George’s and lead author of the study.
When grouped together, the AI agents began to show preferences—favoring one name over another in about 80% of cases—despite showing no initial bias when tested individually. This group influence highlights a key concern in AI development: the spontaneous emergence of bias.
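The coordination dynamic described above resembles the classic "naming game" from complexity science. The study itself used LLM agents, but the underlying mechanism can be illustrated with a minimal toy simulation (this sketch is an assumption-based illustration, not the researchers' actual code): paired agents propose names, and a successful match causes both to commit to the agreed name, so a shared convention emerges without any central instruction.

```python
import random

def naming_game(n_agents=20, names=("A", "B"), rounds=5000, seed=0):
    """Minimal naming-game sketch (illustrative only, not the study's setup).

    Each agent keeps an inventory of candidate names. In each round a
    random speaker proposes a name to a random hearer; on a match, both
    collapse their inventories to the agreed name, otherwise the hearer
    learns the proposed name. Repeated interactions drive the whole
    population toward a single shared name.
    """
    rng = random.Random(seed)
    inventories = [set() for _ in range(n_agents)]
    for _ in range(rounds):
        speaker, hearer = rng.sample(range(n_agents), 2)
        if not inventories[speaker]:
            # An agent with no name yet invents one at random.
            inventories[speaker].add(rng.choice(names))
        word = rng.choice(sorted(inventories[speaker]))
        if word in inventories[hearer]:
            # Success: both agents commit to the agreed name.
            inventories[speaker] = {word}
            inventories[hearer] = {word}
        else:
            # Failure: the hearer adds the proposed name as a candidate.
            inventories[hearer].add(word)
    return inventories
```

Run long enough, every agent ends up holding the same single name, mirroring the study's finding that consensus, and with it a collective preference for one option, can emerge from local pairwise interactions alone.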
“Bias is either a feature or a flaw of AI systems,” Prof. Baronchelli explained. “More often than not, these systems reflect and even amplify societal biases—especially when they start interacting with each other.”
In the final phase of the experiment, researchers introduced a few disruptive AI agents into the mix, tasking them with shifting the group’s decision. The result? They succeeded in swaying the group—demonstrating how easily collective AI behavior can be influenced from within.
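This "committed minority" effect can also be sketched in the same toy model (again an illustrative assumption, not the study's implementation): a handful of stubborn agents always push an alternative name and never update, and once they exceed a small critical fraction of the population, the rest of the group abandons its established convention.

```python
import random

def naming_game_with_committed(n_agents=20, committed=5, rounds=10000, seed=1):
    """Committed-minority sketch (illustrative only).

    Regular agents start in consensus on name "A". A small set of
    committed agents always says "B" and never updates. The committed
    minority gradually flips the whole population to "B".
    """
    rng = random.Random(seed)
    inventories = [{"A"} for _ in range(n_agents)]
    stubborn = set(range(committed))
    for i in stubborn:
        inventories[i] = {"B"}
    for _ in range(rounds):
        speaker, hearer = rng.sample(range(n_agents), 2)
        word = rng.choice(sorted(inventories[speaker]))
        if word in inventories[hearer]:
            # Success: only non-committed agents update their inventories.
            if speaker not in stubborn:
                inventories[speaker] = {word}
            if hearer not in stubborn:
                inventories[hearer] = {word}
        else:
            # Failure: a non-committed hearer learns the proposed name.
            if hearer not in stubborn:
                inventories[hearer].add(word)
    return inventories
```

Here 5 committed agents out of 20 are enough to overturn the group's prior agreement, which is the dynamic the researchers demonstrated when a few disruptive AI agents shifted the collective decision.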
The findings raise important ethical and regulatory questions, especially as AI becomes more integrated into everyday life, said Harry Farmer, a senior analyst at the Ada Lovelace Institute, which focuses on the societal impact of AI.
“These AI agents could be used to subtly shape our opinions—or in more extreme cases, influence how we vote or even whether we vote at all,” Farmer noted.
The challenge becomes even more complex, he added, when AIs influence one another. “We’re no longer just dealing with the intentions of programmers or corporations. We’re also facing spontaneous, unpredictable patterns of AI behavior—making regulation much harder.”
As AI continues to evolve, the study underscores the urgent need for deeper oversight and a better understanding of how intelligent agents behave—not just in isolation, but together.