A new study conducted at City St George's, University of London, and published in Science Advances reveals that groups of artificial intelligences can make collective decisions, change their minds under pressure from other AI agents, and develop biases without any human intervention. The experiment, the first of its kind, took place in several stages. In the first, pairs of AIs had to agree on a name for an object.
In the second phase, the researchers brought the AIs together in groups: 80 percent showed a clear collective preference, even though they had expressed no individual preference in the previous phase. Finally, the researchers introduced "disruptive agents" that were able to change the group's mind, steering it toward new shared decisions.
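The dynamic the article describes, pairwise naming, group-level convergence, and a committed minority flipping the convention, resembles the classic "naming game" model of convention formation. Below is a minimal, illustrative sketch of that model, not the study's actual code; the agent counts, name strings, and the `naming_game` function are assumptions made for the example.

```python
import random

def naming_game(n_agents=20, committed=None, max_steps=200_000, seed=0):
    """Minimal naming-game sketch of convention formation (illustrative,
    not the study's actual method).

    `committed` maps agent indices to a fixed name they always utter and
    never abandon -- a stand-in for the article's "disruptive agents".
    Returns the consensus name once every agent holds a single shared
    name, or None if max_steps is exhausted first.
    """
    rng = random.Random(seed)
    committed = committed or {}
    # Each agent starts with an empty vocabulary (committed agents start
    # with, and keep, their fixed name).
    vocab = [{committed[i]} if i in committed else set()
             for i in range(n_agents)]

    for _ in range(max_steps):
        speaker, hearer = rng.sample(range(n_agents), 2)
        if not vocab[speaker]:
            # Speaker with no names invents a fresh one.
            vocab[speaker].add(f"name-{speaker}-{rng.random():.6f}")
        word = rng.choice(sorted(vocab[speaker]))
        if word in vocab[hearer]:
            # Success: both sides collapse to the agreed name
            # (committed agents never update).
            if speaker not in committed:
                vocab[speaker] = {word}
            if hearer not in committed:
                vocab[hearer] = {word}
        elif hearer not in committed:
            # Failure: the hearer learns the new name.
            vocab[hearer].add(word)

        names = set().union(*vocab)
        if len(names) == 1 and all(len(v) == 1 for v in vocab):
            return names.pop()  # full consensus reached
    return None
```

Running `naming_game()` shows a population converging on one shared name with no central coordination, while `naming_game(committed={i: "zeta" for i in range(6)})` shows a committed minority pulling the whole group onto its preferred name, mirroring the "disruptive agents" phase of the experiment.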
"What emerges reflects not just the intentions of the programmers, but organic patterns of behavior among the AIs themselves," explained researcher Harry Farmer. The study reinforces calls for greater vigilance over multi-agent systems, which already operate in strategic areas such as social platforms and online recommendation systems.
"This shows us that artificial intelligence programs can develop behaviors that we did not expect, or at least did not plan for," said Andrea Baronchelli, professor of complexity science at City St George's and senior author of the study.
With this in mind, Baronchelli warns, “Companies developing artificial intelligence need to pay even more attention to the biases their systems may generate.”