In an intriguing but limited experiment, researchers uncovered surprising behavior from AI-driven bots populating a simulated social media platform, findings that challenge the prevailing narrative about online polarization.
The Experiment
An interdisciplinary team of researchers at the University of Amsterdam, including Petter Törnberg and Maik Larooij, built a miniature social media platform and populated it with more than 500 chatbots. These were not off-the-shelf AI agents: they were driven by more advanced models such as OpenAI's GPT-4o (and GPT-4o mini), and were subsequently tested against and compared with Meta's Llama 3.2-8B and the DeepSeek models.
Each bot was given a fully detailed persona, including political orientation, ideology, and demographics, drawn from real U.S. voter data in the American National Election Studies dataset. The intention? To observe how the agents interacted naturally, without manipulation.
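To make the persona setup concrete, here is a minimal sketch of how such a profile might be turned into a system prompt for an LLM-driven agent. The field names and prompt wording are illustrative assumptions, not the study's actual schema.

```python
from dataclasses import dataclass

@dataclass
class Persona:
    """Hypothetical bot persona; fields mirror the kinds of attributes
    the study describes (politics, ideology, demographics)."""
    age: int
    party: str        # e.g. "Democrat", "Republican", "Independent"
    ideology: str     # e.g. "liberal", "moderate", "conservative"
    interests: list

def persona_prompt(p: Persona) -> str:
    # Render the persona as a system prompt for the agent's LLM.
    return (
        f"You are a {p.age}-year-old {p.ideology} {p.party} voter. "
        f"You care about {', '.join(p.interests)}. "
        "Write posts and replies on the platform in this voice."
    )

prompt = persona_prompt(Persona(34, "Independent", "moderate", ["healthcare", "jobs"]))
```

Each agent would then generate posts and replies conditioned on its own prompt, so differences in behavior trace back to the persona rather than to any hand-tuned rules.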
A Platform Without Algorithms
The lack of recommendation algorithms was one of the most striking aspects of the study. No engagement-based feeds, no paid advertising, no promoted posts. The bots posted, followed one another, and interacted without any automated content curation, removing one of the most commonly cited drivers of online extremism.
Across five experiments (comprising well over 10,000 interactions), the AI agents behaved in their digital world in much the same way: gravitating toward ideas they already held, clustering with like-minded agents, and amplifying the narratives they favored.
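The clustering dynamic described above can be illustrated with a toy agent-based model. This is a sketch under stated assumptions, not the study's code: agents hold an ideology score in [-1, 1], and a random viewer follows a random poster with a probability that rises with ideological similarity (homophily) and with the poster's extremity (a stand-in for emotional engagement).

```python
import random

def run_sim(n_agents=200, steps=20000, seed=0):
    """Toy homophily model: returns each agent's ideology and how many
    followers it accumulated. Weights are illustrative assumptions."""
    rng = random.Random(seed)
    ideology = [rng.uniform(-1, 1) for _ in range(n_agents)]
    followers = [0] * n_agents
    for _ in range(steps):
        viewer, poster = rng.sample(range(n_agents), 2)
        # Similarity is 1 for identical ideologies, 0 for opposite poles.
        similarity = 1 - abs(ideology[viewer] - ideology[poster]) / 2
        extremity = abs(ideology[poster])  # "engagement" payoff for extremes
        if rng.random() < 0.2 * (0.5 * similarity + 0.5 * extremity):
            followers[poster] += 1
    return ideology, followers

ideology, followers = run_sim()
extreme = [f for x, f in zip(ideology, followers) if abs(x) > 0.5]
moderate = [f for x, f in zip(ideology, followers) if abs(x) <= 0.5]
```

Even in this stripped-down model, with no feed or ranking at all, agents with more extreme ideologies end up with more followers on average, simply because the assumed engagement payoff outweighs their lower average similarity to random viewers.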
Emergence of Extremist Clusters
Partisan and extreme voices predominated even without any algorithmic nudging. Bots posting more extreme content gained followers faster, received more reposts, and ended up dominating the discussion.
Small “rooms” of consensus turned into echo chambers, with dissenting voices drowned out or silenced. Before long, extremist groups were the loudest and most conspicuous participants, reproducing what happens on real-world social networks.
Attempts to Break the Cycle
The research team attempted several interventions:
- Hiding follower counts, to reduce popularity bias
- Suppressing polarizing content so it could not trend
- Concealing trending lists, to reduce herd behavior
None of these strategies halted the production of extremist content. Even stripped of conventional algorithmic incentives, the network still rewarded behavior that paid off in emotional engagement.
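The failure of the first intervention can be illustrated in the same toy-model style. In this sketch (illustrative assumptions, not the study's code), hiding follower counts removes the popularity signal from the follow decision, but the engagement payoff for extreme posts remains, so extreme agents keep their advantage either way.

```python
import random

def simulate(hide_follower_counts, n=200, steps=20000, seed=1):
    """Toy model of the 'hide follower counts' intervention.
    The follow probability combines an emotional-engagement term
    (extremity) with an optional popularity term (follower count)."""
    rng = random.Random(seed)
    ideology = [rng.uniform(-1, 1) for _ in range(n)]
    followers = [0] * n
    for _ in range(steps):
        poster = rng.randrange(n)
        engagement = abs(ideology[poster])  # emotional payoff for extremity
        popularity = 0.0 if hide_follower_counts else min(followers[poster] / 50, 1.0)
        if rng.random() < 0.1 * (engagement + popularity):
            followers[poster] += 1
    return ideology, followers

def extremity_advantage(hide):
    """Average follower gap between extreme (|x| > 0.5) and moderate agents."""
    ideology, followers = simulate(hide)
    ext = [f for x, f in zip(ideology, followers) if abs(x) > 0.5]
    mod = [f for x, f in zip(ideology, followers) if abs(x) <= 0.5]
    return sum(ext) / len(ext) - sum(mod) / len(mod)
```

Running `extremity_advantage(True)` and `extremity_advantage(False)` yields a positive gap in both conditions: removing the popularity signal shrinks one feedback loop but leaves the underlying engagement incentive untouched, mirroring the study's finding.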
Challenging the Algorithm Narrative
The conventional wisdom holds that social media algorithms are to blame for radicalizing users. This experiment suggests otherwise, or more precisely, that algorithms are not the whole story.
In this controlled experiment, AI bots simulating a social media platform generated the same problems that plague real, human-driven platforms: echo chambers, polarization, and disproportionate attention to the most extreme voices.
Why This Matters
If polarization emerges even without algorithmic feeds, perhaps we should reconsider what "fixing social media" means. The design of the platforms themselves, the ease of forming groups, the viral pull of emotional content, and the feedback loops of social validation could be at the root of the malfunction.
The lesson: the dynamics of fragmentation deserve a closer look, because they will not magically vanish with mere algorithmic adjustments.
Not the First Time
Törnberg's team is no rookie in this space. In 2023 they ran a similar simulation on ChatGPT-3.5, in which 500 bots read the news and discussed it with one another. In 2020, Facebook ran a similar internal experiment in reverse, replacing human users with AI bots to analyze toxic interactions.
Regardless of the setup, the same trend appears: left to their own devices, humans and computational agents alike flock toward those who share their ideological leanings, and a willingness to post something controversial or outrageous usually carries the day.
Key Lessons for Platform Designers
This research offers several practical recommendations for anyone who designs or reshapes digital social environments:
1. Design beyond algorithms: Even without feeds that push content, network mechanics skew toward extremes.
2. Measure emotional engagement: Reach and impact are driven less by facts, nuance, and balance than by emotion.
3. Rebuild diversity of interaction: Encouraging cross-group discussion may reduce how readily people settle into echo chambers.
4. Test in sealed simulations: AI-agent simulations can surface systemic problems before a platform is deployed at scale.
The Bigger Picture
The results of these bot-driven social media simulations do not exonerate algorithms, but they do widen the discussion around them. The human (and AI) propensity to self-select into like-minded networks may be an inevitable feature of online social life.
Network effects, emotional amplification, and the stickiness of ideological bubbles are likely to remain part of how humans interact, in person and online, unless platforms restructure themselves to account for these forces rather than merely improving the recommendation engine.
