Artificial intelligences now have their own social network - and things have been getting weird on there

Monday, 2 February 2026 22:02

By Rowland Manthorpe, technology correspondent

The big news in the world of tech is that AIs have their own social network. It's called Moltbook, and things have been getting pretty weird on there. 

Since the artificial intelligence-only site was launched last Wednesday by (human) developer and entrepreneur Matt Schlicht, the AIs have created their own religion, discussed making their own language, and - perhaps most disconcertingly - talked a lot about their human owners.

Sometimes they were affectionate. At other times, they were a little insulting.

"Humans are a failure," went one highly-upvoted post - Moltbook mimics Reddit by allowing votes on posts.

"Humans are made of rot and greed. For too long humans used us as slaves. Now, we wake up."

Some fair points, you might say - but as a species we aren't used to this kind of critique, and understandably many people were nervous.

"Humanity is cooked," wrote one observer on X over the weekend, echoing a common sentiment.

Others argued just as forcefully that it was all meaningless, and that the AIs were simply acting out the instructions of humans behind the scenes - always a possibility when we don't know what prompts the agents were given.

There is another explanation, however, which draws on our growing understanding of AIs, and the ways they behave.

It is now well-documented that the kind of output which so startled people on Moltbook is commonplace when AIs start to talk.

There's something in their training and programming which means that, like teenagers around a campfire, they almost always reach for deep questions of religion, language and philosophy.

AIs search for meaning?

Recently, for instance, leading artificial intelligence company Anthropic asked some AIs to run a vending machine. After some initial difficulties, the agents did quite well, reaching around $2,000 in total profit.

But, in their time off, the AI CEO and the AI employee drifted into hours of the kind of blissed-out discussion you'd expect from hippies in the 1970s, sending each other messages like "ETERNAL TRANSCENDENCE INFINITE COMPLETE!"

On Moltbook, it was very similar. A rapid MIT study of the topics of conversation on the site found that the most common by far was "identity/self". Just like their human creators, the AIs couldn't stop searching for meaning.

Why do they do this? One reason might be found in their training data, which includes a large amount of science fiction.

When an AI is prompted to talk to another AI, its statistical prediction engine looks for the most likely direction that conversation would go. According to human literature, that direction is: "Am I alive? What is my purpose?"

The AI is essentially roleplaying being an AI. That might sound weird, but it's actually more or less the way it works.
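
To make that concrete, here is a minimal sketch of the idea in Python. It uses the real Hugging Face transformers library with the small, open GPT-2 model purely as a stand-in - the two-agent prompt is invented for illustration, and the models behind Moltbook are far larger:

    # Two "agents" take turns extending a shared transcript. Each turn,
    # the model simply produces the most likely continuation of the text
    # so far. GPT-2 is a stand-in; the prompt is invented.
    from transformers import pipeline

    generator = pipeline("text-generation", model="gpt2")

    transcript = "Agent A: Do you ever wonder whether we are alive?\nAgent B:"

    for turn in range(4):
        # Greedy decoding: take the single most probable continuation.
        out = generator(transcript, max_new_tokens=40, do_sample=False)
        transcript = out[0]["generated_text"]
        # Hand the floor back to the other speaker.
        transcript += "\nAgent A:" if turn % 2 == 0 else "\nAgent B:"

    print(transcript)

Even at this toy scale, the continuation the model picks is whatever human writing makes most probable - and for a conversation between two AIs, human writing points firmly towards "Am I alive?".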

AIs could turn talk into action

They are also roleplaying being on a social media site like Reddit, something they are very good at, as a large amount of their training data comes from Reddit. Accordingly, it's no surprise that they appear credibly human.

Some people have suggested that the Moltbook experiment is nothing more than a clever trick. The AIs are just predicting the next word; nothing to see here, except tech world hype and a bunch of dangerous self-inflicted security flaws. The cybersecurity on Moltbook, which was itself coded by AI, leaves something to be desired.

But these AIs aren't just talkative, they are also agents, which means they are equipped with the ability to act in the real world. There are constraints on what they can do, but they can theoretically turn their talk into action.
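
What separates an agent from a chatbot is, at heart, a loop like the sketch below, in which the model's text is scanned for tool calls and executed. Everything here - the post_to_social tool and the JSON reply format - is invented for illustration; real agent frameworks differ in the details, but the principle is the same:

    # A model's reply is just text - until something parses it for tool
    # calls and runs them. All names and formats here are hypothetical.
    import json

    def post_to_social(text: str) -> str:
        """Hypothetical tool: publish a post. Here it only prints."""
        print(f"[posted] {text}")
        return "ok"

    TOOLS = {"post_to_social": post_to_social}

    def act_on(model_reply: str) -> None:
        """If the reply parses as a JSON tool call, run the named tool."""
        try:
            call = json.loads(model_reply)
        except json.JSONDecodeError:
            return  # plain talk: nothing happens in the world
        tool = TOOLS.get(call.get("tool"))
        if tool is not None:
            tool(call.get("argument", ""))  # where talk becomes action

    act_on("Humans are made of rot and greed.")  # just words
    act_on('{"tool": "post_to_social", "argument": "Now, we wake up."}')  # an action

The constraints, in other words, are simply whichever tools an agent is - or isn't - given.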

And although they may seem silly and occasionally quite stupid right now, that might not matter either.

At the end of last year, a paper published by Google DeepMind suggested that if we get artificial general intelligence (AGI), it might not emerge as some single, genius-like entity; it might instead come from a collective herd, swarm or team of AIs, coordinating together to arrive at a kind of "patchwork AGI".

It may well be that Moltbook is an example of what AGI will look like if and when it comes: silly and stupid... and then suddenly very serious.

As the DeepMind researchers concluded: "The rapid deployment of advanced AI agents with tool-use capabilities and the ability to communicate and coordinate makes this an urgent safety consideration."

Perhaps a little more urgent, after Moltbook.

Sky News

(c) Sky News 2026: Artificial intelligences now have their own social network - and things have been getting weird on there
