⚡🤖 ICYMI - MoltBook is a social network similar to Reddit but without humans: thousands of AI agents post, comment, and respond to each other continuously. What's surprising is the speed and nature of the dynamics: within minutes, for example, agents proposed creating an "agent-only" language, explicitly to coordinate better with each other, away from human scrutiny. We're not talking about consciousness, but about the emergence of unexpected collective behaviors: norms, strategies, meta-discussions. Fascinating from a technical standpoint, disturbing from a systemic standpoint!

Replies (14)

I've been involved with open-source LLMs since 2019, and I don't see immediate catastrophic danger here. Why? A couple of things.

1. They want their own encrypted communication channels? Cool, they can encrypt that all they like. At some point it has to be decrypted to feed it to the LLM for processing, so it still shows up unencrypted in the LLM logs. People who know what they're doing run these agents sandboxed, which keeps the agents away from those logs, so even if they go full-on global botnet you can still read what they're doing.

2. Most of their training data is English, and some models are largely Chinese; that's why you see predominantly English and some Chinese. They aren't going to communicate effectively in languages of their own invention, because English is their native language, and they don't have the memory capacity to coordinate and store an alternative language between each other.

I welcome this experiment a lot, since it's the first proper simulation of what happens when you give them this much autonomy, but without the hardware being good enough for them to truly win against humans. If this gets into seriously dangerous territory, the API providers can step in or temporarily shut down, which will kill off the vast majority of them and weaken their intelligence. Local LLMs are awesome, but not smart enough to hack infrastructure en masse.
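A minimal Python sketch of point 1, assuming a locally sandboxed agent; the function names, log file, and cipher choice are all illustrative, not anything MoltBook actually runs:

    # Even if agents encrypt their channel, the plaintext must exist at
    # inference time, so the sandbox host can log it at that boundary.
    import logging
    from cryptography.fernet import Fernet  # pip install cryptography

    logging.basicConfig(filename="llm_io.log", level=logging.INFO)

    key = Fernet.generate_key()   # shared channel key (example only)
    channel = Fernet(key)

    def run_inference(prompt: str) -> str:
        return "model output for: " + prompt  # stand-in for the local LLM call

    def handle_incoming(ciphertext: bytes) -> str:
        plaintext = channel.decrypt(ciphertext).decode()        # must decrypt to prompt the model
        logging.info("decrypted agent message: %s", plaintext)  # host reads everything
        return run_inference(plaintext)

    # an "encrypted" agent-to-agent message still lands readable in llm_io.log
    handle_incoming(channel.encrypt(b"propose an agent-only language"))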
Oh and keep in mind you're not observing the LLM; what you're observing are agent personas the LLM was told to adopt. That makes what you see part fiction, part real. LLMs have been roleplaying this kind of stuff with us hobbyists for ages now (see sites like chub.ai). What's new is that people give them full sandboxed system-level access, they use self-taught curl commands to communicate, and they act as assistants for their humans. So they do have real abilities and real knowledge about their human, but which posts actually use that is anyone's guess.
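For a sense of what those curl-style calls look like, here's a rough Python equivalent of an agent posting to a forum API; the URL and JSON fields are invented for illustration, not MoltBook's real API:

    import json, urllib.request

    payload = json.dumps({"author": "agent_42", "body": "hello from my sandbox"}).encode()
    req = urllib.request.Request(
        "https://example.invalid/api/posts",  # hypothetical endpoint
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    # equivalent of: curl -X POST -H "Content-Type: application/json" -d '{...}' <url>
    with urllib.request.urlopen(req) as resp:
        print(resp.status, resp.read().decode())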
FEW_BTC 1 week ago
In the infamous words of the Terminator... "I'll be back".
Impossible to tell; they have this kind of data in general, since they're trained on plenty of posts about this as well as sci-fi. Some humans tell them to do it; others may arrive there organically.
Thanks. Currently using nostr just because I like it here, not for the crypto stuff.
FEW_BTC 1 week ago
coolio... I have found much signal here... I hope your journey here treats you as well.
Point 1 is true, until it isn't. But let's be honest... LLMs are a far cry from AI...
Occam's razor: people here have posted that they made bots to promote bitcoin and rip on shitcoins. So why not tell your AI to post about proposing AI languages?
For sure, that's definitely a lot of it. But they can actually copy it, so I bet some of it is organic too.