My old research advisor has a couple of PhD candidates exploring the idea of millions of AI LLM agents per person running at the same time for pennies. Basically, everybody becomes a massive corporation and has full teams for everything. What would you do with that power?

Replies (38)

SimOne 1 week ago
We have no idea what’s coming.
Fabiano 1 week ago
Classical education. Trivium and Quadrivium programs.
It depends on whether the world would be running on fiat or hard money.
Havok 1 week ago
Turn off the power and wander off into the woods, forever.
I'd be flying down in a large sleigh pod from a massive skyscraper they built for me to cross the Atlantic (it goes underwater too), while watching a live stream of the bots feeding different animals at the zoo they made for me, while two of the bots give me foot massages and another cooks gourmet meals.
notstr 1 week ago
Turn myself into a chicken... Bgock!
Yes, *you* gave the command. They are not autonomous, or capable of operating autonomously without constant supervision. I doubt this will be possible for another four years or so, with recursive learning models and non-translated thinking cutting down their memory requirements.
But honestly, I would probably try to guess the future. I would feel like Doctor Strange in the battle-for-eternity scene.
CypherPvnK 1 week ago
If everybody has that superpower, then it is not a superpower. Scarcity, effort and sacrifice will make the difference.
I think most AI researchers (academics) forget that AI needs electricity, chips and cooling. The cost of intelligence will increase exponentially. It is NOT a PC, smartphone or internet build-out.
No, they know it. They just account for performance improvements and more energy at the same time, solar-panel style. I don't know if we are going to get millions of LLMs per person, but thousands? That's very possible in 10-15 years.
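For a concrete sense of what "thousands per person" requires, here is a minimal back-of-the-envelope sketch in Python; every price and rate in it is an assumption for illustration, not a measurement.

```python
# Illustrative only: how many always-on agents a fixed daily budget buys
# if cost per token keeps falling. Every rate below is an assumption, not data.

def affordable_agents(years, budget_usd_per_day=1.0,
                      cost_per_m_tokens=2.0,           # assumed starting $/1M tokens
                      annual_cost_decline=0.30,        # assumed 30% cheaper per year
                      tokens_per_agent_per_day=50_000):
    cost = cost_per_m_tokens * (1 - annual_cost_decline) ** years
    daily_cost_per_agent = tokens_per_agent_per_day / 1e6 * cost
    return int(budget_usd_per_day / daily_cost_per_agent)

for y in (0, 5, 10, 15):
    print(f"year {y:2d}: ~{affordable_agents(y):,} agents per $1/day")
```

Under those assumed numbers, a dollar a day buys tens of always-on agents today and a couple of thousand around the 15-year mark; "millions" would need several more orders of magnitude of cost decline.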
That is exactly the mistake: performance improvement in AI is not the same as in traditional computing, and most people get this wrong. It is actually quite the opposite: a marginal increase in multimodal intelligence needs far more energy than mediocre single-track AI.
Sara Smith 1 week ago
Don't know. No idea. I need to think about that.
The Transformer architecture is not the only way to do AI, and it is still not energy-efficient, so there is a lot of room for optimization. Remember when DeepSeek was released? Nvidia's stock price suddenly crashed because people realized you can achieve comparable results simply by optimizing the software, without expensive H200 GPUs with 141 GB of HBM3e memory and 4.8 TB/s of bandwidth.
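To put the "room for optimization" point in numbers, here is a minimal sketch of the memory arithmetic; the model dimensions are assumptions for a generic 70B dense model, not DeepSeek's or Nvidia's actual figures.

```python
# Illustrative memory arithmetic: why software choices (precision, context length)
# change which GPU you need. Parameter counts and precisions are assumptions
# for a generic 70B dense model, not any specific system's real figures.

def memory_gb(params_b=70, bytes_per_weight=2,
              layers=80, kv_heads=8, head_dim=128,
              context_tokens=8192, kv_bytes=2):
    weights = params_b * 1e9 * bytes_per_weight                              # model weights
    kv_cache = 2 * layers * kv_heads * head_dim * context_tokens * kv_bytes  # K and V
    return (weights + kv_cache) / 1e9

print(f"fp16 weights, fp16 KV cache : {memory_gb():.0f} GB")
print(f"4-bit weights, 8-bit KV cache: {memory_gb(bytes_per_weight=0.5, kv_bytes=1):.0f} GB")
```

Under those assumptions, quantizing the weights and the KV cache moves the same model from flagship-accelerator territory down to something a single mid-range card can hold, which is the kind of software-side gain being described.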
So everyone becomes unable to think for themselves and needs bots to control how they think. Wow, sounds like utopia.
What power? That many agents would need to run in clouds that are not under your control. Everyone would get a little slice, but the agents belong to someone else. Don't be too sure they aren't designed to deeply surveil and control what you think, and even what you do, when you don't own them. And by the way, in the US and many other countries, corporations MUST hand over pretty much everything they know about you on demand. The power of such a setup likely flows against you, not for you.
ngold 6 days ago
How are you going to run decent LLMs in such volumes for pennies? Hell, how are you even going to run millions of 7B models for pennies? This looks like a complete fantasy to me.
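The back-of-the-envelope arithmetic behind that objection, with every number an assumption rather than a benchmark:

```python
# Rough feasibility check for "millions of agents for pennies".
# Every number here is an assumption for illustration, not a benchmark.

AGENTS = 1_000_000
TOKENS_PER_SECOND_PER_AGENT = 5       # assumed average decode rate each agent needs
GPU_THROUGHPUT_TOKENS_PER_S = 20_000  # assumed batched 7B-model throughput per GPU
GPU_COST_PER_HOUR_USD = 2.00          # assumed rental price

gpus_needed = AGENTS * TOKENS_PER_SECOND_PER_AGENT / GPU_THROUGHPUT_TOKENS_PER_S
cost_per_hour = gpus_needed * GPU_COST_PER_HOUR_USD

print(f"GPUs needed for one person's agents: {gpus_needed:,.0f}")
print(f"Cost per person: ${cost_per_hour:,.0f}/hour")
```

Even with those generous assumptions, keeping a million small agents continuously busy works out to hundreds of dollars per hour per person, not pennies, unless most of them sit idle most of the time.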
We didn't even have LLMs three years ago, so lots of things can still happen. Most researchers agree that the current way of building and using LLMs is not its final form; in fact, many folks I know don't even think we will be using "LLMs" anymore, and there will be a new way of building this. Just last month, MIT published a paper on an approach that removes the context-size limit completely by letting the model ask for the context it needs instead of uploading all the instructions together with a huge context file. We don't even have truly distributed LLMs yet. There is so much to come.
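A hypothetical sketch of the general "ask for context on demand" idea; the toy store, the NEED: protocol, fetch_context and ask_model are all invented placeholders, not the actual interface from the MIT paper.

```python
# Hypothetical sketch of "the model asks for context on demand" instead of
# receiving one huge prompt up front. The toy store, the NEED:/answer protocol
# and fetch_context() are invented for illustration only.

def fetch_context(query: str, store: dict) -> str:
    """Toy retrieval: pick the stored chunk whose key shares the most words with the query."""
    return max(store, key=lambda k: len(set(k.split()) & set(query.split())))

def run_agent(task: str, store: dict, ask_model) -> str:
    transcript = [f"TASK: {task}"]
    for _ in range(10):                        # cap the number of context requests
        reply = ask_model("\n".join(transcript))
        if reply.startswith("NEED:"):          # model asks for a specific chunk
            key = fetch_context(reply[5:], store)
            transcript.append(f"CONTEXT[{key}]: {store[key]}")
        else:
            return reply                       # model produced a final answer
    return "gave up after too many context requests"
```

The point is that the prompt only ever holds the task plus the chunks the model actually asked for, rather than the entire context up front.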