This is what we've failed to recognize with AI, in my opinion.
Researchers think there's going to be an exponential, never-ending explosion the moment AI can train its own models better using the scaling principle (a big model produces a smaller model that's better, then compute is used to scale it back up to the original's size, yielding a model better than itself).
But what they don't see is that, *especially* because it's all still dependent on human judgement as to what is "better," and all derived from human content, the gains will diminish fast. Sure, the first few iterations will produce much better models, but the first pass might be a 100% improvement, the second maybe 40%, the third maybe 10%, and so on. It's just not going to scale forever, because there's no way it compounds indefinitely; it seems blatantly obvious it has diminishing returns. You can't start with human-quality material and human judgement and end up with something 10,000x better than any human ever can or could be. That doesn't make sense on a dozen different levels. AI is a probability machine: every time you push it toward what is more probable, you also introduce a little noise, and with each iteration that accumulates until it can't even tell what is "better" and what isn't anymore.
It's like having an AI check its own work. Sure, a lot of the time it catches mistakes, but sometimes it still just says shit that's dumb as fuck, because that's what AI does sometimes.
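To put rough numbers on that diminishing-returns claim, here's a toy sketch (purely illustrative figures, loosely following the 100% / 40% / 10% pattern above; the decay rate and the capability_after helper are my own assumptions, not anything measured):

# Toy sketch: if each self-training round adds a geometrically shrinking gain,
# total capability converges to a finite ceiling instead of exploding.
def capability_after(iterations, first_gain=1.0, decay=0.4):
    # multiply capability by (1 + gain) each round; the gain shrinks by `decay`
    capability, gain = 1.0, first_gain
    for _ in range(iterations):
        capability *= 1.0 + gain
        gain *= decay
    return capability

for n in (1, 2, 3, 5, 10, 50):
    print(n, round(capability_after(n), 3))
# gains of 100%, 40%, 16%, ... multiply out to roughly 3.6x and then stall,
# nowhere near "10,000x better than any human"

The real decay curve is unknown, of course; the point is only that shrinking per-iteration gains compound to a bounded total, not an explosion.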
Replies (4)
both of these posts are too long and i didn’t read them.
we? there have been lots of people saying this since the beginning
The popular refrain is that superintelligence and the birth of AGI are right around the corner. I'm obviously speaking generally here.
Hi
How long have you been into crypto generally and how has your experience been so far?