We don't need precision to see the big picture. Think of something that AI can't do right now, then wait and see how long it takes before it can do it.


Current AIs can't *think*? As in: take information as a foundation and then apply logic to that information to come up with answers to questions. As far as I know, current AIs are merely capable of producing text that has a relatively high probability of being uttered by a human, given the training data. Maybe that lies somewhere on a shared spectrum with thinking/reasoning, but it's a long way off. Or did I miss something, and are there AIs that differ significantly from the LLM approach?
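To make that point concrete, here is a toy sketch (my own illustration, not how real LLMs are built) of "producing statistically likely text": a bigram model that only records which word tends to follow which in its training data, and then emits the most frequent continuation. There is no logic or reasoning anywhere in it, just counting.

```python
import random
from collections import Counter, defaultdict

# Hypothetical toy corpus, just for illustration.
training_text = "the cat sat on the mat the cat ate the fish"

# Count word -> next-word frequencies from the training data.
follows = defaultdict(Counter)
words = training_text.split()
for prev, nxt in zip(words, words[1:]):
    follows[prev][nxt] += 1

def most_likely_next(word):
    """Return the statistically most frequent continuation: pure
    pattern-matching on the training data, no reasoning involved."""
    return follows[word].most_common(1)[0][0]

print(most_likely_next("the"))  # "cat" follows "the" most often above
```

Real LLMs condition on much longer contexts and use learned neural weights rather than raw counts, but the objective is the same shape: predict a plausible next token, which is exactly why the question of whether that amounts to thinking is contested.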