Exactly!
This is the basis of creativity. For example, in science we guess at possible explanations for something and then pick the best one.
Such a simple statement, "do it twelve times and choose the best", but the implications are profound.
It implies that science, math, design, or anything creative can never be "solved", because we can never come up with ALL the possible solutions to a problem, and therefore we can never be sure that there isn't an even better solution we just never thought of.
We can never know that Relativity or Quantum Mechanics is true. We can only know that we haven't come up with anything better.
It's called "Fallibilism", or more simply "Humility".
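Concretely, the "do it twelve times" version is just best-of-n selection. Here's a minimal sketch of that loop; generate_candidate and score are hypothetical stand-ins for whatever generator and evaluator you'd actually use:

```python
import random

# Hypothetical stand-ins: generate_candidate() could be an LLM sample,
# score() could be a critic, a test suite, or an experiment.
def generate_candidate():
    return random.random()

def score(candidate):
    return candidate  # pretend higher is better

# "Do it twelve times and choose the best."
candidates = [generate_candidate() for _ in range(12)]
best = max(candidates, key=score)

# 'best' is only the best of the twelve we happened to generate;
# nothing in the loop rules out a better candidate we never sampled.
print(best)
```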
Going on a rant here, but this is why the Moonshots podcast has been driving me nuts lately. I'm bullish on AI, but every time I hear them say that LLMs are going to "solve all math" or "solve all science", I realize that they never read their Popper.
i haven't listened to moonshots, so i can't speak to their predictions. i will say that an llm being able to create a hypothesis and then evaluate it will lead to faster iterations and more progress. this should be especially true of buried science, where the gaps are social/emotional rather than technical/foundational. but even closing those gaps would create such large gains that llms might as well be solving "all of science".
for instance, people build fusion reactors in their garages all the time. farnsworth's design is a simple but ineffective toy. ask an llm about it and it will teach you specifically why it's ineffective. ask a few more questions and you might realize that farnsworth never intended it to be constructed the way people do. dig a little deeper and you might find that he would have been intimately familiar with other potential designs that might work much better. finding the right parameters could be painstakingly difficult, but here too an llm can write a warpX simulation to narrow down the design space.
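to make "narrow down the design space" concrete, here's a toy sketch of the kind of loop i mean. run_fusor_sim is a hypothetical placeholder for an actual warpX run, and the parameters and scoring are made up for illustration:

```python
import itertools
import math

def run_fusor_sim(voltage_kv, grid_radius_cm, pressure_mtorr):
    """hypothetical placeholder for a real warpX run; returns a fake
    'fusion rate' score just so the loop is runnable on its own."""
    return math.log(voltage_kv) * grid_radius_cm / (1.0 + abs(pressure_mtorr - 5.0))

voltages = [30, 50, 80]   # kV
radii = [2, 5, 10]        # cm
pressures = [1, 5, 15]    # mTorr

# brute-force the grid and keep the most promising candidate
best = max(
    itertools.product(voltages, radii, pressures),
    key=lambda params: run_fusor_sim(*params),
)
print("best candidate (kV, cm, mTorr):", best)
```

a real version would swap the placeholder for an actual warpX input deck and a smarter search than a grid, but the shape of the loop is the same.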
does it work? i don't know, i'm busy with other things right now. but if it does, if it even gets a tiny bit closer, the implications are indistinguishable from magic. so, i don't think llms will "solve" math and science, but they have the potential to make advances so quickly that you might think they did.