This feels really obvious to me. My mental model for LLM output is an extremely lossy summary of search results.
One clue I spotted: whenever I try to push into new territory, the responses turn into affirmations of how clever I am, with little of substance added.
BREAKING: Apple just proved AI "reasoning" models like Claude, DeepSeek-R1, and o3-mini don't actually reason at all.
They just memorize patterns really well.
Here's what Apple discovered:
(hint: we're not as close to AGI as the hype suggests)

