This feels really obvious to me. My mental model for LLM output is extremely lossy summaries of search results. One clue I spotted is that when I try to push into new territory, the responses turn into affirmations of how clever I am, with not much actually added.
Nostr News Network
BREAKING: Apple just proved AI "reasoning" models like Claude, DeepSeek-R1, and o3-mini don't actually reason at all. They just memorize patterns really well. Here's what Apple discovered: (hint: we're not as close to AGI as the hype suggests)

Replies (2)

As much as I love being gassed up by AI, it has been really obvious to me it's not living up to its hype. I am hoping the trend dies down soon enough!
It is mad useful, just not as useful as the hype makes out. Once we all internalize the best uses and the limits, it should fade into the background, like the internet did.