BREAKING: Apple just proved AI "reasoning" models like Claude, DeepSeek-R1, and o3-mini don't actually reason at all. They just memorize patterns really well. Here's what Apple discovered: (hint: we're not as close to AGI as the hype suggests)

Replies (14)

Stvu 8 months ago
Obvi. They already said this. Read GSM-Symbolic.
If this is news to you, then you are the "AI"
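For anyone who hasn't read it: the core trick in GSM-Symbolic is templating grade-school math problems and resampling the names and numbers, so a model can't lean on memorized surface forms. Here's a minimal sketch of that idea in Python; the template and value ranges are my own illustration, not taken from the paper:

import random

# GSM-Symbolic-style perturbation: turn a GSM8K-style word problem
# into a template, then resample names and numbers. A model that
# truly reasons should answer every variant equally well.
# (Illustrative template and ranges, not from the paper.)

TEMPLATE = ("{name} picks {x} apples on Monday and {y} apples on "
            "Tuesday. How many apples does {name} have in total?")

def make_variant(rng: random.Random) -> tuple[str, int]:
    """Return one perturbed problem and its ground-truth answer."""
    name = rng.choice(["Sophie", "Liam", "Mara", "Ken"])
    x, y = rng.randint(2, 40), rng.randint(2, 40)
    return TEMPLATE.format(name=name, x=x, y=y), x + y

rng = random.Random(0)
for _ in range(3):
    question, answer = make_variant(rng)
    print(question, "->", answer)

The paper's finding was that accuracy drops across such variants even though the underlying arithmetic is unchanged, which is hard to square with genuine reasoning.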
I've been saying this for the past two years: LANGUAGE models model LANGUAGE, NOT REASONING.
This feels really obvious to me. My mental model for LLM output is an extremely lossy summary of search results. One clue I spotted: whenever I try to push into genuinely new territory, the responses turn into affirmations of how clever I am, with not much actually added.
WildBill 8 months ago
Haha just Google the paper title
Honestly, pretty much any time I see someone using good English to lecture someone else about semantics, I assume they know how to use punctuation too. It seems like it would be hard to know a lot about what words mean in English but not how punctuation is supposed to work.