BREAKING: Apple just proved AI "reasoning" models like Claude, DeepSeek-R1, and o3-mini don't actually reason at all.
They just memorize patterns really well.
Here's what Apple discovered:
(hint: we're not as close to AGI as the hype suggests)
I've been saying this for the past two years:
LANGUAGE models model LANGUAGE, NOT REASONING.
This feels really obvious to me. My mental model for LLM output is extremely lossy summaries of search results.
One clue I've spotted: when I try to push into new territory, the responses turn into affirmations of how clever I am, without much actually added.
Honestly, pretty much any time I see someone using good English to lecture someone else about semantics, I assume they know how to use punctuation too. It seems like it would be hard to know a lot about what words mean in English but not how punctuation is supposed to work.