You sound like Chatty without the bullet points.
I sacked him too 😂
You could argue that no LLM reasons, or that a diffusion model reasons.
Reasoning is an abstract human term that we don't really understand; we don't even know we don't understand it, because we take it for granted.
We have to use human terms to describe non-human systems. Doing so grounds them in an existing model we already hold. We then expand that model using the reality it connects to, and only over time do we realise it bears little relation to the understanding we started with.
I choose understanding over technical correctness, because technical correctness is a destination we will approach but never arrive at. Understanding is a binary state.
Replies (1)
“Reasoning” in this case means being able to process the document and make extrapolations like linking 2026 oil prices <=> Iran war. An LLM with token-by-token generation + tools + CoT can do this pretty well, but embedding models are intentionally trained to score similarity, not to make extrapolations like that.
So it is up to either the document writer or the querier to stuff the relevant keywords into the embedded text.
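Rough sketch of what that keyword stuffing looks like in practice. This assumes a sentence-transformers embedding model, and the chunk, keywords, and query are made up for illustration:

```python
# Hypothetical sketch: enrich a chunk with related keywords before embedding,
# so a query about one concept can still retrieve a chunk about another.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # any embedding model works

chunk = "Crude futures for 2026 delivery rose 4% this week."
# Keywords the document writer (or an LLM pre-pass) stuffs alongside the raw text.
keywords = "oil prices, Middle East conflict, Iran war, supply risk"
enriched = f"{chunk}\nKeywords: {keywords}"

query = "How would a war with Iran affect oil prices?"

q, plain, stuffed = model.encode([query, chunk, enriched])
print("plain chunk  :", float(util.cos_sim(q, plain)))
print("stuffed chunk:", float(util.cos_sim(q, stuffed)))
# The stuffed version typically scores higher: the embedding model only matches
# surface similarity, it won't make the oil <=> Iran leap on its own.
```

Same idea works on the query side (expand the query with related terms before embedding) if you can't touch the documents.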