Embedding models don’t use an LLM; they use a dedicated architecture.
And it’s not an LLM reading your documents: your query and your documents are each converted into a vector of numbers, and those vectors are compared.
So it can’t “reason” about how a document’s content relates to your query or to the current context.
Also, OpenAI embeddings are no longer the best available; many open-source models now outperform them on a lot of tasks.
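To make the “vectors being compared” point concrete, here’s a minimal sketch of how embedding-based retrieval typically works. The vectors and document names below are made up for illustration; a real embedding model would produce vectors with hundreds or thousands of dimensions directly from the text. The comparison shown is cosine similarity, the usual choice, though some systems use dot product or Euclidean distance instead.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy 4-dimensional "embeddings" (hypothetical values for illustration).
query_vec = [0.9, 0.1, 0.0, 0.2]
doc_vecs = {
    "doc_about_cats":  [0.8, 0.2, 0.1, 0.3],
    "doc_about_taxes": [0.0, 0.9, 0.7, 0.1],
}

# Retrieval = rank documents by their vector's similarity to the query vector.
# No model ever "reads" the documents at query time; it's pure arithmetic.
ranked = sorted(doc_vecs.items(),
                key=lambda kv: cosine_similarity(query_vec, kv[1]),
                reverse=True)
print(ranked[0][0])  # prints "doc_about_cats"
```

This is why an embedding search can’t apply reasoning or current context: once the texts are reduced to fixed vectors, all that’s left is a geometric comparison.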
Replies (1)
You sound like Chatty without the bullet points.
I sacked him too 😂
You could argue that no LLM reasons, or that a diffusion model reasons.
Reasoning is an abstract human term that we don't really understand; we don't realise we don't understand it because we take it for granted.
We have to use human terms to describe non-human systems. This grounds them in an existing mental model we already hold. We then expand that model using the reality it connects to. Only over time do we realise it had no relation to the understanding we started with.
I choose understanding over technical correctness, because technical correctness is a destination we will approach, but never arrive at. Understanding is a binary state.