I'm not talking about attacking existing providers or me.
With trusted LLMs, the honeypot would be Maple, Venice, etc. How can I know these can be trusted? Yes, they claim they run the LLM in a TEE, and maybe they even do so in a provable way via remote attestation, but a state actor would have no problem doing the same while still reading all the prompts of all its paying users.
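To make the "provable" claim concrete: remote attestation boils down to two client-side checks, sketched below in Python. This is a toy model, not any real provider's protocol; all names are hypothetical, and an HMAC shared secret stands in for the real vendor signature scheme (real quotes, e.g. Intel SGX/TDX, use certificate chains rooted in the hardware vendor's CA).

```python
import hashlib
import hmac

# Assumption/stand-in: in real TEEs the vendor key is asymmetric and
# the client only holds the public half; a shared secret keeps the
# sketch stdlib-only.
VENDOR_ROOT_KEY = b"hypothetical-hardware-vendor-root-key"

def sign_quote(measurement: bytes) -> bytes:
    """The TEE hardware signs a hash ('measurement') of the loaded code."""
    return hmac.new(VENDOR_ROOT_KEY, measurement, hashlib.sha256).digest()

def verify_attestation(measurement: bytes, signature: bytes,
                       expected_measurement: bytes) -> bool:
    # Check 1: did genuine vendor hardware produce this quote?
    sig_ok = hmac.compare_digest(sign_quote(measurement), signature)
    # Check 2: is the enclave running the exact code we audited?
    code_ok = hmac.compare_digest(measurement, expected_measurement)
    return sig_ok and code_ok

# Honest case: provider runs the audited binary, checks pass.
audited = hashlib.sha256(b"audited inference server build").digest()
quote = sign_quote(audited)
print(verify_attestation(audited, quote, audited))
```

The thread's point maps onto check 1: an actor holding (or able to coerce) the vendor's signing key can produce valid quotes for backdoored code, so the client-side verification passes while every prompt is still readable.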
People are willing to pay a premium for extra privacy, so there are companies offering it. But the honeypots have a separate source of funding: the CIA. They can run a legitimate business at a legitimate price and look legitimate through and through, yet all the prompts end up in the CIA's data centers. My vibe coding might just be noise on their end. Bycatch, if you will.
Replies (1)
If they could break the attestation, they would do it for something of value. Inference is much lower value than, say, secure-communications providers built on the same tech. And it's extremely difficult to keep such an exploit secret.
Breaking attestation is extremely valuable. By running an AI provider as a front and exploiting their own infrastructure, they would gain access to what is mostly porn roleplay and idle chat. Burning such an exploit on that would be stupid.
I'm not saying they are not stupid, but it's very unlikely they would waste an exploit on this.