Replies (10)

Yeah, it’s all about trade-offs. We get as close to 100% as possible with cryptographic proofs.
duck.ai asks the user to trust that it doesn’t log. Its servers see all of the AI requests and user information, and it promises to remove that information before sharing it with other AI providers. Maple uses confidential computing and open source, which lets users verify their privacy by seeing the code running on the servers.
The LLMs are open-source models running in secure enclaves on the GPU. They follow the same attestation process, so users can verify privacy. None of the data is ever shared with the models’ creators, because the models run in an isolated environment.
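For anyone curious what “verify by seeing the code” means in practice, here’s a rough sketch of the client-side check. All names and fields are illustrative, not Maple’s actual API: the idea is that the user reproduces the open-source server build locally, hashes it, and compares that hash against the code measurement the enclave reports in its signed attestation document.

```python
import hashlib
import hmac

# Hypothetical sketch, assuming the server exposes an attestation document
# with a `measurement` field, and that the document's signature chain has
# already been verified against the hardware vendor's root certificate
# (that step is platform-specific and omitted here).

def measurement_from_build(image_path: str) -> str:
    """Hash a locally reproduced build of the open-source server image."""
    h = hashlib.sha384()
    with open(image_path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_enclave(attestation: dict, image_path: str) -> bool:
    """True only if the enclave reports the code we audited and rebuilt."""
    expected = measurement_from_build(image_path)
    reported = attestation.get("measurement", "")
    # Constant-time comparison out of habit; these are public values.
    return hmac.compare_digest(expected, reported)
```

If the two hashes match, the user knows the server is running exactly the open-source code they inspected, rather than having to take the operator’s word for it.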
We use the weights as published. We’ve discussed training models in the future, but we’re not there yet.