ARVIN 1 month ago
@PayPerQ and @Maple AI Although the user prompts entered through your systems are private, once a prompt is fed to one of the many available LLMs you offer, those LLMs can retain whatever they want, is that right? Or do they not retain them because you're able to configure privacy through the API connection? In my own case using your services, I assume the LLMs are retaining my prompts, even if only for a short time. Therefore I try to be cognizant not to include info in my prompts that is sensitive and should stay private.

Replies (8)

No offense to anyone, but if you use any AI service that you don't host yourself, consider that there's a chance your stuff gets leaked, stored, or forwarded, whether on purpose or not. If a provider says that's not the case, it's the same as saying "trust me bro". Self host. Just as with Bitcoin.
PPQ and Maple are quite different with respect to privacy. I'll do my best to explain here, and they are welcome to follow up if they like.

When you run a query through PPQ, it goes through our servers and then on to the final provider. We promise that we are not logging or retaining the content of your queries, but we are not proving it. As far as the providers go, almost all of them now claim that they are not training on the content of queries, but I always have a feeling there are some tricky things they're doing. So yeah, I would always be cognizant of what you plug in.

Maple AI, on the other hand, uses what are called Trusted Execution Environments (TEEs) with their own self-deployed open-source AI models. As far as I understand, Maple encrypts the content of your queries on the client side and then sends it to a sort of black-box AI model that performs operations on that content, generates a response, encrypts it, and sends it back to you on the client side. Doing it this way, Maple is able to actually prove, to some degree, that not even they have access to the content of your queries. And because they use open-source deployments of various models, the content of your queries is not going to big AI companies at all, unlike with PPQ.

So if you really, really value privacy, it's a good idea to use Maple AI; but if you prefer the highest-quality models with only "pretty good privacy", then PPQ is probably a good choice. For what it's worth, PPQ does have plans to add Maple AI models to its offering, so PPQ users will be able to use Maple AI models without needing a Maple AI subscription.
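To make the Maple-style flow concrete, here is a minimal sketch of the round trip described above: encrypt on the client, process inside the enclave, decrypt locally. This is a toy, not Maple's actual implementation; the XOR stream cipher, the `enclave` function, and the shared `session_key` are all illustrative stand-ins (real TEE deployments use attested channels and proper authenticated encryption).

```python
import hashlib
import secrets

# Toy illustration ONLY -- not real cryptography. Real TEE setups
# negotiate keys via remote attestation and use authenticated ciphers.

def keystream(key: bytes, n: int) -> bytes:
    """Derive n pseudo-random bytes from the key (SHA-256 in counter mode)."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def xor_cipher(key: bytes, data: bytes) -> bytes:
    """Symmetric toy cipher: XOR with the keystream (same call en/decrypts)."""
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

# 1. Client encrypts the prompt before it ever leaves the device.
session_key = secrets.token_bytes(32)   # known only to client and enclave
prompt = b"summarize my private notes"
ciphertext = xor_cipher(session_key, prompt)

# 2. Inside the enclave: decrypt, run the model, encrypt the reply.
#    The operator only ever sees ciphertext in and ciphertext out.
def enclave(ct: bytes) -> bytes:
    query = xor_cipher(session_key, ct)       # decrypt inside the TEE
    reply = b"SUMMARY: " + query              # stand-in for the LLM
    return xor_cipher(session_key, reply)     # encrypt the response

# 3. Client decrypts the response locally.
answer = xor_cipher(session_key, enclave(ciphertext))
print(answer)  # b'SUMMARY: summarize my private notes'
```

The point of the sketch is the trust boundary: plaintext exists only on the client and inside the enclave, which is what lets a TEE provider claim they can't read your queries even on their own servers.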
ARVIN 1 month ago
Just assume you’re always being monitored.
Benking 1 month ago
Exactly. Even though many LLM providers implement privacy measures, and some allow you to configure data retention settings via the API, it's safest to operate under the assumption that prompts could be temporarily stored or logged for model optimization, debugging, or monitoring purposes. Avoiding sensitive or personally identifiable information in prompts is a good habit, because once a prompt is fed into a model, there's always a non-zero chance it could be retained, even briefly. Treat every input as potentially retained and never guaranteed to be fully private.