Has anybody used venice.ai? They claim to offer private AI and you can pay in Bitcoin, but I'm wondering how true/provable the "private" bit is.
#asknostr
Replies (10)
Not provable.
If privacy of your inference matters and the choice is between a company that doesn't even claim to be private and one that at least makes a pinky promise, I'd give the latter a try.
If you want to be fully private, run ollama on Apple Silicon.
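To give an idea of what that looks like in practice, here's a minimal sketch of talking to a local Ollama instance over its HTTP API on localhost. It assumes you've already installed Ollama and pulled a model (the model name "llama3" is just an example); nothing leaves your machine:

```python
# Minimal local-only query against Ollama's HTTP API (localhost:11434).
# Assumes Ollama is running and a model has been pulled, e.g.: ollama pull llama3
import json
import urllib.request

def ask(prompt: str, model: str = "llama3") -> str:
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps({"model": model, "prompt": prompt, "stream": False}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        # With stream=False, Ollama returns one JSON object with the full answer.
        return json.loads(resp.read())["response"]

print(ask("Why does local inference keep prompts private?"))
```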
BTW, I have Venice lifetime accounts for sale, if you don't mind a somewhat inconvenient login or want to use the API (though we don't know whether API access will stay free for Pro users after the beta).
https://hackyourself.io/product/venice-lifetime-pro-account/
Also try PPQ.ai
Thanks for the explanation. I expected as much, given that it's not open source.
I do have Apple Silicon, but I really wanted something self-hosted that's accessible from mobile. Can't justify the cost of building my own AI box at home at the moment.
Will take a look at the lifetime Pro. I have a MetaMask wallet, so I should be able to get it set up.
Well, open source wouldn't guarantee anything if they run it on their servers; you have no way to verify what they're actually running even if the code were open.
Yes, but it adds another level of transparency.
Not really.
If they want to make good on their pinky promise, they can right now.
If they don't, being open source won't stop them at all. They can log whatever they want, anywhere on their infrastructure.
But with open source, at least someone (myself, for example) could take what they've done and run it how I want. Although that brings me back to my problem of not currently having the right hardware.
You can run ollama today.
They have not done anything special for running the models. The models themselves are open source. The value of the service is that they run the models on their infrastructure. If you want to run it yourself, there are plenty of better solutions like ollama, and many frontends such as Open WebUI.
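Part of why those frontends are interchangeable: Ollama also exposes an OpenAI-compatible endpoint at /v1, so most existing clients can point at your local box instead of a hosted service. A hedged sketch, assuming `pip install openai` and a pulled model (again, "llama3" is just an example):

```python
# Sketch: use the OpenAI Python SDK against Ollama's OpenAI-compatible endpoint.
# The api_key is required by the SDK but unused by a local Ollama server.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")
reply = client.chat.completions.create(
    model="llama3",
    messages=[{"role": "user", "content": "Summarize why self-hosting helps privacy."}],
)
print(reply.choices[0].message.content)
```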

Yes, I know venice.ai doesn't have any magic sauce. My main constraint is hardware. I have a home server but no capable GPU. A stopgap could be running it locally on my MacBook, but I'd love to have something that's accessible anywhere.