All frontier models are great. It's not that you depend on any single one; what matters is being able to switch. That's why I prefer opencode to Claude Code/Codex.
64GB VRAM is shit for inference. You can run hardware-attested, end-to-end encrypted inference in the cloud.
> You can run hardware-attested, end-to-end encrypted inference in the cloud.
That's ... not really an option for sensitive data. It's very naive to trust those providers' promises. Sure, you can maybe establish a secure channel to their TEE, but a TEE in some far-away data center can be compromised without you ever having a chance to find out. TEE vendors are notoriously secretive about their vulnerabilities and rely heavily on security by obscurity. And reading out that RAM has been done before.