Hi @PABLOF7z, the more I think about your idea for a marketplace, the more it makes sense to run federated inference on top of it. Please see a brief description of what I had in mind at nostri.ai.
Jimmy
jimmy@nostr.land
npub16yky...rq7r
'Tis I! Bravo!
Hi everyone, I am researching options for running neural network models on different platforms. I can post what I find here if there is any interest.
It seems that the most flexible option is using Apache TVM (github.com/apache/tvm) to compile ONNX models to WebAssembly or ARM via LLVM.
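For anyone curious, here is a minimal sketch of what that looks like with TVM's Python API. The model path, input name, and shape are placeholders for whatever model you actually use; I show an ARM target here since compiling to WebAssembly additionally needs the Emscripten toolchain and TVM's web runtime.

```python
import onnx
import tvm
from tvm import relay

# Load the ONNX model (path and input signature are placeholders)
onnx_model = onnx.load("model.onnx")
shape_dict = {"input": (1, 3, 224, 224)}

# Import the ONNX graph into TVM's Relay IR
mod, params = relay.frontend.from_onnx(onnx_model, shape_dict)

# Cross-compile for 64-bit ARM via LLVM; swap the triple for other targets
target = "llvm -mtriple=aarch64-linux-gnu"
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target=target, params=params)

# Export a .tar of object files so linking can happen on the device side
lib.export_library("model.tar")
```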
I am working on a proof of concept for my nostr project where the aim is to develop a framework for federated inference over relays and a supporting marketplace. So the first challenge is to figure out how to run training and/or tuning on any device.