Melvin Carvalho
_@melvincarvalho.com
npub1melv...5c24
Mathematician and Web Developer
melvincarvalho 10 months ago
Scaling Nostr -- by Sondre Bjellas: "What I want to show here is that without Nostr users having to run a new client, or publish a new event for a DID Document, this DID Document is constructed from existing data on Nostr relays. The magic happens in the bootstrapping, which is all about kind:10002." Read more:
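[Editor's note: a minimal sketch of the bootstrapping idea described above, assuming a did:nostr mapping where the hex pubkey becomes the DID identifier and the kind:10002 (NIP-65) relay list supplies service endpoints. The DID Document field and type names here are illustrative assumptions, not taken from the post or any spec.]

```typescript
// Sketch: derive a DID Document from an existing kind:10002 relay-list event,
// with no new client or new event required. Event shape follows NIP-01/NIP-65.
interface NostrEvent {
  pubkey: string;   // 32-byte hex public key
  kind: number;     // 10002 = relay list metadata (NIP-65)
  tags: string[][]; // ["r", "<relay url>", "read" | "write"?]
}

function didDocumentFromRelayList(event: NostrEvent) {
  if (event.kind !== 10002) throw new Error("expected a kind:10002 event");

  const did = `did:nostr:${event.pubkey}`; // assumed identifier mapping
  const relays = event.tags
    .filter((t) => t[0] === "r" && t[1])
    .map((t) => t[1]);

  return {
    "@context": "https://www.w3.org/ns/did/v1",
    id: did,
    verificationMethod: [
      {
        id: `${did}#key-0`,
        type: "SchnorrVerificationKey2023", // hypothetical type name
        controller: did,
        publicKeyHex: event.pubkey,
      },
    ],
    // Relay URLs become service endpoints, so a resolver knows where
    // to fetch the user's other events.
    service: relays.map((url, i) => ({
      id: `${did}#relay-${i}`,
      type: "NostrRelay", // hypothetical service type
      serviceEndpoint: url,
    })),
  };
}
```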
melvincarvalho 10 months ago
First sign of independent developers implementing #didnostr ... love to see it!
melvincarvalho 10 months ago
Using this already, but: "NoSTR" is not a standard term or acronym, I'll assume it's not a widely adopted or recognized architectural style. If you could provide more context or clarify what "NoSTR" refers to, I'd be happy to try and help with a comparison.
melvincarvalho 10 months ago
At this point it is possible to build an open "super app", with better AI, for about 100x-1000x less than Space Karen Musk is spending. It would probably need startup-level investment, and 9 out of 10 attempts would fail, but the 1 that makes it could be a game-changer.
melvincarvalho 10 months ago
deepseek = best model, good, cheap
llama = boomer model, good, cheap
claude = coders with bloat, good, inexpensive
xai = karen tech, expensive, privacy invasive
openai = office tech, very expensive, privacy invasive
gemini = google tech, good, price unknown, privacy invasive

I would stick with deepseek and llama for Open projects, and some claude for coding. Of course, everything will change in a month.
melvincarvalho 10 months ago
The Open Source Llama 4 model is now 100x-300x cheaper than the commercial closed offerings, for equal, if not better, quality. And deepseek R2 will drop this month. The future is open!
melvincarvalho 10 months ago
Open-Source Llama 4 cheaper than deepseek? Big if true.
melvincarvalho 10 months ago
Maverick and Goose! Llama 4 Maverick:
- 17B active parameters, 128 experts, 400B total (see the routing sketch after this list).
- 1M token context window.
- Not single-GPU; runs on one H100 DGX host or can be distributed for greater efficiency.
- Outperforms GPT-4o and Gemini 2.0 Flash on coding, reasoning, and multilingual tests at a competitive cost.
- Maintains strong image understanding and grounded reasoning ability.
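[Editor's note: an illustrative sketch of why "17B active parameters" is so much smaller than the "400B total" above. In a mixture-of-experts model, a router picks only a few of the 128 experts per token, so most parameters sit idle on any given forward pass. This is a generic top-k routing sketch under that assumption, not Meta's implementation.]

```typescript
// Pick the indices of the k highest-scoring experts for one token.
// With 128 experts but a small k, only a fraction of the total
// parameters (the "active" 17B of 400B) is used per token.
function topKExperts(gateLogits: number[], k: number): number[] {
  return gateLogits
    .map((logit, index) => ({ logit, index }))
    .sort((a, b) => b.logit - a.logit)
    .slice(0, k)
    .map((e) => e.index);
}

// Example: 128 experts, router activates 1 routed expert per token
// (k = 1 is an assumption for illustration).
const logits = Array.from({ length: 128 }, () => Math.random());
console.log(topKExperts(logits, 1)); // e.g. [42]
```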
melvincarvalho 10 months ago
Midjourney v7 short, "Mercs", by the incredible Dave Clark, using the excellent nostr.build