Thread

Zero-JS Hypermedia Browser

Relays: 5
Replies: 11
Generated: 00:30:49
I've used a bunch of different clients since I first got on the Nostr network in 2021, and many have evolved, so it's a constant game of switching clients to find what works and fits best for daily use. R&D is by its nature rough, and we are still in the R&D phase for everything. I find myself using Primal less and less unless it has some feature that other clients don't support. Having a team and capital to do the work that other projects can't afford was once a big advantage for eating market share, but AI may soon even the playing field: AI tools could reduce the capital investment needed to do the same amount of labor. Even so, it will still be a strategic game in how efficiently one deploys both machine and human resources and time.
2025-11-06 22:38:21 from 1 relay(s) ↑ Parent 2 replies ↓

Replies (11)

I'm not so sure about the AI coding thing, but who knows. Some human labour seems to be a requirement for that to work, because a lot of this stuff has to actually be thought about: you need to reason about what your client needs to do, and then build the software to do that thing. You actively need to ask/query particular things from particular places/relays, and it's the combination of what to ask for, where, and when that is non-trivial. Perhaps once there are a number of working examples of this stuff out there, plus plenty of written content about the ideas behind it, some AI will have enough to latch onto and figure it out. But as it stands, it's not going to figure it out on its own.
2025-11-06 23:18:06 from 1 relay(s) ↑ Parent 1 replies ↓ Reply
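For concreteness, the relay queries described above are NIP-01 REQ subscriptions: a JSON array sent over a WebSocket to each relay. A minimal sketch of building one (the helper name and parameters are illustrative, not from any particular client):

```python
import json

def build_req(sub_id, kinds=None, authors=None, since=None, limit=None):
    """Build a Nostr NIP-01 REQ message: ["REQ", <sub_id>, <filter>]."""
    f = {}
    if kinds is not None:
        f["kinds"] = kinds      # e.g. [1] for short text notes
    if authors is not None:
        f["authors"] = authors  # hex pubkeys whose events we want
    if since is not None:
        f["since"] = since      # unix timestamp lower bound
    if limit is not None:
        f["limit"] = limit      # cap on events returned initially
    return json.dumps(["REQ", sub_id, f])

# Each relay gets its own subscription; deciding which filters
# go to which relays, and when, is the non-trivial part.
msg = build_req("feed", kinds=[1], limit=20)
```

Composing these per-relay filters (and merging/deduplicating the results) is exactly the "what to ask for, where, and when" problem the post is pointing at.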
I agree with your sentiment, but I also want to shed light on the fact that using LLMs, MCPs, templates, agents, and sub-agents has accelerated my building time 3-5x. Using the tools is a skill that needs to be cultivated; it's not going to happen overnight. It also doesn't mean we don't need to know the fundamentals; it's actually more important than ever to be able to review code, understand software architecture, and be selectively critical. Maybe I see it differently because I'm knee deep, but I sure feel a massive difference between Nov '24 and Nov '25. It's not like we're in the same decade anymore. And I can't walk out my front door without hearing someone casually talk about tokens or LLMs; I live in AI valley. One thing I know for sure: whoever isn't using these tools is most definitely going to be left behind.
2025-11-07 00:25:51 from 1 relay(s) ↑ Parent 1 replies ↓ Reply
yeah, strategy is everything. this is an arms race. we are building a reusable set of tools and systems that let us replace centralized ones and beat them on efficiency and cost. the LLMs will gradually become part of that. little mini-pc boxes are arriving this winter with 128gb of memory that can run full reasoning/visual models. imo the projected power apocalypse would be fixed just by decentralizing AI server locations. skynet also needs that centralized pattern, but it's only a matter of time before common domestic hardware levels the playing field between us and the big corporates as well. eventually scaling techniques will get better and we will just stack these things on our desk and have our own pocket einstein over ContextVM.
2025-11-07 20:39:46 from 1 relay(s) ↑ Parent 1 replies ↓ Reply