There's ye olde argument that the economics of electricity and chips mean duplication of labour vis-à-vis crawling and indexing must be aggressively minimised for a wider solution to achieve any sort of long-term viability. If you've got dozens of Nostr clients all individually crawling and indexing the same relays (as the basis for each client's 'pick-your-own-algo' feature-slash-unfortunate necessity), that represents quite some potential heat loss overall. Friendly sharing can help but, outside of the right incentive structure, might be hard to extend beyond the early days. An interesting take on that challenge here:

Replies (1)

I will have a look. The way I see things now is that each individual client (or a user running multiple clients, for that matter) won't have to perform such exercises over and over again each time. Running such an operation should result in a product (simply put, a list of events), which can then be used by others. Also, these operations can vary in depth and width, adjusting to the use case with respect to available compute and bandwidth. At @npub18zsu...8aap we call this type of operation a 'pulse': a ripple through the mess of events out there, guided by a construct of biases on npubs and lists.

In any event, I guess my main argument would be that computational efficiency is irrelevant because, due to spam, data curation (signal/noise differentiation) will be the #1 challenge, and I'd argue the only way to tackle that is in a distributed manner (i.e. relying on a network, and networks of networks, of people applying sensemaking for themselves). Any walled garden will either be too limited or overrun by weeds, with nothing in between.
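Purely as an illustration of the idea described above (and not the actual implementation behind the project mentioned), here is a minimal TypeScript sketch of what a 'pulse' might look like: a depth- and width-bounded sweep over events, guided by per-npub bias weights, that yields a plain list of events other clients could reuse instead of re-crawling. All names (`Event`, `PulseConfig`, `pulse`), the use of "p" tags for expansion, and the weight-decay factor are assumptions made for the sketch.

```typescript
// Hypothetical sketch of a bias-guided "pulse" over a set of Nostr-like events.
// In a real client the events would come from relays; here they sit in memory.

interface Event {
  id: string;
  pubkey: string;      // author's key (hex form of an npub)
  created_at: number;
  kind: number;
  tags: string[][];
  content: string;
}

interface PulseConfig {
  biases: Map<string, number>; // per-npub weight, e.g. derived from follow lists
  depth: number;               // how many hops to ripple outward
  width: number;               // max events kept per hop (compute/bandwidth knob)
}

function pulse(pool: Event[], cfg: PulseConfig): Event[] {
  const kept: Event[] = [];
  let frontier = new Set(cfg.biases.keys()); // start from the biased npubs

  for (let hop = 0; hop < cfg.depth; hop++) {
    // Keep the highest-bias events authored by the current frontier.
    const hopEvents = pool
      .filter(ev => frontier.has(ev.pubkey))
      .sort((a, b) => (cfg.biases.get(b.pubkey) ?? 0) - (cfg.biases.get(a.pubkey) ?? 0))
      .slice(0, cfg.width);

    kept.push(...hopEvents);

    // Expand the frontier via "p" tags (mentioned npubs), decaying their weight
    // so the ripple fades with distance. The 0.5 decay is an arbitrary choice.
    const next = new Set<string>();
    for (const ev of hopEvents) {
      for (const [tag, value] of ev.tags) {
        if (tag === "p" && value && !cfg.biases.has(value)) {
          cfg.biases.set(value, (cfg.biases.get(ev.pubkey) ?? 0) * 0.5);
          next.add(value);
        }
      }
    }
    frontier = next;
  }

  // The product: a reusable list of events, so the work isn't repeated elsewhere.
  return kept;
}
```

The point of the sketch is the shape of the output: a shareable list of events, produced once under explicit depth/width limits, rather than every client crawling the same relays independently.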