The optimization works in four phases:

- Phase 1: discover relays for each user (NIP-65, NIP-02, activity-based)
- Phase 2: build relay clusters as a `Map<relay, Set<pubkey>>`
- Phase 3: for each relay cluster, make 3 optimized batch queries
- Phase 4: distribute results back to individual users, with deduplication
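Phases 2 and 3 can be sketched roughly as below. This is a minimal illustration, not an implementation from any library; the type aliases and function names (`buildClusters`, `batchFilter`) are made up for the example, and the filter shape follows the standard NIP-01 REQ filter.

```typescript
type Pubkey = string;
type RelayUrl = string;

// Phase 2: invert per-user relay lists into relay -> authors clusters,
// so each relay is queried once for all users that read/write there.
function buildClusters(
  relaysByUser: Map<Pubkey, RelayUrl[]>
): Map<RelayUrl, Set<Pubkey>> {
  const clusters = new Map<RelayUrl, Set<Pubkey>>();
  for (const [pubkey, relays] of relaysByUser) {
    for (const relay of relays) {
      if (!clusters.has(relay)) clusters.set(relay, new Set());
      clusters.get(relay)!.add(pubkey);
    }
  }
  return clusters;
}

// Phase 3: one batched NIP-01 filter per cluster instead of one REQ
// per user (kind 1 = text note; adjust kinds/limit to taste).
function batchFilter(authors: Set<Pubkey>) {
  return { kinds: [1], authors: [...authors], limit: 100 };
}
```

Deduplication in Phase 4 then just means keying received events by event id before fanning them back out to users.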

Replies (1)

The discovery should probably just happen once a day in a backend job, and each user is then served a cached map based on their existing list. Basically, your backend should already know 99% of all npubs' outbox relays without checking per user. When a user shows up with a follow list, you should be able to return a cluster map nearly instantly, then REQ the latest note (limit 1 per author) from every relay at once. A browser can easily handle 50 websockets.
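The cached-lookup flow described above might look like the sketch below. The cache shape and function names are assumptions for illustration, not an existing API. One detail worth noting: in a NIP-01 REQ, `limit` applies to the whole filter, so "latest note, limit 1 per author" needs one filter per author rather than a single filter with `limit: 1`.

```typescript
type Pubkey = string;
type RelayUrl = string;

// Refreshed once a day by the backend job: pubkey -> known outbox relays.
const outboxCache = new Map<Pubkey, RelayUrl[]>();

// Serve a cluster map for an incoming follow list straight from the cache,
// with no per-user relay discovery on the hot path.
function clustersForFollows(follows: Pubkey[]): Map<RelayUrl, Pubkey[]> {
  const clusters = new Map<RelayUrl, Pubkey[]>();
  for (const pk of follows) {
    for (const relay of outboxCache.get(pk) ?? []) {
      if (!clusters.has(relay)) clusters.set(relay, []);
      clusters.get(relay)!.push(pk);
    }
  }
  return clusters;
}

// One REQ per relay, with one filter per author so each author's single
// latest note comes back (a lone filter with limit 1 would return only
// one event total across all authors).
function latestNoteFilters(authors: Pubkey[]) {
  return authors.map((a) => ({ kinds: [1], authors: [a], limit: 1 }));
}
```

Authors missing from the cache simply produce no cluster entry here; in practice those few would fall back to on-demand discovery.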