My biggest concern is how we will solve active synchronization of blobs. Currently it is user centric, i.e. a user, or an app on behalf of a user, can upload and duplicate blobs on multiple servers.

1. With the current API, blobs can be listed on a per-user basis. Compared to a nostr relay, where anyone can get a feed of events and publish them to any relay, a blossom server doesn't have a feed of new blobs. It would be hard to duplicate content without a user (npub) focus. (Still wondering if we need that.)
2. I think we might need a way to check if a blob is on a server without downloading it and without doing a lookup on upstream servers. I'm thinking it may be useful to add HEAD /<sha256> to the spec for that (see the sketch after this list).
3. We might need to solve a circular dependency problem in how the "download" from upstream servers works. If we have blossom servers pointing to each other in a circular way, pulling blobs from an upstream server might end up in an endless loop.
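A minimal sketch of what the check in point 2 could look like from a client's perspective, assuming the HEAD /<sha256> endpoint exists on the server; the server URL and hash below are placeholders, not real resources:

```ts
// Check whether a blob exists on a Blossom server without downloading it,
// using a HEAD request against /<sha256>.
async function hasBlob(server: string, sha256: string): Promise<boolean> {
  const res = await fetch(new URL(`/${sha256}`, server), { method: "HEAD" });
  // 200 means the server has the blob; 404 (or anything else) means it does not.
  return res.ok;
}

// Usage (hypothetical server and hash):
// const exists = await hasBlob("https://blossom.example.com", "b1674191a88e...");
```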

Replies (1)

1. The focus on pubkeys is the key here, I think. Without it the servers are left with a pile of miscellaneous blobs which would be difficult to synchronize; with the pubkey, a server can categorize blobs under pubkeys and synchronize per pubkey rather than the whole server. Either way, I'm not sure how much servers should be synchronizing, if at all.
2. Just added the requirement for the HEAD /<sha256> endpoint, easy win: https://github.com/hzrd149/blossom/blob/master/Server.md#head-sha256---has-blob
3. The HEAD /<sha256> endpoint could probably be a good solution to this. It would allow a client to check if the blob exists before requesting it (see the sketch below). Either way, the concept of "upstream servers" is only implemented in my blossom-server and I don't expect every implementation to have it.
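One hedged sketch of how an implementation with "upstream servers" could avoid the endless-loop problem from point 3: track which servers have already been asked for a given hash (or cap the hop count) so a ring of servers pointing at each other terminates. This is only an illustration, not how blossom-server actually does it, and the fetch logic and URLs are assumptions.

```ts
// Pull a blob from a list of upstream servers, skipping any server we have
// already asked for this sha256 so circular upstream configs cannot loop forever.
async function fetchFromUpstreams(
  sha256: string,
  upstreams: string[],
  visited: Set<string> = new Set()
): Promise<Blob | null> {
  for (const server of upstreams) {
    if (visited.has(server)) continue; // already asked; skip to break the cycle
    visited.add(server);

    // Cheap existence check first (HEAD /<sha256>), then download on a hit.
    const head = await fetch(new URL(`/${sha256}`, server), { method: "HEAD" });
    if (head.ok) {
      const res = await fetch(new URL(`/${sha256}`, server));
      if (res.ok) return await res.blob();
    }
  }
  return null; // not found on any reachable upstream
}
```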