I think no one has tried to write an antispam relay policy that just excludes notes from people that have been muted by others in the extended network of someone. It should be a pretty seamless addition to any of the current very public relays.
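The proposed policy could be sketched roughly as follows. This is only an illustrative toy, assuming mute and follow relationships have already been fetched into plain dicts (a real relay would derive them from kind-3 follow lists and kind-10000 mute lists in its database); the function names and the two-hop "extended network" definition are my assumptions, not anything from a NIP.

```python
# Hypothetical sketch of the proposed relay policy: reject events whose
# author is muted by enough accounts in the relay operator's extended
# network. All names and the threshold are illustrative.

def build_extended_network(operator: str, follows: dict[str, set[str]]) -> set[str]:
    """Operator's follows plus follows-of-follows (two hops out)."""
    first_hop = follows.get(operator, set())
    network = set(first_hop)
    for pubkey in first_hop:
        network |= follows.get(pubkey, set())
    return network

def is_rejected(author: str, network: set[str],
                mutes: dict[str, set[str]], threshold: int = 3) -> bool:
    """True if at least `threshold` network members mute the author."""
    mute_count = sum(1 for pk in network if author in mutes.get(pk, set()))
    return mute_count >= threshold
```

The threshold and hop depth are exactly the knobs the rest of this thread argues about.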
Replies (44)
Censorship?
I think it's dangerous to put that kind of power into the hands of idiots
The user could pass the relay the list of people he doesn't want to hear from. It could be its own list, a public list, or a set of keywords
Do you think WoT can be determined by algos (like this) alone … where “is_trusted” is ONLY IMPLICITLY derived from user inputs (follow, mute, kind0 data, etc…)?
OR do you think “is trusted” should ALSO BE AN EXPLICIT option for end users?
I ask because (I see) WoT to be so damn important for nostr to scale (without central moderation) beyond the coming flood of bots and bad actors.
I think end users would want to be able (depending on their client settings) to have the last word on whether a followed account “is trusted” or not.
Thoughts?
Would be great if Mute was totally separate from Report Spam.
I have muted many accounts from the #mostr bridge as I already get notifications from them in *other apps*...
How would you know what's a real user vs a dummy user to determine this? It shouldn't be hard to spin up 1k dummy users that mute anyone...
@hodlbod 💜 may be interested to chime in…
Except that I suspect people are using mute lists to clean up their feed.
Instead of unfollowing the person, they keep following them but mute them.
Yes. Flagging someone as spam on a relay based on who has simply muted them is a terrible idea.
Why? If 10 people follow them but 2 mute then they are still good. If 3 mute and just one follows then they're not good.
We're not trying to make a global net of censorship here anyway, it's just a single relay policy.
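The follow/mute ratio described above could be written as a one-liner. A toy sketch, assuming we already have both counts for the account in question; the rule itself (followers must outnumber muters) is just the example given, not a vetted heuristic:

```python
def looks_ok(follower_count: int, mute_count: int) -> bool:
    """Toy version of the rule above: an account is considered fine
    as long as more network members follow it than mute it."""
    return follower_count > mute_count
```

So 10 follows vs 2 mutes passes, while 1 follow vs 3 mutes fails, matching the examples in the reply.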
Primarily implicit, but things like "I trust this person to curate this kind of content for me" could be useful to add additional weighting to WoT calculation. But it has to be something people will bother to maintain.
Clearly you never had to deal with bot farms before.
One can coordinate multiple accounts to target one particular address to mute it or "report for spamming".
coracle has a good WoT algo that solves that problem. could just apply the algo to a relay.
Because people mute others for a variety of reasons. It’s kind of the only tool in the toolbox in some ways.
So I don’t only mute actual spam from bots, I also mute bullies, accounts that post or repost nsfw without tags, and, on rare occasions, accounts that post good stuff but just way too much of it.
I’m sure that all of that except actual spam from bots is sought after by other people, and shouldn’t be counted as spam.
Proof of this: I regularly see people I follow in conversations with people I’ve muted.
It is a bad idea because "mute" is not necessarily used punitively by users. If I mute someone who is extremely active in my feed, it does not necessarily mean I consider them spam. It means that I don't want to see what they're posting at that moment. I frequently do this, and then go back and unmute them later, as a means of managing my feed. If my mute suddenly counts as one-of-x required marks toward someone being considered spam by such a relay, then my mute suddenly becomes punitive instead of the way I intended it. Amethyst behaves similarly to this and it is one reason I would never use it.
It's like when you see downvotes on the most epic music on YT. It's because it popped up on someone's autoplay feed and they didn't want to listen to it right then, not that it's a bad song
Bot farms have become pretty sophisticated and good at disguising fake users as real ones. And with generative AI, they can even fill their timelines with non-repetitive, normal-looking posts until day X.
Although the big (govt-owned) bot farms usually hire real people and pay them several cents per post.
It’s still better than nothing. Try logging in with my npub on Coracle and see how many possible bots you find in my feed.
Hodlbod, I recognize your concerns about maintaining “webs of trust”, and for most people this just “needs to work” without maintenance.
However, I don’t see a realistic (usable and trusted) WoT solution without an affordance for the end user to be the final arbiter in every instance of “is this a trusted follow”.
I imagine something like an “is_trusted” flag could be added to kind3 p tags (after the petname), AND WoT lists COULD still be determined algorithmically without the flag being set. Having the flag set “true” would simply override the WoT algo on a case-by-case basis… the flags could even feed back into the user's own WoT algo as a “bump up” for a follow that “is_trusted” by one of the user's “is_trusted” follows.
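For concreteness, the idea above might look like this. This is purely hypothetical and not part of any NIP: a kind-3 “p” tag is normally `["p", pubkey, relay-hint, petname]`, and the sketch appends an “is_trusted” marker in the next slot; the placeholder pubkey and the helper name are my own.

```python
# Hypothetical extension (not in any NIP): an "is_trusted" marker
# appended to a kind-3 "p" tag after the relay hint and petname.
contact_tag = ["p", "<hex-pubkey>", "wss://relay.example.com", "alice", "is_trusted"]

def explicit_trust(tags: list[list[str]]) -> set[str]:
    """Pubkeys the user has explicitly flagged, overriding the WoT algo."""
    return {t[1] for t in tags
            if t and t[0] == "p" and len(t) > 4 and t[4] == "is_trusted"}
```

A client could union this set with its algorithmic WoT output, giving the explicit flags the “last word” described above.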
WoT is a heuristic, it doesn't need to be exhaustive. One way to state the goal is to remove 98% of what the user doesn't want to see, and produce recommendations that are more than 10% likely to be relevant. Implicit web of trust is enough for that. Explicit additions may be a useful addition, but probably won't ever result in more than a 1% improvement (still huge) over either metric above.
Now that is nice! I’d love to see those numbers.
But there’s more to consider in a real world “trusted” WoT implementation (at scale, on a network whose new user retention will entirely depend on them being “included” at day one into a “trusted” network) than a quantified difference between algo derived WoT lists and “human in charge” WoT lists.
Because WoT will likely end up first in line during new user orientation, and will be talked about constantly (custom content filters are already a claim to fame for Bluesky, and Nostr's will be even better), the tools for implementing WoT will need to be front and center too. Easy to understand, discover, and use.
IMHO, giving people an “is trusted” checkbox for their follows, and saying “this controls your web of trust”, will be the ideal on-ramp for getting them to understand that (client-configurable) content filters are ALSO working behind the scenes, suggesting their “trustworthy” sources.
What I’m pointing out is a UX flow for getting people to make use of WoT, so that nostr can survive the long haul. This is the “other thing to consider”.
Doesn't matter, you are just one person. Your actions may even be totally random; they will end up being balanced by others' actions.
What is this philosophizing about YouTube downvotes here?
I'm not claiming originality! Just throwing out the idea.
We have a variation of this working on-device in Nos now. If someone reports a user for spam, and their friends have “Use reports from my follows” turned on, then the spam doesn't show up in those friends' feeds.
However, you may not always agree with your friends, so you need a way to say “yes, I want to pay attention to your mutes, but not Jose's.”
So then we need to build an interface for managing that
I'm too midcurve for that. This isn't philosophising, I was trying to summarise @corndalorian's note 👆
Isn't PoW supposed to be a feature to prevent spam? Why aren't we using that more?
“Wouldn’t” is the term, 'cause fiatjaf is only proposing code… and I don’t know how he (or you) would implement it without a NIP as guidance… AND relays and clients should be free to implement “opt in” filters in all different ways.
However, the more important point to make is that algos will never replace human judgement. They may suggest, and even stand in when asked, but they will always fall short.
So, in the end, it doesn’t matter what the filter rule “should do”, because filters “should” be transparent for end users to choose and configure.
Pick a different one, or none at all, to feed the content you desire.
Anything can be tried, and there is no reason not to try this. But I still think it's a bad idea. Suppose it became a very popular relay, what would prevent a few popular and influential accounts from colluding to ensure someone they don't like can't post to that relay? I suppose it would depend on the policies that govern the censorship. My concern is that it creates a "mob rule" situation where others could be unfairly silenced.
And having such a relay policy in the first place assumes that the reason for muting someone is always a negative thing. It is not.
What if a mute doesn't count if you still follow the person? Would that mitigate the issue?
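The mitigation suggested here is easy to express in code. A minimal sketch, assuming per-user mute and follow sets are available; the rationale in the comment restates the objection raised earlier in the thread, and the function name is my own:

```python
def effective_mutes(mutes: set[str], follows: set[str]) -> set[str]:
    """A mute only counts toward spam scoring if the muter does NOT
    also follow the muted account: mute-while-following reads as feed
    management, not as a spam report."""
    return mutes - follows
```

A relay policy would then tally `effective_mutes` per network member instead of raw mute-list entries.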
What is the problem of having a very popular relay that bans people for any reason whatsoever? Nostr is supposed to work under exactly these conditions.
Because in this case the relay's “decision” to ban them is based on the arbitrary actions of other users. There is a difference between a relay having arbitrary rules (such as “no using the word avocado on this relay”) and a relay allowing the arbitrary actions of other users to dictate what/who is and isn't allowed. Maybe there is some more specialized use case for such a relay, so I'm not saying it shouldn't be tried. But I don't think it is a good idea for general use.
@fiatjaf is right. Go for it
@corndalorian be baseless fear mongering
It would be better, but I still don't like using "mute" as the criteria because mute does not equal spam.
lol I am going to mute you now so you will have a negative mark against you when he creates this relay.
so what if lots of people mute
then you might have mute-bots to silence someone
so let's take some sort of weighted graph of follows
great, we've re-invented the algo but with censorship as the goal
perhaps freedom is the answer
let people host whatever they want to host and mute whoever they want to mute
Don't use the relay if you don't like it.
Do it if you have balls.
What if I make a relay that considers a person more of a spammer if they are being followed by more people?
you have a very special way of saying absolutely nothing with a lot of words
Sorry… god made me this way.