I know people are going to fight me on this, but they are wrong. It’s like going around jamming a mesh network signal and drowning it in noise. Is it censorship to stop that jamming signal? No, the jamming is what’s disrupting the network.
I almost never suggest this kind of thing at the relay level, but you *can’t* do this kind of filtering at the client level, since IPs are not exposed to clients. And if we’re pumping megabytes of spam down to clients just so they can filter it, that isn’t really ideal either (a rough sketch of relay-side rate limiting is below).
Again, this is just a *public* relay issue, a fight I’m not willing to give up yet, since you shouldn’t need to pay or KYC to speak.
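As a rough sketch of the relay-side idea described above (purely illustrative; the window and limit values are invented, and this is not anything Damus or any particular relay actually ships), per-IP rate limiting is only possible at the relay because only the relay ever sees the connecting IP:

```python
# Hypothetical per-IP rate limiter a relay could run; thresholds are invented.
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 60   # sliding window length (assumed)
MAX_EVENTS = 30       # max accepted events per IP per window (assumed)

_recent = defaultdict(deque)  # ip -> timestamps of recently accepted events

def allow_event(ip: str) -> bool:
    """Return True if this IP is still under the limit, so the event may be accepted."""
    now = time.time()
    q = _recent[ip]
    while q and now - q[0] > WINDOW_SECONDS:   # drop timestamps outside the window
        q.popleft()
    if len(q) >= MAX_EVENTS:
        return False    # over the limit: reject for now rather than blanket-ban
    q.append(now)
    return True
```

Because the state is keyed by IP and expires with the window, a block like this naturally lasts only as long as the flood, which matches the "temporary for the duration of the attack" point made further down the thread.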
Replies (59)
Exactly, they can use their own or ones filled with spam
IP-level protection is counter-productive outside of rate limits; it has to be native reputation
Email providers 20 years ago would be judged on the quality of their spam filters when attracting customers. In a way they still are: if the filtering were bad, you wouldn’t use the service.
how so? It’s the only thing that can stop the current spam on my relay. What is native reputation? I am not doing WoT on my relay; that would block new people, and it’s not the role of a public relay to do that.
I look at this the same way I look at Bitcoin miners having the right to choose their block templates to filter out Ordinals. They are investing their capital (time and money) and have the right to manage it as they see fit.
If a spammer does not like their content being filtered, they are free to start their own relay.
In both situations, everyone is FREE to choose to do what they want. The most important thing is the protocol remains open to ALL to grant that freedom. That's the permissionless special sauce.
Your relay, your rules.
It’s pretty bad Damus doesn’t have an automatic switch for this, at least for turning off reading from public relays. Right now you have to remove them completely, which is dumb
You ARE wrong.
Censorship is WRONG. That is why Nostr was created.
STOP IT
Why not a NIP that would allow relays to ask for a crypto captcha solution before they accept a message?
Something like https://github.com/mCaptcha. That would make it expensive to run spam bots, without annoying normal users.
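For what it’s worth, Nostr already has a proof-of-work NIP (NIP-13), which measures difficulty as the number of leading zero bits in the event id, so it is in the same spirit as a PoW captcha like mCaptcha. A relay-side check along those lines might look roughly like this; the minimum difficulty is an assumption, not a value from any spec:

```python
# Sketch of a relay-side proof-of-work gate; the minimum difficulty is invented.
MIN_DIFFICULTY_BITS = 20  # tune per relay; too low is trivial, too high hurts real users

def leading_zero_bits(hex_digest: str) -> int:
    """Count leading zero bits in a hex-encoded hash (e.g. a Nostr event id)."""
    bits = 0
    for ch in hex_digest:
        nibble = int(ch, 16)
        if nibble == 0:
            bits += 4
        else:
            bits += 4 - nibble.bit_length()
            break
    return bits

def accept_event(event_id_hex: str) -> bool:
    """Accept an event only if its id meets the relay's minimum difficulty."""
    return leading_zero_bits(event_id_hex) >= MIN_DIFFICULTY_BITS
```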
Yeah
A focus on relays would be a nice touch for Damus.
I’d like better control over NIP05 followings but more could be achieved faster with relay filtering.
That’s filtering up there, not censorship
Are you really prepared to do the captcha selection of traffic lights 🚥🚦 and buses 🚌🚍 each time you want to post something? Nope, not me.
@Giacomo Zucco had a great talk on this in Vegas in the context of Bitcoin. You need to filter, or else your system will be overrun by people trying to bring it down
I think people get triggered by the word without thinking hard about what is actually being said and what properties we actually want the system to have.
it’s easy for spammers to get a nip05 :( they’ve done it in the past
A rose by any other name...censorship is censorship. Calling it "filtering" is silly...
If you run an app for kids built on nostr, censorship is more than welcome.
Use a relay which is not censored 🤷♀️ I really don't see a problem here.
Let me guess, bcasher
yeah the most important thing is making sure clients give users control over all the knobs so they can associate with whatever relays they want. This is why I always want to have manual mode in addition to relay autopilot.
The Bitcoin protocol is censorship resistant, but users can choose to use Coinbase which requires KYC and sets limits on transactions.
The Nostr protocol is censorship resistant, but users can choose to interact with relays or clients which filter in a way which improves their experience.
You and @Vitor Pamplona have now said this.
So how?
If they’ve got to use their own domains then that’s friction and cost.
If they want to use a provider then it’s on the provider to filter rather than the client.
there are many free providers, and if you ban one you ban everyone else on that domain
i worry it would be misused by people not realizing they are banning thousands of people at once
Why the fuck should I go through notes and notes of spam? Just because you are free to write it and have relays that will transmit it, it doesn’t mean I need to see it. The beauty of nostr is that users can filter the stuff they don’t want to see.
If someone isn’t behaving on nostr, they go to the shadow realm. If someone is wasting my time I will never see their notes again. The Shadow (Mute) Realm on nostr is an unforgiving place.
And users are thankful for that Will 🙏🏻
Ok.
Forcing Users into paid services (from 1 Sat) or allowing Users to ban whole service providers is the choice.
I’ll take blocking whole service providers 100/100 times. 1 Sat is friction, but when the alternative is at best WoT, then 1 Sat is nothing.
If the service provider can’t filter npubs then what the fuck are they charging Sats for?! Useless providers get 0 Sats rather than 1.
You don’t have to control everything, Will. It’s ok to palm off certain functionality.
Right now you’re disallowing me from banning any NIP05 providers by not giving me this filtering, which I’d much rather have than not.
I guess you didn’t look at the link I posted. Crypto puzzles are solved automatically by the app without human involvement, no traffic lights and no buses.
Agree COMPLETELY. We need simple tools so that INDIVIDUALS can make their own choices.
Look at it this way--if all USERS simply block spam on their own by muting, then we will (over time) extinguish spam.
The solution is to make better tools for INDIVIDUALS to use, and to not censor at a global (relay) level.
Calling everything censorship is confusing and a waste of time. Using the term in its broadest sense leaves so much room for confusion that you'd be better off using a different term. And that may just be the case at this point.
For example:
Government putting you in jail for saying you hate war is censorship.
X removing child pornography is censorship, in the broadest sense. But it’s also illegal and immoral material.
Let's say someone on X posts a video detailing how much they hate some particular race and want them to all die, for example. This isn't necessarily illegal in the US without some threat, but that doesn't mean that it isn't a violation of the service's terms of use or moral standards. Removing it is censorship in a broad sense, but it is their right and I'd guess a large portion of X users and customers would prefer to not associate with such content.
I prefer to use censorship in the context of force, such as government force against pure speech or thought. The lines are a lot more clear. Private property vs Government force.
Nostr clients and relays should absolutely be able to filter certain objectionable content. I would not use Nostr if I had to be subjected to whatever anyone anywhere wanted to post. They can run their own relay and client if they wish, but that's the freedom aspect to me. I don't value the freedom to subject anyone to anything at any time. That isn't what free speech even means, and I don't think telling someone to simply not use Nostr is a strategic response. That's a great way to go nowhere fast and end up associated with something that becomes known as a place for child pornography creators and the like to hang out. Good luck with that.
What gets filtered will always be debated, and my opinion is that Nostr is beautiful because we can all choose what level of filtration is right for us. I think this issue will be a bigger focus as more people join. It has to be. I've already seen things using certain clients that were annoying, and other things that were absolutely disgusting and immoral. I would have already dipped if that was the case no matter what I did as a user.
Some baseline safety is going to be required for most people to comfortably use Nostr, and that means there will always be some 'censorship' involved. It isn't always bad, and having it as an option isn't a bad thing either. What is bad is having no control as a user one way or the other, which is what I think Nostr will actually solve.
This is not for you to worry about.
By all means document your concern. If the change fucks up you are covered.
But I don’t use Damus because of your thought processes on what filters are best. I didn’t even know about some filters until 2 days ago so how are you the best person to control this when you can’t even distribute info to your paid users like me?
If I want to block a NIP05 wholesale then that’s on me.
You disallowing this because you think you know better, when you can’t even convey functionality to your paid users, is pretty stupid, mate.
The design of apps greatly affects the overall level of censorship in the network. Clients have lots of power here, so design has to be a consideration unless you want everyone banning everyone clientside.
I think nip05 banning is pretty dubious and not even reliable; it creates worse effects than it solves
At the click of a button, Damus users could ban all Primal users. If you want that, then you might as well use Mastodon.
This should be obvious, but I’m not spending my time on Nostr so that you have more ability to censor the network on my behalf.
My ideal is you have zero censorship capability and I have 100% censorship capability *within my own feed*.
So yes; I want everyone banning everyone clientside. I want that clearly and directly, not algorithmically.
I’d rather someone accidentally censor me directly than them leave it up to a client dev to interpret their intention.
Why are you interfering with censorship? I want to control censorship - I never chose to give you that power. I don’t care what your opinions of my own censorship list are, that’s my list and you are irrelevant to it.
IP filters will eventually block new people as well: those using popular VPNs, ISP-level IPv4 gateways, Tor exit relays, college campuses, the list goes on... attackers can unfortunately get their hands on IPs just as easily as they can generate new keys
Soft forms of WoT via invite links or PoW seem more promising
WoT on relays is way worse. IP blocks would be temporary for the duration of the attack
Oh, excuse my ignorance 🙂 I did check it, but the PoW can easily be gamed if the difficulty is set too low. If it’s set too high, we all have a problem posting.
What this is doing is just randomly guessing numbers, and when it hits one with enough zeros then it’s a pass, and that can easily be manipulated.
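That description is roughly right: the puzzle is a brute-force search for a nonce whose hash has enough leading zero bits. A toy version of the loop (illustrative only, not any NIP’s exact wire format; the payload and encoding here are made up) shows why the difficulty setting is the whole trade-off: expected work is about 2^difficulty hashes, so each extra bit doubles the cost for a spammer and for an honest poster alike.

```python
# Toy "guess numbers until enough leading zeros" loop; payload/encoding are illustrative.
import hashlib
from itertools import count

def mine(payload: bytes, difficulty_bits: int) -> int:
    """Try nonces until sha256(payload + nonce) starts with `difficulty_bits` zero bits."""
    for nonce in count():
        digest = hashlib.sha256(payload + str(nonce).encode()).digest()
        if int.from_bytes(digest, "big") >> (256 - difficulty_bits) == 0:
            return nonce

# 16 bits finishes in a fraction of a second; 24+ bits starts costing a bot farm real CPU.
print(mine(b"hello nostr", 16))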
That would be interesting 🤔
ok, but how am I ever supposed to mute the entire npub spam army? There seem to be many spam accounts, and more each day. Doesn’t client-side “mute” seem kind of futile? Spammers can spam at scale, but how do I mute at scale?
Yeah, that’s fine for short-term rate-limiting defense against DoS, but doing anything reputational with IPs beyond that will hurt new users even worse in the long run
Tiering works well in many environments. For example, you can still allow new users with no friction but impose a basic rate limit (or kind restrictions), with upgrades to those permissions achievable via a PoW flow, a WoT-invite flow, or just good-behavior heuristics over time
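A hypothetical sketch of that tiering idea; the tier names, limits, and upgrade rules below are all assumptions for illustration, not an existing relay feature:

```python
# Invented tier table: new pubkeys are allowed but throttled; upgrades relax the limits.
from dataclasses import dataclass
from typing import Optional, Set

@dataclass
class Tier:
    events_per_minute: int
    allowed_kinds: Optional[Set[int]]  # None means all event kinds

TIERS = {
    "new":     Tier(events_per_minute=5,   allowed_kinds={1}),   # text notes only
    "proven":  Tier(events_per_minute=60,  allowed_kinds=None),  # passed PoW or has an invite
    "trusted": Tier(events_per_minute=600, allowed_kinds=None),  # long record of good behavior
}

def tier_for(did_pow: bool, has_invite: bool, days_of_good_behavior: int) -> str:
    """Pick a tier based on the upgrade paths mentioned above."""
    if days_of_good_behavior >= 30:
        return "trusted"
    if did_pow or has_invite:
        return "proven"
    return "new"
```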
Make the best product you can, everything else is secondary…
I think the trick might be to scale the PoW based on the number of notes coming from an IP 🤔
yeah it’s not a bad idea
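A minimal sketch of that scaling idea, assuming the relay already tracks a per-IP note count over a recent window; all of the constants are invented:

```python
# Difficulty ramps with how chatty an IP has been recently; quiet users pay almost nothing.
BASE_BITS = 8    # near-free for a normal poster
STEP = 50        # every 50 recent notes from the same IP adds one bit of difficulty
MAX_BITS = 28    # cap so a flood can't lock legitimate users behind huge PoW forever

def required_difficulty(recent_notes_from_ip: int) -> int:
    """More recent notes from an IP means more leading zero bits are required."""
    return min(BASE_BITS + recent_notes_from_ip // STEP, MAX_BITS)
```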
Ok pedophile
That's exactly right. And it's me, not anybody else, who decides what is spam and who goes to the shadow realm. With tools on my client that I control, not anywhere else under someone else's control.
Spam isn't speech, speech is order, spam is chaos. Allow free speech, not spam.
I think Nostr was created to be censorship-resistant. That is to say, no matter how much a relay or client may choose to censor content, users are not limited by those censorship choices. Even if all relays and/or all clients chose to censor particular content, anyone is free to spin up their own relay and/or client instances to circumvent those censorship choices of others.
i.e. I think Nostr was created to render the value judgement of censorship moot.
Aye aye
The devil is in the details. How "wrong" censoring spam is really depends on how it is done. How much decentralization was given up in the process? How many legitimate users were silenced inadvertently? This is a delicate issue, if it's to be handled optimally.
Oh, no argument--it's a VERY complex issue--and "filtering" is one solution--
Yet it relies on relay operators then not censoring other things...and that's when it gets sticky...
It's really the exact problem Facebook, Twitter, etc. are faced with--how much censorship is "ok"? And who gets to decide?
Yes, anyone CAN spin up their own relay, but do we really want to make that our freedom of speech proposition?
"Hey folks, Nostr is great--but you have to spin up your own relay to make it work if you don't want to be censored"
One of our greatest (current) problems is ease of use...I'd say going down the "spin up your own relay" road isn't really our best answer...
Exactly my point--it's not easy--so we (users) need better tools from the developers to enable user moderation of content (i.e., change the channel)
That's where development efforts should be focused--enabling the USER to control (and choose) their own content, in an easy, simple, expedient way.
And yes, censoring content at the relay level is much easier to implement--yet my concern is that it takes us in the wrong direction (i.e., centralization vs. decentralization).
We're better than that--devs, please take up the challenge--Nostr is AMAZING--let's build the tools that will continue to inspire and yet stay true to the core reason Nostr exists--decentralized communication and freedom of speech!
Hmmm...that certainly furthers the conversation...
Don't conflate "CAN spin up their own relay" with "HAVE TO spin up your own relay". Again, only "if all relays" censored you would you be required to spin up your own. That means it would only take one like-minded individual as yourself to spin up their own censorship-free relay so that you (and all other like-minded individuals) wouldn't have to. At worst, this would-be issue presents opportunity (either to you or to anyone else) to cater to you and like-minded individuals.
Yes, I do get it...yet I still believe (STRONGLY) that we need to develop tools that let USERS control and filter their own content--and to not rely on someone else (e.g., relay operators) to do it.
Frankly, the functionality to "focus" a user's feed is really missing from Nostr currently...developing such a framework would help to solve both problems.
Decentralization is the primary core tenet of Nostr--and any "filtering" should be decentralized as well.
"we need to develop tools that let USERS control and filter their own content"
This is where we agree. Ultimately, users ought to be able to choose a stream of content to their liking (i.e. choose a set of relays) AND have the tools available to further curate that stream to suit their preferences with reasonable ease. I see no reason to assume that work isn't being done to eventually achieve this.
gotcha, well I guess the challenge is doing that on the client. I don’t see how it’s possible yet. But nostr still works; we can all chill out and live with the spam while we discuss solutions. I hope devs don’t view spam as a failure and feel they have to rush a fix in
Agree--it won't be easy--but--if we *can* put content control in the hands of the users we'll have taken a GIANT LEAP forward with Nostr--
People ask "why not just use Twitter, or Mastodon or..." and "when will Nostr adoption reach a critical mass?"
Well, user-controlled content is the "killer app" that will answer both questions...