We’re excited to release a new version of @nos with a bunch of bug fixes. The biggest change is a reworking of the much-maligned @Reportinator with a new-generation bot we’re calling @Tagr-bot. Users of Nos, or anybody else, can now send an encrypted report to Tagr; it will read it, check the content against a moderation AI, and also ask the Nos team to take a look. If we and our AI agree, we’ll issue a kind 1984 report event for that content.

This solves two problems. First, it puts a human in the loop to check, approve, or remove reports from Tagr. Second, it lets Nostr users submit a report to a third party in cases where they don’t want to be associated with the report. Often somebody who’s the subject of harassment doesn’t want to label their harasser, because it only provokes them. This provides a way of asking Tagr to look at it and apply a label if it’s appropriate.

I’m sure there are folks who will hate the existence of content labels and reports. The report part is required by Apple and Google. The content labeling using a Web of Trust is how we can make Nostr work as a permissionless, decentralized network that’s both free and also safe for many kinds of people and communities. If we don’t figure this out, then most people will retreat to centralized platforms. Even 4chan has mods. ;-D

You can read more about Tagr here: And the full release announcement is here:

Replies (25)

It sends an encrypted event to relays for Tagr, which can decrypt it with its nsec. It's the same way encrypted DMs work, just a different event type.
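Conceptually, the flow looks something like the sketch below. This is only an illustration of the envelope shape, not the actual Nos implementation: the real protocol uses NIP-44 encryption derived from an ECDH conversation key, and the bot pubkey and report fields here are hypothetical stand-ins.

```python
import json
import base64


def toy_encrypt(plaintext: str, recipient_pubkey: str) -> str:
    """Stand-in for real NIP-44 encryption: only the bot's key could
    decrypt the real thing; here we just base64-encode to show where
    the ciphertext sits inside the event."""
    return base64.b64encode(plaintext.encode()).decode()


TAGR_PUBKEY = "npub1tagrexample..."  # hypothetical bot pubkey

# Hypothetical report body; only the "encrypted report" idea comes
# from the announcement above.
report = {"reason": "harassment", "reported_event_id": "abc123"}

# The report rides inside an ordinary Nostr event addressed to the bot.
event = {
    "kind": 1059,  # gift-wrap kind, per the NIP-17 flow mentioned later
    "tags": [["p", TAGR_PUBKEY]],  # addressed to the bot
    "content": toy_encrypt(json.dumps(report), TAGR_PUBKEY),
}
```

Relays just see an opaque encrypted event tagged with the bot's pubkey; only Tagr can recover the report.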
someone 1 year ago
Are the kind 1984 events public? Do they go to relay.nos.social only, and are they published by Tagr?
"1984" events. Reported directly to the Ministry of Truth. Thoughtcrime and Doublethink for your daily chocolate squares.
It seems Nos works great if you use half a dozen relays or fewer and follow fewer than 500 people. That’s most users. But we are working on scaling, both on the network side and on performance with bigger social graphs.
I see nothing wrong with that setup. Imagine if we tried to actually connect with those 500, as opposed to just having a high follower count built purely on metrics. 500 people is enough for a WISP mesh-net Bitcoin ecosystem with Nos as a profile. I never understood why people need so many followers.
Is it decentralized if someone from your team is verifying? I am one of those folks who hates labels and reports, after losing my business and social network because I wasn't vaccinated. But is this also only available on Nos? I am using Damus, so will my posts also receive your subjective labels and reports?
Terrible idea. If someone spams 1984 reports on all your posts, you'll end up paying a lot in OpenAI API credits. Also, the other relays might not be interested in your moderation or the values it represents, so you should keep that on your own relay.
It's just a NIP-17 private message with a predefined JSON structure that the bot can understand:

- reporterText: a free-form, human-readable description of why you think it should be analyzed. Useful for potential manual processing.
- reportedEvent: the full event you are flagging.

It may be analyzed only by AI; if the AI doesn't flag it, there's a manual inspection through a Slack channel integration.
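As a rough sketch, the JSON carried in the private message's content might look like this. Only the reporterText and reportedEvent fields come from the description above; every value, and the shape of the inner event, is a hypothetical example.

```python
import json

# Hypothetical report-request payload, matching the two fields
# described above (reporterText, reportedEvent).
payload = {
    "reporterText": "This note looks like targeted harassment.",
    "reportedEvent": {
        # The full flagged event is embedded, so the bot doesn't
        # need to fetch anything. All values here are examples.
        "id": "5c83deadbeef...",
        "pubkey": "e4b2cafe...",
        "kind": 1,
        "created_at": 1700000000,
        "tags": [],
        "content": "offending note text",
        "sig": "a1b2...",
    },
}

# This string would become the content of the NIP-17 private message.
message_content = json.dumps(payload)
```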
There's more detail in the bot code itself. But keep in mind that there's also a Google Pub/Sub side of it that deals with OpenAI (maybe we should merge both in the future). I'd be happy to tell you more about the specifics if you want more info.
I read the architecture diagram from Reportinator; I assume Tagr just sits on top of that. The missing piece was how to integrate it, thanks for explaining!
The reason we didn't use kind 1984 directly is that the semantics are not quite the same. This is a request to the bot for flagging something, so you point to the note or npub and provide a suggestion explaining why. This is because:

- We want our own vocabulary, NIP-69 (https://github.com/nostr-protocol/nips/pull/457), which is also friendly to automated reports. This is evolving, so it’s not marked as final.
- There is a high chance we will choose something different from what is suggested, either from OpenAI's evaluation or our own manual selection. This is also a hint: we are not receiving a report kind; we are receiving a request for a report with different semantics than kind 1984.
- Allowing user-provided inputs and just relaying them blindly for the anonymity functionality would make the Tagr bot appear to have a schizophrenic "personality." For example, both the victim of harassment and the harasser could use it, and from an outside perspective, the bot would appear to ban everything randomly without a consistent set of rules. The idea is to have many bots tailored to different preferences, and you follow those you like.

But I like the idea of a new kind. We are using plain JSON just because it was the more flexible initial approach while developing the service. Right now, we do the communication through NIP-17, so it's a kind 1059 wrapping a kind 13 wrapping a kind 14, with our custom JSON in the content. I think what you propose is to have a new kind different from kind 14, right? Let's say kind 1986, "Report Request"?

Currently, I like that we are fully wrapping the reported note inside the payload, so we don't need to fetch anything, but it’s not a big deal to have it as an e-tag for more consistency, although that increases the chances of not finding it. So we'd be using NIP-59 wrapping this new kind. It would not be signed, for the same reasons kind 14 is not, and also because all the requester identity comes from the kind 13, and we'd be repeating info.
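The kind 1059 → kind 13 → kind 14 nesting described above can be sketched structurally like this. Real gift wraps encrypt each inner layer with NIP-44 and sign the outer event with an ephemeral key; here the layers are left in the clear (plain JSON strings) purely to show the shape, and all pubkey values are placeholders.

```python
import json

# Innermost layer: the unsigned "rumor" (kind 14 today; a hypothetical
# kind 1986 "Report Request" in the proposal above) carrying the
# custom JSON payload in its content.
rumor = {
    "kind": 14,
    "tags": [["p", "<tagr-pubkey>"]],
    "content": json.dumps({"reporterText": "...", "reportedEvent": {}}),
}

# Middle layer: the seal (kind 13), signed by the real sender, which
# is where the requester's identity lives.
seal = {
    "kind": 13,
    "content": json.dumps(rumor),  # in reality: NIP-44 ciphertext of the rumor
}

# Outer layer: the gift wrap (kind 1059), signed with an ephemeral key
# so relays can't link it to the sender.
gift_wrap = {
    "kind": 1059,
    "tags": [["p", "<tagr-pubkey>"]],
    "content": json.dumps(seal),  # in reality: NIP-44 ciphertext of the seal
}

# Unwrapping reverses the nesting to recover the payload.
inner = json.loads(json.loads(gift_wrap["content"])["content"])
```

Note how the rumor is unsigned: its authenticity comes from the seal around it, which is exactly why signing a new inner kind would repeat information, as the comment above points out.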