We’re excited to release a new version of @nos with a bunch of bug fixes. The biggest change is a rework of the much-maligned @Reportinator into a new-generation bot we’re calling @Tagr-bot.
Anyone, not just Nos users, can now send an encrypted report to Tagr. The bot reads it, checks the content against a moderation AI, and asks the Nos team to take a look as well. If we and our AI agree, we issue a kind 1984 report event (NIP-56) for that content. This solves two problems. First, it puts a human in the loop who can check, approve, or remove reports coming from Tagr. Second, it lets Nostr users submit a report to a third party when they don’t want to be associated with the report: someone who’s the target of harassment often doesn’t want to label their harasser, because that only provokes them. Instead, they can ask Tagr to look at the content and apply a label if it’s appropriate.
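For the protocol-curious, a kind 1984 report is just a signed Nostr event as described in NIP-56. Here’s a rough sketch of what such an event looks like; the IDs, report type, and content string below are placeholders, and the exact tags Tagr publishes may differ:

    // Sketch of a NIP-56 report event (kind 1984), with placeholder values.
    // The id, pubkey, and sig fields are added when the bot signs the event.
    const report = {
      kind: 1984,
      created_at: Math.floor(Date.now() / 1000),
      tags: [
        ["e", "<reported-event-id>", "other"], // the flagged note plus a NIP-56 report type
        ["p", "<reported-author-pubkey>"],     // the author of that note
      ],
      content: "Reported via Tagr after AI screening and human review",
    };

Clients that respect reports from Tagr can then hide or label the flagged note for their users.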
I’m sure there are folks who will hate the existence of content labels and reports. The report feature is required by Apple and Google. Content labeling using a web of trust is how we can make Nostr work as a permissionless, decentralized network that’s both free and safe for many kinds of people and communities. If we don’t figure this out, most people will retreat to centralized platforms. Even 4chan has mods. ;-D
You can read more about Tagr here:
And the full release announcement is here:

Tagr Bot
Tagr Bot is a Nostr text-only moderation bot that flags unwanted content in your mentions and replies.
