#wordle 1,450 3/6*
🟨⬛🟩⬛⬛
⬛🟨🟩🟩⬛
🟩🟩🟩🟩🟩
light
It's interesting that the bots don't scramble the Wordle
What purpose should they serve anyway?
Just meant that they don't scramble the square emoji scoreboard. Guess they treat emojis differently 🤷‍♂️
I know. I asked about their purpose.
Oh. Obviously just to be a nuisance? May be a blessing in disguise to solve these issues before the inevitable waves of new users show up
I relate to the preemptive mitigation angle, though I wonder who would point compute power at nuisance spam?
These accounts run no scams, as is common with email, and they also seem to post too infrequently for DDoS attacks.
Do translation-type models or algorithms have a way to mark and mute nonsensical text?
Good questions. A small part of me thinks it's a friendly attempt to nudge progress.
Terry Yiu is cooking up something for client-side mitigation, but I haven't looked into it; I just read about it this morning.
@Terry Yiu Are you working on reducing the reply bot issue?
Web of trust seems somewhat risky in hiding valid replies. Especially with the significant prior work Terry did on translation support, I wonder whether he sees a machine learning model angle to filter nonsense or generic content.
Essentially this seems to boil down towards an AI content detection issue.
My PR doesn’t hide any content by default, it only makes replies from outside your trusted network less prominent in threads and that can be turned off. Web of trust scoring can be reliable when combining signals of follows, mute lists, reports, events and tags. Once we get web of trust scoring in Damus, lower signal content won’t be as bothersome or noticeable anymore.
I think there is a future where local models could be applied to filter out nonsense, but we're not quite there yet. It's also a risk in itself, as there could be false positives.
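The web-of-trust scoring described above, combining signals such as follows, mutes, and reports, could look roughly like this sketch. All names, weights, and the sigmoid squashing are illustrative assumptions, not Damus' actual algorithm:

```typescript
// Hypothetical sketch: combine several trust signals into one score.
// Weights and signal names are made up for illustration.

interface TrustSignals {
  followedByTrusted: number; // trusted accounts following this pubkey
  mutedByTrusted: number;    // trusted accounts muting it
  reportedByTrusted: number; // trusted accounts reporting it
}

function trustScore(s: TrustSignals): number {
  // Positive signals raise the score, negative signals lower it.
  const raw = 1.0 * s.followedByTrusted
            - 2.0 * s.mutedByTrusted
            - 3.0 * s.reportedByTrusted;
  // Squash into (0, 1) so thresholds are easy to reason about.
  return 1 / (1 + Math.exp(-raw));
}

// A client could then de-emphasize (not hide) low-scoring replies:
function isLowSignal(s: TrustSignals, threshold = 0.5): boolean {
  return trustScore(s) < threshold;
}
```

The point of squashing into (0, 1) is that a single user-adjustable threshold can decide how prominently a reply is shown, without ever hiding content outright.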
I appreciate the web of trust efforts, though I instinctively feel a bit wary of the unintended echo chamber effects familiar from corporate networks. Am I right to imagine that scoring would behave like a continuously, openly user-trained algorithm?
I do feel conceptually somewhat partial to local, self-sufficient models with no vulnerability to deliberate active manipulation. Conversely, any biases or flaws they have are persistently baked in.
I would actually prefer to have obvious spam, like parrot replies, automatically muted or, at least, excluded from system notifications.
On that topic, does anyone running Damus also get simultaneous local and push system notifications with the current version, effectively double-delivering each one?
I sympathize, as it is annoying for me as well. There are infinite forms of spam; it's basically playing whack-a-mole. It's not scalable to try to tackle each variation individually. Someone will invent a new type that isn't a parrot reply. We'll need other approaches that tackle spam more generally.
I haven’t personally experienced double notifications on Damus.
That’s a valid concern. Yes, there would be an algorithm that could take into account different user-defined weights that are used to compute scores, which can then be used to determine what content to show or how to show it.
That sounds intriguing. Would love to discuss this more concretely some time, or more generally if you pursue that concept further.
I can spontaneously imagine slider settings akin to those in a role-playing game, e.g. reply delay times, post lengths, follow counts, an originality/novelty comparison filter, and overall intelligence markers.
I think the approach could abstract manual and automatic processing in a sensible way.
Thanks for the input.
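The slider idea floated above could map user-facing UI values straight onto scoring weights. A toy sketch, where every field name and the normalization scheme are hypothetical rather than anything Damus or GrapeRank actually implements:

```typescript
// Toy sketch: RPG-style sliders (0-100) normalized into weights for
// a content filter. Field names are invented for illustration.

interface Sliders {
  replyDelay: number;   // weight for suspiciously fast replies
  postLength: number;   // weight for very short / generic posts
  originality: number;  // weight for near-duplicate ("parrot") replies
}

function toWeights(s: Sliders): Record<keyof Sliders, number> {
  // Normalize each 0-100 slider so all weights sum to 1
  // (the "|| 1" guards against all sliders being zero).
  const total = s.replyDelay + s.postLength + s.originality || 1;
  return {
    replyDelay: s.replyDelay / total,
    postLength: s.postLength / total,
    originality: s.originality / total,
  };
}
```

Normalizing to a fixed sum keeps the sliders relative to each other, so cranking one up implicitly tones the others down instead of inflating every score.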
Check out https://grapevine.my
They’ve implemented a version of this.

"Pretty Good Freedom Tech" heh
Thanks for the link!
I don't have that issue, but sometimes notifications are a bit delayed compared to the past (elsat created an issue for that on GitHub, though); I'm on the latest TestFlight.
I got a finished web to generate after a couple of random restarts and refreshes.
Does the test page's process run locally in the browser or server-side? Mine took an estimated 10 to 15 minutes. Does your NIP, or PR, use their framework?
Not sure. @ManiMe can answer about Grapevine.
My Damus PR is unrelated — it uses Damus’ existing naive web of trust which is follows of follows. Grapevine is more sophisticated.
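The naive "follows of follows" web of trust mentioned here is just the set of pubkeys within two follow-hops. A minimal sketch, where the Map-based graph representation is an assumption for illustration, not Damus' actual data model:

```typescript
// Minimal sketch of a naive web of trust: every pubkey within two
// follow-hops of "me". The Map-based graph is an illustrative
// assumption, not Damus' actual data model.

type Pubkey = string;
type FollowGraph = Map<Pubkey, Pubkey[]>;

function followsOfFollows(me: Pubkey, graph: FollowGraph): Set<Pubkey> {
  const trusted = new Set<Pubkey>();
  for (const friend of graph.get(me) ?? []) {
    trusted.add(friend); // direct follows
    for (const fof of graph.get(friend) ?? []) {
      trusted.add(fof); // follows of follows
    }
  }
  trusted.delete(me); // don't count ourselves
  return trusted;
}
```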
The GrapeRank library runs server side. The reason it takes a few minutes is mostly due to network latency, not calculation time. But no worries, it continues to run in the background regardless of local browser state. This allows an implementer (client) to trigger calculation and check back on the status periodically.
The “default” config used by grapevine.my client ingests and normalizes all follow lists (for 6 degrees out) to establish the network, and also all mutes and reports authored within the same network. I think that’s about 2 million events (no duplicates) retrieved live from the network per calculation, when I run it. These events (ingested by the interpreter module) are not (yet) cached between calculations… so it’s pretty heavy network load.
The library itself is modular, pluggable, and configurable. Modular means any implementer (client or relay) can make use of only some or all of the library modules. Pluggable means that additional event kinds can be added for the interpreter to ingest and normalize (currently only follows, mutes, and reports have plugins made, but this is as simple as a single json file). Configurable means that the behavior of both the interpreter (protocols) and the calculator can be customized per request to allow greater sovereignty for the end user.
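The trigger-and-check-back flow described above is a common pattern for long-running server-side jobs. A generic sketch of the client side; the status shape, interval, and function names are hypothetical, not GrapeRank's actual API:

```typescript
// Generic trigger-and-poll pattern for a long-running server-side
// calculation. The response shape and names are hypothetical, not
// GrapeRank's actual API.

type JobStatus = "pending" | "running" | "done";

interface StatusResponse {
  status: JobStatus;
  result?: Record<string, number>; // e.g. pubkey -> score
}

// Poll until the job reports "done", waiting between checks. The
// server keeps running regardless of local browser state, so the
// client can check back whenever it likes.
async function pollUntilDone(
  check: () => Promise<StatusResponse>,
  intervalMs = 5000,
  maxAttempts = 200,
): Promise<StatusResponse> {
  for (let i = 0; i < maxAttempts; i++) {
    const res = await check();
    if (res.status === "done") return res;
    await new Promise((r) => setTimeout(r, intervalMs));
  }
  throw new Error("calculation did not finish in time");
}
```

Because the calculation survives page refreshes, `check` can simply hit a status endpoint each time; the client never needs to hold the job open.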

GitHub: Pretty-Good-Freedom-Tech/graperank-nodejs