I appreciate the web-of-trust efforts, but I instinctively feel a bit wary of the unintended echo-chamber effects familiar from corporate networks. Am I right to imagine that scoring would behave like a continuously, openly user-trained algorithm?
Conceptually, I'm somewhat partial to local, self-sufficient models, which aren't vulnerable to deliberate, active manipulation. On the other hand, any biases or flaws they have remain persistently baked in.
Replies (1)
That’s a valid concern. Yes, there would be an algorithm that takes user-defined weights into account when computing scores, and those scores could then determine which content to show and how to rank it.
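As a rough sketch of what user-defined weights could look like (every name, signal, and threshold here is hypothetical, not the actual design):

```python
# Hypothetical sketch: each user supplies their own weights over trust
# signals; a score is the weighted sum, and content is filtered/ranked by it.

def trust_score(signals: dict[str, float], weights: dict[str, float]) -> float:
    """Combine an item's trust signals using one user's weights."""
    return sum(weights.get(name, 0.0) * value for name, value in signals.items())

def rank_content(items: list[dict], weights: dict[str, float],
                 threshold: float = 0.0) -> list[dict]:
    """Keep items scoring above the user's threshold, highest first."""
    scored = [(trust_score(item["signals"], weights), item) for item in items]
    scored.sort(key=lambda pair: -pair[0])
    return [item for score, item in scored if score > threshold]

# Two users weighting the same content differently see different rankings.
items = [
    {"id": "a", "signals": {"endorsements": 0.9, "proximity": 0.2}},
    {"id": "b", "signals": {"endorsements": 0.1, "proximity": 0.8}},
]
print([it["id"] for it in rank_content(items, {"endorsements": 1.0})])  # ['a', 'b']
print([it["id"] for it in rank_content(items, {"proximity": 1.0})])     # ['b', 'a']
```

Because the weights are per-user rather than global, manipulation of one signal only affects users who chose to weight that signal heavily.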