This introduces interesting attack vectors that will need to be mitigated. E.g. if a person gets hacked, that could push many "scam" apps to the people who follow them.

Replies (9)

You can mitigate this with external code verifiers:
- that get paid to do that 👉 Verifier DVMs
- that pay to do that 👉 early beta access

(and then of course we need key rotation asap, for way more reasons than just this)
I'm assuming that not all people in your web of trust need to verify the app, so there will be some threshold, e.g. "at least 3 people in your WoT with a score over 5 verified app ABC". And then yeah, 2 people could be hacked, especially over the longer term. There are definitely ways to mitigate these issues... A rough sketch of such a threshold check is below.
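A minimal sketch of what that threshold rule could look like, assuming a precomputed per-pubkey WoT score and a flat list of app verification attestations. All names here (`Verification`, `wotScore`, `isAppTrusted`) are hypothetical, not an existing API:

```typescript
// Hypothetical threshold rule: "at least 3 people in my WoT with a score
// over 5 verified app ABC". Types and scores are assumptions for illustration.

interface Verification {
  appId: string;           // identifier of the app being attested to
  verifierPubkey: string;  // pubkey of the person who verified the app
}

// Assumed to be precomputed from the user's follow graph.
type WotScores = Map<string, number>;

function isAppTrusted(
  appId: string,
  verifications: Verification[],
  wotScore: WotScores,
  minScore = 5,
  minVerifiers = 3,
): boolean {
  // Count distinct verifiers of this app whose WoT score clears the bar.
  const trustedVerifiers = new Set(
    verifications
      .filter((v) => v.appId === appId)
      .filter((v) => (wotScore.get(v.verifierPubkey) ?? 0) > minScore)
      .map((v) => v.verifierPubkey),
  );
  return trustedVerifiers.size >= minVerifiers;
}
```

With a rule like this, a single compromised account can only contribute one of the required verifications, so an attacker would need to compromise several well-scored people at once to get a scam app past the threshold.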
The framework that I like to think in is "desirables vs undesirables":
- If the user action is desirable, the user should be rewarded sats.
- If the user gets value from the content, they should be nudged to voluntarily pay sats for it.
- If the action is undesirable, it should cost sats.
Maybe? Are you thinking that when a new person verifies the app, the verification has to be interactively signed (with multisig) by other people? Or are you thinking of co-signing with the store, or with some verification system that is paid to co-verify?
Even if someone you trust gets compromised in that way, the WoT score of the scam would be very low. That's the magic of WoT, and how humans already behave.