RE: Towards a decentralized, abuse resistance framework for the Steem blockchain
I love the idea in general. It's no doubt a complex model which I'd love to see implemented and functioning.
There are a few questions and possibly nonsensical thoughts running through my mind that I'm unable to contain...
A "surveyor" can be any user, right? What about the "analyst"? A group of anonymous credible users?
What if not enough users "play the game" or "survey the posts"? What would be the consensus then?
Would Enforcers wait until the payout day (just before the curation window closes) to adjust post rewards?
If this is implemented, then each and every post should be reviewed. Otherwise it wouldn't be fair: some authors would get away with undeserved rewards while others would have theirs adjusted.
How about simply adding a rating slider to each post? The rater should be kept anonymous, and the author shouldn't be notified of every rating. Rewards could later be adjusted automatically based on the rating value; a suitable algorithm would need to be designed for this. That said, I don't think automatic reward adjustment is even possible without casting a vote, which requires posting-key authorization.
Good questions. Thanks for the feedback!
Yes, the analyst can also be anyone. The idea is that the analysts can pick and choose the surveyors that provide the best information, and the enforcers can do the same with the analysts. Either role can be anonymous or not, but I'd imagine that most people would prefer to use alt accounts to avoid retaliation on their primary accounts.
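To make the data flow a bit more concrete, here's a very rough sketch of what the on-chain records for each role might look like if they were published as custom_json operations. All of the field names and operation ids here are placeholders, not a defined protocol:

```python
# Hypothetical record formats for the three roles, e.g. published as
# custom_json operations. Every field name and id here is a placeholder.

survey_record = {
    "id": "abuse_framework_survey",        # made-up custom_json id
    "surveyor": "alt-account-1",
    "post": "@author/permlink",
    "observations": ["plagiarism_suspected", "vote_ring"],
}

analyst_report = {
    "id": "abuse_framework_report",
    "analyst": "alt-account-2",
    "post": "@author/permlink",
    # analysts pick and choose which surveyors they trust
    "surveys_considered": ["alt-account-1", "alt-account-7"],
    "abuse_score": 0.8,                    # 0 = clean, 1 = clear abuse
}

# An enforcer would read reports from analysts it trusts and decide
# whether (and how hard) to downvote before the payout window closes.
```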
Yeah, the whole framework would only be effective if it drew enough participation. But, fortunately, we wouldn't have to build the whole thing at once. We could start with a game for surveyors, just for fun, then when there's enough data out there analysts could start by reporting on the data. Finally, when there's enough information from the analysts, the enforcers could start participating, too. In this fashion, the development work and adoption could be phased in over time.
Probably. I'd imagine that it would make sense to do their downvoting after 6 to 6½ days.
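As a rough illustration of that timing check (assuming the standard 7-day payout window; the function name and thresholds are just made up):

```python
from datetime import datetime, timedelta, timezone

# Sketch: only act on posts that are between 6 and 6.5 days old,
# i.e. shortly before the 7-day payout window closes.
# The thresholds are illustrative, not part of any protocol.

def in_enforcement_window(created_at, now=None,
                          start=timedelta(days=6),
                          end=timedelta(days=6, hours=12)):
    """True if the post's age falls inside the enforcement window."""
    now = now or datetime.now(timezone.utc)
    age = now - created_at
    return start <= age <= end

created = datetime(2019, 5, 1, 12, 0, tzinfo=timezone.utc)
check_time = created + timedelta(days=6, hours=3)
print(in_enforcement_window(created, check_time))  # True
```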
That would be ideal, but I don't think it's really necessary. Maybe they could root out the worst abusers first, and that would free up time for other content. We're never going to score every post exactly right, but the important part is to gradually get better and better at it. If almost no abusive content gets downvoted (as is the case now), that's not fair to the authors who are producing attractive content. IMO, ignoring all abusive content is even more unfair than ignoring just some of it.
This is definitely something that could be tried. The nice thing about it is that once we have the framework and protocol definition, developers can experiment with all sorts of different possibilities.
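For example, a rating bot along the lines you describe might translate an average rating into a vote weight with something as simple as the sketch below. None of this is an existing API; the names and thresholds are purely hypothetical:

```python
# Hypothetical sketch: map an average post rating (0-10) onto a vote
# weight. Assumes a separate bot account casts the vote with its own
# posting authority, since rewards can't change without an actual vote.

def rating_to_vote_weight(avg_rating, num_ratings, min_ratings=5):
    """Return a vote weight in [-100, 100], or None if too few ratings."""
    if num_ratings < min_ratings:
        return None  # not enough data to act on
    # 5/10 is neutral; below it maps to a downvote, above it to an
    # upvote, scaled linearly to +/-100%.
    return (avg_rating - 5.0) / 5.0 * 100.0

print(rating_to_vote_weight(2.5, 12))  # -50.0 -> a 50% downvote
print(rating_to_vote_weight(8.0, 3))   # None (below the minimum sample)
```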
I agree on these points. Of course, with a public blockchain, it's impossible to keep authors from discovering their ratings if they want to (unless the ratings are encrypted). But I don't see a reason to design that sort of notification into any of the applications.