0000000000000111 - Proposal to Change Focus
I've recently been discussing an idea from @remlaps about how to gamify the battle against abuse on Steemit. Users could identify an act of plagiarism, spam or another form of abuse on Steemit and submit it into a "system". Other users are then presented with the "case" and use their judgement to determine whether something is plagiarised or not.
One of the most significant challenges that @endingplagiarism faced was sifting through the 100+ "mentions" received each day and checking whether something untoward had happened. This could have been anything from a line of text that vaguely resembled another website to a full-on copy and paste. Hours of effort spent checking other people's subjective opinions.
This idea solves that problem.
There were many thoughts shared between us, including a number of challenges that would be faced and below are probably the key difficulties. I'd be interested in community feedback and whether you think that I should prioritise this over the reskin project I have already started working on. The solution could easily merge into a site redesign with a working "Report Post" button in the future.
Challenges
How to distribute awards anonymously - Anonymity when reporting abuse is important to avoid any form of retribution. Using beneficiaries, account transfers or upvotes from an "anti-plagiarism related" account could draw attention and potentially downvotes. The best solution I could think of was to piggyback on the latest steemcurator04-08 initiative and have upvotes blended into that process, disguising the reason for an upvote amongst many other upvotes. (This then posed the challenge of players who don't write content themselves, although they could potentially receive rewards via a beneficiary or transfer - and you can't downvote somebody who doesn't write content.) I believe we'd need Steem Team agreement to go down this route.
The Whitelist - I've come across plenty of accounts where users post content on another blog or their own website. This is easily validated by a human but automation of something like this would be more difficult and if not done sensitively could cause good users to leave.
This feels to me like the kind of website (or shall we call it a dApp?) that I can write with 3 core components:
1. Submission of Abuse
The user can log in (I want to validate that it's a Steemit user submitting the abuse in order to reward them, although I don't know how yet, and the validation will hopefully curb spamming of the site) and submit:
- the Steemit URL that's abusing (comment, post or profile),
- the type of abuse (plagiarism, spam, etc.),
- the source (if plagiarism) and any additional details.
This is stored in a database (the login credentials are never saved and only the Posting Key will be accepted).
2. Review of the Abuse
The users playing the game will also need to log in (for the same reasons as above and to prevent multiple votes on an item) and will be presented with the "abuse case". They will have 3 options to choose from:
- Vote Abuse - if they believe it to be abuse
- Undecided - if they're undecided whether it's abuse or not
- Vote Non-Abuse - if they don't believe it's abuse.
The idea is that each "player" has a rating which affects the vote in a certain way. For example, a player with reputation 25 (out of 100) who votes "abuse" will increase that post's "abuse score" by 0.25 whereas a player with reputation 50 will increase it by 0.5. A player's reputation can increase or decrease based upon the outcome of each post along with some "test" cases that are known to be plagiarised or not - i.e. if people blindly vote, their reputation will decrease to 1 and their vote (and therefore rewards) become irrelevant. The reputation and scoring will likely be complex and I've not figured out how it could work yet (but this is my initial thinking) - it will not be linked to Steemit's reputation and will be stored privately to avoid retribution (but could be used to calculate reward distribution).
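The weighted-vote mechanic above could be sketched as follows. This is a minimal illustration, not the final scoring design (which the post says is still being worked out); the function names and the 0-100 reputation scale mapping directly to a 0-1 weight are assumptions taken from the examples given.

```python
# Sketch of reputation-weighted voting: a player's 0-100 reputation
# becomes a 0-1 weight applied to the case's "abuse score".

def vote_weight(reputation: int) -> float:
    """Map a 0-100 reputation to a vote weight (rep 50 -> 0.5)."""
    return reputation / 100.0

def apply_vote(abuse_score: float, reputation: int, choice: str) -> float:
    """Adjust a case's abuse score by one player's weighted vote."""
    if choice == "abuse":
        return abuse_score + vote_weight(reputation)
    if choice == "non-abuse":
        return abuse_score - vote_weight(reputation)
    return abuse_score  # "undecided" leaves the score unchanged

score = 0.0
score = apply_vote(score, 25, "abuse")      # +0.25, as in the example above
score = apply_vote(score, 50, "abuse")      # +0.50
score = apply_vote(score, 40, "non-abuse")  # -0.40
```

Whether "non-abuse" votes should subtract symmetrically, as assumed here, is one of the open design questions.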
3. Reporting the Abuse
Once enough users have reviewed a post, it will be included in a daily (or bi-daily) post detailing the current status of content that's been submitted as abusive with its "abuse rating". There will be a threshold for when posts are included (e.g. rating starts at 0 and if it reaches +5, it is included) - a smaller threshold could also be used to trigger a comment on the post to alert the community moderators (if the post is within a community) that a post within their community has been flagged as suspicious. Any replies to this comment will be included in the game (i.e. the post author or community admins / moderators have the opportunity to present a "defence").
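The two-tier threshold described above might look something like this. The +5 report threshold comes from the example in the post; the alert threshold value and the action names are assumptions for illustration.

```python
# Hypothetical thresholds: a case triggers a moderator-alert comment at a
# lower score and enters the daily report once it reaches +5.

REPORT_THRESHOLD = 5.0  # score at which a case enters the daily report
ALERT_THRESHOLD = 2.5   # assumed smaller threshold for a moderator alert

def reporting_actions(abuse_score: float) -> list[str]:
    """Return the reporting actions triggered by a case's current score."""
    actions = []
    if abuse_score >= ALERT_THRESHOLD:
        actions.append("alert-community-moderators")
    if abuse_score >= REPORT_THRESHOLD:
        actions.append("include-in-daily-report")
    return actions
```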
My intention is to use the @endingplagiarism or @plagiaristpayout account to share the reports within the Mosquito Squishers community and also to post any necessary messages to suspicious posts (for the reason outlined above).
This could also allow automation of a persistent offenders report as opposed to the manual editing of the "Consolidated List of Plagiarists".
Future Possibilities
As I mentioned before, there's potential to include abuse reporting within the reskin I'm working on to streamline Component 1.
There's also potential to expand this idea to include another @remlaps idea of quorum sensing to downvote posts. Instead of relying upon @ac-cheetah to manually visit a post or any "blind" downvoting, the report could include a link to "register your downvoting interest" which would store your username and posting key and once the necessary threshold is reached, the post will receive a mass of downvotes to avoid individual retribution. Please read @remlaps' post for a better explanation.
I've spent too much time writing and proof-reading this post and I'm grateful to anybody who's reached this point (assuming you kept reading this far).
What Do You Think?
Does this sound like a good and (importantly) fair approach? Does the idea make sense and will it improve the Steemit platform? Would you submit abuse and "play the game"? What are your thoughts on the challenges of anonymity and "whitelisting"? Should I prioritise this over the reskin?
These are a few questions I thought of but please add your own along with any thoughts on the idea and how it could develop going into the future.
So I think the very first thing we should do is to implement any idea that will at least take us a step further in the - probably never-ending - fight against copy-asses, farmer riff-raff, milking machines and other abusers.
This abuse is the first thing that catches your eye when you come across the Steem with your eyes a little open. A nicer front end will not embellish this first impression. We have a functional frontend - it's enough to chase all the shit into the net.
We have to demonstrate - also to the outside world - will and cohesion. Then we will have earned a beautiful frontend at some point... :-))
With the front-end under my control, I can hide the shit 🙂 A bit like the work that Steemit's doing to remove posts upvoted by bots from the Trending page (I wonder when we'll see this live🤔), I can unilaterally decide not to show some users. Or, I could put a big warning signal next to users who abuse the platform so that people can hide their content. Or replace their content with "This post is probably plagiarised" or something else. So many options once I have all of the power. Ha ha ha ha ha.
Aha, so this is where I find you 😉
VgA
Always where something's happening, where things are going on... 😉
Yes, yes, Steem is more than YOU - even if not much more... 😂
Yes, I read every word of your article.
What do you think?
It is important to see that there are people interested in developing methods that keep the ecosystem clean of plagiarism and spam. I congratulate you for working on it and encourage you not to give up but to take more encouragement. (I say "more" because your phrase "assuming you kept reading this far" suggests you wrote it thinking some people would not be interested in reading to the end.)
Does this sound like a good and (importantly) fair approach?
I think so, if it solves the problem of manually reviewing subjective complaints, and once the reward is defined many will be motivated to participate. Mainly, though, I see the method of multiple negative votes and anonymity as very positive - that way people can report more freely.
Does the idea make sense and will it improve the Steemit platform?
It makes a lot of sense and, if implemented successfully, it will of course bring a great improvement. Although from my personal point of view, what really hurts the economy of the system is the use of voting bots - but that is not considered an abuse here, so it will remain a harmful cancer.
Would you submit abuse and "play the game"?
If I detect any article that meets the conditions to be reported as abuse, of course I would report it.
What are your thoughts on the challenges of anonymity and "whitelisting"?
I know very little about programming but I'm sure it represents a tough challenge - especially the part where an author writes their article somewhere else. Creating an algorithm that automatically detects that it's the same author writing on Steemit, and that it's not plagiarism, doesn't sound like an easy task.
This has been a touchy subject ever since I joined and something I've learned to accept won't change. The requirement that a user has to post every day to maximise their return is a recipe for abuse.
I'm glad you liked @remlaps' idea and are considering developing it! In regards to anonymity, I was thinking that as long as it is not known who is judging which content, it wouldn't really matter if a certain user is known to be "playing the game" because of transfers or whatever - but I could be wrong. I think the goal is to have so much content in the game, given randomly to players, that a user abusing the system wouldn't know who to target.
In regards to development, I think both are important concepts, and I'm hopeful that other front ends than just Steemit will emerge. But it's really your call!
The anonymity's tricky. I've just replied to somebody else with the thought that if I were angry that I'd been caught and had the power to intimidate others, I'd target the players who have received the most rewards, with the intention of stopping them from playing - slowly killing the game for those involved. I might be overthinking it: the people that @endingplagiarism caught either ignored it completely, found a new way of scamming, or stopped - it was rare that the account got downvoted.
I really liked the idea of the game and I would definitely take part in it. But if you are interested in the opinion of an average user, I would advise you to finish the new frontend first. Developing the game and adjusting its rules will require a lot of time and work. We have been fighting plagiarism for 6 years and will fight for another 100, so there is time to create the game.
The new frontend is also an extremely important project. If a certain circle of users knows about the fight against plagiarism, the new frontend will immediately catch the eye of any visitor. It will show that the platform is alive and well, that there are developers who can do a lot.
As for the game, there is still something to think about, especially the reward. It is very difficult to come up with a topic every day for an interesting post that can attract votes. Plus, you need to write, design, and search for or create images. All this takes a lot of time and effort. And here is a game where you are offered a post and simply have to conclude whether it is plagiarism. We are all human beings who by nature look for the easy way, so in my opinion a huge number of people will rush to play the game. Will there be enough rewards for everyone? What reward do you expect? Definitely not at your own expense. Therefore, the reward should be given by the steemcurator accounts. Are you sure they are interested in this? I think that the priority for the SC01 team is to support as many authors as possible, i.e. to curate content. Yes, they support the fight against plagiarism, but it should not require too many resources. Hence the suggestion that the game should be played by a certain circle of verified users.
I don't think the game itself would be very complex based upon the 3 components I mention. Submitting a post into a database would be easy to write. Presenting it to the user would also be straightforward. And generating a report based upon the votes would be fairly easy too. There's a little complexity in creating the player reputation, but a simple model where it starts at 25, increases by 1 with each "win", decreases by 5 with each "attempted cheat" and decreases by 1 with each "loss" - and a similar rating system for people who submit posts - would be OK to start with. Logins will be the user's Steemit username and Posting Key (with the key not stored anywhere) so authentication will be easy too.
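That starting model is simple enough to sketch directly. The deltas below come from the comment above; the floor of 1 matches the earlier point that blind voters decay to a reputation of 1, while the cap of 100 is an assumption borrowed from the "out of 100" scale mentioned earlier.

```python
# Minimal sketch of the proposed reputation model: start at 25,
# +1 per "win", -5 per "attempted cheat", -1 per "loss".

REP_DELTAS = {"win": 1, "cheat": -5, "loss": -1}

def update_reputation(rep: int, outcome: str) -> int:
    """Apply one game outcome to a player's reputation, clamped to 1-100."""
    return max(1, min(100, rep + REP_DELTAS[outcome]))

rep = 25
for outcome in ("win", "win", "cheat", "loss"):
    rep = update_reputation(rep, outcome)  # 25 +1 +1 -5 -1 = 21
```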
The rewards are the difficult bit which is where I thought that piggy-backing on the steemcurator04-08 initiative could work well (as it would also hide the "winners"). The rewards could work similarly to TipU, bitSports or other "games" where the rewards are linked to the post value (in this case, the reporting posts sharing details of abuse). This would also "incentivise" people to upvote the reports so that the reward pool for playing the game would be higher.
Another option, if we can get access to the funds that have been accumulating within steem.dao, is a proposal to reward users who fight against plagiarism. If approved, it could provide a constant source of funds to be split amongst players (5 or 10 SBD per day, for example).
In this case, the more players that play, the better (in my opinion), as there'd be a stronger consensus of opinion, increased awareness of abusive users and an increased probability of the community taking action - which in the long term should reduce abuse.
I must admit, the implementation of your plan would probably completely solve the issue of plagiarism and fraud on the platform - or rather, reduce it to an acceptable level. I think you can combine these two projects into one and do not need to make a choice.
As for funding, I do not believe that the DAO will be open. Alternatively, the fight against plagiarism could be financed through beneficiary payments to members of the platform.
Bid bots were mentioned here in the comments. If the Steemit team was against bid bots, they could use DAO funds to lower the price of SBD and increase the price of STEEM. This would make buying votes less profitable, or unprofitable at all. Although this will not affect those who delegate their SP to bid bots. I think the Steemit team is afraid to touch the bots in order not to provoke a massive Power Down and outflow of users, and thus the possible death of the platform. So I also had to come to terms with this phenomenon.
I fully trust your views and advise to make as many changes as possible without undue discussion, because people will always have different opinions.
By the way, did you get information about Steemit.com traffic? We once talked about it.
Same here. Upvoting services like TipU allow users with little power to get big upvotes - making it harder to stop (until ac-cheetah became so powerful). UpVu presents a different challenge: the delegations to it mean that users can't upvote each other, so if everybody did it (and why wouldn't they?) there'd be little incentive to engage with other users or post anything at all. You could probably work out what percentage of Steem is currently with voting bots and I wouldn't be surprised if this percentage continues to increase whilst users reinvest their profit.
Thanks. I'm not sure that I trust them 🤣
Sadly not. I asked and my request was ignored. Which makes me wonder if whoever has the login details was in the original Steemit team that left a couple of years ago.
I plan to make a new calculation and compare with the previous one. I am sure that voting bots will increase their dominance.
I hope you include Google Analytics code in your new interface, although it may slow down the page load by a few milliseconds.
Because so many users on Steemit and the like value their anonymity, I think there'll be an unusually high number that use Brave or a browser that prevents tracking of their activity so I'll probably need some fairly decent server logging instead (or as well). I've not really thought about tracking yet so will need to think about what information would be useful to collect and why.
As a community admin and also a mod, I think it's a good idea to fight plagiarism. But I think it should be done one at a time 😍. I would prefer that you prioritise the reskin over the plagiarism checker, as many other users have taken up the responsibility of fighting plagiarism.
Does this mean that for a user who is constantly buying votes and also plagiarising, his posts will be downvoted instantly? Will the new mechanism be able to show other users the names of the steemians who have submitted a post for scrutiny?
Most definitely I will be interested in playing the game, but I think if it is fun and rewarding, too many users will want to participate in it.
And because it's a bot doing the checking, will there be a second-party check to make sure that the bot is not mistaken? (As you know, technology cannot always be trusted, as we have seen with other checkers.)
The idea is that the community decides what posts should be submitted into the game so it could be plagiarism, spam, bot abuse or anything really. The person who submits "the case" is stored in a database but never presented to the interface so anybody wanting vengeance won't know where to look (hence the challenge with paying rewards).
The idea that the players of "the game" decide whether something is abuse or not is the human element checking that the technology is working - along with the daily report which gets posted with the "strongest" abuse cases.
The downvotes could be linked to the result of each "game" but I think I'd prefer that to be handled elsewhere - e.g. the report could link to the abusive post and allow for the group downvoting.
This might be going in a slightly different direction, but I believe you can encrypt payment messages on the chain, so one approach to anonymizing would be to "ante" an amount of liquid STEEM or SBD along with the message about the post details, then once a final determination is made about a post a bot would split the amount that was bet on it among everyone who voted "correctly". People inclined to vengefully strike out at those who caused the downvote on them would be able to see who was participating in general but not which posts they were lodging opinions about since that would all be part of the encrypted message.
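The payout step of that "ante" scheme could be sketched as below. This is a rough illustration under the commenter's assumptions, not a real implementation: the data shapes and function names are invented, and in the actual proposal the bet details would travel as encrypted transfer memos on the chain.

```python
# Sketch of splitting the pooled ante among everyone who voted
# in line with the final determination on a post.

def split_ante(pot: float, votes: dict[str, str], verdict: str) -> dict[str, float]:
    """Split the pot equally among users whose vote matched the verdict."""
    winners = [user for user, choice in votes.items() if choice == verdict]
    if not winners:
        return {}  # nobody voted "correctly"; the pot could roll over
    share = pot / len(winners)
    return {user: share for user in winners}

payouts = split_ante(
    3.0,
    {"alice": "abuse", "bob": "abuse", "carol": "non-abuse"},
    "abuse",
)
```

A real version would also need to decide what happens to the losers' stakes and how ties or "undecided" votes are treated.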
I'm hoping that we can find a way to avoid this. If I was the angry little man that I've got in mind, I'd simply look at the person who'd received the most rewards from the game and focus my attention on them. With the intention of slowly disillusioning the better players until they stop playing.
Hmm. I guess I see what you're saying, but I think it's going to be really hard to completely obscure participation -- it would be hard to have enough noise to obscure the signal of rewards going to individuals doing well.
Although, thinking about it, maybe you could obscure it by depositing the rewards to an exchange and then sometime later "withdrawing" those funds to the intended recipient. Would be hard to tell the difference between that and normal activity.
I think that would work - it would probably have to be done manually and could be done in such a way that it's done monthly... and perhaps every user is in a monthly league table - where only they can see their own rank. The top 3 or 5 or however many receive the rewards.
Looking forward to the new Steem frontend.
Me too 🤔
Thanks for the wave!
VgA👍
Me too. It shows us that developers believe in Steem.
VgA🙂
I've wondered for a while now... What's VgA?
Hello Gorilla
And you have no idea what’s VgA🙄?
Ok. Viele Grüße, Atego
Best regards, Atego or Many greetings Atego
VgA😉
Ah, very good. I understand now 🙂
Hiiii sir,
I'm giving my diary on steemforbetterlife but no one giving attention on my post.
Please support sir.
@alfazmalek
Well, if nothing else, everyone on the Steem blockchain is learning a little biology with Quorum Sensing 101. ;-)
One thing that I would add is that with this model, the person who reports the PAP should also get a reward if the consensus agrees that it is flag-worthy. That didn't come up earlier because I was thinking that the initial candidate PAPs would come from an automated crawler.
As to priorities, I'd say they're both important projects, but my personal opinion is that abuse-reduction is a more pressing problem than another front-end for authors and curators. If the game gets participation, doing it first might help to expand the userbase for eventual adoption of the reskin project.
Biology eh? Right up there with history as my least favourite subjects at school.
I agree and perhaps like the posting reward pool, I think that they should receive the bulk of the rewards.
The prioritisation appears to be split evenly down the middle and I wonder if I can work on them simultaneously.
It also crossed my mind that as the owner of any new front end, I could hide all of the content from users that people are fed up with seeing (whether that's plagiarists, spammers, other abusers) so that the new site is clean and hides all of the crap that people get annoyed and demotivated by. This would be totally subjective with me as the sole arbitrator 🤣
True, and you can also add a revenue stream by letting people pay to have their posts placed in "prime locations" (like @steemchiller does with the upper-right corner of SteemWorld).
Another point in favor of splitting the submitter rewards and the evaluator rewards is that you want to reward the submitter only for posts that wind up being scored as abuse... otherwise, people will just submit everything.
OTOH, for the evaluators, you want to reward them for getting "the right answer", whether that means that it's determined to be abuse or not.
You could tune the ratios as things develop.
This has always been the way of it. There are soo many possibilities of things that could be done, but we don't have the right mix of funding and skills here to move very quickly.
In the end, something is better than nothing, no matter which one you choose to focus on.
Three other tangential thoughts:
I like your thinking. @danmaruschak had a good idea that rewards could go via an exchange, and then they'd be completely anonymised - the downside is that it would have to be done manually. So I'd probably work on a "monthly league table" approach of some sort, stored in the background. Users can see their own league position (as an incentive to increase their ranking) but nobody else's.
With PAPs that are simply crappy accusations, once a level of "this is crap" is reached, that PAP could have a lower priority to appear within the game so fewer people see it. Similarly, if somebody repeatedly posts crap (or continually chooses the "wrong" result), their reputation will be lower, which will make it harder for them to climb the league. I think that building in these kinds of controls should be straightforward too.
That is a good idea, although the need for manual processing basically means that it can't scale. I think it's possible to code bots to work with the exchanges, but I have no idea how difficult it would be. It's almost definitely not a short-term thing.
I had a similar thought in the past, that exchanges could set up staking and anonymized delegation services, which could be used by abuse fighters to disguise the source of their delegations.
The use of exchanges hadn't occurred to me in this context, though.
Finally, I have some time to read, understand and write to you, as I needed to understand what would be done and how it would work.
First of all, answering what should be given priority - reskinning the front page or creating the anti-abuse system - I would suggest redesigning the front page, as it is like a business card: people who saw our Steemit page in the past and may visit again in the future will be put off by seeing the same appearance. If it is possible to make the abusers invisible, then that could work while the work on the front page is ongoing.
I also like the idea of the reporting system. In general, I do not think reporting should earn much reward, as people will start to report everything in order to get the reward. There will be a lot of rubbish, creating a lot of investigation work for what may in some cases be nothing. You perhaps know how it is when the police announce a reward for "wanted" persons and the telephones do not stop ringing.
As for the people who will be reviewers, I hope they will have some experience and will check once again before they give their vote. In cases where there is clear evidence of abuse, maybe it should simply be treated as abuse without a vote.
It will be interesting to see how it will work and, of course, as I always say, it should happen behind the scenes and only the results with proven materials should be published.
Thank you for your engagement - that is tremendous work that requires a lot of time and, of course, "brain" :)
Sorry for taking so long to reply and thanks for taking the time to look (I thought I'd replied to my messages ages ago!)
I think that the mechanic of stopping people submitting everything could work with a "reputation" system where the people who submit "good" plagiarism (i.e. that which gets voted as "plagiarised") have the potential to earn more and similarly, submit enough information so that it's easy for the assessor to reach the "correct" conclusion.
I think though, as you also recommend, that the new front-end will continue to be the priority for the time being. I was thinking the same as you, in that I'll hide the users posting crap or posts containing certain tags (e.g. krsuccess) as I see fit, in the hope that "my version" of Steemit shows all of the good and hides the bad (because I won't have the power to stop it).
Bugger.