'Bots used to bias online political chats' - Have you witnessed one?
If you've been chatting about politics on social media recently, there's a good chance you've been part of a conversation that was manipulated by bots, researchers say.
The Oxford Internet Institute (OII) has studied such discussions related to nine places - US, Russia, Ukraine, Germany, Canada, China, Taiwan, Brazil and Poland - on platforms including Twitter and Facebook.
It claims that in every election, political crisis and national security-related discussion it looked at, it found at least one instance of social media opinion being manipulated.
Bots in propaganda
Bots - programs that perform simple, repetitive tasks - are integral to what the OII calls "computational propaganda": instances of people deliberately distributing misleading information on social media by various means.

Bots can communicate with people - retweeting fake news, for example - but they can also exploit social network algorithms to get a topic to trend. They can be fully or only partly automated. A single individual can use them to create the illusion of large-scale consensus. They can also be used to stifle critics by mobbing individuals or swamping hashtags.

The methods the OII used for identifying bots varied between the country studies. The institute has, however, been criticized in the past for labelling as "bots" social media accounts whose owners insisted they were nothing of the kind.
'Anyone can launch a bot on Twitter'
Bots are built by authoritarian governments, by corporate consultants who hire out their expertise, or by individuals with the know-how, says the OII.

"Because the Twitter API [application programming interface - the means by which one piece of software can talk to another] is open, anyone can launch a bot on Twitter," explained the project's director of research, Samuel Woolley.
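Woolley's point is easy to illustrate: the decision logic at the heart of a simple amplification bot fits in a few lines. The sketch below is hypothetical and runs with no network access - the `Post` class and `select_for_repost` function are invented stand-ins for what would, in a real bot, be calls to the Twitter API.

```python
# Minimal sketch of a retweet bot's decision loop, with no network access.
# In a real bot, `stream` would come from the Twitter API and the selected
# posts would be reposted through it; here everything is simulated.

from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str

def select_for_repost(incoming, keywords):
    """Return the posts an automated account would amplify:
    any post whose text contains one of the target keywords."""
    lowered = [k.lower() for k in keywords]
    return [p for p in incoming if any(k in p.text.lower() for k in lowered)]

stream = [
    Post("newsfeed", "Election results announced tonight"),
    Post("blogger", "Recipe: how to bake sourdough"),
    Post("outlet", "BREAKING: election fraud claims spread"),
]

amplified = select_for_repost(stream, ["election"])
print([p.author for p in amplified])  # ['newsfeed', 'outlet']
```

The same loop, pointed at a hashtag instead of a keyword and run across dozens of accounts, is what lets a single operator swamp a topic.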
While bot and other propagandistic behaviour was specific to the political context of each country, the study also identified several trends.

In every country, it said, civil society groups struggled to protect themselves against misinformation campaigns. And in authoritarian countries, it added, social media was one of the key ways the authorities had tried to retain control during political crises.
The frontline of disinformation
Computational propaganda has been particularly prevalent in Ukraine, the research suggests. There had been "significant Russian activity... to manipulate public opinion", the report said, adding that Ukraine had become "the frontline of numerous disinformation campaigns" since 2014.

The typical way this worked, it explained, was that a message would be placed in an article on an online news outlet or blog. This was possible, it said, "because a large number of Ukrainian online media... publish stories for money".

These stories would then be spread on social media via automated accounts and potentially picked up in turn by "opinion leaders" with large followings of their own. With enough attention, the message would ultimately be picked up by mainstream media, including TV channels.

The study provides an example related to the shooting down of Malaysia Airlines flight MH17 in 2014 to illustrate how such campaigns work.
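The chain the report describes - a planted story, automated reposts to manufacture a baseline of attention, then pickup by opinion leaders once the story looks popular - can be sketched as a toy simulation. Every number and rule below (the bot counts, the pickup threshold) is invented purely for illustration, not taken from the study.

```python
# Toy simulation of the amplification chain described in the report:
# one planted story, a pool of automated accounts reposting it, and
# "opinion leaders" who join in only once it already looks popular.
# All figures and the pickup rule are invented for illustration.

def run_campaign(n_bots, reposts_each, leader_followings):
    """Return total visible reposts after the bots post and any
    opinion leaders (each with a follower count) amplify in turn."""
    reposts = n_bots * reposts_each  # the manufactured baseline
    for followers in sorted(leader_followings):
        # invented rule: a leader amplifies only if the story's repost
        # count is at least a tenth of their own following
        if reposts >= followers // 10:
            reposts += followers
    return reposts

# 50 automated accounts posting 20 times each create a baseline of
# 1,000 reposts - enough to draw in the 5k- and 10k-follower leaders,
# but not the 200k one.
print(run_campaign(50, 20, [5_000, 10_000, 200_000]))  # 16000
```

The point of the toy model is that the manufactured baseline does the real work: without the bots (`run_campaign(0, 0, ...)`), no leader ever picks the story up.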
News source: www.bbc.com
I don't know if it is a bot, but I have seen other people post several screen captures of Twitter tweets that were exactly the same. Not a reTweet, but a Tweet with the same text. It could have been political marching orders.
Of course, there was that big scandal of Facebook adjusting timelines to show things that slanted towards Facebook's views. Twitter does it with the Trending items. So, technically, I guess those are bots as well.