"You are all sheep": bots lead you

in #ai · 7 years ago


Humans make machinery. Humans are not in control of all machinery.

It may have started out innocently enough ("Let's make a bot to check spelling, or hunt for dead links"), but as time has passed, bots have slowly begun to take over the internet. They are programmed by us, sure, but they are also faster than us. That speed, along with the fact that some bots have been programmed to learn, makes them a potentially deadly threat to our species as well as to the freedom of information and thought online.

Take this 2011 video from Cornell University, where researchers hooked up two Cleverbot instances and made them talk to each other:

Cleverbot is old; it was made in the 90s. A little over 14 years after its creation, in 2011, its more sophisticated counterpart scored positively in a Turing test [source]. About 7 years have passed since then. Here's why that matters:

They learn by crowdsourcing

For several years, Cleverbot has learned by crowdsourcing information online. We learn by crowdsourcing too, just far more slowly.

For those of you familiar with computer science or software design, the first steps of creating a program or library without any prior foundation can be quite hard and time-consuming. Try creating a hello-world program in assembly. Not so obvious if you come from a higher-level language, right? But once the architecture is in place, this becomes much easier. Try creating a program with the same output in C or C++. Much easier. Give Python a shot. Easier still, and you can build some pretty robust stuff that would take a lot of code to write in C (and a few books' worth if you tried it in assembly).
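To make the contrast concrete, here is a minimal Python sketch using only the standard library. Fetching a web page takes a handful of lines here; the C equivalent means hand-rolled sockets and buffers, and the assembly version is a small project in itself.

```python
# Minimal sketch: fetch a web page with nothing but the standard library.
# The same task in C means manual sockets, buffers, and error handling.
from urllib.request import urlopen

with urlopen("https://example.com") as response:
    print(response.status)             # HTTP status code, e.g. 200
    print(response.read(80).decode())  # first 80 bytes of the page
```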

Point being: Cleverbot grew a foundation out of crowdsourcing. That foundation could power an even more capable bot, if coders knew what they were doing, or if they wanted to duplicate and modify Cleverbot's approach.

Why does this matter? Because as that foundation becomes easier to grab hold of, creating sophisticated bots becomes easier too, even for your average mouth-breather.

Groupthink, politics, and popular opinion

Since 2011, bots have expanded well past 1-on-1 chats. They've become easy to program in a rudimentary way (Slack bots, Reddit bots, Steemit bots, etc.; see the sketch below). Making bots easy to program is dangerous, because they can be used to spread ideas faster. If the ideas are positive and beneficial for our ethics and society, then that's great.

If they're divisive, negative, and immoral, but catchy, then having them spread is decidedly not great. Con artists aren't able to trick you because their arguments are correct; they convince you because their arguments are tricky.
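To give a sense of how low the bar already is, here is a rough sketch of a rudimentary Reddit reply bot in Python. It assumes the third-party praw library, and the credentials and trigger word below are placeholders, not real values; the point is that a canned talking point can be spread in about a dozen lines.

```python
# Rough sketch of a rudimentary reply bot built on the praw library.
# All credentials below are placeholders; you'd register your own app.
import praw

reddit = praw.Reddit(
    client_id="YOUR_ID",
    client_secret="YOUR_SECRET",
    username="your_bot_account",
    password="YOUR_PASSWORD",
    user_agent="demo-bot/0.1",
)

# Watch new comments in a subreddit and push a canned talking point
# whenever a trigger word appears. That's the whole "idea spreader".
for comment in reddit.subreddit("test").stream.comments(skip_existing=True):
    if "keyword" in comment.body.lower():
        comment.reply("Canned talking point goes here.")
```

Swap the canned string for anything on the list below and you have a propaganda machine.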

As bots become more powerful and easier to program, any idiot with a rudimentary knowledge of cut and paste will be able to create a bot that works towards the following:

  • Spamming a political ideology [Source]
  • Supporting a mega corporation's interests [Remember Twitter Firehose?]
  • Pushing for an oppressive government regime [Source]
  • Suppressing dissenting opinions [Information PDF]
  • Stock manipulation (automated pump-and-dump groups; think StockTwits, only smarter)
  • Cryptocurrency manipulation (automated pump-and-dumps)
  • Targeted harassment or protection of an individual or group [Source]
  • Setting up honeypots to drain user time [Source]

And that's just what the social bots can do. We won't even get into web crawlers (spiders) and what they could do with that kind of high-speed access to information.

Fighting the future

For non-coders, the only way to fight back against this is to be smarter.

Bots are already attempting to sway your opinion on forums across the web (some are benevolent, others malicious in intent). The recent French election is a good example: 5% of the accounts posting about MacronGate were responsible for 40% of the MacronGate tweets [Source]. Bots aren't tied to one political party, either; in the 2016 US presidential election, bots fought on both sides of the primaries.

When you read someone's argument online, constantly ask why they are making their argument, what counterarguments could be raised (play devil's advocate), and what logical fallacies might be in use to sway you one way. Take a few seconds to think before you upvote anything; you could be upvoting a social engineer.

If you code, there are a few ways that you can try to save the future from automated hell.

Bots already fight other bots, depending on how they're coded. Perhaps you could contribute to the digital arms race by designing systems that catch and occupy bots, or neutralize them so they don't poison the natural ecosystem of a forum or site. This is no easy task, though. Twitter bots attempt to evade detection with sleep-wake cycles and posting limits that mimic a normal human's behavior. If they're part of a botnet, it's easy for one operator to push thousands of tweets from multiple accounts to make an opinion appear more popular. Badly coded botnets give themselves away with word-for-word mirrored posts across separate accounts, but the more sophisticated bots find their way around this.
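As a starting point, here is a minimal sketch of that "word-for-word mirror" check, using only Python's standard library and made-up sample posts. It flags pairs of posts from different accounts whose text is nearly identical; real detectors would add timing analysis and account-graph signals, but this already catches the badly coded botnets.

```python
# Minimal sketch: flag near-duplicate posts from different accounts,
# the signature of a badly coded botnet amplifying a single message.
from difflib import SequenceMatcher
from itertools import combinations

posts = [
    ("alice",   "Candidate X is the only honest choice this year!"),
    ("bot_001", "Candidate X is the only honest choice this year!!"),
    ("bot_002", "candidate x is the only honest choice this year"),
    ("bob",     "Anyone know a good pizza place downtown?"),
]

THRESHOLD = 0.9  # similarity ratio above which a pair looks mirrored

for (user_a, text_a), (user_b, text_b) in combinations(posts, 2):
    if user_a == user_b:
        continue  # the same account repeating itself isn't a botnet
    ratio = SequenceMatcher(None, text_a.lower(), text_b.lower()).ratio()
    if ratio >= THRESHOLD:
        print(f"Suspicious pair ({ratio:.2f}): {user_a} / {user_b}")
```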

My own personal opinion? We're in trouble unless a lot of smart people decide to do the right thing. Once bots overwhelm the internet, the only way to get away from it all will be to turn off your computer. Otherwise, going online will be pretty much the same as watching one commercial after the next, with a 30-second break here or there to ingest some original content.

I fear for the future.
