"Cambridge Analytica wanted to radicalize society"



The Cambridge Analytica case also raises the question of how false reports spread so rapidly on Facebook. Digital expert Ben Scott has analyzed exactly that. He noticed that some pieces of information got a boost even though they had hardly been noticed before, for no apparent reason. Scott, who works for the think tank "Stiftung Neue Verantwortung" in Berlin, suspects that this inexplicable push is related to Facebook's advertising model.

So he examined this model together with his colleague Dipayan Ghosh. Both Scott and Ghosh used to work for the US government, Scott among other things as a digital adviser to then-Secretary of State Hillary Clinton; Ghosh later went to Facebook. In this interview, Scott explains how social networks contribute to dividing society and what Facebook must do now to prevent the next data scandal.

ME: Mr. Scott, what is the central insight of your analysis?

Ben Scott: The basic problem is that the advertising industry collects behavioral data on a massive scale and that this data can be misused for political advertising campaigns.

In what way?

For advertisers, it makes no difference whether they want to sell shoes or racism. The structure is the same: the collected data is used to sort users into groups, neatly separated by political views, education and similar characteristics. This system helps advertisers get people to do things they would not otherwise do.

But that, roughly simplified, is how every kind of advertising works.

The difference between social networks and traditional media lies in targeting, that is, in addressing specific groups directly. Classical advertising is aimed at everyone, or at least at a broad audience. And because you do not know exactly whom you will reach and do not want to put anyone off, you are more cautious. Advertising on social networks can be split into a hundred sub-groups, and none of them will notice what the others see.
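To make that mechanism concrete, here is a minimal sketch of segment-based ad delivery. All attribute names, segment keys and ad texts are hypothetical; the point is only that disjoint groups each see a different message and never see each other's.

```python
# Minimal sketch of segment-based ad delivery. Every attribute name,
# segment key and ad text below is a hypothetical illustration.

users = [
    {"id": 1, "politics": "left",  "education": "university"},
    {"id": 2, "politics": "right", "education": "high_school"},
    {"id": 3, "politics": "right", "education": "university"},
]

# Each disjoint segment gets its own message; no segment sees the others'.
creatives = {
    ("left", "university"): "Ad variant A",
    ("right", "high_school"): "Ad variant B",
    ("right", "university"): "Ad variant C",
}

for user in users:
    segment = (user["politics"], user["education"])
    print(f"user {user['id']} sees: {creatives.get(segment, 'generic ad')}")
```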

How easy is this system to exploit?

You do not have to be very smart at all. Facebook has invested a lot of money in algorithms designed to perfect this targeting. They make it easy for advertisers. In the past, you had to know exactly whom you wanted to address and state that when you placed an ad campaign on Facebook. Today it is enough to give a rough outline, and Facebook fills in the gaps.
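What "filling in the gaps" could look like, in toy form: given a rough seed audience, the platform finds other users whose behavioral profiles resemble it. This sketch is a stand-in for lookalike-style audience expansion under assumed features, not Facebook's actual algorithm.

```python
# Illustrative sketch of expanding a small seed audience to similar
# users by comparing feature vectors. A toy stand-in for lookalike
# techniques, not Facebook's real system.

def similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = (sum(x * x for x in a) ** 0.5) * (sum(y * y for y in b) ** 0.5)
    return dot / norm if norm else 0.0

# Hypothetical behavioral features, e.g. [page_likes, shares, clicks].
seed_audience = [[5.0, 2.0, 7.0], [6.0, 1.0, 8.0]]
candidates = {"anna": [5.5, 1.5, 7.5], "ben": [0.5, 9.0, 0.2]}

# Average the seed audience into a single profile vector.
centroid = [sum(col) / len(seed_audience) for col in zip(*seed_audience)]

# Keep candidates whose profile is close enough to the seed profile.
expanded = [name for name, vec in candidates.items()
            if similarity(centroid, vec) > 0.9]
print(expanded)  # only users resembling the seed audience remain
```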


For example, Facebook looks at whether campaigns from different groups have similarities.

That is one example. Facebook wants to keep people on the platform for as long as possible. In other words, if the ads you run are successful and get people to interact with them, you are rewarded: on the one hand you reach more people, and on the other you pay less for the next ad.
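A toy model of that reward loop, under assumptions: an ad's rank is its bid weighted by engagement, and the effective price per impression falls as engagement rises. This is a deliberately simplified illustration, not Facebook's real auction.

```python
# Simplified model of engagement-rewarded ad delivery. The formulas
# are illustrative assumptions, not Facebook's actual pricing.

def rank_score(bid, engagement_rate):
    """Ads that provoke more interaction rank higher for the same bid."""
    return bid * engagement_rate

def effective_price(bid, engagement_rate):
    """Higher engagement is 'rewarded' with a lower price per impression."""
    return bid / (1.0 + engagement_rate)

calm_ad = {"bid": 1.00, "engagement_rate": 0.02}
divisive_ad = {"bid": 1.00, "engagement_rate": 0.10}

for name, ad in [("calm", calm_ad), ("divisive", divisive_ad)]:
    print(name,
          round(rank_score(ad["bid"], ad["engagement_rate"]), 3),
          round(effective_price(ad["bid"], ad["engagement_rate"]), 3))
# For the same bid, the divisive ad wins more reach and pays less for it.
```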

What is the consequence?

Content that divides people gets more reach. So does content that entertains or polarizes. As a result, such posts are promoted more and more, so that people keep staying on the site.

What this can lead to, we see in the case of Cambridge Analytica. The company came up with a way to obtain data it could use to manipulate people and play parts of society off against each other.

Many people doubt that Cambridge Analytica's psychological games were actually successful.

Cambridge Analytica exploited divisions that already exist in society. So the company is not solely responsible for influencing political opinion.

Society in the US is deeply divided, and not just since Cambridge Analytica. Does it help to demonize the company?

It is true that society in the US has been polarized for many years. This is especially evident in the debate about cultural identities. Social networks contribute to it: they sort people into groups of the like-minded. You only come into contact with dissenters if you want to yell at them.

That is very simplistic.

But mostly accurate. To put it figuratively: Cambridge Analytica saw that the door was already open a crack and decided to push it wide open. Cambridge Analytica wanted to radicalize society.

You cannot blame only Cambridge Analytica and Facebook. Nor is the internet itself to blame. But technologies, business models and advertising models have been developed that help make things worse.

For years, Facebook was accused of hoarding too much data. Now it is accused of giving away too much data. Whatever the company does seems wrong.

That cannot be dismissed out of hand.

If Facebook called you and asked for advice, what would you tell them?

There are two ways to avert damage without fundamentally changing the underlying business model. First, political advertising must be clearly labeled. Who paid for it, how many people it reached, which ads are still running: all of that must be transparent. Then users would know that they are seeing an ad because they are, for example, white, around 50, university-educated and living in a wealthy neighborhood.
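What such a disclosure could look like as a data record, sketched under assumptions: all field names below are hypothetical, chosen only to mirror the transparency requirements Scott lists.

```python
# Sketch of a public disclosure record attached to every political ad.
# All field names are hypothetical illustrations of Scott's requirements.

from dataclasses import dataclass, field

@dataclass
class PoliticalAdDisclosure:
    sponsor: str             # who paid for the ad
    people_reached: int      # how many people saw it
    currently_running: bool  # whether it is still being shown
    targeting_criteria: dict = field(default_factory=dict)

ad = PoliticalAdDisclosure(
    sponsor="Example Campaign Committee",
    people_reached=120_000,
    currently_running=True,
    targeting_criteria={"age": "45-55", "education": "university",
                        "neighborhood_income": "high"},
)
print(ad)  # a user could inspect exactly why they were targeted
```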

The second way would be: Facebook prohibits the collection of data that can be misused in politically sensitive areas. Or, if it does not want to do that, it prohibits such data from being used for advertising in politically sensitive areas.
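A minimal sketch of that second option, assuming a hypothetical list of sensitive attributes: campaigns whose targeting criteria touch those attributes are simply rejected.

```python
# Sketch of blocking ads that target politically sensitive attributes.
# The attribute list is a hypothetical example.

SENSITIVE_ATTRIBUTES = {"politics", "religion", "ethnicity"}

def allow_campaign(targeting_criteria):
    """Reject any campaign that targets on sensitive attributes."""
    used = set(targeting_criteria) & SENSITIVE_ATTRIBUTES
    if used:
        return False, f"blocked: targets sensitive attributes {sorted(used)}"
    return True, "allowed"

print(allow_campaign({"age": "45-55", "politics": "right"}))
print(allow_campaign({"age": "45-55", "education": "university"}))
```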

Should Facebook delete the data in case of doubt?

Facebook should rather restrict access to the data. Not for ordinary advertising, but definitely for political advertising.
