Why a robot won't steal your job yet
Madaline was the first.
Back in 1959 she used her impressive intellect to solve a previously intractable problem: echoes on telephone lines. At the time, long-distance calls were often ruined by the sound of the caller’s own voice bouncing back at them every time they spoke.
She fixed the issue by recognising when an incoming signal was the same as the one going out, and electronically deleting it. The solution was so elegant, it’s still used today. Of course, she wasn’t human – she was a system of Multiple ADAptive LINear Elements, or Madaline for short. This was the first time artificial intelligence was used in the workplace.
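To give a flavour of the kind of trick Madaline pioneered, here is a minimal sketch of adaptive echo cancellation in Python: a small adaptive filter learns to predict the echo of the outgoing signal and subtracts it from the incoming one. The signals, filter length and learning rate are all invented for illustration; this shows the general principle behind adaptive linear elements, not Madaline’s actual 1959 circuitry.

```python
import numpy as np

# Synthetic demo: an adaptive filter learns the echo of the outgoing
# signal and subtracts it from what comes back down the line.
rng = np.random.default_rng(0)
n = 5000
outgoing = rng.standard_normal(n)                 # the caller's own voice going out
echo_path = np.array([0.0, 0.6, 0.3, 0.1])        # hypothetical echo response of the line
incoming = np.convolve(outgoing, echo_path)[:n]   # the echo heard back (far-end voice omitted)
incoming += 0.01 * rng.standard_normal(n)         # a little line noise

taps = 8              # filter length
w = np.zeros(taps)    # adaptive filter weights
mu = 0.05             # learning rate
cancelled = np.zeros(n)

for t in range(taps, n):
    x = outgoing[t - taps:t][::-1]    # recent outgoing samples
    estimate = w @ x                  # the filter's guess at the echo
    error = incoming[t] - estimate    # what remains after subtraction
    cancelled[t] = error
    w += mu * error * x               # LMS update: adjust weights to shrink the error

print("echo power before:", round(float(np.mean(incoming ** 2)), 4))
print("echo power after: ", round(float(np.mean(cancelled[taps:] ** 2)), 4))
```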
Today it’s widely accepted that brainy computers are coming for our jobs. They’ll have finished your entire weekly workload before you’ve had your morning toast – and they don’t need coffee breaks, pension funds, or even sleep. Although many jobs will be automated in the future, in the short term at least, this new breed of super-machines is more likely to be working alongside us.
Despite incredible feats in a variety of professions, including the ability to stop fraud before it happens and spot cancer more reliably than doctors, even the most advanced AI machines around today don’t have anything approaching general intelligence.
According to a 2017 McKinsey report, with current technology just 5% of jobs could eventually be fully automated, but 60% of occupations could see roughly a third of their tasks taken over by robots.
And it is important to remember that not all robots use artificial intelligence – some do, many don’t. The problem is that the very same deficiencies that stop these AI-powered robots from taking over the world will also make them extremely frustrating colleagues. From a tendency towards racism (Google’s photo-recognition software once mistook the faces of black people for gorillas) to a total inability to set their own goals, solve problems, or apply common sense, this new generation of workers lacks skills that even the most bone-headed humans would find easy.
So, before we gambol off into the sunset together, here’s what you will need to know about working with your new robot colleagues.
Rule one: Robots don’t think like humans
Around the time Madaline was revolutionising long-distance phone calls, the Hungarian-British philosopher Michael Polanyi was thinking hard about human intelligence. Polanyi realised that while some skills, such as using accurate grammar, can be easily broken down into rules and explained to others, many cannot.
Humans can perform these so-called tacit abilities without ever being aware of how. In Polanyi’s words, “we know more than we can tell”. This can include practical abilities such as riding a bike and kneading dough, as well as higher-level tasks. And alas, if we don’t know the rules, we can’t teach them to a computer. This is the Polanyi paradox.
Instead of trying to reverse-engineer human intelligence, computer scientists worked their way around this problem by developing AI to think in an entirely different way – thoughts driven by data instead.
“You might have thought that the way AI would work is that we would understand humans and then build AI exactly the same way,” says Rich Caruana, a Senior Researcher at Microsoft Research. “But it hasn't worked that way.” He gives the example of planes, which were invented long before we had a detailed understanding of flight in birds and therefore have different aerodynamics. And yet, today we have planes that can go higher and faster than any animal.
Like Madaline, many AI agents are “neural networks”, which means they use mathematical models to learn by analysing vast quantities of data. For example, Facebook trained its facial recognition software, DeepFace, on a set of some four million photos. By looking for patterns in images labelled as the same person, it eventually learned to match faces correctly around 97% of the time.
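The pattern-matching principle can be sketched in a few lines. The toy below trains a single artificial neuron on two clusters of synthetic, labelled feature vectors and checks how often it matches the labels. The data, dimensions and learning rate are invented, and it is orders of magnitude simpler than a real system like DeepFace, but the learning loop is the same in spirit: adjust weights until the predictions fit the labelled examples.

```python
import numpy as np

# Toy supervised learning: one artificial neuron fits synthetic labelled data.
rng = np.random.default_rng(1)
n = 1000
features = np.vstack([rng.normal(-1.0, 1.0, (n, 10)),    # examples labelled 0
                      rng.normal(+1.0, 1.0, (n, 10))])   # examples labelled 1
labels = np.concatenate([np.zeros(n), np.ones(n)])

w, b, lr = np.zeros(10), 0.0, 0.1
for _ in range(200):
    preds = 1.0 / (1.0 + np.exp(-(features @ w + b)))    # sigmoid activation
    grad = preds - labels                                 # cross-entropy gradient
    w -= lr * features.T @ grad / len(labels)
    b -= lr * grad.mean()

print(f"match rate on the training examples: {np.mean((preds > 0.5) == labels):.1%}")
```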
AI agents such as DeepFace are the rising stars of Silicon Valley, and they are already beating their creators at driving cars, voice recognition, translating text from one language to another and, of course, tagging photos. In the future they’re expected to infiltrate numerous fields, from healthcare to finance.
Rule two: Your new robot friends are not infallible. They make mistakes
But this data-driven approach means they can make spectacular blunders, such as that time a neural network concluded a 3D printed turtle was, in fact, a rifle. The programs can’t think conceptually, along the lines of “it has scales and a shell, so it could be a turtle”. Instead, they think in terms of patterns – in this case, visual patterns in pixels. Consequently, altering a single pixel in an image can tip the scales from a sensible answer to one that’s memorably weird.
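A deliberately contrived sketch of that fragility, with invented numbers: the “classifier” below is just a weighted sum over 64 pixels, so moving a single well-chosen pixel is enough to push the score across the decision boundary. Real attacks on deep networks (including the turtle-to-rifle one) are far more sophisticated, but they exploit the same fact that the model reasons about pixel patterns rather than concepts.

```python
import numpy as np

# Contrived "classifier": the score is just a weighted sum of 64 pixel values,
# so one well-chosen pixel can flip the decision.
rng = np.random.default_rng(2)
weights = rng.normal(0.0, 1.0, 64)    # the model's learned pixel weights
image = rng.normal(0.0, 0.1, 64)      # a made-up 8x8 "image", flattened

def label(score):
    return "turtle" if score > 0 else "rifle"

score = weights @ image
print("original:", round(float(score), 3), "->", label(score))

# Nudge the single most influential pixel just far enough to cross the boundary.
i = int(np.argmax(np.abs(weights)))
image[i] -= 1.1 * score / weights[i]   # overshoot the boundary by 10%

new_score = weights @ image
print("one pixel later:", round(float(new_score), 3), "->", label(new_score))
```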
It also means they don’t have any common sense – the ability, crucial in the workplace, to take existing knowledge and apply it to new situations.
A classic example is DeepMind’s AI; back in 2015 it was told to play the arcade game Pong until it got good. As you’d expect, it was only a matter of hours before it was beating human players and even pioneering entirely new ways to win. But to master the near-identical game Breakout, the AI had to start from scratch.
That said, developing transfer learning has become a large area of research; for instance, a single system called IMPALA shows positive knowledge transfer across 30 environments.
Rule three: Robots can’t explain why they’ve made a decision
The second problem with AI is a modern Polanyi paradox. Because we don’t fully understand how our own brains learn, we built AI to think like statisticians instead. The irony is that now we have very little idea of what goes on inside AI minds either. So, there are two sets of unknowns.
It’s usually called the ‘black box problem’, because though you know what data you fed in, and you see the results that come out, you don’t know how the box in front of you came to that conclusion. “So now we have two different kinds of intelligence that we don't really understand,” says Caruana.
Neural networks don’t have language skills, so they can’t explain to you what they’re doing or why. And like all AI, they don’t have any common sense.
A few decades ago, Caruana applied a neural network to some medical data. It included things like symptoms and their outcomes, and the intention was to calculate each patient’s risk of dying on any given day, so that doctors could take preventative action. It seemed to work well, until one night a grad student at the University of Pittsburgh noticed something odd. He was crunching the same data with a simpler algorithm, so he could read its decision-making logic, line by line. One of these rules read along the lines of “asthma is good for you if you have pneumonia”.
“We asked the doctors and they said ‘oh that’s bad, you want to fix that’,” says Caruana. Asthma is a serious risk factor for developing pneumonia, since they both affect the lungs. They’ll never know for sure why the machine learnt this rule, but one theory is that when patients with a history of asthma begin to get pneumonia, they get to the doctor, fast. This may be artificially bumping up their survival rates.
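The point about readable logic can be illustrated with a small, fully synthetic example. The sketch below invents patient records in which asthma sufferers happen to be treated sooner and therefore die less often, then fits a shallow decision tree whose rules can be printed and read line by line, exactly the sort of inspection the grad student was doing. The data, features and model here are made up for illustration; they are not Caruana’s dataset or algorithm.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

# Fully synthetic patient records: asthma patients (in this invented world)
# get to the doctor faster, so their recorded mortality is *lower*, even
# though asthma is genuinely dangerous with pneumonia.
rng = np.random.default_rng(4)
n = 5000
asthma = rng.integers(0, 2, n)
age = rng.integers(20, 90, n)
risk_of_death = 0.001 * age + 0.25 - 0.15 * asthma    # faster treatment masks the danger
died = rng.random(n) < risk_of_death

# A shallow tree is simple enough that its decision logic can be read directly.
model = DecisionTreeClassifier(max_depth=2, random_state=0)
model.fit(np.column_stack([asthma, age]), died)
print(export_text(model, feature_names=["asthma", "age"]))
# The printed rules will tend to route asthma = 1 towards the lower-risk branch,
# the spurious "asthma is good for you" pattern a human reviewer can spot and question.
```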
With increasing interest in using AI for the public good, many industry experts are growing concerned. This year, new European Union regulations come into force that will give individuals the right to an explanation about the logic behind AI decisions. Meanwhile, the US military’s research arm, the Defense Advanced Research Projects Agency (Darpa) is investing $70 million into a new program for explainable AI.
“Recently there’s been an order of magnitude improvement in how accurate these systems can be,” says David Gunning, who is managing the project at Darpa. “But the price we’re paying for that is these systems are so opaque and so complex, we don’t know why, you know, it’s recommending a certain item or why it’s making a move in a game.”
Rule four: Robots may be biased
There’s growing concern that some algorithms may be concealing accidental biases, such as sexism or racism. For example, a software program tasked with advising whether a convicted criminal is likely to reoffend was recently revealed to be twice as harsh on black people.
It’s all down to how the algorithms are trained. If the data they’re fed is watertight, their decisions are highly likely to be correct. But often there are human biases already embedded in that data. One striking example is easily accessible on Google Translate. As a research scientist pointed out on the publishing platform Medium last year, if you translate “He is a nurse. She is a doctor” into Hungarian, and then back into English, the algorithm will spit out the opposite sentence: “She is a nurse. He is a doctor”.
The algorithm has been trained on text from about a trillion webpages. But all it can do is find patterns, such as that doctors are more likely to be male and nurses are more likely to be female.
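A stripped-down illustration of that frequency effect, with made-up counts: imagine a “translator” that has to pick an English pronoun for a genderless source pronoun, and simply chooses whichever pronoun it saw most often next to that profession in its training text. This is only a caricature of the statistical mechanism, not how Google Translate is actually built.

```python
from collections import Counter

# Invented co-occurrence counts standing in for patterns mined from web text.
pronouns_seen_with = {
    "doctor": Counter({"he": 840, "she": 310}),
    "nurse":  Counter({"he": 120, "she": 760}),
}

def pick_pronoun(profession):
    """Choose the pronoun most frequently seen with this profession."""
    return pronouns_seen_with[profession].most_common(1)[0][0]

# Translating a genderless pronoun back into English, frequency wins:
for job in ("doctor", "nurse"):
    print(f"{pick_pronoun(job).capitalize()} is a {job}.")
```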
Another way bias can sneak in is through weighting. Just like people, our AI co-workers will analyse data by “weighting” it – basically just deciding which parameters are more or less important. An algorithm may decide that a person’s postcode is relevant to their credit score – something that is already happening in the US – thereby discriminating against people from ethnic minorities, who tend to live in poorer neighbourhoods.
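Here is a sketch of how that kind of proxy weighting can arise, on entirely invented data: postcode correlates with income, income drives repayment, and the model below is never shown income at all. When a standard logistic regression is fitted, the postcode feature ends up carrying a large negative weight, quietly standing in for the missing information; where postcode also correlates with ethnicity, that weight becomes discrimination. The variables, numbers and thresholds are all hypothetical.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Invented credit data: postcode correlates with (unseen) income,
# and income drives whether a loan is repaid.
rng = np.random.default_rng(5)
n = 20000
poor_postcode = rng.integers(0, 2, n)                    # 1 = poorer neighbourhood
income = rng.normal(40 - 15 * poor_postcode, 8, n)       # correlated with postcode
existing_debt = rng.normal(10, 4, n)
repaid = (income - existing_debt + rng.normal(0, 5, n)) > 20

# The model only ever sees postcode and debt - never income.
X = np.column_stack([poor_postcode, existing_debt])
model = LogisticRegression().fit(X, repaid)

print("weight on postcode:", round(float(model.coef_[0][0]), 2))   # large and negative
print("weight on debt:    ", round(float(model.coef_[0][1]), 2))
```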
And this isn’t just about racism and sexism. There will also be biases that we would never have expected. The Nobel-prize winning economist Daniel Kahneman, who has spent a lifetime studying the irrational biases of the human mind, explains the problem well in an interview with the Freakonomics blog from 2011. “By their very nature, heuristic shortcuts will produce biases, and that is true for both humans and artificial intelligence, but the heuristics of AI are not necessarily the human ones.”
The robots are coming, and they’re going to change the future of work forever. But until they’re a bit more human-like, they’re going to need us by their sides. And incredibly, it seems like our silicon colleagues are going to make us look good.