
2018.6.17 BBC:“柔情”机器人:人类怎么对机器人动了真感情

How humans bond with robot colleagues

“柔情”机器人:人类怎么对机器人动了真感情

If you had visited Taji, Iraq in 2013 – well, you might have seen something peculiar. The site lies an hour north of Baghdad and is home to a US military base, with dusty floors and formidable concrete walls. It is in this brutal environment that, following a lethal explosion, a group of soldiers tenderly remembered their fallen comrade. He just so happened to be a robot.

你去过伊拉克塔吉市(Taji, Iraq)吗?塔吉位于巴格达(Baghdad)以北车程一小时的地方。2013年时,这里发生了件奇事。塔吉当地有一个美军基地,泥地上尘土飞扬,混凝土墙壁坚实厚重。一场致命的爆炸过后,一群士兵在如此残酷的环境下充满柔情地追悼他们牺牲的战友,一个机器人。

To all who knew him, this brave hero was affectionately nicknamed Boomer. He had saved many lives during his service, by going ahead of the team to search for lurking bombs that had been laid by the enemy. At his funeral, Boomer was decorated with two medals, the prestigious Purple Heart and Bronze Star, and his metallic remains were laid to rest with a 21-gun salute.

这名勇敢的英雄被熟知他的人亲切地称为“隆隆”(Boomer)。他总是冲在队伍前面搜寻敌人埋下的炸弹,在役期间拯救了许多生命。在他的葬礼上,隆隆被授予两枚勋章,一枚是著名的紫心勋章(Purple Heart,美国军方的荣誉奖章,标志着勇敢无畏和自我牺牲精神),另一枚则是铜星勋章(Bronze Star Medal,缩写为BSM,美军跨军种通用勋奖,用于表彰“英勇或富有功绩的成绩或服务”)。隆隆的金属残骸也在21响礼炮声中荣誉下葬。

Boomer was a MARCbot, a military robot that looks a bit like a toy truck with a long neck, on which a camera is mounted. They’re relatively affordable for robots – they’re each about $19,000, or £14,000 – and not particularly difficult to replace. And yet, this team of soldiers had bonded with theirs. When he died, they mourned him like they would a beloved pet.

隆隆是一台多功能灵活遥控机器人(MARCbot)。这种军用机器人长得像个玩具卡车,长长的脖子上装着个摄像头。MARCbot的价格在机器人中并不算昂贵,每台约1.9万美元(合1.4万英镑),也并非难以替代。可是这班士兵与他们的隆隆情谊深厚,隆隆牺牲后,战友们像悼念心爱的宠物一样悼念他。

Fast-forward a few years and this story isn’t as unusual as you might think. In January 2017, workers at CBC, the Canadian Broadcasting Corporation, threw a retirement party for five mail robots. Rasputin, Basher, Move It or Lose It, Maze Mobile and Mom had been pacing the company’s hallways for 25 years – delivering employee mail, making cute noises and regularly bumping into people.

几年后,此类故事已称不上什么奇闻异事。2017年1月,加拿大广播公司(CBC, Canadian Broadcasting Corporation)的员工为五名邮递机器人举行了一场温馨的退休派对。五位名为拉斯普京(Rasputin)、大锤(Basher)、拿丢丢(Move it or Lose it)、移动迷宫(Maze Mobile)和妈妈咪(Mom)的机器人过去25年里在公司走廊上来去穿梭,为员工派送信件邮包,偶尔发些可爱的小噪音,时常撞上行人。

There was cake. There were balloons. There was a nostalgic farewell video. There was even a leaving card with comments like “Thanks for making every day memorable” and “Beep! Beep! Beep!” The robots will likely spend their final years relaxing at one of the many museums that have requested them.

派对上有蛋糕、气球,还放映着怀旧的告别视频,员工们甚至给机器人送上亲手写的送别卡,上面写着“感谢你们让每一天都值得纪念”“嘟!嘟!嘟!”。这些机器人很可能会在众多有意接收他们的博物馆中的一间安享晚年。

Though they’re often portrayed as calculating job-stealers, it seems that there’s another side to the rise of the robots. From adorably clumsy office androids to precocious factory robots, we can’t help bonding with the machinery we work with. We feel sorry for our non-human colleagues when things go wrong, project personalities onto them, give them names and even debate over their gender. One medical robot-in-training, Sophia, has been granted citizenship of Saudi Arabia.

尽管机器人常被描绘成工于算计的“饭碗抢夺者”,但机器人的兴起似乎还有另一面。从可爱笨拙的办公室机器人到早慧的工厂机器人,人类总会不自觉地与共事的机器建立感情。这些非人类同事出了问题,我们会替他们难过;我们还给他们赋予性格特征、取名字,甚至争论他们的性别。一位在训的医疗机器人“索菲亚”还被授予了沙特阿拉伯公民身份。

Not all collaborative robots, or “cobots”, were designed to be likeable. Many are just rectangular boxes that lack faces, the ability to speak, or any artificial intelligence. Why do we care about them? And what does it mean for the future of work?

并非所有协作型机器人(即“cobots”)生来都讨人喜欢。很多就是个长方体盒子,既没有脸不会说话,也谈不上人工智能。可为何人类会关心他们?这对我们未来的工作又意味着什么?

“When I first got this particular job, one of my colleagues had actually helped to design one of the robots I worked with,” says Olivia Osborne, a scientist specialising in nanotechnology at the University of California, Los Angeles (UCLA). “And he was the one who said, ‘Oh, this one’s got a mind of its own. It’s called Zelda’.”

奥斯本(Olivia Osborne)是加州大学洛杉矶分校(University of California, Los Angeles)的一名纳米技术科学家,她说:“我的一位同事参与设计了与我共事的摄影机器人塞尔达(Zelda)。我刚开始做这项工作的时候,他对我说‘噢,这是个有自己的想法的机器人’。”

Zelda’s job was to take photos of zebrafish embryos. These images could then be analysed by Osborne, who was studying the effects of toxic nanoparticles on their ability to develop normally. “You’re in a room literally just with machinery, and you start to get attached to it. You kind of feel sorry for it, because it’s not getting anything, apart from electricity, right?”

塞尔达的职责是拍摄斑马鱼胚胎的照片,奥斯本则利用这些照片分析有毒纳米颗粒对胚胎正常发育的影响。奥斯本说:“当工作间里真的只有你和机器时,你慢慢就会对它产生感情,甚至替它难过。因为除了电,它什么都得不到,不是吗?”

At the heart of all these unlikely friendships is the natural human tendency to personify all kinds of entities, including animals, plants, gods, the weather, and inanimate objects. At one end of the scale, this can lead to comparisons between peppers and politicians. At the other, it can lead to videos of polar bears petting dogs going viral.

这些看似不可能的友谊产生的根源是人类天性就倾向于把一切物体人格化,从普通的动植物,到神明、天气等,乃至无生命物体;浅显的例子像是政客有时被比作辣椒,更有甚者则是社交网站上北极熊宠溺小狗视频的迅速蹿红。

In the right conditions, we’ll even ascribe personalities to rocks. In one experiment that won the Ig Nobel Prize (a humorous award given to silly or strange achievements in science), students were shown pictures of rocks and asked which personality traits applied to them. To the researchers’ surprise, they had no trouble with this, and each rock had a distinct personality: apparently some rocks were like “a big New York type businessman, rich, smooth, maybe a little shady”, while others were “a hippie”.

在一些情况下,我们甚至不吝于给石头赋予人性。2016年的一项实验获得了搞笑诺贝尔奖(Ig Nobel Prize,对诺贝尔奖的有趣模仿,名称由ignoble“不光彩的”和Nobel Prize“诺贝尔奖”结合而成,专门授予无聊或奇特的科学成果)。实验中,研究人员向一些学生展示三块石头的照片,并请他们判断哪些性格特征适用于这些石头。出乎研究者预料的是,受访者全都顺利给出了答案,每块石头都有鲜明的个性:有的说“这块像个纽约大商人,有钱、圆滑,可能还有点狡猾”,有的说“那块像个嬉皮士”。

But when it comes to robots, this behaviour reaches spectacular new heights. In many cases, we aren’t just humanising them, but empathising with them. Last year, the internet was alight with concern for a “suicidal” security robot that had “drowned” itself in the pond at a shopping centre in Georgetown, Washington DC. Steve the Knightscope security robot, who looks like a cross between a Doctor Who Dalek and Star Wars’ R2-D2, was left in a critical condition after stumbling on some steps. Its fellow co-workers rushed to its aid and dramatic footage of its rescue was captured by crowds of onlookers.

当对象是机器人时,这种行为就上升到了新高度。很多时候,我们不仅将机器人当作人,还会对他们产生共情。去年有段时间,网络舆论关注的热点之一是一个“自杀”的安保机器人:它在美国华盛顿特区乔治城一家购物中心的水池里“自溺”身亡了。这位骑士视界(Knightscope)公司制造的安保机器人史蒂夫(Steve),长得神似英剧《神秘博士》(Doctor Who)里的戴立克(Dalek)和电影《星球大战》(Star Wars)里的R2-D2的结合体。当时它跌下台阶后情况危殆,同事们立即冲过去搭救,这戏剧性的一幕被围观群众拍了下来。

In fact, our empathy for them has some striking parallels with our feelings for fellow humans. In 2013, a team of scientists at Germany’s University of Duisburg-Essen scanned the brains of volunteers while they watched people being affectionate or violent towards a human, a robot, and an inanimate object. One staged scenario involved putting the victim’s “head” into a plastic bag and strangling them, while others included hugs and massages.

事实上,人们对机器人的同理心与对同胞的感情有着惊人的相似之处。2013年,德国杜伊斯堡-埃森大学(University of Duisburg-Essen)的科学家们做了个实验:他们请志愿者观看人们对人类、机器人和无生命物体做出亲密行为(如拥抱、按摩)或暴力行为(如把受害者的“头”套进塑料袋里勒住等),并同时扫描志愿者的大脑。

Though they didn’t feel quite as bad for the robots as they did for people, the same brain areas were active in the volunteers while watching the robots and the humans being tortured. In another study, the same team found that we have a tangible physical reaction to watching robots being harmed.

结果显示,志愿者在看到机器人受折磨时,大脑中活跃的区域与看到人类受折磨时相同,只是他们对机器人的难过程度不及对人类。该团队的另一项研究还发现,我们看到机器人受伤害时,身体会产生切实的反应。

If we’re going to go a step beyond simple empathy and actually befriend our robot colleagues, it’s thought that we need one of three things to happen. First of all, we need a motive.

不过,若要越过简单的同理心,真正与机器人同事结为朋友,据认为需要满足以下三个条件之一。首先,我们需要一个动机。

Throughout human history, we have anointed cannons, swords, boats – and, more recently, equipment such as cars, wind turbines, and robots – with human names. “A lot of this sort of usage goes back to people’s way of trying to relate to huge machines that are very difficult to handle, very treacherous,” says Peter McClure, who studies naming at the University of Nottingham. “They sort of christen them or nickname them, in order to exercise some sort of control over them. A sort of prophylactic thing, you know?”

纵观人类历史,人类一直在给各种事物起人名——古代的大炮、刀剑、船只,再到近现代的汽车、风力发电机、机器人等等。英国诺丁汉大学(University of Nottingham)研究命名的学者麦克卢尔(Peter McClure)说:“这种用法大多源于人们试图与那些难以驾驭、变幻莫测的庞大机器建立关系。给它们授名或取个昵称,是为了对它们施加某种控制,你懂的,有点防患于未然的意味。”

One example of this led to the coining of the word gun. Back in 12th-century England, “Gunnild” was a popular name for a woman. A couple of hundred years later, this Old Norse name – which meant “battle” – was given to a mechanical crossbow that defended Windsor Castle, the Lady Gunnilda. As its usage evolved, it was shortened to “gun” and given to hand-held firearms, which were themselves extremely dangerous and unpredictable.

以英文单词“gun”(枪)的由来为例。12世纪时,“Gunnild”(贡希尔德)是英格兰地区常见的女性名字。几百年后,这个意为“战争”的古北欧名字被用来称呼一架保卫温莎城堡的机械弩——“根尼尔达夫人”(Lady Gunnilda)。这种用法其后继续演变,缩略成了“gun”,用来指称同样非常危险且难以预料的手持火器。

Indeed, McClure has noticed that machines tend to be given female names, possibly for sexist reasons. “I suspect that there’s some attempt to exercise male control over the female,” he says. In the modern world, this might explain the tradition of naming tunnel-boring machines – giant, 150-metre long monstrosities with several rows of sharp teeth – after women. The £14.8 billion ($19.6 billion) Crossrail project was dug by Ada, Phyllis, Victoria, Elizabeth, Mary, Sophia, Jessica and Ellie.

麦克卢尔还发现,人们大多会给机器取个女性名字,兴许有性别歧视的缘由。他说:“我猜想这其中有男性想要支配女性的意味。”这或许能解释当今给隧道掘进机取女性名字的传统——那可是体型巨大、长达150米、长着几排利齿的庞然大物。英国耗资148亿英镑(合196亿美元)的横贯铁路(Crossrail)项目中,负责挖掘工作的是艾达(Ada)、菲莉斯(Phyllis)、维多利亚(Victoria)、伊丽莎白(Elizabeth)、玛丽(Mary)、索菲亚(Sophia)、洁西卡(Jessica)和埃莉(Ellie),通通都是女性名。

At the extreme end, this tendency to humanise the machines we rely on may lead to real emotional connections, as it did with Boomer in Iraq. “People anthropomorphise – get inside the heads of – objects constantly, and considering an object like a robot as human if that robot has an integral part in your survival is not that surprising,” says Lasana Harris, a psychologist at University College London.

人类将所依赖的机器人格化的倾向若是发展到极致,则可能催生真切的情感联结,就如伊拉克的隆隆和他的战友们。伦敦大学学院(University College London)的心理学家哈里斯(Lasana Harris)说:“人们无时无刻不在将物品人格化——代入它们的‘头脑’。假如某个机器人对你的生存不可或缺,把它当作人也就不足为奇了。”

Just like with other humans, it seems that these connections are strengthened by shared trauma. Mourning lost military robots isn’t at all unusual; on one occasion, the manufacturers were reportedly sent a box of robot remains, along with a note saying “can you fix it?”

人与人之间共同的创伤经历会加深情感联结,人与机器人之间似乎也是如此。像悼念隆隆那样哀悼逝去的军用机器人绝非个案;据报道,制造商还曾收到一盒机器人残骸,里面附着一张纸条:“你们能修好它吗?”

But another common motive is loneliness. Way back in our evolutionary past, seeking out other people to bond with was vital to our survival. This is thought to be the reason that social isolation or rejections, such as break-ups, often manifest themselves as physical pain; our bodies will do everything in their power to encourage us to make friends and keep them.

另一个常见的动机则是孤独。在人类进化的漫长历程中,寻求与他人的联结对生存至关重要。据认为,这正是社会隔离或排斥(譬如分手)常以肉体疼痛的形式表现出来的原因:我们的身体在竭尽所能地催促我们结交朋友、维系情谊。

When humans are unavailable, our social needs must be met elsewhere. This may be a volleyball on a desert island, or a robot in an empty lab. According to a report in Wired magazine, some people buy Roomba robotic vacuum cleaners for lonely relatives, to keep them company. One retired professor who lived alone considered hers as her companion.

当身边无人可依时,我们的社交需求就得通过别的途径来满足:它可能是荒岛上的一个排球,也可能是空荡荡的实验室里的一台机器人。据《连线》(Wired)杂志的一篇报道,有些人会给孤单的亲人购买伦巴(Roomba)扫地机器人作伴。一位独居的退休教授就把她的伦巴当作同伴。

Finally, there need to be some tangible similarities between the robot and a human, so that our imaginations have something to go on. This might be the headlights and cooling grille of a car, which look like a face, or the ungainly attempts of a robot trying to place a box on a desk, repeatedly failing, then falling over.

最后,机器人与人类之间还需有些实实在在的相似之处,我们的想象力才有所依凭。这可能是组合起来像一张人脸的车前灯和散热格栅,也可能是机器人笨手笨脚地往桌上摆箱子,屡试屡败,最后自己还摔倒在地。

“If the object is unpredictable in its behaviour, such as a car that won’t start, or exceedingly animate, behaving in a way that suggests self-propelled motion or agency, then it is more likely to be anthropomorphised,” says Harris. “These effects can increase if there are very few similar objects behaving this way, and if the object’s behaviour is observed in lots of different situations.”

心理学家哈里斯说:“如果一件物体的行为难以预测,譬如一辆发动不了的汽车,又或者它格外活跃,表现得像是有自主行动或自主意识,那它就更可能被人格化。倘若极少有类似物体出现同样的行为,而且这个物体的这种行为在许多不同情境下都能观察到,这种效应还会加强。”

Again, an example might be the Roomba, which research shows is easily personified – despite the fact that it’s just a black and white disc that makes beeping noises. In a 2010 study, actors were filmed while they pretended to be vacuum cleaners with a variety of personalities, such as “bold” or “careless”.

上文中被退休教授人格化的扫地机器人伦巴就是案例之一。研究表明,它虽然只是个发出"嘟嘟"声音的黑白色圆盘,但很容易被赋予人性。在2010年的一项研究中,研究人员请一群演员扮演性格迥异的吸尘器,他们中有的大胆,有的粗心。

Then these videos were used by scientists to program the cleaners to give them these traits. For example, a calm robot might make less noise. When a group of Dutch members of the public were asked to guess each robot’s personality, they were surprisingly accurate.

而后,科学家们参考这些表演视频给吸尘器编写程序,让机器人也具备相应的性格特征,例如性格沉稳的机器人噪声较小。之后,研究人员请一群荷兰公众猜测每台机器人的性格,结果他们猜得出奇地准。

Once an object has been humanised, our relationships with them are remarkably similar to those we have with other humans. And this is where things start to get dangerous.

当一件物品被赋予了人性,我们与它的关系就跟与其他人类的关系非常相近,而潜在的风险也随之而来。

For a start, we’re susceptible to the same psychological biases. Research shows that, just like people, robots are more likeable when they make mistakes. For example, participants in one study preferred the robots they were working with on a task when they violated social norms, by saying something odd, or malfunctioned, by providing faulty instructions.

首先,我们容易对机器人产生同样的心理偏见。研究表明,和人一样,机器人犯了错反而更讨人喜欢。例如在一项研究中,与被试合作完成任务的机器人若违反社交规范(说些奇怪的话),或出现故障(给出错误的指示),反而更招被试喜爱。

In one BBC documentary, there’s a revealing moment at the end where Li Yan, a migrant worker at an Alibaba packing centre in China, describes her feelings about the robots she works with. “I feel like the robots are like humans. They can have errors and emotions as well. They will need humans to pay attention to them and to monitor them.”

英国广播公司出品的一部纪录片结尾有个耐人寻味的片段:中国阿里巴巴公司一家打包中心的外来务工者李妍(音译)这样形容她对机器人工友的感觉:“我觉得机器人就跟人一样,也会犯错误也会有感情,它们需要人来关注它、监管它。”这或许也能解释为何人们更偏爱会犯错的机器人。

It wouldn’t be ideal if people formed stronger bonds with their robot colleagues when they messed up tasks or turned out to be rubbish at their jobs. Many hospitals have begun hiring robot nurses to deliver drugs to patients. Though they’re just boxes on wheels, they’re remarkably human – able to open doors and call elevators, and ask for help when they get stuck. But what if they delivered the wrong drug to a patient?

可如果机器人同事搞砸了任务,或者工作起来一团糟,人类与他们的感情越深反而越不是好事。目前很多医院开始“聘用”机器人护士给患者送药。这些护士虽说只是装了轮子的盒子,行事却非常像人:它们能自己开门、呼叫电梯,被卡住时还会求救。可要是它们给病人送错了药呢?

It’s easy to envisage a scenario where even the wholesome bonds between soldiers and bomb disposal robots could become a problem. From running into gunfire to braving IEDs, military history is littered with the stories of heroes who paid the ultimate price to save their friends.

不难设想,即便是士兵与拆弹机器人之间这种温情的联结,也可能成为隐患。毕竟,从冲入枪林弹雨到直面简易爆炸装置,军事史上为搭救战友而付出生命代价的英雄故事比比皆是。

If soldiers view their robot colleagues as people, this might mean feeling that they’re due the same protection from harm. After all, the opposite process – dehumanising – has been used for thousands of years to justify violence towards enemy troops. For example, during the Rwandan genocide, persecuted tribes were often compared to animals.

如果士兵把机器人战友当成人,就可能觉得机器人也应受到同样的保护、免受伤害。毕竟,与“人格化”相反的“去人性化”千百年来一直被用来为暴力对待敌军开脱。例如在卢旺达种族大屠杀期间,受迫害的部族就常被比作动物。

There’s already been talk of the possibility that humans could fall in love with robots, which would open up another set of sticky ethical problems. The EU is currently considering whether the most sophisticated robots, those with artificial intelligence, should be deemed “electronic persons” and granted certain human rights.

已经有人在讨论人类可能爱上机器人,而这会引发另一系列棘手的伦理问题。欧盟目前正在考虑,是否应把拥有人工智能的最尖端机器人视为“电子人”,并赋予其一定的人权。

But mostly, bonding with our robot colleagues is surely a good thing. Osborne was actually given the option of a human lab assistant, but preferred to work with Zelda, who she was less likely to argue with. “I had days when I was like this is awesome, we’re a great team,” she says. The robot was human enough to bond with, but also had some decidedly superhuman qualities, such as correcting Osborne’s mistakes.

不过综合来看,与机器人同事结谊多半是一桩好事。奥斯本其实可以选择一位人类实验室助手,但她更愿意跟塞尔达共事,因为跟塞尔达不容易起争执。她说:“有些日子我真的觉得,这太棒了,我们是一支很棒的团队。”塞尔达一方面有足够的人类特征,让奥斯本能与之产生感情;另一方面又有些明显超越人类的本领,比如纠正奥斯本的错误。

“Sometimes I’d put in a wavelength [of light] that I wanted it to take a photo with – say I wanted the red wavelength – and it would be like ‘hmm, I don’t think you want that wavelength! I think you want 444 nanometres, or something’ and you’re like ‘I do want that, yes…’,” she says. “I could have gone through a whole ream of wavelengths and wondered why it wasn’t working. That’s something people need to realise – they’re very clever.”

奥斯本举例说:“偶尔我会输入一个想用来拍照的(光的)波长,比方说红色光波,塞尔达就会说‘嗯,我觉得你想要的不是这个波长!我猜你想要的是444纳米之类的’,而我发现‘对,我就是想用那个……’。若不是他,我可能把所有波长都试一遍,还不明白问题出在哪儿。人们需要意识到,这些机器人非常聪明。”

As robots enter the workplace, people are beginning to realise that they can be valuable allies – with many of the benefits of a companion and co-worker, but less of the politics. Boomer’s funeral may have been the first for a robot, but it surely won’t be the last. RIP.

随着机器人走进职场,人们开始意识到,他们可以成为宝贵的盟友:既有同伴和同事的诸多好处,又少了些办公室里的是是非非。隆隆的葬礼或许是机器人的第一场葬礼,但肯定不会是最后一场。愿逝者安息。
