
[TED] How to get empowered, not overpowered, by AI

 

After 13.8 billion years of cosmic history, our universe has woken up and become aware of itself. From a small blue planet, tiny, conscious parts of our universe have begun gazing out into the cosmos with telescopes, discovering something humbling. We've discovered that our universe is vastly grander than our ancestors imagined and that life seems to be an almost imperceptibly small perturbation on an otherwise dead universe. But we've also discovered something inspiring, which is that the technology we're developing has the potential to help life flourish like never before, not just for centuries but for billions of years, and not just on earth but throughout much of this amazing cosmos.

I think of the earliest life as "Life 1.0" because it was really dumb, like bacteria, unable to learn anything during its lifetime. I think of us humans as "Life 2.0" because we can learn, which we in nerdy, geek speak, might think of as installing new software into our brains, like languages and job skills. "Life 3.0," which can design not only its software but also its hardware, of course doesn't exist yet. But perhaps our technology has already made us "Life 2.1," with our artificial knees, pacemakers and cochlear implants. So let's take a closer look at our relationship with technology, OK?
As an example, the Apollo 11 moon mission was both successful and inspiring, showing that when we humans use technology wisely, we can accomplish things that our ancestors could only dream of. But there's an even more inspiring journey, propelled by something more powerful than rocket engines, where the passengers aren't just three astronauts but all of humanity. Let's talk about our collective journey into the future with artificial intelligence.

My friend Jaan Tallinn likes to point out that just as with rocketry, it's not enough to make our technology powerful. We also have to figure out, if we're going to be really ambitious, how to steer it and where we want to go with it. So let's talk about all three for artificial intelligence: the power, the steering and the destination.

Let's start with the power. I define intelligence very inclusively -- simply as our ability to accomplish complex goals, because I want to include both biological and artificial intelligence. And I want to avoid the silly carbon-chauvinism idea that you can only be smart if you're made of meat.

It's really amazing how the power of AI has grown recently. Just think about it. Not long ago, robots couldn't walk. Now, they can do backflips. Not long ago, we didn't have self-driving cars. Now, we have self-flying rockets. Not long ago, AI couldn't do face recognition. Now, AI can generate fake faces and simulate your face saying stuff that you never said.
Not long ago, AI couldn't beat us at the game of Go. Then, Google DeepMind's AlphaZero AI took 3,000 years of human Go games and Go wisdom, ignored it all and became the world's best player by just playing against itself. And the most impressive feat here wasn't that it crushed human gamers, but that it crushed human AI researchers who had spent decades handcrafting game-playing software. And AlphaZero crushed human AI researchers not just in Go but even at chess, which we have been working on since 1950.

So all this amazing recent progress in AI really begs the question: How far will it go? I like to think about this question in terms of this abstract landscape of tasks, where the elevation represents how hard it is for AI to do each task at human level, and the sea level represents what AI can do today. The sea level is rising as AI improves, so there's a kind of global warming going on here in the task landscape. And the obvious takeaway is to avoid careers at the waterfront --

(Laughter)

which will soon be automated and disrupted. But there's a much bigger question as well. How high will the water end up rising? Will it eventually rise to flood everything, matching human intelligence at all tasks? This is the definition of artificial general intelligence -- AGI, which has been the holy grail of AI research since its inception.
By this definition, people who say, "Ah, there will always be jobs that humans can do better than machines," are simply saying that we'll never get AGI. Sure, we might still choose to have some human jobs or to give humans income and purpose with our jobs, but AGI will in any case transform life as we know it, with humans no longer being the most intelligent. Now, if the water level does reach AGI, then further AI progress will be driven mainly not by humans but by AI, which means that there's a possibility that further AI progress could be way faster than the typical human research and development timescale of years, raising the controversial possibility of an intelligence explosion, where recursively self-improving AI rapidly leaves human intelligence far behind, creating what's known as superintelligence.

Alright, reality check: Are we going to get AGI any time soon? Some famous AI researchers, like Rodney Brooks, think it won't happen for hundreds of years. But others, like Google DeepMind founder Demis Hassabis, are more optimistic and are working to try to make it happen much sooner. And recent surveys have shown that most AI researchers actually share Demis's optimism, expecting that we will get AGI within decades, so within the lifetime of many of us, which begs the question -- and then what?
What do we want the role of humans to be if machines can do everything better and cheaper than us? The way I see it, we face a choice. One option is to be complacent. We can say, "Oh, let's just build machines that can do everything we can do and not worry about the consequences. Come on, if we build technology that makes all humans obsolete, what could possibly go wrong?"

(Laughter)

But I think that would be embarrassingly lame. I think we should be more ambitious -- in the spirit of TED. Let's envision a truly inspiring high-tech future and try to steer towards it.

This brings us to the second part of our rocket metaphor: the steering. We're making AI more powerful, but how can we steer towards a future where AI helps humanity flourish rather than flounder? To help with this, I cofounded the Future of Life Institute. It's a small nonprofit promoting beneficial technology use, and our goal is simply for the future of life to exist and to be as inspiring as possible. You know, I love technology. Technology is why today is better than the Stone Age. And I'm optimistic that we can create a really inspiring high-tech future ... if -- and this is a big if -- if we win the wisdom race -- the race between the growing power of our technology and the growing wisdom with which we manage it. But this is going to require a change of strategy because our old strategy has been learning from mistakes.
We invented fire, screwed up a bunch of times -- invented the fire extinguisher.

(Laughter)

We invented the car, screwed up a bunch of times -- invented the traffic light, the seat belt and the airbag, but with more powerful technology like nuclear weapons and AGI, learning from mistakes is a lousy strategy, don't you think?

(Laughter)

It's much better to be proactive rather than reactive; plan ahead and get things right the first time because that might be the only time we'll get. But it is funny because sometimes people tell me, "Max, shhh, don't talk like that. That's Luddite scaremongering." But it's not scaremongering. It's what we at MIT call safety engineering. Think about it: before NASA launched the Apollo 11 mission, they systematically thought through everything that could go wrong when you put people on top of explosive fuel tanks and launch them somewhere where no one could help them. And there was a lot that could go wrong. Was that scaremongering? No. That was precisely the safety engineering that ensured the success of the mission, and that is precisely the strategy I think we should take with AGI. Think through what can go wrong to make sure it goes right.

So in this spirit, we've organized conferences, bringing together leading AI researchers and other thinkers to discuss how to grow this wisdom we need to keep AI beneficial.
Our last conference was in Asilomar, California last year and produced this list of 23 principles which have since been signed by over 1,000 AI researchers and key industry leaders, and I want to tell you about three of these principles.

One is that we should avoid an arms race and lethal autonomous weapons. The idea here is that any science can be used for new ways of helping people or new ways of harming people. For example, biology and chemistry are much more likely to be used for new medicines or new cures than for new ways of killing people, because biologists and chemists pushed hard -- and successfully -- for bans on biological and chemical weapons. And in the same spirit, most AI researchers want to stigmatize and ban lethal autonomous weapons.

Another Asilomar AI principle is that we should mitigate AI-fueled income inequality. I think that if we can grow the economic pie dramatically with AI and we still can't figure out how to divide this pie so that everyone is better off, then shame on us.

(Applause)

Alright, now raise your hand if your computer has ever crashed.

(Laughter)

Wow, that's a lot of hands.
Well, then you'll appreciate this principle that we should invest much more in AI safety research, because as we put AI in charge of even more decisions and infrastructure, we need to figure out how to transform today's buggy and hackable computers into robust AI systems that we can really trust, because otherwise, all this awesome new technology can malfunction and harm us, or get hacked and be turned against us. And this AI safety work has to include work on AI value alignment, because the real threat from AGI isn't malice, like in silly Hollywood movies, but competence -- AGI accomplishing goals that just aren't aligned with ours. For example, when we humans drove the West African black rhino extinct, we didn't do it because we were a bunch of evil rhinoceros haters, did we? We did it because we were smarter than them and our goals weren't aligned with theirs. But AGI is by definition smarter than us, so to make sure that we don't put ourselves in the position of those rhinos if we create AGI, we need to figure out how to make machines understand our goals, adopt our goals and retain our goals.

And whose goals should these be, anyway? Which goals should they be? This brings us to the third part of our rocket metaphor: the destination. We're making AI more powerful, trying to figure out how to steer it, but where do we want to go with it?
This is the elephant in the room that almost nobody talks about -- not even here at TED -- because we're so fixated on short-term AI challenges. Look, our species is trying to build AGI, motivated by curiosity and economics, but what sort of future society are we hoping for if we succeed? We did an opinion poll on this recently, and I was struck to see that most people actually want us to build superintelligence: AI that's vastly smarter than us in all ways. What there was the greatest agreement on was that we should be ambitious and help life spread into the cosmos, but there was much less agreement about who or what should be in charge. And I was actually quite amused to see that there's some people who want it to be just machines.

(Laughter)

And there was total disagreement about what the role of humans should be, even at the most basic level, so let's take a closer look at possible futures that we might choose to steer toward, alright? So don't get me wrong here. I'm not talking about space travel, merely about humanity's metaphorical journey into the future.

So one option that some of my AI colleagues like is to build superintelligence and keep it under human control, like an enslaved god, disconnected from the internet and used to create unimaginable technology and wealth for whoever controls it.
But Lord Acton warned us that power corrupts, and absolute power corrupts absolutely, so you might worry that maybe we humans just aren't smart enough, or wise enough rather, to handle this much power. Also, aside from any moral qualms you might have about enslaving superior minds, you might worry that maybe the superintelligence could outsmart us, break out and take over.

But I also have colleagues who are fine with AI taking over and even causing human extinction, as long as we feel the AIs are our worthy descendants, like our children. But how would we know that the AIs have adopted our best values and aren't just unconscious zombies tricking us into anthropomorphizing them? Also, shouldn't those people who don't want human extinction have a say in the matter, too?

Now, if you didn't like either of those two high-tech options, it's important to remember that low-tech is suicide from a cosmic perspective, because if we don't go far beyond today's technology, the question isn't whether humanity is going to go extinct, merely whether we're going to get taken out by the next killer asteroid, supervolcano or some other problem that better technology could have solved. So, how about having our cake and eating it ... with AGI that's not enslaved but treats us well because its values are aligned with ours?
This is the gist of what Eliezer Yudkowsky has called "friendly AI," and if we can do this, it could be awesome. It could not only eliminate negative experiences like disease, poverty, crime and other suffering, but it could also give us the freedom to choose from a fantastic new diversity of positive experiences -- basically making us the masters of our own destiny.

So in summary, our situation with technology is complicated, but the big picture is rather simple. Most AI researchers expect AGI within decades, and if we just bumble into this unprepared, it will probably be the biggest mistake in human history -- let's face it. It could enable brutal, global dictatorship with unprecedented inequality, surveillance and suffering, and maybe even human extinction. But if we steer carefully, we could end up in a fantastic future where everybody's better off: the poor are richer, the rich are richer, everybody is healthy and free to live out their dreams.

Now, hang on. Do you folks want the future that's politically right or left? Do you want the pious society with strict moral rules, or do you want a hedonistic free-for-all, more like Burning Man 24/7? Do you want beautiful beaches, forests and lakes, or would you prefer to rearrange some of those atoms with the computers, enabling virtual experiences?
With friendly AI, we could simply build all of these societies and give people the freedom to choose which one they want to live in because we would no longer be limited by our intelligence, merely by the laws of physics. So the resources and space for this would be astronomical -- literally.

So here's our choice. We can either be complacent about our future, taking as an article of blind faith that any new technology is guaranteed to be beneficial, and just repeat that to ourselves as a mantra over and over and over again as we drift like a rudderless ship towards our own obsolescence. Or we can be ambitious -- thinking hard about how to steer our technology and where we want to go with it to create the age of amazement.

We're all here to celebrate the age of amazement, and I feel that its essence should lie in becoming not overpowered but empowered by our technology.

Thank you.

(Applause)
