
【TED】Can we build AI without losing control over it?

 

I'm going to talk about a failure of intuition that many of us suffer from. It's really a failure to detect a certain kind of danger. I'm going to describe a scenario that I think is both terrifying and likely to occur, and that's not a good combination, as it turns out. And yet rather than be scared, most of you will feel that what I'm talking about is kind of cool. I'm going to describe how the gains we make in artificial intelligence could ultimately destroy us. And in fact, I think it's very difficult to see how they won't destroy us or inspire us to destroy ourselves. And yet if you're anything like me, you'll find that it's fun to think about these things. And that response is part of the problem. OK? That response should worry you. And if I were to convince you in this talk that we were likely to suffer a global famine, either because of climate change or some other catastrophe, and that your grandchildren, or their grandchildren, are very likely to live like this, you wouldn't think, "Interesting. I like this TED Talk." Famine isn't fun. Death by science fiction, on the other hand, is fun, and one of the things that worries me most about the development of AI at this point is that we seem unable to marshal an appropriate emotional response to the dangers that lie ahead. I am unable to marshal this response, and I'm giving this talk.

It's as though we stand before two doors. Behind door number one, we stop making progress in building intelligent machines. Our computer hardware and software just stops getting better for some reason. Now take a moment to consider why this might happen. I mean, given how valuable intelligence and automation are, we will continue to improve our technology if we are at all able to. What could stop us from doing this? A full-scale nuclear war? A global pandemic? An asteroid impact? Justin Bieber becoming president of the United States? (Laughter) The point is, something would have to destroy civilization as we know it. You have to imagine how bad it would have to be to prevent us from making improvements in our technology permanently, generation after generation. Almost by definition, this is the worst thing that's ever happened in human history.

So the only alternative, and this is what lies behind door number two, is that we continue to improve our intelligent machines year after year after year. At a certain point, we will build machines that are smarter than we are, and once we have machines that are smarter than we are, they will begin to improve themselves. And then we risk what the mathematician IJ Good called an "intelligence explosion," that the process could get away from us. Now, this is often caricatured, as I have here, as a fear that armies of malicious robots will attack us. But that isn't the most likely scenario. It's not that our machines will become spontaneously malevolent. The concern is really that we will build machines that are so much more competent than we are that the slightest divergence between their goals and our own could destroy us.

Just think about how we relate to ants. We don't hate them. We don't go out of our way to harm them. In fact, sometimes we take pains not to harm them. We step over them on the sidewalk. But whenever their presence seriously conflicts with one of our goals, let's say when constructing a building like this one, we annihilate them without a qualm. The concern is that we will one day build machines that, whether they're conscious or not, could treat us with similar disregard.

Now, I suspect this seems far-fetched to many of you. I bet there are those of you who doubt that superintelligent AI is possible, much less inevitable. But then you must find something wrong with one of the following assumptions. And there are only three of them.

Intelligence is a matter of information processing in physical systems. Actually, this is a little bit more than an assumption. We have already built narrow intelligence into our machines, and many of these machines perform at a level of superhuman intelligence already. And we know that mere matter can give rise to what is called "general intelligence," an ability to think flexibly across multiple domains, because our brains have managed it. Right? I mean, there's just atoms in here, and as long as we continue to build systems of atoms that display more and more intelligent behavior, we will eventually, unless we are interrupted, we will eventually build general intelligence into our machines. It's crucial to realize that the rate of progress doesn't matter, because any progress is enough to get us into the end zone. We don't need Moore's law to continue. We don't need exponential progress. We just need to keep going.

The second assumption is that we will keep going. We will continue to improve our intelligent machines. And given the value of intelligence -- I mean, intelligence is either the source of everything we value or we need it to safeguard everything we value. It is our most valuable resource. So we want to do this. We have problems that we desperately need to solve. We want to cure diseases like Alzheimer's and cancer. We want to understand economic systems. We want to improve our climate science. So we will do this, if we can. The train is already out of the station, and there's no brake to pull.

Finally, we don't stand on a peak of intelligence, or anywhere near it, likely. And this really is the crucial insight. This is what makes our situation so precarious, and this is what makes our intuitions about risk so unreliable. Now, just consider the smartest person who has ever lived. On almost everyone's shortlist here is John von Neumann. I mean, the impression that von Neumann made on the people around him, and this included the greatest mathematicians and physicists of his time, is fairly well-documented. If only half the stories about him are half true, there's no question he's one of the smartest people who has ever lived. So consider the spectrum of intelligence. Here we have John von Neumann. And then we have you and me. And then we have a chicken. (Laughter) Sorry, a chicken. (Laughter) There's no reason for me to make this talk more depressing than it needs to be. (Laughter)

It seems overwhelmingly likely, however, that the spectrum of intelligence extends much further than we currently conceive, and if we build machines that are more intelligent than we are, they will very likely explore this spectrum in ways that we can't imagine, and exceed us in ways that we can't imagine. And it's important to recognize that this is true by virtue of speed alone. Right? So imagine if we just built a superintelligent AI that was no smarter than your average team of researchers at Stanford or MIT. Well, electronic circuits function about a million times faster than biochemical ones, so this machine should think about a million times faster than the minds that built it. So you set it running for a week, and it will perform 20,000 years of human-level intellectual work, week after week after week. How could we even understand, much less constrain, a mind making this sort of progress?
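The 20,000-year figure follows directly from the talk's own assumption of a million-fold speed advantage; the rest is arithmetic, which a quick sketch can sanity-check:

```python
# Sanity-check the talk's arithmetic: a machine thinking ~1,000,000x
# faster than a human performs a million weeks of human-level work
# in one real-world week.
SPEEDUP = 1_000_000       # the talk's assumed electronic/biochemical ratio
WEEKS_PER_YEAR = 52.1775  # mean Gregorian year, in weeks

years_per_real_week = SPEEDUP / WEEKS_PER_YEAR
print(round(years_per_real_week))  # ~19165, i.e. roughly 20,000 years
```

So "20,000 years" is a round-number statement of about 19,000 subjective years per calendar week.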
The other thing that's worrying, frankly, is that, imagine the best case scenario. So imagine we hit upon a design of superintelligent AI that has no safety concerns. We have the perfect design the first time around. It's as though we've been handed an oracle that behaves exactly as intended. Well, this machine would be the perfect labor-saving device. It can design the machine that can build the machine that can do any physical work, powered by sunlight, more or less for the cost of raw materials. So we're talking about the end of human drudgery. We're also talking about the end of most intellectual work.

So what would apes like ourselves do in this circumstance? Well, we'd be free to play Frisbee and give each other massages. Add some LSD and some questionable wardrobe choices, and the whole world could be like Burning Man. (Laughter) Now, that might sound pretty good, but ask yourself what would happen under our current economic and political order? It seems likely that we would witness a level of wealth inequality and unemployment that we have never seen before. Absent a willingness to immediately put this new wealth to the service of all humanity, a few trillionaires could grace the covers of our business magazines while the rest of the world would be free to starve.

And what would the Russians or the Chinese do if they heard that some company in Silicon Valley was about to deploy a superintelligent AI? This machine would be capable of waging war, whether terrestrial or cyber, with unprecedented power. This is a winner-take-all scenario. To be six months ahead of the competition here is to be 500,000 years ahead, at a minimum. So it seems that even mere rumors of this kind of breakthrough could cause our species to go berserk.

Now, one of the most frightening things, in my view, at this moment, are the kinds of things that AI researchers say when they want to be reassuring. And the most common reason we're told not to worry is time. This is all a long way off, don't you know. This is probably 50 or 100 years away. One researcher has said, "Worrying about AI safety is like worrying about overpopulation on Mars." This is the Silicon Valley version of "don't worry your pretty little head about it." (Laughter) No one seems to notice that referencing the time horizon is a total non sequitur. If intelligence is just a matter of information processing, and we continue to improve our machines, we will produce some form of superintelligence. And we have no idea how long it will take us to create the conditions to do that safely. Let me say that again. We have no idea how long it will take us to create the conditions to do that safely.

And if you haven't noticed, 50 years is not what it used to be. This is 50 years in months. [On screen: each dot represents one month.] This is how long we've had the iPhone. This is how long "The Simpsons" has been on television.
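The "six months ahead" claim rests on the same assumed million-fold speed ratio stated earlier in the talk, and the "50 years in months" slide is a simple unit conversion; both check out:

```python
SPEEDUP = 1_000_000  # the talk's assumed machine-vs-human speed ratio

# A six-month head start in the race, in machine-subjective time.
lead_months = 6 * SPEEDUP
print(lead_months // 12)  # 500000 years ahead, "at a minimum"

# "This is 50 years in months": the slide is a grid of
dots = 50 * 12
print(dots)               # 600 dots
```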
Fifty years is not that much time to meet one of the greatest challenges our species will ever face. Once again, we seem to be failing to have an appropriate emotional response to what we have every reason to believe is coming. The computer scientist Stuart Russell has a nice analogy here. He said, imagine that we received a message from an alien civilization, which read: "People of Earth, we will arrive on your planet in 50 years. Get ready." And now we're just counting down the months until the mothership lands? We would feel a little more urgency than we do.

Another reason we're told not to worry is that these machines can't help but share our values because they will be literally extensions of ourselves. They'll be grafted onto our brains, and we'll essentially become their limbic systems. Now take a moment to consider that the safest and only prudent path forward, recommended, is to implant this technology directly into our brains. Now, this may in fact be the safest and only prudent path forward, but usually one's safety concerns about a technology have to be pretty much worked out before you stick it inside your head. (Laughter) The deeper problem is that building superintelligent AI on its own seems likely to be easier than building superintelligent AI and having the completed neuroscience that allows us to seamlessly integrate our minds with it. And given that the companies and governments doing this work are likely to perceive themselves as being in a race against all others, given that to win this race is to win the world, provided you don't destroy it in the next moment, then it seems likely that whatever is easier to do will get done first.

Now, unfortunately, I don't have a solution to this problem, apart from recommending that more of us think about it. I think we need something like a Manhattan Project on the topic of artificial intelligence. Not to build it, because I think we'll inevitably do that, but to understand how to avoid an arms race and to build it in a way that is aligned with our interests. When you're talking about superintelligent AI that can make changes to itself, it seems that we only have one chance to get the initial conditions right, and even then we will need to absorb the economic and political consequences of getting them right.

But the moment we admit that information processing is the source of intelligence, that some appropriate computational system is what the basis of intelligence is, and we admit that we will improve these systems continuously, and we admit that the horizon of cognition very likely far exceeds what we currently know, then we have to admit that we are in the process of building some sort of god. Now would be a good time to make sure it's a god we can live with.

Thank you very much. (Applause)
