Can we build AI without losing control over it? - TED Transcript

Author: Sam Harris (neuroscientist, philosopher)

Sam Harris's work focuses on how our growing understanding of ourselves and the world is changing our sense of how we should live.

Part 1

I'm going to talk about a failure of intuition that many of us suffer from. It's really a failure to detect a certain kind of danger. I'm going to describe a scenario that I think is both terrifying and likely to occur, and that's not a good combination, as it turns out. And yet rather than be scared, most of you will feel that what I'm talking about is kind of cool.

I'm going to describe how the gains we make in artificial intelligence could ultimately destroy us. And in fact, I think it's very difficult to see how they won't destroy us or inspire us to destroy ourselves.

And yet if you're anything like me, you'll find that it's fun to think about these things. And that response is part of the problem. OK? That response should worry you. And if I were to convince you in this talk that we were likely to suffer a global famine, either because of climate change or some other catastrophe, and that your grandchildren, or their grandchildren, are very likely to live like this, you wouldn't think, "Interesting. I like this TED Talk."

Famine isn't fun. Death by science fiction, on the other hand, is fun, and one of the things that worries me most about the development of AI at this point is that we seem unable to marshal an appropriate emotional response to the dangers that lie ahead. I am unable to marshal this response, and I'm giving this talk.

Part 2

It's as though we stand before two doors. Behind door number one, we stop making progress in building intelligent machines. Our computer hardware and software just stops getting better for some reason. Now take a moment to consider why this might happen. I mean, given how valuable intelligence and automation are, we will continue to improve our technology if we are at all able to. What could stop us from doing this? A full-scale nuclear war? A global pandemic? An asteroid impact? Justin Bieber becoming president of the United States?

The point is, something would have to destroy civilization as we know it. You have to imagine how bad it would have to be to prevent us from making improvements in our technology permanently, generation after generation. *Almost by definition, this is the worst thing that's ever happened in human history.*

So the only alternative, and this is what lies behind door number two, is that we continue to improve our intelligent machines year after year after year. At a certain point, we will build machines that are smarter than we are, and once we have machines that are smarter than we are, they will begin to improve themselves. And then we risk what the mathematician I. J. Good called an "intelligence explosion," that the process could get away from us.

intelligence explosion: the scenario I. J. Good described in 1965:

An ultraintelligent machine is one that far surpasses all the intellectual activities of any human. Since designing such machines is itself an intellectual activity, an ultraintelligent machine could design even better machines; there would then unquestionably be an "intelligence explosion," and human intelligence would be left far behind. Thus, humanity need only ever design a single ultraintelligent machine.

Now, this is often caricatured, as I have here, as a fear that armies of malicious robots will attack us. But that isn't the most likely scenario. It's not that our machines will become spontaneously malevolent. The concern is really that we will build machines that are so much more competent than we are that the slightest divergence between their goals and our own could destroy us.

Just think about how we relate to ants. We don't hate them. We don't go out of our way to harm them. In fact, sometimes we take pains not to harm them. We step over them on the sidewalk. But whenever their presence seriously conflicts with one of our goals, let's say when constructing a building like this one, we annihilate them without a qualm. The concern is that we will one day build machines that, whether they're conscious or not, could treat us with similar disregard.

Now, I suspect this seems far-fetched to many of you. I bet there are those of you who doubt that superintelligent AI is possible, much less inevitable. But then you must find something wrong with one of the following assumptions. And there are only three of them.

Intelligence is a matter of information processing in physical systems. Actually, this is a little bit more than an assumption. We have already built narrow intelligence into our machines, and many of these machines perform at a level of superhuman intelligence already. And we know that mere matter can give rise to what is called "general intelligence," an ability to think flexibly across multiple domains, because our brains have managed it. Right? I mean, there's just atoms in here, and as long as we continue to build systems of atoms that display more and more intelligent behavior, we will eventually, unless we are interrupted, build general intelligence into our machines.

It's crucial to realize that the rate of progress doesn't matter, because any progress is enough to get us into the end zone. We don't need Moore's law (the observation that the number of transistors on an integrated circuit doubles roughly every 18 months) to continue. We don't need exponential progress. We just need to keep going.
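
As an aside, the Moore's law gloss above is ordinary exponential growth. A minimal formulation, where $N_0$ is the current transistor count and $T \approx 18$ months is the assumed doubling period:

$$
N(t) = N_0 \cdot 2^{t/T}
$$

The argument here needs nothing that strong: any positive rate of improvement, sustained long enough, crosses any fixed threshold.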

The second assumption is that we will keep going. We will continue to improve our intelligent machines. And given the value of intelligence -- I mean, intelligence is either the source of everything we value or we need it to safeguard everything we value. It is our most valuable resource. So we want to do this. We have problems that we desperately need to solve. We want to cure diseases like Alzheimer's and cancer. We want to understand economic systems. We want to improve our climate science. So we will do this, if we can. The train is already out of the station, and there's no brake to pull.

Finally, we don't stand on a peak of intelligence, or anywhere near it, likely. And this really is the crucial insight. This is what makes our situation so precarious, and this is what makes our intuitions about risk so unreliable.

Now, just consider the smartest person who has ever lived. On almost everyone's shortlist here is John von Neumann. I mean, the impression that von Neumann made on the people around him, and this included the greatest mathematicians and physicists of his time, is fairly well-documented. If only half the stories about him are half true, there's no question he's one of the smartest people who has ever lived. So consider the spectrum of intelligence. Here we have John von Neumann. And then we have you and me. And then we have a chicken.

Sorry, a chicken.

It seems overwhelmingly likely, however, that the spectrum of intelligence extends much further than we currently conceive, and if we build machines that are more intelligent than we are, they will very likely explore this spectrum in ways that we can't imagine, and exceed us in ways that we can't imagine.

Part 3

And it's important to recognize that this is true by virtue of speed alone. Right? So imagine if we just built a superintelligent AI that was no smarter than your average team of researchers at Stanford or MIT. Well, electronic circuits function about a million times faster than biochemical ones, so this machine should think about a million times faster than the minds that built it. So you set it running for a week, and it will perform 20,000 years of human-level intellectual work, week after week after week. How could we even understand, much less constrain, a mind making this sort of progress?
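
For reference, the arithmetic behind the 20,000-year figure, as a rough sketch that assumes a uniform million-fold speedup and no other bottleneck:

$$
10^{6}\ \text{weeks} \approx \frac{10^{6}}{52}\ \text{years} \approx 19{,}000\ \text{years} \approx 20{,}000\ \text{years}
$$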

The other thing that's worrying, frankly, is that, imagine the best-case scenario. So imagine we hit upon a design of superintelligent AI that has no safety concerns. We have the perfect design the first time around. It's as though we've been handed an oracle that behaves exactly as intended. Well, this machine would be the perfect labor-saving device. It can design the machine that can build the machine that can do any physical work, powered by sunlight, more or less for the cost of raw materials. So we're talking about the end of human drudgery. We're also talking about the end of most intellectual work.

So what would apes like ourselves do in this circumstance? Well, we'd be free to play Frisbee and give each other massages. Add some LSD and some questionable wardrobe choices, and the whole world could be like Burning Man.

Now, that might sound pretty good, but ask yourself what would happen under our current economic and political order? It seems likely that we would witness a level of wealth inequality and unemployment that we have never seen before. Absent a willingness to immediately put this new wealth to the service of all humanity, a few trillionaires could grace the covers of our business magazines while the rest of the world would be free to starve.

And what would the Russians or the Chinese do if they heard that some company in Silicon Valley was about to deploy a superintelligent AI? This machine would be capable of waging war, whether terrestrial or cyber, with unprecedented power. This is a winner-take-all scenario.

To be six months ahead of the competition here is to be 500,000 years ahead, at a minimum.
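
The half-year figure is the same nominal million-fold speed ratio from Part 3, applied once more:

$$
0.5\ \text{years} \times 10^{6} = 500{,}000\ \text{years}
$$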

So it seems that even mere rumors of this kind of breakthrough could cause our species to go berserk.

Part 4

Now, one of the most frightening things, in my view, at this moment, are the kinds of things that AI researchers say when they want to be reassuring. And the most common reason we're told not to worry is time. This is all a long way off, don't you know. This is probably 50 or 100 years away. One researcher has said, "Worrying about AI safety is like worrying about overpopulation on Mars." This is the Silicon Valley version of "don't worry your pretty little head about it."

No one seems to notice that referencing the time horizon is a total non sequitur. If intelligence is just a matter of information processing, and we continue to improve our machines, we will produce some form of superintelligence. And we have no idea how long it will take us to create the conditions to do that safely. Let me say that again. We have no idea how long it will take us to create the conditions to do that safely.

And if you haven't noticed, 50 years is not what it used to be. This is 50 years in months. 

This is how long we've had the iPhone. This is how long "The Simpsons" has been on television. Fifty years is not that much time to meet one of the greatest challenges our species will ever face. Once again, we seem to be failing to have an appropriate emotional response to what we have every reason to believe is coming.

The computer scientist Stuart Russell (co-author of *Artificial Intelligence: A Modern Approach*) has a nice analogy here. He said, imagine that we received a message from an alien civilization, which read: "People of Earth, we will arrive on your planet in 50 years. Get ready." And now we're just counting down the months until the mothership lands? We would feel a little more urgency than we do.

Another reason we're told not to worry is that these machines can't help but share our values because they will be literally extensions of ourselves. They'll be grafted onto our brains, and we'll essentially become their limbic systems.

Now take a moment to consider that the safest and only prudent path forward, recommended, is to implant this technology directly into our brains. Now, this may in fact be the safest and only prudent path forward, but usually one's safety concerns about a technology have to be pretty much worked out before you stick it inside your head.

The deeper problem is that building superintelligent AI on its own seems likely to be easier than building superintelligent AI and having the completed neuroscience that allows us to seamlessly integrate our minds with it. And given that the companies and governments doing this work are likely to perceive themselves as being in a race against all others, given that to win this race is to win the world, provided you don't destroy it in the next moment, then it seems likely that whatever is easier to do will get done first.

Now, unfortunately, I don't have a solution to this problem, apart from recommending that more of us think about it. I think we need something like a Manhattan Project (the U.S. Army program, begun in June 1942, to develop the atomic bomb through nuclear fission) on the topic of artificial intelligence. Not to build it, because I think we'll inevitably do that, but to understand how to avoid an arms race and to build it in a way that is aligned with our interests. When you're talking about superintelligent AI that can make changes to itself, it seems that we only have one chance to get the initial conditions right, and even then we will need to absorb the economic and political consequences of getting them right.

But the moment we admit that information processing is the source of intelligence, that some appropriate computational system is what the basis of intelligence is, and we admit that we will improve these systems continuously, and we admit that the horizon of cognition very likely far exceeds what we currently know, then we have to admit that we are in the process of building some sort of god. Now would be a good time to make sure it's a god we can live with.