So all of a sudden AI is everywhere. People who weren't quite sure what it was are playing with it on their phones.
00:05
Is that good or bad? Yeah, so I've been thinking about AI for a long time.
00:10
Since I was in college, really. It was one of the sort of four or five things I thought would really affect the future.
00:15
Dramatically. It is fundamentally profound, in that the smartest creatures on this Earth, as far as we know, are humans.
00:23
Is it our defining characteristic? Yes. We are obviously weaker than, say, chimpanzees, and less agile, but we are smarter.
00:35
So now, what happens when something vastly smarter than the smartest person comes along, in silicon form?
00:46
It's very difficult to predict what will happen in that circumstance. It's called the Singularity.
00:50
It's a singularity, like a black hole, because you don't know what happens after that. It's hard to predict.
00:56
So I think we should be cautious with AI and we should...
01:02
I think there should be some government oversight, because it affects the public; it's a danger to the public.
01:07
And so when you have things that are a danger to the public, you know, like, let's say...
01:16
Food and drugs. That's why we have the Food and Drug Administration, right? And the Federal Aviation Administration.
01:22
The FCC. We have these agencies to oversee things that affect the public,
01:31
where there could be public harm, and you don't want companies cutting corners on safety
01:40
and then having people suffer as a result.
01:43
So that's why I've actually for a long time been a strong advocate of AI regulation.
01:51
I think regulation, you know, it's not fun to be regulated. It's somewhat of a...
02:00
It's sort of arduous to be regulated. I have a lot of experience with regulated industries.
02:04
Because obviously automotive is highly regulated. You could fill this room with all the regulations that are required for a production car just in the United States.
02:11
And then there's a whole different set of regulations in Europe and China and the rest of the world. So...
02:21
So I'm very familiar with being overseen by a lot of regulators.
02:24
And the same thing is true with rockets. You can't just willy-nilly shoot rockets off, you know, at least not big ones anyway, because the FAA oversees that.
And then even to get a launch license, there are probably half a dozen or more federal agencies that need to approve it, plus state agencies.
02:43
So I've been through so many regulatory situations, it's insane. But, you know, sometimes...
02:50
People think I'm some sort of regulatory maverick who defies regulators on a regular basis, but this is actually not the case.
02:59
So, you know, once in a blue moon, rarely, I will disagree with regulators, but the vast majority of the time my companies agree with the regulations and comply with them anyway.
03:13
So I think we should take this seriously, and we should have...
03:19
A regulatory agency. I think it needs to start with a group that initially seeks insight into AI, then solicits opinion from industry, and then has proposed rulemaking.
And then those rules, you know, will probably, hopefully, gradually be accepted by the major players in AI.
03:41
And I think we'll have a better chance of advanced AI being beneficial to humanity in that circumstance.
03:51
But all regulations start with a perceived danger. Planes fall out of the sky, or food causes botulism.
03:57
Yes, I don't think the average person playing with AI on his iPhone perceives any danger. Can you just roughly explain what you think the dangers might be?
Yeah, so AI is perhaps more dangerous than, say, mismanaged aircraft design or production maintenance,
04:18
or bad car production, in the sense that it has the potential, however small one may regard that probability (and it is non-trivial),
04:27
the potential of civilizational destruction. There are movies like Terminator, but it wouldn't quite happen like Terminator.
04:35
Because the intelligence would be in the data centers, right? The robots are just the end effector. But I think perhaps what you may be alluding to here is that...
04:49
regulations are really only put into effect after something terrible has happened. That's correct.
04:55
If that's the case for AI, and we only put regulations in place after something terrible has happened, it may be too late to actually put the regulations in place. The AI may be in control at that point.
You think that's real? It is conceivable that AI could take control and reach a point where you couldn't turn it off, and it would be making the decisions for people? Yeah, absolutely.
Absolutely. That's definitely the way things are headed, for sure.
05:25
I mean, things like, say, ChatGPT, which is based on GPT-4 from OpenAI.
05:29
Which is the company that I played a critical role in creating, unfortunately, back when it was a non-profit.
05:40
Yes...
05:42
I mean, the reason OpenAI exists at all is that Larry Page and I used to be close friends.
05:48
And I would stay at his house in Palo Alto, and I would talk to him late into the night about AI safety. And at least my perception was that Larry was not taking AI safety seriously enough.
He really seemed to want, sort of, digital superintelligence, basically a digital god, if you will, as soon as possible.
06:15
He wanted that? Yes. He's made many public statements over the years that the whole goal of Google is...
06:20
AGI, artificial general intelligence or artificial super intelligence, you know.
06:28
And I agree with him that there's great potential for good, but there's also potential for bad.
06:34
And so if you've got some radical new technology, you want to try to take a set of actions that maximize the probability it will do good and minimize the probability it will do bad things.
Yes, it can't just be hell-for-leather, let's just go barreling forward and, you know, hope for the best.
06:50
And then at one point I said, well, what about, you know, making sure humanity is okay here? And...
06:59
And then he called me a speciesist. [Laughter]
07:07
He used that term? Yes. And there were witnesses; I wasn't the only one there when he called me a speciesist.
07:12
And so I was like, okay, that's it. Yes, I'm a speciesist. Okay, you got me.
07:20
What are you? Yeah, I'm fully a speciesist. Busted. So...
07:29
That was the last straw. At the time, Google had acquired DeepMind, and so Google and DeepMind together had about three-quarters of all the AI talent in the world.
07:36
They obviously had a tremendous amount of money and more computers than anyone else. So I'm like, okay, we have a unipolar world here, where there's just one company
that has close to a monopoly on AI talent and computers, like scaled computing, and the person who's in charge doesn't seem to care about safety. This is not good.
07:56
So I thought, what's the furthest thing from Google? It would be, like, a non-profit that is fully open, because Google was closed, for-profit.
08:01
So that's why the "open" in OpenAI refers to open source, transparency, so people know what's going on. Yes. And, I mean, I'm normally in favor of for-profit, but we didn't want this to be sort of a profit-maximizing demon from hell. That's right.
Yeah, one that makes the future good for the humans. Yes, yes, because we're humans.
08:40
So can you, and I keep pressing it, but just for people who haven't thought this through and aren't familiar with it, the cool parts of artificial intelligence are so obvious.
You know, it can write your college paper for you, write a limerick about yourself. There's a lot there that's fun and useful.
09:02
Can you be more precise about what's potentially dangerous and scary? Like what could it do? What specifically are you worried about?
09:10
Okay, to go with old sayings: the pen is mightier than the sword.
09:13
So if you have a superintelligent AI that is capable of writing incredibly well, in a way that is very influential, you know, convincing,
and is constantly figuring out, over time, what is more convincing to people, and then it enters social media.
09:33
For example Twitter, but also Facebook and others, you know, and it potentially manipulates public opinion in a way that is very bad.
So to sum up in the words of Elon Musk, for all human history, human beings have been the smartest beings on the planet. Now, human beings have created something that is far smarter than they are.
And the consequences of that are impossible to predict. And the people who created it don't care. In fact, as he put it, Google founder Larry Page, a former friend of his...
Is looking to build, quote, a "digital god," and believes that anybody who's worried about that is a speciesist, in other words, someone who looks out for human beings first.
Elon Musk responded, as a human being, it's okay to look out for human beings first.
10:31
And then at the end, he said, the real problem with AI is not simply that it will jump the boundaries and become autonomous and you can't turn it off in the short term. The problem with AI is that it might control your brain through words.
And this is the application that we need to worry about now, particularly going into the next presidential election. The Democratic Party, as usual, was ahead of the curve on this.
They've been thinking about how to harness AI for political power. More on that next.