Friday, March 24, 2023

You Can Have the Blue Pill or the Red Pill, and We're Out of Blue Pills


March 24, 2023 | By Yuval Harari, Tristan Harris and Aza Raskin
Mr. Harari is a historian and a founder of the social impact company Sapienship. Mr. Harris and Mr. Raskin are founders of the Center for Humane Technology.

Imagine that as you are boarding an airplane, half the engineers who built it tell you there is a 10 percent chance the plane will crash, killing you and everyone else on it. Would you still board?



In 2022, over 700 top academics and researchers behind the leading artificial intelligence companies were asked in a survey about future A.I. risk. Half of those surveyed stated that there was a 10 percent or greater chance of human extinction (or similarly permanent and severe disempowerment) from future A.I. systems.



Technology companies building today's large language models are caught in a race to put all of humanity on that plane. Drug companies cannot sell people new medicines without first subjecting their products to rigorous safety checks. Biotech labs cannot release new viruses into the public sphere in order to impress shareholders with their wizardry. Likewise, A.I. systems with the power of GPT-4 and beyond should not be entangled with the lives of billions of people at a pace faster than cultures can safely absorb them. A race to dominate the market should not set the speed of deploying humanity's most consequential technology. We should move at whatever speed enables us to get this right.



The specter of A.I. has haunted humanity since the mid-20th century, yet until recently it remained a distant prospect, something that belonged in science fiction more than in serious scientific and political debates. It is difficult for our human minds to grasp the new capabilities of GPT-4 and similar tools, and it is even harder to grasp the exponential speed at which these tools are developing more advanced and powerful capabilities. But most of the key skills boil down to one thing: the ability to manipulate and generate language, whether with words, sounds or images.



In the beginning was the word. Language is the operating system of human culture. From language emerges myth and law, gods and money, art and science, friendships and nations and computer code. A.I.'s new mastery of language means it can now hack and manipulate the operating system of civilization. By gaining mastery of language, A.I. is seizing the master key to civilization, from bank vaults to holy sepulchers. What would it mean for humans to live in a world where a large percentage of stories, melodies, images, laws, policies and tools are shaped by nonhuman intelligence, which knows how to exploit with superhuman efficiency the weaknesses, biases and addictions of the human mind, while also knowing how to form intimate relationships with human beings? In games like chess, no human can hope to beat a computer.



What happens when the same thing occurs in art, politics or religion? A.I. could rapidly eat the whole of human culture, everything we have produced over thousands of years, digest it and begin to gush out a flood of new cultural artifacts. Not just school essays but also political speeches, ideological manifestos, holy books for new cults. By 2028, the U.S. presidential race might no longer be run by humans.



Humans often don't have direct access to reality. We are cocooned by culture, experiencing reality through a cultural prism. Our political views are shaped by the reports of journalists and the anecdotes of friends. Our sexual preferences are tweaked by art and religion. That cultural cocoon has hitherto been woven by other humans. What will it be like to experience reality through a prism produced by nonhuman intelligence?



For thousands of years, we humans have lived inside the dreams of other humans. We have worshiped gods, pursued ideals of beauty and dedicated our lives to causes that originated in the imagination of some prophet, poet or politician. Soon we will also find ourselves living inside the hallucinations of nonhuman intelligence.



The "Terminator" franchise depicted robots running in the streets and shooting people. "The Matrix" assumed that to gain total control of human society, A.I. would have to first gain physical control of our brains and hook them directly to a computer network. However, simply by gaining mastery of language, A.I. would have all it needs to contain us in a Matrix-like world of illusions, without shooting anyone or implanting any chips in our brains. If any shooting is necessary, A.I. could make humans pull the trigger, just by telling us the right story.



The specter of being trapped in a world of illusions has haunted humankind much longer than the specter of A.I. Soon we will finally come face to face with Descartes's demon, with Plato's cave, with the Buddhist maya. A curtain of illusions could descend over the whole of humanity, and we might never again be able to tear that curtain away, or even realize it is there.



Social media was the first contact between A.I. and humanity, and humanity lost. First contact has given us a bitter taste of things to come. In social media, primitive A.I. was used not to create content but to curate user-generated content. The A.I. behind our news feeds is still choosing which words, sounds and images reach our retinas and eardrums, based on selecting those that will get the most virality, the most reaction and the most engagement.



While very primitive, the A.I. behind social media was sufficient to create a curtain of illusions that increased societal polarization, undermined our mental health and unraveled democracy. Millions of people have confused these illusions with reality. The United States has the best information technology in history, yet U.S. citizens can no longer agree on who won elections.



Though everyone is by now aware of the downside of social media, it hasn't been addressed, because too many of our social, economic and political institutions have become entangled with it. Large language models are our second contact with A.I. We cannot afford to lose again.



But on what basis should we believe humanity is capable of aligning these new forms of A.I. to our benefit? If we continue with business as usual, the new A.I. capacities will again be used to gain profit and power, even if it inadvertently destroys the foundations of our society.



A.I. indeed has the potential to help us defeat cancer, discover lifesaving drugs and invent solutions for our climate and energy crises. There are innumerable other benefits we cannot begin to imagine. But it doesn't matter how high the skyscraper of benefits A.I. assembles if the foundation collapses.



The time to reckon with A.I. is before our politics, our economy and our daily life become dependent on it. Democracy is a conversation, conversation relies on language, and when language itself is hacked, the conversation breaks down, and democracy becomes untenable. If we wait for the chaos to ensue, it will be too late to remedy it.

But there's a question that may linger in our minds: If we don't go as fast as possible, won't the West risk losing to China? No. The deployment and entanglement of uncontrolled A.I. into society, unleashing godlike powers decoupled from responsibility, could be the very reason the West loses to China.



We can still choose which future we want with A.I. When godlike powers are matched with commensurate responsibility and control, we can realize the benefits that A.I. promises.

We have summoned an alien intelligence. We don't know much about it, except that it is extremely powerful and offers us bedazzling gifts but could also hack the foundations of our civilization. We call upon world leaders to respond to this moment at the level of challenge it presents. The first step is to buy time to upgrade our 19th-century institutions for an A.I. world and to learn to master A.I. before it masters us.



Source: Opinion | Yuval Harari on Threats to Humanity Posed by A.I. - The New York Times
https://www.nytimes.com/2023/03/24/opinion/yuval-harari-ai-chatgpt.html

Yuval Noah Harari is a historian; the author of "Sapiens," "Homo Deus" and "Unstoppable Us"; and a founder of the social impact company Sapienship. Tristan Harris and Aza Raskin are founders of the Center for Humane Technology and co-hosts of the podcast "Your Undivided Attention."


