Saturday, June 24, 2017

The Real Threat of Artificial Intelligence


By KAI-FU LEE, JUNE 24, 2017, BEIJING —
NY Times Sunday Review


What worries you about the coming world of artificial intelligence? Too often the answer to this question resembles the plot of a sci-fi thriller. People worry that developments in A.I. will bring about the “singularity” — that point in history when A.I. surpasses human intelligence, leading to an unimaginable revolution in human affairs. Or they wonder whether instead of our controlling artificial intelligence, it will control us, turning us, in effect, into cyborgs.

These are interesting issues to contemplate, but they are not pressing. They concern situations that may not arise for hundreds of years, if ever. At the moment, there is no known path from our best A.I. tools (like the Google computer program that recently beat the world’s best player of the game of Go) to “general” A.I. — self-aware computer programs that can engage in common-sense reasoning, attain knowledge in multiple domains, feel, express and understand emotions and so on.

This doesn’t mean we have nothing to worry about. On the contrary, the A.I. products that now exist are improving faster than most people realize and promise to radically transform our world, not always for the better. They are only tools, not a competing form of intelligence. But they will reshape what work means and how wealth is created, leading to unprecedented economic inequalities and even altering the global balance of power. It is imperative that we turn our attention to these imminent challenges.

What is artificial intelligence today? Roughly speaking, it’s technology that takes in huge amounts of information from a specific domain (say, loan repayment histories) and uses it to make a decision in a specific case (whether to give an individual a loan) in the service of a specified goal (maximizing profits for the lender). Think of a spreadsheet on steroids, trained on big data. These tools can outperform human beings at a given task. This kind of A.I. is spreading to thousands of domains (not just loans), and as it does, it will eliminate many jobs. Bank tellers, customer service representatives, telemarketers, stock and bond traders, even paralegals and radiologists will gradually be replaced by such software. Over time this technology will come to control semiautonomous and autonomous hardware like self-driving cars and robots, displacing factory workers, construction workers, drivers, delivery workers and many others.
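The op-ed gives no code, but the “spreadsheet on steroids” idea can be made concrete with a minimal, purely illustrative sketch: a narrow model trained on invented loan-repayment records that makes a single approve-or-decline decision in service of a single goal. The data, the feature names, the 0.8 approval threshold and the use of scikit-learn are all assumptions for illustration, not anything the article specifies.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Invented "repayment history" features: [annual_income, debt_ratio, late_payments]
X = np.array([
    [65_000, 0.20, 0],
    [42_000, 0.55, 3],
    [88_000, 0.10, 0],
    [30_000, 0.70, 5],
    [54_000, 0.35, 1],
    [25_000, 0.80, 6],
])
y = np.array([1, 0, 1, 0, 1, 0])  # 1 = repaid in full, 0 = defaulted

# A narrow, goal-directed model: all it "knows" is this one domain.
model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X, y)

# One specific decision (approve or not) for one specified goal (lender profit).
applicant = np.array([[48_000, 0.40, 2]])
prob_repay = model.predict_proba(applicant)[0, 1]
print(f"Estimated repayment probability: {prob_repay:.2f}")
print("Approve loan" if prob_repay > 0.8 else "Decline loan")
```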

Unlike the Industrial Revolution and the computer revolution, the A.I. revolution is not taking certain jobs (artisans, personal assistants who use paper and typewriters) and replacing them with other jobs (assembly-line workers, personal assistants conversant with computers). Instead, it is poised to bring about a wide-scale decimation of jobs — mostly lower-paying jobs, but some higher-paying ones, too. This transformation will result in enormous profits for the companies that develop A.I., as well as for the companies that adopt it.

Imagine how much money a company like Uber would make if it used only robot drivers. Imagine the profits if Apple could manufacture its products without human labor. Imagine the gains to a loan company that could issue 30 million loans a year with virtually no human involvement. (As it happens, my venture capital firm has invested in just such a loan company.) We are thus facing two developments that do not sit easily together: enormous wealth concentrated in relatively few hands and enormous numbers of people out of work.

What is to be done? Part of the answer will involve educating or retraining people in tasks A.I. tools aren’t good at. Artificial intelligence is poorly suited for jobs involving creativity, planning and “cross-domain” thinking — for example, the work of a trial lawyer. But these skills are typically required by high-paying jobs that may be hard to retrain displaced workers to do. More promising are lower-paying jobs involving the “people skills” that A.I. lacks: social workers, bartenders, concierges — professions requiring nuanced human interaction. But here, too, there is a problem: How many bartenders does a society really need?

The solution to the problem of mass unemployment, I suspect, will involve “service jobs of love.” These are jobs that A.I. cannot do, that society needs and that give people a sense of purpose. Examples include accompanying an older person to visit a doctor, mentoring at an orphanage and serving as a sponsor at Alcoholics Anonymous — or, potentially soon, Virtual Reality Anonymous (for those addicted to their parallel lives in computer-generated simulations). The volunteer service jobs of today, in other words, may turn into the real jobs of the future. Other volunteer jobs may be higher-paying and professional, such as compassionate medical service providers who serve as the “human interface” for A.I. programs that diagnose cancer.

In all cases, people will be able to choose to work fewer hours than they do now. Who will pay for these jobs? Here is where the enormous wealth concentrated in relatively few hands comes in. It strikes me as unavoidable that large chunks of the money created by A.I. will have to be transferred to those whose jobs have been displaced. This seems feasible only through Keynesian policies of increased government spending, presumably raised through taxation on wealthy companies. As for what form that social welfare would take, I would argue for a conditional universal basic income: welfare offered to those who have a financial need, on the condition they either show an effort to receive training that would make them employable or commit to a certain number of hours of “service of love” voluntarism.

To fund this, tax rates will have to be high. The government will not only have to subsidize most people’s lives and work; it will also have to compensate for the loss of individual tax revenue previously collected from employed individuals. This leads to the final and perhaps most consequential challenge of A.I. The Keynesian approach I have sketched out may be feasible in the United States and China, which will have enough successful A.I. businesses to fund welfare initiatives via taxes.

But what about other countries? They face two insurmountable problems. First, most of the money being made from artificial intelligence will go to the United States and China. A.I. is an industry in which strength begets strength: The more data you have, the better your product; the better your product, the more data you can collect; the more data you can collect, the more talent you can attract; the more talent you can attract, the better your product. It’s a virtuous circle, and the United States and China have already amassed the talent, market share and data to set it in motion.

For example, the Chinese speech-recognition company iFlytek and several Chinese face-recognition companies such as Megvii and SenseTime have become industry leaders, as measured by market capitalization. The United States is spearheading the development of autonomous vehicles, led by companies like Google, Tesla and Uber. As for the consumer internet market, seven American or Chinese companies — Google, Facebook, Microsoft, Amazon, Baidu, Alibaba and Tencent — are making extensive use of A.I. and expanding operations to other countries, essentially owning those A.I. markets. It seems American businesses will dominate in developed markets and some developing markets, while Chinese companies will win in most developing markets.

The other challenge for many countries that are not China or the United States is that their populations are increasing, especially in the developing world. While a large, growing population can be an economic asset (as in China and India in recent decades), in the age of A.I. it will be an economic liability because it will comprise mostly displaced workers, not productive ones. So if most countries will not be able to tax ultra-profitable A.I. companies to subsidize their workers, what options will they have? I foresee only one: Unless they wish to plunge their people into poverty, they will be forced to negotiate with whichever country supplies most of their A.I. software — China or the United States — to essentially become that country’s economic dependent, taking in welfare subsidies in exchange for letting the “parent” nation’s A.I. companies continue to profit from the dependent country’s users.

Such economic arrangements would reshape today’s geopolitical alliances. One way or another, we are going to have to start thinking about how to minimize the looming A.I.-fueled gap between the haves and the have-nots, both within and between nations. Or to put the matter more optimistically: A.I. is presenting us with an opportunity to rethink economic inequality on a global scale. These challenges are too far-ranging in their effects for any nation to isolate itself from the rest of the world.




The Humanistic Lessons of AlphaGo

2017/6/24
廖咸浩

It is reported that Taiwan's latest education reform intends to make programming a subject that "everyone" must study, and lately self-styled experts on the "industries of the future" have been loudly promoting coding skills as well, so that for the moment this educational "reform" seems all but inevitable. But have these "industries of the future" actually seen the future?

In fact, AlphaGo's recent defeat of Ke Jie, the world's top Go player, has in effect announced that an even newer future has already arrived. Earlier Go programs could not surpass humans because their designers built them within the bounds of human knowledge and human modes of reasoning, so the programs' intelligence could hardly exceed our own. Today's Go programs, by contrast, are given only a simple framework and left to learn deeply on their own from big data, which lets them penetrate territory humans cannot reach. The noted Go professional 王銘琬, who made his career in Japan, points out in his new book 《迎接AI新時代》 (Greeting the New AI Era) that even top players have grasped perhaps 6 percent of the mysteries of Go, yet AlphaGo seems to have slipped easily into the remaining 94 percent. It is therefore no surprise that the top players who have faced AlphaGo feel they have learned new Go knowledge from the program.

In other words, humans, bound to fixed ways of thinking, sooner or later reach their limits, and breakthroughs require considerable trial and error; artificial intelligence, however, can move beyond human rationality to find new ways of thinking and open up new worlds quickly. This means A.I. no longer plays a merely supporting role; humans must instead learn from A.I. new ways of perceiving the world. The key is to step outside the rational mode of thought humans have always prided themselves on, and to strengthen what 柯德威爾 calls "thinking without thinking." Such "thinking" is almost an appeal to "intuition" — the leap of cognitive breakthrough that comes when experience has accumulated to a certain point, which is precisely the catalyst that "innovation" requires.

Although artificial intelligence has many people (Stephen Hawking among them) worried about its impact, if we use it well it can actually help us better understand what is truly precious about being human. Why does the film A.I. Artificial Intelligence open by probing the question of "love"? Precisely because this is where humans and artificial intelligence differ: not in the capacity for "reason," but in the capacity for emotion (which is not A.I.'s strong suit). Human feelings are not always noble or admirable, yet it is exactly this unpredictability that makes human beings special.

In any case, once humans have artificial intelligence to work with and learn from, work that relies heavily on instrumental rationality, and even creative programming work, will largely be handed over to A.I.; in advanced countries a great many analytical reports and even news stories are already written by A.I. How to handle the emotional dimension with greater subtlety and care will instead become humanity's foremost concern. In other words, people will in the future place more weight on communicating effectively, dealing with others with feeling, and creating meaningfully (doctors, for instance, will become givers of "love" rather than storers and readers of data). It follows that the future emphasis of education should in fact fall more heavily on the humanities and social sciences.

Having come this far, whether "everyone" needs to learn to write code should by now be self-evident (just think of how computers evolved from DOS to Windows). And the recent contempt a certain accounting student at a certain university showed for the history department may simply be the raving of a frog in a well, unaware that it has already landed in a pot of slowly warming water.


(The author is a professor in the Department of Foreign Languages and Literatures at National Taiwan University.)

Saturday, June 17, 2017

East Meets West — 周天瑋: Artificial Intelligence Challenges Liberalism

2017/6/17
周天瑋

Now that humans have lost to machines at board games, a conversation like this may already have taken place. A human asks a robot, "Can you ever have intelligence like mine?" The robot asks back, "Can you ever have intelligence like mine?" The human says, "You may test me." The robot then asks, without a flicker of expression, "At the polling station, would you ever cast a careless vote on an issue you don't understand?" The human thinks it over, nods, and says, "That's possible." The robot says, "You lose. I never would."

This exchange echoes the future predicted by the Israeli scholar Yuval Noah Harari. His book 《人類大命運：從智人到神人》 (Homo Deus) has attracted widespread attention. In it he argues that one day liberalism may be defeated by artificial intelligence.

His reasoning runs roughly as follows. The free markets and democratic elections championed by classical liberalism rest on the premise that every individual has unique value. Each person's freedom of choice is also why the individual must be respected. But this century's leaps in technology may bring decisive changes that shake this theory at its foundations.

This can be seen from three angles. First, because algorithmic systems are advancing as never before, the great majority of people may lose the economic and military usefulness they once possessed; once a person's usefulness shrinks, his political value necessarily falls with it, and individual rights lose their footing. (The result could also be the collapse of the modern rule of law, something Harari does not mention at all.)

Second, as individual rights erode, the work of governance may be handed over to algorithmic systems, since the system can digest the issues better than you can and understands your patterns of thought and behavior better than you do; it will therefore make decisions for you, and you may well be happy to accept them. (Much like the conversation above coming true.)

Third, the individuals the world system deems irreplaceable will by then no longer be the general mass of people but a new elite of upgraded supermen (Homo deus). Liberal theory exalts individual value and dignity, so speech, association, belief and property rights are free and equal for all (points Harari does not discuss); but once technology creates biological differences among humans and unbridgeable classes emerge, liberalism will stand dissolved, and the world will be controlled by Homo deus and algorithmic systems.

Set aside for the moment whether Harari's reasoning about liberalism's prospects holds up; we can approach the question from the other side and ask whether liberalism's theory and practice still leave great room for improvement — and whether, if liberalism really did collapse, that would necessarily be a catastrophe for all humankind.

Few countries in the world genuinely practice liberalism: the liberal systems of government in Britain and the United States took shape only in the early 19th century, and most European countries did not follow until after World War II. Many countries call themselves democracies but have only the form, without practicing constitutional liberty. From the golden ages of China's dynasties to the Four Asian Tigers, humanism has played a key role. The mainland today practices socialism with Chinese characteristics plus party-state capitalism and does not accept liberalism; but in theory, which serves the world better — a liberal system fused with monotheism, or a state-capitalist system fused with humanism? The former does not necessarily win.

Moreover, if liberal systems begin to come apart in some countries, it matters whether what comes apart is democracy or the republic. The spirit of the American Constitution was originally republican, not democratic; turnout in recent presidential elections has hovered around the 60 percent mark, and in this year's Los Angeles mayoral election turnout did not even reach 12 percent! If America's founders came back to life, they would surely try to improve the democratic system — perhaps by drawing on big data for some of today's voting and letting algorithmic systems replace democratic exercises that exhaust the people and the treasury, tear society apart, and expose the ugly sides of human nature and the media.


(The author is an attorney and board chairman of the Center for Chinese Studies at the University of California, Los Angeles.)

The Digital Divide and the Trust Divide Are Ambushing Taiwan

2017-06-18 02:51 United Daily News (聯合報)
高希均 / Founder, Global Views–Commonwealth Publishing Group (Taipei)

Facing drastic technological change
In January 2010, New York Times columnist Thomas L. Friedman was invited to speak at the Presidential Office on the choices and consequences of energy development. He praised Taiwan: "You have no oil wells, but you have a mine of brains; develop your brainpower and you will generate far greater productivity." "If there were a 'Taiwan stock' listed, I would certainly buy it."

On June 21 and 22 he will come to Taipei again to speak, discussing the three great trends in his new book, Thank You for Being Late: the market, Mother Nature and Moore's Law.

In the digital realm, before we have even adjusted to social interaction and mobile payments on our phones, big data, the cloud, blockchain and artificial intelligence (AI) are already sweeping in. On a recent trip to Beijing, a friend joked that even roadside beggars ask you to pay with Alipay. Kai-Fu Lee recently gave commencement addresses at his alma mater, Columbia University, and at National Taiwan University, telling the younger generation that the development and application of AI will have a more far-reaching impact than the Industrial Revolution; he even predicted that within ten years half of today's jobs could be replaced. For the young and middle-aged, joining this new world means being able to play new roles.

Look, too, at the business models of digital applications: changes once hard to imagine have already become part of daily life. Facebook is the world's most popular media company, yet has no "content" of its own; Alibaba is the most valuable online retailer, with no "inventory" of its own; Airbnb is the largest supplier of rental lodging, with no "hotels" of its own; with shared bicycles on the mainland and in Singapore, you scan a QR code to borrow, ride and return at will. Facing this gap, Friedman asks: are you adapting quickly, or watching idly from the sidelines?

The author of The World Is Flat knows perfectly well, of course, that "the world is not that flat" and that "globalization is no panacea": it produces gaps in digital adoption… and ultimately worsens the income gap within countries and the wealth gap between them.

Friedman discusses all of these problems within his new book's "three M" framework — Market, Mother Nature and Moore's Law.

Social trust in decline
Early this month the 群我倫理促進會 (an ethics-promotion association) released its 2017 Taiwan trust survey, which warns that the five groups the public trusts least are, in order, journalists, officials, elected representatives, judges and the president. If we once proudly proclaimed Taiwan the first Chinese-speaking society with free speech, the rule of law and democracy, then every cornerstone of that democratic society is now shaking: democracy's roots are corroding, free speech is being abused, government efficiency is declining, the fairness of law enforcement is in doubt, and the president's powers and responsibilities are under challenge.

In the new book Friedman also reflects on the importance of "trust": "When people trust one another, they can better adapt and openly embrace pluralism in all its forms; they think for the long term; they are more inclined to cooperate and to experiment; they open their minds to other people, new ideas and novel methods; nor do they waste energy investigating every mistake, and they do not fear failure." This view echoes the argument of Professor Francis Fukuyama — who recently lectured in Taiwan — in his book Trust: The Social Virtues and the Creation of Prosperity.


In the preface to the Chinese edition of his new book, Friedman writes: "I am deeply impressed by the energy and focus of the Taiwanese people, who, though they live on a small island, keep their finger on the pulse of the world." What we long to hear now is his advice.

Saturday, June 3, 2017

NTU Commencement: Kai-Fu Lee Expounds on the Magic of AI

2017/6/4
陳宜加

National Taiwan University held its commencement for the 105th academic year (2016-17) today, inviting Dr. Kai-Fu Lee — founder and CEO of Sinovation Ventures and head of its Artificial Intelligence Institute — to deliver the guest address and encourage the graduates. He made AI the theme of his speech, stressing that over the next 10 years AI will set off a revolution faster and larger than the Industrial Revolution, but that the greatest difference between humans and artificial intelligence lies in the capacity for "love." He urged the students, as they step into the next turning point of their lives, to use their "hearts" to create meaningful new lives and pursue their dreams and what they love.

Lee said that during his university years he found the love of his life — artificial intelligence, that is, AI — and that in the 34 years since graduating he has worked continuously in AI research, development and investment. The AI revolution of the next 10 years will be larger in scale than the Industrial Revolution, and it will arrive more swiftly and fiercely. When AlphaGo defeated Ke Jie, the strongest human player, for example, the Go grandmaster Nie Weiping remarked: "Humanity's only hope of beating AlphaGo is to pull the plug."

Extrapolating from these technologies and results, he said, one can confidently predict that within the next ten years AI will surpass humans in any task-oriented, objective domain and replace roughly 50 percent of human jobs. AI will take over the work of factory workers, construction workers, operators, analysts, accountants, drivers, assistants and brokers, and even parts of the professional work of doctors, lawyers and teachers. Humanity will enter an age of abundance, because AI, as our tool, will create enormous value for us, help us reduce or even eliminate poverty and hunger, and leave us more time to do the things we love.

As for his three images of the AI future, Lee named "the pyramid, the magic wand and the loving heart." With the arrival of AI, the pyramid structure of the workplace will be reorganized, with innovators at its apex. AI can optimize precision within a given domain far beyond human ability, but AI does not innovate. The opportunity for students in science and engineering lies in innovating and inventing technologies that have never existed before — not only to avoid being replaced by AI, but as both a responsibility and an opportunity.

To computer science students Lee advised, "Don't assume you have to interview with every semiconductor company and pick the highest-paying job"; they should seriously consider further study at the world's top universities or doing research at the top scientific companies. Medical students, likewise, should not chase only high-paying clinical positions but should consider medical research, "because you have the chance to extend human lifespans and quality of life, which is something AI cannot do."

AI, he said, already writes financial and sports news faster and better than most reporters, and without errors, but it certainly cannot write better economic commentary than Professor 高希均. AI has begun to write novels, but it certainly cannot write better essays than Lung Ying-tai. AI can beat the great majority of celebrity stock pundits at trading, but it cannot replace Sinovation Ventures' foresight about technology trends and its eye for early-stage investment.

Lee stressed that the AI era will be a golden age for artists and cultural creators, with more artists, designers, poets, singers and actors emerging. AI cannot create freely from feeling; it understands neither beauty nor humor. Many Taiwanese creators and designers are working hard in the cultural and creative industries, and quite a few have achieved striking results internationally. He hopes graduates trained in art, theater and music will apply art and aesthetics — even founding ventures in culture and entertainment — to further strengthen Taiwan's soft power.

He pointed out that the opportunities at the top of the pyramid are not available to everyone in society. What happens to the people displaced, especially as AI takes over repetitive work on a large scale? At the base of the job pyramid, he believes, 80 to 90 percent of employment will be person-to-person services — service, participation, connection, emotion — none of which AI can provide. As people gain more free time, they will want services that are more attentive and more human, and products and services made with genuine care.

"Is the service sector worth money? Of course it is!" He said he is especially bullish on Taiwan's service industry. While Taiwanese society worries about Taiwan's competitiveness, "I think you need only walk the streets and alleys, feel the human warmth around you and experience world-class service to see that this is Taiwan's core competitiveness. The three most successful Taiwanese founders Sinovation Ventures has invested in are not technology companies but service companies — in 'blue-collar services,' 'quick haircuts' and 'bread.' We helped all three enter the mainland market, and all are growing rapidly on the trend of 'consumption upgrading.' I believe that within three years all three have a chance to become billion-dollar unicorns."

Lee said he senses in Taiwanese society an atmosphere of passivity, buck-passing and resignation: the economy isn't improving because of policy, wages are low because of employers, housing prices are high because of developers, and young people see no future because of the broader environment. He urged the students that with the magic wand of AI in hand, they bear all the more responsibility to solve hard problems: don't waste time on things machines will soon do better than humans; don't accept work without challenge; set ambitious, demanding learning goals; pick a concrete field and work at it doggedly; and become the kind of talent AI cannot replace.

Lee said that four years ago he was diagnosed with stage IV lymphoma. "I faced a cold fact: I thought at the time that my life might be measured in months. During the uncertain days of treatment I reflected deeply on life, and it gave me an entirely new perspective on what AI means for human existence. It is true that AI has already clearly beaten us at many analytical tasks, and the domains where AI surpasses humans will only keep multiplying. But the ability to work is not what makes us human. What makes human beings unique is that we have the capacity to love."

He stressed that the greatest difference between people and AI is that we have love, and he urged every student, at this turning point into the next chapter of life, to face life with their exceptional minds — but, more important, to use their "hearts" to create meaningful new lives.


(China Times)