Google's AI chief on machine learning trends for 2020: big breakthroughs coming in multitask and multimodal learning

Machine learning took center stage at the NeurIPS conference held last week in Vancouver, Canada.

Roughly 13,000 researchers from around the world gathered to discuss topics such as neuroscience, how to interpret the outputs of neural networks, and how AI can help solve major real-world problems.

During the conference, Google AI chief Jeff Dean sat down for an interview with VentureBeat and shared his views on machine learning trends for 2020. In his view:

In 2020, machine learning will make major strides in multitask learning and multimodal learning, and new devices will allow machine learning models to be put to better use.

Excerpts from the original English interview follow, each with a brief summary:

1. On AI chips

VentureBeat: What do you think are some of the things that in a post-Moore’s Law world people are going to have to keep in mind?

Jeff Dean: Well I think one thing that’s been shown to be pretty effective is specialization of chips to do certain kinds of computation that you want to do that are not completely general purpose, like a general-purpose CPU. So we’ve seen a lot of benefit from more restricted computational models, like GPUs or even TPUs, which are more restricted but really designed around what ML computations need to do. And that actually gets you a fair amount of performance advantage, relative to general-purpose CPUs. And so you’re then not getting the great increases we used to get in sort of the general fabrication process improving your year-over-year substantially. But we are getting significant architectural advantages by specialization.

Using specialized chips, rather than general-purpose CPUs, for the non-general-purpose computation you actually want to do has proven very effective. GPUs and even TPUs are more restricted, but they are designed around what machine learning computations need, which gives them a substantial performance advantage over general-purpose CPUs.

So we will not see the kind of broad gains that general fabrication-process improvements used to deliver year over year, but we are gaining significant architectural advantages through specialization.
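To make the point concrete, the sketch below is a minimal example (assuming JAX is installed; the layer and shapes are invented for illustration) of the kind of restricted, dense computation that accelerators such as GPUs and TPUs are built around: one jitted function that XLA compiles for whichever backend, CPU, GPU, or TPU, happens to be available.

```python
# Minimal sketch: a dense layer is the kind of restricted computation that
# specialized accelerators target. XLA compiles the jitted function for the
# available backend (CPU, GPU, or TPU); the shapes here are arbitrary.
import jax
import jax.numpy as jnp

@jax.jit
def dense_layer(x, w, b):
    # Dense linear algebra plus a simple nonlinearity -- the core of most ML workloads.
    return jax.nn.relu(x @ w + b)

key = jax.random.PRNGKey(0)
x = jax.random.normal(key, (1024, 512))
w = jax.random.normal(key, (512, 256))
b = jnp.zeros((256,))

print("available devices:", jax.devices())  # e.g. [CpuDevice(id=0)] or TPU cores
print(dense_layer(x, w, b).shape)           # (1024, 256), computed on that backend
```

The same code runs unchanged on a general-purpose CPU; a specialized accelerator simply executes this restricted class of operations far faster.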

2. On machine learning for chip design

VentureBeat: You also got a little into the use of machine learning for the creation of machine learning hardware. Can you talk more about that?

Jeff Dean: Basically, right now in the design process you have design tools that can help do some layout, but you have human placement and routing experts work with those design tools to kind of iterate many, many times over. It’s a multi-week process to actually go from the design you want to actually having it physically laid out on a chip with the right constraints in area and power and wire length and meeting all the design rules of whatever fabrication process you’re doing.

So it turns out that we have early evidence in some of our work that we can use machine learning to do much more automated placement and routing. And we can essentially have a machine learning model that learns to play the game of ASIC placement for a particular chip.

Basically, in today's design process there are tools that can help with layout, but human placement and routing experts still have to work with those tools and iterate many, many times.

Going from the design you want to a physical layout on the chip, with the right constraints on area, power, and wire length, while meeting all the design rules of whatever fabrication process you are using, typically takes several weeks.

It turns out that in some of our work we have early evidence that machine learning can be used to do much more automated placement and routing.

Essentially, a machine learning model can learn to play the game of ASIC placement for a particular chip, and this has already produced decent results on some of the chips we have been experimenting with internally.
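To show what "playing the game of placement" can mean in miniature, here is a toy sketch, not Google's system: placement framed as a sequential decision problem on an invented 4x4 grid and netlist, scored by half-perimeter wirelength (HPWL), a standard proxy for routed wire length. A greedy rule stands in for the learned policy.

```python
# Toy placement sketch (illustrative only): place cells one at a time on a grid,
# scoring candidate sites by half-perimeter wirelength (HPWL) over the nets.
GRID = [(r, c) for r in range(4) for c in range(4)]            # 4x4 placement sites
NETS = {"n1": ["a", "b", "c"], "n2": ["b", "d"], "n3": ["a", "d", "e"]}
CELLS = ["a", "b", "c", "d", "e"]

def hpwl(placement):
    """Half-perimeter wirelength over nets whose cells are already placed."""
    total = 0
    for cells in NETS.values():
        pts = [placement[c] for c in cells if c in placement]
        if len(pts) > 1:
            rows, cols = zip(*pts)
            total += (max(rows) - min(rows)) + (max(cols) - min(cols))
    return total

placement = {}
for cell in CELLS:                                             # one placement decision per step
    free = [s for s in GRID if s not in placement.values()]
    # A learned policy would score candidate sites; here a greedy rule picks
    # the site that keeps the running wirelength smallest.
    placement[cell] = min(free, key=lambda s: hpwl({**placement, cell: s}))

print(placement, "total HPWL:", hpwl(placement))
```

A reinforcement learning approach would replace the greedy choice with a policy trained on the quality (wirelength, congestion, power) of completed placements, which is the sense in which a model "learns to play the game" for a particular chip.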

3. On challenges facing Google

VentureBeat: What do you feel are some of the technical or ethical challenges for Google in the year ahead?

Jeff Dean: In terms of AI or ML, we’ve done a pretty reasonable job of getting a process in place by which we look at how we’re using machine learning in different product applications and areas consistent with the AI principles. That process has gotten better-tuned and oiled with things like model cards and things like that. I’m really happy to see those kinds of things. So I think those are good and emblematic of what we should be doing as a community.

And then I think in the areas of many of the principles, there [are] real open research directions. Like, we have kind of the best known practices for helping with fairness and bias in machine learning models or safety or privacy. But those are by no means solved problems, so we need to continue to do longer-term research in these areas to progress the state of the art while we currently apply the best known state-of-the-art techniques to what we do in an applied setting.

Where AI and machine learning are concerned, we have done a reasonably good job of putting a process in place for reviewing how we use machine learning across different product applications and areas, consistent with the AI principles. That process has been tuned and refined over time with things like model cards.

And in many of the areas those principles cover, there are genuinely open research directions, such as fairness and bias in machine learning models, safety, and privacy. These are by no means solved problems, so we need to keep doing longer-term research there to advance the state of the art, while applying the best known state-of-the-art techniques to our applied work today.
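Model cards, mentioned above, are structured documentation published alongside a trained model. The sketch below is a hypothetical, minimal example in plain Python; the section names follow the commonly used model card template, while the model and figures are invented.

```python
# A hypothetical, minimal model card expressed as plain data. Section names
# follow the commonly used model card template; the model and numbers are invented.
model_card = {
    "model_details": {"name": "toy-sentiment-classifier", "version": "1.0",
                      "type": "text classification"},
    "intended_use": "Routing customer feedback by sentiment; not for decisions about individuals.",
    "training_data": "Public English-language product-review corpus.",
    "evaluation_data": "Held-out reviews, stratified by product category.",
    "metrics": {"accuracy": 0.91, "per_group_error_rates": "reported by language variant"},
    "ethical_considerations": "Not evaluated on non-English text; sentiment labels are subjective.",
    "caveats_and_recommendations": "Re-evaluate before applying to a new domain.",
}

for section, content in model_card.items():
    print(f"{section}: {content}")
```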

4. On AI trends

VentureBeat: What are some of the trends you expect to emerge, or milestones you think may be surpassed in 2020 in AI?

Jeff Dean: I think we’ll see much more multitask learning and multimodal learning, of sort of larger scales than has been previously tackled. I think that’ll be pretty interesting.

And I think there’s going to be a continued trend to getting more interesting on-device models — or sort of consumer devices, like phones or whatever — to work more effectively.

I think obviously AI-related principles-related work is going to be important. We’re a big enough research organization that we actually have lots of different thrusts we’re doing, so it’s hard to call out just one. But I think in general [we’ll be] progressing the state of the art, doing basic fundamental research to advance our capabilities in lots of important areas we’re looking at, like NLP or language models or vision or multimodal things. But also then collaborating with our colleagues and product teams to get some of the research that is ready for product application to allow them to build interesting features and products. And [we’ll be] doing kind of new things that Google doesn’t currently have products in but are sort of interesting applications of ML, like the chip design work we’ve been doing.

I think we will see breakthroughs in multitask learning and multimodal learning, at larger scales than have been tackled before. I think that will be pretty interesting.

I also think there will be a continued trend toward getting more capable on-device models, on consumer devices like phones, to work more effectively.

Work related to the AI principles will obviously be important. But Google is a large enough research organization that we have many different thrusts under way, so it is hard to single out just one.

In general, we will keep advancing the state of the art, doing fundamental research to improve our capabilities in many important areas we are looking at, such as NLP, language models, vision, and multimodal work.

At the same time, we will collaborate with our colleagues and product teams to take research that is ready for product application and help them build interesting features and products, and we will pursue new applications of ML where Google does not yet have products, like the chip design work.
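As an illustration of what a multitask, multimodal model looks like at the simplest level, here is a minimal NumPy sketch (entirely hypothetical shapes and random weights, not any Google architecture): two modalities are projected into one shared representation, and two task heads read from that shared space.

```python
# Minimal multitask + multimodal sketch: two modality encoders feed a shared
# representation, and two task heads read from it. Shapes and weights are
# invented for illustration; a real model would be trained end to end.
import numpy as np

rng = np.random.default_rng(0)

image_feats = rng.normal(size=(8, 64))   # 8 pooled image feature vectors
text_feats = rng.normal(size=(8, 32))    # 8 pooled text feature vectors

W_img = rng.normal(size=(64, 16))        # image encoder -> shared 16-d space
W_txt = rng.normal(size=(32, 16))        # text encoder  -> shared 16-d space
shared = np.tanh(image_feats @ W_img) + np.tanh(text_feats @ W_txt)

W_cls = rng.normal(size=(16, 5))         # task head 1: 5-way classification logits
W_reg = rng.normal(size=(16, 1))         # task head 2: scalar score

print((shared @ W_cls).shape, (shared @ W_reg).shape)  # (8, 5) (8, 1)
```

Training the encoders and heads jointly is what makes this multitask: improvements to the shared representation benefit every task, and scaling that idea up is the trend Dean points to.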

Link to the original English interview:

https://venturebeat.com/2019/12/13/google-ai-chief-jeff-dean-interview-machine-learning-trends-in-2020/
