Author: Will Knight
Compiled from: 5 Big Predictions for Artificial Intelligence in 2017
https://www.technologyreview.com/s/603216/5-big-predictions-for-artificial-intelligence-in-2017/
2016 brought major advances in artificial intelligence (AI) and machine learning, but 2017 promises even more. Here are five areas to watch:
Deep reinforcement learning shows promise
AlphaGo's victory over the Korean master Lee Sedol was a landmark for AI, and especially for the technique known as deep reinforcement learning. Reinforcement learning takes its inspiration from the way animals learn, discovering without instruction which behaviors lead to good or bad outcomes. In the same way, a computer can learn to find its way out of a maze by trial and error, without human guidance or explicit examples. The idea has been around for decades, but only when combined with large (or deep) neural networks does it have the power to tackle really complex problems such as Go. Through relentless experimentation and analysis of previous games, AlphaGo taught itself to play at a master level.
The hope is that reinforcement learning will now prove useful for real-world problems; in 2017 the technique may be applied to areas such as automated driving and industrial robotics.
Generative adversarial networks
At the 30th Conference on Neural Information Processing Systems (NIPS 2016), held in Barcelona, Spain, a machine-learning technique known as generative adversarial networks was a focal point of discussion.
Invented by Ian Goodfellow, now at the nonprofit AI organization OpenAI, a generative adversarial network consists of one network that learns from a training set and generates new data, and a second network that tries to tell real data from fake. Working against each other, the two networks can produce very realistic synthetic data. The approach could be used to generate video-game scenery, sharpen blurry video footage, or refine computer-generated designs.
Yoshua Bengio, one of the world's leading machine-learning experts, said at NIPS that the approach is especially exciting because it gives computers a powerful way to learn from unlabeled data, which may hold the key to making computers far more intelligent in the years to come.
China's AI boom
China's tech industry is moving away from copying Western companies and has identified AI and machine learning as its next major areas of innovation.
Baidu's AI lab has been running for some time and is paying off with gains in voice recognition, natural language processing, and a better-optimized advertising business. Other companies are scrambling to catch up. Tencent opened an AI lab last year, and its representatives were busy recruiting talent at NIPS 2016. Didi is also building an AI lab and is reportedly developing its own driverless cars.
Chinese investors are pouring money into AI companies, and the Chinese government has pledged to invest about $15 billion by 2018 to help the country's AI industry flourish.
AI's next target: language learning
What is AI's next big target? Researchers are likely to answer: language. The hope is that the techniques behind major advances in voice and image recognition can also help computers parse and generate language more effectively.
This is a long-standing goal in AI, and the prospect of computers communicating with us in natural language has always been a fascinating one. Better language understanding would make machines far more useful. But the challenge is formidable, given the complexity, subtlety, and power of language. Don't expect deep and meaningful conversations with your smartphone anytime soon, but important inroads have already been made, and more can be expected in 2017.
AI hype may face a backlash
Alongside genuine and exciting advances, 2016 saw the hype around AI reach new heights. One problem with hype is that it inevitably leads to disappointment when big breakthroughs fail to materialize, causing overvalued startups to falter and investment to dry up. In 2017 that hype machine may face a backlash, and that might not be a bad thing.
5 Big Predictions for Artificial Intelligence in 2017
Expect to see better language understanding and an AI boom in China, among other things.
Last year was huge for advancements in artificial intelligence and machine learning. But 2017 may well deliver even more. Here are five key things to look forward to.
Positive reinforcement
AlphaGo’s historic victory against one of the best Go players of all time, Lee Sedol, was a landmark for the field of AI, and especially for the technique known as deep reinforcement learning.
Reinforcement learning takes inspiration from the ways that animals learn how certain behaviors tend to result in a positive or negative outcome. Using this approach, a computer can, say, figure out how to navigate a maze by trial and error and then associate the positive outcome—exiting the maze—with the actions that led up to it. This lets a machine learn without instruction or even explicit examples. The idea has been around for decades, but combining it with large (or deep) neural networks provides the power needed to make it work on really complex problems (like the game of Go). Through relentless experimentation, as well as analysis of previous games, AlphaGo figured out for itself how to play the game at an expert level.
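To make the maze example concrete, here is a minimal sketch of tabular Q-learning, a classic reinforcement-learning algorithm. The grid size, reward values, and hyperparameters below are illustrative assumptions, and this is of course not AlphaGo's actual system, which pairs deep neural networks with tree search; the sketch only shows how trial and error plus a propagated reward signal lets an agent learn which actions lead to the exit.

    # Minimal tabular Q-learning sketch (illustrative only, not AlphaGo's system).
    # A 4x4 grid maze: the agent starts at (0, 0) and is rewarded only at (3, 3).
    import random

    SIZE = 4
    ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right
    GOAL = (SIZE - 1, SIZE - 1)

    # Q-table: expected future reward of each (state, action), learned by trial and error.
    Q = {((r, c), a): 0.0 for r in range(SIZE) for c in range(SIZE) for a in range(len(ACTIONS))}

    alpha, gamma, epsilon = 0.5, 0.9, 0.2  # learning rate, discount, exploration rate

    def step(state, action):
        # Apply an action; bumping into a wall leaves the agent in place.
        r, c = state
        dr, dc = ACTIONS[action]
        next_state = (max(0, min(SIZE - 1, r + dr)), max(0, min(SIZE - 1, c + dc)))
        reward = 1.0 if next_state == GOAL else 0.0  # positive outcome only at the exit
        return next_state, reward

    for episode in range(500):
        state = (0, 0)
        while state != GOAL:
            # Explore occasionally; otherwise exploit the best known action (ties broken at random).
            if random.random() < epsilon:
                action = random.randrange(len(ACTIONS))
            else:
                qs = [Q[(state, a)] for a in range(len(ACTIONS))]
                action = random.choice([a for a, q in enumerate(qs) if q == max(qs)])
            next_state, reward = step(state, action)
            # Propagate the eventual reward back to the actions that led up to it.
            best_next = max(Q[(next_state, a)] for a in range(len(ACTIONS)))
            Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
            state = next_state

After enough episodes, greedily following the highest-valued action from each cell traces a path to the exit, even though the agent was never shown the route.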
The hope is that reinforcement learning will now prove useful in many real-world situations. And the recent release of several simulated environments should spur progress on the necessary algorithms by increasing the range of skills computers can acquire this way.
In 2017, we are likely to see attempts to apply reinforcement learning to problems such as automated driving and industrial robotics. Google has already boasted of using deep reinforcement learning to make its data centers more efficient. But the approach remains experimental, and it still requires time-consuming simulation, so it’ll be interesting to see how effectively it can be deployed.
Dueling neural networks
At the banner AI academic gathering held recently in Barcelona, the Neural Information Processing Systems conference, much of the buzz was about a new machine-learning technique known as generative adversarial networks.
Invented by Ian Goodfellow, now a research scientist at OpenAI, generative adversarial networks, or GANs, are systems consisting of one network that generates new data after learning from a training set, and another that tries to discriminate between real and fake data. By working together, these networks can produce very realistic synthetic data. The approach could be used to generate video-game scenery, de-blur pixelated video footage, or apply stylistic changes to computer-generated designs.
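As a rough sketch of that adversarial setup (not of Goodfellow's original experiments), the toy example below, written with PyTorch, pits a tiny generator against a tiny discriminator. The "training set" here is just an assumed one-dimensional Gaussian, but the loop is the essential GAN recipe: the discriminator learns to separate real samples from generated ones, while the generator learns to fool it.

    # Minimal GAN sketch in PyTorch (illustrative only): the generator learns to mimic
    # a simple 1-D Gaussian "training set"; the discriminator learns to tell real from fake.
    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    # Generator: turns random noise into synthetic samples.
    G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
    # Discriminator: outputs the probability that a sample is real.
    D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

    opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
    opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
    loss_fn = nn.BCELoss()

    def real_batch(n=64):
        # Stand-in "training set": samples from a Gaussian centered at 3.
        return torch.randn(n, 1) + 3.0

    for step in range(2000):
        # Train the discriminator: real samples labeled 1, generated samples labeled 0.
        real = real_batch()
        fake = G(torch.randn(64, 8)).detach()
        d_loss = loss_fn(D(real), torch.ones(64, 1)) + loss_fn(D(fake), torch.zeros(64, 1))
        opt_d.zero_grad(); d_loss.backward(); opt_d.step()

        # Train the generator: try to make the discriminator label fakes as real.
        fake = G(torch.randn(64, 8))
        g_loss = loss_fn(D(fake), torch.ones(64, 1))
        opt_g.zero_grad(); g_loss.backward(); opt_g.step()

    print(G(torch.randn(1000, 8)).mean().item())  # drifts toward 3.0 as training proceeds

Scaled up to convolutional networks and image data, the same two-player training loop is what produces the realistic synthetic scenery and de-blurred footage described above.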
Yoshua Bengio, one of the world’s leading experts on machine learning (and Goodfellow’s PhD advisor at the University of Montreal), said at NIPS that the approach is especially exciting because it offers a powerful way for computers to learn from unlabeled data—something many believe may hold the key to making computers a lot more intelligent in years to come.
China’s AI boom
This may also be the year in which China starts looking like a major player in the field of AI. The country’s tech industry is shifting away from copying Western companies, and it has identified AI and machine learning as the next big areas of innovation.
China’s leading search company, Baidu, has had an AI-focused lab for some time, and it is reaping the rewards in terms of improvements in technologies such as voice recognition and natural language processing, as well as a better-optimized advertising business. Other players are now scrambling to catch up. Tencent, which offers the hugely successful mobile-first messaging and networking app WeChat, opened an AI lab last year, and the company was busy recruiting talent at NIPS. Didi, the ride-sharing giant that bought Uber’s Chinese operations earlier this year, is also building out a lab and reportedly working on its own driverless cars.
Chinese investors are now pouring money into AI-focused startups, and the Chinese government has signaled a desire to see the country’s AI industry blossom, pledging to invest about $15 billion by 2018.
Language learning
Ask AI researchers what their next big target is, and they are likely to mention language. The hope is that techniques that have produced spectacular progress in voice and image recognition, among other areas, may also help computers parse and generate language more effectively.
This is a long-standing goal in artificial intelligence, and the prospect of computers communicating and interacting with us using language is a fascinating one. Better language understanding would make machines a whole lot more useful. But the challenge is a formidable one, given the complexity, subtlety, and power of language.
Don’t expect to get into deep and meaningful conversation with your smartphone for a while. But some impressive inroads are being made, and you can expect further advances in this area in 2017.
Backlash to the hype
As well as genuine advances and exciting new applications, 2016 saw the hype surrounding artificial intelligence reach heady new heights. While many have faith in the underlying value of technologies being developed today, it’s hard to escape the feeling that the publicity surrounding AI is getting a little out of hand.
Some AI researchers are evidently irritated. A launch party was organized during NIPS for a fake AI startup called Rocket AI, to highlight the growing mania and nonsense around real AI research. The deception wasn’t very convincing, but it was a fun way to draw attention to a genuine problem.
One real problem is that hype inevitably leads to a sense of disappointment when big breakthroughs don’t happen, causing overvalued startups to fail and investment to dry up. Perhaps 2017 will feature some sort of backlash against the AI hype machine—and maybe that wouldn’t be such a bad thing.