AI Index 2019 assesses global AI research, investment, and impact

Leaders in the AI community came together today to release the 2019 AI Index report, an annual effort to examine the biggest trends shaping the AI industry, breakthrough research, and AI's impact on society.

It also examines trends like AI hiring practices, private investment, AI research contributions by nation, researchers leaving academia for industry, and the role AI plays in specific industries. The report also notes strides in reducing the time and computing cost required to train AI systems, two of the biggest hindrances to AI adoption.

"In a year and a half, the time required to train a large image classification system on cloud infrastructure has fallen from about three hours in October 2017 to about 88 seconds in July 2019," the report reads.


The report is compiled by the Stanford Human-Centered AI Institute in collaboration with people from OpenAI. It originated in 2016 as part of AI 100, a century-long Stanford study of AI's progress and impact.

"What we set out to do was to be religious about the quality and objectivity of the data," Stanford University professor emeritus and steering committee chair Yoav Shoham told VentureBeat in a phone interview.

Shoham has been on the AI Index steering committee since the beginning and chaired the group that put the report together. Other members include MIT economist Erik Brynjolfsson, Partnership on AI executive director Terah Lyons, and representatives from SRI International, Harvard University, OpenAI, and the McKinsey Global Institute.

The work is intended to help the general public understand progress in the field and to inform policymakers and business decision makers about how their country ranks compared to other nations.

Now in its third year, the report draws on three times more data sources than at its launch, the authors told VentureBeat, and for the first time comes with a Global AI Vibrancy tool, a way to compare countries across 34 axes.

Shoham called it premature to make national AI rankings, as some previous works have done.

"It's tempting to just do a ranking of countries, just measure some things, add a bunch of numbers, and say, you know, U.S. is number one and China is number two, and what have you," he said. "We didn't want to do that because when you do that, you distort things, and there's so many dimensions you could look at. And eventually, it's a good idea to have something like a ranking, but we think it's way premature to do it."

The Global AI Vibrancy tool lets users measure by overall numbers as well as per capita trends, to recognize hot spots in places such as Israel, which produces more deep learning research per capita than any other country, or advanced AI leaders like Finland and Singapore.

Earlier this year a consultancy firm working with the United Nations determined roughly 30 nations currently have national AI strategies.

For example, according to Elsevier's Scopus, which looks at publication rates for repositories like arXiv, Europe produces more AI research papers than any other part of the world, but Israel has the highest per capita deep learning research and the United States produces the most-cited AI research.

Corporate or industry affiliation with AI research is growing and is most likely to occur in the U.S., China, Japan, France, Germany, and the U.K.

"Ten years ago, 20 years ago, all innovation happened in academia, and then industry picked up bits and pieces of it, perfected it, and commercialized it. That's no longer true. The lines are blurred and people cross over," Shoham said. "I think the leading academic institutions are coming to terms that this is the new normal."

Though 60% of PhD candidates go to industry over academia today, compared to 20% in 2004, academic research still outpunches government and corporate papers: it makes up 92% of AI publications from China, 90% from Europe, and 85% from the U.S., according to the report.

The report also assesses progress on benchmarks used to track AI across disciplines like image classification, as well as methods to train AI systems for common use cases like translation or event recognition in videos with ActivityNet.

In some regards, Shoham says, results are mixed, as AI systems that achieve high scores on a benchmark may prove to be more brittle than those scores indicate.

Shoham looks to work in conversational AI, his field of research, for an example. Some systems may perform well on a benchmark like Stanford's SQuAD question answering test but appear to be overfit to narrow tasks.

"The thing is these are highly specialized tasks and domains, and as soon as you go out of domain, the performance drops dramatically, and the community knows it," Shoham said. "There's a lot to be excited about genuinely, including all these systems that I mentioned, but we're quite far away from human-level understanding of language right now. So we try to be nuanced about that in the report."

The report also cites instances of human-level performance by AI systems, such as DeepMind's AlphaStar beating a human in StarCraft II and detection of diabetic retinopathy in images of eyes using deep learning.
