2021 technology trend review, part two: AI, knowledge graphs, and the COVID-19 effect – ZDNet

Last year, we identified blockchain, cloud, open-source, artificial intelligence, and knowledge graphs as the five key technological drivers for the 2020s. Although we did not anticipate the kind of year that 2020 would turn out to be, it looks like our predictions may not have been entirely off track.

Also: 2021 technology trend review, part one: Blockchain, cloud, open source

Let's pick up from where we left off, retracing developments in key technologies for the 2020s: Artificial intelligence and knowledge graphs, plus an honorable mention to COVID-19-related technological developments.

In our opener for the 2020s, we laid the groundwork to evaluate the array of technologies that fall under the umbrella term "artificial intelligence." Here, we build on that groundwork to cover some key developments in this area, starting with hardware.

The key thing to keep in mind here is that the proliferation of machine learning workloads has boosted the use of GPUs, previously used mostly for gaming, while also giving birth to a whole new range of manufacturers. Nvidia, which has come to dominate the AI chip market, had a very productive year.

First, it unveiled its new Ampere architecture in May; Nvidia claims Ampere delivers an improvement of about 20 times over Volta, its previous architecture. Then, in September, Nvidia announced the acquisition of Arm, the chip designer. As we noted then, Nvidia's acquisition of Arm strengthens its ecosystem, brings economies of scale to the cloud, and expands its reach to the edge.

As others noted, however, the acquisition may face regulatory scrutiny. The AI chip area deserves more analysis, on which we'll embark soon. Some honorable mentions are due, however: to Graphcore, for having raised more capital and seen its chips deployed in the cloud and on-premises; to Cerebras, for having unveiled its second-generation wafer-scale AI chip; and to Blaize, for having released new hardware and software products.

The software side of things was equally eventful, if not more so. As noted in the State of AI report for 2020, MLOps was a major theme. MLOps, short for machine learning operations, is the equivalent of DevOps for machine learning models: taking them from development to production, and managing their lifecycle in terms of improvements, fixes, redeployments, and so on.
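To make the idea more concrete, here is a minimal sketch of the development-to-production handoff that MLOps is meant to automate, assuming scikit-learn and joblib; the file name and model choice are purely illustrative, and real MLOps stacks add experiment tracking, CI/CD, and monitoring on top of this.

```python
# Minimal sketch of the "development to production" handoff MLOps automates.
# Assumes scikit-learn and joblib; real stacks add tracking, CI/CD, monitoring.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
import joblib

# Development: train a candidate model
X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000).fit(X, y)

# Packaging: persist a versioned artifact for deployment
joblib.dump(model, "model-v1.joblib")

# Production: a serving process loads the artifact and answers requests
served_model = joblib.load("model-v1.joblib")
print(served_model.predict(X[:1]))
```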

AI made progress in 2020 by opening up to a more well-rounded approach. But it also saw setbacks.

Some of the most popular and fastest-growing GitHub projects in 2020 are related to MLOps. Streamlit, which helps deploy applications based on machine learning models, and Dask, which boosts Python's performance and is operationalized by Saturn Cloud, are just two of many examples. Explainable AI, the ability to shed light on decisions made by machine learning models, may not be equally operationalized, but it is also gaining traction.
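As a rough illustration of why tools like Streamlit have caught on, here is a hypothetical app.py that puts a simple front end over the model artifact from the sketch above; the file name, artifact, and input ranges are invented for the example.

```python
# app.py -- a hypothetical Streamlit front end for a trained model,
# illustrating the kind of deployment Streamlit simplifies.
# Run with: streamlit run app.py
import joblib
import streamlit as st

st.title("Iris classifier demo")

# Assumes the versioned artifact from the earlier sketch exists on disk
model = joblib.load("model-v1.joblib")

sepal_length = st.slider("Sepal length (cm)", 4.0, 8.0, 5.8)
sepal_width = st.slider("Sepal width (cm)", 2.0, 4.5, 3.0)
petal_length = st.slider("Petal length (cm)", 1.0, 7.0, 4.3)
petal_width = st.slider("Petal width (cm)", 0.1, 2.5, 1.3)

prediction = model.predict([[sepal_length, sepal_width, petal_length, petal_width]])
st.write("Predicted class:", int(prediction[0]))
```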

Another key theme was the use of machine learning in biology and healthcare. A prime example is AlphaFold, DeepMind's system for predicting how protein molecules fold, which cracked one of the most difficult computing challenges in the world. More examples of AI having an impact in biology and healthcare are either already here or on the way.

But what we think should top the list is not a technical achievement. It is what has come to be known as AI ethics: grappling with the side-effects of using AI. In a highly debated development, Google recently "resignated" Timnit Gebru, a widely respected leader in AI ethics research and former co-lead of Google's ethical AI team.

Gebru was essentially "resignated" for uncovering uncomfortable truths. In addition to bias and discrimination, which Gebru posits are not just a side-effect of datasets mirroring bias in the real world, there is another aspect of her work that deserves highlighting: the dire environmental consequences of the focus on ever bigger and more resource-hungry AI models. DeepMind's dismissal of the issue in favor of AGI speaks volumes about the industry's priorities.

We did say "bigger and more resource-hungry AI models," and that description fits perfectly another of 2020's defining moments for AI: language models. Besides costing millions to train, these models have another issue: they don't know what they are talking about, which becomes clear under scrutiny. But if this is the state of the art in AI, is there a way to improve upon it? Opinions vary.

Yoshua Bengio, Yann LeCun, and Geoffrey Hinton are considered the forefathers of deep learning. Some people subscribe to Hinton's view that eventually all issues will be solved and deep learning will be able to do everything. Others, like Gary Marcus, believe that AI, in the way it is currently conflated with deep learning, will never amount to much more than sophisticated pattern recognition.

Marcus, who has been consistent in his critique of deep learning and the language models based on it, is perhaps the most prominent among the ranks of scientists and practitioners who challenge today's conventional wisdom on AI. In a high-profile encounter in December 2019, Marcus and Bengio debated the merits and shortcomings of deep learning and symbolic AI.

This may well have been a watershed moment, since a number of developments that have occurred since then seem to point to cross-pollination between the data-driven world of deep learning and the knowledge-driven world of symbolic AI. Marcus published a roadmap toward a merger of the two worlds, what he calls robust AI, in early 2020.

With 2020 having been what it was, this work may not have gotten the acclaim it would normally have, but it was not a shot in the dark either. Marcus elaborated on this work, as well as its background and implications, in an in-depth conversation we hosted here on ZDNet. Marcus is not alone in this line of thought, either -- similar ideas also go by the name of Neurosymbolic AI.

A hybrid model for AI, combining machine learning and a knowledge-based approach, is gaining traction. Image: A. Ananthaswamy / Knowable Magazine

Bengio, for his part, published work in 2020 on topics such as exploiting syntactic structure for better language modeling, factorizing declarative and procedural knowledge in dynamical systems, and learning logic rules for reasoning on knowledge graphs. This seems like a tangible recognition of a shift toward embedding knowledge and reasoning in deep learning.

Marcus himself identified the important role knowledge graphs can play in bridging the two worlds. Knowledge graphs are arguably the best widely available and well-understood technology we have today for knowledge representation and reasoning, short of natural language itself. Besides reaching peak hype in Gartner's 2020 hype cycle for AI, knowledge graphs are increasingly being adopted in real-world applications, from industry leaders to mid-market companies.
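For readers who have not worked with one, here is a toy example of what a knowledge graph looks like in code, using the rdflib Python library; the entities and predicates are made up for illustration, but the triple-based representation and the SPARQL query over it show the general pattern.

```python
# A toy knowledge graph with rdflib, showing the triple-based
# representation and a simple SPARQL query over it.
from rdflib import Graph, Namespace
from rdflib.namespace import RDF

EX = Namespace("http://example.org/")  # illustrative namespace
g = Graph()

# Facts stored as subject-predicate-object triples
g.add((EX.AlphaFold, RDF.type, EX.MLSystem))
g.add((EX.AlphaFold, EX.developedBy, EX.DeepMind))
g.add((EX.AlphaFold, EX.appliedTo, EX.ProteinFolding))

# A SPARQL query: which systems did DeepMind develop?
query = """
SELECT ?system WHERE { ?system ex:developedBy ex:DeepMind . }
"""
for row in g.query(query, initNs={"ex": EX}):
    print(row.system)
```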

But there is another use of graphs that blossomed in 2020: graph machine learning. Graph neural networks operate on graph structures, as opposed to other types of neural networks that operate on vectors. What this means in practice is that they can leverage the additional information encoded in the connections between data points.

Graph machine learning also goes by the name of geometric machine learning, because of its ability to learn from complex data such as graphs and multi-dimensional points. Its applications in 2020 were especially pertinent in biochemistry, drug design, and structural biology. Knowledge graphs and graph machine learning can work in tandem, too.
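To show how graph structure enters the computation, here is a rough, framework-free sketch of a single graph neural network layer in the spirit of a graph convolution: each node averages the features of its neighbors and itself, then applies a learned transform. The toy graph and random weights are purely illustrative.

```python
# A minimal sketch of one graph neural network layer: nodes update their
# features by averaging over their neighbours (and themselves) and applying
# a learned linear transform. The graph structure, via the adjacency matrix,
# is what distinguishes this from a plain vector-based layer.
import numpy as np

# Toy graph: 4 nodes, undirected edges as an adjacency matrix
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = np.random.randn(4, 3)   # node features: 4 nodes, 3 features each
W = np.random.randn(3, 2)   # "learned" weights (random here)

A_hat = A + np.eye(4)                        # add self-loops
D_inv = np.diag(1.0 / A_hat.sum(axis=1))     # normalize by node degree
H = np.maximum(D_inv @ A_hat @ X @ W, 0)     # mean-aggregate, transform, ReLU
print(H.shape)                               # (4, 2): new embedding per node
```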

Last year was undoubtedly characterized by the advent of COVID-19. While COVID-19 may have catalyzed digital transformation, remote work, and applications in biology, healthcare, artificial intelligence, and research, not all of its side-effects were positive.

COVID-19 has also catalyzed technological applications such as thermal scanners, face recognition, immunity passports, and contact tracing, the use of which often comes with strings attached. While this is not unlike other technologies, what makes COVID-19-related technological applications stand out is their pervasiveness and the speed at which they have been deployed.

Like all other technological drivers, COVID-19 has been a mixed bag for technological progress and adoption. The speed of adoption of related technologies, however, means that society at large is lagging in terms of an informed debate and full comprehension of the implications. Let's hope that 2021 can bring more inclusion and transparency to the table.
