Explainable AI Is the Future of AI: Here Is Why


Artificial intelligence is going mainstream. If you're using Google Docs, Ink for All or any number of digital tools, AI is being baked in. AI is already making decisions in the workplace, around hiring, customer service and more. However, a recurring issue with AI is that it can be a bit of a "black box": a mystery as to how it arrives at its decisions. Enter explainable AI.

Explainable Artificial Intelligence, or XAI, is similar to a conventional AI application except that the processes and results of an XAI algorithm can be explained in terms humans understand. The complex nature of artificial intelligence means that AI makes decisions in real time based on the insights it has discovered in the data it has been fed. When we do not fully understand how AI makes these decisions, we cannot fully optimize the application to be all that it is capable of. XAI enables people to understand how AI and machine learning (ML) are being used to arrive at decisions, predictions and insights. Explainable AI allows brands to be transparent in their use of AI applications, which increases user trust and the overall acceptance of AI.

There is a valid need for XAI if AI is going to be used across industries. According to a report by FICO, 65% of surveyed employees could not explain how AI model decisions or predictions are determined. The benefits of XAI are beginning to be well recognized, and not just by scientists and data engineers. The European Union's draft AI regulations specify XAI as a prerequisite for the eventual normalization of machine learning in society. Standardization organizations including the European Telecommunications Standards Institute (ETSI) and the Institute of Electrical and Electronics Engineers Standards Association (IEEE SA) also recognize the importance of XAI for the acceptance of, and trust in, AI in the future.

Philip Pilgerstorfer, data scientist and XAI specialist at QuantumBlack, has pointed to several benefits of XAI.

Much of the need for explainability comes from the fact that the majority of AI built on ML operates in what is referred to as a black box: a system that offers no discernible insight into how it arrives at its decisions. Many AI/ML applications are relatively benign decision engines, such as the recommender systems used in online retail, where transparency and explainability are not strictly necessary. For other, riskier decision processes, such as medical diagnoses in healthcare, investment decisions in the financial industry and safety-critical systems in autonomous vehicles, the stakes are much higher. The AI used in those systems should be explainable, transparent and understandable if it is to be trusted, reliable and consistent.

When brands understand the potential weaknesses and failure modes of an application, they are better prepared to maximize its performance and improve the AI app. Explainable AI makes it easier to detect flaws in the data model, as well as biases in the data itself. It can also be used to improve data models, verify predictions and gain additional insight into what is working and what is not.
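One simple way to see "what is working and what is not" is to compare a model's error rate across slices of the data. The sketch below is a minimal, illustrative example; the column names ("region", "label", "prediction") and figures are hypothetical, not drawn from the article.

```python
# A minimal sketch of slice-based error analysis: comparing a model's error
# rate across data segments to surface where it underperforms.
import pandas as pd

# Assume df holds one row per prediction the model has already made.
df = pd.DataFrame({
    "region":     ["north", "north", "south", "south", "south", "west"],
    "label":      [1, 0, 1, 1, 0, 1],
    "prediction": [1, 0, 0, 0, 0, 1],
})

df["error"] = (df["label"] != df["prediction"]).astype(int)

# Error rate per segment: a segment with a much higher rate than the overall
# average points to a weakness in the model or a gap in the training data.
per_slice = df.groupby("region")["error"].mean().sort_values(ascending=False)
print(per_slice)
print("overall error rate:", df["error"].mean())
```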

"Explainable AI has the benefits of allowing us to understand what has gone wrong and where it has gone wrong in an AI pipeline when the whole AI system makes an erroneous classification or prediction," said Marios Savvides, Bossa Nova Robotics Professor of Artificial Intelligence, Electrical and Computer Engineering and Director of the CyLab Biometrics Center at Carnegie Mellon University. "These are the benefits of an XAI pipeline. In contrast, a conventional AI system involving a complete end-to-end black-box deep learning solution is more complex to analyze and more difficult to pinpoint exactly where and why an error has occurred."

Many businesses today use AI/ML applications to automate decision-making as well as to gain analytical insights. Data models can be trained to predict sales based on variable data, while an explainable AI model would let a brand increase revenue by determining the true drivers of those sales.
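As a rough illustration of how "true drivers" can be surfaced, the sketch below uses scikit-learn's permutation importance to rank which inputs a sales-prediction model actually relies on. The feature names and synthetic data are invented for the example.

```python
# A minimal sketch: rank the drivers of a predicted outcome (here, "sales")
# by measuring how much the model's score drops when each feature is shuffled.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
n = 500
ad_spend = rng.uniform(0, 100, n)
discount = rng.uniform(0, 30, n)
weekday  = rng.integers(0, 7, n)
# Synthetic "sales" driven mostly by ad spend, weakly by discount.
sales = 3.0 * ad_spend + 0.5 * discount + rng.normal(0, 10, n)

X = np.column_stack([ad_spend, discount, weekday])
features = ["ad_spend", "discount", "weekday"]

model = RandomForestRegressor(random_state=0).fit(X, sales)

# Shuffle one feature at a time; large score drops mark the real drivers.
result = permutation_importance(model, X, sales, n_repeats=10, random_state=0)
for name, score in sorted(zip(features, result.importances_mean),
                          key=lambda t: -t[1]):
    print(f"{name}: {score:.3f}")
```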

Kevin Hall, CTO and co-founder of Ripcord, an organization that provides robotics, AI and machine learning solutions, explained that although AI-enabled technologies have proliferated throughout enterprise businesses, complexities remain that prevent widespread adoption, largely because AI is still mysterious and complicated for most people. "In the case of intelligent document processing (IDP), machine learning (ML) is an incredibly powerful technology that enables higher accuracy and increased automation for document-based business processes around the world," said Hall. "Yet the performance and continuous improvement of these models is often limited by a complexity barrier between technology platforms and critical knowledge workers or end-users. By making the results of ML models more easily understood, Explainable AI will allow for the right stakeholders to more directly interact with and improve the performance of business processes."

Related Article: What Is Explainable AI (XAI)?

It's a fact that unconscious or algorithmic biases are built into AI applications. That's because no matter how advanced or smart an AI app is, or whether it uses ML or deep learning, it was developed by human beings, each of whom has their own unconscious biases, and it was trained on a data set that carries biases of its own. "Explainable AI systems can be architected in a way to minimize bias dependencies on different types of data, which is one of the leading issues when complete black box solutions introduce biases and make errors," explained Professor Savvides.

A recent CMSWire article on unconscious biases reflected on Amazon's failed use of AI for vetting job applications. Although the shopping giant did not use prejudiced algorithms on purpose, its data set reflected hiring trends over the previous decade and suggested hiring similar applicants for positions with the company. Unfortunately, the data revealed that the majority of those hired were white males, a fact that itself reveals the biases within the IT industry. Eventually, Amazon gave up on using AI for hiring and went back to its previous practice of relying on human decision-making. Many other biases can sneak into AI applications, including racial bias, name bias, beauty bias, age bias and affinity bias.

Fortunately, XAI can be used to surface and reduce unconscious biases within AI data sets. Several AI organizations, including OpenAI and the Future of Life Institute, are working with other businesses to ensure that AI applications are ethical and equitable for all of humanity.
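One routine check of this kind, sketched below under entirely hypothetical column names and figures, compares selection rates across a sensitive attribute in the training data. It is not the article's method, just a common first-pass bias audit.

```python
# A minimal sketch of a data-set bias check: compare positive-outcome rates
# across groups and compute a disparate impact ratio.
import pandas as pd

applicants = pd.DataFrame({
    "group": ["a", "a", "a", "b", "b", "b", "b", "b"],
    "hired": [1, 1, 0, 1, 0, 0, 0, 0],
})

rates = applicants.groupby("group")["hired"].mean()
print(rates)

# "Four-fifths"-style ratio: lowest selection rate divided by the highest.
# A value well below 1.0 flags the data set for closer human review.
print("disparate impact ratio:", rates.min() / rates.max())
```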

Being able to explain why a person was not selected for a loan or a job will go a long way toward improving public trust in AI algorithms and machine learning processes. "Whether these models are clearly detailing the reason why a loan was rejected or why an invoice was flagged for fraud review, the ability to explain the model results will greatly improve the quality and efficiency of many document processes, which will lead to cost savings and greater customer satisfaction," said Hall.
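A per-decision explanation can be as simple as an interpretable model whose feature contributions show what pushed one application toward rejection. The sketch below uses a logistic regression and hypothetical feature names and data; it illustrates the idea rather than any vendor's implementation.

```python
# A minimal sketch of a per-decision explanation with an interpretable model:
# per-feature contributions to the log-odds for a single loan application.
import numpy as np
from sklearn.linear_model import LogisticRegression

features = ["income", "debt_ratio", "late_payments"]
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
# Synthetic approvals: higher income helps, debt and late payments hurt.
y = (X[:, 0] - X[:, 1] - 1.5 * X[:, 2] + rng.normal(0, 0.5, 200)) > 0

model = LogisticRegression().fit(X, y)

applicant = np.array([[-0.2, 1.1, 2.0]])   # one rejected applicant
baseline = X.mean(axis=0)                  # "average applicant" reference point

# Contribution of each feature relative to the baseline: the largest negative
# values are the concrete reasons this application scored poorly.
contributions = model.coef_[0] * (applicant[0] - baseline)
for name, c in sorted(zip(features, contributions), key=lambda t: t[1]):
    print(f"{name}: {c:+.2f}")
print("approval probability:", model.predict_proba(applicant)[0, 1].round(3))
```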

Related Article: Ethics and Transparency: How We Can Reach Trusted AI

Along with the unconscious biases discussed above, XAI has other challenges to conquer, including computational cost and the potential for additional errors.

Professor Savvides said that XAI systems need to be architected into different sub-task modules whose performance can be analyzed individually. The challenge is that these different AI/ML components each need compute resources and require a data pipeline, so in general they can be more costly, from a computational perspective, than an end-to-end system.

There is also the issue of additional errors in an XAI algorithm, though there is a tradeoff because those errors are easier to track down. "Additionally, there may be cases where a black-box approach may give fewer performance errors than an XAI system," he said. "However, there is no insight into the failure of the traditional AI approach other than trying to collect these cases and re-train, whereas the XAI system may be able to pinpoint the root cause of the error."

As AI applications become smarter and are used in more industries to solve bigger and bigger problems, the need for a human element in AI becomes more vital. XAI can help provide that element.

The next frontier of AI is the growth and improvement that will happen in explainable AI technologies, which will become more agile, flexible and intelligent as they are deployed across a variety of new industries. "XAI is becoming more human-centric in its coding and design," reflected AJ Abdallat, CEO of Beyond Limits, an enterprise AI software solutions provider. "We've moved beyond deep learning techniques to embed human knowledge and experiences into the AI algorithms, allowing for more complex decision-making to solve never-seen-before problems, meaning problems without historical data or references. Machine learning techniques equipped with encoded human knowledge allow for AI that lets users edit their knowledge base even after it's been deployed. As it learns by interacting with more problems, data, and domain experts, the systems will become significantly more flexible and intelligent. With XAI, the possibilities are truly endless."

Related Article: Make Responsible AI Part of Your Company's DNA

Artificial intelligence is being used across many industries to provide everything from personalization, automation and recommendations to financial decisioning and healthcare insights. For AI to be trusted and accepted, people must be able to understand how it works and why it makes the decisions it makes. XAI represents the evolution of AI, and it offers industries the opportunity to create AI applications that are trusted, transparent, unbiased and justified.


