Researchers From the University of Toronto and LG AI Research Develop Explainable Artificial Intelligence (AI) Algorithm

A team of researchers from the University of Toronto and LG AI Research has developed an explainable artificial intelligence (XAI) algorithm. The algorithm can help identify and eliminate defects in display screens.

The algorithm outperformed comparable approaches on industry benchmarks and was developed through an ongoing AI research collaboration between LG and the University of Toronto.

According to the researchers, the XAI algorithm could be applied in other fields, primarily those that require insight into how a machine-learning model reaches its decisions, such as the interpretation of data from medical scans.

Kostas Plataniotis, a professor in the Edward S. Rogers Sr. Department of Electrical and Computer Engineering, believes that explainability and interpretability are about meeting both the quality standards engineers set for themselves and those demanded by the end user.

The research team also included recent University of Toronto Engineering graduate Mahesh Sudhakar, master's candidate Sam Sattarzadeh, and researchers led by Jongseong Jang at LG AI Research Canada, the company's global research-and-development arm.

XAI is an emerging field that addresses the "black box" problem of machine-learning systems. In a black-box model, a computer may be given a set of training data in the form of millions of labeled images. By analyzing the data, the algorithm learns to associate certain features of the input with specific outputs, and it can then correctly attach labels to images it has never seen before.
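To make that setup concrete, here is a minimal sketch of a black-box classifier; the dataset and model are illustrative choices, not the setup used in the paper. The classifier is fit to labeled images and then labels images it has never seen, while offering no account of which pixels drove any single prediction.

    # Minimal black-box sketch: fit a classifier to labeled images, then
    # label unseen ones. Dataset and model are illustrative assumptions.
    from sklearn.datasets import load_digits
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    X, y = load_digits(return_X_y=True)        # labeled 8x8 digit images
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
    print(model.score(X_test, y_test))         # labels images it has never seen
    # The accuracy is measurable, but which pixels drove any individual
    # prediction stays opaque -- the black-box problem XAI addresses.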

The machine decides for itself which aspects of the image to pay attention to and which to ignore, so its designers never know exactly how it arrives at a result. A black-box model therefore presents challenges when it's applied to areas such as health care.

XAI is thus designed as a "glass box" approach that makes the decision-making process transparent. Traditional algorithms and XAI algorithms are run side by side, making it possible to examine the validity and level of their learning performance. The approach also provides opportunities to carry out debugging and to find training efficiencies.

There are two methods to develop an XAI algorithm. The first, called back-propagation, relies on the underlying AI architecture to quickly calculate how the network's prediction corresponds to its input. The second, known as perturbation, sacrifices some speed for accuracy. It involves changing data inputs and tracking the corresponding outputs to determine the necessary compensation.
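To illustrate the contrast, the sketch below computes a gradient-based (back-propagation-style) saliency map and a simple occlusion-based (perturbation-style) heat map for the same input. The model, dummy input, and patch size are illustrative assumptions; this is not the SISE method itself.

    import torch
    from torchvision.models import resnet18

    # Illustrative, untrained model and a dummy input; not the authors' setup.
    model = resnet18(weights=None).eval()
    image = torch.randn(1, 3, 224, 224, requires_grad=True)

    # 1) Back-propagation style: one backward pass gives a per-pixel
    #    importance map from the gradient of the top class score.
    score = model(image).max()
    score.backward()
    saliency = image.grad.abs().max(dim=1).values      # shape (1, 224, 224)

    # 2) Perturbation style: occlude one patch at a time and track how far
    #    the predicted class score drops -- one forward pass per patch.
    patch = 32
    with torch.no_grad():
        probs = model(image).softmax(dim=1)
        cls = probs.argmax(dim=1).item()
        heat = torch.zeros(224 // patch, 224 // patch)
        for i in range(0, 224, patch):
            for j in range(0, 224, patch):
                occluded = image.detach().clone()
                occluded[:, :, i:i+patch, j:j+patch] = 0.0
                drop = probs[0, cls] - model(occluded).softmax(dim=1)[0, cls]
                heat[i // patch, j // patch] = drop    # big drop = important patch

The single backward pass is why back-propagation methods are fast, while the occlusion loop pays one forward pass per patch, reflecting the speed-for-accuracy trade-off described above.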

The team's resulting XAI algorithm, Semantic Input Sampling for Explanation (SISE), is detailed in a research paper presented at the 35th AAAI Conference on Artificial Intelligence.

Source: https://techxplore.com/news/2021-04-artificial-intelligence-algorithm.html

Paper: https://arxiv.org/pdf/2010.00672.pdf
