Machine learning and artificial intelligence are helping automate an ever-increasing array of tasks, with ever-increasing accuracy. They are supported by the growing volume of data used to feed them, and the growing sophistication in algorithms.
The flip side of more complex algorithms, however, is less interpretability. In many cases, the ability to retrace and explain outcomes reached by machine learning (ML) models is crucial, as:
"Trust models based on responsible authorities are being replaced by algorithmic trust models to ensure privacy and security of data, source of assets and identity of individuals and things. Algorithmic trust helps to ensure that organizations will not be exposed to the risk and costs of losing the trust of their customers, employees and partners. Emerging technologies tied to algorithmic trust include secure access service edge, differential privacy, authenticated provenance, bring your own identity, responsible AI and explainable AI."
The above quote is taken from Gartner's newly released 2020 Hype Cycle for Emerging Technologies. In it, explainable AI is placed at the peak of inflated expectations. In other words, we have reached peak hype for explainable AI. To put that into perspective, a recap may be useful.
As experts such as Gary Marcus point out, AI is probably not what you think it is. Many people today conflate AI with machine learning. While machine learning has made strides in recent years, it's not the only type of AI we have. Rule-based, symbolic AI has been around for years, and it has always been explainable.
Incidentally, that kind of AI, in the form of "Ontologies and Graphs," is also included in the same Gartner Hype Cycle, albeit in a different phase -- the trough of disillusionment. Incidentally, again, that's a conflation: ontologies are part of AI, while graphs are not necessarily.
That said: If you are interested in getting a better understanding of the state of the art in explainable AI machine learning, reading Christoph Molnar's book is a good place to start. Molnar, a data scientist and Ph.D. candidate in interpretable machine learning, is the author of Interpretable Machine Learning: A Guide for Making Black Box Models Explainable, in which he elaborates on the issue and examines methods for achieving explainability.
Gartner's Hype Cycle for Emerging Technologies, 2020. Explainable AI, meaning interpretable machine learning, is at the peak of inflated expectations. Ontologies, part of symbolic AI, which is explainable, are in the trough of disillusionment.
Recently, Molnar and a group of researchers set out to address ML practitioners, by raising awareness of pitfalls and pointing out solutions for correct model interpretation, as well as ML researchers, by discussing open issues for further research. Their work was published as a research paper, titled Pitfalls to Avoid when Interpreting Machine Learning Models, at the ICML 2020 Workshop XXAI: Extending Explainable AI Beyond Deep Models and Classifiers.
Similar to Molnar's book, the paper is thorough. Admittedly, however, it's also more involved. Yet, Molnar has striven to make it more approachable by means of visualization, using what he dubs "poorly drawn comics" to highlight each pitfall. As with Molnar's book on interpretable machine learning, we summarize findings here, while encouraging readers to dive in for themselves.
The paper mainly focuses on the pitfalls of global interpretation techniques when the full functional relationship underlying the data is to be analyzed. Discussion of "local" interpretation methods, where individual predictions are to be explained, is out of scope. For a reference on global vs. local interpretations, you can refer to Molnar's book as previously covered on ZDNet.
Authors note that ML models usually contain non-linear effects and higher-order interactions. As interpretations are based on simplifying assumptions, the associated conclusions are only valid if we have checked that the assumptions underlying our simplifications are not substantially violated.
In classical statistics this process is called "model diagnostics," and the research claims that a similar process is necessary for interpretable ML (IML) based techniques. The research identifies and describes pitfalls to avoid when interpreting ML models, reviews (partial) solutions for practitioners, and discusses open issues that require further research.
Under- or overfitting models will result in misleading interpretations regarding true feature effects and importance scores, as the model does not match the underlying data-generating process well. Performance evaluation on training data should not be used for ML models, due to the danger of overfitting. We have to resort to out-of-sample validation such as cross-validation procedures.
Formally, IML methods are designed to interpret the model instead of drawing inferences about the data generating process. In practice, however, the latter is the goal of the analysis, not the former. If a model approximates the data generating process well enough, its interpretation should reveal insights into the underlying process. Interpretations can only be as good as their underlying models. It is crucial to properly evaluate models using training and test splits -- ideally using a resampling scheme.
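The gap between training-set and out-of-sample performance can be illustrated with a minimal sketch, assuming scikit-learn; the synthetic dataset and model choice here are illustrative, not taken from the paper:

```python
# Sketch: training-set accuracy is an optimistic estimate; cross-validation
# gives an out-of-sample estimate. Assumes scikit-learn is available.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=300, n_features=20, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0)

model.fit(X, y)
train_acc = model.score(X, y)                        # evaluated on training data
cv_acc = cross_val_score(model, X, y, cv=5).mean()   # 5-fold out-of-sample estimate

print(f"training accuracy:  {train_acc:.3f}")
print(f"5-fold CV accuracy: {cv_acc:.3f}")
```

A flexible model like a random forest will typically score near-perfectly on its own training data while the cross-validated score is noticeably lower; interpretations should only be trusted for models validated out-of-sample.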
Flexible models should be part of the model selection process so that the true data-generating function is more likely to be discovered. This is important, as the Bayes error for most practical situations is unknown, and we cannot make absolute statements about whether a model already fits the data optimally.
Using opaque, complex ML models when an interpretable model would have been sufficient (i.e., having similar performance) is considered a common mistake. Starting with simple, interpretable models and gradually increasing complexity in a controlled, step-wise manner, where predictive performance is carefully measured and compared is recommended.
Measures of model complexity allow us to quantify the trade-off between complexity and performance and to automatically optimize for multiple objectives beyond performance. Some steps toward quantifying model complexity have been made. However, further research is required as there is no single perfect definition of interpretability but rather multiple, depending on the context.
This pitfall is further analyzed in three sub-categories: Interpretation with extrapolation, confusing correlation with dependence, and misunderstanding conditional interpretation.
Interpretation with extrapolation refers to producing artificial data points via perturbation, obtaining model predictions for them, and aggregating these predictions into global interpretations. But if features are dependent, perturbation approaches produce unrealistic data points. In addition, even if features are independent, using an equidistant grid can produce unrealistic values for the feature of interest. Both issues can result in misleading interpretations.
Before applying interpretation methods, practitioners should check for dependencies between features in the data (e.g., via descriptive statistics or measures of dependence). When it is unavoidable to include dependent features in the model, which is usually the case in ML scenarios, additional information regarding the strength and shape of the dependence structure should be provided.
Confusing correlation with dependence is a typical error. The Pearson correlation coefficient (PCC) is a measure used to track dependency among ML features. But features with PCC close to zero can still be dependent and cause misleading model interpretations. While independence between two features implies that the PCC is zero, the converse is generally false.
Any type of dependence between features can have a strong impact on the interpretation of the results of IML methods. Thus, knowledge about (possibly non-linear) dependencies between features is crucial. Low-dimensional data can be visualized to detect dependence. For high-dimensional data, several other measures of dependence in addition to PCC can be used.
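The point that zero correlation does not imply independence can be demonstrated with a small NumPy sketch; the choice of y = x² on a symmetric interval is a standard textbook example, not one from the paper:

```python
# Sketch: a feature pair that is perfectly dependent (y is a deterministic
# function of x) yet has Pearson correlation near zero, because the
# relationship is non-linear and symmetric. Assumes NumPy.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 10_000)
y = x ** 2                       # fully dependent on x, but non-linearly

pcc = np.corrcoef(x, y)[0, 1]
print(f"Pearson correlation: {pcc:.3f}")  # close to 0 despite full dependence
```

A dependence measure that captures non-linear relationships (e.g. mutual information) would flag this pair, while the PCC alone would not.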
Misunderstanding conditional interpretation. Conditional variants to estimate feature effects and importance scores require a different interpretation. While conditional variants for feature effects avoid model extrapolations, these methods answer a different question. Interpretation methods that perturb features independently of others also yield an unconditional interpretation.
Conditional variants do not replace values independently of other features, but in such a way that they conform to the conditional distribution. This changes the interpretation as the effects of all dependent features become entangled. The safest option would be to remove dependent features, but this is usually infeasible in practice.
When features are highly dependent and conditional effects and importance scores are used, the practitioner has to be aware of the distinct interpretation. Currently, no approach allows us to simultaneously avoid model extrapolations and to allow a conditional interpretation of effects and importance scores for dependent features.
Global interpretation methods can produce misleading interpretations when features interact. Many interpretation methods cannot separate interactions from main effects. Most methods that identify and visualize interactions are not able to identify higher-order interactions and interactions of dependent features.
There are some methods to deal with this, but further research is still warranted. Furthermore, solutions lack in automatic detection and ranking of all interactions of a model as well as specifying the type of modeled interaction.
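How a main-effect summary can hide a pure interaction can be shown by hand, without fitting any model; the function f(x1, x2) = x1·x2 is an illustrative assumption:

```python
# Sketch: a partial-dependence-style average over a background sample of x2
# is flat for f(x1, x2) = x1 * x2 when x2 is symmetric around zero, even
# though x1 clearly matters. Computed directly with NumPy; no model fit.
import numpy as np

rng = np.random.default_rng(0)
x2 = rng.uniform(-1, 1, 10_000)      # background sample to average over

def f(x1, x2):
    return x1 * x2                   # pure interaction, no main effects

grid = np.linspace(-1, 1, 21)        # grid of x1 values of interest
pdp = np.array([f(v, x2).mean() for v in grid])

print("max |PDP| over the grid:", np.abs(pdp).max())  # near 0: x1's effect is hidden
```

The averaged curve is essentially zero everywhere, so a practitioner looking only at this main-effect summary would wrongly conclude that x1 has no influence on the prediction.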
Due to the variance in the estimation process, interpretations of ML models can become misleading. When sampling techniques are used to approximate expected values, estimates vary, depending on the data used for the estimation. Furthermore, the obtained ML model is also a random variable, as it is generated on randomly sampled data and the inducing algorithm might contain stochastic components as well.
Hence, the model variance has to be taken into account. The true effect of a feature may be flat, but, purely by chance -- especially on smaller data -- an effect might be detected algorithmically. This effect could cancel out once averaged over multiple model fits. The researchers note that the uncertainty in feature effect methods has not been studied in detail.
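One way to make this variance visible is to refit the same model on resampled data and inspect how the importance estimates fluctuate; a minimal sketch, assuming scikit-learn and using bootstrap resampling as an illustrative choice:

```python
# Sketch: feature-importance estimates vary across refits of the same model
# on bootstrap resamples of the data. Assumes scikit-learn and NumPy.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

X, y = make_regression(n_samples=200, n_features=5, noise=10.0, random_state=0)
rng = np.random.default_rng(0)

importances = []
for _ in range(10):                              # refit on bootstrap resamples
    idx = rng.integers(0, len(X), len(X))
    model = RandomForestRegressor(n_estimators=50, random_state=0)
    model.fit(X[idx], y[idx])
    importances.append(model.feature_importances_)

importances = np.array(importances)
print("mean importance per feature:", importances.mean(axis=0).round(3))
print("std across refits:          ", importances.std(axis=0).round(3))
```

A feature whose importance is small relative to its standard deviation across refits should not be interpreted as having a real effect.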
It's a steep fall from the peak of inflated expectations to the trough of disillusionment. Getting things done in interpretable machine learning takes expertise and concerted effort.
Simultaneously testing the importance of multiple features will result in false-positive interpretations if the multiple comparisons problem (MCP) is ignored. MCP is well known in significance tests for linear models and similarly exists in testing for feature importance in ML.
For example, when simultaneously testing the importance of 50 features, even if all features are unimportant, the probability of observing at least one significantly important feature (at the 5% significance level) is 0.923. Multiple comparisons become even more problematic the higher-dimensional a dataset is. Since MCP is well known in statistics, the authors refer practitioners to existing overviews and discussions of alternative adjustment methods.
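The 0.923 figure follows directly from the family-wise error rate, and a Bonferroni-style adjustment (one standard correction among several) brings it back under control:

```python
# Sketch: family-wise error rate when testing 50 truly unimportant features
# at the 5% level, and the effect of a simple Bonferroni adjustment.
k, alpha = 50, 0.05

p_any_false_positive = 1 - (1 - alpha) ** k
print(f"P(at least one 'significant' feature): {p_any_false_positive:.3f}")  # 0.923

bonferroni_alpha = alpha / k        # conservative per-test threshold
p_adjusted = 1 - (1 - bonferroni_alpha) ** k
print(f"with Bonferroni correction:            {p_adjusted:.3f}")  # ~0.049
```

Bonferroni is only the simplest option; less conservative procedures (e.g. Holm or false-discovery-rate control) are among the adjustment methods the authors point practitioners toward.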
Practitioners are often interested in causal insights into the underlying data-generating mechanisms, which IML methods, in general, do not provide. Common causal questions include the identification of causes and effects, predicting the effects of interventions, and answering counterfactual questions. In the search for answers, researchers can be tempted to interpret the result of IML methods from a causal perspective.
However, a causal interpretation of predictive models is often not possible. Standard supervised ML models are not designed to model causal relationships but to merely exploit associations. A model may, therefore, rely on the causes and effects of the target variable as well as on variables that help to reconstruct unobserved influences.
Consequently, the question of whether a variable is relevant to a predictive model does not directly indicate whether a variable is a cause, an effect, or does not stand in any causal relation to the target variable.
As the researchers note, the challenge of causal discovery and inference remains an open key issue in the field of machine learning. Careful research is required to make explicit under which assumptions what insight about the underlying data-generating mechanism can be gained by interpreting a machine learning model.
Molnar et al. offer an involved review of the pitfalls of global model-agnostic interpretation techniques for ML. Although, as they note, their list is far from complete, they cover common pitfalls that pose a particularly high risk.
They aim to encourage a more cautious approach when interpreting ML models in practice, to point practitioners to already (partially) available solutions, and to stimulate further research.
Contrasting this highly involved and detailed groundwork to high-level hype and trends on explainable AI may be instructive.