A Simple Guide to Machine Learning Interpretability


The fields of machine learning (ML) and artificial intelligence (AI) have developed considerably over the past 20 years, thanks to advances in software, hardware and algorithms.

In the early 1950s, AI meant the use of expert rules that a machine had to follow to reach a particular conclusion. These rules were often painstakingly coded by experts (hence the name “expert rules”) and were not flexible enough to be applied to different applications.

ML, considered a subset of AI, became popular primarily due to the abundance of data, as organizations were able to store more data at lower cost. Data mining, considered a subset of ML, involved two main strategies for extracting value from data: identifying patterns such as clusters of data points (unsupervised learning), and identifying correlations between the input variables in the data and an outcome of interest (supervised learning; for example, information about a loan applicant is correlated with the outcome of the loan approval).

Over the years, several machine learning models have been proposed and have shown promise in multiple applications, leading to new disciplines related to ML usability, such as fairness, federated learning, explainability and interpretability.

The discipline of interpretable ML (IML) has grown in importance due to the need to bridge the gap between demand and supply of ML systems. On the supply side, ML technologies such as image processing and natural language processing can now make highly accurate predictions; on the demand side, decision makers are still uncertain about the veracity of the output of ML applications in the real world.

Several decision errors of ML systems have been highlighted, such as the false incrimination of innocent people due to errors in face recognition software and the denial of credit card applications to valid candidates due to systematic bias in the data. Interpretability has been proposed as the key to connecting the two worlds of ML algorithm development and real-world implementation.

There are simpler ML models, such as decision trees, which provide rules like “if age > 25 and salary > 10 lakh per year, then approve the credit card application”, or linear regression, which provides correlation coefficients such as “increasing fertilizer use X by one unit increases crop yield by four units”.
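To make the contrast concrete, here is a minimal Python sketch (using scikit-learn, with entirely synthetic, hypothetical data) of how such transparent models expose their reasoning: a shallow decision tree prints its if-then rules, and a linear regression reports the coefficient linking fertilizer use to yield.

```python
# A minimal sketch with synthetic, hypothetical data: a shallow decision tree
# and a linear regression each expose their reasoning directly.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Toy credit-card data: approval depends on age and annual salary (in lakh).
age = rng.integers(21, 60, size=500)
salary_lakh = rng.uniform(2, 30, size=500)
approved = ((age > 25) & (salary_lakh > 10)).astype(int)  # hypothetical ground-truth rule

tree = DecisionTreeClassifier(max_depth=2).fit(np.column_stack([age, salary_lakh]), approved)
print(export_text(tree, feature_names=["age", "salary_lakh"]))  # human-readable if-then rules

# Toy crop-yield data: yield rises roughly four units per unit of fertilizer X.
fertilizer = rng.uniform(0, 10, size=200)
crop_yield = 4.0 * fertilizer + rng.normal(0, 1, size=200)

reg = LinearRegression().fit(fertilizer.reshape(-1, 1), crop_yield)
print(f"each extra unit of fertilizer adds about {reg.coef_[0]:.1f} units of yield")
```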

On the other hand, there are also deep neural networks, loosely inspired by the human brain, which connect artificial neurons across many layers to arrive at a decision.

Intuitively, these work better for decision making than linear equations or rule-based models because they can capture a wider range of hidden relationships embedded in the data. The simpler models are considered transparent ML models, while the higher-performance deep networks are referred to as “black box” models.

Current efforts in IML pursue two main directions: generating transparent models that approach the predictive performance of black box models, and developing methods that can explain the decisions made by black box models.

Just a decade ago, the majority of ML researchers were primarily concerned with creating models that make more accurate predictions; today, a significant share of interest lies in explaining the justification for those predictions as well.

Surrogate modeling has become a popular line of inquiry, in which a black box model is first trained to make optimal predictions, and a second, simpler model is then trained to explain the predictions made by the first.
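A rough illustration of the idea, again on synthetic data with scikit-learn: a random forest stands in for the black box, and a shallow decision tree is fitted to the forest's predictions (not to the original labels) so that its rules approximate the black box's behavior.

```python
# A hedged surrogate-modeling sketch on synthetic data: a random forest plays
# the black box; a shallow tree is then fitted to the forest's predictions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 4))                      # four anonymous input features
y = (X[:, 0] + X[:, 1] * X[:, 2] > 0).astype(int)   # toy non-linear target

black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# The surrogate learns to mimic the black box's outputs, not the original labels.
surrogate = DecisionTreeClassifier(max_depth=3).fit(X, black_box.predict(X))

# "Fidelity": how often the transparent surrogate agrees with the black box.
fidelity = (surrogate.predict(X) == black_box.predict(X)).mean()
print(f"surrogate matches the black box on {fidelity:.0%} of samples")
print(export_text(surrogate, feature_names=["x1", "x2", "x3", "x4"]))
```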

Alternative approaches include counterfactual analysis, which examines the change in conditions that would cause a decision made by an ML model to be reversed, and game-theoretic methods, in which each piece of information is credited for its contribution to a prediction much as one would decide the individual contributions of team members working toward a single goal.
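One well-known embodiment of this game-theoretic idea is the Shapley value. The sketch below computes exact Shapley values by brute force for a toy three-feature model; the model, instance and baseline are all hypothetical, and this brute-force approach only scales to a handful of features, but it shows how each feature is credited with its average marginal contribution to a prediction.

```python
# A minimal, exact Shapley-value sketch (brute force over feature coalitions).
# All names, data and the toy model below are hypothetical.
from itertools import combinations
from math import factorial

def shapley_values(predict, instance, baseline):
    """Exact Shapley values for a single prediction.

    predict  : function mapping a feature vector (list) to a number
    instance : the feature values being explained
    baseline : 'absent' feature values (e.g. dataset averages)
    """
    n = len(instance)

    def value(coalition):
        # Features in the coalition take the instance's values; the rest take the baseline.
        x = [instance[j] if j in coalition else baseline[j] for j in range(n)]
        return predict(x)

    phis = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        phi = 0.0
        for k in range(len(others) + 1):
            for subset in combinations(others, k):
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi += weight * (value(set(subset) | {i}) - value(set(subset)))
        phis.append(phi)
    return phis

# Toy model: a weighted sum of three features.
model = lambda x: 2.0 * x[0] + 1.0 * x[1] - 3.0 * x[2]
print(shapley_values(model, instance=[1.0, 2.0, 0.5], baseline=[0.0, 0.0, 0.0]))
# ~ [2.0, 2.0, -1.5]: the contributions add up to the gap between
# the prediction (2.5) and the baseline prediction (0.0).
```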

Although computer scientists and mathematicians are eagerly working to come up with new transparent models that can augment or replace existing complex and opaque ML models, some fundamental aspects of IML remain barely explored. Philosophical and ontological questions such as “what is interpretability?”, “how do humans interpret systems?” and “to what extent do we need to interpret machine learning models and AI systems?” require a broader range of skills and expertise beyond mathematical formulations and code implementations.

Perhaps understanding the extent of the human need for interpretability can ultimately help us come to terms with (on a pessimistic note) or avert (on an optimistic note) the singularity, the much-prophesied hypothetical situation in which technological growth spirals out of control, nullifying the need for human beings.

This article was published as part of Swasti 22, the Swarajya Science and Technology Initiative 2022.
