Introduction
Methodical procedure
Requirements for informed consent
Mapping concepts of explicability
Explicability is crucial for building and maintaining users’ trust in AI systems. This means that processes need to be transparent, the capabilities and purpose of AI systems openly communicated, and decisions—to the extent possible—explainable to those directly and indirectly affected. Without such information, a decision cannot be duly contested. […] The degree to which explicability is needed is highly dependent on the context and the severity of the consequences if that output is erroneous or otherwise inaccurate.
Hurdles for explicability
Levels of opacity | Explication | Causes | Corresponding level of explicability | Question | Solutions |
---|---|---|---|---|---|
1. Lack of disclosure | Data subjects are unaware of being subject to automated decision-making | Intentional corporate or state secrecy | Disclosure | Is an AI system applied at all? | Make code available for scrutiny, through regulatory means or algorithmic audit (carried out with or without corporate cooperation) |
2. General epistemic opacity | Originates from a general lack of understanding of how AI systems learn, classify, and predict | Technical illiteracy | Intelligibility | How do AI systems generally work? | Promote computational thinking at all levels of education |
3. Specific epistemic opacity | Originates from a lack of understanding of how a specific AI system learns, classifies, and predicts | Lack of understanding of the rules an AI system follows | Interpretability | How does this specific AI system work? | Examine the capabilities and limitations of the specific AI system |
4. Explanatory opacity | Lack of causal explanation between input and output | The way algorithms operate at the scale of application | Explainability | Why does the AI system provide a specific output? | Promote code audits and explainable AI techniques |
Levels of explicability
Proposed ethical requirements for informed consent | Concepts in computer science | Ethical guiding questions | Corresponding level of opacity | Ethical implications |
---|---|---|---|---|
1. Disclosure | (Use of AI is assumed) | Was an AI system used? | Lack of disclosure | Obligation not to deceive |
2. Intelligibility | Intelligibility as decomposability | How do AI systems generally work (input, output, training data, parameters, calculation)? | General epistemic opacity (functioning of AI in general) | Obligation to communicate general risks deriving from the application of AI |
3. Interpretability | Simulatability as global post hoc interpretability | How does that specific AI system work (input, output, training data, parameters, calculation)? | Specific epistemic opacity (functioning of a specific AI system) | Obligation to identify individual and group risks deriving from a specific AI system |
4. Explainability | Explanation as local post hoc interpretability | Why did the AI system reach the particular decision that directly affects the patient? | Explanatory opacity | Obligation to be prepared to challenge or defend decisions (peer-disagreement) |
Disclosure
Intelligibility
Interpretability
- Personal health data: information on the type and source of input data,
- Bias: information on (a) the character of the training data; (b) how training data were categorized by domain experts; (c) how the AI model was tested,
- Performance: information on (a) accuracy, specificity, and sensitivity; (b) how the performance was tested, and
- Decision: information on (a) the degree of human or algorithmic agency in making decisions; (b) the fact that physicians are responsible for the final diagnosis.
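The three performance figures named above are simple ratios over a binary confusion matrix. A minimal sketch of how they are computed (the labels below are invented for illustration, not data from any study):

```python
def performance_metrics(y_true, y_pred):
    """Return accuracy, sensitivity, specificity for binary labels (1 = positive)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    accuracy = (tp + tn) / len(y_true)       # share of all cases classified correctly
    sensitivity = tp / (tp + fn)             # true-positive rate: diseased cases found
    specificity = tn / (tn + fp)             # true-negative rate: healthy cases cleared
    return accuracy, sensitivity, specificity

# Hypothetical ground truth and model output for ten patients
y_true = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 0, 0, 0, 0, 1, 0]
acc, sens, spec = performance_metrics(y_true, y_pred)
print(round(acc, 2), round(sens, 2), round(spec, 2))  # 0.8 0.75 0.83
```

Reporting all three figures matters because they can diverge: a screening model that labels everyone "healthy" scores high accuracy on a mostly healthy population while its sensitivity is zero.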
Explainability
Applying the levels of explicability
XAI approaches in computer science
Method | Examples | Intelligibility | Interpretability | Explainability |
---|---|---|---|---|
Inherently interpretable models | K‑nearest neighbor | x | x | x |
Feature visualization | Preferred stimuli | x | x | x |
Prototypes | MMD-Critic | – | x | x |
Counterfactuals | Counterfactuals | – | – | x |
Feature attribution | LIME, SHAP, saliency maps | – | – | x |
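To make the "Counterfactuals" row concrete, a minimal sketch of a counterfactual explanation: given a decision, search for the smallest change to one input that flips it, answering "what would have had to differ for the system to decide otherwise?" The linear risk model, its features, weights, and threshold are all hypothetical; dedicated XAI libraries search far richer spaces than this single-feature walk.

```python
# Hypothetical transparent risk model used only to illustrate the idea.
FEATURES = ["age", "blood_pressure", "bmi"]
WEIGHTS = {"age": 0.02, "blood_pressure": 0.015, "bmi": 0.05}
THRESHOLD = 4.5  # score above this -> "high risk"

def risk_score(patient):
    # Round to suppress floating-point noise in the weighted sum.
    return round(sum(WEIGHTS[f] * patient[f] for f in FEATURES), 6)

def high_risk(patient):
    return risk_score(patient) > THRESHOLD

def counterfactual(patient, feature, step=0.5, max_steps=200):
    """Decrease one feature until the decision flips; return the flipped value."""
    candidate = dict(patient)
    for _ in range(max_steps):
        if not high_risk(candidate):
            return candidate[feature]
        candidate[feature] -= step
    return None  # no counterfactual found within the search budget

patient = {"age": 60, "blood_pressure": 140, "bmi": 32}
print(high_risk(patient))                 # True
print(counterfactual(patient, "bmi"))     # 24.0
```

The returned value supports a patient-facing explanation of the kind the table associates with explainability: "had the BMI been 24 rather than 32, this system would not have flagged high risk", without requiring the patient to inspect the model itself.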