
Explainable AI aims to solve the black box problem: the idea that some AI systems, in particular machine learning-based ones, are black boxes. We can see what goes into the black box, e.g. a photograph of a cat, and what comes out of it, e.g. a labelled version of that photograph, but we cannot easily understand how and why the system made that determination.
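To make that input/output picture concrete, here is a minimal sketch in Python. It assumes a pretrained torchvision image classifier and a hypothetical cat.jpg file (both are illustrative choices, not anything prescribed above): the photograph goes in, a class label comes out, and nothing along the way tells us why that label was chosen.

```python
# Minimal black-box sketch, assuming PyTorch/torchvision are installed and
# that "cat.jpg" is a hypothetical image file on disk.
import torch
from PIL import Image
from torchvision import models

# What goes in: a photograph, preprocessed into a tensor.
weights = models.ResNet18_Weights.DEFAULT
model = models.resnet18(weights=weights)
model.eval()
image = weights.transforms()(Image.open("cat.jpg")).unsqueeze(0)

# What comes out: a predicted label (an index into ImageNet's 1,000 classes).
with torch.no_grad():
    logits = model(image)
label = weights.meta["categories"][logits.argmax(dim=1).item()]
print(label)

# Between input and output sit millions of learned weights; nothing here
# explains *why* the model chose that label. That gap is the black box problem.
```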
For an intuitive sense of how explainability relates to criticality across different AI use cases, see this mapping by Everest Group: