
On Explaining Decision Trees

Abstract: Decision trees (DTs) epitomize what has come to be known as interpretable machine learning (ML) models. This view is informally motivated by the observation that paths in DTs are often much shorter than the total number of features. This paper shows that in some settings DTs can hardly be deemed interpretable, with paths in a DT being arbitrarily larger than a PI-explanation, i.e. a subset-minimal set of feature values that entails the prediction. As a result, the paper proposes a novel model for computing PI-explanations of DTs, which enables computing one PI-explanation in polynomial time. Moreover, it is shown that the enumeration of PI-explanations can be reduced to the enumeration of minimal hitting sets. Experimental results, obtained on a wide range of publicly available datasets with well-known DT-learning tools, confirm that in most cases DTs have paths that are proper supersets of PI-explanations.
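
To make the notion concrete, below is a minimal sketch in Python of one standard way to obtain such an explanation for a learned DT: start from the features tested on the instance's path and greedily free each one, keeping it fixed only if freeing it lets some reachable leaf predict a different class. It assumes scikit-learn's DecisionTreeClassifier and its tree_ internals; the helper names pi_explanation, leaves_agree, and path_features are illustrative, and this deletion-based scheme is a generic way to reach subset-minimality, not necessarily the paper's own algorithm.

    import numpy as np
    from sklearn.datasets import load_iris
    from sklearn.tree import DecisionTreeClassifier

    def path_features(t, x):
        # Features tested on the path that instance x follows to its leaf.
        feats, node = [], 0
        while t.children_left[node] != -1:        # -1 marks a leaf in sklearn trees
            f = t.feature[node]
            feats.append(f)
            node = (t.children_left[node] if x[f] <= t.threshold[node]
                    else t.children_right[node])
        return list(dict.fromkeys(feats))         # de-duplicate, keep order

    def leaves_agree(t, x, free, target, node=0):
        # True iff every leaf reachable when the features in `free` are
        # unconstrained (all others keep their values in x) predicts `target`.
        if t.children_left[node] == -1:
            return int(np.argmax(t.value[node])) == target
        f = t.feature[node]
        if f in free:                             # freed feature: both subtrees must agree
            return (leaves_agree(t, x, free, target, t.children_left[node]) and
                    leaves_agree(t, x, free, target, t.children_right[node]))
        nxt = (t.children_left[node] if x[f] <= t.threshold[node]
               else t.children_right[node])
        return leaves_agree(t, x, free, target, nxt)

    def pi_explanation(clf, x):
        # Deletion-based reduction: try to free each path feature in turn and
        # keep it fixed only if freeing it breaks the entailment. Since freeing
        # more features only enlarges the set of reachable leaves, the result
        # is subset-minimal; each check is a single tree traversal.
        t = clf.tree_
        target = int(np.argmax(clf.predict_proba([x])[0]))
        free = set()
        for f in path_features(t, x):
            free.add(f)
            if not leaves_agree(t, x, free, target):
                free.discard(f)                   # f is necessary for the prediction
        return [f for f in path_features(t, x) if f not in free]

    X, y = load_iris(return_X_y=True)
    clf = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X, y)
    print("path features:", path_features(clf.tree_, X[0]))
    print("PI-explanation:", pi_explanation(clf, X[0]))

Since each entailment check is one traversal of the tree, the whole computation is polynomial in the tree size, consistent with the abstract's claim; the printed explanation is frequently a strict subset of the path's features, which is exactly the gap the paper measures.
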
Document type: Preprints, Working Papers, ...

https://hal.archives-ouvertes.fr/hal-03312480
Contributor: Yacine Izza
Submitted on: Monday, August 2, 2021 - 2:52:26 PM
Last modification on: Tuesday, October 19, 2021 - 2:23:20 PM

File

paper.pdf
Files produced by the author(s)

Licence

Distributed under a Creative Commons Attribution 4.0 International License

Identifiers

  • HAL Id: hal-03312480, version 1
  • arXiv: 2010.11034

Citation

Yacine Izza, Alexey Ignatiev, João Marques-Silva. On Explaining Decision Trees. 2021. ⟨hal-03312480⟩

Metrics

Record views: 64
File downloads: 25