Computing and Information Systems - Theses

  • Item
    Concept-based Decision Tree Explanations
    Mutahar, Gayda Mohameed Q. ( 2021)
    This thesis evaluates whether training a decision tree on concepts extracted from a concept-based explainer can increase the interpretability of Convolutional Neural Network (CNN) models and improve the fidelity and performance of the underlying explainer. CNNs for computer vision have shown exceptional performance in critical industries. However, their complexity and lack of interpretability remain a significant barrier to deployment. Recent studies on explaining computer vision models have shifted from extracting low-level features (pixel-based explanations) to mid- or high-level features (concept-based explanations). The current research direction tends to use the extracted features to develop approximation algorithms, such as linear or decision tree models, that interpret an original model. In this work, we modify one of the state-of-the-art concept-based explanations and propose an alternative framework named TreeICE. We design a systematic evaluation based on the requirements of fidelity (agreement of the approximate model with the original model's labels), performance (agreement of the approximate model with ground-truth labels), and interpretability (meaningfulness of the approximate model to humans). We conduct a computational evaluation (for fidelity and performance) and human-subject experiments (for interpretability). We find that TreeICE outperforms the baseline in interpretability and generates more human-readable explanations in the form of a semantic tree structure. This work highlights the importance of more understandable explanations when interpretability is crucial.
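
    The abstract defines fidelity as agreement with the original model's labels and performance as agreement with ground-truth labels. The following is a minimal sketch of that evaluation idea, not the thesis's actual TreeICE implementation; the inputs (concept_scores, cnn_labels, true_labels) are hypothetical placeholders standing in for concept activations produced by a concept-based explainer and the corresponding label sets.

    ```python
    # Sketch: train a surrogate decision tree on concept scores and
    # measure fidelity (vs. CNN labels) and performance (vs. ground truth).
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.metrics import accuracy_score

    # Hypothetical inputs (random stand-ins for illustration only):
    #   concept_scores : (n_images, n_concepts) concept presence scores
    #   cnn_labels     : labels predicted by the original CNN
    #   true_labels    : ground-truth labels
    rng = np.random.default_rng(0)
    concept_scores = rng.random((500, 20))
    cnn_labels = rng.integers(0, 3, size=500)
    true_labels = rng.integers(0, 3, size=500)

    # Fit the surrogate tree on concepts, targeting the CNN's predictions.
    tree = DecisionTreeClassifier(max_depth=4, random_state=0)
    tree.fit(concept_scores, cnn_labels)
    surrogate_preds = tree.predict(concept_scores)

    # Fidelity: how closely the surrogate mimics the original CNN.
    fidelity = accuracy_score(cnn_labels, surrogate_preds)
    # Performance: how well the surrogate matches the true labels.
    performance = accuracy_score(true_labels, surrogate_preds)
    print(f"fidelity={fidelity:.3f}, performance={performance:.3f}")
    ```

    The tree itself doubles as the explanation: each path from root to leaf reads as a conjunction of concept thresholds, which is the semantic tree structure the abstract refers to.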