Computing and Information Systems - Theses

Search Results

Now showing 1 - 1 of 1
  • Item
    Explainable Computer Vision with Unsupervised Concept-based Explanations
    ZHANG, Ruihan ( 2023-10)
    This thesis focuses on concept-based explanations for deep learning models in the computer vision domain, using unsupervised concepts. The success of deep learning methods has significantly improved the performance of computer vision models; however, the rapidly growing complexity of these models makes explainability an increasingly important research focus. One of the major issues in computer vision explainability is that it is unclear which features are appropriate for use in explanations. Pixels are less understandable features than those of other domains, such as natural language processing, where words serve as features. In recent years, concepts, which refer to knowledge shared between humans and AI systems and are grounded in feature maps inside the deep learning model, have provided significant performance improvements as features for explanations, and concept-based explanations have become a good choice for explainability in computer vision. In most tasks, supervised concepts are the standard choice because of their better performance. Nevertheless, the concept learning task in supervised concept-based explanations additionally requires a dataset with a designed concept set and instance-level concept labels; unsupervised concepts could remove this manual workload. In this thesis, we aim to reduce the performance gap between unsupervised and supervised concepts for concept-based explanations in computer vision. Taking concept bottleneck models (CBM) with supervised concepts as the baseline, and exploiting the advantage that unsupervised concepts require no concept set design or labeling, the core contributions of this thesis make unsupervised concepts an attractive alternative for concept-based explanations. Our core contributions are as follows: 1) We propose a new concept learning algorithm, invertible concept-based explanations (ICE). Explanations with unsupervised concepts can be evaluated for fidelity to the original model, like explanations with supervised concepts, and the learned concepts are evaluated to be more understandable than those from baseline unsupervised concept learning methods such as the k-means clustering used in ACE. 2) We propose a general framework for concept-based interpretable models with built-in faithful explanations, similar to CBM. The framework enables a direct comparison between supervised and unsupervised concepts, and we show that unsupervised concepts provide competitive performance in terms of model accuracy and concept interpretability. 3) We present an application of unsupervised concepts to counterfactual explanations: fast concept-based counterfactual explanations (FCCE). In the ICE concept space, we derive an analytical solution to the counterfactual loss function, so computing a counterfactual explanation in concept space takes less than 1e-5 seconds; a human survey also finds FCCE explanations more interpretable. In conclusion, unsupervised concepts were previously not a practical choice for concept-based explanations because they were less interpretable and less faithful than supervised concept-based explanations such as CBM. With our core contributions, the accuracy and interpretability of unsupervised concepts for concept-based explanations become competitive with supervised concept-based explanations.
    Since no concept set design or labeling is required, unsupervised concepts are an attractive choice for concept-based explanations in computer vision, with performance competitive with supervised concepts.
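The abstract's ICE algorithm learns unsupervised concepts from feature maps and is judged partly by how faithfully the concept representation can be mapped back to the original activations. As a rough illustration only, here is a minimal Python sketch assuming non-negative matrix factorization (NMF) over late-layer CNN activations; the function names, layer choice, and parameters are illustrative assumptions, not the thesis implementation.

# Hypothetical sketch of unsupervised concept extraction in the spirit of ICE.
# Assumes NMF over non-negative (e.g. post-ReLU) CNN feature maps; all names
# and settings here are illustrative, not the thesis code.
import numpy as np
from sklearn.decomposition import NMF

def extract_concepts(feature_maps: np.ndarray, n_concepts: int = 10):
    """Factorize feature maps into concept scores and a concept basis.

    feature_maps: array of shape (n_images, H, W, C).
    Returns per-location concept scores, the concept basis, and a relative
    reconstruction error, so fidelity to the original activations can be
    measured (the 'invertible' idea mentioned in the abstract).
    """
    n, h, w, c = feature_maps.shape
    flat = feature_maps.reshape(-1, c)              # (n*h*w, C), non-negative

    reducer = NMF(n_components=n_concepts, init="nndsvda", max_iter=400)
    scores = reducer.fit_transform(flat)            # (n*h*w, n_concepts)
    basis = reducer.components_                     # (n_concepts, C)

    reconstruction = scores @ basis                 # approximate inverse map
    fidelity_err = np.linalg.norm(flat - reconstruction) / np.linalg.norm(flat)
    return scores.reshape(n, h, w, n_concepts), basis, fidelity_err

if __name__ == "__main__":
    # Toy non-negative "activations" standing in for real CNN features.
    rng = np.random.default_rng(0)
    fake_activations = rng.random((8, 7, 7, 64)).astype(np.float32)
    concept_maps, basis, err = extract_concepts(fake_activations, n_concepts=5)
    print(concept_maps.shape, basis.shape, f"relative error {err:.3f}")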
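The abstract also reports an analytical solution to the counterfactual loss in the ICE concept space (FCCE). The sketch below shows, under the simplifying assumption of a linear classifier over concept scores and an L2 counterfactual loss, why a closed form makes concept-space counterfactuals essentially free to compute; it is an illustrative assumption, not the FCCE derivation itself.

# Hypothetical sketch: closed-form counterfactual in concept space.
# Assumes a linear classifier over concept scores and an L2 edit cost.
import numpy as np

def closed_form_counterfactual(c, w, b, target_margin=1e-3):
    """Smallest L2 change to concept vector c that pushes w.c + b past a margin."""
    score = float(w @ c + b)
    desired = target_margin if score < 0 else -target_margin
    delta = (desired - score) / float(w @ w) * w    # move along w only
    return c + delta, delta

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    c = rng.random(5)                   # concept scores for one image
    w = rng.normal(size=5)              # linear class weights in concept space
    b = -0.2
    c_cf, delta = closed_form_counterfactual(c, w, b)
    print("original score:", float(w @ c + b), "counterfactual score:", float(w @ c_cf + b))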