Prototypical Concept-based Explanations (PCX)

PyTorch implementation, accepted at the SAIAD workshop at CVPR 2024.

Description

This repository contains the code for the paper "Understanding the (Extra-)Ordinary: Validating Deep Model Decisions with Prototypical Concept-based Explanations".

[Teaser figure. Image credits: pexels.com]

Check out our project page here.

Note

This repository provides the code for all experiments in the paper. Beyond that, our aim is to make the implementation as easy to use as possible, which is still work in progress.

Abstract

Ensuring both transparency and safety is critical when deploying Deep Neural Networks (DNNs) in high-risk applications, such as medicine. The field of explainable AI (XAI) has proposed various methods to comprehend the decision-making processes of opaque DNNs. However, only a few XAI methods are suitable for ensuring safety in practice, as they heavily rely on repeated, labor-intensive, and possibly biased human assessment. In this work, we present a novel post-hoc concept-based XAI framework that conveys not only instance-wise (local) but also class-wise (global) decision-making strategies via prototypes. What sets our approach apart is the combination of local and global strategies, enabling a clearer understanding of the (dis-)similarities in model decisions compared to the expected (prototypical) concept use, ultimately reducing the dependence on long-term human assessment. Quantifying the deviation from prototypical behavior not only allows us to associate predictions with specific model sub-strategies but also to detect outlier behavior. As such, our approach constitutes an intuitive and explainable tool for model validation. We demonstrate the effectiveness of our approach in identifying out-of-distribution samples, spurious model behavior, and data quality issues across three datasets (ImageNet, CUB-200, and CIFAR-10), utilizing VGG, ResNet, and EfficientNet architectures.

Tutorial

We provide a tutorial on how to use the PCX framework, with the ImageNet flamingo class as a running example. The tutorial is divided into the following steps:

  1. Installation
  2. Concept-based Explanations
  3. Prototypical Concept-based Explanations

Installation

First, clone the repository and install the required packages:

git clone https://github.com/maxdreyer/pcx.git
cd pcx
pip install -r requirements.txt

Note: We use Python 3.8.10 for this tutorial.

Second, unzip the flamingo data samples, which were retrieved from pexels.com:

unzip datasets/pexels/pexels_imgs.zip -d datasets/pexels/

Concept-based Explanations

To generate concept-based explanations for the ImageNet flamingo class using the CRP package, please run the Jupyter notebook tutorial_0_concept_explanation.ipynb.
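For orientation, the snippet below sketches how concept-based explanations are typically computed with the CRP toolbox (zennit-crp): relevance is propagated conditionally on a target class, and per-channel (concept) relevances are read out from an intermediate layer. This is a minimal sketch, not the notebook's exact code; the VGG16 model, the layer name features.28, and the flamingo class index 130 are illustrative assumptions.

import torch
from torchvision.models import vgg16, VGG16_Weights
from zennit.composites import EpsilonPlusFlat
from crp.attribution import CondAttribution
from crp.concepts import ChannelConcept

# illustrative model choice; the notebook may use a different checkpoint
model = vgg16(weights=VGG16_Weights.IMAGENET1K_V1).eval()

attribution = CondAttribution(model)  # CRP attribution engine
cc = ChannelConcept()                 # each channel is treated as one concept

# a preprocessed input image; gradients are required for attribution
sample = torch.randn(1, 3, 224, 224, requires_grad=True)

# condition the backward pass on the flamingo output (ImageNet index 130)
conditions = [{'y': [130]}]
attr = attribution(sample, conditions, EpsilonPlusFlat(),
                   record_layer=['features.28'])

heatmap = attr.heatmap  # input-level relevance heatmap

# per-concept (channel) relevance in the last convolutional layer
rel_c = cc.attribute(attr.relevances['features.28'], abs_norm=True)
top_concepts = torch.argsort(rel_c, descending=True)[0, :5]

The notebook goes further than this sketch, e.g. by visualizing what the most relevant concepts encode.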

Prototypical Concept-based Explanations

To generate prototypical concept-based explanations for the ImageNet flamingo class, please run the Jupyter notebook tutorial_1_pcx_explanation.ipynb.
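Conceptually, PCX summarizes how a class is typically recognized by modeling the distribution of concept relevance vectors across training samples and deriving prototypes from it; the deviation of a new prediction from this distribution then flags outlier behavior. The snippet below is a conceptual sketch of this idea using a Gaussian Mixture Model from scikit-learn, not the repository's actual implementation; see the paper and notebook for the exact formulation. The relevance matrix here is random placeholder data.

import numpy as np
from sklearn.mixture import GaussianMixture

# placeholder: one concept relevance vector per flamingo training image,
# e.g. stacked outputs of ChannelConcept.attribute from the sketch above
R = np.random.rand(500, 512)  # (n_samples, n_concepts)

# fit a mixture model; each component mean acts as a prototype,
# i.e. one typical (sub-)strategy for recognizing the class
gmm = GaussianMixture(n_components=8, covariance_type='full',
                      random_state=0).fit(R)
prototypes = gmm.means_  # shape (8, n_concepts)

r_test = R[:1]  # concept relevance vector of a new prediction

# associate the prediction with its closest sub-strategy ...
strategy = gmm.predict(r_test)[0]

# ... and quantify its deviation from prototypical behavior:
# a low log-likelihood flags outliers, OOD inputs, or spurious behavior
log_likelihood = gmm.score_samples(r_test)[0]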

Full Code for Paper

We provide the code for all experiments in the paper here.

Citation

Please feel free to cite our work if you use it in your research:

@article{dreyer2023understanding,
  title={Understanding the (Extra-)Ordinary: Validating Deep Model Decisions with Prototypical Concept-based Explanations},
  author={Dreyer, Maximilian and Achtibat, Reduan and Samek, Wojciech and Lapuschkin, Sebastian},
  journal={arXiv preprint arXiv:2311.16681},
  year={2023}
}
