Awesome-XAI

1. Visualization of CNN representations

1-1. Gradient-based method

Compute the gradient of the class score or of an intermediate neuron's activation with respect to the input image; pixels with large gradients are the ones the prediction is most sensitive to (a minimal sketch follows the list below).

  • Feature Visualization [page]
    Olah et al., 2017
  • Understanding Neural Networks Through Deep Visualization [paper]
    Jason Yosinski et al., 2015
  • Striving for Simplicity: The All Convolutional Net [paper]
    Jost Tobias Springenberg et al., 2015
  • Understanding Deep Image Representations by Inverting Them [paper]
    Aravindh Mahendran et al., 2015
  • Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps [paper]
    Karen Simonyan et al., 2013
  • Visualizing and Understanding Convolutional Networks [paper]
    Matthew D Zeiler et al., 2013
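
As a concrete illustration of the idea above, here is a minimal sketch of the vanilla gradient saliency map from Simonyan et al., 2013. It assumes PyTorch and a recent torchvision are available; the ResNet-18 backbone is only an illustrative choice.

```python
import torch
from torchvision import models

# Illustrative backbone; any differentiable image classifier works.
# The weights enum assumes torchvision >= 0.13.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

def saliency_map(image, class_idx=None):
    """image: (1, 3, H, W) normalized tensor -> (H, W) saliency map."""
    image = image.clone().requires_grad_(True)
    scores = model(image)
    if class_idx is None:
        class_idx = scores.argmax(dim=1).item()
    # Gradient of the (pre-softmax) class score w.r.t. the input pixels.
    scores[0, class_idx].backward()
    # Per-pixel importance: max of |gradient| over the colour channels,
    # as proposed in the paper.
    return image.grad.abs().amax(dim=1)[0]
```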

1-2. Feature maps: visualization and interpretation

  • Plug & Play Generative Networks: Conditional Iterative Generation of Images in Latent Space [paper]
    Anh Nguyen et al., 2017
  • Inverting Visual Representations with Convolutional Networks [paper]
    Alexey Dosovitskiy et al., 2016
  • Object Detectors Emerge in Deep Scene CNNs [paper]
    Bolei Zhou et al., 2015

2. Diagnosis of CNN representations

2-1. Analyzing CNN features

  • Understanding Deep Features with Computer-generated Imagery [paper]
    Mathieu Aubry et al., 2015
  • How Transferable are Features in Deep Neural Networks? [paper]
    Jason Yosinski et al., 2014
  • Going Deeper with Convolutions [paper]
    Christian Szegedy et al., 2014

2-2. Extracting image regions from the network output

  • Interpretable Explanations of Black Boxes by Meaningful Perturbation [paper]
    Ruth Fong et al., 2017
  • Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization [paper]
    Ramprasaath R. Selvaraju et al., 2017 (a minimal sketch follows this list)
  • Visualizing Deep Neural Network Decisions: Prediction Difference Analysis [paper]
    Luisa M Zintgraf et al., 2017
  • The (Un)reliability of saliency methods [paper]
    Pieter-Jan Kindermans et al., 2017
  • "Why Should I Trust You?": Explaining the Predictions of Any Classifier [paper]
    Marco Tulio Ribeiro et al., 2016
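
Grad-CAM, listed above, is representative of this family: it weights the last convolutional feature maps by the spatially averaged gradients of the class score and keeps only the positive evidence. Below is a minimal PyTorch sketch, assuming torchvision and using ResNet-18's layer4 as an illustrative target layer.

```python
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

acts, grads = {}, {}
# Capture activations and gradients of the chosen convolutional block.
model.layer4.register_forward_hook(lambda m, i, o: acts.update(a=o))
model.layer4.register_full_backward_hook(lambda m, gi, go: grads.update(g=go[0]))

def grad_cam(image, class_idx=None):
    """image: (1, 3, H, W) normalized tensor -> (H, W) heatmap in [0, 1]."""
    scores = model(image)
    if class_idx is None:
        class_idx = scores.argmax(dim=1).item()
    model.zero_grad()
    scores[0, class_idx].backward()
    # Channel weights = global-average-pooled gradients (the alpha_k's).
    weights = grads["g"].mean(dim=(2, 3), keepdim=True)           # (1, C, 1, 1)
    cam = F.relu((weights * acts["a"]).sum(dim=1, keepdim=True))  # (1, 1, h, w)
    cam = F.interpolate(cam, size=image.shape[2:], mode="bilinear",
                        align_corners=False)[0, 0]
    cam -= cam.min()
    return cam / cam.max().clamp(min=1e-8)
```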

2-3. Vulnerability of deep neural network models

  • Understanding Black-box Predictions via Influence Functions [paper]
    Pang Wei Koh et al., 2017
  • One Pixel Attack for Fooling Deep Neural Networks [paper]
    Jiawei Su et al., 2017

2-4. Refining network representations

  • Harnessing Deep Neural Networks with Logic Rules [paper]
    Zhiting Hu et al., 2016

2-5. Discovering bugs in neural networks

  • Examining CNN Representations with Respect to Dataset Bias [paper]
    Quanshi Zhang et al., 2017

3. Disentangling CNN representations into graphs and decision trees

  • Growing Interpretable Part Graphs on ConvNets via Multi-Shot Learning [paper]
    Quanshi Zhang et al., 2016
  • Interpreting CNN Knowledge Via An Explanatory Graph [paper]
    Quanshi Zhang et al., 2018
  • Interpreting CNNs via Decision Trees [paper]
    Quanshi Zhang et al., 2018
  • Interpret Neural Networks by Identifying Critical Data Routing Paths [paper]
    Yulong Wang et al., 2018

4. Building explainable models

Modifying the model structure itself so that the learned representation is interpretable by design.

  • Interpretable Convolutional Neural Networks [paper]
    Quanshi Zhang et al., 2018
  • Towards Interpretable R-CNN by Unfolding Latent Structures [paper]
    Tianfu Wu et al., 2018
  • Dynamic Routing Between Capsules [paper]
    Sara Sabour et al., 2017
  • InfoGAN: Interpretable Representation Learning by Information Maximizing Generative Adversarial Nets [paper]
    Xi Chen et al., 2016

5. Evaluation metrics for network interpretability

5-1. Filter interpretability

  • Network Dissection: Quantifying Interpretability of Deep Visual Representations [paper]
    David Bau et al., 2017

5-2. Location instability

  • Interpreting CNN Knowledge Via An Explanatory Graph [paper]
    Quanshi Zhang et al., 2018

99. Survey Papers

  • Visual Interpretability for Deep Learning: a Survey [paper]
    Quanshi Zhang, Song-Chun Zhu, 2018
