From 24ed45268a0c616d9d7e342bf3c460e4aaac0035 Mon Sep 17 00:00:00 2001
From: Jesse Vig <45317205+jessevig@users.noreply.github.com>
Date: Sat, 2 Apr 2022 05:50:36 -0700
Subject: [PATCH] Update README to remove neuron view single instance limitation.

---
 README.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/README.md b/README.md
index 45d9cf6..3fc0ad4 100644
--- a/README.md
+++ b/README.md
@@ -203,7 +203,7 @@ GPT-2
 ([Notebook](notebooks/neuron_view_gpt2.ipynb),
 RoBERTa ([Notebook](notebooks/neuron_view_roberta.ipynb))
 
-Note that only one instance of the Neuron View may be displayed within a notebook. For full API, please refer to the [source](bertviz/neuron_view.py).
+For full API, please refer to the [source](bertviz/neuron_view.py).
 
 ### Encoder-decoder models (BART, T5, etc.)
 
@@ -406,7 +406,7 @@ returned from Huggingface models). In some case, Tensorflow checkpoints may be l
 * When running on Colab, some of the visualizations will fail (runtime disconnection) when the input text is long. To mitigate this, you may wish to filter the layers displayed by setting the **`include_layers`** parameter, as described [above](#filtering-layers).
 * The *neuron view* only supports the custom BERT, GPT-2, and RoBERTa models included with the tool. This view needs access to the query and key vectors, which required modifying the model code (see `transformers_neuron_view` directory), which has only been done for these three models.
-Also, only one neuron view may be included per notebook.
+
 
 ### Attention as "explanation"?
 
 * Visualizing attention weights illuminates one type of architecture within the model but does not
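The second hunk's context mentions mitigating Colab runtime disconnections by setting the `include_layers` parameter to limit which layers are rendered. As a rough illustration of what that filtering amounts to (the `filter_layers` helper and the toy data below are hypothetical, not part of bertviz; in bertviz itself you would pass `include_layers` to the view function as described in the README's "Filtering layers" section):

```python
# Hypothetical sketch: keep only the attention data for the requested
# layer indices, so the visualization has less to render.

def filter_layers(attentions, include_layers):
    """Return (layer_index, attention) pairs for the requested layers only."""
    return [(i, att) for i, att in enumerate(attentions) if i in include_layers]

# Toy stand-in for per-layer attention tensors from a 12-layer model.
attentions = [f"layer-{i}-attn" for i in range(12)]
kept = filter_layers(attentions, include_layers=[5, 6])
print(kept)  # [(5, 'layer-5-attn'), (6, 'layer-6-attn')]
```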