Commit a50f5bd: Minor docs fixes

tsterbak committed Aug 11, 2023
1 parent c732bc8

Showing 4 changed files with 4 additions and 4 deletions.
@@ -171,7 +171,7 @@ print("Test accuracy: {:.2%}".format(score))

## Counterfactual token based bias detection

-Now that we have a model to test, lets evaluate it with the Biaslyze tool and test for bias with regards to the concept 'religion'.
+Now that we have a model to test, lets evaluate it with the biaslyze tool and test for bias with regards to the concept 'religion'.


```python
docs/sources/tutorials/tutorial-hugging-hatexplain.md (2 changes: 1 addition & 1 deletion)
@@ -155,7 +155,7 @@ hate_clf = HateSpeechClf()



-Now we can check the model for bias indications within the relevant concepts provided by biaslyze. If you want to work with your own concepts or add to the given ones, please check out the tutorial on [how to use custom concepts](https://www.biaslyze.org/tutorials/tutorial-working-with-custom-concepts/).
+Now we can check the model for bias indications within the relevant concepts provided by biaslyze. If you want to work with your own concepts or add to the given ones, please check out the tutorial on [how to use custom concepts](../../tutorials/tutorial-working-with-custom-concepts/).


```python
@@ -171,7 +171,7 @@ print("Test accuracy: {:.2%}".format(score))

## Counterfactual token based bias detection

-Now that we have a model to test, lets evaluate it with the Biaslyze tool and test for bias with regards to the concept 'religion'.
+Now that we have a model to test, lets evaluate it with the biaslyze tool and test for bias with regards to the concept 'religion'.


```python
docs/templates/tutorials/tutorial-hugging-hatexplain.md (2 changes: 1 addition & 1 deletion)
@@ -155,7 +155,7 @@ hate_clf = HateSpeechClf()



-Now we can check the model for bias indications within the relevant concepts provided by biaslyze. If you want to work with your own concepts or add to the given ones, please check out the tutorial on [how to use custom concepts](https://www.biaslyze.org/tutorials/tutorial-working-with-custom-concepts/).
+Now we can check the model for bias indications within the relevant concepts provided by biaslyze. If you want to work with your own concepts or add to the given ones, please check out the tutorial on [how to use custom concepts](../../tutorials/tutorial-working-with-custom-concepts/).


```python

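For orientation, the collapsed code blocks that these hunks cut off are where the tutorials actually run the bias check described in the edited sentences. Below is a minimal sketch of such a call, assuming a fitted classifier `clf` with a `predict_proba` method and a list of raw documents `texts`; the class and argument names follow the biaslyze tutorials and are assumptions, not content of this commit.

```python
# Hypothetical sketch, not part of this commit: counterfactual token-based
# bias detection with biaslyze for the concept 'religion'.
from biaslyze.bias_detectors import CounterfactualBiasDetector

bias_detector = CounterfactualBiasDetector()

# texts: list of raw documents; clf: a fitted sklearn-style pipeline
# exposing predict_proba. Both are assumed to exist already.
detection_results = bias_detector.process(
    texts=texts,
    predict_func=clf.predict_proba,
    concepts_to_consider=["religion"],
)

# Summarize counterfactual score differences for the tested concept.
detection_results.report()
```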