
Add distinct_1/2 metric #108

Open · wants to merge 3 commits into base: main

Conversation

moshesbeta (Collaborator):

I added a new evaluation metric, Distinct-1/2, for the generate task evaluation. I have uploaded the new scripts "_answer_distinct12.py" and "test_answer_distinct12.py", along with a modified version of "__init__.py".
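
For context, Distinct-1/2 measures lexical diversity as the ratio of unique unigrams/bigrams to the total number of n-grams in the generated answers. A minimal sketch of that computation, assuming corpus-level pooling over all hypotheses (illustrative only; the actual _answer_distinct12.py may differ in signature and aggregation):

from nltk import ngrams

def distinct_n(hypotheses, n):
    # Pool n-grams over all hypotheses, then take unique / total.
    total = 0
    unique = set()
    for hyp in hypotheses:
        grams = list(ngrams(hyp.split(), n))
        total += len(grams)
        unique.update(grams)
    return len(unique) / total if total else 0.0

hypotheses = ["the cat sat on the mat", "the dog sat on the log"]
distinct_1 = distinct_n(hypotheses, 1)  # unique unigrams / total unigrams
distinct_2 = distinct_n(hypotheses, 2)  # unique bigrams / total bigrams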

import datasets
from nltk import ngrams
from rageval.metrics import Metric, add_attribute

Collaborator:

The import/from statements here do not follow the standardized style.

"hypothesis": datasets.Value("string"),
}
),
codebase_urls=["https://github.com/Hannibal046/SelfMemory/blob/58d8b611ad51605091c7555c0f32dce6702dadbf/src/utils/metrics_utils.py"],
Collaborator:

It would be better to replace this reference link with https://github.com/Hannibal046/SelfMemory/blob/main/src/utils/metrics_utils.py



@dataclass
@add_attribute('mtype', 'Diversity')
Collaborator:

Maybe 'Diversity' is not a good metric type.

Collaborator:

This metric lacks a unit test file.

_answer_precision is not a suitable file name. It is recommended to rename it to _answer_perplexity.
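
As a starting point, a unit test file could follow the naming pattern of the other test scripts in this PR. A minimal pytest sketch (the class name AnswerPerplexity and its import path are assumptions and should be adjusted to the actual code):

from rageval.metrics import AnswerPerplexity  # hypothetical class name, adjust to the real one

def test_metric_instantiation():
    # Smoke test: the metric class can be constructed and registered.
    metric = AnswerPerplexity()
    assert metric is not None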

from rageval.metrics import Metric, add_attribute


_CITATION = """\
Collaborator:

This citation doesn't seem to be correct.

longer than the max input length of the model, then it is truncated to the
max length for the perplexity computation.

Examples:
Collaborator:

This makes it difficult to pass CI tests in an environment without a GPU.
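
One way to keep such a test runnable on CPU-only CI is to skip the heavy case when CUDA is unavailable, or to point the pipeline at a very small model. A sketch of the skip guard (test and function names here are illustrative, not the actual test code):

import pytest
import torch

requires_gpu = pytest.mark.skipif(
    not torch.cuda.is_available(),
    reason="perplexity evaluation with a full-size model needs a GPU",
)

@requires_gpu
def test_answer_perplexity_full_model():
    ...  # run the metric with the full-size model here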

import evaluate
from evaluate import logging
from rageval.metrics import Metric, add_attribute

Collaborator:

The import/from statements here do not follow the standardized style.

self,
predictions: List[str],
pipeline,
) -> Tuple[float, List[float]]:
Collaborator:

Where is the entry point of the _compute() method?
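
If the base Metric class does not already dispatch to _compute(), the file may need a public entry point along these lines (purely a sketch of one possible wiring, not rageval's actual API):

# inside the metric class, alongside _compute()
def compute(
    self,
    predictions: List[str],
    pipeline,
) -> Tuple[float, List[float]]:
    # Public entry point: validate inputs, then delegate to the private helper.
    if not predictions:
        raise ValueError("predictions must be a non-empty list of strings")
    return self._compute(predictions, pipeline)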
