
enhance base metric #45

Open
faneshion opened this issue Feb 22, 2024 · 1 comment

faneshion commented Feb 22, 2024

Enhance the base metric for robust evaluation:

  1. Add an attribute `self.task` to distinguish between input formats.
  2. Add a method `def _validate_data(self, input: Dataset) -> bool` to check the validity of the input data (see the sketch after this list).
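
A minimal sketch of what this could look like on the base class; the `datasets.Dataset` input type and the required-column check are assumptions for illustration, not the settled design:

```python
from abc import ABC, abstractmethod
from typing import List

from datasets import Dataset


class Metric(ABC):
    """Hypothetical base class; attribute names follow the proposal above."""

    def __init__(self, task: str):
        # Task type (e.g. "open_qa", "summarization") used to
        # distinguish the expected input format.
        self.task = task

    @property
    @abstractmethod
    def required_columns(self) -> List[str]:
        """Columns this metric expects in the input dataset."""
        ...

    def _validate_data(self, input: Dataset) -> bool:
        # The data is valid iff every required column is present.
        return all(col in input.column_names for col in self.required_columns)
```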
faneshion added the enhancement (New feature or request) label on Feb 22, 2024
faneshion added this to the Version 0.1 milestone on Feb 22, 2024

faneshion commented Feb 23, 2024

The `task` attribute will be moved out of the `Metric` class. We can divide all metrics into four categories: AnswerCorrectness, AnswerGroundedness, ContextRelevancy, and ContextAdequacy.
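
Spelled out, the four categories could live in a small enum (the enum itself is only an illustration of the split, not part of the proposal):

```python
from enum import Enum


class MetricType(Enum):
    ANSWER_CORRECTNESS = "AnswerCorrectness"
    ANSWER_GROUNDEDNESS = "AnswerGroundedness"
    CONTEXT_RELEVANCY = "ContextRelevancy"
    CONTEXT_ADEQUACY = "ContextAdequacy"
```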

For each metric, we can use a class decorator to attach the metric-type attribute (`mtype`):

```python
def add_attribute(attribute_name, attribute_value):
    # Class decorator that sets a class-level attribute.
    def decorator(cls):
        setattr(cls, attribute_name, attribute_value)
        return cls
    return decorator


# Use the decorator to define the class attribute.
@add_attribute('mtype', 'AnswerGroundedness')
class _em_answer(Metric):
    pass


# Create an instance of the class and access the attribute
# (assuming the class is exposed as rl.metrics._em_answer).
metric = rl.metrics._em_answer()
print(metric.mtype)  # Output: AnswerGroundedness
```
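
Combining the two ideas, `mtype` could then drive validation. A sketch under the same assumptions as above; the column names per category are hypothetical, not the actual API:

```python
# Hypothetical mapping from metric category to the dataset
# columns that category needs; names are illustrative only.
REQUIRED_COLUMNS = {
    'AnswerCorrectness': ['answers', 'gt_answers'],
    'AnswerGroundedness': ['answers', 'contexts'],
    'ContextRelevancy': ['questions', 'contexts'],
    'ContextAdequacy': ['contexts', 'gt_answers'],
}


def validate_for_mtype(metric, dataset) -> bool:
    # Look up the columns implied by the metric's category and
    # check that the dataset provides all of them.
    required = REQUIRED_COLUMNS[metric.mtype]
    return all(col in dataset.column_names for col in required)
```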
