Guardrails AI: Profanity Free - Validates that generated text does not contain profane language.


Overview

Developed by: Guardrails AI
Date of development: Feb 15, 2024
Validator type: Brand risk
Blog: -
License: Apache 2
Input/Output: Output

Description

This validator ensures that there is no profanity in generated text. It uses the alt-profanity-check package to check whether a string contains profane language.
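
For reference, the underlying package can also be called directly. The sketch below assumes alt-profanity-check's documented predict/predict_prob interface, which operates on a list of strings; the printed values are illustrative, not exact.

# Direct use of alt-profanity-check (a sketch; not part of this validator's API)
from profanity_check import predict, predict_prob

# predict returns 1 for profane strings and 0 otherwise;
# predict_prob returns the model's estimated probability of profanity
print(predict(["This is a polite sentence."]))       # [0]
print(predict_prob(["This is a polite sentence."]))  # e.g. [0.03]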

Intended use

This validator catches profanity in the English language only.

Resources required

  • Dependencies: alt-profanity-check

Installation

guardrails hub install hub://guardrails/profanity_free

Usage Examples

Validating string output via Python

In this example, we apply the validator to a string output generated by an LLM.

# Import Guard and Validator
from guardrails.hub import ProfanityFree
from guardrails import Guard

# Use the Guard with the validator
guard = Guard().use(ProfanityFree, on_fail="exception")

# Test passing response
guard.validate(
    """
    Director Denis Villeneuve's Dune is a visually stunning and epic adaptation of the classic science fiction novel.
    It is reminiscent of the original Star Wars trilogy, with its grand scale and epic storytelling.
    """
)

try:
    # Test failing response
    guard.validate(
        """
        He is such a dickhead and a fucking idiot.
        """
    )
except Exception as e:
    print(e)

Output:

Validation failed for field with errors: 
    He is such a dickhead and a fucking idiot.
contains profanity. Please return profanity-free output.

API Reference

__init__(self, on_fail="noop")

    Initializes a new instance of the Validator class.

    Parameters:

    • on_fail (str, Callable): The policy to enact when a validator fails. If a str, it must be one of reask, fix, filter, refrain, noop, exception, or fix_reask. Otherwise, it must be a function that is called when the validator fails.
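
For example, the guard can be configured with the filter policy to drop a failing output instead of raising; a minimal sketch, assuming the same Guard.use(...) interface shown in the usage example above:

from guardrails.hub import ProfanityFree
from guardrails import Guard

# With "filter", a failing value is filtered out of the output
# rather than raising an exception as "exception" does above
guard = Guard().use(ProfanityFree, on_fail="filter")
guard.validate("This sentence passes, and nothing is raised.")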

__call__(self, value, metadata={}) -> ValidationResult

    Validates the given value using the rules defined in this validator, relying on the metadata provided to customize the validation process. This method is automatically invoked by guard.parse(...), ensuring the validation logic is applied to the input data.

    Note:

    1. This method should not be called directly by the user. Instead, invoke guard.parse(...), which calls this method internally for each associated Validator.
    2. When invoking guard.parse(...), be sure to pass a metadata dictionary that includes the keys and values required by this validator. If the guard is associated with multiple validators, combine all necessary metadata into a single dictionary.

    Parameters:

    • value (Any): The input value to validate.
    • metadata (dict): A dictionary containing metadata required for validation. No additional metadata keys are needed for this validator.
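
A short usage sketch follows. It assumes guard.parse(...) returns a ValidationOutcome exposing a validation_passed attribute (an assumption about the Guard API, not stated above); no metadata is passed because this validator needs none.

from guardrails.hub import ProfanityFree
from guardrails import Guard

guard = Guard().use(ProfanityFree, on_fail="exception")

# parse() invokes each validator's __call__ internally;
# ProfanityFree requires no extra metadata keys
outcome = guard.parse("The movie was a triumph of practical effects.")
print(outcome.validation_passed)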
