Supported Languages in Models - explained

Our filter models (excluding DeepSense) are trained on English, so messages submitted in other languages may be incorrectly blocked (false positives). Conversely, these filters cannot detect toxic or inappropriate language written in any language other than English.

At present, the DeepSense filter supports the following languages simultaneously:

  1. English
  2. Italian
  3. Spanish
  4. Russian
  5. Portuguese
  6. Turkish

This means that a community that speaks these languages can benefit from the filter to encourage inclusive language.
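If you route messages programmatically, you may want to check whether a message's language is covered before relying on DeepSense for it. The sketch below is illustrative only: the language codes simply mirror the list above (as ISO 639-1 codes), and the set and function names are hypothetical, not part of any ProfanityBlocker API.

```python
# Hypothetical mapping of ISO 639-1 codes to the DeepSense-supported
# languages listed above. Not an official ProfanityBlocker constant.
DEEPSENSE_LANGUAGES = {
    "en": "English",
    "it": "Italian",
    "es": "Spanish",
    "ru": "Russian",
    "pt": "Portuguese",
    "tr": "Turkish",
}

def is_supported(lang_code: str) -> bool:
    """Return True if DeepSense covers the given ISO 639-1 code."""
    return lang_code.lower() in DEEPSENSE_LANGUAGES

print(is_supported("en"))  # True
print(is_supported("fr"))  # False: French is not currently supported
```

How you obtain the language code for a message (e.g. from user settings or a language-detection step) is up to your integration.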

It is important to understand that processing several languages, without knowing in advance which language the input text is in, increases the model's hardware requirements. For this reason, communities that require multi-language support will, after the beta, be placed on a plan above Premium (e.g. Pro).

If you know you only need filtering in a specific language, we may release a single-language mode in the future. If this is of interest to you, please reach out to us.


Resources

  • Page:
    Custom Model Filter - explained (ProfanityBlocker Support)
  • Page:
    Inviting and bot setup (ProfanityBlocker Support)
  • Page:
    False Positive Reporting - explained (ProfanityBlocker Support)
  • Page:
    Choosing which filter method to use - explained (ProfanityBlocker Support)