the-algorithm/trust_and_safety_models

Trust and Safety Models

We decided to open source the training code of the following models:

  • pNSFWMedia: Model to detect tweets with NSFW images. This includes adult and porn content.
  • pNSFWText: Model to detect tweets with NSFW text, such as adult/sexual topics.
  • pToxicity: Model to detect toxic tweets. Toxicity includes marginal content like insults and certain types of harassment. Toxic content does not violate Twitter's terms of service.
  • pAbuse: Model to detect abusive content. This covers violations of Twitter's terms of service, including hate speech, targeted harassment and abusive behavior.

We have several more models and rules that we are not going to open source at this time because of the adversarial nature of this area. The team is considering open sourcing more models going forward and will keep the community posted accordingly.
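For readers unfamiliar with how classifiers like these are typically consumed, the sketch below shows a generic inference flow for a text-based model such as pToxicity or pNSFWText: tokenize the tweet text, run the model, and read off a probability score. This is not this repository's API; the use of the Hugging Face transformers library, the placeholder checkpoint name, and the assumption that label index 1 is the positive class are all illustrative assumptions, and the actual model architectures and preprocessing are defined by the training code in the corresponding subdirectories.

```python
# Hypothetical inference sketch for a text-based trust and safety classifier.
# The checkpoint name and label ordering below are assumptions for illustration;
# this is not the training or serving code shipped in this repository.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL_NAME = "your-org/toxicity-classifier"  # placeholder, not a real checkpoint

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME)
model.eval()


def score_tweet(text: str) -> float:
    """Return the probability that `text` is flagged (assumes label index 1 = positive)."""
    inputs = tokenizer(text, truncation=True, max_length=128, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    probs = torch.softmax(logits, dim=-1)
    return probs[0, 1].item()


if __name__ == "__main__":
    print(f"pToxicity score: {score_tweet('example tweet text'):.3f}")
```

In practice, scores like this are thresholded or fed as features into downstream ranking and enforcement systems rather than used as a standalone yes/no decision.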