Content Moderation

Use Cleanlab to quickly improve your content moderation datasets, models, and processes.

Find labeling errors, decide when another review is appropriate, discover good/bad moderators, and deploy robust ML models in 1-click.

Case Study: Toxic Language Detection @ VAST-OSINT

Quote from David Knickerbocker, CTO of VAST-OSINT

A while back, I made a toxic language classifier. However, I was unsatisfied with the training data, […] I split the text by sentences while retaining the original label, hoping I'd be able to quickly clean up, but that didn't work well.

I took the sentence-labeled training data and threw it at cleanlab to see how well confident learning could identify the incorrect labels. These results look amazing to me.

If nothing else, this can help identify training data to TOSS if you don't want to automate correction.
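
For readers who want to reproduce this kind of workflow with the open-source cleanlab library, here is a minimal sketch. The toy sentences, labels, and scikit-learn classifier below are hypothetical stand-ins for VAST-OSINT's actual model and data:

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict
from cleanlab.filter import find_label_issues

# Hypothetical sentence-level data (0 = non-toxic, 1 = toxic);
# one label below is deliberately wrong.
sentences = [
    "have a lovely day",
    "you are a complete idiot",
    "thanks for the helpful answer",
    "nobody wants you here, get lost",
    "what a thoughtful comment",  # mislabeled as toxic below
    "shut up, you moron",
]
labels = np.array([0, 1, 0, 1, 1, 1])

# Out-of-sample predicted probabilities from any text classifier,
# obtained via cross-validation.
features = TfidfVectorizer().fit_transform(sentences)
pred_probs = cross_val_predict(
    LogisticRegression(), features, labels, cv=2, method="predict_proba"
)

# Confident learning compares each given label against the model's
# confident predictions and returns likely-mislabeled examples, worst first.
issue_indices = find_label_issues(
    labels=labels, pred_probs=pred_probs, return_indices_ranked_by="self_confidence"
)
print([(sentences[i], labels[i]) for i in issue_indices])
```

Because the returned indices are ranked worst-first, you can do exactly what the quote suggests: toss (or re-label) the most suspect training examples without reviewing the whole dataset.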

VAST-OSINT is on a quest to tame the web into safe, secure, and on-demand data streams that help customers isolate and remediate misinformation, detect influence operations, and keep their companies and customers safer.

Case Study: ShareChat

  ·  Cleanlab automatically identified an error rate of 3% in the concept categorization process for content in the Moj video-sharing app. Shown are a couple of mis-categorized examples that Cleanlab detected in the app.

  ·  For this dataset, Cleanlab Studio’s AutoML automatically produced a more accurate visual concept classification model (56.7% accuracy) than ShareChat’s in-house 3D ResNet model (52.6% accuracy). Auto-correcting the dataset immediately boosted the Cleanlab AutoML model’s accuracy to 58.9% (see the bar chart below).

ShareChat is India’s largest native social media app with over 300 million users. The company employs large teams of content moderators to categorize user video content in many ways.

Graph: results achieved with Cleanlab on a real ShareChat dataset

HOW CLEANLAB HELPS YOU BETTER MODERATE CONTENT

Videos on using Cleanlab Studio to find and fix incorrect values in your content datasets.

Train and deploy state-of-the-art content moderation/categorization models (with well-calibrated uncertainty estimates) in 1-click. Cleanlab Studio automatically applies the most suitable Foundation/LLM models and AutoML systems for your content. Learn more
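
As a rough sketch of the programmatic side of this workflow, assuming the `cleanlab_studio` Python client (`pip install cleanlab-studio`): the API key, file name, and model ID below are placeholders, model training and deployment happen with 1-click in the Studio web app, and the exact `predict` input types vary by data modality.

```python
from cleanlab_studio import Studio

# Placeholder credentials/IDs; obtain these from your Cleanlab Studio account.
studio = Studio("<your_api_key>")

# Upload a labeled content dataset; a model is then trained and deployed
# with 1-click in the Studio web app.
dataset_id = studio.upload_dataset("moderation_data.csv", dataset_name="moderation")

# Query the deployed model for predictions on new content.
model = studio.get_model("<deployed_model_id>")
predictions = model.predict(["this post looks like spam", "what a great video!"])
print(predictions)
```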

Quickly find and fix issues in a content dataset (categorization errors, outliers, ambiguous examples, near duplicates) — and then easily deploy a more reliable ML model. Read More
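
A minimal open-source sketch of this audit, assuming a small hypothetical pandas DataFrame of text content with a `label` column, plus out-of-sample `pred_probs` and numeric `features` from any model (Cleanlab Studio automates these steps behind its UI):

```python
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict
from cleanlab import Datalab

# Hypothetical content dataset with one label column;
# the last label looks wrong and two texts are near duplicates.
df = pd.DataFrame({
    "text": ["buy cheap meds now", "lovely sunset photo", "lovely sunset photo!",
             "win a free iphone", "family dinner recipe", "you utter fool"],
    "label": ["spam", "safe", "safe", "spam", "safe", "safe"],
})

features = TfidfVectorizer().fit_transform(df["text"]).toarray()
pred_probs = cross_val_predict(
    LogisticRegression(), features, df["label"], cv=2, method="predict_proba"
)

# One audit covers label errors, outliers, near duplicates, and more.
lab = Datalab(data=df, label_name="label")
lab.find_issues(pred_probs=pred_probs, features=features)
lab.report()

label_issues = lab.get_issues("label")  # per-example flags and quality scores
```

The per-example flags and quality scores can drive relabeling before you retrain and deploy a more reliable model.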

Determine which of your moderators are performing best/worst overall. Read More
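
A minimal sketch with cleanlab's open-source multi-annotator tools, using hypothetical moderator labels (rows = content items, columns = moderators, NaN where a moderator did not review an item) and toy model probabilities; the same analysis underlies the politeness-labels article linked below:

```python
import numpy as np
import pandas as pd
from cleanlab.multiannotator import get_label_quality_multiannotator

# Hypothetical moderation decisions: 0 = allow, 1 = remove.
multi_labels = pd.DataFrame({
    "mod_A": [0, 1, 0, 1, np.nan],
    "mod_B": [0, 1, 1, 1, 0],
    "mod_C": [1, np.nan, 0, 1, 0],
})

# Out-of-sample predicted class probabilities from any model (toy values).
pred_probs = np.array([
    [0.9, 0.1],
    [0.2, 0.8],
    [0.7, 0.3],
    [0.1, 0.9],
    [0.8, 0.2],
])

results = get_label_quality_multiannotator(multi_labels, pred_probs)
# Per-moderator quality scores: lower = disagrees more with consensus/model.
print(results["annotator_stats"].sort_values("annotator_quality"))
```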

Confidently make model-assisted moderation decisions in real-time, deciding when to flag content for human review, and when to request a second/third review (for hard examples). Read More
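
The decision logic itself can be as simple as a confidence-threshold policy over the deployed model's predicted probabilities; the thresholds below are illustrative placeholders, not Cleanlab defaults:

```python
import numpy as np

AUTO_THRESHOLD = 0.95      # act on the model's decision automatically
ESCALATE_THRESHOLD = 0.70  # below this, the example is "hard"

def route(pred_probs: np.ndarray) -> str:
    """Route one piece of content given its class-probability vector."""
    confidence = float(pred_probs.max())
    if confidence >= AUTO_THRESHOLD:
        return "auto-moderate"        # trust the model
    if confidence >= ESCALATE_THRESHOLD:
        return "single human review"  # flag for one moderator
    return "multi-review"             # request a second/third opinion

print(route(np.array([0.98, 0.02])))  # -> auto-moderate
print(route(np.array([0.60, 0.40])))  # -> multi-review
```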

Read about analyzing politeness labels provided by multiple data annotators.

Read about automatic error detection for image/text tagging datasets.
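
For tagging (multi-label) data, where each item can carry several tags at once, here is a sketch using cleanlab's multi-label quality scores; the tag names, label sets, and probabilities are all hypothetical:

```python
import numpy as np
from cleanlab.multilabel_classification import get_label_quality_scores

# Hypothetical tag indices: 0 = "violence", 1 = "spam", 2 = "adult".
labels = [[0], [1, 2], [1], [2]]  # given tag sets per item
pred_probs = np.array([           # per-tag (sigmoid) probabilities per item
    [0.9, 0.1, 0.2],
    [0.1, 0.8, 0.7],
    [0.2, 0.1, 0.9],  # model disagrees with this item's "spam" tag
    [0.1, 0.2, 0.8],
])

# One score per item; low scores flag tag sets that disagree with the model.
scores = get_label_quality_scores(labels, pred_probs)
print(np.argsort(scores))  # items ordered most-suspect first
```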