Foundation and Large Language Models

Fix data issues to better evaluate, fine-tune, or train foundation models. Cleanlab helps you practice data curation like the best generative AI teams.

Beyond improving the dataset, Cleanlab Studio lets you train and deploy foundation models on messy real-world data in a few clicks. The AI that Cleanlab uses to detect issues in your dataset is itself powered by such models, which are automatically fit to your data.

Practice data curation like the best generative AI teams

What’s the common thread across leading organizations with the best AI models? Relentless focus on data curation rather than inventing novel models or training algorithms.

Cleanlab software helps you effectively curate data without large teams of experts. Easily build your own data engine like those that power leading AI teams!

Since training data shapes the capabilities of any learned model, data filtering is a powerful tool for limiting undesirable model capabilities.

We prioritized filtering out all of the bad data over leaving in all of the good data. This is because we can always fine-tune our model with more data later to teach it new things, but it’s much harder to make the model forget something that it has already learned.

— OpenAI blog on DALL·E 2, describing how they produced one of the best available generative image models.

If you teach the model something wrong, it will lock that in and forever believe that wrong thing.

I was not prepared for how sensitive some of these models are.

— Aidan Gomez (Founder & CEO of Cohere), speaking about the data sensitivity of LLM training on the Weights & Biases podcast

Data is the new oil but it's got to be clean data.

The valuable data here is the content that is structured in the world that allows you to learn principles, as opposed to, again, the big data age where it was about as much data as possible to extract patterns.

— Emad Mostaque, founder and CEO of Stability.ai, on the Infinite Loops podcast

I do believe all the great labs are actually pouring huge amounts of energy into cleaning their data.

— Nat Friedman, former CEO of GitHub and investor in hundreds of AI startups, via Stratechery

At Tesla, I spend most of my time just massaging the datasets, and this takes a huge amount of work/effort and you want to do it extremely well.

— Andrej Karpathy, former Director of AI at Tesla and co-founder of OpenAI, speaking at Spark+AI Summit

Read more about this topic: Data Engine Design by George Pearse, ML Engineer at Binit.AI
When fine-tuning OpenAI GPT models on a text classification task (politeness prediction), correcting label errors with Cleanlab Studio improved test accuracy by 37% without any change to the modeling/fine-tuning code; only the dataset was modified. Read more.
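As a rough illustration of this workflow, here is a hedged sketch using Cleanlab's open-source cleanlab library (not the exact Cleanlab Studio pipeline) to surface likely label errors before fine-tuning. The toy texts, labels, and baseline classifier are placeholders for your own dataset and model.

```python
# Sketch: flag likely label errors in a text classification dataset before fine-tuning.
# The toy data and baseline model are placeholders; in practice, pred_probs should come
# from your own model and be computed out-of-sample (e.g. via cross-validation).
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict
from sklearn.pipeline import make_pipeline
from cleanlab.filter import find_label_issues

texts = [
    "Thanks so much for your help!",       # polite
    "Could you please take a look?",       # polite
    "I really appreciate your patience.",  # polite
    "Have a wonderful day!",               # polite, but mislabeled below
    "Do it now.",                          # impolite
    "Why is this still broken?",           # impolite
    "Stop wasting my time.",               # impolite
    "This is useless.",                    # impolite
]
labels = np.array([1, 1, 1, 0, 0, 0, 0, 0])  # 1 = polite, 0 = impolite; index 3 is wrong

# Out-of-sample predicted probabilities from any classifier via cross-validation.
baseline = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
pred_probs = cross_val_predict(baseline, texts, labels, cv=3, method="predict_proba")

# Indices of examples whose given label looks wrong, ranked by severity.
issue_idx = find_label_issues(labels, pred_probs, return_indices_ranked_by="self_confidence")
print("Review these examples before fine-tuning:", issue_idx)
```

Fixing (or dropping) the flagged examples and then fine-tuning on the cleaned dataset is the same idea behind the accuracy gain reported above.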
Effortlessly detect errors in reinforcement learning from human feedback (RLHF) data. Here is an example of a human error in the Anthropic RLHF dataset found with Cleanlab Studio, where the human-rejected LLM completion is unequivocally better than the human-chosen one. The annotator who provided this feedback simply made a mistake! Read more.
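One way such a check can be expressed in code (a hedged sketch, not the Cleanlab Studio pipeline): treat each (chosen, rejected) pair as a two-class example and let the open-source cleanlab library flag pairs where an independent reward model confidently disagrees with the human preference. The toy_reward function and example pairs below are placeholders for a real reward model (or LLM judge) and a real preference dataset.

```python
# Sketch: flag suspect pairs in pairwise (RLHF-style) preference data by checking
# where a reward model confidently disagrees with the human-chosen completion.
import numpy as np
from cleanlab.filter import find_label_issues

def toy_reward(prompt: str, completion: str) -> float:
    # Placeholder reward model: replace with a real reward model or LLM judge.
    return float(len(completion.split()))

def pairwise_pred_probs(pairs, reward):
    """Return (labels, pred_probs), where the label is the index of the human-chosen option."""
    labels, probs = [], []
    for i, p in enumerate(pairs):
        options = [p["chosen"], p["rejected"]]
        if i % 2 == 1:                      # alternate presentation order so both classes occur
            options = options[::-1]
        labels.append(options.index(p["chosen"]))
        scores = np.array([reward(p["prompt"], o) for o in options])
        z = np.exp(scores - scores.max())   # softmax over the two reward scores
        probs.append(z / z.sum())
    return np.array(labels), np.array(probs)

preference_pairs = [  # placeholder data in the (prompt, chosen, rejected) format of RLHF datasets
    {"prompt": "Explain photosynthesis.",
     "chosen": "Plants use light, water, and CO2 to make glucose and oxygen.",
     "rejected": "It is a plant thing."},
    {"prompt": "Summarize the water cycle.",
     "chosen": "Evaporation, condensation, precipitation, and collection.",
     "rejected": "Water goes up and comes down."},
    {"prompt": "What is 2 + 2?",
     "chosen": "Probably 5.",
     "rejected": "2 + 2 = 4, by basic arithmetic."},  # this human label looks wrong
    {"prompt": "Name a primary color.",
     "chosen": "Red is a primary color.",
     "rejected": "Purple."},
]

labels, pred_probs = pairwise_pred_probs(preference_pairs, toy_reward)
suspect = find_label_issues(labels, pred_probs, return_indices_ranked_by="self_confidence")
print("Preference pairs to re-review:", suspect)
```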
Automatically flag low-quality examples in any image dataset. Cleanlab software can report which images are blurry, under/over-exposed, oddly sized, low-information, or (near) duplicates of others. Handling such issues is important in generative AI and computer vision (especially for diagnosing spurious correlations). Read more.
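For a programmatic counterpart to these checks, Cleanlab's open-source CleanVision package can scan a folder of images for the same kinds of issues. A minimal sketch, assuming `pip install cleanvision`; the folder path is a placeholder:

```python
# Sketch: scan an image folder for common quality issues with CleanVision.
from cleanvision import Imagelab

imagelab = Imagelab(data_path="path/to/your/images")  # any local folder of image files
imagelab.find_issues()  # checks include blurry, dark/light (exposure), odd size,
                        # low information, and exact/near duplicates
imagelab.report()       # prints a summary of the detected issues

# Per-image results are available as a DataFrame for downstream filtering/curation.
issues_df = imagelab.issues
print(issues_df.head())
```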
Accelerate data labeling for Transformer models. ActiveLab greatly reduces the time and labeling cost needed to reach a given level of model performance compared to standard data annotation. For example, ActiveLab reaches 90% model accuracy with only 35% of the labeling spend required by standard training. Read more.
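A hedged sketch of how ActiveLab can be driven from code via the open-source cleanlab library (cleanlab.multiannotator). The tiny arrays below are placeholders; in practice the predicted probabilities would come from your Transformer model, ideally computed out-of-sample, and the annotator labels from your labeling tool.

```python
# Sketch: use ActiveLab scores to pick which examples to (re)label next.
import numpy as np
import pandas as pd
from cleanlab.multiannotator import get_active_learning_scores

# Labels from multiple annotators for already-labeled examples (NaN = not annotated).
labels_multiannotator = pd.DataFrame({
    "annotator_1": [0, 1, np.nan, 1],
    "annotator_2": [0, np.nan, 1, 0],
})

# Model-predicted class probabilities for the labeled set and the unlabeled pool.
pred_probs = np.array([[0.9, 0.1], [0.3, 0.7], [0.4, 0.6], [0.55, 0.45]])
pred_probs_unlabeled = np.array([[0.5, 0.5], [0.95, 0.05], [0.6, 0.4]])

scores_labeled, scores_unlabeled = get_active_learning_scores(
    labels_multiannotator, pred_probs, pred_probs_unlabeled
)

# Lower scores indicate examples that would benefit most from another label;
# send the lowest-scoring unlabeled examples to annotators next.
next_batch = np.argsort(scores_unlabeled)[:2]
print("Unlabeled examples to annotate next:", next_batch)
```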