erickfm/Neutrally

Neutrally

Text-to-text bias neutralization

This model is a fine-tuned checkpoint of the large language model T5-base. It was fine-tuned on the Wiki Neutrality Corpus (WNC), a labeled dataset of 180,000 biased and neutralized sentence pairs generated from Wikipedia edits tagged for "neutral point of view". The model achieves state-of-the-art (SOTA) performance on a test split of the WNC, with a BLEU score of 94.08 and an accuracy of 48.37, narrowly beating the previous SOTA work of Pryzant et al.
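As a sketch of how a fine-tuned T5 seq2seq checkpoint like this is typically used for text-to-text rewriting with the Hugging Face transformers library (the checkpoint path and the bare-sentence input format are illustrative assumptions, not details confirmed by this README):

```python
# Sketch: neutralizing a sentence with a fine-tuned T5 checkpoint via
# the Hugging Face transformers library. The checkpoint path below is a
# placeholder assumption; point it at the actual Neutrally weights.
from transformers import T5ForConditionalGeneration, T5Tokenizer

def neutralize(sentence: str, model, tokenizer, max_length: int = 128) -> str:
    """Map one (possibly biased) sentence to the model's rewrite."""
    inputs = tokenizer(sentence, return_tensors="pt", truncation=True)
    output_ids = model.generate(**inputs, max_length=max_length)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)

# Usage (downloads weights on first call):
# tokenizer = T5Tokenizer.from_pretrained("path/to/neutrally-checkpoint")
# model = T5ForConditionalGeneration.from_pretrained("path/to/neutrally-checkpoint")
# print(neutralize("He is a brilliant, groundbreaking scientist.", model, tokenizer))
```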

For more details about BLEU, see this wiki.
For more details about this project, visit our web app.
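To make the reported BLEU number concrete, here is a minimal self-contained sketch of sentence-level BLEU (modified n-gram precision with a brevity penalty, per Papineni et al.); real evaluations use a corpus-level implementation such as sacrebleu, so this is only illustrative:

```python
# Minimal sentence-level BLEU sketch: geometric mean of modified
# 1..4-gram precisions, scaled by a brevity penalty.
import math
from collections import Counter

def ngrams(tokens, n):
    """Count the n-grams in a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(candidate, reference, max_n=4):
    cand, ref = candidate.split(), reference.split()
    log_precisions = []
    for n in range(1, max_n + 1):
        cand_counts, ref_counts = ngrams(cand, n), ngrams(ref, n)
        overlap = sum((cand_counts & ref_counts).values())  # clipped matches
        total = max(sum(cand_counts.values()), 1)
        if overlap == 0:
            return 0.0  # any zero precision drives the geometric mean to zero
        log_precisions.append(math.log(overlap / total))
    # Brevity penalty: penalize candidates shorter than the reference.
    bp = 1.0 if len(cand) > len(ref) else math.exp(1 - len(ref) / max(len(cand), 1))
    return bp * math.exp(sum(log_precisions) / max_n)

print(round(bleu("the cat sat on the mat", "the cat sat on the mat"), 2))  # identical → 1.0
```

An exact match scores 1.0 (often reported as 100, or here 94.08 on a 0-100 scale), while a candidate sharing no n-grams with the reference scores 0.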


Train it on Colab

Try it out on HuggingFace
