John Snow Labs NLP Test 1.4.0: Enhancing Support for Toxicity test and new QA benchmark datasets (NarrativeQA, TruthfulQA, QuAC, HellaSwag, MMLU and OpenbookQA) #501
ArshaanNazir announced in Announcements
📢 Overview
NLP Test 1.4.0 🚀 comes with brand-new features, including new capabilities for testing Large Language Models for toxicity, and support for new QA benchmark datasets (NarrativeQA, TruthfulQA, QuAC, HellaSwag, MMLU and OpenbookQA) in robustness, representation, fairness and accuracy tests. It also adds several new robustness tests, along with many other enhancements and bug fixes!
A big thank you to our early-stage community for their contributions, feedback, questions, and feature requests 🎉
Make sure to give the project a star right here ⭐
🔥 New Features & Enhancements
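Highlights include toxicity testing for Large Language Models and the new QA benchmark datasets listed above. As a rough sketch of how these might be exercised (the model name, hub, and dataset identifier below are assumptions for illustration, not documented 1.4.0 defaults; consult the docs for the exact identifiers):

```python
from nlptest import Harness

# Hypothetical sketch: evaluate an LLM on one of the newly supported
# QA benchmarks. The model name, hub, and data identifier are assumed
# values for illustration only.
harness = Harness(
    task="question-answering",
    model="gpt-3.5-turbo",        # assumed model name
    hub="openai",                 # assumed hub identifier
    data="TruthfulQA-test-tiny",  # assumed dataset identifier
)

# Generate test cases, run them against the model, and report the results
harness.generate().run().report()
```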
❓ How to Use
Get started now! 👇
Create your test harness in 3 lines of code 🧪
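For example, a minimal sketch testing a named-entity recognition model from the Hugging Face hub (the model name is illustrative):

```python
from nlptest import Harness

# Wrap a pretrained NER model from the Hugging Face hub in a test harness
h = Harness(task="ner", model="dslim/bert-base-NER", hub="huggingface")

# Generate test cases, run them, and summarize results per test type
h.generate().run().report()
```

The final report summarizes pass/fail results per test type.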
📖 Documentation
❤️ Community support
Join us on the #nlptest channel. We would love to have you join the mission 👉 open an issue, a PR, or give us some feedback on features you'd like to see! 🙌
♻️ Changelog
What's Changed
New Contributors
Full Changelog: v1.3.0...v1.4.0