From 7b1e3c074b8bd0da1bf091c5a13658ce41801b27 Mon Sep 17 00:00:00 2001
From: Nicy Scaria <68124740+nicyscaria@users.noreply.github.com>
Date: Wed, 4 Sep 2024 08:45:40 +0000
Subject: [PATCH] deleted deeplearning notes

---
 _pages/research.md | 3 ---
 1 file changed, 3 deletions(-)

diff --git a/_pages/research.md b/_pages/research.md
index d5ecfb6d024d6..52e039efa0491 100644
--- a/_pages/research.md
+++ b/_pages/research.md
@@ -11,9 +11,6 @@ author_profile: true
 {% endif %}
 
 
-
-
-
 ### ✅ Assessment of Large Language Models’ Ability to Generate Relevant and High-Quality Questions at Different Bloom’s Skill Levels
 We examined the ability of five state-of-the-art LLMs to generate relevant and high-quality questions of different cognitive levels, as defined by Bloom's taxonomy. We prompted each model with the same instructions and different contexts to generate 510 questions. Two human experts used a ten-item rubric to assess the linguistic and pedagogical relevance and quality of the questions. Our findings suggest that LLMs can generate relevant and high-quality educational questions of different cognitive levels, making them useful for creating assessments.