From 081d09ae2839e648d24b581a2ad99f9e9ba9c211 Mon Sep 17 00:00:00 2001
From: Nicy Scaria <68124740+nicyscaria@users.noreply.github.com>
Date: Wed, 4 Sep 2024 08:30:20 +0000
Subject: [PATCH] Update deep learning notes

---
 _pages/research.md | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/_pages/research.md b/_pages/research.md
index b20ae6d6b53f4..2d5f169417586 100644
--- a/_pages/research.md
+++ b/_pages/research.md
@@ -12,6 +12,8 @@ author_profile: true
 {% endif %}
 
+This [website](/research/) is awesome.
+
 ### ✅ Assessment of Large Language Models’ Ability to Generate Relevant and High-Quality Questions at Different Bloom’s Skill Levels
 We examined the ability of five state-of-the-art LLMs to generate relevant and high-quality questions of different cognitive levels, as defined by Bloom's taxonomy. We prompted each model with the same instructions and different contexts to generate 510 questions. Two human experts used a ten-item rubric to assess the linguistic and pedagogical relevance and quality of the questions. Our findings suggest that LLMs can generate relevant and high-quality educational questions of different cognitive levels, making them useful for creating assessments.