Commit

Evaluated Prompt & added uniform statistics
Authored and committed by Vi Thien Le on Dec 13, 2023
1 parent 77ad6d3 commit 7ffdc40
Showing 14 changed files with 184 additions and 114 deletions.
135 changes: 90 additions & 45 deletions concept_linking/data/files/EvaluationData/Results/Prompt_DK.json

Large diffs are not rendered by default.

155 changes: 90 additions & 65 deletions concept_linking/data/files/EvaluationData/Results/Prompt_EN.json

Large diffs are not rendered by default.

Binary file not shown.
Binary file not shown.
4 changes: 2 additions & 2 deletions concept_linking/tools/Evaluation/Evaluation.py
@@ -10,7 +10,7 @@ def read_scores_from_json(file_path):
     return scores
 
 
-def evaluate_dataset(scores):
+def evaluate_dataset(scores, title):
     # Counter for "?" occurrences
     question_mark_count = 0
 
@@ -36,7 +36,7 @@ def evaluate_dataset(scores):
     plt.text(1.01, 0.75, f"SD: {std_dev:.2f}", transform=plt.gca().transAxes)
     plt.text(1.01, 0.7, f"?: {question_mark_count} occurrences", transform=plt.gca().transAxes)
 
-    plt.title('Distribution of Points: MAGAG on Danish Dataset')
+    plt.title(title)
     plt.xlabel('Score')
     plt.ylabel('Frequency')
     plt.xticks(np.arange(0, 1.1, 0.1))
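For context, a minimal runnable sketch of the evaluation script after this change. The diff only shows a few lines, so the surrounding pieces here (JSON loading, the "?" counting, the histogram call, and the `summarize_scores` helper) are assumptions about what the elided code does, not the repository's actual implementation:

```python
import json

import matplotlib
matplotlib.use("Agg")  # headless-safe backend; an assumption, not in the original
import matplotlib.pyplot as plt
import numpy as np


def read_scores_from_json(file_path):
    # Assumed format: the results JSON holds a flat list of scores,
    # with "?" marking entries the evaluator could not grade.
    with open(file_path) as f:
        return json.load(f)


def summarize_scores(scores):
    # Hypothetical helper: split into numeric scores and a count of "?" placeholders.
    question_mark_count = sum(1 for s in scores if s == "?")
    numeric = [float(s) for s in scores if s != "?"]
    return numeric, question_mark_count


def evaluate_dataset(scores, title):
    numeric, question_mark_count = summarize_scores(scores)
    mean = np.mean(numeric)
    std_dev = np.std(numeric)

    plt.hist(numeric, bins=np.arange(0, 1.1, 0.1), edgecolor="black")
    plt.text(1.01, 0.8, f"Mean: {mean:.2f}", transform=plt.gca().transAxes)
    plt.text(1.01, 0.75, f"SD: {std_dev:.2f}", transform=plt.gca().transAxes)
    plt.text(1.01, 0.7, f"?: {question_mark_count} occurrences",
             transform=plt.gca().transAxes)

    # After this commit the title is a parameter rather than a hard-coded string,
    # so one function serves every prompt/solution/language combination.
    plt.title(title)
    plt.xlabel('Score')
    plt.ylabel('Frequency')
    plt.xticks(np.arange(0, 1.1, 0.1))
```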
4 changes: 2 additions & 2 deletions concept_linking/tools/Evaluation/main.py
@@ -5,8 +5,8 @@
 
 
 if __name__ == "__main__":
-    name_of_result = "String_DK"
+    name_of_result = "ML_DK"
     json_file_path = os.path.join(PROJECT_ROOT, "data/files/EvaluationData/Results/" + name_of_result + ".json")
 
     scores = read_scores_from_json(json_file_path)
-    evaluate_dataset(scores)
+    evaluate_dataset(scores, 'Distribution of Points: Machine Learning Solution on Danish Dataset')
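With the title passed from the call site, switching result sets means editing two paired strings in main.py. One way to keep the pairs consistent is a stem-to-title mapping; the mapping below and the String_DK title in it are hypothetical illustrations, not taken from the repository:

```python
import os

# Hypothetical stem -> plot-title mapping; the repository's main.py
# hard-codes one pair at a time instead.
RESULT_TITLES = {
    "ML_DK": "Distribution of Points: Machine Learning Solution on Danish Dataset",
    "String_DK": "Distribution of Points: String Solution on Danish Dataset",
}


def build_result_path(project_root, name_of_result):
    # Mirrors the path construction used in main.py.
    return os.path.join(
        project_root,
        "data/files/EvaluationData/Results/" + name_of_result + ".json",
    )
```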
