How the Text2Sparql Model in Tutorial10_Knowledge_Graph was trained #1146
-
Hello haystack, I am interested in learning more about how the Text2Sparql model in the Tutorial10_Knowledge_Graph tutorial was trained. If you can direct me to the right place, I would appreciate it. Eliran
-
Hi @eboraks, training such a model is not supported in Haystack, but we give some hints on how to do it in the last paragraph at the very bottom of this page: https://haystack.deepset.ai/docs/latest/knowledgegraphmd You would need to train a BART model with transformers to translate text questions into queries in SPARQL format. Only a few training datasets are available for this task; one is the LC-QuAD dataset. Hope this helps! 🙂
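For reference, a minimal fine-tuning sketch with Hugging Face transformers could look like the one below. This is not the exact recipe used for the tutorial model: the base checkpoint, hyperparameters, and the toy question/SPARQL pair are assumptions for illustration; in practice you would load question/query pairs from LC-QuAD.

```python
# Minimal sketch: fine-tune BART to translate natural-language questions into SPARQL.
# Assumptions: facebook/bart-base as the starting checkpoint, a toy in-memory dataset.
import torch
from torch.utils.data import DataLoader
from transformers import BartTokenizer, BartForConditionalGeneration

model_name = "facebook/bart-base"
tokenizer = BartTokenizer.from_pretrained(model_name)
model = BartForConditionalGeneration.from_pretrained(model_name)

# Hypothetical toy data; replace with question/SPARQL pairs from LC-QuAD or your own graph.
pairs = [
    ("Who is the headmaster of Hogwarts?",
     "SELECT ?uri WHERE { ?uri <hp:headmasterOf> <hp:Hogwarts> }"),
]

def collate(batch):
    questions, queries = zip(*batch)
    enc = tokenizer(list(questions), padding=True, truncation=True, return_tensors="pt")
    labels = tokenizer(list(queries), padding=True, truncation=True, return_tensors="pt").input_ids
    labels[labels == tokenizer.pad_token_id] = -100  # ignore padding tokens in the loss
    enc["labels"] = labels
    return enc

loader = DataLoader(pairs, batch_size=2, shuffle=True, collate_fn=collate)
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)

model.train()
for epoch in range(3):
    for batch in loader:
        loss = model(**batch).loss  # seq2seq cross-entropy loss
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()

# Inference: generate a SPARQL query for a new question.
model.eval()
inputs = tokenizer("Who is Harry Potter's owl?", return_tensors="pt")
ids = model.generate(**inputs, max_length=128)
print(tokenizer.decode(ids[0], skip_special_tokens=True))
```

The same seq2seq setup works with the `Seq2SeqTrainer` API instead of a manual loop; the manual loop is shown here only to keep the sketch self-contained.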
-
Let me briefly summarize the main steps that we took to train a Text2SPARQL model on the Harry Potter Fandom data.
We also mapped some of the names so that they better fit what we use in the Fandom dataset and our graph, e.g., "country_of_citizenship" -> "nationality" or "characters" -> "harrypotterrole".
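A hedged sketch of that kind of name mapping is below; only the two example pairs come from the description above, the helper function and the query it is applied to are illustrative.

```python
# Rename dataset/Wikidata-style predicate names to the names used in our graph.
# Only the two pairs below are taken from the discussion; everything else is illustrative.
NAME_MAP = {
    "country_of_citizenship": "nationality",
    "characters": "harrypotterrole",
}

def map_names(sparql: str) -> str:
    """Replace original predicate names in a SPARQL query with the graph's names."""
    for old, new in NAME_MAP.items():
        sparql = sparql.replace(old, new)
    return sparql

print(map_names("SELECT ?x WHERE { ?x <hp:country_of_citizenship> ?c }"))
# -> SELECT ?x WHERE { ?x <hp:nationality> ?c }
```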