Replies: 2 comments
-
Not sure whether I've understood your question correctly, but if you are generating extracts from Wikipedia pages, what about (Though Xamarin Workbooks is dead now. Bummer.)
WCL does not support SPARQL. If you'd like to query Wikidata with SPARQL, you might check out the dotNetRDF packages. You may consider doing the SPARQL query first, retrieving the Wikidata entity URIs, and then converting (or truncating) them back to the entity IDs. Then you should have sufficient information to retrieve the sitelinks with the Wikibase APIs (which is achievable with
Alternatively, since the sitelinks are also kept in the Wikibase RDF dumps, you can try to extract the Wikipedia sitelinks from the RDF query results directly. Then you can truncate the page URL prefix to recover the article name.
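To make that pipeline concrete, here is a minimal C# sketch of the three steps (SPARQL query, truncating entity URIs to Q-IDs, fetching sitelinks with wbgetentities). It deliberately uses a plain HttpClient against the public endpoints so it stays self-contained; the class name and the sample query (instances of city, Q515) are only illustrative, and dotNetRDF or WCL's Wikibase support could replace the raw HTTP/JSON handling.

```csharp
using System;
using System.Linq;
using System.Net.Http;
using System.Text.Json;
using System.Threading.Tasks;

class WikidataSitelinksDemo
{
    static readonly HttpClient Http = new HttpClient();

    static async Task Main()
    {
        Http.DefaultRequestHeaders.UserAgent.ParseAdd("SitelinksDemo/0.1");

        // 1. Run a SPARQL query against the Wikidata Query Service and collect entity URIs.
        var sparql = "SELECT ?item WHERE { ?item wdt:P31 wd:Q515 . } LIMIT 5"; // sample query: cities
        var sparqlUrl = "https://query.wikidata.org/sparql?format=json&query=" + Uri.EscapeDataString(sparql);
        using var sparqlDoc = JsonDocument.Parse(await Http.GetStringAsync(sparqlUrl));
        var ids = sparqlDoc.RootElement
            .GetProperty("results").GetProperty("bindings").EnumerateArray()
            .Select(b => b.GetProperty("item").GetProperty("value").GetString()!)
            // 2. Truncate http://www.wikidata.org/entity/Q42 back to Q42.
            .Select(uri => uri.Substring(uri.LastIndexOf('/') + 1))
            .ToList();

        // 3. Retrieve the sitelinks for those IDs with the Wikibase API (wbgetentities).
        var apiUrl = "https://www.wikidata.org/w/api.php?action=wbgetentities&format=json" +
                     "&props=sitelinks&sitefilter=enwiki|hewiki&ids=" + string.Join("|", ids);
        using var apiDoc = JsonDocument.Parse(await Http.GetStringAsync(apiUrl));
        foreach (var entity in apiDoc.RootElement.GetProperty("entities").EnumerateObject())
        {
            foreach (var link in entity.Value.GetProperty("sitelinks").EnumerateObject())
                Console.WriteLine($"{entity.Name}: {link.Name} -> {link.Value.GetProperty("title").GetString()}");
        }
    }
}
```

The same three steps map onto typed wrappers if you prefer them over raw JSON, e.g. dotNetRDF's SPARQL client for step 1 and WCL's Wikibase entity classes for step 3.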
-
Thanks for the detailed answer! I got it working with regular REST calls here:
I don't like the string nature of the query to Wikidata, so I'll check out the RDF package you mentioned. I also couldn't figure out how to use mwapi properly in the query, but the data I currently get is enough for me at this point. THANKS!
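For the mwapi part, below is a hypothetical sketch that follows the EntitySearch example from the Wikidata Query Service MWAPI documentation; the search term and languages are placeholders, and the parameter names should be checked against that documentation.

```csharp
// The SERVICE block federates a MediaWiki API call (here: EntitySearch on
// www.wikidata.org) into the SPARQL query; the query string can be sent to
// https://query.wikidata.org/sparql like any other query in this thread.
var mwapiQuery = @"
SELECT ?item ?itemLabel WHERE {
  SERVICE wikibase:mwapi {
    bd:serviceParam wikibase:api ""EntitySearch"" .
    bd:serviceParam wikibase:endpoint ""www.wikidata.org"" .
    bd:serviceParam mwapi:search ""Jerusalem"" .
    bd:serviceParam mwapi:language ""en"" .
    ?item wikibase:apiOutputItem mwapi:item .
  }
  SERVICE wikibase:label { bd:serviceParam wikibase:language ""en,he"" . }
}";
```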
-
Hi,
After an interesting discussion on an OpenStreetMap forum, I got the impression that Wikidata might serve my needs better than Wikipedia pages.
So I'm looking to convert my code, which runs a spatial query on Israel against the English and Hebrew Wikipedias, to query Wikidata instead.
After the spatial query I want to extract the information in a few languages from the Wikidata page: I need the page titles and the page extracts for each language.
Is there a good way to do it using the code in this library?
I found this SO question, but I have no idea how to convert the code there to be used with this library:
https://stackoverflow.com/questions/63540429/how-to-get-description-from-wikidata-entry-with-sparql-query
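For reference, a hypothetical sketch of what that conversion might look like: a spatial query against the Wikidata Query Service (the wikibase:around service) that returns labels, descriptions, and the en/he sitelinks in one go, followed by a call to each Wikipedia's TextExtracts API for the page extract. The SPARQL step uses a plain HttpClient rather than WCL, since WCL itself does not speak SPARQL, as noted above; the coordinates, radius, and class/variable names are placeholders.

```csharp
using System;
using System.Net.Http;
using System.Text.Json;
using System.Threading.Tasks;

static class WikidataSpatialDemo
{
    static readonly HttpClient Http = new HttpClient();

    // Items with coordinates (P625) within ~5 km of a sample point (WKT uses lon lat order),
    // with labels/descriptions from the label service and the sitelink URLs taken
    // straight from the RDF (schema:about / schema:isPartOf).
    const string SpatialQuery = @"
SELECT ?item ?itemLabel ?itemDescription ?enArticle ?heArticle WHERE {
  SERVICE wikibase:around {
    ?item wdt:P625 ?location .
    bd:serviceParam wikibase:center ""Point(34.78 32.08)""^^geo:wktLiteral .
    bd:serviceParam wikibase:radius ""5"" .
  }
  OPTIONAL { ?enArticle schema:about ?item ; schema:isPartOf <https://en.wikipedia.org/> . }
  OPTIONAL { ?heArticle schema:about ?item ; schema:isPartOf <https://he.wikipedia.org/> . }
  SERVICE wikibase:label { bd:serviceParam wikibase:language ""he,en"" . }
}
LIMIT 20";

    // The extract itself is not stored in Wikidata; it comes from the TextExtracts API
    // of the individual Wikipedia, keyed by the title recovered from the sitelink URL.
    static async Task<string?> GetExtractAsync(string articleUrl)
    {
        var uri = new Uri(articleUrl); // e.g. https://he.wikipedia.org/wiki/<title>
        var title = Uri.UnescapeDataString(uri.AbsolutePath.Substring("/wiki/".Length));
        var api = $"https://{uri.Host}/w/api.php?action=query&prop=extracts&exintro=1&explaintext=1" +
                  $"&format=json&titles={Uri.EscapeDataString(title)}";
        using var doc = JsonDocument.Parse(await Http.GetStringAsync(api));
        foreach (var page in doc.RootElement.GetProperty("query").GetProperty("pages").EnumerateObject())
            if (page.Value.TryGetProperty("extract", out var extract))
                return extract.GetString();
        return null;
    }

    static async Task Main()
    {
        Http.DefaultRequestHeaders.UserAgent.ParseAdd("SpatialDemo/0.1");
        var url = "https://query.wikidata.org/sparql?format=json&query=" + Uri.EscapeDataString(SpatialQuery);
        using var doc = JsonDocument.Parse(await Http.GetStringAsync(url));
        foreach (var row in doc.RootElement.GetProperty("results").GetProperty("bindings").EnumerateArray())
        {
            var label = row.TryGetProperty("itemLabel", out var l) ? l.GetProperty("value").GetString() : "(no label)";
            Console.WriteLine(label);
            if (row.TryGetProperty("heArticle", out var he))
                Console.WriteLine("  he extract: " + await GetExtractAsync(he.GetProperty("value").GetString()!));
        }
    }
}
```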