
GSOC2013_Progress_Kasun


Proposal

Type inference to extend coverage Project Proposal

Sources for type inference. The list is based on the comments by Aleksander Pohl on the project proposal

Project Updates

Warm Up period (27th May - 16th June)

  • Set up the public clone of the Extraction Framework
  • Set up the extraction-framework code in the IDEA IDE, built the code, etc.
  • Worked on Issue #33
  • Familiarized myself with Scala, git, and IDEA

Week 1 (17th June- 23rd June)

Week 2 (24th June- 30th June)

  • Identify Wikipedia leaf categories (#issue16)
  • Investigate the YAGO approach; read the YAGO paper again
  • Mail discussion thread on choosing the source data for leaf category identification: Link to mail thread
  • Method of leaf category identification (see the SQL sketch after the table definitions below):
  1. get all parent categories
  2. get all child categories
  3. subtract the set from step 1 from the set from step 2; the result is all the leaf categories
  • Processing Wikipedia categories (#issue17): save the parent-child relationships of the categories to a MySQL database to address the requirements of #issue17

  • Created tables

  • Node Table


    CREATE TABLE IF NOT EXISTS node (
      node_id int(10) NOT NULL AUTO_INCREMENT,
      category_name varchar(40) NOT NULL,
      is_leaf tinyint(1) NOT NULL,
      is_prominent tinyint(1) NOT NULL,
      score_interlang double DEFAULT NULL,
      score_edit_histo double NOT NULL,
      PRIMARY KEY (node_id),
      UNIQUE KEY category_name (category_name)
    ) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=1;

  • Edge Table

    CREATE TABLE IF NOT EXISTS edges (
      parent_id int(10) NOT NULL,
      child_id int(10) NOT NULL,
      PRIMARY KEY (parent_id, child_id)
    ) ENGINE=InnoDB DEFAULT CHARSET=utf8;
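With these two tables, step 3 of the leaf identification above reduces to a set difference in SQL. A minimal sketch (a leaf category is one that never appears as a parent in edges):

    -- leaf categories: nodes that are never a parent in the edges table
    SELECT n.node_id, n.category_name
    FROM node n
    WHERE n.node_id NOT IN (SELECT DISTINCT parent_id FROM edges);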

Week 3 (1st July- 7th July)

The leaf-node detection and parent-child relationship approach described in week 2 was abandoned for the following reasons.

  • "Categories that don't have a broader category are not included in the skos_categories dump."
    Evidence for this claim is discussed here: 1) issue #16, 2) Mail Archive
  • Data freshness issues: the DBpedia dumps are nearly a year old, and no synchronized sub-dumps are available for data analysis.

New approach: use the Wikipedia category table, Category (cat_id, cat_title, cat_pages, cat_subcats, cat_files, cat_hidden).

cat_pages excludes pages in subcategories, but its count includes other pages such as Talk: and Template: pages along with the actual article pages. A way needs to be found to filter the unnecessary pages out of these statistics.

Some hints about category usage

  • Some of the selected categories have cat_pages=0, i.e. these categories are not used.
  • Some of the selected categories have cat_pages > 10000; these are possibly administrative categories or higher nodes of the category graph.
  • Selecting cat_subcats=0 yields all categories that don't have subcategories.

Use of the category table for selecting leaf nodes

A query such as the one below can be used to find possible leaf-node candidates, given an optimal threshold:

    SELECT * FROM category WHERE cat_subcats=0 AND cat_pages>0 AND cat_pages<threshold;

Here are my threshold calculations. They show the threshold values and the count of categories having fewer pages than each threshold value (per the SQL query above).

A suitable threshold value needs to be selected.

More details on using the Wikipedia category and categorylinks SQL dumps are drafted [here](https://docs.google.com/document/d/1kXhaQu4UrEKX-v1DPwC6V2Sk9SNTDIwvgDtOZX5bZgk/edit?usp=sharing).

Week 4 (8th July- 14th July)

Identifying which categories are administrative and how they are distributed according to cat_pages.

Recall that in Category (cat_id, cat_title, cat_pages, cat_subcats, cat_files, cat_hidden), cat_pages excludes pages in subcategories but counts pages such as Talk: and Template: pages along with the actual article pages.

A way is needed to filter the unnecessary pages (i.e. Talk:, Help:, etc.) out of these tables:

  1. Select all pages from the page table where page_namespace=0; this yields all article pages with their IDs.

  2. From the categorylinks table, select the entries whose cl_from matches a page ID from step 1; this gives only the categories related to actual article pages.

  3. Use the categories selected in step 2 to select leaf-node candidates from the category table using the query SELECT * FROM category WHERE cat_subcats=0 AND cat_pages>0 AND cat_pages<threshold;

  4. Then obtain the parents of the leaf nodes from the categorylinks table.
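Steps 1-3 can also be collapsed into a single join over the page, categorylinks and category tables. A sketch, assuming the standard MediaWiki schema (cl_from holds the member page's ID, cl_to the category title) and a hypothetical threshold of 100:

    -- leaf-node candidates reachable from actual article pages (steps 1-3)
    SELECT DISTINCT c.cat_id, c.cat_title, c.cat_pages
    FROM page p
    JOIN categorylinks cl ON cl.cl_from = p.page_id
    JOIN category c ON c.cat_title = cl.cl_to
    WHERE p.page_namespace = 0      -- article pages only (step 1)
      AND c.cat_subcats = 0         -- no subcategories, i.e. leaf candidates
      AND c.cat_pages > 0
      AND c.cat_pages < 100;        -- hypothetical threshold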

YAGO approach recap

Lessons learned by analyzing the category system

Week 5 (15th July- 21st July)

Tried to import the Wikipedia dumps into a MySQL database on my local computer. This takes a huge amount of time since the Wikipedia dump files are large (categorylinks dump ~8.5 GB, page dump ~2.5 GB); the import process is still running.

Moved to a Lucene-based approach for implementing the algorithm described in week 4:

  1. Implement Lucene code for indexing and searching the page dump for "select pages where page_namespace=0" (step 1 of the algorithm from week 4). DONE

  2. Implement Lucene code for indexing and searching the categorylinks dump for "select the entries whose cl_from matches a page ID from step 1", filtering out duplicate category names. DONE

  3. Run the query SELECT * FROM category WHERE cat_subcats=0 AND cat_pages>0 AND cat_pages<threshold; for the category IDs selected in step 2. IN PROGRESS

Week 6 (22nd July- 28th July)

New ideas for identifying prominent categories that emerged from the Skype call:

The Wikipedia template rendering engine automatically creates a considerable number of leaf conceptual categories. The set of categories generated from templates can be extracted by querying the 'page' SQL dump.

    Here is an example to illustrate the idea (it may not actually apply): Bill Clinton uses the 'Persondata' template [1]. The rendering engine automatically creates the '1946 births' category given the 'DATE OF BIRTH' template property value.

However, more clarification is needed on how to select such data.
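One possible way to select such data, assuming the templatelinks dump is also imported (an assumption for illustration, not a decided approach): list the categories that co-occur with pages using a given template, e.g. Persondata:

    -- hypothetical: categories attached to pages that use Template:Persondata
    SELECT cl.cl_to AS category, COUNT(*) AS num_pages
    FROM templatelinks tl
    JOIN categorylinks cl ON cl.cl_from = tl.tl_from
    WHERE tl.tl_namespace = 10      -- Template: namespace
      AND tl.tl_title = 'Persondata'
    GROUP BY cl.cl_to;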

Traverse back to the Nth parent to identify prominent nodes, rather than traversing only to the first parent. The reason is that by traversing only to the first parent, the number of conceptual categories obtained could be too large.

DBpedia resource clustering can be done by directly analyzing DBpedia data, since the category extractor already links each resource to all of its Wikipedia categories. Once the prominent leaves are obtained, they can be intersected with the category data.

Implementation/modification of the prominent node detection algorithm (below). However, this algorithm still needs to be tested, and it needs to be extended to handle traversing back to the n-th parent.

    FOR EACH leaf node
        FOR EACH parent of the leaf node
        {
            traverse back to the parent
            check whether all of the parent's children are leaf nodes
            IF all the children are leaf nodes
            THEN group all children into a cluster and make the parent a prominent node
        }
        IF none of the parents is a prominent node
        THEN make the leaf node a prominent node
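For the "all children are leaf nodes" check, here is a sketch using the node/edges schema from week 2 (for illustration only, since that database approach was abandoned):

    -- parents whose children are all leaf nodes, i.e. prominent node candidates
    SELECT e.parent_id
    FROM edges e
    JOIN node c ON c.node_id = e.child_id
    GROUP BY e.parent_id
    HAVING SUM(c.is_leaf = 0) = 0;  -- no non-leaf child exists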

Week 7 (29th July- 05th August) (Mid Term Evaluation Week)

Running the calculations and filtering the leaf-node candidates.

Week 8 (06th August- 12th August)

Filtering of categories based on the following heuristic: categories having more than a certain number of pages are not actual categories, i.e. they are administrative or of some other type (e.g. categories having more than 100 pages are probably not actual categories).

The following query is executed on the categories having more than 0 category pages:

    SELECT COUNT(*) FROM page_category WHERE cat_subcats=0 AND cat_pages<threshold;

where the threshold is varied from 1 to 1000. The cat_subcats=0 condition selects categories that don't have subcategories, i.e. leaf-node categories.

Calculation statistics: the following graph, "No. of Pages vs. No. of Categories", was obtained from these calculations.

It shows how many categories have a given number of pages; e.g., there are 341,446 categories having fewer than 10 pages.
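A sketch of how such per-threshold counts can be produced in a single query (the threshold list here is illustrative):

    -- count leaf-candidate categories below each threshold value
    SELECT t.threshold, COUNT(*) AS num_categories
    FROM (SELECT 10 AS threshold UNION ALL SELECT 100 UNION ALL SELECT 1000) t
    JOIN page_category c ON c.cat_pages < t.threshold
    WHERE c.cat_subcats = 0 AND c.cat_pages > 0
    GROUP BY t.threshold;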

A proper threshold needs to be selected to justify the above heuristic.

Once the leaf-node candidates are selected, the parent-child relationships are computed by the following method.

  1. Obtain the leaf-node category names (CATEGORY_NAME) from step 3, then get the page_id for each category from the page table: SELECT page_id FROM page WHERE page_title="CATEGORY_NAME" AND page_namespace=14

  2. Use that category page_id to get the parents of the category from the categorylinks table: SELECT cl_to FROM categorylinks WHERE cl_from="category page_id" (see the combined sketch below)
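The two lookups can be combined into a single join; a sketch, where 'CATEGORY_NAME' is a placeholder for an actual leaf category title:

    -- parents of a given leaf category, in one query
    SELECT cl.cl_to AS parent_category
    FROM page p
    JOIN categorylinks cl ON cl.cl_from = p.page_id
    WHERE p.page_namespace = 14     -- Category: namespace
      AND p.page_title = 'CATEGORY_NAME';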

With this, the parent-child relationships are obtained.