
CSE 628 Spring 2018
Assignment 2:  Part of Speech Tagging


1. DATA 

This assignment is about part-of-speech tagging on Twitter data. 

The data is located in the ./data directory, with a train and dev split. The test data is also included, but with deliberately false POS tags.
You will develop and tune your models using only the train and dev sets, and will generate predictions for the test data once you are done developing.
Accuracy will be computed by the TA against the gold-standard labels.

This data set contains tweets annotated with their universal part-of-speech tags: 379 tweets for training, 112 for dev, and 12 possible part-of-speech labels. The test corpus contains 295 tweets.

The format of the data files is straightforward: one line per token (with its label separated by whitespace), and sentences separated by an empty line. See the example below (a minimal reader sketch follows it), and examine the text files yourself (always a good idea).

------------------------------
@paulwalk X
It  PRON
's  VERB
the DET
view  NOUN
from  ADP
where ADV
I PRON
'm  VERB
living  VERB
for ADP
two NUM
weeks NOUN
. .
Empire  NOUN
State NOUN
Building  NOUN
= X
ESB NOUN
. .
Pretty  ADV
bad ADJ
storm NOUN
here  ADV
last  ADJ
evening NOUN
------------------------------
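For reference, here is a minimal sketch of a reader for this format. It is for illustration only: data.py already handles loading for the assignment, and the file path in the usage comment is hypothetical.

    def read_tagged_file(path):
        """Yield sentences as lists of (token, tag) pairs."""
        sentence = []
        with open(path, encoding="utf-8") as f:
            for line in f:
                line = line.rstrip("\n")
                if not line.strip():             # an empty line ends the current sentence
                    if sentence:
                        yield sentence
                        sentence = []
                    continue
                token, tag = line.split()[:2]    # token and tag are separated by whitespace
                sentence.append((token, tag))
        if sentence:                             # flush the last sentence if there is no trailing blank line
            yield sentence

    # Usage (hypothetical path; substitute the actual files in ./data):
    # for sent in read_tagged_file("./data/twitter_train.pos"):
    #     print(sent)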




2. Files

- data.py: The primary entry point; it reads the data, then trains and evaluates the tagger implementation (example invocations follow the usage text below).

	usage: python data.py [-h] [-m MODEL] [--test]

	optional arguments:
	  -h, --help            show this help message and exit
	  -m MODEL, --model MODEL
	                        'LR'/'lr' for logistic regression tagger
	                        'CRF'/'crf' for conditional random field tagger
	  --test                Make predictions for test dataset
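
	For example, typical invocations might look like this (run from the repository root; the exact output locations are determined by the tagger code):

	  $ python data.py -m lr           # train the logistic regression tagger and evaluate on dev
	  $ python data.py -m crf --test   # train the CRF tagger and write predictions for the test data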


- tagger.py: Code for two sequence taggers, logistic regression and CRF. Both taggers rely on 'feats.py' and 'feat_gen.py' to compute the features for each token. The CRF tagger also relies on 'viterbi.py' for decoding (whose implementation is currently incorrect) and on 'struct_perceptron.py' for the training algorithm (which also needs a working Viterbi).

- feats.py & 'feat_gen.py: Code to compute, index, and maintain the token features. The primary purpose of 'feats.py' is to map the boolean features computed in 'feats_gen.py' to integers, and do the reverse mapping (if you want to know the name of a feature from its index). 'feats_gen.py' is used to compute the features of a token in a sentence, which you will be extending. The method there returns the computed features for a token as a list of string (so does not have to worry about indices, etc.).
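
  As a rough illustration, a feature function in this style simply returns a list of feature strings. The sketch below is hypothetical (the function name, signature, and feature names are made up, not the actual code in 'feat_gen.py'):

    def token_features(sent, i):
        """Return the features of the i-th token of sent (a list of words) as a list of strings."""
        word = sent[i]
        feats = [
            "word=" + word.lower(),
            "is_capitalized=" + str(word[0].isupper()),
            "is_mention=" + str(word.startswith("@")),
            "is_hashtag=" + str(word.startswith("#")),
            "suffix3=" + word[-3:].lower(),
        ]
        if i == 0:
            feats.append("BOS")                               # beginning of sentence
        else:
            feats.append("prev_word=" + sent[i - 1].lower())  # simple context feature
        return feats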

- 'struct_perceptron.py': A direct port (with negligible changes) of the structured perceptron trainer from the 'pystruct' project. It is only used by the CRF tagger. The descriptions of the trainer's various hyperparameters are available in the 'pystruct' documentation, but you should change them via the constructor call in 'tagger.py'.

- 'viterbi.py' (and 'viterbi_test.py'): General-purpose interface to a sequence Viterbi decoder in 'viterbi.py', which currently has an incorrect implementation. Once you have implemented Viterbi correctly, running 'python viterbi_test.py' should execute successfully without any exceptions.
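
  For orientation, the decoder to be implemented is standard first-order Viterbi over per-token emission scores plus tag-transition scores. The sketch below shows the generic algorithm; the function name, argument order, and score conventions actually expected by 'viterbi.py' and 'viterbi_test.py' may differ, so treat them as assumptions:

    import numpy as np

    def viterbi_decode(emission_scores, trans_scores, start_scores, end_scores):
        """Generic first-order Viterbi sketch (not necessarily the interface viterbi.py requires).

        emission_scores: (N, L) array, score of tag l at position i
        trans_scores:    (L, L) array, score of moving from tag k to tag l
        start_scores:    (L,) array, score of starting a sentence with tag l
        end_scores:      (L,) array, score of ending a sentence with tag l
        Returns (best_score, best_tag_sequence).
        """
        N, L = emission_scores.shape
        dp = np.empty((N, L))                  # dp[i, l] = best score of any tag path ending in tag l at position i
        backptr = np.zeros((N, L), dtype=int)  # backptr[i, l] = best previous tag for tag l at position i

        dp[0] = start_scores + emission_scores[0]
        for i in range(1, N):
            # cand[k, l] = dp[i-1, k] + transition score k->l + emission score of tag l at position i
            cand = dp[i - 1][:, None] + trans_scores + emission_scores[i][None, :]
            backptr[i] = cand.argmax(axis=0)
            dp[i] = cand.max(axis=0)

        final = dp[N - 1] + end_scores
        best_last = int(final.argmax())
        best_score = float(final[best_last])

        # Follow the back-pointers to recover the best tag sequence.
        tags = [best_last]
        for i in range(N - 1, 0, -1):
            tags.append(int(backptr[i, tags[-1]]))
        tags.reverse()
        return best_score, tags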

- conlleval.pl: This is the official CoNLL evaluation script.
  Although it computes the same metrics as the Python code, it supports a few extra features, such as:
  	(a) LaTeX-formatted tables, enabled with -l,
  	(b) BIO annotation by default, which can be turned off with -r.
  In particular, to evaluate the output prediction files (*.pred) for POS tagging, run:

  $ ./conlleval.pl -r -d \\t < ./predictions/twitter_dev.pos.pred
