
Commit

Merge branch 'develop'
jmyerston committed Jun 9, 2023
2 parents 278c88a + a7559ee commit 855558b
Showing 19 changed files with 431 additions and 145 deletions.
1 change: 0 additions & 1 deletion .gitignore
@@ -1,4 +1,3 @@
assets
metrics
packages
training
80 changes: 60 additions & 20 deletions README.md
@@ -1,31 +1,35 @@
# greCy
## Ancient Greek models for spaCy

This spaCy project trains seven ancient Greek models using the Perseus and Proiel [Universal Dependency corpora](https://universaldependencies.org). Trained and compiled wheel packages are already available on the [Hugging Face Hub](https://huggingface.co/Jacobo). Prior to installation, the models can be tested on my [Ancient Greek Syntax Analyzer](https://huggingface.co/spaces/Jacobo/syntax). In general, the project gives priority to the Proiel training dataset, as it is the corpus that produces the more accurate and efficient models.
greCy is a set of spaCy ancient Greek models and its installer. The models were trained using the [Perseus](https://universaldependencies.org/treebanks/grc_perseus/index.html) and [Proiel UD](https://universaldependencies.org/treebanks/grc_proiel/index.html) corpora. Prior to installation, the models can be tested on my [Ancient Greek Syntax Analyzer](https://huggingface.co/spaces/Jacobo/syntax) on the [Hugging Face Hub](https://huggingface.co/), where you can also check the various performance metrics of each model.

### Installation
In general, models trained with the Proiel corpus perform better in POS tagging and dependency parsing, while Perseus models are better at sentence segmentation based on punctuation and at morphological analysis. Lemmatization is similar across models because they share the same neural lemmatizer, which comes in two variants: the more accurate one was trained with word vectors, the other was not. The best models for lemmatization are the large (_lg) models.

The models can be installed from the terminal with the commands below:
### Installation

**For the small model:**
First, install the Python package as usual:

``` bash
pip install -U grecy
```
pip install https://huggingface.co/Jacobo/grc_proiel_sm/resolve/main/grc_proiel_sm-any-py3-none-any.whl
```
**For the medium:**

```
pip install https://huggingface.co/Jacobo/grc_proiel_md/resolve/main/grc_proiel_md-any-py3-none-any.whl
```
**For the large:**
```
pip install https://huggingface.co/Jacobo/grc_perseus_lg/resolve/main/grc_perseus_lg-any-py3-none-any.whl
```
**For the transformer based:**
Once the package is successfully installed, you can proceed to install any of the following models:

* grc_perseus_sm
* grc_proiel_sm
* grc_perseus_lg
* grc_proiel_lg
* grc_perseus_trf
* grc_proiel_trf


The models can be installed from the terminal with the command below:

```
pip install https://huggingface.co/Jacobo/grc_proiel_trf/resolve/main/grc_proiel_trf-any-py3-none-any.whl
python -m grecy install MODEL
```
where you replace MODEL with any of the model names listed above. The suffixes after the corpus name (_sm, _lg, and _trf) indicate the size of the model, which depends directly on the word embeddings used to train it. The smallest models end in _sm (small) and are the least accurate: they are good for testing and for building lightweight apps. The _lg and _trf models are the large and transformer models, which are more accurate. The _lg models were trained using fastText word vectors in the spaCy floret version, and the _trf models were trained using a special version of BERT, pretrained by ourselves with the largest Ancient Greek corpus we could find (see more below). If you would like to work with word similarity, choose the _lg models. The vectors for the large models were trained with the TLG corpus using [floret](https://github.com/explosion/floret), a fork of [fastText](https://fasttext.cc/).
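
For example, to set up the small Proiel model from scratch (just one possible choice among the models listed above):

``` bash
# install the greCy package, then pull one of the models listed above
pip install -U grecy
python -m grecy install grc_proiel_sm
```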


### Loading

@@ -37,27 +41,63 @@ nlp = spacy.load("grc_proiel_XX")
```
Remember to replace _XX with the size of the model you would like to use: _sm for small, _lg for large, and _trf for transformer. The _trf model is the most accurate but also the slowest.

If you would like to work with word vectors, choose the large models. The vectors for the large models were trained with the TLG corpus using [floret](https://github.com/explosion/floret), a fork of [fastText](https://fasttext.cc/).
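
As a minimal sketch of what the vectors enable (assuming the grc_proiel_lg model has been installed), spaCy's similarity API compares tokens through their floret vectors:

```python
import spacy

# assumes the large model was installed with: python -m grecy install grc_proiel_lg
nlp = spacy.load("grc_proiel_lg")

doc = nlp("θεός ἄνθρωπος")
god, human = doc[0], doc[1]

# cosine similarity between the two word vectors
print(god.text, human.text, god.similarity(human))
```
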
### Use

spaCy is a powerful NLP library with many applications. The most basic of its functions is the morpho-syntactic annotation of texts for further processing. A common routine is to load a model, process a text, and work with the resulting doc object:

```python
import spacy

nlp = spacy.load("grc_proiel_sm")
text = "καὶ πρὶν μὲν ἐν κακοῖσι κειμένην ὅμως ἐλπίς μʼ ἀεὶ προσῆγε σωθέντος τέκνου ἀλκήν τινʼ εὑρεῖν κἀπικούρησιν δόμον"
doc = nlp(text)

for token in doc:
    print(f'{token.text}, lemma: {token.lemma_} pos: {token.pos_}')
```
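
Since the pipelines also include a parser, the same doc object can be split into sentences. A small, self-contained sketch (the example text is illustrative):

```python
import spacy

nlp = spacy.load("grc_proiel_sm")
doc = nlp("ἄνδρα μοι ἔννεπε Μοῦσα πολύτροπον. ὃς μάλα πολλὰ πλάγχθη.")

# iterate over the sentence spans detected by the model
for i, sent in enumerate(doc.sents):
    print(i, sent.text)
```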

#### The apostrophe issue

Unfortunately, there is no consensus among the different internet projects that offer ancient Greek texts about how to represent the Ancient Greek apostrophe. Modern Greek simply uses the regular apostrophe, but ancient texts available in Perseus and Perseus under Philologic use various Unicode characters for the apostrophe. Instead of the apostrophe, we find the Greek koronis, the modifier letter apostrophe, and the right single quotation mark. Provisionally, I have opted to use the modifier letter apostrophe in the corpus with which I trained the models. This means that if you want the greCy models to handle the apostrophe properly, you have to make sure that the Ancient Greek texts you are processing use the modifier letter apostrophe **ʼ** (U+02BC). Otherwise, the models will fail to lemmatize and tag some words in your texts that end with an 'apostrophe'.
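
A minimal sketch of one way to normalize apostrophe look-alikes before processing (the set of variant characters below is an assumption; adjust it to whatever your source texts actually contain):

```python
import spacy

# map common apostrophe look-alikes to the modifier letter apostrophe (U+02BC)
APOSTROPHE_VARIANTS = {
    "\u1fbd": "\u02bc",  # Greek koronis
    "\u2019": "\u02bc",  # right single quotation mark
    "\u0027": "\u02bc",  # ASCII apostrophe
}

def normalize_apostrophes(text: str) -> str:
    return text.translate(str.maketrans(APOSTROPHE_VARIANTS))

nlp = spacy.load("grc_proiel_sm")
doc = nlp(normalize_apostrophes("ἀλκήν τιν’ εὑρεῖν"))
print([(token.text, token.lemma_) for token in doc])
```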

### Building

The four standard spaCy models (small, medium, large, and transformer) are built and packaged using the following commands:
I offer here the project file I use to train the models, in case you want to customize them for your specific needs. The six standard spaCy models (small, large, and transformer, for each of the two corpora) are built and packaged using the following commands:


1. python -m spacy project assets
2. python -m spacy project run all
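
A sketch of the full build workflow, assuming you have cloned this repository and installed spaCy (the clone URL below is illustrative):

``` bash
# clone the project (URL shown for illustration) and enter it
git clone https://github.com/jmyerston/greCy.git
cd greCy

# 1. download the UD corpora declared as project assets
python -m spacy project assets

# 2. run the training and packaging workflow
python -m spacy project run all
```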

### Performance

For a general comparison, I share here the metrics of the transformer models grc_proiel_trf and grc_perseus_trf. For fine-tuning, these models use a transformer that was specifically trained to be used with spaCy, which makes them much smaller than the alternatives offered by Python NLP libraries such as Stanza and Trankit (for more information on the transformer model and how it was trained, see [aristoBERTo](https://huggingface.co/Jacobo/aristoBERTo)). greCy's _trf models outperform Stanza and Trankit in most metrics and have the advantage that their size is only ~430 MB vs. the 1.2 GB of the Trankit model trained with XLM Roberta. See the tables below:

The Proiel_trf model uses for fine-tuning a transformer that was specifically trained to be used with spaCy, which makes the model much smaller than the alternatives offered by Python NLP libraries like Stanza and Trankit (for more information on the transformer model and how it was trained, see [AristoBERTo](https://huggingface.co/Jacobo/aristoBERTo)). The spaCy _trf model outperforms Stanza and Trankit in most metrics and has the advantage that its size is only 662 MB vs. the 1.2 GB of the Trankit model trained with XLM Roberta. For a comparison, see the table below:
#### Proiel

| Library | Tokens | Sentences | UPOS | XPOS | UFeats |Lemmas |UAS |LAS |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| spaCy | 100 | 71.90 | 98.50 | 98.40 | 94.10 | 96.9 | 85.90 | 82.50 |
| spaCy | 100 | 71.74 | 98.11 | 98.21 | 93.91 | 96.69 | 85.59 | 82.30 |
| Trankit | 99.91 | 67.60 |97.86 | 97.93 |93.03 | 97.50 |85.63 |82.31 |
| Stanza | 100 | 51.65 | 97.38 | 97.75 | 92.09 | 97.42 | 80.34 |76.33 |

#### Perseus

| Library | Tokens | Sentences | UPOS | XPOS | UFeats |Lemmas |UAS |LAS |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| spaCy | 100 | 99.38 | 95.83 | 95.92 | 94.79 | 97.23 | 80.93 | 75.74 |
| Trankit | 99.71 | 98.70 |93.97 | 87.25 |91.66 | 88.52 |83.48 |78.56 |
| Stanza | 99.8 | 98.85 | 92.54 | 85.22 | 91.06 | 88.26 | 78.75 |73.35 |

### Caveat

Metrics, however, can be misleading. This becomes particularly obvious when you work with texts that are not part of the training and evaluation datasets. In addition, greCy's lemmatizers (in all sizes) show lower benchmarks than the above-mentioned NLP libraries, but they have a substantially larger vocabulary than the Stanza and Trankit models because they were trained with a complementary lemma corpus derived from Giuseppe G. A. Celano's [lemmatized corpus](https://github.com/gcelano/LemmatizedAncientGreekXML). This means that greCy's lemmatizers perform better than Trankit and Stanza when processing texts not included in the Perseus and Proiel datasets.

### Future Developments

This project was initiated as part of the [Diogenet Project](https://diogenet.ucsd.edu/), a research initiative that focuses on the automatic extraction of social relations from Ancient Greek texts. As part of this project, greCy will first add, in the near future, a NER pipeline for the identification of entities; later, I hope also to offer a pipeline for the extraction of social relations from Greek texts. These pipelines should contribute to the study of social networks in the ancient world.



Metrics, however, can be misleading. This becomes particularly obvious when you work with texts that are not part of the training dataset. In addition, greCy's lemmatizers (in all sizes) show lower benchmarks but have a substantially larger vocabulary than the Stanza and Trankit models because they were trained with a complementary lemma corpus derived from Giuseppe G. A. Celano's [lemmatized corpus](https://github.com/gcelano/LemmatizedAncientGreekXML). This means that greCy's lemmatizers perform better than Trankit and Stanza when processing texts not included in the Perseus and Proiel datasets.
1 change: 1 addition & 0 deletions assets/UD_Ancient_Greek-PROIEL
Submodule UD_Ancient_Greek-PROIEL added at 861544
1 change: 1 addition & 0 deletions assets/UD_Ancient_Greek-Perseus
Submodule UD_Ancient_Greek-Perseus added at ef8662
30 changes: 13 additions & 17 deletions configs/large.cfg
@@ -11,7 +11,7 @@ seed = 0

[nlp]
lang = "grc"
pipeline = ["tok2vec","morphologizer","tagger","parser","senter","lemmatizer","attribute_ruler"]
pipeline = ["tok2vec","morphologizer","tagger","parser","lemmatizer","attribute_ruler"]
batch_size = 128
disabled = []
before_creation = null
@@ -25,7 +25,6 @@ tokenizer = {"@tokenizers":"spacy.Tokenizer.v1"}
source = "./training/lemmatizer/large/model-best"
replace_listeners = ["model.tok2vec"]


[components.attribute_ruler]
factory = "attribute_ruler"
scorer = {"@scorers":"spacy.attribute_ruler_scorer.v1"}
@@ -68,9 +67,6 @@ nO = null
width = ${components.tok2vec.model.encode.width}
upstream = "tok2vec"

[components.senter]
source = "./training/senter/large/model-best"

[components.tagger]
factory = "tagger"
overwrite = false
@@ -143,7 +139,7 @@ patience = 5000
max_epochs = 0
max_steps = 20000
eval_frequency = 200
frozen_components = ["lemmatizer","senter"]
frozen_components = ["lemmatizer"]
annotating_components = []
before_to_disk = null

@@ -161,18 +157,18 @@ compound = 1.001
t = 0.0


# [training.logger]
# @loggers = "spacy.WandbLogger.v3"
# project_name = "proiel"
# remove_config_values = ["paths.train","paths.dev","corpora.train.path","corpora.dev.path"]
# log_dataset_dir = "./corpus"
# model_log_interval = 1000
# entity = null
# run_name = null

[training.logger]
@loggers = "spacy.ConsoleLogger.v1"
progress_bar = false
@loggers = "spacy.WandbLogger.v3"
project_name = "greCy"
remove_config_values = ["paths.train","paths.dev","corpora.train.path","corpora.dev.path"]
log_dataset_dir = "./corpus"
model_log_interval = 1000
entity = null
run_name = null

# [training.logger]
# @loggers = "spacy.ConsoleLogger.v1"
# progress_bar = false

[training.optimizer]
@optimizers = "Adam.v1"
24 changes: 12 additions & 12 deletions configs/lemmatizer_sm.cfg
@@ -11,7 +11,7 @@ seed = 0
[nlp]
lang = "grc"
pipeline = ["lemmatizer"]
batch_size = 32
batch_size = 64
disabled = []
before_creation = null
after_creation = null
@@ -109,18 +109,18 @@ stop = 1000
compound = 1.001
t = 0.0

[training.logger]
@loggers = "spacy.ConsoleLogger.v1"
progress_bar = false

# [training.logger]
# @loggers = "spacy.WandbLogger.v3"
# project_name = "lemmatizer"
# remove_config_values = ["paths.train","paths.dev","corpora.train.path","corpora.dev.path"]
# log_dataset_dir = "./corpus"
# model_log_interval = 1000
# entity = null
# run_name = null
# @loggers = "spacy.ConsoleLogger.v1"
# progress_bar = false

[training.logger]
@loggers = "spacy.WandbLogger.v3"
project_name = "lemmatizer"
remove_config_values = ["paths.train","paths.dev","corpora.train.path","corpora.dev.path"]
log_dataset_dir = "./corpus"
model_log_interval = 1000
entity = null
run_name = null

[training.optimizer]
@optimizers = "Adam.v1"
24 changes: 12 additions & 12 deletions configs/lemmatizer_trf.cfg
@@ -102,18 +102,18 @@ size = 2000
buffer = 256
get_length = null

[training.logger]
@loggers = "spacy.ConsoleLogger.v1"
progress_bar = false

# [training.logger]
# @loggers = "spacy.WandbLogger.v3"
# project_name = "lemmatizer"
# remove_config_values = ["paths.train","paths.dev","corpora.train.path","corpora.dev.path"]
# log_dataset_dir = "./corpus"
# model_log_interval = 1000
# entity = null
# run_name = null
# @loggers = "spacy.ConsoleLogger.v1"
# progress_bar = false

[training.logger]
@loggers = "spacy.WandbLogger.v3"
project_name = "lemmatizer"
remove_config_values = ["paths.train","paths.dev","corpora.train.path","corpora.dev.path"]
log_dataset_dir = "./corpus"
model_log_interval = 1000
entity = null
run_name = null

[training.optimizer]
@optimizers = "Adam.v1"
@@ -146,4 +146,4 @@ after_init = null

[initialize.components]

[initialize.tokenizer]
[initialize.tokenizer]
22 changes: 11 additions & 11 deletions configs/lemmatizer_vec.cfg
@@ -11,7 +11,7 @@ seed = 0
[nlp]
lang = "grc"
pipeline = ["lemmatizer"]
batch_size = 32
batch_size = 64
disabled = []
before_creation = null
after_creation = null
@@ -109,17 +109,17 @@ stop = 1000
compound = 1.001
t = 0.0

[training.logger]
@loggers = "spacy.ConsoleLogger.v1"
progress_bar = false

# [training.logger]
# @loggers = "spacy.WandbLogger.v3"
# project_name = "lemmatizer"
# remove_config_values = ["paths.train","paths.dev","corpora.train.path","corpora.dev.path"]
# log_dataset_dir = "./corpus"
# model_log_interval = 1000
# entity = null
# @loggers = "spacy.ConsoleLogger.v1"
# progress_bar = false

[training.logger]
@loggers = "spacy.WandbLogger.v3"
project_name = "lemmatizer"
remove_config_values = ["paths.train","paths.dev","corpora.train.path","corpora.dev.path"]
log_dataset_dir = "./corpus"
model_log_interval = 1000
entity = null
run_name = null

[training.optimizer]
29 changes: 13 additions & 16 deletions configs/small.cfg
@@ -10,7 +10,7 @@ seed = 0

[nlp]
lang = "grc"
pipeline = ["tok2vec","morphologizer","tagger","parser","senter","lemmatizer","attribute_ruler"]
pipeline = ["tok2vec","morphologizer","tagger","parser","lemmatizer","attribute_ruler"]
batch_size = 128
disabled = []
before_creation = null
@@ -23,9 +23,6 @@ tokenizer = {"@tokenizers":"spacy.Tokenizer.v1"}
[components.lemmatizer]
source = "./training/lemmatizer/small/model-best"

[components.senter]
source = "./training/senter/small/model-best"

[components.attribute_ruler]
factory = "attribute_ruler"
scorer = {"@scorers":"spacy.attribute_ruler_scorer.v1"}
@@ -134,7 +131,7 @@ patience = 5000
max_epochs = 0
max_steps = 20000
eval_frequency = 200
frozen_components = ["lemmatizer","senter"]
frozen_components = ["lemmatizer"]
annotating_components = []
before_to_disk = null

@@ -151,18 +148,18 @@ stop = 1000
compound = 1.001
t = 0.0

[training.logger]
@loggers = "spacy.ConsoleLogger.v1"
progress_bar = false

# [training.logger]
# @loggers = "spacy.WandbLogger.v3"
# project_name = "proiel"
# remove_config_values = ["paths.train","paths.dev","corpora.train.path","corpora.dev.path"]
# log_dataset_dir = "./corpus"
# model_log_interval = 1000
# entity = null
# run_name = null
# @loggers = "spacy.ConsoleLogger.v1"
# progress_bar = false

[training.logger]
@loggers = "spacy.WandbLogger.v3"
project_name = "greCy"
remove_config_values = ["paths.train","paths.dev","corpora.train.path","corpora.dev.path"]
log_dataset_dir = "./corpus"
model_log_interval = 1000
entity = null
run_name = null

[training.optimizer]
@optimizers = "Adam.v1"
