NLP-Models-Tensorflow: machine learning and TensorFlow deep learning models for NLP problems, with all code simplified inside Jupyter Notebooks (100% notebooks).

MIT License


Objective

Original implementations are quite complex and not very beginner friendly, so I tried to simplify most of them. There are also many implementations of papers whose code has not been released yet. Feel free to use them for your own research!

For models I did not implement from scratch, I attach links to the original GitHub repositories; basically I copied, pasted, and fixed that code for deprecation issues.

TensorFlow version

TensorFlow 1.10 and above only; 2.X versions are not supported.

pip install -r requirements.txt
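A minimal sanity check you can drop at the top of a notebook (illustrative only, not part of the repository) to fail fast if a 2.x TensorFlow is installed:

```python
# Illustrative check: the notebooks assume TensorFlow >= 1.10 and < 2.0.
import tensorflow as tf
from distutils.version import LooseVersion

assert LooseVersion('1.10') <= LooseVersion(tf.__version__) < LooseVersion('2.0'), (
    'Expected TensorFlow >= 1.10 and < 2.0, found %s' % tf.__version__)
```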

Contents

Trained on India news.

Accuracy is after 10 epochs only and is calculated position-wise over words; a sketch of this metric follows the list below.

  1. LSTM Seq2Seq using topic modelling, test accuracy 13.22%
  2. LSTM Seq2Seq + Luong Attention using topic modelling, test accuracy 12.39%
  3. LSTM Seq2Seq + Beam Decoder using topic modelling, test accuracy 10.67%
  4. LSTM Bidirectional + Luong Attention + Beam Decoder using topic modelling, test accuracy 8.29%
  5. Pointer-Generator + Bahdanau, https://github.com/xueyouluo/my_seq2seq, test accuracy 15.51%
  6. Copynet, test accuracy 11.15%
  7. Pointer-Generator + Luong, https://github.com/xueyouluo/my_seq2seq, test accuracy 16.51%
  8. Dilated Seq2Seq, test accuracy 10.88%
  9. Dilated Seq2Seq + Self Attention, test accuracy 11.54%
  10. BERT + Dilated CNN Seq2seq, test accuracy 13.5%
  11. self-attention + Pointer-Generator, test accuracy 4.34%
  12. Dilated-CNN Seq2seq + Pointer-Generator, test accuracy 5.57%
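The word-position accuracy above is not a standard summarization metric; here is a minimal sketch of one way such a position-wise comparison can be computed (illustrative only, not the exact code in the notebooks):

```python
def position_accuracy(predicted_tokens, reference_tokens):
    """Fraction of positions where the predicted token equals the reference token."""
    if not reference_tokens:
        return 0.0
    matches = sum(p == r for p, r in zip(predicted_tokens, reference_tokens))
    return matches / len(reference_tokens)

# Example: two of three positions match.
print(position_accuracy('the cat sat'.split(), 'the cat ran'.split()))  # 0.666...
```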

Trained on Cornell Movie Dialog corpus, accuracy table in chatbot.

  1. Seq2Seq-manual
  2. Seq2Seq-API Greedy
  3. Bidirectional Seq2Seq-manual
  4. Bidirectional Seq2Seq-API Greedy
  5. Bidirectional Seq2Seq-manual + backward Bahdanau + forward Luong
  6. Bidirectional Seq2Seq-API + backward Bahdanau + forward Luong + Stack Bahdanau Luong Attention + Beam Decoder
  7. Bytenet
  8. Capsule layers + LSTM Seq2Seq-API + Luong Attention + Beam Decoder
  9. End-to-End Memory Network
  10. Attention is All you need
  11. Transformer-XL + LSTM
  12. GPT-2 + LSTM
  13. Tacotron + Beam decoder

Complete list (54 notebooks)

  1. Basic cell Seq2Seq-manual
  2. LSTM Seq2Seq-manual
  3. GRU Seq2Seq-manual
  4. Basic cell Seq2Seq-API Greedy
  5. LSTM Seq2Seq-API Greedy
  6. GRU Seq2Seq-API Greedy
  7. Basic cell Bidirectional Seq2Seq-manual
  8. LSTM Bidirectional Seq2Seq-manual
  9. GRU Bidirectional Seq2Seq-manual
  10. Basic cell Bidirectional Seq2Seq-API Greedy
  11. LSTM Bidirectional Seq2Seq-API Greedy
  12. GRU Bidirectional Seq2Seq-API Greedy
  13. Basic cell Seq2Seq-manual + Luong Attention
  14. LSTM Seq2Seq-manual + Luong Attention
  15. GRU Seq2Seq-manual + Luong Attention
  16. Basic cell Seq2Seq-manual + Bahdanau Attention
  17. LSTM Seq2Seq-manual + Bahdanau Attention
  18. GRU Seq2Seq-manual + Bahdanau Attention
  19. LSTM Bidirectional Seq2Seq-manual + Luong Attention
  20. GRU Bidirectional Seq2Seq-manual + Luong Attention
  21. LSTM Bidirectional Seq2Seq-manual + Bahdanau Attention
  22. GRU Bidirectional Seq2Seq-manual + Bahdanau Attention
  23. LSTM Bidirectional Seq2Seq-manual + backward Bahdanau + forward Luong
  24. GRU Bidirectional Seq2Seq-manual + backward Bahdanau + forward Luong
  25. LSTM Seq2Seq-API Greedy + Luong Attention
  26. GRU Seq2Seq-API Greedy + Luong Attention
  27. LSTM Seq2Seq-API Greedy + Bahdanau Attention
  28. GRU Seq2Seq-API Greedy + Bahdanau Attention
  29. LSTM Seq2Seq-API Beam Decoder
  30. GRU Seq2Seq-API Beam Decoder
  31. LSTM Bidirectional Seq2Seq-API + Luong Attention + Beam Decoder
  32. GRU Bidirectional Seq2Seq-API + Luong Attention + Beam Decoder
  33. LSTM Bidirectional Seq2Seq-API + backward Bahdanau + forward Luong + Stack Bahdanau Luong Attention + Beam Decoder
  34. GRU Bidirectional Seq2Seq-API + backward Bahdanau + forward Luong + Stack Bahdanau Luong Attention + Beam Decoder
  35. Bytenet
  36. LSTM Seq2Seq + tf.estimator
  37. Capsule layers + LSTM Seq2Seq-API Greedy
  38. Capsule layers + LSTM Seq2Seq-API + Luong Attention + Beam Decoder
  39. LSTM Bidirectional Seq2Seq-API + backward Bahdanau + forward Luong + Stack Bahdanau Luong Attention + Beam Decoder + Dropout + L2
  40. DNC Seq2Seq
  41. LSTM Bidirectional Seq2Seq-API + Luong Monotonic Attention + Beam Decoder
  42. LSTM Bidirectional Seq2Seq-API + Bahdanau Monotonic Attention + Beam Decoder
  43. End-to-End Memory Network + Basic cell
  44. End-to-End Memory Network + LSTM cell
  45. Attention is all you need
  46. Transformer-XL
  47. Attention is all you need + Beam Search
  48. Transformer-XL + LSTM
  49. GPT-2 + LSTM
  50. CNN Seq2seq
  51. Conv-Encoder + LSTM
  52. Tacotron + Greedy decoder
  53. Tacotron + Beam decoder
  54. Google NMT

Trained on CONLL English Dependency. The train set is used for training, the dev and test sets for testing.

Stackpointer and Biaffine-attention originally from https://github.com/XuezheMax/NeuroNLP2 written in Pytorch.

Accuracies for arcs, types, and root after 15 epochs only; a sketch of these metrics follows the list below.

  1. Bidirectional RNN + CRF + Biaffine, arc accuracy 70.48%, types accuracy 65.18%, root accuracy 66.4%
  2. Bidirectional RNN + Bahdanau + CRF + Biaffine, arc accuracy 70.82%, types accuracy 65.33%, root accuracy 66.77%
  3. Bidirectional RNN + Luong + CRF + Biaffine, arc accuracy 71.22%, types accuracy 65.73%, root accuracy 67.23%
  4. BERT Base + CRF + Biaffine, arc accuracy 64.30%, types accuracy 62.89%, root accuracy 74.19%
  5. Bidirectional RNN + Biaffine Attention + Cross Entropy, arc accuracy 72.42%, types accuracy 63.53%, root accuracy 68.51%
  6. BERT Base + Biaffine Attention + Cross Entropy, arc accuracy 72.85%, types accuracy 67.11%, root accuracy 73.93%
  7. Bidirectional RNN + Stackpointer, arc accuracy 61.88%, types accuracy 48.20%, root accuracy 89.39%
  8. XLNET Base + Biaffine Attention + Cross Entropy, arc accuracy 74.41%, types accuracy 71.37%, root accuracy 73.17%
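A minimal sketch of how the arc, type, and root accuracies can be computed from predicted and gold heads and labels for one sentence (illustrative; not the repository's evaluation code):

```python
def parse_accuracies(pred_heads, pred_types, gold_heads, gold_types):
    """pred_heads/gold_heads: head index per token (0 = root).
    pred_types/gold_types: dependency label per token."""
    n = len(gold_heads)
    arc_acc = sum(p == g for p, g in zip(pred_heads, gold_heads)) / n
    type_acc = sum(p == g for p, g in zip(pred_types, gold_types)) / n
    root_acc = float(pred_heads.index(0) == gold_heads.index(0)) if 0 in pred_heads else 0.0
    return arc_acc, type_acc, root_acc
```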

Trained on CONLL NER.

  1. Bidirectional RNN + CRF, test accuracy 96%
  2. Bidirectional RNN + Luong Attention + CRF, test accuracy 93%
  3. Bidirectional RNN + Bahdanau Attention + CRF, test accuracy 95%
  4. Char Ngrams + Bidirectional RNN + Bahdanau Attention + CRF, test accuracy 96%
  5. Char Ngrams + Bidirectional RNN + Bahdanau Attention + CRF, test accuracy 96%
  6. Char Ngrams + Residual Network + Bahdanau Attention + CRF, test accuracy 69%
  7. Char Ngrams + Attention is All you Need + CRF, test accuracy 90%
  8. BERT, test accuracy 99%
  9. XLNET-Base, test accuracy 99%

Trained on CNN News dataset.

Accuracy is measured with ROUGE-2; a sketch of the metric follows the list below.

  1. LSTM RNN, test accuracy 16.13%
  2. Dilated-CNN, test accuracy 15.54%
  3. Multihead Attention, test accuracy 26.33%
  4. BERT-Base
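ROUGE-2 scores bigram overlap between the generated and reference summaries. A minimal recall-oriented sketch (illustrative; the notebooks may use a full ROUGE implementation):

```python
from collections import Counter

def rouge_2_recall(candidate, reference):
    """Overlapping bigrams divided by the number of reference bigrams."""
    def bigrams(text):
        tokens = text.split()
        return Counter(zip(tokens, tokens[1:]))
    cand, ref = bigrams(candidate), bigrams(reference)
    if not ref:
        return 0.0
    return sum((cand & ref).values()) / sum(ref.values())
```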

Trained on Shakespeare dataset.

  1. Character-wise RNN + LSTM
  2. Character-wise RNN + Beam search
  3. Character-wise RNN + LSTM + Embedding
  4. Word-wise RNN + LSTM
  5. Word-wise RNN + LSTM + Embedding
  6. Character-wise + Seq2Seq + GRU
  7. Word-wise + Seq2Seq + GRU
  8. Character-wise RNN + LSTM + Bahdanau Attention
  9. Character-wise RNN + LSTM + Luong Attention
  10. Word-wise + Seq2Seq + GRU + Beam
  11. Character-wise + Seq2Seq + GRU + Bahdanau Attention
  12. Word-wise + Seq2Seq + GRU + Bahdanau Attention
  13. Character-wise Dilated CNN + Beam search
  14. Transformer + Beam search
  15. Transformer XL + Beam search

Trained on Tatoeba dataset.

  1. Fast-text Char N-Grams

Trained on English-Vietnamese, accuracy table in neural-machine-translation.

  1. Bytenet
  2. Capsule layers + LSTM Seq2Seq-API + Luong Attention + Beam Decoder
  3. End-to-End Memory Network
  4. Attention is All you need
  5. Conv Seq2Seq
  6. BERT + Transformer Decoder
  7. XLNET + Transformer Decoder

Complete list (50 notebooks)

  1. basic-seq2seq-manual
  2. lstm-seq2seq-manual
  3. gru-seq2seq-manual
  4. basic-seq2seq-api-greedy
  5. lstm-seq2seq-api-greedy
  6. gru-seq2seq-greedy
  7. basic-birnn-seq2seq-manual
  8. lstm-birnn-seq2seq-manual
  9. gru-birnn-seq2seq-manual
  10. basic-birnn-seq2seq-greedy
  11. lstm-birnn-seq2seq-greedy
  12. gru-birnn-seq2seq-greedy
  13. basic-seq2seq-luong
  14. lstm-seq2seq-luong
  15. gru-seq2seq-luong
  16. basic-seq2seq-bahdanau
  17. lstm-seq2seq-bahdanau
  18. gru-seq2seq-bahdanau
  19. basic-birnn-seq2seq-bahdanau
  20. lstm-birnn-seq2seq-bahdanau
  21. gru-birnn-seq2seq-bahdanau
  22. basic-birnn-seq2seq-luong
  23. lstm-birnn-seq2seq-luong
  24. gru-birnn-seq2seq-luong
  25. lstm-seq2seq-contrib-greedy-luong
  26. gru-seq2seq-contrib-greedy-luong
  27. lstm-seq2seq-contrib-greedy-bahdanau
  28. gru-seq2seq-contrib-greedy-bahdanau
  29. lstm-seq2seq-contrib-beam-bahdanau
  30. gru-seq2seq-contrib-beam-bahdanau
  31. lstm-birnn-seq2seq-contrib-beam-luong
  32. gru-birnn-seq2seq-contrib-beam-luong
  33. lstm-birnn-seq2seq-contrib-luong-bahdanau-beam
  34. gru-birnn-seq2seq-contrib-luong-bahdanau-beam
  35. bytenet-greedy
  36. capsule-lstm-seq2seq-contrib-greedy
  37. capsule-gru-seq2seq-contrib-greedy
  38. dnc-seq2seq-bahdanau-greedy
  39. dnc-seq2seq-luong-greedy
  40. lstm-birnn-seq2seq-beam-luongmonotic
  41. lstm-birnn-seq2seq-beam-bahdanaumonotic
  42. memory-network-lstm-seq2seq-contrib
  43. attention-is-all-you-need-beam
  44. conv-seq2seq
  45. conv-encoder-lstm-decoder
  46. dilated-conv-seq2seq
  47. gru-birnn-seq2seq-greedy-residual
  48. google-nmt
  49. bert-transformer-decoder-beam
  50. xlnet-base-transformer-decoder-beam

OCR

  1. CNN + LSTM RNN, test accuracy 100%
  2. Im2Latex, test accuracy 100%

Trained on CONLL POS.

  1. Bidirectional RNN + CRF, test accuracy 92%
  2. Bidirectional RNN + Luong Attention + CRF, test accuracy 91%
  3. Bidirectional RNN + Bahdanau Attention + CRF, test accuracy 91%
  4. Char Ngrams + Bidirectional RNN + Bahdanau Attention + CRF, test accuracy 91%
  5. Char Ngrams + Bidirectional RNN + Bahdanau Attention + CRF, test accuracy 91%
  6. Char Ngrams + Residual Network + Bahdanau Attention + CRF, test accuracy 3%
  7. Char Ngrams + Attention is All you Need + CRF, test accuracy 89%
  8. BERT, test accuracy 99%

Trained on bAbI Dataset.

  1. End-to-End Memory Network + Basic cell
  2. End-to-End Memory Network + GRU cell
  3. End-to-End Memory Network + LSTM cell
  4. Dynamic Memory

Trained on Cornell Movie-Dialogs Corpus.

  1. BERT

Trained on Toronto speech dataset.

  1. Tacotron, https://github.com/Kyubyong/tacotron_asr, test accuracy 77.09%
  2. BiRNN LSTM, test accuracy 84.66%
  3. BiRNN Seq2Seq + Luong Attention + Cross Entropy, test accuracy 87.86%
  4. BiRNN Seq2Seq + Bahdanau Attention + Cross Entropy, test accuracy 89.28%
  5. BiRNN Seq2Seq + Bahdanau Attention + CTC, test accuracy 86.35%
  6. BiRNN Seq2Seq + Luong Attention + CTC, test accuracy 80.30%
  7. CNN RNN + Bahdanau Attention, test accuracy 80.23%
  8. Dilated CNN RNN, test accuracy 31.60%
  9. Wavenet, test accuracy 75.11%
  10. Deep Speech 2, test accuracy 81.40%
  11. Wav2Vec Transfer learning BiRNN LSTM, test accuracy 83.24%

  1. BERT-Base
  2. XLNET-Base
  3. BERT-Base Fast
  4. BERT-Base accurate

Trained on SQUAD Dataset.

  1. BERT, {"exact_match": 77.57805108798486, "f1": 86.18327335287402}

Trained on English Lemmatization.

  1. LSTM + Seq2Seq + Beam
  2. GRU + Seq2Seq + Beam
  3. LSTM + BiRNN + Seq2Seq + Beam
  4. GRU + BiRNN + Seq2Seq + Beam
  5. DNC + Seq2Seq + Greedy
  6. BiRNN + Bahdanau + Copynet

  1. Pretrained Glove
  2. GRU VAE-seq2seq-beam TF-probability
  3. LSTM VAE-seq2seq-beam TF-probability
  4. GRU VAE-seq2seq-beam + Bahdanau Attention TF-probability
  5. VAE + Deterministic Bahdanau Attention, https://github.com/HareeshBahuleyan/tf-var-attention
  6. VAE + VAE Bahdanau Attention, https://github.com/HareeshBahuleyan/tf-var-attention
  7. BERT-Base + Nucleus Sampling
  8. XLNET-Base + Nucleus Sampling

Trained on English sentiment dataset, accuracy table in text-classification.

  1. Basic cell RNN
  2. Bidirectional RNN
  3. LSTM cell RNN
  4. GRU cell RNN
  5. LSTM RNN + Conv2D
  6. K-max Conv1d
  7. LSTM RNN + Conv1D + Highway
  8. LSTM RNN with Attention
  9. Neural Turing Machine
  10. BERT
  11. Dynamic Memory Network
  12. XLNET
  13. ALBERT
  14. GPT-2

Complete list (77 notebooks)

  1. Basic cell RNN
  2. Basic cell RNN + Hinge
  3. Basic cell RNN + Huber
  4. Basic cell Bidirectional RNN
  5. Basic cell Bidirectional RNN + Hinge
  6. Basic cell Bidirectional RNN + Huber
  7. LSTM cell RNN
  8. LSTM cell RNN + Hinge
  9. LSTM cell RNN + Huber
  10. LSTM cell Bidirectional RNN
  11. LSTM cell Bidirectional RNN + Huber
  12. LSTM cell RNN + Dropout + L2
  13. GRU cell RNN
  14. GRU cell RNN + Hinge
  15. GRU cell RNN + Huber
  16. GRU cell Bidirectional RNN
  17. GRU cell Bidirectional RNN + Hinge
  18. GRU cell Bidirectional RNN + Huber
  19. LSTM RNN + Conv2D
  20. K-max Conv1d
  21. LSTM RNN + Conv1D + Highway
  22. LSTM RNN + Basic Attention
  23. LSTM Dilated RNN
  24. Layer-Norm LSTM cell RNN
  25. Only Attention Neural Network
  26. Multihead-Attention Neural Network
  27. Neural Turing Machine
  28. LSTM Seq2Seq
  29. LSTM Seq2Seq + Luong Attention
  30. LSTM Seq2Seq + Bahdanau Attention
  31. LSTM Seq2Seq + Beam Decoder
  32. LSTM Bidirectional Seq2Seq
  33. Pointer Net
  34. LSTM cell RNN + Bahdanau Attention
  35. LSTM cell RNN + Luong Attention
  36. LSTM cell RNN + Stack Bahdanau Luong Attention
  37. LSTM cell Bidirectional RNN + backward Bahdanau + forward Luong
  38. Bytenet
  39. Fast-slow LSTM
  40. Siamese Network
  41. LSTM Seq2Seq + tf.estimator
  42. Capsule layers + RNN LSTM
  43. Capsule layers + LSTM Seq2Seq
  44. Capsule layers + LSTM Bidirectional Seq2Seq
  45. Nested LSTM
  46. LSTM Seq2Seq + Highway
  47. Triplet loss + LSTM
  48. DNC (Differentiable Neural Computer)
  49. ConvLSTM
  50. Temporal Conv Net
  51. Batch-all Triplet-loss + LSTM
  52. Fast-text
  53. Gated Convolution Network
  54. Simple Recurrent Unit
  55. LSTM Hierarchical Attention Network
  56. Bidirectional Transformers
  57. Dynamic Memory Network
  58. Entity Network
  59. End-to-End Memory Network
  60. BOW-Chars Deep sparse Network
  61. Residual Network using Atrous CNN
  62. Residual Network using Atrous CNN + Bahdanau Attention
  63. Deep pyramid CNN
  64. Transformer-XL
  65. Transfer learning GPT-2 345M
  66. Quasi-RNN
  67. Tacotron
  68. Slice GRU
  69. Slice GRU + Bahdanau
  70. Wavenet
  71. Transfer learning BERT Base
  72. Transfer learning XL-net Large
  73. LSTM BiRNN global Max and average pooling
  74. Transfer learning BERT Base drop 6 layers
  75. Transfer learning BERT Large drop 12 layers
  76. Transfer learning XL-net Base
  77. Transfer learning ALBERT

Trained on First Quora Dataset Release: Question Pairs. A sketch of the contrastive loss used by the first notebooks follows the list below.

  1. BiRNN + Contrastive loss, test accuracy 76.50%
  2. Dilated CNN + Contrastive loss, test accuracy 72.98%
  3. Transformer + Contrastive loss, test accuracy 73.48%
  4. Dilated CNN + Cross entropy, test accuracy 72.27%
  5. Transformer + Cross entropy, test accuracy 71.1%
  6. Transfer learning BERT base + Cross entropy, test accuracy 90%
  7. Transfer learning XLNET base + Cross entropy, test accuracy 77.39%
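The contrastive-loss notebooks learn a distance between the two questions in a pair. A minimal sketch of the standard margin-based contrastive loss on TF 1.x tensors (illustrative, assuming Euclidean distance and a margin of 1.0; not the exact notebook code):

```python
import tensorflow as tf

def contrastive_loss(left, right, label, margin=1.0):
    """left, right: [batch, dim] sentence encodings; label: 1.0 for duplicates, 0.0 otherwise.
    Pulls duplicate pairs together and pushes non-duplicates apart up to `margin`."""
    distance = tf.sqrt(tf.reduce_sum(tf.square(left - right), axis=1) + 1e-12)
    positive = label * tf.square(distance)
    negative = (1.0 - label) * tf.square(tf.maximum(margin - distance, 0.0))
    return tf.reduce_mean(positive + negative)
```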

Trained on Toronto speech dataset.

  1. Tacotron, https://github.com/Kyubyong/tacotron
  2. CNN Seq2seq + Dilated CNN vocoder
  3. Seq2Seq + Bahdanau Attention
  4. Seq2Seq + Luong Attention
  5. Dilated CNN + Monotonic Attention + Dilated CNN vocoder
  6. Dilated CNN + Self Attention + Dilated CNN vocoder
  7. Deep CNN + Monotonic Attention + Dilated CNN vocoder
  8. Deep CNN + Self Attention + Dilated CNN vocoder

Trained on Malaysia news.

  1. TAT-LSTM
  2. TAV-LSTM
  3. MTA-LSTM
  4. Dilated CNN Seq2seq

Trained on English sentiment dataset.

  1. LDA2Vec
  2. BERT Attention
  3. XLNET Attention

Trained on random books.

  1. Skip-thought Vector
  2. Residual Network using Atrous CNN
  3. Residual Network using Atrous CNN + Bahdanau Attention

Trained on English sentiment dataset. A skipgram + noise-contrastive-estimation sketch follows the list below.

  1. Word Vector using CBOW sample softmax
  2. Word Vector using CBOW noise contrastive estimation
  3. Word Vector using skipgram sample softmax
  4. Word Vector using skipgram noise contrastive estimation
  5. Supervised Embedded
  6. Triplet-loss + LSTM
  7. LSTM Auto-Encoder
  8. Batch-All Triplet-loss LSTM
  9. Fast-text
  10. ELMO (biLM)
  11. Triplet-loss + BERT
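A minimal sketch of the skipgram + noise contrastive estimation setup (notebook 4) on the TF 1.x API; vocabulary, embedding, and sample sizes are placeholder assumptions:

```python
import tensorflow as tf

vocab_size, embed_size, num_sampled = 10000, 128, 64  # placeholder sizes

center_words = tf.placeholder(tf.int32, [None])       # [batch]
context_words = tf.placeholder(tf.int32, [None, 1])   # [batch, 1]

embeddings = tf.get_variable('embeddings', [vocab_size, embed_size])
nce_weights = tf.get_variable('nce_weights', [vocab_size, embed_size])
nce_biases = tf.get_variable('nce_biases', [vocab_size], initializer=tf.zeros_initializer())

embedded = tf.nn.embedding_lookup(embeddings, center_words)
loss = tf.reduce_mean(tf.nn.nce_loss(
    weights=nce_weights, biases=nce_biases,
    labels=context_words, inputs=embedded,
    num_sampled=num_sampled, num_classes=vocab_size))
optimizer = tf.train.AdamOptimizer().minimize(loss)
```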

Visualization

  1. Attention heatmap on Bahdanau Attention
  2. Attention heatmap on Luong Attention
  3. BERT attention, https://github.com/hsm207/bert_attn_viz
  4. XLNET attention

Trained on Toronto speech dataset.

  1. Dilated CNN

Attention

Score-function sketches for Bahdanau and Luong follow the list below.

  1. Bahdanau
  2. Luong
  3. Hierarchical
  4. Additive
  5. Soft
  6. Attention-over-Attention
  7. Bahdanau API
  8. Luong API
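The Bahdanau and Luong notebooks above differ mainly in the score between the decoder query and the encoder outputs: Bahdanau is additive, Luong is multiplicative. A minimal sketch of the two score functions on TF 1.x tensors (illustrative shapes and variable names, not the notebook code):

```python
import tensorflow as tf

def bahdanau_score(query, keys, num_units):
    """Additive score. query: [batch, units], keys: [batch, time, units] -> [batch, time]."""
    w_query = tf.layers.dense(query, num_units, use_bias=False)   # [batch, num_units]
    w_keys = tf.layers.dense(keys, num_units, use_bias=False)     # [batch, time, num_units]
    v = tf.get_variable('attention_v', [num_units])
    return tf.reduce_sum(v * tf.tanh(w_keys + tf.expand_dims(w_query, 1)), axis=2)

def luong_score(query, keys):
    """Multiplicative (dot-product) score. Same shapes as above."""
    return tf.squeeze(tf.matmul(keys, tf.expand_dims(query, 2)), axis=2)
```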

  1. Markov chatbot
  2. Decomposition summarization (3 notebooks)
