From c9a954d82e8eb48e7fd9f3e9f6806efe89c595df Mon Sep 17 00:00:00 2001 From: cclarke411 Date: Wed, 24 Jul 2019 12:44:37 -0400 Subject: [PATCH] Add files via upload --- .../Unit-4/Lesson 4/Unsupervised_NN_NLP.ipynb | 504 ++++++++++++++++++ 1 file changed, 504 insertions(+) create mode 100644 Data Science Bootcamp/Unit-4/Lesson 4/Unsupervised_NN_NLP.ipynb diff --git a/Data Science Bootcamp/Unit-4/Lesson 4/Unsupervised_NN_NLP.ipynb b/Data Science Bootcamp/Unit-4/Lesson 4/Unsupervised_NN_NLP.ipynb new file mode 100644 index 0000000..2373302 --- /dev/null +++ b/Data Science Bootcamp/Unit-4/Lesson 4/Unsupervised_NN_NLP.ipynb @@ -0,0 +1,504 @@ +{ + "cells": [ + { + "cell_type": "code", + "execution_count": 1, + "metadata": { + "run_control": { + "frozen": false, + "read_only": false + } + }, + "outputs": [ + { + "name": "stderr", + "output_type": "stream", + "text": [ + "[nltk_data] Downloading package gutenberg to\n", + "[nltk_data] C:\\Users\\clyde\\AppData\\Roaming\\nltk_data...\n", + "[nltk_data] Package gutenberg is already up-to-date!\n" + ] + }, + { + "data": { + "text/plain": [ + "True" + ] + }, + "execution_count": 1, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "%matplotlib inline\n", + "import numpy as np\n", + "import pandas as pd\n", + "import scipy\n", + "import sklearn\n", + "import spacy\n", + "import matplotlib.pyplot as plt\n", + "import seaborn as sns\n", + "import re\n", + "from nltk.corpus import gutenberg, stopwords\n", + "import nltk\n", + "nltk.download('gutenberg')" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "run_control": { + "frozen": false, + "read_only": false + } + }, + "source": [ + "## Intro to word2vec\n", + "\n", + "The most common unsupervised neural network approach for NLP is word2vec, a shallow neural network model for converting words to vectors using distributed representation: Each word is represented by many neurons, and each neuron is involved in representing many words. At the highest level of abstraction, word2vec assigns a vector of random values to each word. For a word W, it looks at the words that are near W in the sentence, and shifts the values in the word vectors such that the vectors for words near that W are closer to the W vector, and vectors for words not near W are farther away from the W vector. With a large enough corpus, this will eventually result in words that often appear together having vectors that are near one another, and words that rarely or never appear together having vectors that are far away from each other. Then, using the vectors, similarity scores can be computed for each pair of words by taking the cosine of the vectors. \n", + "\n", + "This may sound quite similar to the Latent Semantic Analysis approach you just learned. The conceptual difference is that LSA creates vector representations of sentences based on the words in them, while word2vec creates representations of individual words, based on the words around them." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## What is it good for?\n", + "\n", + "Word2vec is useful for any time when computers need to parse requests written by humans. The problem with human communication is that there are so many different ways to communicate the same concept. It's easy for us, as humans, to know that \"the silverware\" and \"the utensils\" can refer to the same thing. Computers can't do that unless we teach them, and this can be a real chokepoint for human/computer interactions. 
If you've ever played a text adventure game (think _Colossal Cave Adventure_ or _Zork_), you may have encountered the following scenario:" + ] + }, + { + "cell_type": "raw", + "metadata": {}, + "source": [ + "GAME: You are on a forest path north of the field. A cave leads into a granite butte to the north.\n", + "A thick hedge blocks the way to the west.\n", + "A hefty stick lies on the ground.\n", + "\n", + "YOU: pick up stick \n", + "\n", + "GAME: You don't know how to do that. \n", + "\n", + "YOU: lift stick \n", + "\n", + "GAME: You don't know how to do that. \n", + "\n", + "YOU: take stick \n", + "\n", + "GAME: You don't know how to do that. \n", + "\n", + "YOU: grab stick \n", + "\n", + "GAME: You grab the stick from the ground and put it in your bag. " + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "And your brain explodes from frustration. A text adventure game that incorporates a properly trained word2vec model would have vectors for \"pick up\", \"lift\", and \"take\" that are close to the vector for \"grab\" and therefore could accept those other verbs as synonyms so you could move ahead faster. In more practical applications, word2vec and other similar algorithms are what help a search engine return the best results for your query and not just the ones that contain the exact words you used. In fact, search is a better example, because not only does the search engine need to understand your request, it also needs to match it to web pages that were _also written by humans_ and therefore _also use idiosyncratic language_.\n", + "\n", + "Humans, man. \n", + "\n", + "So how does it work?\n", + "\n", + "## Generating vectors: Multiple algorithms\n", + "\n", + "In considering the relationship between a word and its surrounding words, word2vec has two options that are the inverse of one another:\n", + "\n", + " * _Continuous Bag of Words_ (CBOW): the identity of a word is predicted using the words near it in a sentence.\n", + " * _Skip-gram_: The identities of words are predicted from the word they surround. Skip-gram seems to work better for larger corpuses.\n", + "\n", + "For the sentence \"Terry Gilliam is a better comedian than a director\", if we focus on the word \"comedian\" then CBOW will try to predict \"comedian\" using \"is\", \"a\", \"better\", \"than\", \"a\", and \"director\". Skip-gram will try to predict \"is\", \"a\", \"better\", \"than\", \"a\", and \"director\" using the word \"comedian\". In practice, for CBOW the vector for \"comedian\" will be pulled closer to the other words, while for skip-gram the vectors for the other words will be pulled closer to \"comedian\". \n", + "\n", + "In addition to moving the vectors for nearby words closer together, each time a word is processed some vectors are moved farther away. Word2vec has two approaches to \"pushing\" vectors apart:\n", + " \n", + " * _Negative sampling_: Like it says on the tin, each time a word is pulled toward some neighbors, the vectors for a randomly chosen small set of other words are pushed away.\n", + " * _Hierarchical softmax_: Every neighboring word is pulled closer or farther from a subset of words chosen based on a tree of probabilities.\n", + "\n", + "## What is similarity? Word2vec strengths and weaknesses\n", + "\n", + "Keep in mind that word2vec operates on the assumption that frequent proximity indicates similarity, but words can be \"similar\" in various ways. 
They may be conceptually similar (\"royal\", \"king\", and \"throne\"), but they may also be functionally similar (\"tremendous\" and \"negligible\" are both common modifiers of \"size\"). Here is a more detailed exploration, [with examples](https://quomodocumque.wordpress.com/2016/01/15/messing-around-with-word2vec/), of what \"similarity\" means in word2vec.\n", + "\n", + "One cool thing about word2vec is that it can identify similarities between words _that never occur near one another in the corpus_. For example, consider these sentences:\n", + "\n", + "\"The dog played with an elastic ball.\"\n", + "\"Babies prefer the ball that is bouncy.\"\n", + "\"I wanted to find a ball that's elastic.\"\n", + "\"Tracy threw a bouncy ball.\"\n", + "\n", + "\"Elastic\" and \"bouncy\" are similar in meaning in the text but don't appear in the same sentence. However, both appear near \"ball\". In the process of nudging the vectors around so that \"elastic\" and \"bouncy\" are both near the vector for \"ball\", the words also become nearer to one another and their similarity can be detected.\n", + "\n", + "For a while after it was introduced, [no one was really sure why word2vec worked as well as it did](https://arxiv.org/pdf/1402.3722v1.pdf) (see last paragraph of the linked paper). A few years later, some additional math was developed to explain word2vec and similar models. If you are comfortable with both math and \"academese\", have a lot of time on your hands, and want to take a deep dive into the inner workings of word2vec, [check out this paper](https://arxiv.org/pdf/1502.03520v7.pdf) from 2016. \n", + "\n", + "One of the draws of word2vec when it first came out was that the vectors could be used to convert analogies (\"king\" is to \"queen\" as \"man\" is to \"woman\", for example) into mathematical expressions (\"king\" + \"woman\" - \"man\" = ?) and solve for the missing element (\"queen\"). This is kinda nifty.\n", + "\n", + "A drawback of word2vec is that it works best with a corpus that is at least several billion words long. Even though the word2vec algorithm is speedy, this is a lot of data and takes a long time! Our example dataset (three Jane Austen novels, truncated to 900,000 characters) is tiny by comparison, which allows us to run it in the notebook without overwhelming the kernel, but probably won't give great results. Still, let's try it!\n", + "\n", + "There are a few word2vec implementations in Python, but the general consensus is that the easiest one to use is in [gensim](https://radimrehurek.com/gensim/models/word2vec.html). Now is a good time to `pip install gensim` if you don't have it yet.\n",
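+    "\n",
+    "To make the analogy arithmetic above concrete, here is a minimal sketch of what \"king\" + \"woman\" - \"man\" = ? looks like as vector math. The three-dimensional vectors below are made up purely for illustration (real word2vec vectors are learned from a corpus and have hundreds of dimensions), and candidates are ranked with the same cosine similarity described earlier:\n",
+    "\n",
+    "```python\n",
+    "import numpy as np\n",
+    "\n",
+    "# Toy embeddings; these values are invented for this sketch, not taken from a trained model.\n",
+    "vectors = {\n",
+    "    'king':   np.array([0.9, 0.8, 0.1]),\n",
+    "    'queen':  np.array([0.9, 0.1, 0.8]),\n",
+    "    'man':    np.array([0.1, 0.9, 0.1]),\n",
+    "    'woman':  np.array([0.1, 0.2, 0.9]),\n",
+    "    'throne': np.array([0.8, 0.5, 0.4]),\n",
+    "}\n",
+    "\n",
+    "def cosine(u, v):\n",
+    "    # Cosine similarity: 1 means same direction, 0 means unrelated (orthogonal).\n",
+    "    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))\n",
+    "\n",
+    "# \"king\" + \"woman\" - \"man\" = ?\n",
+    "target = vectors['king'] + vectors['woman'] - vectors['man']\n",
+    "\n",
+    "# Rank the remaining words by similarity to the target vector.\n",
+    "candidates = {word: cosine(target, vec) for word, vec in vectors.items()\n",
+    "              if word not in ('king', 'woman', 'man')}\n",
+    "print(max(candidates, key=candidates.get))  # 'queen' with these toy values\n",
+    "```\n",
+    "\n",
+    "gensim wraps this same idea up as `model.wv.most_similar(positive=['king', 'woman'], negative=['man'])`, which is the call we'll use on the Austen model below."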
+ ] + }, + { + "cell_type": "code", + "execution_count": 2, + "metadata": {}, + "outputs": [], + "source": [ + "# Utility function to clean text.\n", + "def text_cleaner(text):\n", + " \n", + " # Visual inspection shows spaCy does not recognize the double dash '--'.\n", + " # Better get rid of it now!\n", + " text = re.sub(r'--',' ',text)\n", + " \n", + " # Get rid of headings in square brackets.\n", + " text = re.sub(\"[\\[].*?[\\]]\", \"\", text)\n", + " \n", + " # Get rid of chapter titles.\n", + " text = re.sub(r'Chapter \\d+','',text)\n", + " \n", + " # Get rid of extra whitespace.\n", + " text = ' '.join(text.split())\n", + " \n", + " return text[0:900000]\n", + "\n", + "\n", + "# Import all the Austen in the Project Gutenberg corpus.\n", + "austen = \"\"\n", + "for novel in ['persuasion','emma','sense']:\n", + " work = gutenberg.raw('austen-' + novel + '.txt')\n", + " austen = austen + work\n", + "\n", + "# Clean the data.\n", + "austen_clean = text_cleaner(austen)" + ] + }, + { + "cell_type": "code", + "execution_count": 3, + "metadata": {}, + "outputs": [], + "source": [ + "# Parse the data. This can take some time.\n", + "nlp = spacy.load(\"en_core_web_sm\")\n", + "austen_doc = nlp(austen_clean)" + ] + }, + { + "cell_type": "code", + "execution_count": 4, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "['lady', 'russell', 'steady', 'age', 'character', 'extremely', 'provide', 'thought', 'second', 'marriage', 'need', 'apology', 'public', 'apt', 'unreasonably', 'discontent', 'woman', 'marry', 'sir', 'walter', 'continue', 'singleness', 'require', 'explanation']\n", + "We have 9298 sentences and 900000 tokens.\n" + ] + } + ], + "source": [ + "# Organize the parsed doc into sentences, while filtering out punctuation\n", + "# and stop words, and converting words to lower case lemmas.\n", + "sentences = []\n", + "for sentence in austen_doc.sents:\n", + " sentence = [\n", + " token.lemma_.lower()\n", + " for token in sentence\n", + " if not token.is_stop\n", + " and not token.is_punct\n", + " ]\n", + " sentences.append(sentence)\n", + "\n", + "print(sentences[20])\n", + "print('We have {} sentences and {} tokens.'.format(len(sentences), len(austen_clean)))" + ] + }, + { + "cell_type": "code", + "execution_count": 5, + "metadata": { + "run_control": { + "frozen": false, + "read_only": false + } + }, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "done!\n" + ] + } + ], + "source": [ + "import gensim\n", + "from gensim.models import word2vec\n", + "\n", + "model = word2vec.Word2Vec(\n", + " sentences,\n", + " workers=4, # Number of threads to run in parallel (if your computer does parallel processing).\n", + " min_count=10, # Minimum word count threshold.\n", + " window=6, # Number of words around target word to consider.\n", + " sg=0, # Use CBOW because our corpus is small.\n", + " sample=1e-3 , # Penalize frequent words.\n", + " size=300, # Word vector length.\n", + " hs=1 # Use hierarchical softmax.\n", + ")\n", + "\n", + "print('done!')" + ] + }, + { + "cell_type": "code", + "execution_count": 6, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "[('benwick', 0.9431061148643494), ('goddard', 0.9265433549880981), ('musgrove', 0.9173018932342529), ('wentworth', 0.913292646408081), ('harville', 0.908571720123291), ('clay', 0.9013205766677856), ('weston', 0.8494889140129089), ('colonel', 0.8485346436500549), ('hall', 0.8392452001571655), ('charles', 
0.8278615474700928)]\n", + "0.92341024\n" + ] + }, + { + "name": "stderr", + "output_type": "stream", + "text": [ + "C:\\Users\\Clyde\\Anaconda3\\lib\\site-packages\\ipykernel_launcher.py:11: DeprecationWarning: Call to deprecated `doesnt_match` (Method will be removed in 4.0.0, use self.wv.doesnt_match() instead).\n", + " # This is added back by InteractiveShellApp.init_path()\n", + "C:\\Users\\Clyde\\Anaconda3\\lib\\site-packages\\gensim\\models\\keyedvectors.py:877: FutureWarning: arrays to stack must be passed as a \"sequence\" type such as list or tuple. Support for non-sequence iterables such as generators is deprecated as of NumPy 1.16 and will raise an error in the future.\n", + " vectors = vstack(self.word_vec(word, use_norm=True) for word in used_words).astype(REAL)\n" + ] + }, + { + "name": "stdout", + "output_type": "stream", + "text": [ + "breakfast\n" + ] + } + ], + "source": [ + "# List of words in model.\n", + "vocab = model.wv.vocab.keys()\n", + "\n", + "print(model.wv.most_similar(positive=['lady', 'man'], negative=['woman']))\n", + "\n", + "# Similarity is calculated using the cosine, so again 1 is total\n", + "# similarity and 0 is no similarity.\n", + "print(model.wv.similarity('mr', 'mrs'))\n", + "\n", + "# One of these things is not like the other...\n", + "print(model.doesnt_match(\"breakfast marriage dinner lunch\".split()))" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "run_control": { + "frozen": false, + "read_only": false + } + }, + "source": [ + "Clearly this model is not great – while some words given above might possibly fill in the analogy woman:lady::man:?, most answers likely make little sense. You'll notice as well that re-running the model likely gives you different results, indicating random chance plays a large role here.\n", + "\n", + "We do, however, get a nice result on \"marriage\" being dissimilar to \"breakfast\", \"lunch\", and \"dinner\". \n", + "\n", + "## Drill 0\n", + "\n", + "Take a few minutes to modify the hyperparameters of this model and see how its answers change. Can you wrangle any improvements?" 
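+    "\n",
+    "If you want a concrete starting point (one possible configuration to try, not a recommended answer), a common first experiment is to switch from CBOW with hierarchical softmax to skip-gram with negative sampling and widen the context window. A sketch, reusing the `sentences` list built above:\n",
+    "\n",
+    "```python\n",
+    "# Illustrative settings only; they are not tuned for this corpus.\n",
+    "model_sg = word2vec.Word2Vec(\n",
+    "    sentences,\n",
+    "    workers=4,     # Parallel worker threads.\n",
+    "    min_count=10,  # Ignore words that appear fewer than 10 times.\n",
+    "    window=10,     # Wider context window than the model above.\n",
+    "    sg=1,          # Skip-gram instead of CBOW.\n",
+    "    hs=0,          # Turn off hierarchical softmax...\n",
+    "    negative=5,    # ...and use negative sampling instead.\n",
+    "    sample=1e-3,   # Downsample very frequent words.\n",
+    "    size=300,      # Word vector length.\n",
+    "    iter=10        # Extra passes over this small corpus.\n",
+    ")\n",
+    "\n",
+    "print(model_sg.wv.most_similar(positive=['lady', 'man'], negative=['woman']))\n",
+    "```\n",
+    "\n",
+    "With a corpus this small, expect the rankings to shift from run to run no matter which settings you pick."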
+ ] + }, + { + "cell_type": "code", + "execution_count": 7, + "metadata": { + "run_control": { + "frozen": false, + "read_only": false + } + }, + "outputs": [], + "source": [ + "\n", + "model = word2vec.Word2Vec(\n", + " sentences,\n", + " workers=4, # Number of threads to run in parallel (if your computer does parallel processing).\n", + " min_count=15, # Minimum word count threshold.\n", + " window=6, # Number of words around target word to consider.\n", + " sg=0, # Use CBOW because our corpus is small.\n", + " sample=1e-3 , # Penalize frequent words.\n", + " size=300, # Word vector length.\n", + " hs=1 # Use hierarchical softmax.\n", + ")" + ] + }, + { + "cell_type": "code", + "execution_count": 8, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "[('hall', 0.9488364458084106), ('dalrymple', 0.9423099756240845), ('room', 0.9206984043121338), ('croft', 0.9152531623840332), ('smith', 0.9136525392532349), ('colonel', 0.9126960039138794), ('kellynch', 0.9102038145065308), ('anne', 0.9078179001808167), ('future', 0.9044466018676758), ('manner', 0.9033240079879761)]\n" + ] + } + ], + "source": [ + "print(model.wv.most_similar(positive=['lady', 'man'], negative=['woman']))" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "run_control": { + "frozen": false, + "read_only": false + } + }, + "source": [ + "# Example word2vec applications\n", + "\n", + "You can use the vectors from word2vec as features in other models, or try to gain insight from the vector compositions themselves.\n", + "\n", + "Here are some neat things people have done with word2vec:\n", + "\n", + " * [Visualizing word embeddings in Jane Austen's Pride and Prejudice](http://blogger.ghostweather.com/2014/11/visualizing-word-embeddings-in-pride.html). Skip to the bottom to see a _truly honest_ account of this data scientist's process.\n", + "\n", + " * [Tracking changes in Dutch Newspapers' associations with words like 'propaganda' and 'alien' from 1950 to 1990](https://www.slideshare.net/MelvinWevers/concepts-through-time-tracing-concepts-in-dutch-newspaper-discourse-using-sequential-word-vector-spaces).\n", + "\n", + " * [Helping customers find clothing items similar to a given item but differing on one or more characteristics](http://multithreaded.stitchfix.com/blog/2015/03/11/word-is-worth-a-thousand-vectors/)." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Drill 1: Word2Vec on 100B+ words\n", + "\n", + "As we mentioned, word2vec really works best on a big corpus, but it can take half a day to clean such a corpus and run word2vec on it. Fortunately, there are word2vec models available that have already been trained on _really_ big corpora. They are big files, but you can download a [pretrained model of your choice here](https://github.com/3Top/word2vec-api). At minimum, the ones built with word2vec (check the \"Architecture\" column) should load smoothly using an appropriately modified version of the code below, and you can play to your heart's content.\n", + "\n", + "Because the models are so large, however, you may run into memory problems or crash the kernel. If you can't get a pretrained model to run locally, check out this [interactive web app of the Google News model](https://rare-technologies.com/word2vec-tutorial/#bonus_app) instead.\n", + "\n", + "However you access it, play around with a pretrained model. Is there anything interesting you're able to pull out about analogies, similar words, or words that don't match? 
Write up a quick note about your tinkering and discuss it with your mentor during your next session." + ] + }, + { + "cell_type": "code", + "execution_count": 9, + "metadata": { + "run_control": { + "frozen": false, + "read_only": false + } + }, + "outputs": [], + "source": [ + "# Load Google's pre-trained Word2Vec model.\n", + "model = gensim.models.KeyedVectors.load_word2vec_format ('https://s3.amazonaws.com/dl4j-distribution/GoogleNews-vectors-negative300.bin.gz', binary=True)" + ] + }, + { + "cell_type": "code", + "execution_count": 10, + "metadata": {}, + "outputs": [ + { + "name": "stderr", + "output_type": "stream", + "text": [ + "C:\\Users\\Clyde\\Anaconda3\\lib\\site-packages\\ipykernel_launcher.py:1: DeprecationWarning: Call to deprecated `wv` (Attribute will be removed in 4.0.0, use self instead).\n", + " \"\"\"Entry point for launching an IPython kernel.\n" + ] + } + ], + "source": [ + "vocab = model.wv.vocab.keys()" + ] + }, + { + "cell_type": "code", + "execution_count": 12, + "metadata": { + "run_control": { + "frozen": false, + "read_only": false + } + }, + "outputs": [ + { + "data": { + "text/plain": [ + "[('queen', 0.7118191719055176)]" + ] + }, + "execution_count": 12, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "# Play around with your pretrained model here.\n", + "model.most_similar(positive=['woman', 'king'], negative=['man'], topn=1)\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [] + } + ], + "metadata": { + "hide_input": false, + "kernelspec": { + "display_name": "Python 3", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.7.1" + }, + "toc": { + "colors": { + "hover_highlight": "#DAA520", + "running_highlight": "#FF0000", + "selected_highlight": "#FFD700" + }, + "moveMenuLeft": true, + "nav_menu": { + "height": "96px", + "width": "252px" + }, + "navigate_menu": true, + "number_sections": true, + "sideBar": true, + "threshold": 4, + "toc_cell": false, + "toc_section_display": "block", + "toc_window_display": false + } + }, + "nbformat": 4, + "nbformat_minor": 2 +}