diff --git a/Week-06-Image-Classification/Image-Classification-Exercise.ipynb b/Week-06-Image-Classification/Image-Classification-Exercise.ipynb
new file mode 100644
index 000000000..b4749502e
--- /dev/null
+++ b/Week-06-Image-Classification/Image-Classification-Exercise.ipynb
@@ -0,0 +1,476 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "# Image Classification\n",
+    "In this exercise, you will be classifying images of clothing. The dataset you will be using is called `fashion-small.csv`.\n",
+ "\n",
+    "### Remember our main steps motto: _isbe_.\n",
+ "1. i - Inspect and explore data.\n",
+ "2. s - Select and engineer features.\n",
+ "3. b - Build and train model.\n",
+ "4. e - Evaluate model.\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 5,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Import your libraries\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "# 1. Inspect and explore our data\n",
+ "1. Load the `fashion-small.csv` data into a pandas dataframe. \n",
+ "2. Inspect / remove null values. \n",
+ "3. Inspect / remove duplicate rows. \n",
+    "4. Print out the number of examples in each class, a.k.a. the class balances. \n",
+ "5. Visualize at least one image."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# 1. Load data into a pandas dataframe. \n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### Inspect for null values"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# 2. Inspect / remove null values. \n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### Check for duplicates"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# 3. Inspect / remove duplicate rows. \n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+    "### Let's look at our class balances"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+    "# 4. Print out the number of examples in each class, a.k.a. the class balances. \n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "# Visualize one image"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## First, we need to create a list that is just our pixel columns"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+    "# Grab all of the columns\n",
+    "\n",
+    "\n",
+    "# Convert the columns Index object into a regular list\n",
+ "\n",
+ "\n",
+ "# Sanity check that it is now just a list.\n",
+ "\n",
+ "\n",
+ "# Remove just the label column from the list\n",
+ "\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Extract one row and reshape it to its original 28x28 shape and plot the reshaped image."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Select just the pixel columns and convert them to a numpy array by using .values. \n",
+ "\n",
+ "\n",
+ "# Select just one image from all the images\n",
+ "\n",
+ "\n",
+ "# Reshape the image to be a 28x28 matrix (original format of image)\n",
+ "\n",
+ "\n",
+ "# Plot reshaped image"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "___\n",
+ "# 2. Select and engineer our features.\n",
+ "1. Create our `selected_features` that is the list of the columns we are going to use as our `X` data. \n",
+ "2. Define our `X` and `y` data. \n",
+    "3. Train-test-split our `X` and `y` data"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### 1. Create our `selected_features` that is the list of the columns we are going to use as our `X` data. "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# DOING THIS AGAIN JUST FOR PRACTICE \n",
+ "\n",
+    "# Grab all of the columns \n",
+ "selected_features = ???\n",
+ "\n",
+ "\n",
+    "# Convert the columns Index object into a regular list\n",
+ "\n",
+ "\n",
+ "# Sanity check that it is now just a list.\n",
+ "\n",
+ "\n",
+ "# Remove the label column from the list\n",
+    "# This happens 'in place'\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### 2. Define our `X` and `y`"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# 2. Define our `X` and `y` data. \n",
+ "\n",
+ "X = df[???]\n",
+ "\n",
+ "y = df[???]"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### 3. Train-test-split our `X` and `y` data"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# 3. Train-test-split our `X` and `y` data\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "____\n",
+ "# 3. Build and train our model\n",
+    "1. Initialize an empty Support Vector Classifier model.\n",
+ "2. Fit that model with our training data. "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+    "# 1. Initialize an empty Support Vector Classifier model.\n",
+    "from sklearn import svm\n",
+    "\n",
+    "# Initialize our Support Vector Classifier"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# 2. Fit that model with our training data. \n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "___\n",
+ "# 4. Evaluate our model\n",
+ "1. Get a baseline accuracy score.\n",
+ "2. Make new predictions using our test data. \n",
+ "3. Print the classification report. \n",
+ "4. Plot the confusion matrix of our predicted results. "
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### 1. Get a baseline accuracy score."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "### 1. Get and print a baseline accuracy score.\n",
+ "accuracy = ???\n",
+ "print(\"Accuracy %f\" % accuracy)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### 2. Make new predictions using our test data. "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "### 2. Make new predictions using our test data. \n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### 3. Print the classification report. \n",
+    "Use the sklearn helper function for this."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "### 3. Print the classification report. \n",
+ "\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### 4. Plot the confusion matrix of our predicted results. "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 8,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "### 4. Plot the confusion matrix of our predicted results.\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+    "### Looking at the confusion matrix, which two clothing items were misclassified as each other the most?"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 4,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+     "The classifier confused YOUR_ANSWER_HERE and YOUR_ANSWER_HERE the most.\n"
+ ]
+ }
+ ],
+ "source": [
+    "print('The classifier confused YOUR_ANSWER_HERE and YOUR_ANSWER_HERE the most.')"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "___\n",
+    "### Build a function whose inputs are an unfitted model, X data, and y data, and that runs the whole pipeline and prints a classification report and confusion matrix."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+    "### Build a function whose inputs are an unfitted model, X data, and y data, and that runs the whole pipeline and prints a classification report and confusion matrix. \n",
+ "def build_and_eval_model(model, X, y, random_state=23):\n",
+ " ???"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Run LogisticRegression, RandomForest, and Multinomial Naive Bayes through the function you just built and compare the results. \n",
+    "1. Which classifier did the best, and which did the worst?"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# LogisticRegression\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# RandomForest\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# MultinomialNB\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 3,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ " YOUR_ANSWER_HERE model did the best and YOUR_ANSWER_HERE model did the worst.\n"
+ ]
+ }
+ ],
+ "source": [
+ "print('YOUR_ANSWER_HERE model did the best and YOUR_ANSWER_HERE model did the worst.')"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "# Dope Extra Credit\n",
+    "### This is going to take some Python trickery to get working. The files are large, in all sorts of strange directories, and in color. This will challenge not only your data science skills, but also your general 'hacker' skills.\n",
+    "\n",
+    "* Use this data, provided by Intel, to build a classifier for color images organized in directories: \n",
+    "* [https://www.kaggle.com/datasets/puneet6060/intel-image-classification](https://www.kaggle.com/datasets/puneet6060/intel-image-classification)\n",
+    "* If you have any issues, just Slack me. I have Slack on my phone and love hearing your battle stories."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": []
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "Python 3 (ipykernel)",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.8.5"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 2
+}
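For reviewers who want to sanity-check the exercise end to end, the `build_and_eval_model` helper described above can be sketched roughly as follows. This is a minimal sketch assuming scikit-learn; the function name and `random_state=23` default come from the exercise, while the default split ratio and the returned accuracy are my additions, not part of an official solution.

```python
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, classification_report, confusion_matrix

def build_and_eval_model(model, X, y, random_state=23):
    # Train-test-split the X and y data
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, random_state=random_state
    )

    # Fit the (unfitted) model on the training data
    model.fit(X_train, y_train)

    # Predict on the held-out test data
    y_pred = model.predict(X_test)

    # Print the classification report and confusion matrix
    print(classification_report(y_test, y_pred))
    print(confusion_matrix(y_test, y_pred))

    # Return accuracy so different classifiers are easy to compare
    return accuracy_score(y_test, y_pred)
```

Any of the classifiers named in the exercise (SVC, LogisticRegression, RandomForestClassifier, MultinomialNB) can then be passed in unfitted and compared on the same split.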
diff --git a/Week-06-Image-Classification/Image-Classification-Lecture.ipynb b/Week-06-Image-Classification/Image-Classification-Lecture.ipynb
new file mode 100644
index 000000000..b06a1b054
--- /dev/null
+++ b/Week-06-Image-Classification/Image-Classification-Lecture.ipynb
@@ -0,0 +1,1247 @@
+{
+ "cells": [
+ {
+ "cell_type": "code",
+ "execution_count": 1,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# For data management\n",
+ "import pandas as pd\n",
+ "import numpy as np\n",
+ "\n",
+ "# Import classifiers\n",
+ "from sklearn.svm import SVC\n",
+ "from sklearn.ensemble import RandomForestClassifier\n",
+ "from sklearn.linear_model import LogisticRegression\n",
+ "\n",
+    "# metrics contains our plot_confusion_matrix and classification_report\n",
+ "from sklearn import metrics\n",
+ "\n",
+    "# Helper function for splitting data\n",
+ "from sklearn.model_selection import train_test_split\n",
+ "\n",
+ "# IF YOU GET AN ERROR HERE run: pip install scikit-image\n",
+ "from skimage import io\n",
+ "from skimage.color import rgb2gray\n",
+ "\n",
+ "\n",
+ "# For plotting\n",
+ "import matplotlib.pyplot as plt\n",
+ "%matplotlib inline \n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "# Image Classification\n",
+ "\n",
+    "### Remember our main steps motto: _isbe_.\n",
+ "1. i - Inspect and explore data.\n",
+ "2. s - Select and engineer features.\n",
+ "3. b - Build and train model.\n",
+ "4. e - Evaluate model.\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "# 1. Inspect and explore our data"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 2,
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "text/plain": [
+ " label pixel0 pixel1 pixel2 pixel3 pixel4 pixel5 pixel6 pixel7 \\\n",
+ "0 1 0 0 0 0 0 0 0 0 \n",
+ "1 0 0 0 0 0 0 0 0 0 \n",
+ "2 9 0 0 0 0 0 0 0 0 \n",
+ "3 9 0 0 0 0 0 0 0 0 \n",
+ "4 0 0 0 0 0 0 0 0 0 \n",
+ "\n",
+ " pixel8 ... pixel774 pixel775 pixel776 pixel777 pixel778 pixel779 \\\n",
+ "0 0 ... 0 0 0 0 0 0 \n",
+ "1 0 ... 0 0 0 0 0 0 \n",
+ "2 0 ... 0 0 0 0 0 0 \n",
+ "3 0 ... 0 0 0 0 0 0 \n",
+ "4 0 ... 0 0 0 0 0 0 \n",
+ "\n",
+ " pixel780 pixel781 pixel782 pixel783 \n",
+ "0 0 0 0 0 \n",
+ "1 0 0 0 0 \n",
+ "2 0 0 0 0 \n",
+ "3 0 0 0 0 \n",
+ "4 0 0 0 0 \n",
+ "\n",
+ "[5 rows x 785 columns]"
+ ]
+ },
+ "execution_count": 2,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "df = pd.read_csv('data/digits-small.csv')\n",
+ "df.head()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### Inspect for null values"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 3,
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "text/plain": [
+ "label 0\n",
+ "pixel0 0\n",
+ "pixel1 0\n",
+ "pixel2 0\n",
+ "pixel3 0\n",
+ " ..\n",
+ "pixel779 0\n",
+ "pixel780 0\n",
+ "pixel781 0\n",
+ "pixel782 0\n",
+ "pixel783 0\n",
+ "Length: 785, dtype: int64"
+ ]
+ },
+ "execution_count": 3,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "df.isnull().sum()"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 4,
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "text/plain": [
+ "0"
+ ]
+ },
+ "execution_count": 4,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "df.isnull().sum().sum()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### Check for duplicates"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 5,
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "text/plain": [
+ "0"
+ ]
+ },
+ "execution_count": 5,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "# Check for duplicates\n",
+ "df.duplicated().sum()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+    "### Let's look at our class balances"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 6,
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "text/plain": [
+ "(1 440\n",
+ " 0 439\n",
+ " 7 437\n",
+ " 9 433\n",
+ " 2 425\n",
+ " 3 415\n",
+ " 6 410\n",
+ " 5 408\n",
+ " 4 398\n",
+ " 8 395\n",
+ " Name: label, dtype: int64,\n",
+ " 1 0.104762\n",
+ " 0 0.104524\n",
+ " 7 0.104048\n",
+ " 9 0.103095\n",
+ " 2 0.101190\n",
+ " 3 0.098810\n",
+ " 6 0.097619\n",
+ " 5 0.097143\n",
+ " 4 0.094762\n",
+ " 8 0.094048\n",
+ " Name: label, dtype: float64)"
+ ]
+ },
+ "execution_count": 6,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "df.label.value_counts(), df.label.value_counts(normalize=True)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+    "# Let's visualize one of the images..."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 7,
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "text/plain": [
+ " label pixel0 pixel1 pixel2 pixel3 pixel4 pixel5 pixel6 pixel7 \\\n",
+ "0 1 0 0 0 0 0 0 0 0 \n",
+ "1 0 0 0 0 0 0 0 0 0 \n",
+ "2 9 0 0 0 0 0 0 0 0 \n",
+ "3 9 0 0 0 0 0 0 0 0 \n",
+ "4 0 0 0 0 0 0 0 0 0 \n",
+ "\n",
+ " pixel8 ... pixel774 pixel775 pixel776 pixel777 pixel778 pixel779 \\\n",
+ "0 0 ... 0 0 0 0 0 0 \n",
+ "1 0 ... 0 0 0 0 0 0 \n",
+ "2 0 ... 0 0 0 0 0 0 \n",
+ "3 0 ... 0 0 0 0 0 0 \n",
+ "4 0 ... 0 0 0 0 0 0 \n",
+ "\n",
+ " pixel780 pixel781 pixel782 pixel783 \n",
+ "0 0 0 0 0 \n",
+ "1 0 0 0 0 \n",
+ "2 0 0 0 0 \n",
+ "3 0 0 0 0 \n",
+ "4 0 0 0 0 \n",
+ "\n",
+ "[5 rows x 785 columns]"
+ ]
+ },
+ "execution_count": 7,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "df.head()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### First, we need to create a list that is just our pixel columns"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 8,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+      "<class 'pandas.core.indexes.base.Index'>\n",
+      "<class 'list'>\n"
+ ]
+ }
+ ],
+ "source": [
+ "# This grabs all of the columns \n",
+ "pixel_cols = df.columns\n",
+ "\n",
+ "# This is currently a pandas index object\n",
+ "print(type(pixel_cols))\n",
+ "\n",
+ "# Convert the pandas index object into a regular list\n",
+ "pixel_cols = list(pixel_cols)\n",
+ "\n",
+ "# Sanity check that it is now just a list.\n",
+ "print(type(pixel_cols))\n",
+ "\n",
+ "# Remove the label column from the list\n",
+ "# So all that remains are the pixel columns\n",
+    "# This happens 'in place'\n",
+ "pixel_cols.remove('label')\n",
+ "\n",
+ "#pixel_cols"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "# Extract one row and reshape it to its original shape."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 9,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+      "<class 'numpy.ndarray'>\n"
+ ]
+ },
+ {
+ "data": {
+ "text/plain": [
+ ""
+ ]
+ },
+ "execution_count": 9,
+ "metadata": {},
+ "output_type": "execute_result"
+ },
+ {
+ "data": {
+ "image/png": "iVBORw0KGgoAAAANSUhEUgAAApcAAAKTCAYAAABM/SOHAAAAOXRFWHRTb2Z0d2FyZQBNYXRwbG90bGliIHZlcnNpb24zLjMuMiwgaHR0cHM6Ly9tYXRwbG90bGliLm9yZy8vihELAAAACXBIWXMAAA9hAAAPYQGoP6dpAAAiSUlEQVR4nO3df4zXhX348ddH0Y/ojgsUuR/huF0bTNdCSCoWJK1iNy5eFlKEJlI3g1mmRcEEiXZDu3jpFm4h0bYLrVv9g8kmqcnSWlNdLatyaq0LZZoaWwQjlptyQYm9A8qOUN/7Y1/v6wEiB683n8/B45F8Ej4/eH1e5t13ffq+H59KURRFAABAgvNqvQAAAGcPcQkAQBpxCQBAGnEJAEAacQkAQBpxCQBAGnEJAECacbVe4GjvvfdevPXWW9HQ0BCVSqXW6wAAnPOKooj9+/dHa2trnHfeia9N1l1cvvXWW9HW1lbrNQAAOEpfX19MnTr1hK+puy+LNzQ01HoFAACO42Q6re7i0pfCAQDq08l0Wt3FJQAAY5e4BAAgjbgEACCNuAQAII24BAAgTWlx+Z3vfCc6OjrioosuissvvzyeffbZst4KAIA6UUpcPvLII7Fq1aq455574sUXX4zPf/7z0dXVFbt37y7j7QAAqBOVoiiK7KFz5syJz3zmM/HAAw8MP/ZHf/RHsWjRoujp6Rnx2qGhoRgaGhq+Pzg46BN6AADq0MDAQEyYMOGEr0m/cnn48OHYtm1bdHZ2jni8s7Mznn/++WNe39PTE42NjcM3YQkAMHalx+U777wTv//976OpqWnE401NTdHf33/M69esWRMDAwPDt76+vuyVAAA4Q8aVNfjojwcqiuK4HxlUrVajWq2WtQYAAGdQ+pXLyZMnx/nnn3/MVcq9e/ceczUTAICzS3pcXnjhhXH55ZfH5s2bRzy+efPmmDdvXvbbAQBQR0r5svjq1avjxhtvjNmzZ8eVV14Z3/3ud2P37t2xfPnyMt4OAIA6UUpcXn/99bFv3774+te/Hnv27IkZM2bEE088Ee3t7WW8HQAAdaKU33N5OgYHB6OxsbHWawAAcJSa/J5LAADOXeISAIA04hIAgDTiEgCANOISAIA04hIAgDTiEgCANOISAIA04hIAgDTiEgCANOISAIA04hIAgDTiEgCANOISAIA04hIAgDTiEgCANOISAIA04hIAgDTiEgCANOISAIA04hIAgDTiEgCANOISAIA04hIAgDTiEgCANOISAIA04hIAgDTiEgCANOISAIA04hIAgDTiEgCANOISAIA04hIAgDTiEgCANOISAIA04hIAgDTiEgCANOISAIA04hIAgDTiEgCANOISAIA04hIAgDTiEgCANOISAIA04hIAgDTiEgCANOISAIA04hIAgDTiEgCANOISAIA04hIAgDTiEgCANOISAIA04hIAgDTiEgCANOISAIA04hIAgDTiEgCANOISAIA04hIAgDTiEgCANOISAIA04hIAgDTiEgCANOISAIA04hIAgDTiEgCANOISAIA04hIAgDTiEgCANOISAIA04hIAgDTiEgCANOISAIA04hIAgDTiEgCANOISAIA04hIAgDTiEgCANOISAIA04hIAgDTiEgCANOISAIA04hIAgDTiEgCANOISAIA04hIAgDTiEgCANOISAIA04hIAgDTiEgCANONqvQBQWx//+MdrvcKovP7667VeoS7MnTu3lLmLFi0qZW5ra2spc5csWVLK3F/96lelzL322mtLmbtv375S5sKpcOUSAIA04hIAgDTiEgCANOISAIA04hIAgDTiEgCANOISAIA06XHZ3d0dlUplxK25uTn7bQAAqEOl/BL1T3/60/Ef//Efw/fPP//8Mt4GAIA6U0pcjhs37qSvVg4NDcXQ0NDw/cHBwTJWAgDgDCjley537twZra2t0dHREUuXLj3hx7X19PREY2Pj8K2tra2MlQAAOAPS
43LOnDmxcePGePLJJ+PBBx+M/v7+mDdv3od+7umaNWtiYGBg+NbX15e9EgAAZ0j6l8W7urqG/zxz5sy48sor4xOf+EQ89NBDsXr16mNeX61Wo1qtZq8BAEANlP6riC655JKYOXNm7Ny5s+y3AgCgxkqPy6Ghofj1r38dLS0tZb8VAAA1lh6Xd955Z/T29sauXbviP//zP+NLX/pSDA4OxrJly7LfCgCAOpP+PZf//d//HV/+8pfjnXfeiUsvvTTmzp0bL7zwQrS3t2e/FQAAdSY9Lr/3ve9ljwQAYIzw2eIAAKQRlwAApBGXAACkKeWzxYGx40Qfz3oumTVrVilz//Iv/7KUubfcckspc8eNK+dfC2V9+tr+/ftLmVvWRxGXte91111XytznnnuulLlvv/12KXOpD65cAgCQRlwCAJBGXAIAkEZcAgCQRlwCAJBGXAIAkEZcAgCQRlwCAJBGXAIAkEZcAgCQRlwCAJBGXAIAkEZcAgCQRlwCAJBGXAIAkEZcAgCQRlwCAJBGXAIAkEZcAgCQRlwCAJBGXAIAkGZcrRcAzk7nn39+KXOXLl1aytx/+Zd/KWVuURSlzD148GApc3t6ekqZ+61vfauUuYsWLSpl7je+8Y0xNffWW28tZe5dd91Vytz77ruvlLnUB1cuAQBIIy4BAEgjLgEASCMuAQBIIy4BAEgjLgEASCMuAQBIIy4BAEgjLgEASCMuAQBIIy4BAEgjLgEASCMuAQBIIy4BAEgjLgEASCMuAQBIIy4BAEgjLgEASCMuAQBIIy4BAEgjLgEASFMpiqKo9RIfNDg4GI2NjbVeAzhNc+fOLWXuz372s1LmViqVUub+0z/9Uylzv/GNb5Qyd8eOHaXMLcusWbNKmftf//Vfpcwty1tvvVXK3AULFpQyd/v27aXMpXwDAwMxYcKEE77GlUsAANKISwAA0ohLAADSiEsAANKISwAA0ohLAADSiEsAANKISwAA0ohLAADSiEsAANKISwAA0ohLAADSiEsAANKISwAA0ohLAADSiEsAANKISwAA0ohLAADSiEsAANKISwAA0ohLAADSjKv1AkBtNTQ0lDL34YcfLmVupVIpZe7GjRtLmXvrrbeWMpf/c+edd5Yyt6z/nb355pulzP3bv/3bUuZu3769lLmc3Vy5BAAgjbgEACCNuAQAII24BAAgjbgEACCNuAQAII24BAAgjbgEACCNuAQAII24BAAgjbgEACCNuAQAII24BAAgjbgEACCNuAQAII24BAAgjbgEACCNuAQAII24BAAgjbgEACCNuAQAIM24Wi8A1NbHP/7xUub+4R/+YSlzi6IoZe6tt95aytyxZuLEiaXMveOOO0qZu3Tp0lLm/uY3vyll7m233VbK3H//938vZS6cClcuAQBIIy4BAEgjLgEASCMuAQBIIy4BAEgjLgEASCMuAQBIM+q4fOaZZ2LhwoXR2toalUolHn300RHPF0UR3d3d0draGuPHj4/58+fHK6+8krUvAAB1bNRxefDgwZg1a1asX7/+uM+vW7cu7r///li/fn1s3bo1mpubY8GCBbF///7TXhYAgPo26k/o6erqiq6uruM+VxRFfPOb34x77rknFi9eHBERDz30UDQ1NcWmTZviK1/5yjF/Z2hoKIaGhobvDw4OjnYlAADqROr3XO7atSv6+/ujs7Nz+LFqtRpXX311PP/888f9Oz09PdHY2Dh8a2try1wJAIAzKDUu+/v7IyKiqalpxONNTU3Dzx1tzZo1MTAwMHzr6+vLXAkAgDNo1F8WPxmVSmXE/aIojnnsfdVqNarVahlrAABwhqVeuWxubo6IOOYq5d69e4+5mgkAwNknNS47Ojqiubk5Nm/ePPzY4cOHo7e3N+bNm5f5VgAA1KFRf1n8wIED8dprrw3f37VrV7z00ksxadKkmDZtWqxatSrWrl0b06dPj+nTp8fatWvj4osvjhtuuCF1cQAA6s+o4/IXv/hFXHPNNcP3
V69eHRERy5Yti3/+53+Or371q3Ho0KG47bbb4t133405c+bET37yk2hoaMjbGgCAujTquJw/f34URfGhz1cqleju7o7u7u7T2QsAgDHIZ4sDAJBGXAIAkEZcAgCQppRfog4w1lx00UWlzD106FApcz/2sY+VMvdHP/pRKXPnzJlTytzf/OY3pcz90z/901Lm/upXvyplLtQTVy4BAEgjLgEASCMuAQBIIy4BAEgjLgEASCMuAQBIIy4BAEgjLgEASCMuAQBIIy4BAEgjLgEASCMuAQBIIy4BAEgjLgEASCMuAQBIIy4BAEgjLgEASCMuAQBIIy4BAEgjLgEASCMuAQBIM67WCwC19frrr5cy96c//Wkpc//4j/+4lLk7duwoZe7dd99dyty/+Iu/KGXuZz/72VLm/uxnPytl7s0331zK3O3bt5cyF84FrlwCAJBGXAIAkEZcAgCQRlwCAJBGXAIAkEZcAgCQRlwCAJBGXAIAkEZcAgCQRlwCAJBGXAIAkEZcAgCQRlwCAJBGXAIAkEZcAgCQRlwCAJBGXAIAkEZcAgCQRlwCAJBGXAIAkEZcAgCQplIURVHrJT5ocHAwGhsba70GcJomTpxYytze3t5S5s6cObOUuXX2f7Ef6c033yxlbltbWylzgTNrYGAgJkyYcMLXuHIJAEAacQkAQBpxCQBAGnEJAEAacQkAQBpxCQBAGnEJAEAacQkAQBpxCQBAGnEJAEAacQkAQBpxCQBAGnEJAEAacQkAQBpxCQBAGnEJAEAacQkAQBpxCQBAGnEJAEAacQkAQBpxCQBAmnG1XgA4O7377rulzJ0zZ04pcw8ePFjK3KIoSplblt/+9relzJ04cWIpc8v63xlw6ly5BAAgjbgEACCNuAQAII24BAAgjbgEACCNuAQAII24BAAgjbgEACCNuAQAII24BAAgjbgEACCNuAQAII24BAAgjbgEACCNuAQAII24BAAgjbgEACCNuAQAII24BAAgjbgEACCNuAQAIM24Wi8AnJ0uueSSUuauW7eulLll2bFjRylzJ06cWMrcT33qU6XM/dKXvlTK3AcffLCUucCpc+USAIA04hIAgDTiEgCANOISAIA04hIAgDTiEgCANOISAIA0o47LZ555JhYuXBitra1RqVTi0UcfHfH8TTfdFJVKZcRt7ty5WfsCAFDHRh2XBw8ejFmzZsX69es/9DXXXntt7NmzZ/j2xBNPnNaSAACMDaP+hJ6urq7o6uo64Wuq1Wo0Nzef1LyhoaEYGhoavj84ODjalQAAqBOlfM/lli1bYsqUKXHZZZfFzTffHHv37v3Q1/b09ERjY+Pwra2trYyVAAA4A9LjsqurKx5++OF46qmn4r777outW7fGF77whRFXJz9ozZo1MTAwMHzr6+vLXgkAgDNk1F8W/yjXX3/98J9nzJgRs2fPjvb29nj88cdj8eLFx7y+Wq1GtVrNXgMAgBoo/VcRtbS0RHt7e+zcubPstwIAoMZKj8t9+/ZFX19ftLS0lP1WAADU2Ki/LH7gwIF47bXXhu/v2rUrXnrppZg0aVJMmjQpuru7Y8mSJdHS0hJvvPFG3H333TF58uS47rrrUhcHAKD+jDouf/GLX8Q111wzfH/16tUREbFs2bJ44IEH4uWXX46NGzfGb3/722hpaYlrrrkmHnnkkWhoaMjbGgCAujTquJw/f34URfGhzz/55JOntRAAAGOXzxYHACCNuAQAII24BAAgTfovUQeI+P8/7Jdt+fLlpcx98803S5k7d+7cUub++Z//eSlz/+Ef/qGUuUuWLCll7oMPPljKXODUuXIJAEAacQkAQBpxCQBAGnEJAEAacQkAQBpxCQBAGnEJAEAacQkAQBpxCQBAGnEJAEAacQkAQBpxCQBAGnEJAEAacQkAQBpxCQBAGnEJAEAacQkAQBpxCQBAGnEJAEAacQkAQBpxCQBAmkpRFEWtl/igwcHBaGxsrPUacM649dZbS5n77W9/
u5S5b775Zilz29raSplblokTJ5Yyd8eOHaXMPXz4cClzP/WpT5Uyd2BgoJS5MNYNDAzEhAkTTvgaVy4BAEgjLgEASCMuAQBIIy4BAEgjLgEASCMuAQBIIy4BAEgjLgEASCMuAQBIIy4BAEgjLgEASCMuAQBIIy4BAEgjLgEASCMuAQBIIy4BAEgjLgEASCMuAQBIIy4BAEgjLgEASCMuAQBIM67WCwAn5+tf/3opc++4445S5j788MOlzF25cmUpc8eaI0eOlDJ3//79pcz92Mc+VsrcceP8awzqjSuXAACkEZcAAKQRlwAApBGXAACkEZcAAKQRlwAApBGXAACkEZcAAKQRlwAApBGXAACkEZcAAKQRlwAApBGXAACkEZcAAKQRlwAApBGXAACkEZcAAKQRlwAApBGXAACkEZcAAKQRlwAApBlX6wXgbPNXf/VXpcz92te+Vsrcl156qZS5t9xySylzDx06VMrcsaa7u7uUue3t7aXM/da3vlXK3H379pUyFzh1rlwCAJBGXAIAkEZcAgCQRlwCAJBGXAIAkEZcAgCQRlwCAJBGXAIAkEZcAgCQRlwCAJBGXAIAkEZcAgCQRlwCAJBGXAIAkEZcAgCQRlwCAJBGXAIAkEZcAgCQRlwCAJBGXAIAkEZcAgCQplIURVHrJT5ocHAwGhsba70G54CJEyeWMvfVV18tZe7Q0FApcxcsWFDK3O3bt5cyd6y55ZZbSpm7bt26Uua+/fbbpcy96qqrSpm7Z8+eUuYCxzcwMBATJkw44WtcuQQAII24BAAgjbgEACCNuAQAII24BAAgjbgEACCNuAQAIM2o4rKnpyeuuOKKaGhoiClTpsSiRYuO+Z1+RVFEd3d3tLa2xvjx42P+/PnxyiuvpC4NAEB9GlVc9vb2xooVK+KFF16IzZs3x5EjR6KzszMOHjw4/Jp169bF/fffH+vXr4+tW7dGc3NzLFiwIPbv35++PAAA9WXcaF784x//eMT9DRs2xJQpU2Lbtm1x1VVXRVEU8c1vfjPuueeeWLx4cUREPPTQQ9HU1BSbNm2Kr3zlK8fMHBoaGvHJI4ODg6fyzwEAQB04re+5HBgYiIiISZMmRUTErl27or+/Pzo7O4dfU61W4+qrr47nn3/+uDN6enqisbFx+NbW1nY6KwEAUEOnHJdFUcTq1avjc5/7XMyYMSMiIvr7+yMioqmpacRrm5qahp872po1a2JgYGD41tfXd6orAQBQY6P6svgHrVy5Mn75y1/Gc889d8xzlUplxP2iKI557H3VajWq1eqprgEAQB05pSuXt99+ezz22GPx9NNPx9SpU4cfb25ujog45irl3r17j7maCQDA2WdUcVkURaxcuTK+//3vx1NPPRUdHR0jnu/o6Ijm5ubYvHnz8GOHDx+O3t7emDdvXs7GAADUrVF9WXzFihWxadOm+OEPfxgNDQ3DVygbGxtj/PjxUalUYtWqVbF27dqYPn16TJ8+PdauXRsXX3xx3HDDDaX8AwAAUD9GFZcPPPBARETMnz9/xOMbNmyIm266KSIivvrVr8ahQ4fitttui3fffTfmzJkTP/nJT6KhoSFlYQAA6teo4rIoio98TaVSie7u7uju7j7VnQAAGKN8tjgAAGnEJQAAacQlAABpTvmXqMNY97Wvfa2UuZMnTy5l7vLly0uZu3379lLmjjVlfZ/4XXfdVcrcn//856XMvfHGG0uZu2fPnlLmAvXHlUsAANKISwAA0ohLAADSiEsAANKISwAA0ohLAADSiEsAANKISwAA0ohLAADSiEsAANKISwAA0ohLAADSiEsAANKISwAA0ohLAADSiEsAANKISwAA0ohLAADSiEsAANKISwAA0ohLAADSVIqiKGq9xAcNDg5GY2NjrdfgHPDiiy+WMvfVV18tZe7SpUtLmTt+/PhS5i5atKiUuUuWLCll7uLFi0uZu3HjxlLm3nXXXaXMffvtt0uZC5wdBgYGYsKECSd8jSuX
AACkEZcAAKQRlwAApBGXAACkEZcAAKQRlwAApBGXAACkEZcAAKQRlwAApBGXAACkEZcAAKQRlwAApBGXAACkEZcAAKQRlwAApBGXAACkEZcAAKQRlwAApBGXAACkEZcAAKQRlwAApBlX6wXgbFOpVEqZu2TJklLm/tmf/Vkpc//kT/6klLkHDhwoZe7ixYtLmfujH/2olLlHjhwpZS7A6XLlEgCANOISAIA04hIAgDTiEgCANOISAIA04hIAgDTiEgCANOISAIA04hIAgDTiEgCANOISAIA04hIAgDTiEgCANOISAIA04hIAgDTiEgCANOISAIA04hIAgDTiEgCANOISAIA04hIAgDTjar0A1MpPf/rTUubecccdpcydN29eKXP/7d/+rZS5s2fPLmXujh07SpkLQA5XLgEASCMuAQBIIy4BAEgjLgEASCMuAQBIIy4BAEgjLgEASCMuAQBIIy4BAEgjLgEASCMuAQBIIy4BAEgjLgEASCMuAQBIIy4BAEgjLgEASCMuAQBIIy4BAEgjLgEASCMuAQBIIy4BAEhTKYqiqPUSHzQ4OBiNjY21XgMAgKMMDAzEhAkTTvgaVy4BAEgjLgEASCMuAQBIIy4BAEgjLgEASCMuAQBIIy4BAEgzqrjs6emJK664IhoaGmLKlCmxaNGiePXVV0e85qabbopKpTLiNnfu3NSlAQCoT6OKy97e3lixYkW88MILsXnz5jhy5Eh0dnbGwYMHR7zu2muvjT179gzfnnjiidSlAQCoT+NG8+If//jHI+5v2LAhpkyZEtu2bYurrrpq+PFqtRrNzc0nNXNoaCiGhoaG7w8ODo5mJQAA6shpfc/lwMBARERMmjRpxONbtmyJKVOmxGWXXRY333xz7N2790Nn9PT0RGNj4/Ctra3tdFYCAKCGTvmzxYuiiC9+8Yvx7rvvxrPPPjv8+COPPBJ/8Ad/EO3t7bFr1674m7/5mzhy5Ehs27YtqtXqMXOOd+VSYAIA1J+T+WzxU47LFStWxOOPPx7PPfdcTJ069UNft2fPnmhvb4/vfe97sXjx4o+cOzg4GI2NjaeyEgAAJTqZuBzV91y+7/bbb4/HHnssnnnmmROGZURES0tLtLe3x86dO0/lrQAAGENGFZdFUcTtt98eP/jBD2LLli3R0dHxkX9n37590dfXFy0tLae8JAAAY8OofqBnxYoV8a//+q+xadOmaGhoiP7+/ujv749Dhw5FRMSBAwfizjvvjJ///OfxxhtvxJYtW2LhwoUxefLkuO6660r5BwAAoI4UoxARx71t2LChKIqi+N3vfld0dnYWl156aXHBBRcU06ZNK5YtW1bs3r37pN9jYGDgQ9/Hzc3Nzc3Nzc2tdreBgYGPbLlT/oGesviBHgCA+nQyP9Djs8UBAEgjLgEASCMuAQBIIy4BAEgjLgEASCMuAQBIIy4BAEgjLgEASCMuAQBIIy4BAEgjLgEASCMuAQBIIy4BAEgjLgEASCMuAQBIIy4BAEgjLgEASCMuAQBIIy4BAEgjLgEASCMuAQBIIy4BAEgjLgEASCMuAQBIIy4BAEgjLgEASCMuAQBIIy4BAEgjLgEASCMuAQBIIy4BAEgjLgEASCMuAQBIIy4BAEgjLgEASCMuAQBIIy4BAEgjLgEASCMuAQBIIy4BAEgjLgEASCMuAQBIIy4BAEgjLgEASCMuAQBIIy4BAEhTd3FZFEWtVwAA4DhOptPqLi73799f6xUAADiOk+m0SlFnlwrfe++9eOutt6KhoSEqlcoJXzs4OBhtbW3R19cXEyZMOEMbcroct7HJcRubHLexyXEbm87m41YURezfvz9aW1vjvPNOfG1y3Bna6aSdd955MXXq1FH9nQkTJpx1B/Fc4LiNTY7b2OS4jU2O29h0th63xsbGk3pd3X1ZHACAsUtcAgCQZkzHZbVajXvvvTeq1WqtV2EUHLexyXEbmxy3sclxG5sct/9Tdz/QAwDA2DWmr1wCAFBf
xCUAAGnEJQAAacQlAABpxCUAAGnGdFx+5zvfiY6Ojrjooovi8ssvj2effbbWK3EC3d3dUalURtyam5trvRZHeeaZZ2LhwoXR2toalUolHn300RHPF0UR3d3d0draGuPHj4/58+fHK6+8UptlGfZRx+2mm2465vybO3dubZZlWE9PT1xxxRXR0NAQU6ZMiUWLFsWrr7464jXOufpzMsftXD7nxmxcPvLII7Fq1aq455574sUXX4zPf/7z0dXVFbt37671apzApz/96dizZ8/w7eWXX671Shzl4MGDMWvWrFi/fv1xn1+3bl3cf//9sX79+ti6dWs0NzfHggULYv/+/Wd4Uz7oo45bRMS111474vx74oknzuCGHE9vb2+sWLEiXnjhhdi8eXMcOXIkOjs74+DBg8Ovcc7Vn5M5bhHn8DlXjFGf/exni+XLl4947JOf/GTx13/91zXaiI9y7733FrNmzar1GoxCRBQ/+MEPhu+/9957RXNzc/H3f//3w4/9z//8T9HY2Fj84z/+Yw025HiOPm5FURTLli0rvvjFL9ZkH07e3r17i4goent7i6Jwzo0VRx+3oji3z7kxeeXy8OHDsW3btujs7BzxeGdnZzz//PM12oqTsXPnzmhtbY2Ojo5YunRpvP7667VeiVHYtWtX9Pf3jzj3qtVqXH311c69MWDLli0xZcqUuOyyy+Lmm2+OvXv31noljjIwMBAREZMmTYoI59xYcfRxe9+5es6Nybh855134ve//300NTWNeLypqSn6+/trtBUfZc6cObFx48Z48skn48EHH4z+/v6YN29e7Nu3r9arcZLeP7+ce2NPV1dXPPzww/HUU0/FfffdF1u3bo0vfOELMTQ0VOvV+H+KoojVq1fH5z73uZgxY0ZEOOfGguMdt4hz+5wbV+sFTkelUhlxvyiKYx6jfnR1dQ3/eebMmXHllVfGJz7xiXjooYdi9erVNdyM0XLujT3XX3/98J9nzJgRs2fPjvb29nj88cdj8eLFNdyM961cuTJ++ctfxnPPPXfMc865+vVhx+1cPufG5JXLyZMnx/nnn3/Mf7Xt3bv3mP+6o35dcsklMXPmzNi5c2etV+Ekvf/T/c69sa+lpSXa29udf3Xi9ttvj8ceeyyefvrpmDp16vDjzrn69mHH7XjOpXNuTMblhRdeGJdffnls3rx5xOObN2+OefPm1WgrRmtoaCh+/etfR0tLS61X4SR1dHREc3PziHPv8OHD0dvb69wbY/bt2xd9fX3OvxoriiJWrlwZ3//+9+Opp56Kjo6OEc875+rTRx234zmXzrkx+2Xx1atXx4033hizZ8+OK6+8Mr773e/G7t27Y/ny5bVejQ9x5513xsKFC2PatGmxd+/e+Lu/+7sYHByMZcuW1Xo1PuDAgQPx2muvDd/ftWtXvPTSSzFp0qSYNm1arFq1KtauXRvTp0+P6dOnx9q1a+Piiy+OG264oYZbc6LjNmnSpOju7o4lS5ZES0tLvPHGG3H33XfH5MmT47rrrqvh1qxYsSI2bdoUP/zhD6OhoWH4CmVjY2OMHz8+KpWKc64OfdRxO3DgwLl9ztXwJ9VP27e//e2ivb29uPDCC4vPfOYzI34FAPXn+uuvL1paWooLLrigaG1tLRYvXly88sortV6Lozz99NNFRBxzW7ZsWVEU//erUe69996iubm5qFarxVVXXVW8/PLLtV2aEx633/3ud0VnZ2dx6aWXFhdccEExbdq0YtmyZcXu3btrvfY573jHLCKKDRs2DL/GOVd/Puq4nevnXKUoiuJMxiwAAGevMfk9lwAA1CdxCQBAGnEJAEAacQkAQBpxCQBAGnEJAEAacQkAQBpxCQBAGnEJAEAacQkAQBpxCQBAmv8FoRXuEEA32x0AAAAASUVORK5CYII=\n",
+ "text/plain": [
+ "