diff --git a/lab-dw-data-cleaning-and-formatting.ipynb b/lab-dw-data-cleaning-and-formatting.ipynb index cdfc3c6..15b75d3 100644 --- a/lab-dw-data-cleaning-and-formatting.ipynb +++ b/lab-dw-data-cleaning-and-formatting.ipynb @@ -1,371 +1,721 @@ { - "cells": [ - { - "cell_type": "markdown", - "id": "25d7736c-ba17-4aff-b6bb-66eba20fbf4e", - "metadata": { - "id": "25d7736c-ba17-4aff-b6bb-66eba20fbf4e" - }, - "source": [ - "# Lab | Data Cleaning and Formatting" - ] - }, - { - "cell_type": "markdown", - "id": "d1973e9e-8be6-4039-b70e-d73ee0d94c99", - "metadata": { - "id": "d1973e9e-8be6-4039-b70e-d73ee0d94c99" - }, - "source": [ - "In this lab, we will be working with the customer data from an insurance company, which can be found in the CSV file located at the following link: https://raw.githubusercontent.com/data-bootcamp-v4/data/main/file1.csv\n" - ] - }, - { - "cell_type": "markdown", - "id": "31b8a9e7-7db9-4604-991b-ef6771603e57", - "metadata": { - "id": "31b8a9e7-7db9-4604-991b-ef6771603e57" - }, - "source": [ - "# Challenge 1: Data Cleaning and Formatting" - ] - }, - { - "cell_type": "markdown", - "id": "81553f19-9f2c-484b-8940-520aff884022", - "metadata": { - "id": "81553f19-9f2c-484b-8940-520aff884022" - }, - "source": [ - "## Exercise 1: Cleaning Column Names" - ] - }, - { - "cell_type": "markdown", - "id": "34a929f4-1be4-4fa8-adda-42ffd920be90", - "metadata": { - "id": "34a929f4-1be4-4fa8-adda-42ffd920be90" - }, - "source": [ - "To ensure consistency and ease of use, standardize the column names of the dataframe. Start by taking a first look at the dataframe and identifying any column names that need to be modified. Use appropriate naming conventions and make sure that column names are descriptive and informative.\n", - "\n", - "*Hint*:\n", - "- *Column names should be in lower case*\n", - "- *White spaces in column names should be replaced by `_`*\n", - "- *`st` could be replaced for `state`*" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "id": "5810735c-8056-4442-bbf2-dda38d3e284a", - "metadata": { - "id": "5810735c-8056-4442-bbf2-dda38d3e284a" - }, - "outputs": [], - "source": [ - "# Your code here" - ] - }, - { - "cell_type": "markdown", - "id": "9cb501ec-36ff-4589-b872-6252bb150316", - "metadata": { - "id": "9cb501ec-36ff-4589-b872-6252bb150316" - }, - "source": [ - "## Exercise 2: Cleaning invalid Values" - ] - }, - { - "cell_type": "markdown", - "id": "771fdcf3-8e20-4b06-9c24-3a93ba2b0909", - "metadata": { - "id": "771fdcf3-8e20-4b06-9c24-3a93ba2b0909" - }, - "source": [ - "The dataset contains columns with inconsistent and incorrect values that could affect the accuracy of our analysis. 
Therefore, we need to clean these columns to ensure that they only contain valid data.\n", - "\n", - "Note that this exercise will focus only on cleaning inconsistent values and will not involve handling null values (NaN or None).\n", - "\n", - "*Hint*:\n", - "- *Gender column contains various inconsistent values such as \"F\", \"M\", \"Femal\", \"Male\", \"female\", which need to be standardized, for example, to \"M\" and \"F\".*\n", - "- *State abbreviations be can replaced with its full name, for example \"AZ\": \"Arizona\", \"Cali\": \"California\", \"WA\": \"Washington\"*\n", - "- *In education, \"Bachelors\" could be replaced by \"Bachelor\"*\n", - "- *In Customer Lifetime Value, delete the `%` character*\n", - "- *In vehicle class, \"Sports Car\", \"Luxury SUV\" and \"Luxury Car\" could be replaced by \"Luxury\"*" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "id": "3f8ee5cb-50ab-48af-8a9f-9a389804033c", - "metadata": { - "id": "3f8ee5cb-50ab-48af-8a9f-9a389804033c" - }, - "outputs": [], - "source": [ - "# Your code here" - ] - }, - { - "cell_type": "markdown", - "id": "85ff78ce-0174-4890-9db3-8048b7d7d2d0", - "metadata": { - "id": "85ff78ce-0174-4890-9db3-8048b7d7d2d0" - }, - "source": [ - "## Exercise 3: Formatting data types" - ] - }, - { - "cell_type": "markdown", - "id": "b91c2cf8-79a2-4baf-9f65-ff2fb22270bd", - "metadata": { - "id": "b91c2cf8-79a2-4baf-9f65-ff2fb22270bd" - }, - "source": [ - "The data types of many columns in the dataset appear to be incorrect. This could impact the accuracy of our analysis. To ensure accurate analysis, we need to correct the data types of these columns. Please update the data types of the columns as appropriate." - ] - }, - { - "cell_type": "markdown", - "id": "43e5d853-ff9e-43b2-9d92-aef2f78764f3", - "metadata": { - "id": "43e5d853-ff9e-43b2-9d92-aef2f78764f3" - }, - "source": [ - "It is important to note that this exercise does not involve handling null values (NaN or None)." - ] - }, - { - "cell_type": "markdown", - "id": "329ca691-9196-4419-8969-3596746237a1", - "metadata": { - "id": "329ca691-9196-4419-8969-3596746237a1" - }, - "source": [ - "*Hint*:\n", - "- *Customer lifetime value should be numeric*\n", - "- *Number of open complaints has an incorrect format. Look at the different values it takes with `unique()` and take the middle value. As an example, 1/5/00 should be 5. Number of open complaints is a string - remember you can use `split()` to deal with it and take the number you need. Finally, since it should be numeric, cast the column to be in its proper type.*" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "id": "eb8f5991-73e9-405f-bf1c-6b7c589379a9", - "metadata": { - "id": "eb8f5991-73e9-405f-bf1c-6b7c589379a9" - }, - "outputs": [], - "source": [ - "# Your code here" - ] - }, - { - "cell_type": "markdown", - "id": "14c52e28-2d0c-4dd2-8bd5-3476e34fadc1", - "metadata": { - "id": "14c52e28-2d0c-4dd2-8bd5-3476e34fadc1" - }, - "source": [ - "## Exercise 4: Dealing with Null values" - ] - }, - { - "cell_type": "markdown", - "id": "34b9a20f-7d32-4417-975e-1b4dfb0e16cd", - "metadata": { - "id": "34b9a20f-7d32-4417-975e-1b4dfb0e16cd" - }, - "source": [ - "Identify any columns with null or missing values. Identify how many null values each column has. You can use the `isnull()` function in pandas to find columns with null values.\n", - "\n", - "Decide on a strategy for handling the null values. 
There are several options, including:\n", - "\n", - "- Drop the rows or columns with null values\n", - "- Fill the null values with a specific value (such as the column mean or median for numerical variables, and mode for categorical variables)\n", - "- Fill the null values with the previous or next value in the column\n", - "- Fill the null values based on a more complex algorithm or model (note: we haven't covered this yet)\n", - "\n", - "Implement your chosen strategy to handle the null values. You can use the `fillna()` function in pandas to fill null values or `dropna()` function to drop null values.\n", - "\n", - "Verify that your strategy has successfully handled the null values. You can use the `isnull()` function again to check if there are still null values in the dataset.\n", - "\n", - "Remember to document your process and explain your reasoning for choosing a particular strategy for handling null values.\n", - "\n", - "After formatting data types, as a last step, convert all the numeric variables to integers." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "id": "f184fc35-7831-4836-a0a5-e7f99e01b40e", - "metadata": { - "id": "f184fc35-7831-4836-a0a5-e7f99e01b40e" - }, - "outputs": [], - "source": [ - "# Your code here" - ] - }, + "cells": [ + { + "cell_type": "markdown", + "id": "25d7736c-ba17-4aff-b6bb-66eba20fbf4e", + "metadata": { + "id": "25d7736c-ba17-4aff-b6bb-66eba20fbf4e" + }, + "source": [ + "# Lab | Data Cleaning and Formatting" + ] + }, + { + "cell_type": "markdown", + "id": "d1973e9e-8be6-4039-b70e-d73ee0d94c99", + "metadata": { + "id": "d1973e9e-8be6-4039-b70e-d73ee0d94c99" + }, + "source": [ + "In this lab, we will be working with the customer data from an insurance company, which can be found in the CSV file located at the following link: https://raw.githubusercontent.com/data-bootcamp-v4/data/main/file1.csv\n" + ] + }, + { + "cell_type": "markdown", + "id": "31b8a9e7-7db9-4604-991b-ef6771603e57", + "metadata": { + "id": "31b8a9e7-7db9-4604-991b-ef6771603e57" + }, + "source": [ + "# Challenge 1: Data Cleaning and Formatting" + ] + }, + { + "cell_type": "markdown", + "id": "81553f19-9f2c-484b-8940-520aff884022", + "metadata": { + "id": "81553f19-9f2c-484b-8940-520aff884022" + }, + "source": [ + "## Exercise 1: Cleaning Column Names" + ] + }, + { + "cell_type": "markdown", + "id": "34a929f4-1be4-4fa8-adda-42ffd920be90", + "metadata": { + "id": "34a929f4-1be4-4fa8-adda-42ffd920be90" + }, + "source": [ + "To ensure consistency and ease of use, standardize the column names of the dataframe. Start by taking a first look at the dataframe and identifying any column names that need to be modified. 
Use appropriate naming conventions and make sure that column names are descriptive and informative.\n", + "\n", + "*Hint*:\n", + "- *Column names should be in lower case*\n", + "- *White spaces in column names should be replaced by `_`*\n", + "- *`st` could be replaced for `state`*" + ] + }, + { + "cell_type": "code", + "execution_count": 7, + "id": "5810735c-8056-4442-bbf2-dda38d3e284a", + "metadata": { + "id": "5810735c-8056-4442-bbf2-dda38d3e284a" + }, + "outputs": [ { - "cell_type": "markdown", - "id": "98416351-e999-4156-9834-9b00a311adfa", - "metadata": { - "id": "98416351-e999-4156-9834-9b00a311adfa" - }, - "source": [ - "## Exercise 5: Dealing with duplicates" - ] - }, + "name": "stdout", + "output_type": "stream", + "text": [ + "+----+------------+------------+----------+----------------------+---------------------------+----------+------------------------+-----------------------------+----------------+-----------------+----------------------+\n", + "| | customer | states | gender | education | customer_lifetime_value | income | monthly_premium_auto | number_of_open_complaints | policy_type | vehicle_class | total_claim_amount |\n", + "+====+============+============+==========+======================+===========================+==========+========================+=============================+================+=================+======================+\n", + "| 0 | RB50392 | Washington | nan | Master | nan | 0 | 1000 | 1/0/00 | Personal Auto | Four-Door Car | 2.70493 |\n", + "+----+------------+------------+----------+----------------------+---------------------------+----------+------------------------+-----------------------------+----------------+-----------------+----------------------+\n", + "| 1 | QZ44356 | Arizona | F | Bachelor | 697953.59% | 0 | 94 | 1/0/00 | Personal Auto | Four-Door Car | 1131.46 |\n", + "+----+------------+------------+----------+----------------------+---------------------------+----------+------------------------+-----------------------------+----------------+-----------------+----------------------+\n", + "| 2 | AI49188 | Nevada | F | Bachelor | 1288743.17% | 48767 | 108 | 1/0/00 | Personal Auto | Two-Door Car | 566.472 |\n", + "+----+------------+------------+----------+----------------------+---------------------------+----------+------------------------+-----------------------------+----------------+-----------------+----------------------+\n", + "| 3 | WW63253 | California | M | Bachelor | 764586.18% | 0 | 106 | 1/0/00 | Corporate Auto | SUV | 529.881 |\n", + "+----+------------+------------+----------+----------------------+---------------------------+----------+------------------------+-----------------------------+----------------+-----------------+----------------------+\n", + "| 4 | GA49547 | Washington | M | High School or Below | 536307.65% | 36357 | 68 | 1/0/00 | Personal Auto | Four-Door Car | 17.2693 |\n", + "+----+------------+------------+----------+----------------------+---------------------------+----------+------------------------+-----------------------------+----------------+-----------------+----------------------+\n" + ] + } + ], + "source": [ + "import pandas as pd\n", + "from tabulate import tabulate\n", + "\n", + "url = \"https://raw.githubusercontent.com/data-bootcamp-v4/data/main/file1.csv\"\n", + "\n", + "Data_Challenge = pd.read_csv(url)\n", + "\n", + "Data_Challenge.columns = Data_Challenge.columns.str.lower()\n", + "\n", + "Data_Challenge = Data_Challenge.rename(columns = {\"st\": \"states\"})\n", + "\n", + 
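"# Added sketch (not in the original): defensively strip stray leading/trailing\n",
+ "# whitespace from the column names before replacing inner spaces\n",
+ "Data_Challenge.columns = Data_Challenge.columns.str.strip()\n",
+ 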
"Data_Challenge.columns = Data_Challenge.columns.str.replace(\" \", \"_\")\n", + "\n", + "print(tabulate(Data_Challenge.head(), headers= \"keys\" , tablefmt = \"grid\"))" + ] + }, + { + "cell_type": "markdown", + "id": "9cb501ec-36ff-4589-b872-6252bb150316", + "metadata": { + "id": "9cb501ec-36ff-4589-b872-6252bb150316" + }, + "source": [ + "## Exercise 2: Cleaning invalid Values" + ] + }, + { + "cell_type": "markdown", + "id": "771fdcf3-8e20-4b06-9c24-3a93ba2b0909", + "metadata": { + "id": "771fdcf3-8e20-4b06-9c24-3a93ba2b0909" + }, + "source": [ + "The dataset contains columns with inconsistent and incorrect values that could affect the accuracy of our analysis. Therefore, we need to clean these columns to ensure that they only contain valid data.\n", + "\n", + "Note that this exercise will focus only on cleaning inconsistent values and will not involve handling null values (NaN or None).\n", + "\n", + "*Hint*:\n", + "- *Gender column contains various inconsistent values such as \"F\", \"M\", \"Femal\", \"Male\", \"female\", which need to be standardized, for example, to \"M\" and \"F\".*\n", + "- *State abbreviations be can replaced with its full name, for example \"AZ\": \"Arizona\", \"Cali\": \"California\", \"WA\": \"Washington\"*\n", + "- *In education, \"Bachelors\" could be replaced by \"Bachelor\"*\n", + "- *In Customer Lifetime Value, delete the `%` character*\n", + "- *In vehicle class, \"Sports Car\", \"Luxury SUV\" and \"Luxury Car\" could be replaced by \"Luxury\"*" + ] + }, + { + "cell_type": "code", + "execution_count": 14, + "id": "3f8ee5cb-50ab-48af-8a9f-9a389804033c", + "metadata": { + "id": "3f8ee5cb-50ab-48af-8a9f-9a389804033c" + }, + "outputs": [ { - "cell_type": "markdown", - "id": "ea0816a7-a18e-4d4c-b667-a8452a800bd1", - "metadata": { - "id": "ea0816a7-a18e-4d4c-b667-a8452a800bd1" - }, - "source": [ - "Use the `.duplicated()` method to identify any duplicate rows in the dataframe.\n", - "\n", - "Decide on a strategy for handling the duplicates. Options include:\n", - "- Dropping all duplicate rows\n", - "- Keeping only the first occurrence of each duplicated row\n", - "- Keeping only the last occurrence of each duplicated row\n", - "- Dropping duplicates based on a subset of columns\n", - "- Dropping duplicates based on a specific column\n", - "\n", - "Implement your chosen strategy using the `drop_duplicates()` function.\n", - "\n", - "Verify that your strategy has successfully handled the duplicates by checking for duplicates again using `.duplicated()`.\n", - "\n", - "Remember to document your process and explain your reasoning for choosing a particular strategy for handling duplicates.\n", - "\n", - "Save the cleaned dataset to a new CSV file.\n", - "\n", - "*Hint*: *after dropping duplicates, reset the index to ensure consistency*." 
- ] - }, + "name": "stdout", + "output_type": "stream", + "text": [ + "+----+------------+------------+----------+----------------------+---------------------------+----------+------------------------+-----------------------------+----------------+-----------------+----------------------+------------------+\n", + "| | customer | states | gender | education | customer_lifetime_value | income | monthly_premium_auto | number_of_open_complaints | policy_type | vehicle_class | total_claim_amount | vehicle_class |\n", + "+====+============+============+==========+======================+===========================+==========+========================+=============================+================+=================+======================+==================+\n", + "| 0 | RB50392 | Washington | nan | Master | nan | 0 | 1000 | 1/0/00 | Personal Auto | Four-Door Car | 2.70493 | Four-Door Car |\n", + "+----+------------+------------+----------+----------------------+---------------------------+----------+------------------------+-----------------------------+----------------+-----------------+----------------------+------------------+\n", + "| 1 | QZ44356 | Arizona | F | Bachelor | 697953.59% | 0 | 94 | 1/0/00 | Personal Auto | Four-Door Car | 1131.46 | Four-Door Car |\n", + "+----+------------+------------+----------+----------------------+---------------------------+----------+------------------------+-----------------------------+----------------+-----------------+----------------------+------------------+\n", + "| 2 | AI49188 | Nevada | F | Bachelor | 1288743.17% | 48767 | 108 | 1/0/00 | Personal Auto | Two-Door Car | 566.472 | Two-Door Car |\n", + "+----+------------+------------+----------+----------------------+---------------------------+----------+------------------------+-----------------------------+----------------+-----------------+----------------------+------------------+\n", + "| 3 | WW63253 | California | M | Bachelor | 764586.18% | 0 | 106 | 1/0/00 | Corporate Auto | SUV | 529.881 | SUV |\n", + "+----+------------+------------+----------+----------------------+---------------------------+----------+------------------------+-----------------------------+----------------+-----------------+----------------------+------------------+\n", + "| 4 | GA49547 | Washington | M | High School or Below | 536307.65% | 36357 | 68 | 1/0/00 | Personal Auto | Four-Door Car | 17.2693 | Four-Door Car |\n", + "+----+------------+------------+----------+----------------------+---------------------------+----------+------------------------+-----------------------------+----------------+-----------------+----------------------+------------------+\n" + ] + } + ], + "source": [ + "# Standardize the Gender column\n", + "Data_Challenge['gender'] = Data_Challenge['gender'].replace({\n", + " 'F': 'F',\n", + " 'Femal': 'F',\n", + " 'Female': 'F',\n", + " 'female': 'F',\n", + " 'M': 'M',\n", + " 'Male': 'M',\n", + " 'male': 'M'\n", + "})\n", + "\n", + "Data_Challenge['states'] = Data_Challenge['states'].replace({\n", + " \n", + " \"AZ\": \"Arizona\", \n", + " \"Cali\": \"California\",\n", + " \"WA\": \"Washington\"\n", + " \n", + " \n", + " }) \n", + "\n", + "Data_Challenge['vehicle_class '] = Data_Challenge['vehicle_class'].replace({\n", + " \n", + " \"Sports Car\": \"Luxury\",\n", + " \"Luxury SUV\": \"Luxury\", \n", + " \"Luxury Car\": \"Luxury\", \n", + " }) \n", + "\n", + "Data_Challenge['education'] = Data_Challenge['education'].replace({\n", + " \n", + "\"Bachelors\": \"Bachelor\"\n", + " \n", + "}) \n", + 
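+ "\n",
+ "# Added sketch (not in the original): a sturdier, case-insensitive alternative for\n",
+ "# the gender cleanup, keeping the first letter of any entry ('female' -> 'F'):\n",
+ "# Data_Challenge['gender'] = Data_Challenge['gender'].str.strip().str.upper().str[0]\n",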
"\n", + " \n", + "print(tabulate(Data_Challenge.head(), headers='keys', tablefmt='grid', colalign=[\"center\"] * len(Data_Challenge.columns)))\n" + ] + }, + { + "cell_type": "markdown", + "id": "85ff78ce-0174-4890-9db3-8048b7d7d2d0", + "metadata": { + "id": "85ff78ce-0174-4890-9db3-8048b7d7d2d0" + }, + "source": [ + "## Exercise 3: Formatting data types" + ] + }, + { + "cell_type": "markdown", + "id": "b91c2cf8-79a2-4baf-9f65-ff2fb22270bd", + "metadata": { + "id": "b91c2cf8-79a2-4baf-9f65-ff2fb22270bd" + }, + "source": [ + "The data types of many columns in the dataset appear to be incorrect. This could impact the accuracy of our analysis. To ensure accurate analysis, we need to correct the data types of these columns. Please update the data types of the columns as appropriate." + ] + }, + { + "cell_type": "markdown", + "id": "43e5d853-ff9e-43b2-9d92-aef2f78764f3", + "metadata": { + "id": "43e5d853-ff9e-43b2-9d92-aef2f78764f3" + }, + "source": [ + "It is important to note that this exercise does not involve handling null values (NaN or None)." + ] + }, + { + "cell_type": "markdown", + "id": "329ca691-9196-4419-8969-3596746237a1", + "metadata": { + "id": "329ca691-9196-4419-8969-3596746237a1" + }, + "source": [ + "*Hint*:\n", + "- *Customer lifetime value should be numeric*\n", + "- *Number of open complaints has an incorrect format. Look at the different values it takes with `unique()` and take the middle value. As an example, 1/5/00 should be 5. Number of open complaints is a string - remember you can use `split()` to deal with it and take the number you need. Finally, since it should be numeric, cast the column to be in its proper type.*" + ] + }, + { + "cell_type": "code", + "execution_count": 16, + "id": "eb8f5991-73e9-405f-bf1c-6b7c589379a9", + "metadata": { + "id": "eb8f5991-73e9-405f-bf1c-6b7c589379a9" + }, + "outputs": [ { - "cell_type": "code", - "execution_count": null, - "id": "1929362c-47ed-47cb-baca-358b78d401a0", - "metadata": { - "id": "1929362c-47ed-47cb-baca-358b78d401a0" - }, - "outputs": [], - "source": [ - "# Your code here" - ] - }, + "name": "stdout", + "output_type": "stream", + "text": [ + "+----+------------+------------+----------+----------------------+---------------------------+----------+------------------------+-----------------------------+----------------+-----------------+----------------------+------------------+------------------------------+\n", + "| | customer | states | gender | education | customer_lifetime_value | income | monthly_premium_auto | number_of_open_complaints | policy_type | vehicle_class | total_claim_amount | vehicle_class | number_of_open_complaints |\n", + "+====+============+============+==========+======================+===========================+==========+========================+=============================+================+=================+======================+==================+==============================+\n", + "| 0 | RB50392 | Washington | nan | Master | nan | 0 | 1000 | 0 | Personal Auto | Four-Door Car | 2.70493 | Four-Door Car | 0 |\n", + "+----+------------+------------+----------+----------------------+---------------------------+----------+------------------------+-----------------------------+----------------+-----------------+----------------------+------------------+------------------------------+\n", + "| 1 | QZ44356 | Arizona | F | Bachelor | nan | 0 | 94 | 0 | Personal Auto | Four-Door Car | 1131.46 | Four-Door Car | 0 |\n", + 
"+----+------------+------------+----------+----------------------+---------------------------+----------+------------------------+-----------------------------+----------------+-----------------+----------------------+------------------+------------------------------+\n", + "| 2 | AI49188 | Nevada | F | Bachelor | nan | 48767 | 108 | 0 | Personal Auto | Two-Door Car | 566.472 | Two-Door Car | 0 |\n", + "+----+------------+------------+----------+----------------------+---------------------------+----------+------------------------+-----------------------------+----------------+-----------------+----------------------+------------------+------------------------------+\n", + "| 3 | WW63253 | California | M | Bachelor | nan | 0 | 106 | 0 | Corporate Auto | SUV | 529.881 | SUV | 0 |\n", + "+----+------------+------------+----------+----------------------+---------------------------+----------+------------------------+-----------------------------+----------------+-----------------+----------------------+------------------+------------------------------+\n", + "| 4 | GA49547 | Washington | M | High School or Below | nan | 36357 | 68 | 0 | Personal Auto | Four-Door Car | 17.2693 | Four-Door Car | 0 |\n", + "+----+------------+------------+----------+----------------------+---------------------------+----------+------------------------+-----------------------------+----------------+-----------------+----------------------+------------------+------------------------------+\n" + ] + } + ], + "source": [ + "# Remove the '%' character and convert to numeric\n", + "Data_Challenge['customer_lifetime_value'] = pd.to_numeric(\n", + " Data_Challenge['customer_lifetime_value'].replace('%', '', regex=False),\n", + " errors='coerce'\n", + ")\n", + "\n", + "# Extract the middle number from the string and convert the column to numeric\n", + "Data_Challenge['number_of_open_complaints'] = Data_Challenge['number_of_open_complaints'].apply(\n", + " lambda x: x.split('/')[1] if isinstance(x, str) and '/' in x else x\n", + ")\n", + "\n", + "# Convert the column to numeric\n", + "Data_Challenge['number_of_open_complaints '] = pd.to_numeric(Data_Challenge['number_of_open_complaints'], errors='coerce')\n", + "\n", + "print(tabulate(Data_Challenge.head(), headers='keys', tablefmt='grid', colalign=[\"center\"] * len(Data_Challenge.columns)))" + ] + }, + { + "cell_type": "markdown", + "id": "14c52e28-2d0c-4dd2-8bd5-3476e34fadc1", + "metadata": { + "id": "14c52e28-2d0c-4dd2-8bd5-3476e34fadc1" + }, + "source": [ + "## Exercise 4: Dealing with Null values" + ] + }, + { + "cell_type": "markdown", + "id": "34b9a20f-7d32-4417-975e-1b4dfb0e16cd", + "metadata": { + "id": "34b9a20f-7d32-4417-975e-1b4dfb0e16cd" + }, + "source": [ + "Identify any columns with null or missing values. Identify how many null values each column has. You can use the `isnull()` function in pandas to find columns with null values.\n", + "\n", + "Decide on a strategy for handling the null values. There are several options, including:\n", + "\n", + "- Drop the rows or columns with null values\n", + "- Fill the null values with a specific value (such as the column mean or median for numerical variables, and mode for categorical variables)\n", + "- Fill the null values with the previous or next value in the column\n", + "- Fill the null values based on a more complex algorithm or model (note: we haven't covered this yet)\n", + "\n", + "Implement your chosen strategy to handle the null values. 
+ {
+ "cell_type": "markdown",
+ "id": "14c52e28-2d0c-4dd2-8bd5-3476e34fadc1",
+ "metadata": {
+ "id": "14c52e28-2d0c-4dd2-8bd5-3476e34fadc1"
+ },
+ "source": [
+ "## Exercise 4: Dealing with Null values"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "34b9a20f-7d32-4417-975e-1b4dfb0e16cd",
+ "metadata": {
+ "id": "34b9a20f-7d32-4417-975e-1b4dfb0e16cd"
+ },
+ "source": [
+ "Identify any columns with null or missing values. Identify how many null values each column has. You can use the `isnull()` function in pandas to find columns with null values.\n",
+ "\n",
+ "Decide on a strategy for handling the null values. There are several options, including:\n",
+ "\n",
+ "- Drop the rows or columns with null values\n",
+ "- Fill the null values with a specific value (such as the column mean or median for numerical variables, and mode for categorical variables)\n",
+ "- Fill the null values with the previous or next value in the column\n",
+ "- Fill the null values based on a more complex algorithm or model (note: we haven't covered this yet)\n",
+ "\n",
+ "Implement your chosen strategy to handle the null values. You can use the `fillna()` function in pandas to fill null values or `dropna()` function to drop null values.\n",
+ "\n",
+ "Verify that your strategy has successfully handled the null values. You can use the `isnull()` function again to check if there are still null values in the dataset.\n",
+ "\n",
+ "Remember to document your process and explain your reasoning for choosing a particular strategy for handling null values.\n",
+ "\n",
+ "After formatting data types, as a last step, convert all the numeric variables to integers."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "f184fc35-7831-4836-a0a5-e7f99e01b40e",
+ "metadata": {
+ "id": "f184fc35-7831-4836-a0a5-e7f99e01b40e"
+ },
+ "outputs": [],
{
- "cell_type": "markdown",
- "id": "60840701-4783-40e2-b4d8-55303f9100c9",
- "metadata": {
- "id": "60840701-4783-40e2-b4d8-55303f9100c9"
- },
- "source": [
- "# Bonus: Challenge 2: creating functions on a separate `py` file"
- ]
},
{
- "cell_type": "markdown",
- "id": "9d1adb3a-17cf-4899-8041-da21a4337fb4",
- "metadata": {
- "id": "9d1adb3a-17cf-4899-8041-da21a4337fb4"
- },
- "source": [
- "Put all the data cleaning and formatting steps into functions, and create a main function that performs all the cleaning and formatting.\n",
- "\n",
- "Write these functions in separate .py file(s). By putting these steps into functions, we can make the code more modular and easier to maintain."
- ]
- },
+ "source": [
+ "# Step 1: Identify columns with null values and count them\n",
+ "null_counts = Data_Challenge.isnull().sum()\n",
+ "print(\"Null values in each column:\")\n",
+ "print(null_counts)\n",
+ "\n",
+ "# Step 2: Chosen strategy - fill numerical columns with the median\n",
+ "# (robust to outliers) and categorical columns with the mode\n",
+ "numerical_columns = Data_Challenge.select_dtypes(include=['number']).columns\n",
+ "Data_Challenge[numerical_columns] = Data_Challenge[numerical_columns].fillna(Data_Challenge[numerical_columns].median())\n",
+ "\n",
+ "categorical_columns = Data_Challenge.select_dtypes(include=['object']).columns\n",
+ "Data_Challenge[categorical_columns] = Data_Challenge[categorical_columns].fillna(Data_Challenge[categorical_columns].mode().iloc[0])\n",
+ "\n",
+ "# Step 3: Verify that the null values have been handled\n",
+ "null_counts_after = Data_Challenge.isnull().sum()\n",
+ "print(\"\\nNull values after filling:\")\n",
+ "print(null_counts_after)\n",
+ "\n",
+ "# Step 4: Convert all numeric columns to integers\n",
+ "Data_Challenge[numerical_columns] = Data_Challenge[numerical_columns].astype(int)\n",
+ "\n",
+ "print(\"\\nData types after conversion:\")\n",
+ "print(Data_Challenge.dtypes)\n",
+ "\n",
+ "print(\"\\nCleaned Data (first few rows):\")\n",
+ "print(tabulate(Data_Challenge.head(), headers='keys', tablefmt='grid', colalign=[\"center\"] * len(Data_Challenge.columns)))"
+ ]
+ },
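+ {
+ "cell_type": "markdown",
+ "id": "added-null-check-md",
+ "metadata": {},
+ "source": [
+ "*Added sketch (not part of the original lab): an explicit assertion that the fill strategy left no nulls behind, assuming the cell above has just run.*"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "added-null-check-code",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Sketch: fail loudly if any nulls survived the fillna() pass above\n",
+ "assert Data_Challenge.isnull().sum().sum() == 0, \"there are still nulls in the dataset\"\n",
+ "print(\"No null values remain.\")"
+ ]
+ },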
Options include:\n", + "- Dropping all duplicate rows\n", + "- Keeping only the first occurrence of each duplicated row\n", + "- Keeping only the last occurrence of each duplicated row\n", + "- Dropping duplicates based on a subset of columns\n", + "- Dropping duplicates based on a specific column\n", + "\n", + "Implement your chosen strategy using the `drop_duplicates()` function.\n", + "\n", + "Verify that your strategy has successfully handled the duplicates by checking for duplicates again using `.duplicated()`.\n", + "\n", + "Remember to document your process and explain your reasoning for choosing a particular strategy for handling duplicates.\n", + "\n", + "Save the cleaned dataset to a new CSV file.\n", + "\n", + "*Hint*: *after dropping duplicates, reset the index to ensure consistency*." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "1929362c-47ed-47cb-baca-358b78d401a0", + "metadata": { + "id": "1929362c-47ed-47cb-baca-358b78d401a0" + }, + "outputs": [ { - "cell_type": "markdown", - "id": "0e170dc2-b62c-417a-8248-e63ed18a70c4", - "metadata": { - "id": "0e170dc2-b62c-417a-8248-e63ed18a70c4" - }, - "source": [ - "*Hint: autoreload module is a utility module in Python that allows you to automatically reload modules in the current session when changes are made to the source code. This can be useful in situations where you are actively developing code and want to see the effects of changes you make without having to constantly restart the Python interpreter or Jupyter Notebook kernel.*" - ] + "name": "stdout", + "output_type": "stream", + "text": [ + "Columns in Data_Challenge: ['customer', 'states', 'gender', 'education', 'customer_lifetime_value', 'income', 'monthly_premium_auto', 'number_of_open_complaints', 'policy_type', 'vehicle_class', 'total_claim_amount', 'vehicle_class ', 'number_of_open_complaints ']\n", + "Number of columns: 13\n" + ] }, { - "cell_type": "code", - "execution_count": null, - "id": "a52c6dfc-cd11-4d01-bda4-f719fa33e9a4", - "metadata": { - "id": "a52c6dfc-cd11-4d01-bda4-f719fa33e9a4" - }, - "outputs": [], - "source": [ - "# Your code here" - ] - }, + "ename": "IndexError", + "evalue": "list assignment index out of range", + "output_type": "error", + "traceback": [ + "\u001b[1;31m---------------------------------------------------------------------------\u001b[0m", + "\u001b[1;31mIndexError\u001b[0m Traceback (most recent call last)", + "Cell \u001b[1;32mIn[65], line 22\u001b[0m\n\u001b[0;32m 19\u001b[0m colalign \u001b[38;5;241m=\u001b[39m [\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mcenter\u001b[39m\u001b[38;5;124m\"\u001b[39m] \u001b[38;5;241m*\u001b[39m \u001b[38;5;28mlen\u001b[39m(Data_Challenge\u001b[38;5;241m.\u001b[39mcolumns)\n\u001b[0;32m 21\u001b[0m \u001b[38;5;66;03m# Display the cleaned data (first few rows)\u001b[39;00m\n\u001b[1;32m---> 22\u001b[0m \u001b[38;5;28mprint\u001b[39m(\u001b[43mtabulate\u001b[49m\u001b[43m(\u001b[49m\u001b[43mData_Challenge\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mhead\u001b[49m\u001b[43m(\u001b[49m\u001b[43m)\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mheaders\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[38;5;124;43m'\u001b[39;49m\u001b[38;5;124;43mkeys\u001b[39;49m\u001b[38;5;124;43m'\u001b[39;49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mtablefmt\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[38;5;124;43m'\u001b[39;49m\u001b[38;5;124;43mgrid\u001b[39;49m\u001b[38;5;124;43m'\u001b[39;49m\u001b[43m,\u001b[49m\u001b[43m 
\u001b[49m\u001b[43mcolalign\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mcolalign\u001b[49m\u001b[43m)\u001b[49m)\n", + "File \u001b[1;32mc:\\Users\\cleid\\AppData\\Local\\Programs\\Python\\Python313\\Lib\\site-packages\\tabulate\\__init__.py:2165\u001b[0m, in \u001b[0;36mtabulate\u001b[1;34m(tabular_data, headers, tablefmt, floatfmt, intfmt, numalign, stralign, missingval, showindex, disable_numparse, colalign, maxcolwidths, rowalign, maxheadercolwidths)\u001b[0m\n\u001b[0;32m 2163\u001b[0m \u001b[38;5;28;01massert\u001b[39;00m \u001b[38;5;28misinstance\u001b[39m(colalign, Iterable)\n\u001b[0;32m 2164\u001b[0m \u001b[38;5;28;01mfor\u001b[39;00m idx, align \u001b[38;5;129;01min\u001b[39;00m \u001b[38;5;28menumerate\u001b[39m(colalign):\n\u001b[1;32m-> 2165\u001b[0m \u001b[43maligns\u001b[49m\u001b[43m[\u001b[49m\u001b[43midx\u001b[49m\u001b[43m]\u001b[49m \u001b[38;5;241m=\u001b[39m align\n\u001b[0;32m 2166\u001b[0m minwidths \u001b[38;5;241m=\u001b[39m (\n\u001b[0;32m 2167\u001b[0m [width_fn(h) \u001b[38;5;241m+\u001b[39m min_padding \u001b[38;5;28;01mfor\u001b[39;00m h \u001b[38;5;129;01min\u001b[39;00m headers] \u001b[38;5;28;01mif\u001b[39;00m headers \u001b[38;5;28;01melse\u001b[39;00m [\u001b[38;5;241m0\u001b[39m] \u001b[38;5;241m*\u001b[39m \u001b[38;5;28mlen\u001b[39m(cols)\n\u001b[0;32m 2168\u001b[0m )\n\u001b[0;32m 2169\u001b[0m cols \u001b[38;5;241m=\u001b[39m [\n\u001b[0;32m 2170\u001b[0m _align_column(c, a, minw, has_invisible, enable_widechars, is_multiline)\n\u001b[0;32m 2171\u001b[0m \u001b[38;5;28;01mfor\u001b[39;00m c, a, minw \u001b[38;5;129;01min\u001b[39;00m \u001b[38;5;28mzip\u001b[39m(cols, aligns, minwidths)\n\u001b[0;32m 2172\u001b[0m ]\n", + "\u001b[1;31mIndexError\u001b[0m: list assignment index out of range" + ] + } + ], + "source": [ + "# Remove duplicate columns (if any)\n", + "Data_Challenge = Data_Challenge.loc[:, ~Data_Challenge.columns.duplicated()]\n", + "\n", + "# Fill missing values in all columns (use the most frequent value, i.e., mode, for non-numerical data)\n", + "Data_Challenge = Data_Challenge.apply(lambda col: col.fillna(col.mode().iloc[0]) if col.dtype == 'O' else col)\n", + "\n", + "# Replace infinite values with NaN (if they exist)\n", + "Data_Challenge = Data_Challenge.replace([float('inf'), float('-inf')], pd.NA)\n", + "\n", + "# Drop rows with any remaining NaN values\n", + "Data_Challenge = Data_Challenge.dropna()\n", + "\n", + "# Check the DataFrame structure and columns\n", + "print(f\"Columns in Data_Challenge: {Data_Challenge.columns.tolist()}\")\n", + "print(f\"Number of columns: {len(Data_Challenge.columns)}\")\n", + "\n", + "# Adjust column alignment based on the number of columns in Data_Challenge\n", + "# Ensure the number of colalign elements matches the number of columns\n", + "colalign = [\"center\"] * len(Data_Challenge.columns)\n", + "\n", + "# Display the cleaned data (first few rows)\n", + "print(tabulate(Data_Challenge.head(), headers='keys', tablefmt='grid', colalign=colalign))\n", + "\n", + "\n" + ] + }, + { + "cell_type": "markdown", + "id": "60840701-4783-40e2-b4d8-55303f9100c9", + "metadata": { + "id": "60840701-4783-40e2-b4d8-55303f9100c9" + }, + "source": [ + "# Bonus: Challenge 2: creating functions on a separate `py` file" + ] + }, + { + "cell_type": "markdown", + "id": "9d1adb3a-17cf-4899-8041-da21a4337fb4", + "metadata": { + "id": "9d1adb3a-17cf-4899-8041-da21a4337fb4" + }, + "source": [ + "Put all the data cleaning and formatting steps into functions, and create a main function that performs all the 
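+ {
+ "cell_type": "markdown",
+ "id": "added-dedup-subset-md",
+ "metadata": {},
+ "source": [
+ "*Added sketch (not part of the original lab): one of the listed alternatives - de-duplicating on a subset of columns - assuming one row per customer id is wanted.*"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "added-dedup-subset-code",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Sketch: keep the first row seen for each customer instead of exact-row matching\n",
+ "one_per_customer = Data_Challenge.drop_duplicates(subset=['customer'], keep='first').reset_index(drop=True)\n",
+ "print(f\"Rows before: {len(Data_Challenge)}, rows after: {len(one_per_customer)}\")"
+ ]
+ },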
cleaning and formatting.\n", + "\n", + "Write these functions in separate .py file(s). By putting these steps into functions, we can make the code more modular and easier to maintain." + ] + }, + { + "cell_type": "markdown", + "id": "0e170dc2-b62c-417a-8248-e63ed18a70c4", + "metadata": { + "id": "0e170dc2-b62c-417a-8248-e63ed18a70c4" + }, + "source": [ + "*Hint: autoreload module is a utility module in Python that allows you to automatically reload modules in the current session when changes are made to the source code. This can be useful in situations where you are actively developing code and want to see the effects of changes you make without having to constantly restart the Python interpreter or Jupyter Notebook kernel.*" + ] + }, + { + "cell_type": "code", + "execution_count": 82, + "id": "a52c6dfc-cd11-4d01-bda4-f719fa33e9a4", + "metadata": { + "id": "a52c6dfc-cd11-4d01-bda4-f719fa33e9a4" + }, + "outputs": [ { - "cell_type": "markdown", - "id": "80f846bb-3f5e-4ca2-96c0-900728daca5a", - "metadata": { - "tags": [], - "id": "80f846bb-3f5e-4ca2-96c0-900728daca5a" - }, - "source": [ - "# Bonus: Challenge 3: Analyzing Clean and Formated Data" - ] - }, + "ename": "SyntaxError", + "evalue": "invalid syntax (786702132.py, line 12)", + "output_type": "error", + "traceback": [ + "\u001b[1;36m Cell \u001b[1;32mIn[82], line 12\u001b[1;36m\u001b[0m\n\u001b[1;33m Data_Challenge fill_missing_values(Data_Challenge):\u001b[0m\n\u001b[1;37m ^\u001b[0m\n\u001b[1;31mSyntaxError\u001b[0m\u001b[1;31m:\u001b[0m invalid syntax\n" + ] + } + ], + "source": [ + "# clean_data.py\n", + "\n", + "import pandas as pd\n", + "\n", + "Data_Challenge = identify_null_values(Data_Challenge)\n", + " \n", + " #Identify columns with null values and count the number of null values in each column.\n", + " \n", + "null_counts = Data_Challenge.isnull().sum()\n", + "return null_counts\n", + "\n", + "Data_Challenge fill_missing_values(Data_Challenge):\n", + " \n", + " #Fill missing values for numerical columns with median and categorical columns with mode.\n", + " \n", + " # Fill missing values for numerical columns with the median\n", + "numerical_columns = Data_Challenge.select_dtypes(include=['number']).columns\n", + "Data_Challenge[numerical_columns] = Data_Challenge[numerical_columns].fillna(Data_Challenge[numerical_columns].median())\n", + " \n", + " # Fill missing values for categorical columns with the mode\n", + "categorical_columns = Data_Challenge.select_dtypes(include=['object']).columns\n", + "Data_Challenge[categorical_columns] = Data_Challenge[categorical_columns].fillna(Data_Challenge[categorical_columns].mode().iloc[0])\n", + "\n", + "return Data_Challenge\n", + "\n", + "Data_Challenge convert_data_types(Data_Challenge):\n", + " \n", + " #Convert numerical columns to integers.\n", + " \n", + "numerical_columns = Data_Challenge.select_dtypes(include=['number']).columns\n", + "Data_Challenge[numerical_columns] = Data_Challenge[numerical_columns].astype(int)\n", + "return Data_Challenge\n", + "\n", + "Data_Challenge clean_data(Data_Challenge):\n", + " \n", + " #Perform all data cleaning steps.\n", + " \n", + "print(\"Identifying null values...\")\n", + "null_counts = identify_null_values(Data_Challenge)\n", + "print(\"Null values in each column:\\n\", null_counts)\n", + " \n", + "print(\"\\nFilling missing values...\")\n", + "Data_Challenge = fill_missing_values(Data_Challenge)\n", + " \n", + "print(\"\\nConverting data types...\")\n", + "Data_Challenge = convert_data_types(Data_Challenge)\n", + " \n", + 
"print(\"\\nData cleaning completed!\")\n", + "return Data_Challenge\n" + ] + }, + { + "cell_type": "markdown", + "id": "80f846bb-3f5e-4ca2-96c0-900728daca5a", + "metadata": { + "id": "80f846bb-3f5e-4ca2-96c0-900728daca5a", + "tags": [] + }, + "source": [ + "# Bonus: Challenge 3: Analyzing Clean and Formated Data" + ] + }, + { + "cell_type": "markdown", + "id": "9021630e-cc90-446c-b5bd-264d6c864207", + "metadata": { + "id": "9021630e-cc90-446c-b5bd-264d6c864207" + }, + "source": [ + "You have been tasked with analyzing the data to identify potential areas for improving customer retention and profitability. Your goal is to identify customers with a high policy claim amount and a low customer lifetime value.\n", + "\n", + "In the Pandas Lab, we only looked at high policy claim amounts because we couldn't look into low customer lifetime values. If we had tried to work with that column, we wouldn't have been able to because customer lifetime value wasn't clean and in its proper format. So after cleaning and formatting the data, let's get some more interesting insights!\n", + "\n", + "Instructions:\n", + "\n", + "- Review the statistics again for total claim amount and customer lifetime value to gain an understanding of the data.\n", + "- To identify potential areas for improving customer retention and profitability, we want to focus on customers with a high policy claim amount and a low customer lifetime value. Consider customers with a high policy claim amount to be those in the top 25% of the total claim amount, and clients with a low customer lifetime value to be those in the bottom 25% of the customer lifetime value. Create a pandas DataFrame object that contains information about customers with a policy claim amount greater than the 75th percentile and a customer lifetime value in the bottom 25th percentile.\n", + "- Use DataFrame methods to calculate summary statistics about the high policy claim amount and low customer lifetime value data. To do so, select both columns of the dataframe simultaneously and pass it to the `.describe()` method. This will give you descriptive statistics, such as mean, median, standard deviation, minimum and maximum values for both columns at the same time, allowing you to compare and analyze their characteristics." + ] + }, + { + "cell_type": "code", + "execution_count": 85, + "id": "211e82b5-461a-4d6f-8a23-4deccb84173c", + "metadata": { + "id": "211e82b5-461a-4d6f-8a23-4deccb84173c" + }, + "outputs": [ { - "cell_type": "markdown", - "id": "9021630e-cc90-446c-b5bd-264d6c864207", - "metadata": { - "id": "9021630e-cc90-446c-b5bd-264d6c864207" - }, - "source": [ - "You have been tasked with analyzing the data to identify potential areas for improving customer retention and profitability. Your goal is to identify customers with a high policy claim amount and a low customer lifetime value.\n", - "\n", - "In the Pandas Lab, we only looked at high policy claim amounts because we couldn't look into low customer lifetime values. If we had tried to work with that column, we wouldn't have been able to because customer lifetime value wasn't clean and in its proper format. 
+ {
+ "cell_type": "markdown",
+ "id": "80f846bb-3f5e-4ca2-96c0-900728daca5a",
+ "metadata": {
+ "id": "80f846bb-3f5e-4ca2-96c0-900728daca5a",
+ "tags": []
+ },
+ "source": [
+ "# Bonus: Challenge 3: Analyzing Clean and Formatted Data"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "9021630e-cc90-446c-b5bd-264d6c864207",
+ "metadata": {
+ "id": "9021630e-cc90-446c-b5bd-264d6c864207"
+ },
+ "source": [
+ "You have been tasked with analyzing the data to identify potential areas for improving customer retention and profitability. Your goal is to identify customers with a high policy claim amount and a low customer lifetime value.\n",
+ "\n",
+ "In the Pandas Lab, we only looked at high policy claim amounts because we couldn't look into low customer lifetime values. If we had tried to work with that column, we wouldn't have been able to because customer lifetime value wasn't clean and in its proper format. So after cleaning and formatting the data, let's get some more interesting insights!\n",
+ "\n",
+ "Instructions:\n",
+ "\n",
+ "- Review the statistics again for total claim amount and customer lifetime value to gain an understanding of the data.\n",
+ "- To identify potential areas for improving customer retention and profitability, we want to focus on customers with a high policy claim amount and a low customer lifetime value. Consider customers with a high policy claim amount to be those in the top 25% of the total claim amount, and clients with a low customer lifetime value to be those in the bottom 25% of the customer lifetime value. Create a pandas DataFrame object that contains information about customers with a policy claim amount greater than the 75th percentile and a customer lifetime value in the bottom 25th percentile.\n",
+ "- Use DataFrame methods to calculate summary statistics about the high policy claim amount and low customer lifetime value data. To do so, select both columns of the dataframe simultaneously and pass it to the `.describe()` method. This will give you descriptive statistics, such as mean, median, standard deviation, minimum and maximum values for both columns at the same time, allowing you to compare and analyze their characteristics."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "211e82b5-461a-4d6f-8a23-4deccb84173c",
+ "metadata": {
+ "id": "211e82b5-461a-4d6f-8a23-4deccb84173c"
+ },
+ "outputs": [],
{
- "cell_type": "markdown",
- "id": "80f846bb-3f5e-4ca2-96c0-900728daca5a",
- "metadata": {
- "tags": [],
- "id": "80f846bb-3f5e-4ca2-96c0-900728daca5a"
- },
- "source": [
- "# Bonus: Challenge 3: Analyzing Clean and Formated Data"
- ]
- },
{
- "cell_type": "markdown",
- "id": "9021630e-cc90-446c-b5bd-264d6c864207",
- "metadata": {
- "id": "9021630e-cc90-446c-b5bd-264d6c864207"
- },
- "source": [
- "You have been tasked with analyzing the data to identify potential areas for improving customer retention and profitability. Your goal is to identify customers with a high policy claim amount and a low customer lifetime value.\n",
- "\n",
- "In the Pandas Lab, we only looked at high policy claim amounts because we couldn't look into low customer lifetime values. If we had tried to work with that column, we wouldn't have been able to because customer lifetime value wasn't clean and in its proper format. So after cleaning and formatting the data, let's get some more interesting insights!\n",
- "\n",
- "Instructions:\n",
- "\n",
- "- Review the statistics again for total claim amount and customer lifetime value to gain an understanding of the data.\n",
- "- To identify potential areas for improving customer retention and profitability, we want to focus on customers with a high policy claim amount and a low customer lifetime value. Consider customers with a high policy claim amount to be those in the top 25% of the total claim amount, and clients with a low customer lifetime value to be those in the bottom 25% of the customer lifetime value. Create a pandas DataFrame object that contains information about customers with a policy claim amount greater than the 75th percentile and a customer lifetime value in the bottom 25th percentile.\n",
- "- Use DataFrame methods to calculate summary statistics about the high policy claim amount and low customer lifetime value data. To do so, select both columns of the dataframe simultaneously and pass it to the `.describe()` method. This will give you descriptive statistics, such as mean, median, standard deviation, minimum and maximum values for both columns at the same time, allowing you to compare and analyze their characteristics."
- ]
- },
{
- "cell_type": "code",
- "execution_count": null,
- "id": "211e82b5-461a-4d6f-8a23-4deccb84173c",
- "metadata": {
- "id": "211e82b5-461a-4d6f-8a23-4deccb84173c"
- },
- "outputs": [],
- "source": [
- "# Your code here"
- ]
- }
- ],
- "metadata": {
- "kernelspec": {
- "display_name": "Python 3 (ipykernel)",
- "language": "python",
- "name": "python3"
- },
- "language_info": {
- "codemirror_mode": {
- "name": "ipython",
- "version": 3
- },
- "file_extension": ".py",
- "mimetype": "text/x-python",
- "name": "python",
- "nbconvert_exporter": "python",
- "pygments_lexer": "ipython3",
- "version": "3.9.13"
- },
- "colab": {
- "provenance": []
+ "source": [
+ "# Step 1: Review summary statistics for both columns\n",
+ "# (use the notebook's own cleaned column names; the CamelCase names used\n",
+ "#  previously do not exist in this dataframe and raised a KeyError)\n",
+ "policy_claim_column = 'total_claim_amount'\n",
+ "customer_lifetime_value_column = 'customer_lifetime_value'\n",
+ "\n",
+ "print(\"Summary Statistics for Policy Claim Amount:\")\n",
+ "print(Data_Challenge[policy_claim_column].describe())\n",
+ "\n",
+ "print(\"\\nSummary Statistics for Customer Lifetime Value:\")\n",
+ "print(Data_Challenge[customer_lifetime_value_column].describe())\n",
+ "\n",
+ "# Step 2: Calculate the 75th percentile of the total claim amount\n",
+ "high_claim_threshold = Data_Challenge[policy_claim_column].quantile(0.75)\n",
+ "\n",
+ "# Step 3: Calculate the 25th percentile of customer lifetime value\n",
+ "low_clv_threshold = Data_Challenge[customer_lifetime_value_column].quantile(0.25)\n",
+ "\n",
+ "# Step 4: Filter to customers with a high claim amount and a low lifetime value\n",
+ "filtered_df = Data_Challenge[(Data_Challenge[policy_claim_column] > high_claim_threshold) &\n",
+ "                             (Data_Challenge[customer_lifetime_value_column] < low_clv_threshold)]\n",
+ "\n",
+ "# Step 5: Summary statistics for the filtered DataFrame\n",
+ "print(\"\\nSummary Statistics for High Claim Amount and Low Customer Lifetime Value:\")\n",
+ "print(filtered_df[[policy_claim_column, customer_lifetime_value_column]].describe())\n"
+ ]
+ }
+ ],
+ "metadata": {
+ "colab": {
+ "provenance": []
+ },
+ "kernelspec": {
+ "display_name": "Python 3",
+ "language": "python",
+ "name": "python3"
},
- "nbformat": 4,
- "nbformat_minor": 5
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.13.0"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 5
}