Commit

Merge pull request #574 from UBC-DSCI/bug-hunt

Bug Hunt

trevorcampbell authored Dec 23, 2023
2 parents 500904a + c7697f0 commit db6e782
Showing 43 changed files with 37,177 additions and 12,426 deletions.
2,198 changes: 1,121 additions & 1,077 deletions img/classification2/ML-paradigm-test.ai
Binary file modified img/classification2/ML-paradigm-test.png
6,044 changes: 6,044 additions & 0 deletions img/frontmatter/chapter_overview.ai
3,270 changes: 3,270 additions & 0 deletions img/inference/population_vs_sample.ai
1,789 changes: 1,789 additions & 0 deletions img/intro/intro-all.ai
5,901 changes: 5,901 additions & 0 deletions img/reading/filesystem.ai
Binary file modified img/version-control/generate-pat_02.png
Binary file modified img/version-control/vc-ba2-add.png
Binary file modified img/version-control/vc-ba3-commit.png
Binary file modified img/version-control/vc1-no-changes.png
Binary file modified img/version-control/vc2-changes.png
Binary file modified img/version-control/vc5-push.png
Binary file modified img/version-control/vc6-remote-changes.png
Binary file modified img/version-control/vc7-pull.png
5,564 changes: 5,564 additions & 0 deletions img/version-control/version-control-all.ai
Binary file removed img/viz/ggplot_function_scatter.jpeg
1,771 changes: 923 additions & 848 deletions img/wrangling/data_frame_slides_cdn.004.ai
1,627 changes: 807 additions & 820 deletions img/wrangling/data_frame_slides_cdn.005.ai
1,407 changes: 698 additions & 709 deletions img/wrangling/data_frame_slides_cdn.007.ai
1,421 changes: 715 additions & 706 deletions img/wrangling/data_frame_slides_cdn.008.ai
1,728 changes: 900 additions & 828 deletions img/wrangling/data_frame_slides_cdn.009.ai
3,135 changes: 1,620 additions & 1,515 deletions img/wrangling/pivot_functions.001.ai
Binary file modified img/wrangling/pivot_functions.001.png
3,089 changes: 1,576 additions & 1,513 deletions img/wrangling/pivot_functions.002.ai
Binary file modified img/wrangling/pivot_functions.002.png
2,772 changes: 1,398 additions & 1,374 deletions img/wrangling/pivot_functions.003.ai
2,329 changes: 1,180 additions & 1,149 deletions img/wrangling/pivot_functions.004.ai
1,791 changes: 858 additions & 933 deletions img/wrangling/summarize.004.ai
1,792 changes: 922 additions & 870 deletions img/wrangling/tidy_data.001.ai
1,820 changes: 1,820 additions & 0 deletions img/wrangling/wrangling-syntax-all.ai
20 changes: 10 additions & 10 deletions source/classification1.Rmd
@@ -324,7 +324,7 @@ Figure \@ref(fig:05-knn-1).
perim_concav_with_new_point <- bind_rows(cancer,
tibble(Perimeter = new_point[1],
Concavity = new_point[2],
Class = "unknown")) |>
Class = "Unknown")) |>
ggplot(aes(x = Perimeter,
y = Concavity,
color = Class,
@@ -379,7 +379,7 @@ not, if you consider the other nearby points.
perim_concav_with_new_point2 <- bind_rows(cancer,
tibble(Perimeter = new_point[1],
Concavity = new_point[2],
Class = "unknown")) |>
Class = "Unknown")) |>
ggplot(aes(x = Perimeter,
y = Concavity,
color = Class,
@@ -466,7 +466,7 @@ In order to find the $K=5$ nearest neighbors, we will use the `slice_min` function
perim_concav <- bind_rows(cancer,
tibble(Perimeter = new_point[1],
Concavity = new_point[2],
Class = "unknown")) |>
Class = "Unknown")) |>
ggplot(aes(x = Perimeter,
y = Concavity,
color = Class,
@@ -945,12 +945,12 @@ Standardizing your data should be a part of the preprocessing you do
before predictive modeling and you should always think carefully about your problem domain and
whether you need to standardize your data.

- ```{r 05-scaling-plt, echo = FALSE, fig.height = 4, fig.align = "center", fig.cap = "Comparison of K = 3 nearest neighbors with standardized and unstandardized data."}
+ ```{r 05-scaling-plt, echo = FALSE, fig.height = 4, fig.align = "center", fig.cap = "Comparison of K = 3 nearest neighbors with unstandardized and standardized data."}
attrs <- c("Area", "Smoothness")
# create a new obs and get its NNs
- new_obs <- tibble(Area = 400, Smoothness = 0.135, Class = "unknown")
+ new_obs <- tibble(Area = 400, Smoothness = 0.135, Class = "Unknown")
my_distances <- table_with_distances(unscaled_cancer[, attrs],
new_obs[, attrs])
neighbors <- unscaled_cancer[order(my_distances$Distance), ]
@@ -989,7 +989,7 @@ unscaled <- ggplot(unscaled_cancer, aes(x = Area,
), color = "black", linewidth = 0.5, show.legend = FALSE)
# create new scaled obs and get NNs
- new_obs_scaled <- tibble(Area = -0.72, Smoothness = 2.8, Class = "unknown")
+ new_obs_scaled <- tibble(Area = -0.72, Smoothness = 2.8, Class = "Unknown")
my_distances_scaled <- table_with_distances(scaled_cancer[, attrs],
new_obs_scaled[, attrs])
neighbors_scaled <- scaled_cancer[order(my_distances_scaled$Distance), ]
@@ -1067,7 +1067,7 @@ ggplot(unscaled_cancer, aes(x = Area,
facet_zoom(x = ( Area > 380 & Area < 420) ,
y = (Smoothness > 0.08 & Smoothness < 0.14), zoom.size = 2) +
theme_bw() +
- theme(text = element_text(size = 18), axis.title=element_text(size=18), legend.position="bottom")
+ theme(text = element_text(size = 13), axis.title=element_text(size=13), legend.position="bottom")
```
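For context, the standardization recommended above is typically expressed in this book's `tidymodels` stack as recipe steps; a minimal sketch (the chapter's actual recipe may differ; the data frame and column names are taken from the surrounding code):

```r
library(tidymodels)

# Center and scale the two predictors before K-NN classification
# (recipe(), step_center(), and step_scale() come from the recipes package)
uc_recipe <- recipe(Class ~ Area + Smoothness, data = unscaled_cancer) |>
  step_center(all_predictors()) |>
  step_scale(all_predictors())
uc_recipe
```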

### Balancing
@@ -1141,7 +1141,7 @@ neighbors <- rare_cancer[order(my_distances$Distance), ]
rare_plot <- bind_rows(rare_cancer,
tibble(Perimeter = new_point[1],
Concavity = new_point[2],
Class = "unknown")) |>
Class = "Unknown")) |>
ggplot(aes(x = Perimeter, y = Concavity, color = Class, shape = Class)) +
geom_point(alpha = 0.5) +
labs(color = "Diagnosis",
@@ -1175,8 +1175,8 @@ rare_plot + geom_point(aes(x = new_point[1], y = new_point[2]),
```

Figure \@ref(fig:05-upsample-2) shows what happens if we set the background color of
- each area of the plot to the predictions the K-nearest neighbors
- classifier would make. We can see that the decision is
+ each area of the plot to the prediction the K-nearest neighbors
+ classifier would make for a new observation at that location. We can see that the decision is
always "benign," corresponding to the blue color.

```{r 05-upsample-2, echo = FALSE, fig.height = 3.5, fig.width = 4.5, fig.align = "center", fig.cap = "Imbalanced data with background color indicating the decision of the classifier and the points represent the labeled data."}
34 changes: 19 additions & 15 deletions source/classification2.Rmd
@@ -139,7 +139,7 @@ it classified 3 malignant observations as benign, and 4 benign observations as
malignant. The accuracy of this classifier is roughly
89%, given by the formula

- $$\mathrm{accuracy} = \frac{\mathrm{number \; of \; correct \; predictions}}{\mathrm{total \; number \; of \; predictions}} = \frac{1+57}{1+57+4+3} = 0.892$$
+ $$\mathrm{accuracy} = \frac{\mathrm{number \; of \; correct \; predictions}}{\mathrm{total \; number \; of \; predictions}} = \frac{1+57}{1+57+4+3} = 0.892.$$

But we can also see that the classifier only identified 1 out of 4 total malignant
tumors; in other words, it misclassified 75% of the malignant cases present in the
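As a quick check of the arithmetic in this hunk, a minimal R sketch (the cell counts come from the surrounding prose):

```r
# Counts taken from the text above: 1 malignant tumor correctly identified,
# 57 benign correctly identified, 3 malignant misclassified as benign,
# and 4 benign misclassified as malignant.
tp <- 1
tn <- 57
fn <- 3
fp <- 4

accuracy <- (tp + tn) / (tp + tn + fp + fn)
accuracy                  # 0.892, the roughly 89% quoted above
recall <- tp / (tp + fn)  # 0.25: only 1 of 4 malignant tumors found
```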
@@ -245,7 +245,7 @@ Here, we pass in the number `1`.

```{r}
set.seed(1)
- random_numbers1 <- sample(0:9, 10, replace=TRUE)
+ random_numbers1 <- sample(0:9, 10, replace = TRUE)
random_numbers1
```

Expand All @@ -255,7 +255,7 @@ we run the `sample` function again, we will
get a fresh batch of 10 numbers that also look random.

```{r}
- random_numbers2 <- sample(0:9, 10, replace=TRUE)
+ random_numbers2 <- sample(0:9, 10, replace = TRUE)
random_numbers2
```

Expand All @@ -265,10 +265,10 @@ value.

```{r}
set.seed(1)
- random_numbers1_again <- sample(0:9, 10, replace=TRUE)
+ random_numbers1_again <- sample(0:9, 10, replace = TRUE)
random_numbers1_again
- random_numbers2_again <- sample(0:9, 10, replace=TRUE)
+ random_numbers2_again <- sample(0:9, 10, replace = TRUE)
random_numbers2_again
```

Expand All @@ -278,19 +278,19 @@ obtain a different sequence of random numbers.

```{r}
set.seed(4235)
- random_numbers <- sample(0:9, 10, replace=TRUE)
- random_numbers
+ random_numbers1_different <- sample(0:9, 10, replace = TRUE)
+ random_numbers1_different
- random_numbers <- sample(0:9, 10, replace=TRUE)
- random_numbers
+ random_numbers2_different <- sample(0:9, 10, replace = TRUE)
+ random_numbers2_different
```

In other words, even though the sequences of numbers that R is generating *look*
random, they are totally determined when we set a seed value!

So what does this mean for data analysis? Well, `sample` is certainly
not the only function that uses randomness in R. Many of the functions
- that we use in `tidymodels`, `tidyverse`, and beyond use randomness&mdash;many of them
+ that we use in `tidymodels`, `tidyverse`, and beyond use randomness&mdash;some of them
without even telling you about it. So at the beginning of every data analysis you
do, right after loading packages, you should call the `set.seed` function and
pass it an integer that you pick.
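A minimal sketch of that advice (the package choices and seed value here are arbitrary):

```r
# Load packages first, then set the seed once so that every
# randomness-using function in the analysis gives reproducible results.
library(tidyverse)
library(tidymodels)

set.seed(1234)

# ... the rest of the analysis runs identically on every execution ...
```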
@@ -512,10 +512,10 @@ cancer_acc_1 <- cancer_test_predictions |>
filter(.metric == 'accuracy')
cancer_prec_1 <- cancer_test_predictions |>
- precision(truth = Class, estimate = .pred_class, event_level="first")
+ precision(truth = Class, estimate = .pred_class, event_level = "first")
cancer_rec_1 <- cancer_test_predictions |>
- recall(truth = Class, estimate = .pred_class, event_level="first")
+ recall(truth = Class, estimate = .pred_class, event_level = "first")
```

In the metrics data frame, we filtered the `.metric` column since we are
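For reference, the two metrics computed in these hunks have the standard definitions (not quoted from the diff):

$$\mathrm{precision} = \frac{\mathrm{TP}}{\mathrm{TP} + \mathrm{FP}}, \qquad \mathrm{recall} = \frac{\mathrm{TP}}{\mathrm{TP} + \mathrm{FN}},$$

where TP, FP, and FN count true positives, false positives, and false negatives for the positive class (here, the first factor level, matching `event_level = "first"`).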
@@ -537,12 +537,12 @@ If the labels were in the other order, we would instead use `event_level="second"`

```{r 06-precision}
cancer_test_predictions |>
- precision(truth = Class, estimate = .pred_class, event_level="first")
+ precision(truth = Class, estimate = .pred_class, event_level = "first")
```

```{r 06-recall}
cancer_test_predictions |>
- recall(truth = Class, estimate = .pred_class, event_level="first")
+ recall(truth = Class, estimate = .pred_class, event_level = "first")
```

The output shows that the estimated precision and recall of the classifier on the test data was
@@ -1400,6 +1400,7 @@ res <- tibble(ks = ks, accs = accs, fixedaccs = fixedaccs, nghbrs = nghbrs)
plt_irrelevant_accuracies <- ggplot(res) +
geom_line(mapping = aes(x=ks, y=accs)) +
geom_point(mapping = aes(x=ks, y=accs)) +
labs(x = "Number of Irrelevant Predictors",
y = "Model Accuracy Estimate") +
theme(text = element_text(size = 18), axis.title=element_text(size=18))
@@ -1420,9 +1421,10 @@ this evidence; if we fix the number of neighbors to $K=3$, the accuracy falls off

```{r 06-neighbors-irrelevant-features, echo = FALSE, warning = FALSE, fig.retina = 2, out.width = "65%", fig.align = "center", fig.cap = "Tuned number of neighbors for varying number of irrelevant predictors."}
plt_irrelevant_nghbrs <- ggplot(res) +
geom_point(mapping = aes(x=ks, y=nghbrs)) +
geom_line(mapping = aes(x=ks, y=nghbrs)) +
labs(x = "Number of Irrelevant Predictors",
y = "Number of neighbors") +
y = "Tuned number of neighbors") +
theme(text = element_text(size = 18), axis.title=element_text(size=18))
plt_irrelevant_nghbrs
@@ -1434,6 +1436,7 @@ res_tmp <- res %>% pivot_longer(cols=c("accs", "fixedaccs"),
values_to="accuracy")
plt_irrelevant_nghbrs <- ggplot(res_tmp) +
geom_point(mapping = aes(x=ks, y=accuracy, color=Type)) +
geom_line(mapping = aes(x=ks, y=accuracy, color=Type)) +
labs(x = "Number of Irrelevant Predictors", y = "Accuracy") +
scale_color_manual(labels= c("Tuned K", "K = 3"), values = c("darkorange", "steelblue")) +
@@ -1661,6 +1664,7 @@ where the elbow occurs, and whether adding a variable provides a meaningful increase
fwd_sel_accuracies_plot <- accuracies |>
ggplot(aes(x = size, y = accuracy)) +
geom_point() +
geom_line() +
labs(x = "Number of Predictors", y = "Estimated Accuracy") +
theme(text = element_text(size = 20), axis.title=element_text(size=20))
8 changes: 4 additions & 4 deletions source/inference.Rmd
@@ -83,7 +83,7 @@ In general, the process of using a sample to make a conclusion about the
broader population from which it is taken is referred to as **statistical inference**.
\index{inference}\index{statistical inference|see{inference}}

- ```{r 11-population-vs-sample, echo = FALSE, message = FALSE, warning = FALSE, fig.align = "center", fig.cap = "Population versus sample.", out.width="100%"}
+ ```{r 11-population-vs-sample, echo = FALSE, message = FALSE, warning = FALSE, fig.align = "center", fig.cap = "The process of using a sample from a broader population to obtain a point estimate of a population parameter. In this case, a sample of 10 individuals yielded 6 who own an iPhone, resulting in an estimated population proportion of 60% iPhone owners. The actual population proportion in this example illustration is 53.8%.", out.width="100%"}
knitr::include_graphics("img/inference/population_vs_sample.png")
```
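The scenario in the new caption can be sketched in a few lines of R (the population size and seed are arbitrary assumptions; only the 53.8% proportion comes from the caption):

```r
library(tidyverse)
set.seed(1)

# A synthetic population in which 53.8% of individuals own an iPhone
population <- tibble(
  owns_iphone = sample(c(TRUE, FALSE), size = 10000,
                       replace = TRUE, prob = c(0.538, 0.462)))

# A sample of 10 individuals yields a point estimate of the true proportion
sample_10 <- slice_sample(population, n = 10)
mean(sample_10$owns_iphone)
```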

Expand Down Expand Up @@ -443,7 +443,7 @@ sampling_distribution_40
In Figure \@ref(fig:11-example-means4), the sampling distribution of the mean
has one peak and is \index{sampling distribution!shape} bell-shaped. Most of the estimates are between
about \$`r round(quantile(sample_estimates$mean_price)[2], -1)` and
- \$`r round(quantile(sample_estimates$mean_price)[4], -1)`; but there are
+ \$`r round(quantile(sample_estimates$mean_price)[4], -1)`; but there is
a good fraction of cases outside this range (i.e., where the point estimate was
not close to the population parameter). So it does indeed look like we were
quite lucky when we estimated the population mean with only
@@ -843,8 +843,8 @@ boot20000
tail(boot20000)
```

- Let's take a look at histograms of the first six replicates of our bootstrap samples.
- ```{r 11-bootstrapping-six-bootstrap-samples, echo = TRUE, fig.pos = "H", out.extra="", message = FALSE, warning = FALSE, fig.align = "center", fig.cap = "Histograms of first six replicates of bootstrap samples."}
+ Let's take a look at the histograms of the first six replicates of our bootstrap samples.
+ ```{r 11-bootstrapping-six-bootstrap-samples, echo = TRUE, fig.pos = "H", out.extra="", message = FALSE, warning = FALSE, fig.align = "center", fig.cap = "Histograms of the first six replicates of the bootstrap samples."}
six_bootstrap_samples <- boot20000 |>
filter(replicate <= 6)
20 changes: 11 additions & 9 deletions source/intro.Rmd
@@ -89,7 +89,7 @@ tongues in Canada, and how many people speak each of them?*
Every good data analysis begins with a *question*&mdash;like the
above&mdash;that you aim to answer using data. As it turns out, there
are actually a number of different *types* of question regarding data:
- descriptive, exploratory, inferential, predictive, causal, and mechanistic,
+ descriptive, exploratory, predictive, inferential, causal, and mechanistic,
all of which are defined in Table \@ref(tab:questions-table).
Carefully formulating a question as early as possible in your analysis&mdash;and
correctly identifying which type of question it is&mdash;will guide your overall approach to
@@ -174,10 +174,12 @@ Since we are using R for data analysis in this book, the first step for us is to
load the data into R. When we load tabular data into
R, it is represented as a *data frame* object\index{data frame!overview}. Figure
\@ref(fig:img-spreadsheet-vs-dataframe) shows that an R data frame is very similar
- to a spreadsheet. We refer to the rows as \index{observation} **observations**; these are the things that we
- collect the data on, e.g., voters, cities, etc. We refer to the columns as \index{variable}
- **variables**; these are the characteristics of those observations, e.g., voters' political
- affiliations, cities' populations, etc.
+ to a spreadsheet. We refer to the rows as \index{observation} **observations**;
+ these are the individual objects
+ for which we collect data. In Figure \@ref(fig:img-spreadsheet-vs-dataframe), the observations are
+ languages. We refer to the columns as **variables**; these are the characteristics of each
+ observation. In Figure \@ref(fig:img-spreadsheet-vs-dataframe), the variables are the
+ language's category, its name, the number of mother tongue speakers, etc.

```{r img-spreadsheet-vs-dataframe, echo = FALSE, message = FALSE, warning = FALSE, fig.align = "center", fig.cap = "A spreadsheet versus a data frame in R.", out.width="100%", fig.retina = 2}
knitr::include_graphics("img/intro/spreadsheet_vs_dataframe.png")
@@ -235,7 +237,7 @@ library(tidyverse)
```

> **Note:** You may have noticed that we got some extra
- > output from R saying `Attaching packages` and `Conflicts` below our code
+ > output from R regarding attached packages and conflicts below our code
> line. These are examples of *messages* in R, which give the user more
> information that might be handy to know. The `Attaching packages` message is
> natural when loading `tidyverse`, since `tidyverse` actually automatically
@@ -450,7 +452,7 @@ selected_lang <- select(aboriginal_lang, language, mother_tongue)
selected_lang
```

- ## Using `arrange` to order and `slice` to select rows by index number
+ ## Using `arrange` to order and `slice` to select rows by index number {#arrangesliceintro}

We have used `filter` and `select` to obtain a table with only the Aboriginal
languages in the data set and their associated counts. However, we want to know
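The pattern this section builds toward can be sketched with the `selected_lang` data frame from the hunk above (a sketch; the chapter's exact code may differ):

```r
# Order the rows by decreasing number of mother-tongue speakers,
# then keep only the first ten rows.
arranged_lang <- arrange(selected_lang, desc(mother_tongue))
ten_lang <- slice(arranged_lang, 1:10)
ten_lang
```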
@@ -498,7 +500,7 @@ counts... But perhaps, seeing these numbers, we became curious about the
*percentage* of the population of Canada associated with each count. It is
common to come up with new data analysis questions in the process of answering
a first one&mdash;so fear not and explore! To answer this small
- question-along-the-way, we need to divide each count in the `mother_tongue`
+ question along the way, we need to divide each count in the `mother_tongue`
column by the total Canadian population according to the 2016
census&mdash;i.e., 35,151,728&mdash;and multiply it by 100. We can perform
this computation using the `mutate` function. We pass the `ten_lang`
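A sketch of the computation just described (the census total is from the text; the new column name is a placeholder):

```r
# Convert each count to a percentage of the 2016 Canadian population
canadian_population <- 35151728
ten_lang_percent <- mutate(ten_lang,
  mother_tongue_percentage = 100 * mother_tongue / canadian_population)
ten_lang_percent
```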
@@ -521,7 +523,7 @@ as a mother tongue by between 0.008% and 0.18% of the Canadian population.

## Exploring data with visualizations

- We have now answered our initial question by generating the `ten_lang` table!
+ The `ten_lang` table we generated in Section \@ref(arrangesliceintro) answers our initial data analysis question.
Are we done? Well, not quite; tables are almost never the best way to present
the result of your analysis to your audience. Even the `ten_lang` table with
only two columns presents some difficulty: for example, you have to scrutinize
6 changes: 3 additions & 3 deletions source/jupyter.Rmd
@@ -29,15 +29,15 @@ filled_circle <- function(){
if(is_latex_output()) {
"\\faCircle{}"
} else {
"\U25EF"
"\U2B24"
}
}
circle <- function(){
if(is_latex_output()) {
"\\faCircle[regular]{}"
} else {
"\U2B24"
"\U25EF"
}
}
```
@@ -68,7 +68,7 @@ By the end of the chapter, readers will be able to do the following:

## Jupyter

- Jupyter is a web-based interactive development environment for creating, editing,
+ Jupyter [@kluyver2016jupyter] is a web-based interactive development environment for creating, editing,
and executing documents called Jupyter notebooks. Jupyter notebooks \index{Jupyter notebook} are
documents that contain a mix of computer code (and its output) and formattable
text. Given that they combine these two analysis artifacts in a single
4 changes: 2 additions & 2 deletions source/reading.Rmd
@@ -352,8 +352,8 @@ Non-Official & Non-Aboriginal languages Arabic 419890 223535 5585 629055

To read this into R using the `read_delim` function, we specify the path
to the file as the first argument, provide
- the tab character `"\t"` as the `delim` argument \index{read function!delim argument},
- and set the `col_names` argument to `FALSE` to denote that there are no column names
+ the tab character `"\t"` as the `delim` argument,
+ and set \index{read function!delim argument} the `col_names` argument to `FALSE` to denote that there are no column names
provided in the data. Note that the `read_csv`, `read_tsv`, and `read_delim` functions
all have a `col_names` argument \index{read function!col\_names argument} with
the default value `TRUE`.
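A sketch of the call being described (the file path here is hypothetical):

```r
library(tidyverse)

# Read a tab-separated file that has no header row
data <- read_delim("data/can_lang.tsv", delim = "\t",
                   col_names = FALSE)
```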
2 changes: 1 addition & 1 deletion source/references.bib
@@ -20,7 +20,7 @@ @article{knncover
pages = {21--27}}

@article{penguinpaper,
- title = {Ecological sexual dimorphism and environmental variability within a community of {A}ntarctic penguins (genus \emph{Pygoscelis})},
+ title = {Ecological sexual dimorphism and environmental variability within a community of {A}ntarctic penguins (genus Pygoscelis)},
year = {2014},
author = {Kristen Gorman and Tony Williams and William Fraser},
journal = {PLoS ONE},
2 changes: 1 addition & 1 deletion source/regression1.Rmd
@@ -107,7 +107,7 @@ is that we are now predicting numerical variables instead of categorical variables.

> **Note:** You can usually tell whether a\index{categorical variable}\index{numerical variable} variable is numerical or
> categorical&mdash;and therefore whether you need to perform regression or
- > classification&mdash;by taking two response variables X and Y from your data,
+ > classification&mdash;by taking the response variable for two observations X and Y from your data,
> and asking the question, "is response variable X *more* than response
> variable Y?" If the variable is categorical, the question will make no sense.
> (Is blue more than red? Is benign more than malignant?) If the variable is