From 78693e5ba5bc61cb6d05e39dacd6db31e6dd89bd Mon Sep 17 00:00:00 2001
From: Brad Duthie
diff --git a/Chapter_1.html b/Chapter_1.html
index dc6c9b1f..086bc955 100644
--- a/Chapter_1.html
+++ b/Chapter_1.html
@@ -321,14 +321,14 @@
\[s = \frac{0.0902404}{\sqrt{10}} = 0.0285365.\]
-The estimate of the standard error from calculating the standard deviation of the sample means is therefore 0.0294695, and the estimate from just using the standard error formula and data from only Sample 1 is 0.0285365.
+\[s = \frac{0.0824891}{\sqrt{10}} = 0.0260853.\]
+The estimate of the standard error from calculating the standard deviation of the sample means is therefore 0.0265613, and the estimate from just using the standard error formula and data from only Sample 1 is 0.0260853. These are reasonably close, and they would be even closer if we had either a larger sample size in each sample (i.e., higher \(N\)) or a larger number of samples.
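The standard error arithmetic quoted above can be checked directly. A minimal Python sketch (for illustration only; the book itself works in jamovi), using the Sample 1 figures given in the text:

```python
import math

# Standard error of the mean: SE = s / sqrt(N).
# Values below are the Sample 1 figures quoted in the text.
s = 0.0824891   # sample standard deviation of Sample 1
N = 10          # sample size

se = s / math.sqrt(N)
print(round(se, 7))  # 0.0260853
```

This matches the value obtained from the standard error formula in the text.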
When you do this, the Statistics option in jamovi should look like it does in Figure 14.3.
-Note that a binomial distribution does not need to involve a fair coin with equal probability of success and failure. We can again consider the first example in Section 15.2, in which 1 in 40 people in an area test positive for COVID-19, and ask what the probability is that 0–6 people in a small shop would test positive (Figure 14.6).
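The calculation described here follows from the binomial probability mass function. A hedged Python sketch (the chapter does this in jamovi; the shop size n below is an illustrative assumption, since this excerpt does not state it):

```python
from math import comb

# Binomial PMF: P(X = k) = C(n, k) * p^k * (1 - p)^(n - k).
p = 1 / 40  # probability of testing positive, from the text
n = 40      # assumed number of people in the shop (hypothetical)

for k in range(7):  # probability that 0-6 people test positive
    prob = comb(n, k) * p**k * (1 - p)**(n - k)
    print(f"P(X = {k}) = {prob:.4f}")
```

Note that p = 1/40 is far from 0.5, which is exactly the point: the binomial distribution does not require equal probabilities of success and failure.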
-The distribution of sample means shown in Figure 16.1B is not perfectly normal. We can try again with an even bigger sample size of \(N = 1000\), this time with a Poisson distribution where \(\lambda = 1\) in Figure 15.7. Figure 16.2 shows this result, with the original Poisson distribution shown in Figure 16.2A, and the corresponding distribution built from 1000 sample means shown in Figure 16.2B.
-We can try the same approach with the continuous uniform distribution shown in Figure 15.8. This time, we will use an even larger sample size of \(N = 10000\) to get our 1000 sample means. The simulated result is shown in Figure 16.3.
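The simulation these passages describe, drawing many samples and plotting the distribution of their means, can be sketched as follows. This is an illustrative Python version (the book does this in jamovi); the uniform case is shown, with 1000 sample means as in the text:

```python
import random
import statistics

random.seed(1)  # fixed seed so the illustration is reproducible

def sample_means_uniform(n_samples=1000, n=1000):
    """Draw n_samples samples of size n from Uniform(0, 1)
    and return the mean of each sample."""
    return [statistics.mean(random.random() for _ in range(n))
            for _ in range(n_samples)]

means = sample_means_uniform()
# By the central limit theorem, these means cluster around 0.5 and
# are approximately normally distributed, even though the underlying
# uniform distribution is not.
print(statistics.mean(means), statistics.stdev(means))
```

The same approach works for the Poisson case: only the line drawing each observation changes.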
-The hist function plots a histogram of the first variable.
To run the code, find the green triangle in the upper right (Figure 17.6).
-Our t-statistic is therefore 2.496623 (note that a t-statistic can also be negative; this would just mean that our sample mean is less than \(\mu_{0}\), instead of greater than \(\mu_{0}\), but nothing about the t-test changes if this is the case). We can see where this value falls on the t-distribution with 9 degrees of freedom in Figure 22.1.
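The one-sample t-statistic follows directly from \(t = (\bar{y} - \mu_{0}) / SE(\bar{y})\). A minimal Python sketch with made-up data (the actual sample values are not reproduced in this excerpt; N = 10 is used so that df = 9 as in the text):

```python
import math
import statistics

def one_sample_t(y, mu0):
    """One-sample t-statistic: t = (mean(y) - mu0) / (s / sqrt(N))."""
    n = len(y)
    se = statistics.stdev(y) / math.sqrt(n)
    return (statistics.mean(y) - mu0) / se

# Hypothetical data with N = 10, so df = N - 1 = 9.
y = [4.1, 3.8, 4.4, 4.0, 4.2, 3.9, 4.3, 4.5, 4.1, 4.0]
print(one_sample_t(y, mu0=4.0))  # about 1.857143
```

A negative result would simply mean the sample mean lies below \(\mu_{0}\), as the text notes; the test itself is unchanged.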
-\[t_{\bar{y}_{1} - \bar{y}_{2}} = \frac{\bar{y}_{1} - \bar{y}_{2}}{SE(\bar{y})} = \frac{66.58 - 61.3}{3.915144} = 1.348609.\]
As with the one-sample t-test, we can identify the position of \(t_{\bar{y}_{1} - \bar{y}_{2}}\) on the t-distribution (Figure 22.2).
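The arithmetic of the quoted independent-samples statistic can be verified directly from the values given in the text (a Python check, for illustration only):

```python
# Independent-samples t arithmetic from the text:
# t = (mean1 - mean2) / SE of the difference between the means.
mean1 = 66.58
mean2 = 61.3
se_diff = 3.915144  # standard error value given in the text

t = (mean1 - mean2) / se_diff
print(round(t, 6))  # 1.348609
```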
-\[t_{\bar{y}} = \frac{-0.88 - 0}{0.9760237} = -0.9016175.\]
Again, we can find the location of our t-statistic \(t_{\bar{y}} = -0.9016175\) on the t-distribution (Figure 22.3).
-When running an ANOVA in a statistical program, output includes (at least) the calculated F-statistic, degrees of freedom, and the p-value. Figure 24.3 shows the one-way ANOVA output of the test of fig wasp wing lengths in jamovi.
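The F-statistic that such output reports is the ratio of the among-group to the within-group mean squares. A hedged Python sketch with invented numbers (not the fig wasp data, which this excerpt does not reproduce):

```python
import statistics

def one_way_anova_F(groups):
    """One-way ANOVA F-statistic: F = MS_among / MS_within.
    groups is a list of lists of observations."""
    k = len(groups)                   # number of groups
    N = sum(len(g) for g in groups)   # total number of observations
    grand = statistics.mean(x for g in groups for x in g)
    ss_among = sum(len(g) * (statistics.mean(g) - grand) ** 2
                   for g in groups)
    ss_within = sum((x - statistics.mean(g)) ** 2
                    for g in groups for x in g)
    ms_among = ss_among / (k - 1)     # df1 = k - 1
    ms_within = ss_within / (N - k)   # df2 = N - k
    return ms_among / ms_within

# Hypothetical wing-length-style data for three groups.
groups = [[2.1, 2.3, 2.2], [2.6, 2.8, 2.7], [2.0, 2.1, 1.9]]
print(one_way_anova_F(groups))
```

The two degrees of freedom reported alongside F in program output are the \(k - 1\) and \(N - k\) values used in these divisions.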