# Random Intercept Models {#RI}
<span style="color:blue">**Example Data**</span>
Radon is a radioactive gas that naturally occurs in soils around the U.S. As radon decays it releases other radioactive elements, which stick to, among other things, dust particles commonly found in homes. The EPA believes [radon exposure](https://www.epa.gov/radon) is one of the leading causes of cancer in the United States.
This example uses a dataset named `radon` from the `rstanarm` package. The dataset contains $N=919$ observations, each a radon measurement taken in a home located in one of the $J=85$ sampled counties in Minnesota. The first six rows of the dataframe show that even within the county Aitkin, levels of $log(radon)$ vary. Our goal is to build a model to predict $log(radon)$.
```{r, echo=-(1:5), message=FALSE, warning=FALSE}
library(sjPlot)
library(lme4)
library(dplyr)
library(ggplot2)
library(knitr)
data(radon, package="rstanarm")
head(radon)
```
## Pooling {#pooling}
To highlight the benefits of random intercept models, we will compare three linear regression models:
* complete pooling
* no pooling
* partial pooling (the random intercept model)
**Complete Pooling**
The complete pooling model pools all counties together to give one single estimate of the $log(radon)$ level.
**No Pooling**
No pooling refers to the fact that no information is shared among the counties. Each county is independent of the next.
**Partial Pooling**
The partial pooling model partially shares information among the counties.
Each county gets a _unique intercept_ whose deviation from the overall mean is randomly sampled from a normal distribution with mean $0$ and variance $\sigma^2_{\alpha}$.
Because all county deviations are randomly sampled from the same theoretical population, $N(0, \sigma^2_{\alpha})$, information is shared among the counties. This sharing of information is generally referred to as **shrinkage**, and should be thought of as a means to reduce variation in estimates among the counties. When a county has little information to offer, its estimated intercept is shrunk towards the overall mean of all counties.
```{r, echo=FALSE}
library(lme4)
fit_nopool <- lm(log_radon~-1+county, data=radon)
fitted_nopool <- dplyr::bind_cols(radon,
data.frame(.fitted=predict(fit_nopool)))
fit_partpool <- lmer(log_radon ~ (1 |county), data=radon)
fitted_partpool <- dplyr::bind_cols(radon,
data.frame(.fitted=predict(fit_partpool),
.fixed=lme4::fixef(fit_partpool)))
```
The plot below displays the overall mean as the complete pooling estimate (solid horizontal line), along with the no pooling and partial pooling estimates for 8 randomly selected counties in the radon data. The amount of shrinkage in the partial pooling fit is determined by a data-dependent compromise between the county-level sample size, the variation among the counties, and the variation within the counties.
```{r, echo=FALSE, fig.width=8, fig.height=5, fig.align="center"}
county_idx <- sample(unique(radon$county), 8, replace=FALSE)
fitted_nopool %>%
filter(county %in% county_idx) %>%
ggplot(aes(x=county, y=.fitted, colour="not pooled")) +
geom_jitter() +
geom_point(data=filter(fitted_partpool, county %in% county_idx),
aes(y=.fitted, colour="partially pooled")) +
geom_hline(data=fitted_partpool, aes(yintercept=mean(.fixed), colour="completely pooled")) +
labs(y="Estimated county means", x="County") +
theme(axis.text.x=element_text(angle=35, hjust=1)) +
guides(colour=guide_legend(title="Model"))
```
Generally, we can see that counties with smaller sample sizes are shrunk more towards the overall mean, while counties with larger sample sizes are shrunk less.
```{block2, type="rmdcaution"}
The fitted values corresponding to different observations within each county of the no-pooling model are jittered to help the eye determine approximate sample size within each county.
Estimates of variation within each county should not be determined from this arbitrary jittering of points.
```
## Mathematical Models {#mathri}
The three models considered set $y_n=log(radon)$, and $x_n$ records floor (0=basement, 1=first floor) for homes $n=1, \ldots, N$.
### Complete Pooling
The complete pooling model pools all counties together to give one single estimate of the $log(radon)$ level, $\hat{\alpha}$.
* The error term $\epsilon_n$ may represent variation due to measurement error, within-house variation, and/or within-county variation.
* Fans of the random intercept model argue that $\epsilon_n$ here lumps too many sources of error into one term, and regard this as a fault of the completely pooled model.
\begin{equation*}
\begin{split}
y_n = \alpha & + \epsilon_n \\
\epsilon_n & \sim N(0, \sigma_{\epsilon}^{2})
\end{split}
\end{equation*}
### No Pooling
* The no pooling model gives each county an independent estimate of $log(radon)$, $\hat{\alpha}_{j[n]}$.
* Read the subscript $j[n]$ as home $n$ is nested within county $j$. Hence, all homes in each county get their own independent estimate of $log(radon)$.
* This is equivalent to the fixed effects model.
* Here again, one might argue that the error term captures too much noise.
\begin{equation*}
\begin{split}
y_n = \alpha_{j[n]} & + \epsilon_n \\
\epsilon_n & \sim N(0, \sigma_{\epsilon}^{2})
\end{split}
\end{equation*}
### Partial Pooling (RI)
* The random intercept model, better known as the partial pooling model, gives each county an intercept term $\alpha_{j[n]}$ that varies according to its own variance term, $\sigma_{\alpha}^2$.
* This variance term measures between-county variation,
    - separating measurement error ($\sigma_{\epsilon}^{2}$) from county-level error ($\sigma_{\alpha}^{2}$).
* This multi-level model shares information among the counties, to the effect that the estimates $\alpha_{j[n]}$ are a compromise between the completely pooled and not pooled estimates.
* When a county has a relatively smaller sample size and/or the variance $\sigma^{2}_{\epsilon}$ is larger than the variance $\sigma^2_{\alpha}$, estimates are shrunk more from the not pooled estimates towards the completely pooled estimate.
\begin{equation*}
\begin{split}
y_n = \alpha_{j[n]} & + \epsilon_n \\
\epsilon_n & \sim N(0, \sigma_{\epsilon}^{2}) \\
\alpha_{j[n]} & \sim N(\mu_{\alpha}, \sigma_{\alpha}^2)
\end{split}
\end{equation*}
## Components of Variance
Statistics can be thought of as the study of uncertainty, and variance is a measure of uncertainty (and information). So yet again we see that we're partitioning the variance. Recall that
* Measurement error: $\sigma^{2}_{\epsilon}$
* County level error: $\sigma^{2}_{\alpha}$
The **intraclass correlation** (ICC, $\rho$) is interpreted as
* the proportion of total variance that is explained by the clusters.
* the expected correlation between two individuals who are drawn from the same cluster.
$$
\rho = \frac{\sigma^{2}_{\alpha}}{\sigma^{2}_{\alpha} + \sigma^{2}_{\epsilon}}
$$
* When $\rho$ is large, a lot of the variance is at the macro level
    - units within each group are very similar
* If $\rho$ is small enough, one may ask whether fitting a multi-level model is worth the complexity (the ICC for our model is computed below).
* There is no hard and fast rule for what counts as "large enough", but rules of thumb include:
    - under 10% (0.1), a single-level analysis may still be appropriate;
    - over 10% (0.1), a multilevel model can be justified.
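Here is a minimal sketch of computing the ICC from the variance components of the partial pooling fit above (`fit_partpool`); in the data frame returned by `as.data.frame(VarCorr(...))`, the row labeled `county` holds $\sigma^{2}_{\alpha}$ and the row labeled `Residual` holds $\sigma^{2}_{\epsilon}$.
```{r}
vc <- as.data.frame(VarCorr(fit_partpool))        # variance components
icc <- vc$vcov[vc$grp == "county"] / sum(vc$vcov) # sigma^2_alpha / total
icc
```
With $\sigma^{2}_{\alpha} \approx 0.09$ and $\sigma^{2}_{\epsilon} \approx 0.57$, this works out to $\rho \approx 0.14$, over the 10% rule of thumb.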
## Fitting models in R {#fitri}
**Complete Pooling**
The complete pooling model is fit with the function `lm`, using only an intercept (`1`) and no covariates. This is the simple mean model, and is equivalent to estimating the overall mean.
```{r}
fit_completepool <- lm(log_radon ~ 1, data=radon)
fit_completepool
mean(radon$log_radon)
```
**No Pooling**
The no pooling model is also fit with the function `lm`, but gives each county a unique intercept in the model.
```{r, results='asis', echo=1:2}
fit_nopool <- lm(log_radon ~ -1 + county, data=radon)
fit_nopool.withint <- lm(log_radon ~ county, data=radon)
stargazer::stargazer(fit_nopool, fit_nopool.withint,
type="html", single.row=TRUE, omit.stat="all", p=NULL,
intercept.top=TRUE, intercept.bottom=FALSE,
keep=c("Constant", "countyAITKIN", "countyANOKA", "countyBECKER"))
```
* The first model (`fit_nopool`) is coded as `lm(log_radon ~ -1 + county, data=radon)`, and so does not have the global intercept (that's what the `-1` does). Each $\beta$ coefficient is the estimate of the mean `log_radon` for that county.
* The second model (`fit_nopool.withint`) is coded as `lm(log_radon ~ county, data=radon)` and is what we are typically used to fitting.
- Each estimate is the difference in log(radon) for that county _compared to a reference county_.
- Because county is alphabetical, the reference group is AITKIN.
- Aitkin's mean level of log(radon) shows up as the intercept or _Constant_ term.
* For display purposes, only the first 3 county estimates are shown.
**Partial Pooling**
* The partial pooling model is fit with the function `lmer()`, which is part of the **[`lme4`](https://cran.r-project.org/web/packages/lme4/vignettes/lmer.pdf)** package.
* The extra notation around the input variable `(1|county)` dictates that each county should get its own unique intercept $\alpha_{j[n]}$.
```{r}
fit_partpool <- lmer(log_radon ~ (1 |county), data=radon)
```
The fixed effects portion of the model output of `lmer` is similar to output from `lm`, except no p-values are displayed. The fact that no p-values are displayed is a much discussed topic. The author of the library `lme4`, Douglas Bates, believes that there is no "obviously correct" solution to calculating p-values for models with randomly varying intercepts (or slopes); see **[here](https://stat.ethz.ch/pipermail/r-help/2006-May/094765.html)** for a general discussion.
```{r}
summary(fit_partpool)
```
* The random effects portion of the `lmer` output provides a point estimate of the variance component $\sigma^2_{\alpha} = 0.09$ and of the model's residual variance, $\sigma^2_{\epsilon} = 0.57$.
* The fixed effect here is interpreted in the same way as in a normal fixed effects mean model: it is the global predicted value of the outcome `log_radon`.
* The random intercepts aren't automatically shown in this output. We can visualize them using a forest plot. We use the `plot_model()` function from the `sjPlot` package on the `fit_partpool` model, request the random effects (`type="re"`), and sort on the name of the random variable, here `"(Intercept)"`.
```{r, fig.height=12}
sjPlot::plot_model(fit_partpool, type="re", sort.est = "(Intercept)", y.offset = .4)
```
Notice that these effects are centered around 0. Referring back to Section \@ref(mathri), the county intercept $\alpha_{j[n]}$ was modeled as an overall mean $\mu_{\alpha}$ plus a county-specific deviation. What is plotted above is listed in the table below, showing that if you add each random effect to the fixed intercept, you get the value of the random intercept for each county.
```{r}
showri <- data.frame(Random_Effect = unlist(ranef(fit_partpool)),
Fixed_Intercept = fixef(fit_partpool),
RandomIntercept = unlist(ranef(fit_partpool))+fixef(fit_partpool))
rownames(showri) <- rownames(coef(fit_partpool)$county)
kable(head(showri))
```
```{r, eval=FALSE, echo=FALSE}
# confirm this behavior: coef() equals ranef() plus fixef()
a <- unlist(ranef(fit_partpool))
b <- unlist(coef(fit_partpool)$county)
diff <- data.frame(re=a, fe=fixef(fit_partpool), coef=b,
                   tot=a + fixef(fit_partpool))
rownames(diff) <- rownames(coef(fit_partpool)$county)
head(diff)
```
### Comparison of estimates
* By allowing individuals within a county to be correlated, while the county intercepts are drawn from a common distribution, we allow some information to be shared across counties.
* Thus we come back to that idea of shrinkage. Below is a numeric table version of the plot in Section \@ref(pooling).
```{r}
cmpr.est <- data.frame(Mean_Model = coef(fit_completepool),
Random_Intercept = unlist(ranef(fit_partpool))+fixef(fit_partpool),
Fixed_Effects = coef(fit_nopool))
rownames(cmpr.est) <- rownames(coef(fit_partpool)$county)
kable(head(cmpr.est))
```
## Estimation Methods
* Similar to logistic regression, the parameters of multi-level models typically can't be estimated directly with closed-form maximum likelihood (ML) formulas.
* Iterative methods such as **Restricted (residual) Maximum Likelihood (REML)** are used to obtain approximations.
* REML is typically the default estimation method for most packages.
Details of REML are beyond the scope of this class, but knowing the estimation method is important for two reasons:
1. Some types of testing procedures that use the likelihood ratio may not be valid.
    - Comparing models with different fixed effects using a likelihood ratio test is not valid under REML. (Use a Wald test instead.)
    - You can still use AIC/BIC as guidance (not as formal tests).
2. Iterative procedures perform estimation steps over and over until the change in estimates from one step to the next is smaller than some tolerance.
    - Sometimes this convergence never happens.
    - You will get an error message about the algorithm not converging.
    - The more complex the model, the higher the chance this can happen.
    - Scaling, centering, and avoiding collinearity can alleviate these convergence problems (see the sketch below).
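A minimal sketch of two common first steps when `lmer()` warns about non-convergence: standardize a continuous covariate and try an alternative optimizer. The covariate `x` here is hypothetical (the radon data has no such column), and `bobyqa` is one of the optimizers accepted by `lmerControl()`.
```{r, eval=FALSE}
# `x` is a made-up continuous covariate, for illustration only.
radon$x.std <- as.numeric(scale(radon$x))   # center and scale

fit <- lmer(log_radon ~ x.std + (1 | county), data=radon,
            control=lmerControl(optimizer="bobyqa"))
```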
You can change the fitting algorithm to use full maximum likelihood instead (`REML=FALSE`). It may be slightly slower, but for simple models the estimates will be very close to the REML estimates. Below is a table showing the estimates for the random intercepts,
```{r, echo=FALSE}
fit_partpool_MLE <- lmer(log_radon ~ (1 |county), data=radon, REML=FALSE)
RIs <- data.frame(coef(fit_partpool)$county,coef(fit_partpool_MLE)$county)
colnames(RIs) <- c("REML", "MLE")
library(kableExtra)
kable(head(RIs), "html") %>% kable_styling(full_width=FALSE)
```
and the same estimates for the variance terms.
```{r}
VarCorr(fit_partpool)
VarCorr(fit_partpool_MLE)
```
So does it matter? Yes and no. In general you want to fit the models using REML, but if you really want to use a likelihood ratio **test** to compare models, then you need to fit the models using ML, as sketched below.
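Here is a minimal sketch of that workflow, assuming we want to test whether adding `floor` improves the model: fit both models with `REML=FALSE`, then compare them with a likelihood ratio test via `anova()`.
```{r, eval=FALSE}
# Both models must be fit with full ML for the LRT to be valid.
m0 <- lmer(log_radon ~ (1 | county), data=radon, REML=FALSE)
m1 <- lmer(log_radon ~ floor + (1 | county), data=radon, REML=FALSE)
anova(m0, m1)   # likelihood ratio test of the nested fixed effects
```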
## Including Covariates
A similar sort of shrinkage effect is seen when covariates are included in the model.
Consider the covariate $floor$, which takes the value $1$ when the radon measurement was taken on the first floor of the house and $0$ when it was taken in the basement. In this case, county means are shrunk towards the mean of the response, $log(radon)$, within each level of the covariate.
```{r, echo=FALSE, fig.width=8, fig.height=5, fig.align="center"}
radon$floor <- factor(radon$floor, labels=c("basement", "first floor"))
fit_nopool <- lm(log_radon ~ floor + county, data=radon)
fitted_nopool <- bind_cols(radon,data.frame(.fitted=predict(fit_nopool)))
fit_partpool <- lmer(log_radon ~ floor + (1 |county), data=radon)
fitted_partpool <- bind_cols(radon, data.frame(.fitted=predict(fit_partpool),
.fixed=predict(fit_partpool, re.form=NA)))
county_idx <- c("BLUEEARTH", "DAKOTA", "FARIBAULT", "HENNEPIN",
"LACQUIPARLE", "MARSHALL", "PINE", "WASECA")
fitted_nopool %>%
filter(county %in% county_idx) %>%
ggplot(aes(x=county, y=.fitted, colour="not pooled")) +
geom_jitter() +
geom_point(data=filter(fitted_partpool, county %in% county_idx), aes(y=.fitted, colour="partially pooled")) +
geom_hline(data=filter(fitted_partpool, county %in% county_idx), aes(yintercept=.fixed, colour="completely pooled")) +
facet_grid(.~floor) +
labs(y="Estimated county means", x="County") +
theme(axis.text.x=element_text(angle=35, hjust=1)) +
guides(colour=guide_legend(title="Model"))
```
Covariates are fit using the standard `+` notation, placed outside the random effects specification `(1|county)`.
```{r}
ri.with.x <- lmer(log_radon ~ floor + (1 |county), data=radon)
tab_model(ri.with.x, show.r2=FALSE)
```
Note that in this table format, $\tau_{00} = \sigma^{2}_{\alpha}$ and $\sigma^{2} = \sigma^{2}_{\epsilon}$. The estimated random effects can also be easily visualized using functions from the [sjPlot](http://www.strengejacke.de/sjPlot/) package.
```{r, fig.height=12}
plot_model(ri.with.x, type="re", sort.est = "(Intercept)", y.offset = .4)
```
Function enhancements --
* Display the fixed effects by changing `type="est"`.
* Plot the slope of the fixed effect for each level of the random effect: `sjp.lmer(ri.with.x, type="ri.slope")` -- this is deprecated but works for now. Eventually I'll figure out how to get this plot out of `plot_model()`.
## More Random Effects
What if you think the slope along some $x$ should vary (such as over time)?
> This section has not been built yet. Reference this set of notes in the meantime: https://m-clark.github.io/mixed-models-with-R/random_slopes.html
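In the meantime, here is a minimal sketch of the `lme4` syntax: putting the covariate inside the random effects term lets its slope vary by county.
```{r, eval=FALSE}
# (1 + floor | county): each county gets its own intercept and its own
# slope for floor, drawn from a joint (bivariate) normal distribution.
ri.rs <- lmer(log_radon ~ floor + (1 + floor | county), data=radon)
```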
## Centering terms
* Sometimes it might be better to measure the effect of a specific level relative to the within-cluster average, rather than the overall average.
* The "frog pond" effect
- A student with an average IQ may be more confident and excel in a group of students with less than average IQ
- But they may be discouraged and not perform to their potential in a group of students with higher than average IQ.
* If the effect of a specific level of a factor depends on where that level stands relative to _other cluster members_, more so than relative to _all other participants_, the model should be adjusted as follows:
* Instead of using the actual value in the regression model you would...
    - calculate the cluster-specific average
    - calculate the difference between the individual value and the cluster average
    - include both the cluster average (macro) and the difference (micro) in the model.
### A generic `dplyr` approach to centering.
```
group.means <- data %>% group_by(cluster) %>% summarise(c.ave=mean(variable))
newdata <- data %>% left_join(group.means) %>% mutate(diff = variable - c.ave)
```
1. Create a new data set that I call `group.means` that
    - takes the original `data` set and then (`%>%`)...
    - groups it by the clustering variable so that all subsequent actions are done on each group
    - makes a new variable that I call `c.ave` that is the average of the `variable` of interest
2. I then take the original `data` set, and then
    - merge onto `data` this `group.means` data set, which contains only the clustering variable and the cluster average variable `c.ave`
    - toss in a `mutate` to create a new variable that is the `diff`erence between the `variable` of interest and the group average
    - and assign all of this to a `newdata` set (a single-step version is sketched below).
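The same computation in a single grouped `mutate`, using the same placeholder names (`data`, `cluster`, `variable`); `group_by()` + `mutate()` avoids the separate join:
```{r, eval=FALSE}
newdata <- data %>%
  group_by(cluster) %>%
  mutate(c.ave = mean(variable),         # cluster-specific (macro) average
         diff  = variable - c.ave) %>%   # individual (micro) deviation
  ungroup()
```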
## Specifying Correlation Structures
* **Independence:** In standard linear models, the assumption on the residuals, $\epsilon_{i} \sim \mathcal{N}(0, \sigma_{\epsilon}^{2})$, means that
    - the variance of each observation is $\sigma_{\epsilon}^{2}$
    - the covariance between two different observations is $0$
Consider $n=4$ observations, $y_{1}, \ldots , y_{4}$. Visually the covariance matrix between these four observations would look like this:
$$
\begin{array}{c|cccc}
& y_{1} & y_{2} & y_{3} & y_{4}\\
\hline
y_{1} & \sigma_{\epsilon}^{2} & 0 & 0 & 0\\
y_{2} & 0 & \sigma_{\epsilon}^{2} & 0 & 0\\
y_{3} & 0 & 0 & \sigma_{\epsilon}^{2} & 0\\
y_{4} & 0& 0 & 0 & \sigma_{\epsilon}^{2}
\end{array}
$$
We can also write the covariance matrix as $\sigma_{\epsilon}^{2}$ times the correlation matrix.
$$
\begin{bmatrix}
\sigma_{\epsilon}^{2} & 0 & 0 & 0\\
0 & \sigma_{\epsilon}^{2} & 0 & 0\\
0 & 0 & \sigma_{\epsilon}^{2} & 0\\
0& 0 & 0 & \sigma_{\epsilon}^{2}
\end{bmatrix}
=
\sigma_{\epsilon}^2
\begin{bmatrix}
1 & 0 & 0 & 0 \\
& 1 & 0 & 0 \\
& & 1 & 0 \\
& & & 1
\end{bmatrix}
$$
* **Compound Symmetry** or **Exchangeable:** The simplest covariance structure that includes correlated errors is compound symmetry (CS). Here we see correlated errors between individuals, and note that these correlations are presumed to be the same for each pair of responses, namely $\rho$.
$$
\sigma_{\epsilon}^{2}
\begin{bmatrix}
1 & \rho & \rho & \rho \\
& 1 & \rho & \rho \\
& & 1 & \rho \\
& & & 1
\end{bmatrix}
$$
* **Autoregressive:** Imagine that $y_{1}, \ldots , y_{4}$ were 4 different time points on the same person. The autoregressive (Lag 1) structure assumes the correlation is highest between adjacent time points and decreases systematically as the distance between time points increases. This structure is only applicable when the repeated measures are taken at evenly spaced time intervals.
$$
\sigma_{\epsilon}^{2}
\begin{bmatrix}
1 & \rho & \rho^{2} & \rho^{3} \\
& 1 & \rho & \rho^{2} \\
& & 1 & \rho \\
& & & 1
\end{bmatrix}
$$
* **Unstructured:** The Unstructured covariance structure (UN) is the most complex because it is estimating unique correlations for each pair of observations. It is not uncommon to find out that you are not able to use this structure simply because there are too many parameters to estimate.
$$
\begin{bmatrix}
\sigma_{1}^{2} & \rho_{12} & \rho_{13} & \rho_{14} \\
& \sigma_{2}^{2} & \rho_{23} & \rho_{24} \\
& & \sigma_{3}^{2} & \rho_{34} \\
& & & \sigma_{4}^{2}
\end{bmatrix}
$$
* **Random Intercept Model:** Let $y_{1}$ and $y_{2}$ be from group 1, and $y_{3}$ and $y_{4}$ be from group 2.
    - error terms between groups are uncorrelated (the groups are independent)
    - two different observations from the same group have covariance $\sigma_{\alpha}^{2}$
    - each observation carries the error associated with its own measurement plus the error due to its group, for a total variance of $\sigma_{\epsilon}^{2} + \sigma_{\alpha}^{2}$
$$
\left[
\begin{array}{cc|cc}
\sigma_{\epsilon}^{2} + \sigma_{\alpha}^{2} & \sigma_{\alpha}^{2} & 0 & 0\\
\sigma_{\alpha}^{2} & \sigma_{\epsilon}^{2} + \sigma_{\alpha}^{2} & 0 & 0\\
\hline
0 & 0 & \sigma_{\epsilon}^{2} + \sigma_{\alpha}^{2} & \sigma_{\alpha}^{2}\\
0 & 0 & \sigma_{\alpha}^{2} & \sigma_{\epsilon}^{2} + \sigma_{\alpha}^{2}
\end{array}
\right]
$$
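As a check on this structure, here is a minimal sketch of recovering the implied covariances from the fitted variance components of `fit_partpool`:
```{r}
vc <- as.data.frame(VarCorr(fit_partpool))
sigma2.alpha <- vc$vcov[vc$grp == "county"]    # between-county variance
sigma2.eps   <- vc$vcov[vc$grp == "Residual"]  # residual variance
# covariance of two homes in the same county, and variance of one home
c(same.county.cov = sigma2.alpha,
  total.variance  = sigma2.alpha + sigma2.eps)
```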
### Changing covariance structures in R
> This is very hard to do as the model becomes more complex.
> These types of models are where Bayesian statistics has a much easier time fitting models.
> This section has been commented out of the notes until it can be presented in a cleaner manner.
<!---
* Very hard (read - don't bother) to do this using `lmer()` from package `lme4`
* Can do this using `lme()` from package `nlme`. Syntax is similar.
* The standard classes of correlation structures available in the `nlme` package can be found in [[this help file]](https://stat.ethz.ch/R-manual/R-devel/library/nlme/html/corClasses.html)
```{r}
library(nlme)
model_lme_cs <- lme(log_radon ~ floor,
                    random = ~ 1 | county,
                    correlation = corCompSymm(value=0.159, form = ~ 1 | county),
                    data = radon)
```
Using a different covariance structure can have a large effect on the results.
* `lmer` using Identity: $\sigma^{2}_{\alpha} = 0.10, \sigma^{2}_{\epsilon} = 0.53$
* `nlme` using Identity: $\sigma^{2}_{\alpha} = 0.32^2 = 0.10, \sigma^{2}_{\epsilon} = 0.73^2 = 0.53$
* `nlme` using CS: $\sigma^{2}_{\alpha} = 0.14^2 = 0.02, \sigma^{2}_{\epsilon} = 0.78^2 = 0.61$
```{block2, type="rmdcaution"}
Mis-specifying the covariance structure can also have a large effect on the results.
```
#### Autoregressive
The AR1 structure specifies that the correlations between the repeated measurements of each subject decrease with the time lag, i.e., the distance in time between the measurements. The data is assumed to be in sort order, where $y_{it}$ and $y_{i(t+1)}$ are in sequential rows. To sort by time within id in dplyr looks like: `arrange(id, time)`.
* **Equally spaced items**
`lme(..., correlation = corAR1())`, which is equivalent to `lme(..., correlation = corAR1(form = ~ 1 | id))`
* **Not equally spaced items**
`lme(..., correlation = corAR1(form = ~ time | id))`, the `time` variable is used to determine how far apart the measurements are (the time lag).
* **Discrete vs continuous time**
`corAR1()` works with discrete time; `corCAR1()` works with continuous time.
Ref: https://stats.stackexchange.com/questions/367957/autoregression-in-nlme-undestanding-how-to-specify-corar1
--->
## Additional References
* Random effects ANOVA in SAS and R http://stla.github.io/stlapblog/posts/AV1R_SASandR.html
* ICCs in mixed models https://www.theanalysisfactor.com/the-intraclass-correlation-coefficient-in-mixed-models/
* Very nice introduction to mixed models in R https://m-clark.github.io/mixed-models-with-R/introduction.html
* Interesting blog by [Tristan Mahr](https://tjmahr.github.io/plotting-partial-pooling-in-mixed-effects-models/) about pooling and shrinkage.
* Derivation of the covariance structures http://www.bristol.ac.uk/cmm/learning/videos/correlation.html#matrix2
* Mixed models with R - Michael Clark https://m-clark.github.io/mixed-models-with-R/
### Lecture notes from other classes found on the interwebs
* [BIOL 202: Ecological Statistics from Stanford](https://fukamilab.github.io/BIO202/04-A-mixed-models.html) This is a graduate level class.
### Package Vignettes
* [lme4](https://cran.r-project.org/web/packages/lme4/vignettes/lmer.pdf) Functions to fit HLMs
* [sjPlot](http://strengejacke.de/sjPlot/sjt.lmer/) **Really** nice way of printing output as tables (and plots).