---
title: "Task 1 Report"
author: "Katie Willi and Caitlin Mothes"
date: "`r Sys.Date()`"
output:
  html_document:
    toc: true
    toc_float: true
    theme: paper
---
```{r setup, include=FALSE}
knitr::opts_chunk$set(echo = FALSE,
                      warning = FALSE,
                      error = FALSE,
                      message = FALSE)
# cache = TRUE
source("setup.R")
```
This report aims to update NPS on our progress related to **Task 1 in our Statement of Work:**
*Compile available data and information describing the hydrology, climate, and water supplies of priority NPS-identified parks (e.g., DEVA, JOTR, and MOJA).*
- *Examples: Hydrologic data may include stream and spring flow data, groundwater level data, and water quality data associated with specific sources of NPS water supplies or suitable analogs. This may also include information describing contributing basins (e.g., catchment areas, groundwater recharge areas, surface water/groundwater interaction, and output from water balance models). Climate data may include historical precipitation and temperature data and locations and monitored parameters of weather stations and SNOTEL sites. Water supply information may include types of supplies (i.e., surface water or groundwater), point-of-diversion locations, diversion amounts, NPS-observed supply challenges, etc.*
### Technical Approach
We have developed a suite of self-contained R functions (i.e., no local data downloads necessary) that pull relevant data sets for any specified park unit(s) in the contiguous US (CONUS) (see Table 1). This workflow is easily reproducible across all parks and pulls data directly within R; unless the user opts to save the data locally, it also saves storage space and avoids complicated file management. Moreover, all of our code and project tasks/timelines can be found in the ROSSyndicate GitHub [nps_water_vulnerability repository](https://github.com/rossyndicate/nps_water_vulnerability).
```{r}
readxl::read_xlsx("data/conus_functions.xlsx") %>%
mutate(Source = paste("<a href=\"", Link, "\">", Data_Source,"</a>")) %>%
dplyr::select(-c("Data_Source", "Link")) %>%
DT::datatable(filter = "none",
caption = 'Table 1: List of current functions developed to mine relevant CONUS-wide hydrologic and climatic data. Asterisks indicate data that is not publicly available yet.',
rownames = FALSE,
fillContainer = T,
escape = FALSE,
options = list(dom = 't',
pageLength = nrow(.),
scrollY = '300px'))
```
<br>
Because the end-goal of this project is to assess the water supply and climatic challenges of many park units, our primary focus has been to mine data sets that are available across CONUS so the bulk of our analyses can be reproducible and comparable across our assessments. Nationally available field sampling/instrumentation data sets that we are utilizing include the National Water Quality Portal, the National Water Information System, and NPS Aquarius. Modeled/interpolated data sets include gridded water balance models (Tercek et al., 2021), freshwater use projections (Warziniack et al., 2022), and groundwater recharge estimates (Wolock, 2003). Additional national-scale data sets include the National Land Cover Dataset's Land Use Change Index, infrastructure layers from the National Park Service, and the National Inventory of Dams.
Though our focus is centered around utilizing nationally available data sets, we would lose key information by not exploring more localized sources of data, including state-run water monitoring programs and state water rights data. Therefore, we have also pulled relevant state-specific data sets associated with our test parks: Death Valley National Park, Joshua Tree National Park, and Mojave National Preserve (see Table 2).
```{r}
readxl::read_xlsx("data/park_specific_functions.xlsx") %>%
mutate(Source = paste("<a href=\"", Link, "\">", Data_Source,"</a>")) %>%
dplyr::select(-c("Data_Source", "Link")) %>%
DT::datatable(filter = "none",
caption = "Table 2: List of current functions developed to mine park-/state-specific hydrologic data.",
rownames = FALSE,
fillContainer = TRUE,
escape = FALSE,
options = list(dom = 't',
scrollY = '300px'))
```
<br>
Data can be collected using two separate spatial extents (Figure 1):
1. Data within a specified distance of the park boundaries.
2. Data contained within the parks' surface water contributing areas.
Park unit boundaries are downloaded from the NPS-IRMA DataStore. For the purpose of this report we have been selecting data that is within 33.3 kilometers of each park unit. For surface water contributing areas, we use the R package `nhdplusTools` to select NHDPlus Version 2 features that are then used to delineate watersheds for all surface water features within each park unit.
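In code, the two extents are built roughly as follows (a sketch shown for illustration only; the function calls and arguments mirror those used later in this report, and the 0.3 decimal-degree buffer approximates 33.3 km):

```{r, eval=FALSE}
# 1. Park boundary plus a fixed-distance buffer
boundary <- getParkBoundary(park = c("DEVA", "JOTR", "MOJA"))
buffer   <- sf::st_buffer(boundary, dist = 0.3) # dist in decimal degrees; ~33.3 km

# 2. Surface water contributing areas, delineated from NHDPlus Version 2
#    features via the nhdplusTools package inside getWatersheds()
watershed <- getWatersheds(aoi = boundary)
```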
![**Figure 1**: Maps showing the two different approaches used for pulling data. Here, we are downloading water right point-of-diversion data associated with Death Valley National Park's park boundary (left) and its surface water contributing area (right).](data/1_report/DEVA_example.png){fig-align="center"}
<br>
#### The following maps and tables represent just some of the data that we have currently pulled for each focus park.
```{r, error = FALSE, warning = FALSE, message = FALSE, include=FALSE}
# import park units
park <- c("BRCA", "ZION")
park_boundary <- getParkBoundary(park = park)
park_buffer <- park_boundary %>%
sf::st_buffer(., dist = 0.3) # dist in dd; approximately 33.3 kilometers
# delineate surface water contributing areas per park
watershed <- getWatersheds(aoi = park_boundary, save = TRUE, path = 'data/1_report/watershed_utah.RDS')
watershed <- readRDS('data/1_report/watershed_utah.RDS') %>%
# dissolve for data pulling:
group_by(REGION) %>%
summarize()
watershed_split <- readRDS('data/1_report/watershed.RDS')
# put all of these areas of interest together for a single data pull
aoi_split <- bind_rows(watershed, park_buffer)
aoi <- aoi_split %>%
summarize()
```
<br>
<br>
# Water Rights Data
### Point-of-diversion Locations
We pulled water rights point-of-diversion locations for the states of California and Nevada using the `getPODCalifornia()` and `getPODNevada()` functions.
```{r}
# Download water right PODs from Nevada database:
# nv_water_rights <- getPODNevada(aoi = aoi, save = T, path = 'data/1_report/nv_water_rights.RDS')
nv_water_rights <- readRDS('data/1_report/nv_water_rights.RDS') %>%
st_join(select(aoi_split, UNIT_CODE), left = TRUE) %>%
distinct(., .keep_all=TRUE)
# Download water right PODs from California database:
# ca_water_rights <- getPODCalifornia(aoi = aoi, save = T, path = 'data/1_report/ca_water_rights.RDS')
ca_water_rights <- readRDS('data/1_report/ca_water_rights.RDS') %>%
st_join(select(aoi_split, UNIT_CODE), left = TRUE) %>%
distinct(., .keep_all=TRUE)
```
```{r}
mapview(nv_water_rights,
col.regions = c("#56B4E9","#F0E442"),
zcol = 'PARK_RIGHT',
alpha.regions = 1,
alpha = 0.75,
cex = 4,
layer.name = "PODs") +
mapview(ca_water_rights,
col.regions = c("#56B4E9","#F0E442"),
zcol = 'PARK_RIGHT',
alpha.regions = 1,
alpha = 0.75,
cex = 4,
legend = FALSE) +
mapview(park_buffer,
col.regions = "black",
alpha.regions = 0,
lwd = 1,
popup = FALSE,
legend = FALSE,
homebutton = FALSE,
label = FALSE) +
mapview(watershed_split,
col.regions="#56B4E9",
alpha.regions = 0.33,
lwd = 0,
popup = FALSE,
legend = FALSE,
homebutton = FALSE,
label = FALSE) +
mapview(park_boundary,
col.regions = "#74a089",
alpha.regions = 0,
lwd = 2,
popup = FALSE,
legend = F,
homebutton = FALSE,
label = FALSE)
```
<br>
```{r}
ca_water_rights %>%
sf::st_drop_geometry() %>%
arrange((UNIT_CODE)) %>%
# kableExtra::kable(format='html', caption = "California Water Rights") %>%
# kableExtra::kable_styling(position = 'center') %>%
# kableExtra::scroll_box(width = '900px', height = '500px')
DT::datatable(filter = "none",
caption = 'California Water Rights',
rownames = FALSE,
fillContainer = T,
options = list(
scrollY = '300px')
)
```
<br>
```{r}
nv_water_rights %>%
sf::st_drop_geometry() %>%
arrange((UNIT_CODE)) %>%
DT::datatable(filter = "none",
caption = 'Nevada Water Rights',
rownames = FALSE,
fillContainer = T,
options = list(
scrollY = '300px')
)
# kableExtra::kable(format='html', caption = "Nevada Water Rights") %>%
# kableExtra::kable_styling(position='center') %>%
# kableExtra::scroll_box(width = '900px', height = '500px')
rm(ca_water_rights,nv_water_rights)
```
<br>
<br>
### Water rights dockets
We can retrieve metadata for park water rights dockets in NPS-IRMA DataStore with `inventoryDataStore()`, and download them with `getDataStore()`.
```{r}
dockets <- inventoryDataStore(park = park) %>%
dplyr::filter(referenceType == "Docket") %>%
dplyr::select(-citation)
dockets %>%
#sf::st_drop_geometry() %>%
DT::datatable(filter = "none",
caption = 'Water Rights Dockets',
rownames = FALSE,
fillContainer = T,
options = list(autoWidth = TRUE,
columnDefs = list(list(width = '250px', targets = c(8,9))),
scrollY = '300px')
)
# kableExtra::kable(format='html', caption = "Water Rights Dockets") %>%
# kableExtra::kable_styling(position='center') %>%
# kableExtra::scroll_box(width = '900px', height = '500px')
rm(dockets)
```
<br>
<br>
# Water and Weather Data
### [USGS flow data](https://waterdata.usgs.gov/nwis)
We are able to inventory NWIS locations, as well as the type of data being collected there, within an area of interest using `inventoryNWIS()`. We can download the raw sensor data using `getNWIS()`.
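For reference, a comparable inventory-and-download can be performed directly with the USGS `dataRetrieval` package, which our wrapper functions are assumed to build on (a minimal standalone sketch, not part of our workflow; the bounding box below is arbitrary):

```{r, eval=FALSE}
library(dataRetrieval)

# Find NWIS sites reporting daily discharge (parameter code 00060)
# within a bounding box (west, south, east, north)
sites <- whatNWISsites(bBox = c(-117.5, 35.5, -115.5, 37.5),
                       parameterCd = "00060")

# Download daily discharge records for the first site found
flow <- readNWISdv(siteNumbers = sites$site_no[1],
                   parameterCd = "00060",
                   startDate = "1979-10-01",
                   endDate = "2022-09-30")
```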
```{r}
# pull metadata on USGS locations with flow data
# nwis_all <- inventoryNWIS(aoi = aoi, save = T, path = 'data/1_report/nwis_all.RDS')
nwis_all <- readRDS('data/1_report/nwis_all.RDS') %>%
mutate(site_type = ifelse(is.na(site_type), "Other", site_type)) %>%
filter(flow_data == "Flow") %>%
st_join(select(aoi_split, UNIT_CODE), left = TRUE) %>%
distinct(., .keep_all=TRUE) %>%
select(UNIT_CODE, site_no, site_name, site_type, parameter_type, date_range, n_obs)
```
```{r}
mapview(nwis_all,
col.regions = c("#E69F00", # light orange
"#009E73", # green
"#F0E442", # yellow
"#56B4E9", # light blue
"#0072B2", # dark blue
"#D55E00"), # orange
zcol = 'site_type',
alpha.regions = 0.5,
alpha = 0.4,
cex = 4,
layer.name = "NWIS Flow Sites")+
mapview(park_buffer,
col.regions = "black",
alpha.regions = 0,
lwd = 1,
popup = FALSE,
legend = FALSE,
homebutton = FALSE,
label = FALSE) +
mapview(watershed_split,
col.regions="#56B4E9",
alpha.regions = 0.5,
lwd = 0,
popup = FALSE,
legend = FALSE,
homebutton = FALSE,
label = FALSE) +
mapview(park_boundary,
col.regions = "#74a089",
alpha.regions = 0,
lwd = 2,
popup = FALSE,
legend = F,
homebutton = FALSE,
label = FALSE)
```
<br>
```{r}
nwis_all %>%
sf::st_drop_geometry() %>%
arrange((UNIT_CODE)) %>%
DT::datatable(filter = "none",
caption = 'NWIS Flow Data',
rownames = FALSE,
fillContainer = T,
options = list(
scrollY = '300px')
)
# kableExtra::kable(format='html', caption = "NWIS Flow Data") %>%
# kableExtra::kable_styling(position='center') %>%
# kableExtra::scroll_box(width='900px',height='500px')
rm(nwis_all)
```
<br>
<br>
### [NPS Aquarius data](https://irma.nps.gov/AQWebPortal/Data/Map)
We can download data within NPS-Aquarius for any park of interest using `getAquarius()`.
```{r}
# aquarius <- getAquarius(park = park, save = T, path = 'data/1_report/all_aquarius.RDS')
aquarius <- readRDS('data/1_report/all_aquarius.RDS') %>%
distinct(.keep_all=TRUE) %>%
mutate(timestamp = lubridate::ymd_hms(timestamp),
year = lubridate::year(timestamp)) %>%
group_by(siteID, Location.Type, parameter, Long, Lat, EPSG) %>%
summarize(n_obs = n(),
range = ifelse(min(year) == max(year),
paste0(min(year)),
paste0(min(year), "-", max(year)))) %>%
mutate(flow_data = ifelse(grepl("velocity|WaterPressure|discharge|flow|height|level|stage|depthtowater",
parameter, ignore.case=T) &! grepl("sediment", parameter, ignore.case = T), "Flow", "Other")) %>%
dplyr::filter(flow_data == "Flow") %>%
select(siteID, Location.Type, parameter, n_obs, range, Lat, Long) %>%
sf::st_as_sf(coords = c("Long", "Lat"), crs = 4326) %>% # sf expects x (longitude), then y (latitude)
sf::st_transform(st_crs(aoi)) %>%
st_join(select(aoi_split, UNIT_CODE), left = TRUE) %>%
distinct(., .keep_all=TRUE)
```
```{r}
mapview(aquarius,
zcol = 'Location.Type',
col.regions = c("#E69F00", # light orange
"#56B4E9", # light blue
"#009E73", # green
"#F0E442"), # yellow
layer.name = "Aquarius Sites") +
mapview(park_buffer,
col.regions = "black",
alpha.regions = 0,
lwd = 1,
popup = FALSE,
legend = FALSE,
homebutton = FALSE,
label = FALSE) +
mapview(watershed_split,
col.regions="#56B4E9",
alpha.regions = 0.33,
lwd = 0,
popup = FALSE,
legend = FALSE,
homebutton = FALSE,
label = FALSE) +
mapview(park_boundary,
col.regions = "#74a089",
alpha.regions = 0,
lwd = 2,
popup = FALSE,
legend = F,
homebutton = FALSE,
label = FALSE)
```
<br>
```{r}
aquarius %>%
sf::st_drop_geometry() %>%
arrange((UNIT_CODE)) %>%
DT::datatable(filter = "none",
caption = 'Aquarius Continuous Data',
rownames = FALSE,
fillContainer = T,
options = list(
scrollY = '300px')
)
# kableExtra::kable(format='html', caption = "Aquarius Continuous Data") %>%
# kableExtra::kable_styling(position='center') %>%
# kableExtra::scroll_box(width='900px',height='500px')
rm(aquarius)
```
<br>
<br>
### NOAA weather stations
We can download NOAA weather station data within a specified area of interest and selected time period (here, from 1979-2022) using `getNOAA()`.
```{r}
# noaa_all <- getNOAA(aoi = aoi, startDate = "1979-10-01", endDate = "2022-09-30",
#                     save = T, path = "data/1_report/noaa_all.RDS")
noaa_all <- readRDS('data/1_report/noaa_all.RDS') %>%
mutate(year = lubridate::year(lubridate::as_date(date))) %>%
group_by(id) %>%
summarize(range_1980 = ifelse(min(year) == max(year),
paste0(min(year)),
paste0(min(year), "-", max(year)))) %>%
st_join(select(aoi_split, UNIT_CODE), left = TRUE) %>%
filter(!is.na(UNIT_CODE))
```
```{r}
mapview(noaa_all,
col.regions = "#CC79A7",
cex = 4,
layer.name = "NOAA Weather Stations") +
mapview(park_buffer,
col.regions = "black",
alpha.regions = 0,
lwd = 1,
popup = FALSE,
legend = FALSE,
homebutton = FALSE,
label = FALSE) +
mapview(watershed_split,
col.regions="#56B4E9",
alpha.regions = 0.33,
lwd = 0,
popup = FALSE,
legend = FALSE,
homebutton = FALSE,
label = FALSE) +
mapview(park_boundary,
col.regions = "#74a089",
alpha.regions = 0,
lwd = 2,
popup = FALSE,
legend = F,
homebutton = FALSE,
label = FALSE)
```
<br>
```{r}
noaa_all %>%
sf::st_drop_geometry() %>%
arrange((UNIT_CODE)) %>%
DT::datatable(filter = "none",
caption = 'NOAA Weather Stations',
rownames = FALSE,
fillContainer = T,
options = list(
scrollY = '300px')
)
# kableExtra::kable(format='html', caption = "NOAA Weather Stations") %>%
# kableExtra::kable_styling(position='center') %>%
# kableExtra::scroll_box(width='900px',height='500px')
rm(noaa_all)
```
<br>
<br>
# Gridded/Interpolated Data
### [Principal aquifers](https://water.usgs.gov/GIS/metadata/usgswrd/XML/aquifers_us.xml) in park units
We have explored the option of pulling data associated with *Principal Aquifers of the 48 Conterminous United States, Hawaii, Puerto Rico, and the U.S. Virgin Islands* ([USGS, 2003](https://water.usgs.gov/GIS/metadata/usgswrd/XML/aquifers_us.xml#stdorder)); namely, selecting only aquifers that intersect park boundaries. Because many of these aquifers extend across large swaths of the US, we embedded an option to clip aquifer extents by a chosen distance outside of the park boundary (here, clipping associated aquifer extents to only 33.3 km around the park boundary).
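The clipping option reduces to a buffer-and-intersection in `sf`; the following is a hedged sketch of what `getAquifers()` is assumed to do internally (object names here are illustrative):

```{r, eval=FALSE}
library(sf)

# Buffer the park boundary by the chosen distance (decimal degrees here)
clip_zone <- st_buffer(park_boundary, dist = 0.3)

# Keep only aquifer polygons that intersect the park itself,
# then trim their extents to the buffered zone
aquifers_clipped <- aquifers %>%
  st_filter(park_boundary) %>%
  st_intersection(clip_zone)
```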
```{r, message=FALSE, warning=FALSE, error=FALSE}
aquifers <- getAquifers(aoi = park_boundary, dist = 0.3)
mapview(aquifers,
zcol = "OBJECTID_1",
col.regions = wesanderson::wes_palette("Zissou1", type = "continuous"),
layer.name= "Aquifers within Park Units",
popup = TRUE,
legend = F) +
mapview(park_boundary,
col.regions = "black",
alpha.regions = 0,
lwd = 2,
popup = FALSE,
legend = F,
homebutton = FALSE,
label = FALSE)
rm(aquifers)
```
<br>
<br>
### [Mean Annual Ground-Water Recharge Index](https://water.usgs.gov/GIS/metadata/usgswrd/XML/rech48grd.xml)
We can import the mean annual ground-water recharge index grid (Wolock, 2003) and clip it to the area(s) of interest using `getGWRecharge()`.
```{r}
recharge <- getGWRecharge(aoi = aoi, dist = 0)
mapview(recharge[[1]],
col.regions = c("#F21A00","#E1AF00","#EBCC2A","#78B7C5","#3B9AB2"),
layer.name= "Monthly Groundwater Recharge Index") +
mapview(park_buffer,
col.regions = "black",
alpha.regions = 0,
lwd = 1,
popup = FALSE,
legend = FALSE,
homebutton = FALSE,
label = FALSE) +
mapview(watershed_split,
col.regions="black",
alpha.regions = 0,
lwd = 1,
popup = FALSE,
legend = FALSE,
homebutton = FALSE,
label = FALSE) +
mapview(park_boundary,
col.regions = "#74a089",
alpha.regions = 0,
lwd = 2,
popup = FALSE,
legend = F,
homebutton = FALSE,
label = FALSE)
rm(recharge)
```
<br>
<br>
### Layers from the [NPS Gridded Water Balance Model](http://screenedcleanedsummaries.s3-website-us-west-2.amazonaws.com/)
There are \~1,700 rasters associated with this database. Here we are showing just one, though all of the summary rasters (or any combination thereof) can be downloaded with the `getWBM()` function. However, the workflows for customizing outputs from the [*What water balance product do you need?*](https://screenedcleanedsummaries.s3.us-west-2.amazonaws.com/which_water_balance.html) data set require NPS VPN access.
```{r}
# getWBM(aoi = aoi, type = "PET")
pet_historical <- raster::raster('data/1_report/historical_pet_1980_1999.tif')
mapview(pet_historical,
col.regions = c("#F21A00","#E1AF00","#EBCC2A","#78B7C5","#3B9AB2"),
layer.name= "PET (1980-1999)") +
mapview(park_buffer,
col.regions = "black",
alpha.regions = 0,
lwd = 1,
popup = FALSE,
legend = FALSE,
homebutton = FALSE,
label = FALSE) +
mapview(watershed_split,
col.regions="black",
alpha.regions = 0,
lwd = 1,
popup = FALSE,
legend = FALSE,
homebutton = FALSE,
label = FALSE) +
mapview(park_boundary,
col.regions = "#74a089",
alpha.regions = 0,
lwd = 2,
popup = FALSE,
legend = F,
homebutton = FALSE,
label = FALSE)
rm(pet_historical)
```
<br>
<br>
# Miscellaneous Data
### Trends in park visitation
We can import park visitation data using `getVisitation()`.
```{r}
# download visitation data from NPS-Irma STATS
visitor_stats <- getVisitation(type = 'park', units = park,
startYear = 1900, endYear = 2022) %>%
group_by(UnitCode, Year) %>%
summarize(Annual = sum(RecreationVisitors))
```
```{r}
plotly::ggplotly(ggplot() +
geom_point(data = visitor_stats, aes(x = Year, y = Annual, color = UnitCode)) +
geom_line(data = visitor_stats, aes(x = Year, y = Annual, color = UnitCode)) +
scale_color_manual(values = c("#56B4E9", # light blue
"#009E73", # green
"#0072B2")) + # dark blue
theme_bw() +
labs(color="Park Unit") +
ylab("Annual Visitors") +
scale_y_continuous(labels = scales::comma))
rm(visitor_stats)
```