Commit
update catalog
github-actions committed Oct 27, 2024
1 parent 1b13c2a commit cfdc553
Showing 176 changed files with 1,288 additions and 1,118 deletions.
30 changes: 15 additions & 15 deletions catalog/summaries/Aquatics/Daily_Chlorophyll_a/collection.json
@@ -8,21 +8,6 @@
],
"type": "Collection",
"links": [
{
"rel": "item",
"type": "application/json",
"href": "./models/tg_tbats.json"
},
{
"rel": "item",
"type": "application/json",
"href": "./models/tg_temp_lm.json"
},
{
"rel": "item",
"type": "application/json",
"href": "./models/tg_temp_lm_all_sites.json"
},
{
"rel": "item",
"type": "application/json",
@@ -113,6 +98,21 @@
"type": "application/json",
"href": "./models/tg_randfor.json"
},
{
"rel": "item",
"type": "application/json",
"href": "./models/tg_tbats.json"
},
{
"rel": "item",
"type": "application/json",
"href": "./models/tg_temp_lm.json"
},
{
"rel": "item",
"type": "application/json",
"href": "./models/tg_temp_lm_all_sites.json"
},
{
"rel": "parent",
"type": "application/json",
@@ -17,7 +17,8 @@
"properties": {
"title": "USGSHABs1",
"description": "All summaries for the Daily_Chlorophyll_a variable for the USGSHABs1 model. Information for the model is provided as follows: NA.\n The model predicts this variable at the following sites: BLWA, TOMB, FLNT.\n Summaries are the forecasts statistics of the raw forecasts (i.e., mean, median, confidence intervals)",
"datetime": "2024-10-23T00:00:00Z",
"datetime": "2024-02-05T00:00:00Z",
"updated": "2024-07-02T00:00:00Z",
"start_datetime": "2023-11-12T00:00:00Z",
"end_datetime": "2024-03-09T00:00:00Z",
"providers": [
@@ -197,7 +198,7 @@
"type": "application/x-parquet",
"title": "Database Access for Daily Chlorophyll_a",
"href": "s3://anonymous@/project_id=neon4cast/duration=P1D/variable=chla/model_id=USGSHABs1?endpoint_override=sdsc.osn.xsede.org",
"description": "Use `arrow` for remote access to the database. This R code will return results for this variable and model combination.\n\n### R\n\n```{r}\n# Use code below\n\nall_results <- arrow::open_dataset(\"s3://anonymous@/project_id=neon4cast/duration=P1D/variable=chla/model_id=USGSHABs1?endpoint_override=sdsc.osn.xsede.org\")\ndf <- all_results |> dplyr::collect()\n\n```\n \n\nYou can use dplyr operations before calling `dplyr::collect()` to `summarise`, `select` columns, and/or `filter` rows prior to pulling the data into a local `data.frame`. Reducing the data that is pulled locally will speed up the data download speed and reduce your memory usage.\n\n\n"
"description": "Use the `R` or `Python` code below for remote access to the database. The code will return results for this variable and model combination.\n\n### R\n\n```{r}\n# Use code below\n\nall_results <- arrow::open_dataset(\"s3://anonymous@/project_id=neon4cast/duration=P1D/variable=chla/model_id=USGSHABs1?endpoint_override=sdsc.osn.xsede.org\")\ndf <- all_results |> dplyr::collect()\n```\n\nYou can use dplyr operations before calling `dplyr::collect()` to `summarise`, `select` columns, and/or `filter` rows prior to pulling the data into a local `data.frame`. Reducing the data that is pulled locally will speed up the download and reduce your memory usage.\n\n### Python\n\n```{python}\n# Use code below\n\nimport ibis\n\ncon = ibis.duckdb.connect()\n\ncon.raw_sql('''\nCREATE OR REPLACE SECRET secret (\n    TYPE S3,\n    ENDPOINT 'sdsc.osn.xsede.org',\n    URL_STYLE 'path'\n);\n''')\n\npath = \"s3:///project_id=neon4cast/duration=P1D/variable=chla/model_id=USGSHABs1\"\ncon.read_parquet(path + \"/**\")\n```"
}
}
}
@@ -23,7 +23,8 @@
"properties": {
"title": "cb_prophet",
"description": "All summaries for the Daily_Chlorophyll_a variable for the cb_prophet model. Information for the model is provided as follows: The Prophet model is an empirical model, specifically a non-linear regression model that includes\nseasonality effects (Taylor & Letham, 2018). The model relies on Bayesian estimation with an additive\nwhite noise error term.\n The model predicts this variable at the following sites: BARC, BLWA, CRAM, FLNT, LIRO, PRLA, PRPO, SUGG, TOMB.\n Summaries are the forecasts statistics of the raw forecasts (i.e., mean, median, confidence intervals)",
"datetime": "2024-10-23T00:00:00Z",
"datetime": "2024-02-06T00:00:00Z",
"updated": "2024-02-07T00:00:00Z",
"start_datetime": "2023-11-14T00:00:00Z",
"end_datetime": "2024-03-10T00:00:00Z",
"providers": [
@@ -209,7 +210,7 @@
"type": "application/x-parquet",
"title": "Database Access for Daily Chlorophyll_a",
"href": "s3://anonymous@/project_id=neon4cast/duration=P1D/variable=chla/model_id=cb_prophet?endpoint_override=sdsc.osn.xsede.org",
"description": "Use `arrow` for remote access to the database. This R code will return results for this variable and model combination.\n\n### R\n\n```{r}\n# Use code below\n\nall_results <- arrow::open_dataset(\"s3://anonymous@/project_id=neon4cast/duration=P1D/variable=chla/model_id=cb_prophet?endpoint_override=sdsc.osn.xsede.org\")\ndf <- all_results |> dplyr::collect()\n\n```\n \n\nYou can use dplyr operations before calling `dplyr::collect()` to `summarise`, `select` columns, and/or `filter` rows prior to pulling the data into a local `data.frame`. Reducing the data that is pulled locally will speed up the data download speed and reduce your memory usage.\n\n\n"
"description": "Use the `R` or `Python` code below for remote access to the database. The code will return results for this variable and model combination.\n\n### R\n\n```{r}\n# Use code below\n\nall_results <- arrow::open_dataset(\"s3://anonymous@/project_id=neon4cast/duration=P1D/variable=chla/model_id=cb_prophet?endpoint_override=sdsc.osn.xsede.org\")\ndf <- all_results |> dplyr::collect()\n```\n\nYou can use dplyr operations before calling `dplyr::collect()` to `summarise`, `select` columns, and/or `filter` rows prior to pulling the data into a local `data.frame`. Reducing the data that is pulled locally will speed up the download and reduce your memory usage.\n\n### Python\n\n```{python}\n# Use code below\n\nimport ibis\n\ncon = ibis.duckdb.connect()\n\ncon.raw_sql('''\nCREATE OR REPLACE SECRET secret (\n    TYPE S3,\n    ENDPOINT 'sdsc.osn.xsede.org',\n    URL_STYLE 'path'\n);\n''')\n\npath = \"s3:///project_id=neon4cast/duration=P1D/variable=chla/model_id=cb_prophet\"\ncon.read_parquet(path + \"/**\")\n```"
}
}
}
@@ -24,7 +24,8 @@
"properties": {
"title": "climatology",
"description": "All summaries for the Daily_Chlorophyll_a variable for the climatology model. Information for the model is provided as follows: Historical DOY mean and sd. Assumes normal distribution.\n The model predicts this variable at the following sites: BARC, BLWA, FLNT, SUGG, TOMB, CRAM, LIRO, PRPO, PRLA, TOOK, USGS-01427510, USGS-01463500, USGS-05543010, USGS-05553700, USGS-05558300, USGS-05586300, USGS-14181500, USGS-14211010, USGS-14211720.\n Summaries are the forecasts statistics of the raw forecasts (i.e., mean, median, confidence intervals)",
"datetime": "2024-10-23T00:00:00Z",
"datetime": "2024-08-22T00:00:00Z",
"updated": "2024-08-23T00:00:00Z",
"start_datetime": "2023-01-02T00:00:00Z",
"end_datetime": "2024-09-26T00:00:00Z",
"providers": [
@@ -220,7 +221,7 @@
"type": "application/x-parquet",
"title": "Database Access for Daily Chlorophyll_a",
"href": "s3://anonymous@/project_id=neon4cast/duration=P1D/variable=chla/model_id=climatology?endpoint_override=sdsc.osn.xsede.org",
"description": "Use `arrow` for remote access to the database. This R code will return results for this variable and model combination.\n\n### R\n\n```{r}\n# Use code below\n\nall_results <- arrow::open_dataset(\"s3://anonymous@/project_id=neon4cast/duration=P1D/variable=chla/model_id=climatology?endpoint_override=sdsc.osn.xsede.org\")\ndf <- all_results |> dplyr::collect()\n\n```\n \n\nYou can use dplyr operations before calling `dplyr::collect()` to `summarise`, `select` columns, and/or `filter` rows prior to pulling the data into a local `data.frame`. Reducing the data that is pulled locally will speed up the data download speed and reduce your memory usage.\n\n\n"
"description": "Use the `R` or `Python` code below for remote access to the database. The code will return results for this variable and model combination.\n\n### R\n\n```{r}\n# Use code below\n\nall_results <- arrow::open_dataset(\"s3://anonymous@/project_id=neon4cast/duration=P1D/variable=chla/model_id=climatology?endpoint_override=sdsc.osn.xsede.org\")\ndf <- all_results |> dplyr::collect()\n```\n\nYou can use dplyr operations before calling `dplyr::collect()` to `summarise`, `select` columns, and/or `filter` rows prior to pulling the data into a local `data.frame`. Reducing the data that is pulled locally will speed up the download and reduce your memory usage.\n\n### Python\n\n```{python}\n# Use code below\n\nimport ibis\n\ncon = ibis.duckdb.connect()\n\ncon.raw_sql('''\nCREATE OR REPLACE SECRET secret (\n    TYPE S3,\n    ENDPOINT 'sdsc.osn.xsede.org',\n    URL_STYLE 'path'\n);\n''')\n\npath = \"s3:///project_id=neon4cast/duration=P1D/variable=chla/model_id=climatology\"\ncon.read_parquet(path + \"/**\")\n```"
}
}
}
@@ -24,7 +24,8 @@
"properties": {
"title": "persistenceRW",
"description": "All summaries for the Daily_Chlorophyll_a variable for the persistenceRW model. Information for the model is provided as follows: Random walk from the fable package with ensembles used to represent uncertainty.\n The model predicts this variable at the following sites: LIRO, PRLA, PRPO, SUGG, TOMB, TOOK, BARC, BLWA, CRAM, FLNT.\n Summaries are the forecasts statistics of the raw forecasts (i.e., mean, median, confidence intervals)",
"datetime": "2024-10-23T00:00:00Z",
"datetime": "2024-08-22T00:00:00Z",
"updated": "2024-08-23T00:00:00Z",
"start_datetime": "2023-11-15T00:00:00Z",
"end_datetime": "2024-09-25T00:00:00Z",
"providers": [
@@ -211,7 +212,7 @@
"type": "application/x-parquet",
"title": "Database Access for Daily Chlorophyll_a",
"href": "s3://anonymous@/project_id=neon4cast/duration=P1D/variable=chla/model_id=persistenceRW?endpoint_override=sdsc.osn.xsede.org",
"description": "Use `arrow` for remote access to the database. This R code will return results for this variable and model combination.\n\n### R\n\n```{r}\n# Use code below\n\nall_results <- arrow::open_dataset(\"s3://anonymous@/project_id=neon4cast/duration=P1D/variable=chla/model_id=persistenceRW?endpoint_override=sdsc.osn.xsede.org\")\ndf <- all_results |> dplyr::collect()\n\n```\n \n\nYou can use dplyr operations before calling `dplyr::collect()` to `summarise`, `select` columns, and/or `filter` rows prior to pulling the data into a local `data.frame`. Reducing the data that is pulled locally will speed up the data download speed and reduce your memory usage.\n\n\n"
"description": "Use the `R` or `Python` code below for remote access to the database. The code will return results for this variable and model combination.\n\n### R\n\n```{r}\n# Use code below\n\nall_results <- arrow::open_dataset(\"s3://anonymous@/project_id=neon4cast/duration=P1D/variable=chla/model_id=persistenceRW?endpoint_override=sdsc.osn.xsede.org\")\ndf <- all_results |> dplyr::collect()\n```\n\nYou can use dplyr operations before calling `dplyr::collect()` to `summarise`, `select` columns, and/or `filter` rows prior to pulling the data into a local `data.frame`. Reducing the data that is pulled locally will speed up the download and reduce your memory usage.\n\n### Python\n\n```{python}\n# Use code below\n\nimport ibis\n\ncon = ibis.duckdb.connect()\n\ncon.raw_sql('''\nCREATE OR REPLACE SECRET secret (\n    TYPE S3,\n    ENDPOINT 'sdsc.osn.xsede.org',\n    URL_STYLE 'path'\n);\n''')\n\npath = \"s3:///project_id=neon4cast/duration=P1D/variable=chla/model_id=persistenceRW\"\ncon.read_parquet(path + \"/**\")\n```"
}
}
}
@@ -21,7 +21,8 @@
"properties": {
"title": "procBlanchardMonod",
"description": "All summaries for the Daily_Chlorophyll_a variable for the procBlanchardMonod model. Information for the model is provided as follows: NA.\n The model predicts this variable at the following sites: BARC, CRAM, LIRO, PRLA, PRPO, SUGG, TOOK.\n Summaries are the forecasts statistics of the raw forecasts (i.e., mean, median, confidence intervals)",
"datetime": "2024-10-23T00:00:00Z",
"datetime": "2024-02-05T00:00:00Z",
"updated": "2024-02-07T00:00:00Z",
"start_datetime": "2023-11-13T00:00:00Z",
"end_datetime": "2024-03-06T00:00:00Z",
"providers": [
@@ -205,7 +206,7 @@
"type": "application/x-parquet",
"title": "Database Access for Daily Chlorophyll_a",
"href": "s3://anonymous@/project_id=neon4cast/duration=P1D/variable=chla/model_id=procBlanchardMonod?endpoint_override=sdsc.osn.xsede.org",
"description": "Use `arrow` for remote access to the database. This R code will return results for this variable and model combination.\n\n### R\n\n```{r}\n# Use code below\n\nall_results <- arrow::open_dataset(\"s3://anonymous@/project_id=neon4cast/duration=P1D/variable=chla/model_id=procBlanchardMonod?endpoint_override=sdsc.osn.xsede.org\")\ndf <- all_results |> dplyr::collect()\n\n```\n \n\nYou can use dplyr operations before calling `dplyr::collect()` to `summarise`, `select` columns, and/or `filter` rows prior to pulling the data into a local `data.frame`. Reducing the data that is pulled locally will speed up the data download speed and reduce your memory usage.\n\n\n"
"description": "Use the `R` or `Python` code below for remote access to the database. The code will return results for this variable and model combination.\n\n### R\n\n```{r}\n# Use code below\n\nall_results <- arrow::open_dataset(\"s3://anonymous@/project_id=neon4cast/duration=P1D/variable=chla/model_id=procBlanchardMonod?endpoint_override=sdsc.osn.xsede.org\")\ndf <- all_results |> dplyr::collect()\n```\n\nYou can use dplyr operations before calling `dplyr::collect()` to `summarise`, `select` columns, and/or `filter` rows prior to pulling the data into a local `data.frame`. Reducing the data that is pulled locally will speed up the download and reduce your memory usage.\n\n### Python\n\n```{python}\n# Use code below\n\nimport ibis\n\ncon = ibis.duckdb.connect()\n\ncon.raw_sql('''\nCREATE OR REPLACE SECRET secret (\n    TYPE S3,\n    ENDPOINT 'sdsc.osn.xsede.org',\n    URL_STYLE 'path'\n);\n''')\n\npath = \"s3:///project_id=neon4cast/duration=P1D/variable=chla/model_id=procBlanchardMonod\"\ncon.read_parquet(path + \"/**\")\n```"
}
}
}
@@ -21,7 +21,8 @@
"properties": {
"title": "procCTMIMonod",
"description": "All summaries for the Daily_Chlorophyll_a variable for the procCTMIMonod model. Information for the model is provided as follows: NA.\n The model predicts this variable at the following sites: BARC, CRAM, LIRO, PRLA, PRPO, SUGG, TOOK.\n Summaries are the forecasts statistics of the raw forecasts (i.e., mean, median, confidence intervals)",
"datetime": "2024-10-23T00:00:00Z",
"datetime": "2024-02-05T00:00:00Z",
"updated": "2024-02-07T00:00:00Z",
"start_datetime": "2023-11-13T00:00:00Z",
"end_datetime": "2024-03-06T00:00:00Z",
"providers": [
@@ -205,7 +206,7 @@
"type": "application/x-parquet",
"title": "Database Access for Daily Chlorophyll_a",
"href": "s3://anonymous@/project_id=neon4cast/duration=P1D/variable=chla/model_id=procCTMIMonod?endpoint_override=sdsc.osn.xsede.org",
"description": "Use `arrow` for remote access to the database. This R code will return results for this variable and model combination.\n\n### R\n\n```{r}\n# Use code below\n\nall_results <- arrow::open_dataset(\"s3://anonymous@/project_id=neon4cast/duration=P1D/variable=chla/model_id=procCTMIMonod?endpoint_override=sdsc.osn.xsede.org\")\ndf <- all_results |> dplyr::collect()\n\n```\n \n\nYou can use dplyr operations before calling `dplyr::collect()` to `summarise`, `select` columns, and/or `filter` rows prior to pulling the data into a local `data.frame`. Reducing the data that is pulled locally will speed up the data download speed and reduce your memory usage.\n\n\n"
"description": "Use the `R` or `Python` code below for remote access to the database. The code will return results for this variable and model combination.\n\n### R\n\n```{r}\n# Use code below\n\nall_results <- arrow::open_dataset(\"s3://anonymous@/project_id=neon4cast/duration=P1D/variable=chla/model_id=procCTMIMonod?endpoint_override=sdsc.osn.xsede.org\")\ndf <- all_results |> dplyr::collect()\n```\n\nYou can use dplyr operations before calling `dplyr::collect()` to `summarise`, `select` columns, and/or `filter` rows prior to pulling the data into a local `data.frame`. Reducing the data that is pulled locally will speed up the download and reduce your memory usage.\n\n### Python\n\n```{python}\n# Use code below\n\nimport ibis\n\ncon = ibis.duckdb.connect()\n\ncon.raw_sql('''\nCREATE OR REPLACE SECRET secret (\n    TYPE S3,\n    ENDPOINT 'sdsc.osn.xsede.org',\n    URL_STYLE 'path'\n);\n''')\n\npath = \"s3:///project_id=neon4cast/duration=P1D/variable=chla/model_id=procCTMIMonod\"\ncon.read_parquet(path + \"/**\")\n```"
}
}
}
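Every `href` in the changed files above follows the same hive-style partition layout (`project_id=…/duration=…/variable=…/model_id=…`), which is what lets `read_parquet(path + "/**")` select one model's data. As an illustration only, a small hypothetical helper (not part of the catalog tooling) that assembles such a path:

```python
def partition_path(project_id: str, duration: str, variable: str, model_id: str) -> str:
    """Build a hive-style partition path like those used in this catalog's hrefs."""
    parts = [
        f"project_id={project_id}",
        f"duration={duration}",
        f"variable={variable}",
        f"model_id={model_id}",
    ]
    # The catalog's Python snippets use a bare "s3:///" prefix, with the
    # endpoint supplied separately via the DuckDB S3 secret.
    return "s3:///" + "/".join(parts)

print(partition_path("neon4cast", "P1D", "chla", "climatology"))
# → s3:///project_id=neon4cast/duration=P1D/variable=chla/model_id=climatology
```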