Commit

update catalog
github-actions committed Oct 26, 2024
1 parent d12d413 commit 1b13c2a
Showing 66 changed files with 651 additions and 592 deletions.
10 changes: 5 additions & 5 deletions catalog/scores/Aquatics/Daily_Chlorophyll_a/collection.json
@@ -11,27 +11,27 @@
{
"rel": "item",
"type": "application/json",
"href": "./models/climatology.json"
"href": "./models/tg_tbats.json"
},
{
"rel": "item",
"type": "application/json",
"href": "./models/persistenceRW.json"
"href": "./models/climatology.json"
},
{
"rel": "item",
"type": "application/json",
"href": "./models/tg_arima.json"
"href": "./models/persistenceRW.json"
},
{
"rel": "item",
"type": "application/json",
"href": "./models/tg_ets.json"
"href": "./models/tg_arima.json"
},
{
"rel": "item",
"type": "application/json",
"href": "./models/tg_tbats.json"
"href": "./models/tg_ets.json"
},
{
"rel": "parent",
@@ -24,7 +24,8 @@
"properties": {
"title": "climatology",
"description": "All scores for the Daily_Chlorophyll_a variable for the climatology model. Information for the model is provided as follows: Historical DOY mean and sd. Assumes normal distribution.\n The model predicts this variable at the following sites: BARC, BLWA, CRAM, FLNT, LIRO, PRPO, SUGG, TOMB, PRLA, TOOK.\n Scores are metrics that describe how well forecasts compare to observations. The scores catalog includes summaries of the forecasts (i.e., mean, median, confidence intervals), matched observations (if available), and scores (metrics of how well the model distribution compares to observations)",
"datetime": "2024-10-23T00:00:00Z",
"datetime": "2024-07-03",
"updated": "2024-07-04",
"start_datetime": "2024-06-13T00:00:00Z",
"end_datetime": "2024-08-07T00:00:00Z",
"providers": [
@@ -226,7 +227,7 @@
"type": "application/x-parquet",
"title": "Database Access for Daily Chlorophyll_a",
"href": "s3://anonymous@bio230014-bucket01/challenges/scores/bundled-parquet//project_id=neon4cast/duration=P1D/variable=chla/model_id=climatology?endpoint_override=sdsc.osn.xsede.org",
"description": "Use `arrow` for remote access to the database. This R code will return results for this variable and model combination.\n\n### R\n\n```{r}\n# Use code below\n\nall_results <- arrow::open_dataset(\"s3://anonymous@bio230014-bucket01/challenges/scores/bundled-parquet//project_id=neon4cast/duration=P1D/variable=chla/model_id=climatology?endpoint_override=sdsc.osn.xsede.org\")\ndf <- all_results |> dplyr::collect()\n\n```\n \n\nYou can use dplyr operations before calling `dplyr::collect()` to `summarise`, `select` columns, and/or `filter` rows prior to pulling the data into a local `data.frame`. Reducing the data that is pulled locally will speed up the data download speed and reduce your memory usage.\n\n\n"
"description": "Use `R` or `Python` code for remote access to the database. This code will return results for this variable and model combination.\n\n### R\n\n```{r}\n# Use code below\n\nall_results <- arrow::open_dataset(\"s3://anonymous@bio230014-bucket01/challenges/scores/bundled-parquet//project_id=neon4cast/duration=P1D/variable=chla/model_id=climatology?endpoint_override=sdsc.osn.xsede.org\")\ndf <- all_results |> dplyr::collect()\n```\n\nYou can use dplyr operations before calling `dplyr::collect()` to `summarise`, `select` columns, and/or `filter` rows prior to pulling the data into a local `data.frame`. Reducing the data that is pulled locally will speed up the download and reduce your memory usage.\n\n### Python\n\n```python\n# Use code below\nimport ibis\n\ncon = ibis.duckdb.connect()\n\ncon.raw_sql('''\nCREATE OR REPLACE SECRET secret (\n    TYPE S3,\n    ENDPOINT 'sdsc.osn.xsede.org',\n    URL_STYLE 'path'\n);\n''')\n\npath = \"s3://bio230014-bucket01/challenges/scores/bundled-parquet//project_id=neon4cast/duration=P1D/variable=chla/model_id=climatology\"\ncon.read_parquet(path + \"/**\")\n```"
}
}
}
@@ -24,7 +24,8 @@
"properties": {
"title": "persistenceRW",
"description": "All scores for the Daily_Chlorophyll_a variable for the persistenceRW model. Information for the model is provided as follows: Random walk from the fable package with ensembles used to represent uncertainty.\n The model predicts this variable at the following sites: BARC, BLWA, CRAM, FLNT, LIRO, PRLA, PRPO, SUGG, TOMB, TOOK.\n Scores are metrics that describe how well forecasts compare to observations. The scores catalog includes summaries of the forecasts (i.e., mean, median, confidence intervals), matched observations (if available), and scores (metrics of how well the model distribution compares to observations)",
"datetime": "2024-10-23T00:00:00Z",
"datetime": "2024-07-03",
"updated": "2024-07-04",
"start_datetime": "2024-06-13T00:00:00Z",
"end_datetime": "2024-08-06T00:00:00Z",
"providers": [
@@ -226,7 +227,7 @@
"type": "application/x-parquet",
"title": "Database Access for Daily Chlorophyll_a",
"href": "s3://anonymous@bio230014-bucket01/challenges/scores/bundled-parquet//project_id=neon4cast/duration=P1D/variable=chla/model_id=persistenceRW?endpoint_override=sdsc.osn.xsede.org",
"description": "Use `arrow` for remote access to the database. This R code will return results for this variable and model combination.\n\n### R\n\n```{r}\n# Use code below\n\nall_results <- arrow::open_dataset(\"s3://anonymous@bio230014-bucket01/challenges/scores/bundled-parquet//project_id=neon4cast/duration=P1D/variable=chla/model_id=persistenceRW?endpoint_override=sdsc.osn.xsede.org\")\ndf <- all_results |> dplyr::collect()\n\n```\n \n\nYou can use dplyr operations before calling `dplyr::collect()` to `summarise`, `select` columns, and/or `filter` rows prior to pulling the data into a local `data.frame`. Reducing the data that is pulled locally will speed up the data download speed and reduce your memory usage.\n\n\n"
"description": "Use `R` or `Python` code for remote access to the database. This code will return results for this variable and model combination.\n\n### R\n\n```{r}\n# Use code below\n\nall_results <- arrow::open_dataset(\"s3://anonymous@bio230014-bucket01/challenges/scores/bundled-parquet//project_id=neon4cast/duration=P1D/variable=chla/model_id=persistenceRW?endpoint_override=sdsc.osn.xsede.org\")\ndf <- all_results |> dplyr::collect()\n```\n\nYou can use dplyr operations before calling `dplyr::collect()` to `summarise`, `select` columns, and/or `filter` rows prior to pulling the data into a local `data.frame`. Reducing the data that is pulled locally will speed up the download and reduce your memory usage.\n\n### Python\n\n```python\n# Use code below\nimport ibis\n\ncon = ibis.duckdb.connect()\n\ncon.raw_sql('''\nCREATE OR REPLACE SECRET secret (\n    TYPE S3,\n    ENDPOINT 'sdsc.osn.xsede.org',\n    URL_STYLE 'path'\n);\n''')\n\npath = \"s3://bio230014-bucket01/challenges/scores/bundled-parquet//project_id=neon4cast/duration=P1D/variable=chla/model_id=persistenceRW\"\ncon.read_parquet(path + \"/**\")\n```"
}
}
}
@@ -24,7 +24,8 @@
"properties": {
"title": "tg_arima",
"description": "All scores for the Daily_Chlorophyll_a variable for the tg_arima model. Information for the model is provided as follows: The tg_arima model is an AutoRegressive Integrated Moving Average (ARIMA) model fit using\nthe function auto.arima() from the forecast package in R (Hyndman et al. 2023; Hyndman et al., 2008).\nThis is an empirical time series model with no covariates.\n The model predicts this variable at the following sites: BARC, BLWA, CRAM, FLNT, LIRO, PRLA, PRPO, SUGG, TOMB, TOOK.\n Scores are metrics that describe how well forecasts compare to observations. The scores catalog includes summaries of the forecasts (i.e., mean, median, confidence intervals), matched observations (if available), and scores (metrics of how well the model distribution compares to observations)",
"datetime": "2024-10-23T00:00:00Z",
"datetime": "2024-07-03",
"updated": "2024-07-04",
"start_datetime": "2024-06-13T00:00:00Z",
"end_datetime": "2024-08-02T00:00:00Z",
"providers": [
@@ -226,7 +227,7 @@
"type": "application/x-parquet",
"title": "Database Access for Daily Chlorophyll_a",
"href": "s3://anonymous@bio230014-bucket01/challenges/scores/bundled-parquet//project_id=neon4cast/duration=P1D/variable=chla/model_id=tg_arima?endpoint_override=sdsc.osn.xsede.org",
"description": "Use `arrow` for remote access to the database. This R code will return results for this variable and model combination.\n\n### R\n\n```{r}\n# Use code below\n\nall_results <- arrow::open_dataset(\"s3://anonymous@bio230014-bucket01/challenges/scores/bundled-parquet//project_id=neon4cast/duration=P1D/variable=chla/model_id=tg_arima?endpoint_override=sdsc.osn.xsede.org\")\ndf <- all_results |> dplyr::collect()\n\n```\n \n\nYou can use dplyr operations before calling `dplyr::collect()` to `summarise`, `select` columns, and/or `filter` rows prior to pulling the data into a local `data.frame`. Reducing the data that is pulled locally will speed up the data download speed and reduce your memory usage.\n\n\n"
"description": "Use `R` or `Python` code for remote access to the database. This code will return results for this variable and model combination.\n\n### R\n\n```{r}\n# Use code below\n\nall_results <- arrow::open_dataset(\"s3://anonymous@bio230014-bucket01/challenges/scores/bundled-parquet//project_id=neon4cast/duration=P1D/variable=chla/model_id=tg_arima?endpoint_override=sdsc.osn.xsede.org\")\ndf <- all_results |> dplyr::collect()\n```\n\nYou can use dplyr operations before calling `dplyr::collect()` to `summarise`, `select` columns, and/or `filter` rows prior to pulling the data into a local `data.frame`. Reducing the data that is pulled locally will speed up the download and reduce your memory usage.\n\n### Python\n\n```python\n# Use code below\nimport ibis\n\ncon = ibis.duckdb.connect()\n\ncon.raw_sql('''\nCREATE OR REPLACE SECRET secret (\n    TYPE S3,\n    ENDPOINT 'sdsc.osn.xsede.org',\n    URL_STYLE 'path'\n);\n''')\n\npath = \"s3://bio230014-bucket01/challenges/scores/bundled-parquet//project_id=neon4cast/duration=P1D/variable=chla/model_id=tg_arima\"\ncon.read_parquet(path + \"/**\")\n```"
}
}
}
@@ -24,7 +24,8 @@
"properties": {
"title": "tg_ets",
"description": "All scores for the Daily_Chlorophyll_a variable for the tg_ets model. Information for the model is provided as follows: The tg_ets model is an Error, Trend, Seasonal (ETS) model fit using the function ets() from the\nforecast package in R (Hyndman et al. 2023; Hyndman et al., 2008). This is an empirical time series\nmodel with no covariates.\n The model predicts this variable at the following sites: BARC, BLWA, CRAM, FLNT, LIRO, PRLA, PRPO, SUGG, TOMB, TOOK.\n Scores are metrics that describe how well forecasts compare to observations. The scores catalog includes summaries of the forecasts (i.e., mean, median, confidence intervals), matched observations (if available), and scores (metrics of how well the model distribution compares to observations)",
"datetime": "2024-10-23T00:00:00Z",
"datetime": "2024-07-03",
"updated": "2024-07-04",
"start_datetime": "2024-06-13T00:00:00Z",
"end_datetime": "2024-08-02T00:00:00Z",
"providers": [
@@ -226,7 +227,7 @@
"type": "application/x-parquet",
"title": "Database Access for Daily Chlorophyll_a",
"href": "s3://anonymous@bio230014-bucket01/challenges/scores/bundled-parquet//project_id=neon4cast/duration=P1D/variable=chla/model_id=tg_ets?endpoint_override=sdsc.osn.xsede.org",
"description": "Use `arrow` for remote access to the database. This R code will return results for this variable and model combination.\n\n### R\n\n```{r}\n# Use code below\n\nall_results <- arrow::open_dataset(\"s3://anonymous@bio230014-bucket01/challenges/scores/bundled-parquet//project_id=neon4cast/duration=P1D/variable=chla/model_id=tg_ets?endpoint_override=sdsc.osn.xsede.org\")\ndf <- all_results |> dplyr::collect()\n\n```\n \n\nYou can use dplyr operations before calling `dplyr::collect()` to `summarise`, `select` columns, and/or `filter` rows prior to pulling the data into a local `data.frame`. Reducing the data that is pulled locally will speed up the data download speed and reduce your memory usage.\n\n\n"
"description": "Use `R` or `Python` code for remote access to the database. This code will return results for this variable and model combination.\n\n### R\n\n```{r}\n# Use code below\n\nall_results <- arrow::open_dataset(\"s3://anonymous@bio230014-bucket01/challenges/scores/bundled-parquet//project_id=neon4cast/duration=P1D/variable=chla/model_id=tg_ets?endpoint_override=sdsc.osn.xsede.org\")\ndf <- all_results |> dplyr::collect()\n```\n\nYou can use dplyr operations before calling `dplyr::collect()` to `summarise`, `select` columns, and/or `filter` rows prior to pulling the data into a local `data.frame`. Reducing the data that is pulled locally will speed up the download and reduce your memory usage.\n\n### Python\n\n```python\n# Use code below\nimport ibis\n\ncon = ibis.duckdb.connect()\n\ncon.raw_sql('''\nCREATE OR REPLACE SECRET secret (\n    TYPE S3,\n    ENDPOINT 'sdsc.osn.xsede.org',\n    URL_STYLE 'path'\n);\n''')\n\npath = \"s3://bio230014-bucket01/challenges/scores/bundled-parquet//project_id=neon4cast/duration=P1D/variable=chla/model_id=tg_ets\"\ncon.read_parquet(path + \"/**\")\n```"
}
}
}
23 changes: 12 additions & 11 deletions catalog/scores/Aquatics/Daily_Chlorophyll_a/models/tg_tbats.json
@@ -9,22 +9,23 @@
"geometry": {
"type": "MultiPoint",
"coordinates": [
[-82.0177, 29.6878],
[-88.1589, 31.8534],
[-149.6106, 68.6307],
[-82.0084, 29.676],
[-87.7982, 32.5415],
[-89.4737, 46.2097],
[-84.4374, 31.1854],
[-89.7048, 45.9983],
[-99.1139, 47.1591],
[-99.2531, 47.1298],
[-82.0177, 29.6878],
[-88.1589, 31.8534],
[-149.6106, 68.6307]
[-99.2531, 47.1298]
]
},
"properties": {
"title": "tg_tbats",
"description": "All scores for the Daily_Chlorophyll_a variable for the tg_tbats model. Information for the model is provided as follows: The tg_tbats model is a TBATS (Trigonometric seasonality, Box-Cox transformation, ARMA\nerrors, Trend and Seasonal components) model fit using the function tbats() from the forecast package in\nR (Hyndman et al. 2023; Hyndman et al., 2008). This is an empirical time series model with no\ncovariates..\n The model predicts this variable at the following sites: BARC, BLWA, CRAM, FLNT, LIRO, PRLA, PRPO, SUGG, TOMB, TOOK.\n Scores are metrics that describe how well forecasts compare to observations. The scores catalog includes are summaries of the forecasts (i.e., mean, median, confidence intervals), matched observations (if available), and scores (metrics of how well the model distribution compares to observations)",
"datetime": "2024-10-23T00:00:00Z",
"description": "All scores for the Daily_Chlorophyll_a variable for the tg_tbats model. Information for the model is provided as follows: The tg_tbats model is a TBATS (Trigonometric seasonality, Box-Cox transformation, ARMA\nerrors, Trend and Seasonal components) model fit using the function tbats() from the forecast package in\nR (Hyndman et al. 2023; Hyndman et al., 2008). This is an empirical time series model with no\ncovariates.\n The model predicts this variable at the following sites: SUGG, TOMB, TOOK, BARC, BLWA, CRAM, FLNT, LIRO, PRLA, PRPO.\n Scores are metrics that describe how well forecasts compare to observations. The scores catalog includes summaries of the forecasts (i.e., mean, median, confidence intervals), matched observations (if available), and scores (metrics of how well the model distribution compares to observations)",
"datetime": "2024-07-03",
"updated": "2024-07-04",
"start_datetime": "2024-06-13T00:00:00Z",
"end_datetime": "2024-08-02T00:00:00Z",
"providers": [
@@ -55,16 +56,16 @@
"chla",
"Daily",
"P1D",
"SUGG",
"TOMB",
"TOOK",
"BARC",
"BLWA",
"CRAM",
"FLNT",
"LIRO",
"PRLA",
"PRPO",
"SUGG",
"TOMB",
"TOOK"
"PRPO"
],
"table:columns": [
{
@@ -226,7 +227,7 @@
"type": "application/x-parquet",
"title": "Database Access for Daily Chlorophyll_a",
"href": "s3://anonymous@bio230014-bucket01/challenges/scores/bundled-parquet//project_id=neon4cast/duration=P1D/variable=chla/model_id=tg_tbats?endpoint_override=sdsc.osn.xsede.org",
"description": "Use `arrow` for remote access to the database. This R code will return results for this variable and model combination.\n\n### R\n\n```{r}\n# Use code below\n\nall_results <- arrow::open_dataset(\"s3://anonymous@bio230014-bucket01/challenges/scores/bundled-parquet//project_id=neon4cast/duration=P1D/variable=chla/model_id=tg_tbats?endpoint_override=sdsc.osn.xsede.org\")\ndf <- all_results |> dplyr::collect()\n\n```\n \n\nYou can use dplyr operations before calling `dplyr::collect()` to `summarise`, `select` columns, and/or `filter` rows prior to pulling the data into a local `data.frame`. Reducing the data that is pulled locally will speed up the data download speed and reduce your memory usage.\n\n\n"
"description": "Use `R` or `Python` code for remote access to the database. This code will return results for this variable and model combination.\n\n### R\n\n```{r}\n# Use code below\n\nall_results <- arrow::open_dataset(\"s3://anonymous@bio230014-bucket01/challenges/scores/bundled-parquet//project_id=neon4cast/duration=P1D/variable=chla/model_id=tg_tbats?endpoint_override=sdsc.osn.xsede.org\")\ndf <- all_results |> dplyr::collect()\n```\n\nYou can use dplyr operations before calling `dplyr::collect()` to `summarise`, `select` columns, and/or `filter` rows prior to pulling the data into a local `data.frame`. Reducing the data that is pulled locally will speed up the download and reduce your memory usage.\n\n### Python\n\n```python\n# Use code below\nimport ibis\n\ncon = ibis.duckdb.connect()\n\ncon.raw_sql('''\nCREATE OR REPLACE SECRET secret (\n    TYPE S3,\n    ENDPOINT 'sdsc.osn.xsede.org',\n    URL_STYLE 'path'\n);\n''')\n\npath = \"s3://bio230014-bucket01/challenges/scores/bundled-parquet//project_id=neon4cast/duration=P1D/variable=chla/model_id=tg_tbats\"\ncon.read_parquet(path + \"/**\")\n```"
}
}
}
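Every `href` changed in this commit follows the same hive-style partition layout: `project_id`, `duration`, `variable`, and `model_id` segments narrow the bundled-parquet store to one model's scores. A minimal sketch of composing such a URI (the bucket, prefix, and endpoint are taken from the hrefs above; the `scores_uri` helper name is hypothetical):

```python
# Compose an anonymous-access S3 URI for one partition of the bundled
# scores parquet store, mirroring the href layout in the catalog entries.
BUCKET = "bio230014-bucket01"
PREFIX = "challenges/scores/bundled-parquet/"
ENDPOINT = "sdsc.osn.xsede.org"

def scores_uri(project_id: str, duration: str, variable: str, model_id: str) -> str:
    # Hive-style partitioning: each key=value path segment narrows the
    # dataset, so readers can prune partitions without scanning the bucket.
    partitions = (
        f"project_id={project_id}/duration={duration}/"
        f"variable={variable}/model_id={model_id}"
    )
    # PREFIX already ends in "/", so the extra "/" reproduces the double
    # slash that appears verbatim in the catalog hrefs.
    return (
        f"s3://anonymous@{BUCKET}/{PREFIX}/{partitions}"
        f"?endpoint_override={ENDPOINT}"
    )

uri = scores_uri("neon4cast", "P1D", "chla", "climatology")
```

The resulting string matches the `href` in the climatology entry above and can be passed directly to `arrow::open_dataset()` in R.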